Friday, December 27, 2013

Release the sperm!

While preparing a class about synthetic biology, I came across this older paper that actually shows a practical application for synthetic biology.  Kemmer et al. describe a new technique for artificial insemination of cows in a 2011 issue of the Journal of Controlled Release.  I’m not condoning these practices in cows; that is a debate for another day.  I am much more interested in the biology behind this ingenious way of improving the timing of artificial insemination.  Let’s get into it.

Luteinizing hormone
Before I describe the synthetic circuits, we have to go over what luteinizing hormone (LH) does.  LH is released from the pituitary gland in the brain and travels through the blood to the gonads (in males and females).  In females, there is a huge surge of LH release once per reproductive cycle (roughly once a month in humans), which triggers the release of an oocyte (an egg) from the mature follicle in the ovaries.  In other words, increased LH causes ovulation.  LH binds to LH receptors (LHR), which are expressed on the surface of the target cells in the ovary.  When LH binds its receptors, it triggers a molecular cascade inside the target cell, which leads to the production of another molecule called cyclic AMP (cAMP).  cAMP is a versatile molecule that can initiate lots of cellular responses, like changes in gene expression or activation of enzymes.

The current practice in cow farming is to keep an eye on the female cows, and when a cow appears to be in estrus, inject sperm into her and hope for the best.  Different cows, though, will have different durations of estrus, so it is sort of a guessing game to time the insemination perfectly.  The LH surge regulates release of the oocytes, so what if you could design a synthetic system that also releases sperm in response to LH?  The sperm would be encapsulated and inert until the LH surge initiates their release from their holding cell.  The farmer could inseminate the female cow when estrus appears to be close at hand, and the female’s own LH would release the sperm at just the right time, when the oocyte is naturally released.

How can the researchers design a holding cell for sperm that is responsive to LH?

The synthetic circuit
The holding cell is going to be a little hollow bead of cellulose (diameter of 350-400 µm).  Cellulose is a naturally occurring molecule made up of lots of glucose sugars hooked together.  The cellulose beads will stay intact unless there is an enzyme called cellulase to break all those bonds between the sugars.  The researchers encapsulate living sperm and modified mammalian cells inside the microbeads, and these get injected into the uterus of the female cow.  The sperm seem to be happy inside the cellulose and are still functional when they are later released.

 
The modified cells have two engineered transgenes:
1) We want these cells to be responsive to LH, so the cells must express the LH receptor.  The researchers find that the rat LHR actually works best, so these cells will have the gene for making the rat LHR.
2) Remember that when LH binds to LHR, there will be a rise of cAMP inside the cell.  cAMP will activate a protein called CREB that binds to DNA and activates expression of genes (I’m skipping a few steps here).  Okay, so LH will bind LHR, cAMP levels will increase, CREB will be activated and will bind to specific DNA sequences in front of genes.  The researchers put the cellulase gene right after a CREB binding sequence in the second transgene.  CREB should bind to the DNA and activate expression of the cellulase gene.

Hopefully you can see where this is going now.  When LH is released during ovulation, it will also bind to these modified cells and cause expression of cellulase (the enzyme that breaks down cellulose).  The cellulose surrounding the sperm will be destroyed and the sperm will be released at the same time as the egg.  Bam!
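
If it helps to see the logic laid out end to end, here is the whole cascade as a toy Python sketch.  This is purely my own illustration, not anything from the paper: the threshold, the units and the all-or-nothing steps are invented.

LH_THRESHOLD = 5.0  # hypothetical LH level that counts as "the surge" (arbitrary units)

def sensor_cell(lh_level):
    """Modified cell: LH -> LHR -> cAMP -> CREB -> cellulase expression."""
    lhr_bound = lh_level > LH_THRESHOLD  # transgene 1: rat LHR senses the LH surge
    creb_active = lhr_bound              # LHR binding raises cAMP, which activates CREB
    cellulase_expressed = creb_active    # transgene 2: CREB-binding site drives cellulase
    return cellulase_expressed

def capsule_status(lh_level):
    """The cellulose bead stays intact until cellulase is expressed."""
    if sensor_cell(lh_level):
        return "capsule degraded -> sperm released"
    return "capsule intact -> sperm held"

for lh in [0.5, 2.0, 8.0]:  # baseline, pre-estrus, LH surge
    print(lh, capsule_status(lh))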

The two pathways initiated by the LH surge.  On the left is one of the modified cells inside the cellulose capsule.

Does it work?  The researchers inserted the cellulose implants into the uterus of Swiss dairy cows.  Next they injected the cows with a hormone that triggers release of LH.  The capsules were degraded and the sperm released at the same time as the cow naturally released an oocyte.  Fertilization occurred and embryos developed via this well-timed artificial insemination.  The sperm capsules significantly increase the time window for artificial insemination, taking the guesswork out of the timing.

Look, synthetic biology working in a useful setting, rather than in bacteria or mice.

Monday, December 23, 2013

Probiotics for autism


The human microbiome is a hot topic in biology these days.  It is becoming clear that the microbes living in and on our body can have major consequences for our health and happiness.  In fact, abnormalities in the gut microbiome may underlie one of the great medical mysteries of our time: autism.  That some bacteria in our intestines could affect our behaviors and brain development is mind-blowing.

Hsiao et al. recently published a study in the journal Cell that investigated the connection between the gut microbiome and autism using a mouse model of autism.  They were drawn to this subject based on the fact that individuals with autism spectrum disorder (ASD) often have gastrointestinal abnormalities, like irritable bowel syndrome and increased intestine permeability.

Autistic mice?
Apparently you can produce mice that exhibit the “core communicative, social and stereotyped impairments” associated with ASD by injecting their pregnant mothers with a molecule that stimulates an immune response.  In humans, maternal infection is linked to increased risk of autism in the children.  The production of these mice was the most questionable part of the paper, in my opinion.  The authors never call these mice autistic, and the impairments the mice show are associated with a range of neurological diseases.  So perhaps we should think of it as a model of a generic neurological disorder.  For the sake of simplicity, though, I will refer to them as “autistic mice”, but remember that it is not a perfect model system.

They find that the autistic mice have various defects in their gastrointestinal (GI) tract.  For instance, their intestinal walls are leaky, so molecules that are not supposed to be absorbed can cross from the gut into the bloodstream.  This problem seems to be caused by the fact that these mice express less of the proteins that make the tight junctions between cells.  Think of tight junctions as fences between cells, preventing molecules from sneaking between them into the body.  Ideally, all molecules absorbed from the gut must pass through the cells themselves, a process which is highly regulated.

Tight junctions prevent molecules from passing from the gut into the blood.  Image adapted from dbriers.com

They find a number of metabolites, produced by bacteria in the intestine, that end up in the blood of autistic mice but not of normal mice.  In other words, these are potentially toxic molecules that the mice need to get rid of, but instead the toxins are leaking into the blood of the autistic mice.  That’s not good.  In fact, if you inject one of these molecules into a normal mouse, it will become more anxious, similar to the autistic mice.  They couldn’t reproduce all of the behaviors of the autistic mice with just this one molecule, but it’s a good proof of principle.  Presumably it’s the buildup of all of these metabolites in the blood that causes impairments of the nervous system.

Dysbiosis of the intestinal flora
I love that word “dysbiosis”.  It means that the intestinal microbiome is out of whack.  The wrong types of bacteria are in there messing stuff up.  Hsiao et al. found a number of species present in the autistic mice that were not in normal mice and vice versa.  Presumably this imbalance in the microbiome is what is making the gut leaky. 

To test this, the authors fed the autistic mice a probiotic (a “good” type of bacteria) called Bacteroides fragilis (B. frag).  Interestingly, B. frag never actually colonized the guts of the mice, but just having it pass through helped to restore the normal microbiome.  Some of the species that were only present in autistic mice disappeared after they consumed B. frag.  The leakiness of the gut was almost completely reversed, including the expression of tight junction proteins.  The reversal wasn’t perfect, but a number of those metabolites in the blood decreased back to normal levels.

Behavior affected by microbiome
To review: when a pregnant mouse has an infection, her offspring show signs of autism (a mouse version).  Somehow this infection causes the wrong bacteria to colonize the guts of the offspring.  The dysbiosis leads to changes in gene expression and a leaky gut that allows toxic molecules into the bloodstream, thus affecting the development of the nervous system.  Consumption of a probiotic at weaning age fixes a lot of the gut issues.  Does it also reverse some of the behavioral impairments associated with autism?

The short answer is yes!  Autistic mice fed B. frag were less anxious, less obsessive, more communicative and interacted more with other mice.  The test for obsessive behavior was kind of cute.  The mice were put in a cage filled with sand with marbles sitting on top.  The autistic-like mice bury a greater percentage of the marbles, demonstrating a stereotyped behavior.

Yogurt for everyone!
If I had an autistic child and read this paper, I would start them on probiotics right away.  I mean, probiotics are good for everyone, right?  So it definitely seems worth trying.  In fact, the authors note that B. fragilis is depleted in children with ASD compared to matched controls.  Furthermore, probiotics have already been shown to be beneficial in treating chronic fatigue syndrome.

The authors end their paper with this bold statement: “We propose the transformative concept that autism, and likely other behavioral conditions, are potentially diseases involving the gut that ultimately impact the immune, metabolic, and nervous systems, and that microbiome-mediated therapies may be a safe and effective treatment for these neurodevelopmental disorders.”

Thursday, July 11, 2013

Throw another adipocyte on the fire

Humans are able to live in many different climates, across a wide range of temperatures, and yet our inner core body temperature remains nearly constant.  This ability to thermoregulate has something to do, of course, with clothing and the ability to cool and heat our living spaces, but our bodies also have many adaptations to regulate body temperature.  If it’s too hot, we sweat, releasing excess heat through evaporative cooling.  If it’s too cold, we shiver, producing heat in our working muscles.  The production of heat through physiological mechanisms is called thermogenesis, and it also includes a non-shivering version.  Today’s paper is about non-shivering thermogenesis, which is when our fat cells produce heat.

Non-shivering thermogenesis
To understand how non-shivering thermogenesis works, we need to take a step back and discuss cellular respiration.  The cells of our body store energy from food in the chemical bonds of a molecule called ATP.  During cellular respiration, a cell will convert glucose or fat into carbon dioxide, while slowly tapping into the energy in those food molecules in order to make ATP.  In the final step of cellular respiration, electrons harvested from glucose are passed from protein to protein, releasing energy that is used to pump protons into a membrane-bound cellular space.  You can think of these protons as a form of potential energy, like stuffing a closet full of balls.  When you open up the closet door, all the balls come tumbling out, releasing their potential energy in the process.  During cellular respiration, this potential energy is used by an enzyme (ATP synthase) to make ATP.  During non-shivering thermogenesis, though, the potential energy stored in all those protons stuffed into a small space is released by the cell as heat.  Thus, the energy from food is used to heat the body rather than being stored in ATP.
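
To put a rough number on that potential energy, you can estimate the energy released per mole of protons flowing back across the inner mitochondrial membrane from the membrane voltage and the pH difference.  The values below are typical textbook numbers, not measurements from this paper.

R = 8.314         # gas constant, J/(mol*K)
T = 310.0         # body temperature, K
F = 96485.0       # Faraday constant, C/mol
delta_psi = 0.15  # mitochondrial membrane potential, V (~150 mV is typical)
delta_pH = 0.75   # pH difference across the inner membrane (typical)

# Free energy released per mole of protons moving down the gradient;
# uncoupling dissipates this as heat instead of capturing it in ATP.
dG = F * delta_psi + 2.303 * R * T * delta_pH  # J/mol
print(round(dG / 1000), "kJ per mole of protons")  # ~19 kJ/mol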

The main cell type that performs non-shivering thermogenesis is the brown adipocyte, a type of fat cell.  Brown fat is very common in infants, but is also found in adult humans in the upper chest and neck.  The purpose of brown fat is to provide heat for the body.  Thus, non-shivering thermogenesis is activated by a drop in body temperature.  The cold temperature is sensed by the brain, which activates the sympathetic nervous system (the “fight or flight” response), which signals to the brown fat cells to express the genes necessary to bypass ATP production and release heat instead.  In a recent paper published in PNAS, Ye et al. describe how a different type of fat cell is able to skip all the nervous system steps and sense the cold directly (red arrow in diagram).  It is pretty cool that the fat cells are able to sense temperature, as if they were neurons, and can act autonomously to remedy the situation.  No need for a brain here!


Independent thermogenesis
Through a series of experiments, the authors demonstrate that a particular type of fat cell will express genes necessary for non-shivering thermogenesis when exposed to cold, independent of sympathetic nervous system activation.

In one experiment, they grew fat cells at different temperatures and measured gene expression using a technique called quantitative PCR (qPCR).  The idea behind this technique is that if a gene is highly expressed, there will be a lot of its mRNA in the cell (remember the “central dogma” of molecular biology), and qPCR is a method for measuring the concentration of mRNA for a particular gene.  They focused their measurements on thermogenic genes that are known to be part of the non-shivering thermogenesis mechanism, such as Ucp1, which encodes the protein that actually allows the protons to fall back across the membrane, thereby releasing their energy as heat.  They found that the fat cells exposed to the cold expressed more Ucp1 mRNA, even in the absence of any nervous system.  These are just cells in a dish, so this must be an intrinsic property of fat cells.
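
To give a sense of how qPCR readouts turn into a statement like “expressed more Ucp1 mRNA”, here is the standard delta-delta Ct fold-change calculation.  The Ct values are invented for illustration, and the authors’ exact analysis pipeline may differ.

def fold_change(ct_gene_cold, ct_ref_cold, ct_gene_warm, ct_ref_warm):
    """Standard 2^-ddCt method: each PCR cycle roughly doubles the product."""
    d_ct_cold = ct_gene_cold - ct_ref_cold  # normalize to a reference (housekeeping) gene
    d_ct_warm = ct_gene_warm - ct_ref_warm
    dd_ct = d_ct_cold - d_ct_warm
    return 2 ** (-dd_ct)

# Hypothetical: Ucp1 vs. a housekeeping gene, in cold-exposed vs. warm cells
# (a lower Ct means more starting mRNA, since fewer cycles are needed)
print(fold_change(22.0, 18.0, 26.0, 18.0))  # 16.0, i.e. 16-fold more Ucp1 mRNA in the cold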

It wasn’t just any fat cell that had this response.  In fact, brown adipocytes did not express more Ucp1 in the cold.  It was a different type of fat cell called a white adipocyte.  What is white fat?  The majority of fat in our body is white fat and its purpose is to store fat for energy (for cellular respiration) and to act as a thermal insulator, so we don’t lose as much heat through our skin.  There is one subtype of white fat that has been shown to do non-shivering thermogenesis and it was this type that could express thermogenic genes, like Ucp1, in the cold, independent of the nervous system.

Okay, so these white fat cells don’t need input from the nervous system, but do they still use the same intracellular pathway to turn on expression of these genes?  Normally, when a fat cell is activated by the sympathetic nervous system, it sets off a molecular cascade of events inside the cell, which involves activation of molecules in a pathway called the cAMP pathway (as shown in the diagram).  The authors inhibited this pathway in various ways and found that the cells could still respond to the cold as before, so this effect must use a different pathway.

There are still a number of open questions, such as: how do fat cells sense temperature?  Do they use the same types of receptors as temperature-sensitive neurons?  Why are some white fat cells independent, while brown fat cells need the nervous system to activate thermogenesis?  One thing that is clear, however, is that white fat cells are important for temperature regulation as well as fat storage.  The authors suggest that tapping into thermogenesis might be a good way to help obese patients get rid of excess energy stores by releasing the energy as heat.  Because this pathway is independent of the sympathetic nervous system, medications could target only the fat cells without involving the sympathetic nervous system, which controls so many other functions in the body.

Something to think about as the cold Bay Area summer sets in.

Friday, June 21, 2013

(Insert mildly provocative title here)

Ever seen a pair of pigeons going at it?  And did you notice a penis on the male pigeon?  The answer is no, because most birds do not have external genitalia large enough for penetration.  And yet birds reproduce via internal fertilization.  Why would evolution favor male genitalia too small to actually enter into the female?  This just seems so inefficient. 

There are a few birds that do have well developed phalluses, such as the duck and goose.  What happened during evolution that caused some birds to retain a phallus, whereas most other birds lost it?  A paper appeared this week in Current Biology by Herrera et al., which addresses these questions from a developmental point of view.

Developmental arrest
The authors started this study by comparing the development of the phallus in embryos of two different birds.  They chose to look at (1) chick embryos, which are part of the galliformes group of birds and have reduced phalluses, and (2) duck embryos, which are part of the anseriforms group and have well developed, penetrating penises.  They followed the growth of the genital tubercle, the tissue that will form the penis.  As the duck and chick embryos grow, so do their genital tubercles, with no noticeable difference between the two species during the early stages of development.  Later in development, though, the tubercle stops growing and regresses in the chicks, while the duck’s continues to grow.  This shows that the tissue that makes the two different types of phalluses has the same developmental origin.

Why does the genital tubercle stop growing in the chick?
From a molecular standpoint, the chick embryos could either lose the “growth” signal or gain expression of a “stop” signal not present in ducks.  From work in other animals, the authors knew that there are two major growth signals responsible for guiding the development of the external genitalia – Sonic Hedgehog (Shh) [see my other post about this protein] and Hox13.  These two genes are strongly expressed in the duck genital tubercle throughout embryonic development, as expected.  Surprisingly, though, they are also strongly expressed in the chick embryos.  This means that the chickens haven’t lost the growth signal.

The authors then investigated whether there is some sort of a “stop” signal in the chicks.  They found that in chicks and quails, which have reduced phalluses, there is a lot of cell death in the genital tubercle in the later stages of development.  This could account for the regression of the genital tubercle.  They then found that the chicks highly express a protein called BMP4 at the tip of the tubercle, which induces cell death, whereas ducks do not.

In fact, by overexpressing BMPs in the duck, they induced cell death in the genital tubercle.  In the opposite experiment, they inhibited BMPs in the chick and their genital tubercles increased growth, as if they were ducks.

In summary:

            Chicken: + BMP --> increased cell death --> reduced phallus
            Duck:       - BMP --> no cell death, so continued tissue development --> large phallus
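
In code form, the same logic looks like this (a toy restatement of the summary above, not a model from the paper):

def genital_tubercle_outcome(growth_signal, bmp_expressed):
    """Both birds keep the growth signal; BMP is the switch that differs."""
    if not growth_signal:
        return "no phallus development"          # not what happens in either bird
    if bmp_expressed:
        return "cell death --> reduced phallus"  # chicken, quail
    return "continued growth --> large phallus"  # duck, goose

print("chicken:", genital_tubercle_outcome(growth_signal=True, bmp_expressed=True))
print("duck:   ", genital_tubercle_outcome(growth_signal=True, bmp_expressed=False))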

Evolution of reduced phallus
So what does this mean?  Chicks and quails have reduced phalluses because, during development, they express BMP4, which tells the developing cells of the penis to die off.  One really cool thing that the authors did next was to look at cell death in the closest living relative of birds -- the alligator.  Yah, they got alligator embryos for this research!  Alligators have developed phalluses, and they show hardly any cell death in the genital tubercle.  From this work, the authors could create an evolutionary tree, which shows that chicks and quails most likely evolved the BMP4 signal after their group separated from ducks.  Although the authors didn’t test any members (ha ha) of the neoaves group, which includes most other birds, we can presume that they also have a similar cell death mechanism that reduces the development of their phalluses.

Phylogenetic tree of birds, showing when the BMP signal evolved. (Adapted from Herrera et al., 2013)

This still raises the question: why would natural selection favor a reduced phallus so strongly that it evolved independently in different lineages?  The authors propose two different theories, both of which may have occurred:

1) Sexual selection – sure, it may not be favorable for the males to have reduced phalluses, but it might be advantageous for the females.  In order for insemination to occur in these species, the female has to be a willing participant, allowing the male to shimmy up next to her and release the sperm in very close proximity.  This gives the females the power to select their mates.  In species with large penises, by contrast, the male can basically force copulation on the female and still successfully pass on his genes to the next generation.

2) Pleiotropy – this term refers to when a single gene mutation can lead to multiple noticeable changes in the body.  BMPs are a major signal during animal development.  BMPs are involved in a number of bird-only innovations, such as feathers and beaks.  Maybe increased BMP expression gave an advantage to these birds, but also led to reduced phalluses as a secondary effect.  This may have occurred first in evolution, and sexual selection may then have stabilized this characteristic in the population.

This article was so clear and interesting.  I’m sure it will catch people’s attention because of the subject matter, but it’s a great example of using development to solve an evolutionary question.  Plus it gives reviewers and bloggers a great opportunity to think up clever titles and puns for their articles.  The review that was published alongside this article was titled “Cock-a-doodle-don’t”. How can I compete with that?

Tuesday, May 28, 2013

Stop seizures with a brain graft


There are two types of neurons in the brain: excitatory and inhibitory neurons.  They do exactly what you think they would.  Excitatory neurons release chemical messengers, which activate other neurons, which may eventually lead to some sort of perception or action.  Inhibitory neurons release chemicals that silence other neurons.  Why would you want inhibitory neurons in your brain?  Well, if all your neurons were excitatory and interconnected, all your neurons would be active all the time and the signals would be meaningless.  In fact, this sort of overactivation in the brain can lead to seizures.  It’s been shown in numerous cases of epilepsy that there is some sort of dysfunction of the inhibitory neurons.  The excitatory neurons have free rein and go crazy, leading to a seizure.
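
Here is a toy simulation of that runaway-excitation idea (entirely my own sketch, with made-up numbers): a recurrently connected network where activity feeds back on itself, with and without inhibitory feedback.

def simulate(inhibition, steps=200, external_input=0.1):
    """Tiny rate model: excitatory feedback amplifies activity, inhibition opposes it."""
    activity = 0.0
    for _ in range(steps):
        drive = external_input + 1.2 * activity - inhibition * activity
        activity = min(1.0, max(0.0, activity + 0.5 * drive))  # clip to [0, 1]
    return activity

print(simulate(inhibition=1.5))  # balanced network: activity settles around 0.33
print(simulate(inhibition=0.0))  # inhibition lost: activity saturates at 1.0 (seizure-like)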

How is epilepsy treated?  Medications that potentiate the inhibitory neurons can help, but they act on inhibitory neurons throughout the brain, when the problem may be localized to one spot.  Just as too much excitation is a bad thing, too much inhibition is also bad and can lead to cognitive side effects.  Another treatment is to open up the patient’s head, find the overactive area, and cut it out or zap those neurons with a laser.  Destroying brain cells is always a last resort, though.

In a recent paper published in Nature Neuroscience by Hunt et al., the authors propose another potential treatment: adding new inhibitory neurons to the epileptic brain.  As with many new medical ideas, the story starts with mice.  The researchers can create a model of human epilepsy in mice by treating them with a potent drug.  These epileptic mice have seizures just like humans do.

Where do you get new inhibitory neurons?

The researchers obtained progenitor cells from mouse embryos.  In other words, these weren’t inhibitory neurons yet, but they were destined to turn into them as the mice developed.  They grafted these progenitors into the hippocampal region of the brain (a common area for seizures) in adult epileptic mice.  Amazingly, these pre-neurons migrated throughout the brain region, as far as 1.5 mm (that’s a lot… think about how small a mouse brain is).  Then the progenitors differentiated into inhibitory neurons, as if they were in a normal developing brain.  One week later, the epileptic mice with extra inhibitory neurons had hardly any seizures, whereas the untreated mice were having about two a day.  Not only that, but the treated mice showed cognitive improvements compared to the untreated epileptic mice.

So they seemed to “cure” the epileptic mice by giving them some new inhibitory neurons that were able to make functional connections with the existing neurons.  This isn’t as invasive as brain surgery and it’s much more localized than medication.  If the epilepsy were focused in a different part of the brain, then they could transplant the cells there instead.

Is this possible to try in humans?  Maybe so, but the first problem is that we can’t take inhibitory progenitor cells from human embryos.  There are some ethical issues with growing clones to harvest parts from them.  However, you could use embryonic stem cells, or induced pluripotent stem cells.  Pluri-what?  Recent technology allows researchers to take a skin biopsy, do some genetic engineering to these cells and push them back in developmental time to a stem cell.  Pluripotent means that these stem cells have the potential to become any type of cell, like an inhibitory neuron.  All it takes is turning on the right genes in these cells to push them to a particular fate, and if that isn’t already known for inhibitory neurons, I bet it’s not too far off.  Plus there’s the benefit that the transplanted cells will have the same genome as all the patient’s other cells, because they originated from their skin cells.  Just wait, regenerative medicine is moving ahead at lightning speed.

Friday, May 17, 2013

Go go gadget extendo filopodia

I’m back from an intense semester of learning and teaching Developmental Biology.  One theme that emerged from my studies was that the development of organisms is centered on gene expression and cell-to-cell signaling.  Oftentimes, one cell will differentiate into its mature form and then release a signaling protein that tells neighboring cells what to develop into.  For instance, the nervous system is induced by signals released from the embryonic backbone.  There are a number of common signals that are used over and over throughout development, like BMP, Wnt and Shh.

A recent paper by Sanders et al., published in Nature, looked at how distant cells can signal to each other via the Shh pathway.  Unfortunately for Developmental Biology teachers everywhere, Shh stands for Sonic Hedgehog.  Oftentimes, strange or humorous gene names like this can be blamed on the fruit fly researchers who first discovered the gene, but in this case everyone is to blame.  This gene was originally discovered by researchers studying fruit fly embryonic development; they named the gene hedgehog because the mutant embryos had lots of tiny bristles all over, kind of like a hedgehog.  The mammalian researchers took it to the next ridiculous level by naming the mammalian version of this gene Sonic Hedgehog.  The Shh protein is a secreted signal that binds receptors on other cells, activating gene expression in the receiving cell.  Shh signaling is important for specifying many different cell fates, such as the different neurons in the spinal cord, the cells that become the vertebrae, as well as the formation of the digits of the hand.

Although Shh is secreted from the cell, it has chemical modifications that make it stick to the plasma membrane that surrounds the cell that released Shh.  How then can Shh induce the development of cells that are located at a distance?  Well, the answer is by stretching out long cellular extensions with Shh localized at the tip.

Shh Filopodia
Sanders et al. did live imaging of cells in the developing limb of the chicken using fluorescent proteins.  They did some genetic trickery so only a few cells were labeled in red and others in green.  This way they could detect individual cells in a sea of unlabeled cells and examine their structure in real time.  They observed individual cells extending long protrusions, called filopodia, from the cell bodies.  These filopodia could stretch long distances (150 micrometers, like 3-5 cell widths) and were dynamic-- retracting and growing over time. 

How should you think about filopodia?  Imagine a stretchy balloon with a stick inside of it.  If you could push that stick into the wall of the balloon, the balloon would protrude from that one spot as the stick pushes it out.  That is like a filopodium, where the balloon wall is the plasma membrane and the stick is a protein called Actin.  Actin forms long chains that can grow, pushing out the membrane in front of it.

The thin, string-like extensions from this cell are filopodia and are filled with Actin.  Image from proteopedia.org

The authors then labeled the Shh protein with another fluorescent marker and saw that it localized to the tips of filopodia.  Not only that, but the filopodia expressing Shh were more stable and did not retract as often.  In order for Shh to act as a signaling molecule, it has to bind a receptor on another cell.  Using a different color, the authors observed two co-receptors for Shh localized to filopodia from other cells.  They even saw filopodia from two different cells make contact with each other, where one cell expressed Shh and the other expressed the receptors.
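
As a cartoon of those dynamics, here is a little simulation (mine, not the authors’) in which filopodia extend steadily but collapse at random, and Shh-tipped filopodia collapse less often.  The probabilities are invented; only the qualitative point, that a lower retraction rate means longer-lived filopodia, matters.

import random

def mean_lifetime(shh_positive, trials=1000):
    """Average number of growth steps before a filopodium retracts."""
    random.seed(0)
    p_collapse = 0.05 if shh_positive else 0.15  # invented retraction probabilities
    total = 0
    for _ in range(trials):
        steps = 0
        while steps < 500:
            steps += 1  # actin polymerization extends the tip one unit
            if random.random() < p_collapse:
                break   # stochastic retraction: the filopodium collapses
        total += steps
    return total / trials

print("plain filopodia, mean lifetime:", mean_lifetime(False))      # ~7 steps
print("Shh-tipped filopodia, mean lifetime:", mean_lifetime(True))  # ~20 steps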

This is amazing!  Instead of releasing a signal out of the cell with the hope that it goes to the right place and isn’t degraded, the cells literally grow to the right place with the signal on their membranes.  This is like hand delivering a note to your coworker, rather than making the note into a paper airplane and throwing it in the direction of their desk.

This is how I imagine this working.  Two cells that are located at a distance, reach out extensions and meet somewhere in the middle. The Shh signal would bind the receptor, causing changes to the pink cell.

A study like this could not have been done before recent innovations in live imaging and in the molecular biology used to introduce fluorescent proteins into cells.  The filopodia are not preserved during the more traditional, static method of fixing cells with formaldehyde and then staining them.  Who knows what other tricks live cells use during embryonic development.  I suspect this is only the beginning.

Sunday, February 3, 2013

Swapping eggs

This week’s paper describes a new technique that could be used to manipulate human oocytes (i.e. eggs) to prevent a group of diseases called mitochondrial diseases.  The paper was presented by Tachibana et al. in Nature along with a similar paper by Paull et al.  For the sake of brevity, I will only discuss the findings from the first paper.

Mitochondria
So what are mitochondria?  Mitochondria are little compartments in the cell that make cellular energy.  They convert the energy stored in food into an energy source that the cell can use to drive chemical reactions.  In other words, they are absolutely essential for our survival.  The oxygen that we breathe in goes to the mitochondria to aid in this energy conversion, and we all know how vital oxygen is. 

There are two other interesting facts about mitochondria that relate to our story:

1) All the mitochondria in our body are duplicates of the mitochondria that were in our mother’s egg.  In other words, embryonic mitochondria are not made from our genomic DNA (gDNA) or from sperm contributions.

2) Mitochondria have their own DNA, which directs the synthesis of proteins that are necessary for their function.  This DNA is known as mitochondrial DNA (mtDNA) and it is only inherited from the mother, since all mitochondria originate from the egg.

If there are mutations in the mtDNA, then this can lead to problems with the synthesis of cellular energy, which can lead to human diseases known as mitochondrial diseases.  There are different types of mutations, which can affect people in different ways and with differing severities.  In this paper, the authors propose a way to prevent mitochondrial diseases from being inherited from generation to generation.  Let’s see how that works.

Nuclear transplantation
Let’s say you have a female patient with a mitochondrial disease who wants to have a healthy child.  She is all but guaranteed to pass this disease on to her child via the mitochondria in her oocytes.  However, most of what makes the child “hers” lies in the mother’s genomic DNA, not in the mitochondrial DNA.  What if you could take the mother’s genomic DNA (plus the DNA from the father) and stick it into a healthy “enucleated” oocyte from a donor who has good, functioning mitochondria?  All the genomic DNA has to be cleared out of the donated oocyte first, creating an enucleated egg.  The embryo that results from this nuclear transplantation will have genomic DNA from its mother and father, but its mitochondria will originate from the donor oocyte.  This circumvents the mutated mtDNA in the real mother’s oocyte.
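
The bookkeeping of who contributes what can be written out in a little sketch (purely illustrative):

def nuclear_transfer(patient_oocyte, donor_oocyte, sperm):
    """Donor nucleus is removed and discarded; her cytoplasm (and mitochondria) stay."""
    return {
        "genomic_DNA": [patient_oocyte["gDNA"], sperm["gDNA"]],
        "mtDNA": donor_oocyte["mtDNA"],  # healthy mitochondria come from the donor
    }

patient = {"gDNA": "mom", "mtDNA": "mutated"}
donor = {"gDNA": "donor (discarded)", "mtDNA": "healthy"}
sperm = {"gDNA": "dad"}
print(nuclear_transfer(patient, donor, sperm))
# -> genomic DNA from mom and dad, mitochondria from the donor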


Tachibana et al. obtained human oocytes from volunteers and transferred the genomic DNA from one into another.  They then injected these oocytes with sperm (as during real fertilization) and observed what happened.  Some oocytes failed to be fertilized and others died soon after, but a handful of oocytes survived to the blastocyst stage of development.  You can’t really grow a human embryo in a dish beyond the blastocyst stage, and researchers are not allowed (yet) to implant these into women, so we don’t know what would happen to a child born from this procedure.

They did carry out the above scenario with monkeys.  They transplanted the genomic DNA from one oocyte into another and implanted the resulting blastocyst into a female monkey, who carried the embryo to term.  The young monkeys are now 3 years old and doing just fine.  Their maternal genomic DNA is from one mother and their mitochondria are from a different oocyte donor.

Isn’t this amazing?  I seriously doubt this procedure will be approved for human use anytime soon, because it’s too much like cloning, which basically follows the same procedure of putting genomic DNA into an enucleated egg.  It's a cool idea, though. 

Friday, January 25, 2013

Isolation and drug addiction

We all know that adverse early life experiences can affect normal development and the ability to lead a happy and healthy adult life.  A number of recent studies have shown that rodents that are mistreated as pups have long-lasting changes to their gene expression (i.e. epigenetics).  They are more anxious and have a harder time forming new memories.  A paper this week in Neuron builds upon these results by studying the effects of social isolation on the “reward pathway” in the brain.

Reward pathway
What is a reward pathway?  Deep in the brain is a region known as the ventral tegmental area (VTA), which makes connections to the nucleus accumbens and prefrontal cortex.  When we do something that is naturally good, like eating or sex, the neurons in the VTA release dopamine onto the nucleus accumbens and we interpret that as “feeling good”.  This is our reward for doing something that will help us survive and procreate. 

The Reward Pathway in a brain cross section (from brainfacts.org)

Many drugs of abuse like cocaine, amphetamines and alcohol increase the amount of dopamine signaling in this pathway; this is one reason why drugs produce a “high”.  When this pathway gets overstimulated by increased drug use, the brain will try to compensate by making the pathway less efficient.  This is why drug users feel depressed when not on drugs and why higher and higher concentrations of drugs are necessary to produce the same high feeling.  This is a neurological explanation for drug addiction.  Drug abusers also start to make connections in their lives, and in their brains, between environments (a certain room, certain people, etc) and the feeling of reward.  Getting sober is so difficult because the brain has to unlearn these connections and the reward system has to recover back to its normal level of activity.

Plasticity
Before we talk about the paper, I need to introduce one more concept.  Neurons become activated when channels in their membranes open and positive ions rush in.  They can then pass on this signal to another neuron by releasing neurotransmitters (like dopamine) onto the next neuron.  The activity in a neuron and the amount of transmitter it releases into the synapse can change over time, based on that neuron’s previous experiences.  This is known as synaptic plasticity.  There are short-term changes, like facilitation, and longer-term changes (we’re talking hours and days here).  One of the more famous types of long-term plasticity is called long-term potentiation (LTP) and is thought to underlie learning and memory.  When drug users start to become addicted, these types of long-term changes to neuronal activity are occurring throughout the reward pathway.
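
The classic cartoon of LTP is a Hebbian rule: a synapse gets stronger whenever the presynaptic and postsynaptic neurons are active together.  Here is a bare-bones sketch of that rule (a generic textbook picture, not the specific VTA mechanism from this paper).

weight = 1.0         # synaptic strength, arbitrary units
learning_rate = 0.1  # invented

activity_pairs = [(1, 1), (1, 1), (0, 1), (1, 0), (1, 1)]  # (pre, post) activity events
for pre, post in activity_pairs:
    weight += learning_rate * pre * post  # potentiate only on coincident activity
print(round(weight, 2))  # 1.3: three coincident events strengthened the synapse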

Social isolation and VTA neurons
The experiment began with young male rats housed either together in groups of three or alone.  The researchers then recorded the activity of VTA neurons under various conditions.  They found that in rats isolated for more than 3 weeks, specifically during the equivalent of early adolescence, LTP could be induced more easily in the VTA neurons.  In other words, rats that had no social interactions during a critical period had more sensitive VTA neurons.  That is to say, their reward pathway is primed to be overstimulated, just like during repeated drug use.

What are the behavioral manifestations of having a sensitive reward pathway?

The next experiment is called conditioned place preference.  The rats were placed in a cage that had two different compartments, with different wall colors and floor textures.  The rats were then injected with amphetamine in one particular compartment, so they learned to associate the drug high with that environment.  The rats were then given a choice between the two compartments, and inevitably they went to the one associated with the drug.  The researchers found that isolated rats had a greater preference for the drug compartment and developed the preference sooner than the control rats.  Social isolation during adolescence causes an increased rate of learning an association between drugs and environment.  This could make these rats more vulnerable to drug addiction.

What about unlearning the drug association?

After the drug testing, the rats were exposed repeatedly to the drug compartment, but this time they didn’t receive any drugs.  This is called extinction of a memory, and it is measured by the rats losing their preference for the former drug compartment.  Socially isolated animals had a significantly slower rate of unlearning the preference.  Their memory was more resistant to extinction.  If their VTA neurons are overly sensitive, then it may be harder to rewire that connection in the brain between environment and reward.
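
For concreteness, here is how place preference and its extinction are typically quantified (a generic scoring sketch; I don’t know the exact measure used in this paper):

def preference_score(time_drug_side, time_other_side):
    """Fraction of the test session spent in the drug-paired compartment."""
    return time_drug_side / (time_drug_side + time_other_side)

# Hypothetical daily drug-free test sessions (seconds per compartment) during extinction
sessions = [(800, 200), (700, 300), (650, 350), (600, 400)]
for day, (drug_side, other_side) in enumerate(sessions, 1):
    print(f"day {day}: preference = {preference_score(drug_side, other_side):.2f}")
# A slower decline in this score across days = a memory more resistant to extinction.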

In the context of drug addiction, these findings are big.  An adverse early adolescence can prime the brain to develop addiction more easily and make it harder to sober up.  If the VTA neurons start firing every time you go through an environment associated with drugs, you’re going to want to take a hit again.  The authors bring up an interesting point that social isolation generally causes a depression of neuronal activity in places like the hippocampus (the site of learning), so maybe the increased activity in the VTA is the way for the brain to maintain some sort of homeostasis – some areas increase, some decrease, but overall the brain may have normal amounts of activity.  This is an interesting way of looking at this problem.  I suspect that social isolation offers little in the way of rewards, so the reward pathway is trying to compensate by getting more sensitive.  It will be interesting to see if there is also a connection with changes in gene expression.  The authors explain how the VTA neurons get overactive, from a cellular point of view, but what actually initiates those changes?  And how can social interactions feed into the biology of the cell?