Inherited Memory

Scientists have discovered a molecular mechanism by which a parent’s experiences can alter the genes of its children.

Do you dare to learn the secrets of...The Chromosome?!

Several recent studies have demonstrated connections between environmental factors and inherited genetic traits – for instance, people whose parents lived through famine tend to have higher rates of diabetes and heart disease – but this latest research, published in the journal Nature, marks the first hard evidence of such a modification process at work.

Before we dive into the details, though, let’s back up a bit, and take a look at why this is such a Big Freakin’ Deal.

See, the whole concept of inheritance via experience has been – and will probably continue to be – a very hard sell to modern biologists. As you might remember from science class, the early-19th-century theory of Lamarckism, which proposed that traits could be modified through an individual’s “use and disuse” of them, was blown out of the water by Darwin’s theory of evolution by natural selection, which demonstrated – through mountains of hard data – that traits are actually shaped by millions of tiny variations, and by the degree to which those variations help an individual create successful offspring that carry them on.

For more than a century, the whole Lamarck debate seemed to be settled – but lately, it looks like some elements of Lamarckism may be making a surprise comeback, in a new field called epigenetics – the study of gene changes caused by chemical factors other than modifications to the DNA sequence itself:1

Epigenetic memory comes in various guises, but one important form involves histones — the proteins around which DNA is wrapped. Particular chemical modifications can be attached to histones and these modifications can then affect the expression of nearby genes, turning them on or off.

For example, in 2007, the journal Nature published data showing that exposure to extended periods of cold could cause plants to alter the molecular process by which their DNA was copied – thus “silencing” certain genes in their offspring. Then in 2009, MIT’s Technology Review discussed some animal studies with even more shocking results: apparently, a mouse’s environment can affect the memory capacity it passes on to its descendants.

But whereas those studies were mainly concerned with the effects of epigenetic alteration, this new research has gone a step further, and mapped a specific epigenetic process that turns genes on and off. A team led by Professor Martin Howard and Professor Caroline Dean at the John Innes Centre performed a mathematical analysis of experimental data, and discovered the mechanism by which plants exposed to extended cold periods produce descendants with delayed flowering:

Professor Howard produced a mathematical model of the FLC system. The model predicted that inside each individual cell, [a gene known as] FLC should be either completely activated or completely silenced, with the fraction of cells switching to the silenced state increasing with longer periods of cold.

Not only did the plants’ gene expression pattern line up with the mathematical predictions – the team’s research also showed that histone proteins were modified in a way that would alter the FLC gene, and that this alteration happened during the period of cold. This is the first true demonstration of histones’ role in epigenetic memory.
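To get a feel for what that all-or-none switching means in practice, here’s a rough back-of-the-envelope simulation – not the team’s actual model; the switching rate and cell count are invented purely for illustration:

```python
import random

def fraction_silenced(cold_days, switch_rate=0.05, n_cells=10000, seed=42):
    """Simulate all-or-none FLC silencing across a population of cells.

    Each cell has a small chance of flipping to the (permanent) silenced
    state on each day of cold, so longer cold -> more cells silenced.
    """
    rng = random.Random(seed)
    silenced = 0
    for _ in range(n_cells):
        # A cell stays active only if it avoids switching every single day.
        stays_active = all(rng.random() > switch_rate for _ in range(cold_days))
        if not stays_active:
            silenced += 1
    return silenced / n_cells

# Longer cold exposure silences a larger fraction of cells, approaching
# the analytical curve 1 - (1 - switch_rate) ** cold_days.
for days in (10, 30, 60):
    print(days, round(fraction_silenced(days), 3))
```

Note that each individual cell in this sketch is fully on or fully off – exactly the digital behavior the model predicted – while the population as a whole responds smoothly to the length of the cold period.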

At first glance, this might not seem to have a whole lot to do with neuroscience – but in a wider context, the implications of epigenetics look pretty incredible.

For plenty of obvious reasons, modern biological science has never been big on the idea of “genetic” or “ancestral” memories – recollections of specific events passed down to us from our ancestors (e.g., the Jungian collective unconscious). And yet, while epigenetic modifications don’t offer much support for that idea, they do seem to suggest that our ancestors’ experiences can affect the biological development – and thus, the minds – of future generations.

We’ve still got a long way to go before we understand all the ways in which gene expression affects brain development. But if histones can help mice pass their enriched learning capacity on to their children, the question of what humans can pass on seems, at the very least, worth investigating.


1. Just to be safe, I want to point out that, on the whole, the theory of evolution through natural selection still explains a great deal about how biological systems work, and its principles have been verified in a multitude of fields. At the risk of being over-obvious: these new discoveries don’t threaten the main trunk of evolutionary theory; they’re just pruning a few of its branches.

Consistent Networks

A new study has established that functional networks are highly consistent across the brains of many individuals.

Functional networks look delicious, don't they?

The research, published in the journal Cerebral Cortex, correlated huge amounts of data on the connectivity of macaque visual cortices – particularly the areas known as V1, V3, and V4 – and confirmed that between one monkey’s brain and another, these functional networks follow extremely similar patterns.

The question of how consistent primate functional networks are has been a controversial issue for years. Most brains are reasonably similar to others from the same species on a structural level – but the complexity of their synaptic connections is mind-blowing: a single macaque brain contains billions of neurons, which form trillions of synapses.

Thus, it’s only in the past few years that scientists have even been able to map the functional connectivity of a single brain – much less compare functional connections across a whole group of primates. That complexity has also led to a lot of debate about how similar such intricate networks can possibly be to one another.

Well, the answer is (in technical terminology), “really really similar.”

The brain is characterized by a highly consistent, weighted network among the functional areas of the cortex, which are responsible for such functions as vision, hearing, touch, movement control and complex associations. The study revealed that such cortical networks and their properties are reproducible from individual to individual.

This network-comparison adventure began when a group of neuroanatomists in France, led by Henry Kennedy, contacted the University of Notre Dame’s Interdisciplinary Center for Network Science and Applications (iCeNSA). The French scientists commissioned the Center – known for its expertise in analyzing complex networks for fields as diverse as sociology and epidemiology – to check out the brain-to-brain consistency of a massive amount of macaque functional connectivity data they’d gathered. The iCeNSA assembled a team led by Dr. Maria Ercsey-Ravasz and Dr. Zoltan Toroczkai, two physicists who tore into the data like King Henry I at a lamprey-eating contest:

A top-down approach called functional decomposition, identifying bundles within the brain, helps overcome the sheer data volume. The macaque brain has 83 [major functional] areas; the human brain more than 120.

One of the reasons functional connectivity is so consistent from one brain to another, the teams found, is that neural connections seem to organize themselves based on a consistent set of principles across a variety of scales – in other words, they exhibit some fractal characteristics.
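As a toy illustration of what “brain-to-brain consistency” means in network terms – this isn’t the iCeNSA pipeline, and the areas and weights below are made up – you can correlate the edge weights of two individuals’ weighted connectivity maps:

```python
def pearson(xs, ys):
    """Plain Pearson correlation coefficient for two equal-length lists."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

def network_similarity(conn_a, conn_b):
    """Correlate the weights of the edges shared by two connectivity maps.

    conn_a, conn_b: dicts mapping (source_area, target_area) -> weight.
    A value near 1.0 means the two brains' networks look almost identical.
    """
    shared = sorted(set(conn_a) & set(conn_b))
    return pearson([conn_a[e] for e in shared], [conn_b[e] for e in shared])

# Hypothetical V1/V3/V4 connection strengths for two individual monkeys.
monkey_1 = {("V1", "V3"): 0.80, ("V1", "V4"): 0.35, ("V3", "V4"): 0.60}
monkey_2 = {("V1", "V3"): 0.78, ("V1", "V4"): 0.40, ("V3", "V4"): 0.55}
print(round(network_similarity(monkey_1, monkey_2), 2))  # close to 1
```

The study’s “really really similar” verdict corresponds to this kind of similarity score sitting near the top of the scale across every pair of individuals.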

“It looks like there is some sort of general algorithm that is being run in this brain network,” [Toroczkai] says. “The wiring is very strange.”

What these strange organizational principles are – and why primate brains have evolved to rely on them – are questions the teams plan to explore in the near future.

Even more exciting, they hope these new methods of data analysis may help them unravel one of neuroscience’s ultimate mysteries: how, exactly, brains encode information at all.

Recognition and Localization

New research has identified a wide range of brain areas that help us recognize objects we see – and it’s also revealed some surprises about how the brain distributes processing power.

Superman's brilliant disguise fools us all.

A recent study published in the journal Neuron focuses on a patient – known as “SM” – who suffered a lesion in the right lateral fusiform gyrus (LFG), an area known to be involved in recognition of objects and faces. The lesion left SM with a disorder known as “visual agnosia,” in which the patient can see just fine, but has serious trouble identifying objects. Decades of research on similar cases have shown that this isn’t a language difficulty – it’s a difficulty associating familiar objects with their names.

Visual agnosia is a pretty fascinating disorder, but it’s actually not the big news here – the unexpected part is that this new study has revealed two surprises about how object recognition works. First, it turns out that many more brain regions than scientists expected are involved in object recognition. And second, it seems that the recognition process involves symmetrical activation patterns across both brain hemispheres, one of which can “fill in” when its partner suffers damage.

By correlating fMRI scans with behavioral observations, a team led by Princeton psychologist Dr. Christina Konen discovered that the functional connections involved in object recognition extend throughout the temporal and visual cortices of both the left and right hemispheres:

Visual responses, object-related, and -selective responses were reduced in regions immediately surrounding the lesion in the right hemisphere, and also, surprisingly, in corresponding locations in the structurally intact left hemisphere. In contrast, hV4 of the right hemisphere showed expanded response properties.

Carnegie Mellon University’s Dr. Marlene Behrmann, one of the study’s other leaders, was pretty stunned by this new data. As she puts it:

These results will force us in the field to step back a little and rethink the way we understand the relationship between brain and behavior. We now need to take into account that there are multiple parts of the brain that underlie object recognition, and damage to any one of those parts can essentially impair or decrease the ability to normally recognize objects.

It’ll probably help to back up a bit, and explain exactly why neuroscientists are so amazed by this information.

See, even though the two hemispheres of our brain might look symmetrical, there are also plenty of asymmetries between them – both in structure and in function. One of the most obvious examples is handedness: the majority of people are right-handed, even though there doesn’t seem to be any particular anatomical reason why this should be so – and in fact, plenty of other animals have about a 50/50 distribution between right- and left-handedness.

And then there’s language: it definitely seems to be localized in the left hemisphere in most people. This is fairly easy to verify – the Wada test involves injecting an anesthetic into the carotid artery on one side of the neck, temporarily putting the hemisphere on that side to sleep. When the left hemisphere is anesthetized, most patients’ speech and language comprehension are seriously impaired.

In short, the idea that certain functions are localized in one hemisphere is largely accepted today. Even so, fMRI scans show that plenty of tasks activate our brains in symmetrical patterns. What exactly this means is still up for debate – some neuroscientists think it might reflect a “safety net” of functional redundancy, while others say the two hemispheres may complement each other’s processing.

Whatever the case, experiments like the one above demonstrate that even when one hemisphere is damaged, the brain does its best to work around the problem:

There … appeared to be some functional reorganization in intact regions of SM’s damaged right hemisphere, suggesting that neural plasticity is possible even when the brain is damaged in adulthood.

Now, this ties into a much more exciting set of implications about our brains. For example, what about children who’ve had one of their brain hemispheres surgically removed, yet go on to complete college and lead productive lives? Or people who – as fMRI scans have confirmed – learn to literally see via sound or touch?

As long as the brain in question is young enough, or is given enough time to adapt, many areas can take over the functions of others. Perhaps our functional processing is distributed more widely in childhood, and “solidifies” more as we age – but even grown-ups can teach an old brain region new tricks.

This is why it’s not very accurate to say there’s “a” human connectome – each of our brains is wired in a unique way, and is constantly rewiring itself every second. New neurons are being born all the time – and existing ones are always forming new connections and being co-opted for new tasks.

In the unfortunate case of “SM,” not all the LFG’s functionality could be preserved; but even so, the case is a striking example of how versatile our brains can be – and how much we still have to learn about the ways they process information.

Measuring Maturity

New data is enabling neuroscientists to make accurate predictions about a young connectome’s future development.

A groovy-looking chart of functional connectivity modifications - green pathways weaken with maturity; orange ones grow stronger.

By comparing the resting-state functional networks in pre-adolescent brains with connectivity patterns found in adult brains, neuroscientists have developed a brain maturity growth curve that charts functional connectivity changes as the brain matures.

A report published in the journal Science explains that nodes in these networks are a bit like high-schoolers, because they join, branch, and rejoin in a series of predictable “cliques” as an individual ages. Many of these cliques involve brain areas that influence a person’s ability to sustain attention, and to quickly come up with a reasonable response to a novel situation.

A team led by Dr. Bradley Schlaggar at the Washington University School of Medicine in St. Louis began the study by asking whether fMRI scans could provide enough data to predict certain aspects of a person’s brain development. They discovered that, despite individual variations, these functional changes follow a regular pattern:

The researchers used functional connectivity data to measure the subject’s “brain age,” and to chart the maturation process from birth to full adulthood (age 30). This type of data visualization allows researchers to characterize the typical trajectory of maturation as a biological growth curve.

By correlating this data with maps of the actual functional networks, the researchers were able to predict which specific changes were likely to occur at various stages of a person’s climb toward maturity. In particular, this study focused on resting-state functional connectivity – the connective networks that take shape when the mind is “idling.”
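To make the growth-curve idea concrete, here’s a minimal sketch – the curve shape and rate constant are invented, not the study’s fitted values – of how a maturity index derived from connectivity data could be mapped back onto a “brain age”:

```python
import math

def maturation_index(age, rate=0.12):
    """Hypothetical growth curve: functional maturity climbs steeply in
    childhood and saturates near 1.0 by around age 30."""
    return 1.0 - math.exp(-rate * age)

def predict_brain_age(index, rate=0.12):
    """Invert the growth curve: map an observed maturity index (e.g. one
    summarizing resting-state connectivity) to an estimated 'brain age'."""
    return -math.log(1.0 - index) / rate

# A subject whose connectivity yields a higher maturity index lands
# further along the (made-up) curve.
for index in (0.5, 0.7, 0.9):
    print(index, round(predict_brain_age(index), 1))
```

The real study fit its curve to fMRI data from many subjects, of course – but the underlying logic is the same: once the typical trajectory is charted, any individual scan can be placed along it.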

These network maps might look pretty tangled, but their changes follow a logical pattern. In early childhood, the “fast/adaptive” network, which is centered in the frontal and parietal lobes and allows us to rapidly adapt to new situations, works closely with the “slow/maintenance” network, which is centered in the cingulate cortex and the operculum and helps us sustain mental activity – but that close partnership gradually changes:

In children there is marked connectivity [between these networks]. In fact, the two networks, which have not yet differentiated, are in active conversation as a single amalgam of regions. In adolescents, the brain regions are in an intermediate state.

The scientists speculate that this childhood integration between the two networks contributes to children’s short attention spans, and that the gradual separation may allow our maturing brains to control our mental focus more precisely. By our mid-20s, these networks have separated from each other while strengthening connections within themselves – further improving our ability to mentally shift gears.

The next step in this maturity research, Schlaggar says, is to study the brains of children with diseases like Tourette’s syndrome and ADHD, in an effort to understand just where this functional connectivity maturation process goes awry.

The better we’re able to map the details of these dynamic connectivity maps, the more we’ll be able to create effective therapies that target specific aspects of cognitive development. In fact, the latest evidence points toward the idea that these maps continue to reshape themselves well into adulthood – which is a hopeful sign not just for therapists, but also for those of us who love to hack our own connectomes.

Storytelling Synchrony

Research is revealing that the brains of storytellers and listeners fall in sync with each other’s neural activity.

"Hey Mom, guess what - I'm totally syncing with your brain right now."

A team led by Princeton neuroscientist Uri Hasson studied fMRI scans of speakers and listeners during a storytelling session, then asked the listeners a series of follow-up questions to check how well they comprehended the story:

Hasson and his colleagues recorded the brain responses of a woman who was telling a story about her prom and those of people who were listening to her … The recordings showed that the listeners’ brains started to resemble the speaker’s brain, or “couple” with it.

The listeners who reported a strong understanding of the story showed a high degree of coupling with the speaker’s brain – particularly in the primary auditory cortex (A1), as well as in the temporoparietal junction (TPJ), which helps us process distinctions between ourselves and others, and imagine what others are thinking and feeling (a process known as theory of mind).

On the whole, the storyteller’s brain activity tended to precede the listeners’, especially in regions associated with self-reflection, such as the insula and precuneus – though the listeners’ brain activity occasionally leaped ahead in these regions as well. And in a few areas, the listeners’ brain activity preceded the storyteller’s – especially in certain parts of the prefrontal cortex (PFC) that are associated with anticipation and planning.
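The lead-and-lag analysis above boils down to lagged correlation. Here’s a stripped-down sketch – with fake signals, not real BOLD data – of how you’d find the lag at which a listener’s activity best matches a speaker’s:

```python
def pearson(xs, ys):
    """Plain Pearson correlation coefficient for two equal-length lists."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

def best_lag(speaker, listener, max_lag=5):
    """Slide the listener's signal against the speaker's and return the
    (lag, correlation) pair with the highest correlation. A positive lag
    means the listener's activity follows the speaker's."""
    best = (0, -2.0)
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            s, l = speaker[:len(speaker) - lag], listener[lag:]
        else:
            s, l = speaker[-lag:], listener[:len(listener) + lag]
        r = pearson(s, l)
        if r > best[1]:
            best = (lag, r)
    return best

# Fake BOLD-like signals: the "listener" reproduces the "speaker's"
# triangular wave two samples later.
speaker = [0, 1, 2, 3, 4, 5, 4, 3, 2, 1] * 2
listener = [0, 0] + speaker[:-2]
lag, r = best_lag(speaker, listener)
print(lag, round(r, 2))  # the listener trails the speaker by 2 samples
```

Regions where the best-matching lag flips negative – listener ahead of speaker – are exactly the ones Hasson’s team flagged as anticipatory.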

This all points toward the idea that participants in a conversation come to occupy a sort of shared conceptual space:

“If I say, ‘Do you want a coffee?’ you say, ‘Yes please, two sugars.’ You don’t say, ‘Yes, please put two sugars in the cup of coffee that is between us,’” said Hasson. “You’re sharing the same lexical items, grammatical constructs and contextual framework. And this is happening not just abstractly, but literally in the brain.”

Hasson plans to go on to study the process of more involved dialogue, to watch how people’s brains fall in and out of sync as they have a discussion or a debate. In fact, it seems that debates might interest him most of all, because he says the team is planning to focus on interactions “where there’s a failure to communicate” – in particular, one of the oldest communication breakdowns of all:

Future work may look at whether gender differences affect how we understand each other.

If this sort of technology could be applied to debates about other hot topics – like regional politics and religion – we might finally see some hard neurological data on why exactly opposing groups always seem to miss (or ignore) each other’s points. Such data probably won’t present ready-made solutions, but it may give the participants some starting points for a new kind of dialogue.

Inside the Vegetative Mind

For the first time, scientists have successfully communicated with patients trapped in vegetative bodies.

A vegetative patient, thinkin' about tennis and stuff.

As a report published in the journal NeuroImage explains, these patients’ thought patterns come through quite clearly on an fMRI scanner, and they’re able to respond to questions in ways that demonstrate that they understand what’s being asked:

To answer yes, [one patient] was told to think of playing tennis, a motor activity. To answer no, he was told to think of wandering from room to room in his home, visualising everything he would expect to see there, creating activity in the part of the brain governing spatial awareness.

Four out of 23 vegetative patients proved their responsiveness to a team led by neuroscientists Dr. Adrian Owen of Cambridge and Dr. Steven Laureys of the University of Liège, by correctly answering a series of “yes” or “no” questions about their families. Based on these results, the scientists speculate that as many as one fifth of vegetative patients may be able to communicate in this way.
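The yes/no scheme amounts to a two-class pattern classifier. Here’s a minimal sketch – the “voxel” values and the nearest-template approach are illustrative assumptions, not the team’s actual analysis:

```python
def centroid(patterns):
    """Average a list of equal-length activity patterns element-wise."""
    n = len(patterns)
    return [sum(p[i] for p in patterns) / n for i in range(len(patterns[0]))]

def distance(a, b):
    """Euclidean distance between two activity patterns."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def decode_answer(pattern, yes_template, no_template):
    """Classify a new activity pattern as 'yes' (tennis/motor imagery) or
    'no' (spatial-navigation imagery) by nearest template."""
    if distance(pattern, yes_template) <= distance(pattern, no_template):
        return "yes"
    return "no"

# Hypothetical 3-voxel activity averages from calibration scans:
yes_trials = [[0.9, 0.1, 0.2], [0.8, 0.2, 0.1]]  # tennis imagery (motor areas high)
no_trials = [[0.1, 0.8, 0.9], [0.2, 0.9, 0.8]]   # navigation imagery (spatial areas high)
yes_c, no_c = centroid(yes_trials), centroid(no_trials)

print(decode_answer([0.85, 0.15, 0.2], yes_c, no_c))  # "yes"
```

Because motor imagery and spatial-navigation imagery light up well-separated brain regions, even a decoder this crude can tell the two apart reliably – which is what makes the scheme practical for patients who can’t move or speak.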

The line between comatose and vegetative is often blurry. Patients in comas typically have no response to stimuli and no sleep/wake cycles, whereas vegetative patients return to partial arousal, exhibiting some normal brain activity – such as sleep and wakefulness – but no responsiveness to the external world.

For years, many scientists had assumed that all vegetative patients were unconscious – but this new data calls that assumption into question: at least some patients who seem vegetative on the outside may still have conscious awareness. Owen says that perhaps 40% of unresponsive patients are misdiagnosed as vegetative, when they may actually be conscious.

And consciousness is only the beginning – as Owen points out, thinking of things like tennis and one’s house requires the ability to use working memory and other high-level cognitive functions.

Laureys adds that this study is only the beginning – he and his colleagues hope to refine their system of communication soon:

It’s early days, but in the future we hope to develop this technique to allow some patients to express their feelings and thoughts, control their environment and increase their quality of life.

Another intriguing set of implications revolves around the idea of synaptic plasticity. If these patients are taught to communicate more effectively, and given real-time feedback from the external world, it’s possible that they could become more responsive overall. This is still speculation – but studies on Alzheimer’s sufferers have yielded encouraging results in this direction.

Since doctors often deny any hope of recovery for most vegetative patients, it’s exciting to think that they may finally get a chance to speak their minds.

The Quiet Cells

New research is unlocking the secret role of glia, the brain cells that were long considered more structural than functional. As it turns out, though, glia may be even more responsive to certain types of stimuli than neurons are.

An astrocyte strikes a pose for the camera.

One class of glial cells – known as astrocytes because of their star-like shape – makes up a huge share of the brain’s cell population; by many estimates, astrocytes outnumber neurons. Because glia didn’t seem to be synapsing with neurons, most scientists had assumed their roles were to hold neuronal structures together, and to help shape neurons’ structural development – in fact, the word “glia” itself comes from the Greek word for glue.

“Electrically, astrocytes are pretty silent,” [said MIT neuroscientist] James Schummers. “A lot of what we know about neurons is from sticking electrodes in them. We couldn’t record from astrocytes, so we ignored them.”

But a 2008 MIT study called that idea into question, by showing that glia help regulate cerebral blood flow. More recently, studies led by Dr. Alexander Gourine at University College London and Dr. James Schummers at the Max Planck Florida Institute have killed off the old notion of passive glia for good.

Gourine’s and Schummers’ research has shown that astrocytes respond to slight decreases in blood pH – which reflect a rising level of carbon dioxide – with surges of internal calcium ions (Ca2+) and the release of adenosine triphosphate (ATP), both of which play crucial parts in neurons’ synaptic signaling:

ATP propagated astrocytic Ca2+ excitation, activated chemoreceptor neurons, and induced adaptive increases in breathing … This demonstrates a potentially crucial role for brain glial cells in mediating a fundamental physiological reflex.

In other words, astrocytes directly signal neurons to let us know when and how much we need to breathe – that’s about as fundamental as reflexes get.
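That reflex is, at heart, a negative feedback loop. Here’s a toy model – every number in it is invented – of how a pH-triggered boost in breathing rate pulls CO2 back toward its setpoint:

```python
def breathing_reflex(co2, setpoint=40.0, gain=2.0, base_rate=12.0):
    """Toy version of the reflex: the further CO2 rises above its setpoint
    (i.e., the further blood pH falls), the harder astrocyte-driven ATP
    signaling pushes the chemoreceptor neurons, and the faster we breathe.
    All units are arbitrary."""
    excess = max(0.0, co2 - setpoint)
    return base_rate + gain * excess

# Let CO2 start too high and watch the feedback loop pull it back down.
co2 = 48.0
for _ in range(40):
    rate = breathing_reflex(co2)
    co2 += 1.0 - 0.08 * rate  # metabolic production minus breathing-driven clearance
print(round(co2, 1))  # settles just above the 40.0 setpoint
```

The real circuit is far messier, of course – but the logic is the same: astrocytes sense the error signal (falling pH), and neurons carry out the correction (faster breathing).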

And researchers suspect astrocytes may be keeping other secrets too. Some think they may play a role in memory formation, while others think they may play a more general role in regulating the levels of neurotransmitters that hang out around synapses, and in determining where the brain’s blood supply should be focused.

This could be the beginning of a major paradigm shift, because many of today’s functional studies of the brain use fMRI scans, which measure changes in blood flow to different brain areas. If astrocytes are actively involved in controlling this process, fMRI may not be telling us exactly what we thought:

Questions have plagued [fMRI] studies, as it is difficult to know what is happening when a particular part of the brain “lights up” in MRI images. [One fMRI researcher] says that it’s important for scientists to be aware that MRI images reflect the status of astrocytes, and that “things that influence astrocytes will influence the signal.”

There’s an even more positive side to these developments, too: understanding the roles of glia may help neuroscientists gain a much clearer understanding of baffling disorders like autism and schizophrenia, because genes linked to these problems seem to be commonly expressed in astrocytes. All in all, we seem to be watching the birth of a fascinating new field of neuroscience research.

In the meantime, you might take a second to pause, breathe, and thank your astrocytes for making it all happen.

