Archive for the ‘News’ Category

Celebrity Neuroscience Blog Makeover

We’ve got our own dedicated hosting account now!

That means our official URLs from now on are…

the-connectome.com

theconnecto.me

Though I’ll keep this blog around as an archive, all posts from now on will be on the new site only.

Here are some things I think you’ll like about the new site:

– More bandwidth for faster load times
– More storage space for high-quality podcasts and videos
– A new design with lots more cool features

So head over and check it out – I think our new look is quite slimming!

Forget Me Not

Having trouble remembering where you left your keys? You can improve with a little practice, says a new study.

"I've forgotten more than you'll ever...wait, what was I saying?"

It’s an idea that had never occurred to me before, but one that seems weirdly obvious once you think about it: people who train their brains to recall the locations of objects for a few minutes each day show greatly improved ability to remember where they’ve left things.

No matter what age you are, you’ve probably had your share of “Alzheimer’s moments,” when you’ve walked into a room only to forget why you’re there, or set something down and immediately forgotten where you put it. Attention is a limited resource, and when you’re multitasking, there’s not always enough of it to go around.

For people in the early stages of disorders like Alzheimer’s disease, though, these little moments of forgetfulness can add up to a frustrating inability to complete even simple tasks from start to finish. This stage is known as mild cognitive impairment (MCI), and its symptoms can range from amnesia to problems with counting and logical reasoning.

That’s because all these tasks depend on memory – even if it’s just the working memory that holds our sense of the present moment together – and most of our memories are dependent on a brain structure called the hippocampus, which is one of the major areas attacked by Alzheimer’s.

What exactly the hippocampus does is still a hotly debated question, but it seems to help sync up neural activity when new memories are “written down” in the brain, as well as when they’re recalled (a process that rewrites the memory anew each time). So it makes sense that the more we associate a particular memory with other memories – and with strong emotions – the more easily even a damaged hippocampus will be able to help retrieve it.

But now, a team led by Benjamin Hampstead at the Emory University School of Medicine has made a significant breakthrough in rehabilitating people with impaired memories, the journal Hippocampus reports: the researchers have demonstrated that patients suffering from MCI can learn to remember better with practice.

The team took a group of volunteers with MCI and taught them a three-step memory-training strategy: 1) the subjects focused their attention on a visual feature of the room that was near the object they wanted to remember, 2) they memorized a short explanation for why the object was there, and 3) they imagined a mental picture that contained all that information.
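If it helps to see that written out, here’s the gist of the strategy as a little code sketch – my own toy rendering with made-up names, not anything from the study itself:

```python
# The study's three-step strategy as an over-engineered checklist.
# Function and field names are mine, invented for illustration.
def make_memory_cue(obj, nearby_feature, reason):
    """Build the kind of mental picture the training asks for."""
    return {
        "object": obj,
        "anchor": nearby_feature,  # step 1: a visual feature near the object
        "story": reason,           # step 2: a short explanation for why it's there
        # step 3: one mental image that contains all of the above
        "image": f"my {obj}, next to the {nearby_feature}, because {reason}",
    }

cue = make_memory_cue("keys", "blue lamp", "I set them down to answer the phone")
print(cue["image"])
```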

The patients’ memory didn’t just measurably improve after a few training sessions – fMRI scans showed that the training had physically changed their brains:

Before training, MCI patients showed reduced hippocampal activity during both encoding and retrieval, relative to HEC. Following training, the MCI MS group demonstrated increased activity during both encoding and retrieval. There were significant differences between the MCI MS and MCI XP groups during retrieval, especially within the right hippocampus.

In other words, the hippocampus in these patients became much more active during memory storage and retrieval than it had been before the training.

Now, it’s important to point out that this finding doesn’t necessarily imply improvement – studies have shown that decreased neural activity is often more strongly correlated with mastery of a task than increased activity is – but it does show that these people’s brains were learning to work differently as their memories improved.

So next time you experience a memory slipup, think of it as an opportunity to learn something new. You’d be surprised what you can train your brain to do with a bit of practice.

That is, as long as you remember to practice.

Connection Clusters

As our brains learn something, our neurons form new connections in clustered groups, says a new study.

Some clusters are juicier than others.

In other words, synapses – connections between neurons – are much more likely to form near other brand-new synapses than they are to emerge near older ones.

As our neuroscience friends like to say: “Cells that fire together wire together” – and that process of rewiring never stops. From before you were born right up until this moment, the synaptic pathways in your brain have been transforming, hooking up new electrochemical connections and trimming away the ones that aren’t needed. Even when you’re sound asleep, your brain’s still burning the midnight oil, looking for ever-sleeker ways to do its many jobs.

I like to imagine that this happens to the sound of a really pumped-up drumbeat, as my brain says things like, “We can rebuild this pathway – we have the technology! We can make it better! Faster! Stronger!”

What’s even more amazing is how delicate these adjustments can be. We’re not just talking about growing dendrites here – we’re talking about dendritic spines, the tiny knobs that branch off from dendrites and bloom into postsynaptic densities – molecular interfaces that allow one neuron to receive information from its neighbors.

Back in 2005, a team led by Yi Zuo at the University of California, Santa Cruz found that as a mouse learns a new task, thousands of fresh dendritic spines blossom from the dendrites of neurons in the motor cortex (an area of the brain that helps control movement). In short, they actually observed neurons learning to communicate better.

And now Zuo’s back with another hit, the journal Nature reports. This time, Zuo and her team have shown that those new dendritic spines aren’t just popping up at random – they grow in bunches:

[We show that a] third of new dendritic spines (postsynaptic structures of most excitatory synapses) formed during the acquisition phase of learning emerge in clusters, and that most such clusters are neighbouring spine pairs.

The team discovered this by studying fluorescent mouse neurons under a microscope (oh, did you know there are mice with glowing neurons? Because there are mice with glowing neurons). As in Zuo’s earlier study, they focused on neurons in the motor cortex:

We followed apical dendrites of layer 5 pyramidal neurons in the motor cortex while mice practised novel forelimb skills.

But as it turned out, their discovery about clustered spines was just the tip of the iceberg – the researchers also found that when a second dendritic spine formed close to one that was already there, the first spine grew larger, strengthening the connection even more. And they learned that clustered spines were much more likely to persist than non-clustered ones were, which just goes to show the importance of a solid support network. And finally, they found that the new spines don’t form when just any signal passes through – new connections only blossom when a brain is learning through repetition.
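To get an intuition for how a few simple local rules like these could produce clustering, here’s a toy Monte Carlo sketch. All the probabilities are invented for illustration – this is a cartoon of the general idea, not the paper’s model:

```python
import random

# Toy dendrite: new spines are more likely to sprout next to a spine
# that formed recently, and clustered spines survive pruning more often
# than loners. All numbers are made up for illustration.
SITES = 50
P_NEW, P_NEW_NEAR_RECENT = 0.01, 0.05
P_PRUNE_LONE, P_PRUNE_CLUSTERED = 0.10, 0.02

spines = [False] * SITES   # which sites currently hold a spine
recent = [False] * SITES   # which spines formed in the last session

def neighbors(i):
    return [j for j in (i - 1, i + 1) if 0 <= j < SITES]

for session in range(20):  # repeated practice sessions
    formed = [False] * SITES
    for i in range(SITES):
        if not spines[i]:  # empty site: maybe grow a spine here
            p = P_NEW_NEAR_RECENT if any(recent[j] for j in neighbors(i)) else P_NEW
            if random.random() < p:
                spines[i] = formed[i] = True
        else:              # existing spine: maybe prune it
            clustered = any(spines[j] for j in neighbors(i))
            if random.random() < (P_PRUNE_CLUSTERED if clustered else P_PRUNE_LONE):
                spines[i] = False
    recent = formed

in_clusters = sum(1 for i in range(SITES)
                  if spines[i] and any(spines[j] for j in neighbors(i)))
print(f"{sum(spines)} spines survive, {in_clusters} of them in clusters")
```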

Can you imagine how many new dendritic spines were bursting to life in the researchers’ brains as they learned all this? And what about in your brain, right now?

It’s kinda strange to think about this stuff, I know – even stranger is the realization that your brain isn’t so much an object as it is a process – a constantly evolving system of interconnections. You could say that instead of human beings, we’re really human becomings – and thanks to your adaptable neurons, each moment is a new opportunity to decide who – or what – you’d like to become.

Beyond Perfection

If you continue to practice a skill even after you’ve achieved mastery of it, your brain keeps learning to perform it more and more efficiently, says a new study.

Believing you've reached perfection can lead you to engage in some...interesting...behavior.

As we perform a task – say, dunking a basketball or playing a sweet guitar solo – over and over again, we eventually reach a point that some psychologists call “unconscious competence,” where we execute each movement perfectly without devoting any conscious attention to it at all. But even after this point, our bodies keep finding ways to perform the task more and more efficiently, burning less energy with each repetition.

This story’s got it all – brain-hacks, mysterious discoveries, robots – but to put it all in perspective, we’ve gotta start by talking about this idea we call perfection.

“Practice makes perfect,” the old saying goes – but what’s this “perfect” we’re trying to reach? Isn’t it often a matter of opinion? What I mean is, how do we judge, say, a “perfect” backflip or a “perfect” dive? We compare it to others we’ve seen, and decide that it meets certain criteria better than those examples did; that it was performed with less error.

But where do these criteria for perfection come from? Well, some have said there’s a Platonic realm of “perfect forms” that our minds are somehow tapping into – a realm that contains not only “The Perfect Chair” but “the perfect version of that chair” and “the perfect version of that other chair” and “the perfect version of that molecule” and so on, ad infinitum. Kinda weird, I know – but a lot of smart people believed in ideas like this for thousands of years, and some still do.

Science, though, works in a different way: Instead of trying to tap into a world of perfect forms, scientists (and engineers and mathematicians and programmers and so on) work to find errors and fix them.

And it turns out that the human body is quite talented at doing exactly that. A team led by Alaa Ahmed at the University of Colorado at Boulder found this out firsthand, with the help of robots, the Journal of Neuroscience reports:

Seated subjects made horizontal planar reaching movements toward a target using a robotic arm.

These researchers weren’t interested in brain activity – instead, as the volunteers practiced moving the arm, the researchers measured their oxygen consumption, their carbon dioxide output, and their muscle activity.

As you might expect, the scientists found that as people got better at moving the arm, their consumption of oxygen and production of carbon dioxide, and their overall muscle activity, steadily decreased:

Subjects decreased movement error and learned the novel dynamics. By the end of learning, net metabolic power decreased by ∼20% from initial learning. Muscle activity and coactivation also decreased with motor learning.

But the volunteers’ bodies didn’t stop there. As people kept practicing, their oxygen consumption and carbon dioxide output continued to decrease – and so did their muscle activation. In short, their bodies kept learning to move the arm with measurably less and less physical effort.
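You can get a feel for this “get it right, then make it cheap” dynamic with a toy optimization – a cartoon in code, not the study’s actual analysis. An imaginary mover controls an agonist and an antagonist muscle; accuracy depends only on their net force, while effort grows with their total activation:

```python
# Gradient descent on cost = error^2 + LAMBDA * effort keeps trimming
# wasteful coactivation long after the movement itself looks perfect.
# All parameters are invented for illustration.
TARGET = 1.0     # net force the task requires
LAMBDA = 0.05    # how much effort "costs" relative to error
RATE = 0.1       # learning rate

a, b = 2.0, 1.5  # agonist and antagonist: accurate-ish, but wasteful
for trial in range(200):
    error = (a - b) - TARGET
    # partial derivatives of cost = error**2 + LAMBDA * (a + b)
    a = max(0.0, a - RATE * (2 * error + LAMBDA))
    b = max(0.0, b - RATE * (-2 * error + LAMBDA))

print(f"net force {a - b:.2f} (target {TARGET}), total effort {a + b:.2f}")
```

Run it and you’ll see the error all but vanish early on, while the total effort keeps shrinking for many trials afterward – practice still paying off long after the movement looks perfect.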

Though this study didn’t record any data from the subjects’ brains, it’s easy to see how this continual improvement is just one reflection of a very versatile ability. For instance, we know that when two neurons get really friendly, they become more sensitive to each other’s signals – and we also know that underused neural pathways gradually fade away, making room for new ones. Self-improvement impulses are woven deeply into our bodies – into our cells.

When I say that our brains and bodies are cities, I’m not just speaking metaphorically – you are, quite literally, a vast community – an ecosystem composed of trillions of interdependent microorganisms, each one constantly struggling for its own nourishment and safety.

And though your conscious mind is one part – a very significant part – of this great microscopic nation, it’s not the only part that can learn. At this moment, all throughout the lightless highways and chambers of your body, far below your conscious access, networks of cells are changing, adapting, learning, adjusting – finding errors and fixing them.

So, you can think about “perfection” all you want – but even at that magical moment when you achieve it, the multitudes within you are still hard at work, figuring out how to reach beyond that ideal.

What do you think they’re up to right now?

Learning Expectations

Researchers have isolated a specific pathway our brains use when learning new beliefs about others’ motivations, a new study says.

"M'lord! 'Tis improper to influence the lady's anterior cingulate!"

Though this type of learning, like many others, depends heavily on the neurotransmitter chemical dopamine’s influence in a set of ancient brain structures called the basal ganglia, it’s also influenced by the rostral anterior cingulate cortex (rACC) – a structure that helps us weigh certain emotional reactions against others – indicating that emotions like empathy also play crucial roles.

As we play competitively against other people, our brains get to work constructing mental models that aim to predict our opponents’ future actions. This means we’re not only learning from the consequences of our own actions, but figuring out the reasons behind others’ actions as well. This ability is known as theory of mind, and it’s thought to be one of the major mental skills that separates the minds of humans – and of our closest primate cousins – from those of other animals.

Though plenty of studies have examined the neural correlates of straightforward cause-and-effect learning, the process by which we learn from the actions of other people still remains somewhat unclear – largely because complex emotions like empathy and regret seem to involve many areas of the brain, including parts of the temporal, parietal and prefrontal cortices, as well as more ancient structures like the basal ganglia and cingulate cortex.

That’s why a team led by the University of Illinois’ Kyle Mathewson set out to track exactly what happens in our brains as we learn new ideas about others’ motivations, the journal Proceedings of the National Academy of Sciences reports.

The team used functional magnetic resonance imaging (fMRI) to study activity deep within volunteers’ brains as they played a competitive betting game against one another – focusing especially on moments when players learned whether they’d won or lost a round, and how much their opponents had wagered.

The researchers then used a computational model to match up patterns of brain activity with patterns of play – and found that the volunteers’ brains learned others’ behaviors and motivations through a complex interplay of several regions:

We found that the reinforcement learning (RL) prediction error was correlated with activity in the ventral striatum.

In other words, the ventral striatum – an area of the basal ganglia – was crucial for learning by reinforcement, much as the researchers expected…

In contrast, activity in the ventral striatum, as well as the rostral anterior cingulate (rACC), was correlated with a previously uncharacterized belief-based prediction error. Furthermore, activity in rACC reflected individual differences in degree of engagement in belief learning.

…while the rostral anterior cingulate, for its part, seemed to dictate how attentively players watched their opponents’ patterns of play, and how much thought they put into predicting those patterns.
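Here’s a minimal sketch of how those two learning signals can live side by side in a single player – my framing of the general idea, not the authors’ actual computational model:

```python
# Two toy learning signals, updated every round of a betting game:
#   1. a reward prediction error -- "did I win more than I expected?"
#   2. a belief prediction error -- "did my opponent act as I predicted?"
ALPHA_REWARD, ALPHA_BELIEF = 0.2, 0.3   # made-up learning rates

value = 0.0        # my expected payoff, learned by reinforcement
belief_high = 0.5  # my belief that the opponent will bet high

rounds = [(+1, True), (-1, True), (-1, False), (+1, True)]  # (payoff, opponent bet high?)
for payoff, opp_high in rounds:
    reward_pe = payoff - value                             # "ventral striatum" signal
    value += ALPHA_REWARD * reward_pe
    belief_pe = (1.0 if opp_high else 0.0) - belief_high   # "rACC-linked" signal
    belief_high += ALPHA_BELIEF * belief_pe
    print(f"reward PE {reward_pe:+.2f} | belief PE {belief_pe:+.2f} | "
          f"P(opponent bets high) ~ {belief_high:.2f}")
```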

Thus, it appears that theory of mind is built atop an ancient “substructure” of simple reinforcement learning, which supports layers of more emotionally complex attitudes and beliefs about others’ thoughts, feelings and motivations – many of which are influenced by our perceptions of our own internal feelings.

And that points back to an important aspect of subjective experience in general: Many of our perceptions of the external world are extrapolated from our perceptions of our internal states. When we say, “It’s hot,” we really mean, “I feel hot”; when we say, “It’s loud in here,” we really mean, “It sounds loud to me.” In fact, the great philosopher Bertrand Russell went so far as to suggest that instead of saying, “I think,” it’d be more accurate to say “it thinks in me,” the same way we say “it’s raining.”

Anyway, no matter how you choose to phrase it, the point is that thinking isn’t a single process, but a relationship of many processes to one another. Which means that no matter how much we think we know, there’s always plenty left to learn.

Musical Learning

A new study throws some light on how musical aptitude can offset one very specific aspect of the aging process.

The question of Why Those Young Men Always Sound So Angry remains ripe for investigation.

In research comparing older adults with musical training to those without, the people who’d spent years regularly practicing or teaching music consistently displayed much faster neural reaction times to certain kinds of sounds.

The idea that the human brain has a deep relationship with music is obviously nothing new – but lately, research has been demonstrating more and more ways in which music is a major ingredient in mental health. For example, a 2007 study found that the brain reacts to music by automatically heightening attention, and one in 2010 found that an ear for harmony was correlated with a better ability to distinguish speech from noise.

The therapeutic implications of all this haven’t gone unnoticed. The neuroscientist Michael Merzenich has cured patients of chronic tinnitus (ear-ringing) by prescribing them musical training – and he’s had remarkable success using it to improve the responsiveness of autistic children.

Inspired by Merzenich’s work, a team led by Northwestern University’s Nina Kraus designed an experiment: They decided to record the reaction times of musicians’ brains when they heard certain sounds, and compare those against the reaction times of people with no musical training.

As the journal Neurobiology of Aging reports, the team used electrodes to record exactly how quickly the volunteers’ auditory neurons reacted to a variety of speech sounds.

They found that older musicians’ brains seemed to keep their youthful reaction speeds – at least when it came to one particular kind of sound: the syllable “da,” and specifically its formant transition, the rapid frequency sweep where the consonant gives way to the vowel:

Although younger and older musicians exhibited equivalent response timing for the formant transition, older nonmusicians demonstrated significantly later response timing relative to younger nonmusicians … The main effect of musicianship observed for the neural response to the onset and the transition was driven solely by group differences in the older participants.

In other words, a musician’s brain responds to the “da” sound just as quickly in old age as it did in youth – but a nonmusician’s response time slows down significantly over the years.

The slowdown isn’t much – only a few milliseconds – but in brain time, that can be enough to cause problems. See, we’re not talking about conscious reaction time here – this is electrophysiological reaction time – the speed at which information travels in the brain.

Why does this matter? Because mental issues like autism, senile dementia and schizophrenia are all related to very slight timing errors in the brain’s elaborate communication patterns. An aging brain isn’t so much an old clock as an old city. Ever notice how the most ancient cities tend to be the ones with the weirdest cultures? Well, there ya go.

Just like old cities, though, autism and dementia and schizophrenia – and aging – can be scary sometimes, but they’re also the sources of great breakthroughs, and remarkable insights, and all sorts of conversations that couldn’t have happened otherwise.

What I’m saying is, the only measurable difference between a disorder and a gift is that one is helpful and the other isn’t. And in most cases, that difference really comes down to timing.

Sacred Values

Principles on which we refuse to change our stance are processed via separate neural pathways from those we’re more flexible on, says a new study.

Some of our values can be more flexible than others...

Our minds process many decisions in moral “gray areas” by weighing the risks and rewards involved – so if the risk is lessened or the reward increased, we’re sometimes willing to change our stance. However, some of our moral stances are tied to much more primal feelings – “gut reactions” that remind us of our most iron-clad principles: don’t hurt innocent children, don’t steal from the elderly, and so on.

These fundamental values – what the study calls “sacred” values (whether they’re inspired by religious views or not) – are processed heavily by the left temporoparietal junction (TPJ), which is involved in imagining others’ minds; and by the left ventrolateral prefrontal cortex (vlPFC), which is important for remembering rules. When especially strong sacred values are called into question, the amygdala – an ancient brain region crucial for processing negative “gut” reactions like disgust and fear – also shows high levels of activation.

These results provide some intriguing new wrinkles to age-old debates about how the human mind processes the concepts of right and wrong. See, in many ancient religions (and some modern ones) rightness and wrongness are believed to be self-evident rules, or declarations passed down from on high. Even schools that emphasized independent rational thought – such as Pythagoreanism in Greece and Buddhism in Asia – still had a tendency to codify their moral doctrines into lists of rules and precepts.

But as philosophers like Jeremy Bentham and David Hume began to turn more analytical eyes on these concepts, it became clear that exceptions could be found for many “absolute” moral principles – and that our decisions about rightness and wrongness are often based on our personal emotions about specific situations.

The epic battle between moral absolutism and moral relativism is still in full swing today. The absolutist arguments essentially boil down to the claim that without some bedrock set of unshakable rules, it’s impossible to know for certain whether any of our actions are right or wrong. The relativists, on the other hand, claim that without some room for practical exceptions, no moral system is adaptable enough for the complex realities of this universe.

But now, as the journal Philosophical Transactions of the Royal Society B: Biological Sciences reports, a team led by Emory University’s Gregory Berns has analyzed moral decision-making from a neuroscientific perspective – and found that our minds rely on rule-based ethics in some situations, and practical ethics in others.

The team used fMRI scans to study patterns of brain activity in 32 volunteers as the subjects responded “yes” or “no” to various statements, ranging from the mundane (e.g., “You are a tea drinker”) to the incendiary (e.g., “You are pro-life”).

At the end of the questionnaire, the volunteers were offered the option of changing their stances for cash rewards. As you can imagine, many people had no problem changing their stance on, say, tea drinking for a cash reward. But when they were pressed to change their stances on hot-button issues, something very different happened in their brains:

We found that values that people refused to sell (sacred values) were associated with increased activity in the left temporoparietal junction and ventrolateral prefrontal cortex, regions previously associated with semantic rule retrieval.

In other words, people have learned to process certain moral decisions by bypassing their risk/reward pathways and directly retrieving stored “hard and fast” rules.

This suggests that sacred values affect behaviour through the retrieval and processing of deontic rules and not through a utilitarian evaluation of costs and benefits.
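In code terms, it’s as if the brain runs something like the toy routine below – my cartoon of the two pathways, not anything from the paper:

```python
# A sacred value short-circuits the cost/benefit machinery by
# retrieving a stored rule instead of weighing the offer.
# The rules and numbers are made up for illustration.
SACRED_RULES = {"harm an innocent child": "refuse",
                "steal from the elderly": "refuse"}

def decide(action, benefit, cost):
    if action in SACRED_RULES:           # deontic pathway: rule retrieval
        return SACRED_RULES[action]      # no price is even weighed
    return "accept" if benefit > cost else "refuse"  # utilitarian pathway

print(decide("switch tea brands", benefit=10, cost=1))          # accept
print(decide("steal from the elderly", benefit=10**6, cost=1))  # refuse
```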

Of course, this makes it much easier to understand why “there’s no reasoning” with some people about certain issues – because it wasn’t reason that brought them to their stance in the first place. You might as well try to argue a person out of feeling hungry.

That doesn’t mean, though, that there’s no hope for intelligent discourse about “sacred” topics – what it does mean is that instead of trying to change people’s stances on them through logical argument, we need to work to understand why these values are sacred to them.

For example, the necessity of slavery was considered a sacred value all across the world for thousands of years – but today slavery is illegal (and considered morally heinous) in almost every country on earth. What changed? Quite a few things, actually – industrialization made hard manual labor less necessary for daily survival; overseas slaving expeditions became less profitable; the idea of racial equality became more popular…the list could go on and on, but it all boils down to a central concept: over time, the needs slavery had been meeting were addressed in modern, creative ways – until at last, most people felt better not owning slaves than owning them.

My point is, if we want to make moral progress, we’ve got to start by putting ourselves in the other side’s shoes – and perhaps taking a more thoughtful look at our own sacred values while we’re at it.

Taking Vision Apart

For the first time, scientists have created neuron-by-neuron maps of brain regions corresponding to specific kinds of visual information, and specific parts of the visual field, says a new study.

At age 11, Cajal landed in prison for blowing up his town's gate with a homemade cannon. Seriously. Google it.

If other labs can confirm these results, this will mean we’re very close to being able to predict exactly which neurons will fire when an animal looks at a specific object.

Our understanding of neural networks has come a very long way in a very short time. It was just a little more than 100 years ago that Santiago Ramón y Cajal first proposed the theory that individual cells – neurons – were the basic processing units of the central nervous system (CNS). Cajal lived until 1934, so he got to glimpse the edge – but not much more – of the strange new frontier he’d discovered. As scientists like Alan Lloyd Hodgkin and Andrew Huxley – namesakes of today’s Hodgkin-Huxley neuron model – started studying neurons’ behavior, they began realizing that the brain’s way of processing information was much weirder and more complex than anyone had expected.

See, computers and neuroscience evolved hand-in-hand – in many ways, they still do – and throughout the twentieth century, most scientists described the brain as a sort of computer. But by the early 1970s, they were realizing that a computer and a brain are different in a very fundamental way: computers process information in bits – tiny electronic switches that say “on” or “off” – but a brain processes information in connections and gradients – degrees to which one piece of neural architecture influences others. In short, our brains aren’t digital – they’re analog. And as we all know, there’s just something warmer about analog.
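If you want to taste that analog flavor for yourself, here’s the Hodgkin-Huxley model’s much simpler cousin – a leaky integrate-and-fire neuron, with textbook-ish but arbitrary parameters. The voltage drifts continuously with its input, and only the spike itself looks anything like a digital bit:

```python
# Leaky integrate-and-fire neuron: graded, continuous integration
# punctuated by discrete spikes. Parameters are illustrative.
TAU = 10.0        # membrane time constant (ms)
V_REST = -65.0    # resting potential (mV)
V_THRESH = -50.0  # spike threshold (mV)
DT = 1.0          # time step (ms)

v = V_REST
for t in range(100):
    current = 20.0 if 20 <= t < 80 else 0.0    # graded input, on then off
    v += DT * (-(v - V_REST) + current) / TAU  # analog drift toward input
    if v >= V_THRESH:                          # the one discrete event here
        print(f"spike at t = {t} ms")
        v = V_REST                             # reset after firing
```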

So where does this leave us now? Well, instead of trying to chase down bits in brains, many of today’s cutting-edge neuroscientists are working to figure out what connects to what, and how those connections form and change as a brain absorbs new information. In a way, the process isn’t all that different from trying to identify all the cords tangled up under your desk – it’s just that in this case, there are trillions of plugs, and a lot of them are molecular in size. That’s why neuroscientists need supercomputers that fill whole rooms to crunch the numbers – though I’m sure you’ll laugh if you reread that sentence in 2020.

But the better we understand brains, the better we get at understanding them – and that’s why a team led by the Salk Institute’s James Marshel and Marina Garrett set out to map the exact neural pathways that correspond to specific aspects of visual data, the journal Neuron reports. (By the way, if you guys are reading this, I live in L.A. and would love to visit your lab.)

The team injected mouse brains with a special dye that’s chemically formulated to glow fluorescent when a neuron fires. This allowed them to track exactly which neurons in a mouse’s brain were active – and to what degree – when the mice were shown various shapes. And the researchers confirmed something wonderfully weird about the way a brain works:

Each area [of the visual cortex] contains a distinct visuotopic representation and encodes a unique combination of spatiotemporal features.

In other words, a brain doesn’t really have sets of neurons that encode specific shapes – instead, it has layers of neurons, and each layer encodes an aspect of a shape – its roundness, its largeness, its color, and so on. As signals pass through each layer, they’re influenced by the neurons they’ve connected with before. Each layer is like a section of a choir, adding its own voice to the song with perfect timing.

Now, other teams have already developed technologies that can record memories and dreams right out of the human brain – so what’s so amazing about this particular study? The level of detail:

Areas LM, AL, RL, and AM prefer up to three times faster temporal frequencies and significantly lower spatial frequencies than V1, while V1 and PM prefer high spatial and low temporal frequencies. LI prefers both high spatial and temporal frequencies. All extrastriate areas except LI increase orientation selectivity compared to V1, and three areas are significantly more direction selective (AL, RL, and AM). Specific combinations of spatiotemporal representations further distinguish areas.

Are you seeing this? We’re talking about tuning in to specific communication channels within the visual cortex, down at the level of individual neuronal networks.
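If “prefers a spatial frequency” sounds abstract, here’s a toy tuning-curve sketch. The preferred frequencies below are invented – only their rough ordering echoes the quote above:

```python
import math

# Each "area" responds most strongly to its own preferred spatial
# frequency. Gaussian tuning on a log-frequency axis is a common
# simplification; the numbers here are illustrative, not the paper's.
PREFERRED = {"V1": 0.08, "PM": 0.08, "AL": 0.02, "LI": 0.24}  # cycles/degree

def response(preferred, stimulus, width=0.7):
    d = math.log(stimulus / preferred)
    return math.exp(-d * d / (2 * width * width))

stimulus = 0.03  # a coarse, low-spatial-frequency grating
for area, pref in PREFERRED.items():
    print(f"{area}: responds at {response(pref, stimulus):.2f} of its max")
```

Feed the toy a coarse grating and “AL” answers loudest; swap in a fine one and “LI” takes over – one stimulus, many specialized voices.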

The gap between mind and machine is getting narrower every day. How does that make you feel?

It’s Official!

The Connectome has two new URLs:

the-connectome.com

theconnecto.me

Much easier to remember, I think.

Have a wonderful weekend, everyone – I’ve got some fun stuff coming up for you next week.

The Memory Master

A gene that may underlie the molecular mechanisms of memory has been identified, says a new study.

Some of us feel that "yellow" and "red" are open to interpretation...

The gene’s called neuronal PAS domain protein 4 (Npas4 to its friends). When a brain has a new experience, Npas4 leaps into action, activating a whole series of other genes that modify the strength of synapses – the connections that allow neurons to pass electrochemical signals around.

You can think of synapses as being a bit like traffic lights: a very strong synapse is like a green light, allowing lots of traffic (i.e., signals) to pass down a particular neural path when the neuron fires. A weaker synapse is like a yellow light – some signals might slip through now and then, but most won’t make it. Some synapses can inhibit others, acting like red lights – stopping any signals from getting through. And if a particular synapse goes untraveled for long enough, the road starts to crumble away – until finally, there’s no synapse left.

There’s a saying in neuroscience: “Cells that fire together wire together.” (And vice versa.) In other words, synaptic plasticity – the ability of neurons to modify their connectivity patterns – is what allows neural networks to physically change as they take in new information. It’s what gives our brains the ability to learn.
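Here’s that saying, plus the traffic-light analogy, rolled into one toy synapse – a cartoon of the idea, not anyone’s published model:

```python
import random

# Coincident firing strengthens the weight ("fire together, wire
# together"); disuse lets the road crumble away. Rates are made up.
RATE, DECAY = 0.2, 0.03

def light(w):
    if w >= 0.6: return "green"
    if w >= 0.1: return "yellow"
    return "crumbling away"

w = 0.3
for step in range(50):
    pre, post = random.random() < 0.6, random.random() < 0.6
    if pre and post:
        w = min(1.0, w + RATE * (1.0 - w))  # fired together: wire together
    else:
        w = max(0.0, w - DECAY)             # no traffic: the road crumbles
print(f"after 50 rounds, weight {w:.2f} – a {light(w)} light")
```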

In fact, millions of neurons are delicately tinkering with their connectivity patterns right now, inside your head, as you learn this stuff. Pretty cool, huh?

Anyway, synaptic plasticity’s not exactly breaking news – scientists have been studying it in animals like squid and sea slugs since the 1970s. Neurons in those animals are pretty easy to study with electrodes and a microscope, because a) the animals are anatomically simple compared to humans, and b) some of their neurons are so huge they can be seen with the naked eye.

Studying synapses in humans isn’t quite so simple, though. For one thing, most people wouldn’t like it if you cut open their brain and started poking around while they were alive and conscious – and besides, a lot of the really interesting stuff happens down at the molecular level.

That brings up an important point: though you normally hear about genes in connection with traits – say, a “gene for baldness” and so on – these complex molecular strands actually play all sorts of roles in the body, from building cells to adjusting chemical levels to telling other genes what to do.

That’s why MIT’s Yingxi Lin and her team set out to study the functions of certain genes found in the hippocampus – a brain structure central to memory formation – the journal Science reports. The researchers taught a group of mice to avoid a little room in which they received a mild electric shock, then used a precise chemical tracking technique to isolate which genes in the mouse hippocampus were activated right when the mice learned which room to avoid.

In particular, they focused on a hippocampal region with the sci-fi-sounding name of Cornu Ammonis 3 – or CA3 for short:

We found that the activity-dependent transcription factor Npas4 regulates a transcriptional program in CA3 that is required for contextual memory formation. Npas4 was specifically expressed in CA3 after contextual learning.

By “transcriptional program,” the paper’s authors mean a series of genetic “switches” – genes that Npas4 activates – which in turn make chemical adjustments that strengthen or weaken synaptic connections. In short, Npas4 appears to be part of the master “traffic conductor program” for many of the brain’s synapses.
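To picture a “transcriptional program,” think of one master switch flipping a series of downstream switches, each of which nudges synapses up or down. Here’s that cascade as a toy sketch – every name below except Npas4 is a made-up placeholder:

```python
# Toy gene cascade: Npas4 activates downstream "switches," each of
# which nudges synaptic strengths. Target genes and numbers are
# placeholders, not the study's actual findings.
DOWNSTREAM = {"gene_A": +0.2, "gene_B": -0.1, "gene_C": +0.4}

def new_experience(synapses, npas4_active=True):
    """Npas4 fires -> downstream genes adjust synapse strengths."""
    if not npas4_active:          # knockout mouse: the program never runs
        return synapses
    adjusted = dict(synapses)
    for gene, nudge in DOWNSTREAM.items():
        for s in adjusted:        # each target gene tweaks every synapse a bit
            adjusted[s] = min(1.0, max(0.0, adjusted[s] + 0.3 * nudge))
    return adjusted

synapses = {"CA3->CA1": 0.5, "DG->CA3": 0.4}
print(new_experience(synapses))                      # normal mouse: weights shift
print(new_experience(synapses, npas4_active=False))  # knockout: nothing changes
```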

Though they were pretty excited by this discovery (who wouldn’t be?) the researchers took a deep breath, calmed down, and double-checked their results, by testing memory formation in mice whose brains were unable to produce Npas4:

Global knockout or selective deletion of Npas4 in CA3 both resulted in impaired contextual memory, and restoration of Npas4 in CA3 was sufficient to reverse the deficit in global knockout mice.

In short, they make a pretty convincing argument that Npas4 is a necessary ingredient in a mouse’s ability – and probably our ability – to form certain types of new memories.

Exactly how that program relates to our experience of memory remains unclear, but it’s a promising starting point for fine-tuning future memory research. I don’t know about you, but I’d be thrilled to green-light such a project.
