Posts Tagged ‘ neurophysiology ’

Neuroscience Friends!

I’ve just returned from a thrilling weekend at the BIL Conference in Long Beach, California (yes, the pun on “TED” is very intentional) where I met all kinds of smart, fun people – including lots of folks who share my love for braaaiiins!

The conference was held in... The Future!

So I thought I’d introduce you guys to some of the friends I made. I think you’ll be as surprised – and as excited – as I am.

Backyard Brains
Their motto is “neuroscience for everyone” – how cool is that? They sell affordable kits that let you experiment at home with the nervous systems of insects and other creatures. They gave a super-fun presentation where I got to help dissect a cockroach and send electrical signals through its nerves.

Interaxon
They build all kinds of cutting-edge tools that let home users study their brain activity, and even control machines and art projects with it. Their founder, Ariel Garten, has a great TED talk here – I’ve rarely met anyone else who was so excited to have weird new neuroscience adventures.

Deltaself and Dangerously Hardcore
Two blogs by the very smart Naomi Most – the first is about how scientific data is changing the way we all understand our minds and bodies; the second is about hacking your own behavior to stay healthier and live better.

Halcyon Molecular
Their aim is to put the power to sequence and modify genomes in everyone’s hands within the next few decades. They’re getting some huge funding lately, and lots of attention in major science journals.

Bonus – XCOR Aerospace
They’re building a privately-funded suborbital spacecraft for independent science missions. If there’s anybody who can help us all join the search for alien life in the near future, I bet it’s these guys.

So check those links out and let me know what you think. I’d love to get these folks involved in future videos, especially if you’re interested in any of them.

Forget Me Not

Having trouble remembering where you left your keys? You can improve with a little practice, says a new study.

"I've forgotten more than you'll ever...wait, what was I saying?"

It’s an idea that had never occurred to me before, but one that seems weirdly obvious once you think about it: people who train their brains to recall the locations of objects for a few minutes each day show greatly improved ability to remember where they’ve left things.

No matter what age you are, you’ve probably had your share of “Alzheimer’s moments,” when you’ve walked into a room only to forget why you’re there, or set something down and immediately forgotten where you put it. Attention is a limited resource, and when you’re multitasking, there’s not always enough of it to go around.

For people in the early stages of memory disorders like Alzheimer’s disease, though, these little moments of forgetfulness can add up to a frustrating inability to complete even simple tasks from start to finish. This stage of decline is known as mild cognitive impairment (MCI), and its symptoms can range from amnesia to problems with counting and logical reasoning.

That’s because all these tasks depend on memory – even if it’s just the working memory that holds our sense of the present moment together – and most of our memories are dependent on a brain structure called the hippocampus, which is one of the major areas attacked by Alzheimer’s.

What exactly the hippocampus does is still a hotly debated question, but it seems to help sync up neural activity when new memories are “written down” in the brain, as well as when they’re recalled (a process that rewrites the memory anew each time). So it makes sense that the more we associate a particular memory with other memories – and with strong emotions – the more easily even a damaged hippocampus will be able to help retrieve it.

But now, a team led by Benjamin Hampstead at the Emory University School of Medicine has made a significant breakthrough in rehabilitating people with impaired memories, the journal Hippocampus reports: the researchers have demonstrated that patients suffering from MCI can learn to remember better with practice.

The team took a group of volunteers with MCI and taught them a three-step memory-training strategy: 1) the subjects focused their attention on a visual feature of the room that was near the object they wanted to remember, 2) they memorized a short explanation for why the object was there, and 3) they imagined a mental picture that contained all that information.

Not only did the patients’ memory measurably improve after a few training sessions; fMRI scans also showed that the training physically changed their brains:

Before training, MCI patients showed reduced hippocampal activity during both encoding and retrieval, relative to HEC [healthy elderly controls]. Following training, the MCI MS [mnemonic strategy] group demonstrated increased activity during both encoding and retrieval. There were significant differences between the MCI MS and MCI XP [exposure] groups during retrieval, especially within the right hippocampus.

In other words, the hippocampus in these patients became much more active during memory storage and retrieval than it had been before the training.

Now, it’s important to point out that this finding doesn’t necessarily imply improvement – studies have shown that decreased neural activity is often more strongly correlated with mastery of a task than increased activity is – but it does show that these people’s brains were learning to work differently as their memories improved.

So next time you experience a memory slipup, think of it as an opportunity to learn something new. You’d be surprised what you can train your brain to do with a bit of practice.

That is, as long as you remember to practice.

Connection Clusters

As our brains learn something, our neurons form new connections in clustered groups, says a new study.

Some clusters are juicier than others.

In other words, synapses – connections between neurons – are much more likely to form near other brand-new synapses than they are to emerge near older ones.

As our neuroscience friends like to say: “Cells that fire together wire together” – and that process of rewiring never stops. From before you were born right up until this moment, the synaptic pathways in your brain have been transforming, hooking up new electrochemical connections and trimming away the ones that aren’t needed. Even when you’re sound asleep, your brain’s still burning the midnight oil, looking for ever-sleeker ways to do its many jobs.
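If you’re the tinkering type, that rewiring process is easy to sketch in code. Here’s a toy Python version of the classic Hebbian update – co-active cells strengthen their connection, idle connections slowly fade, and the weakest synapses get trimmed away. The learning rate and thresholds are made-up numbers for illustration, not anything from the study:

```python
import numpy as np

# Toy Hebbian plasticity: cells that fire together wire together,
# unused connections decay, and the weakest synapses get pruned.
# All constants here are illustrative, not taken from the study.

rng = np.random.default_rng(0)
n_neurons = 8
weights = rng.uniform(0.1, 0.5, size=(n_neurons, n_neurons))
np.fill_diagonal(weights, 0.0)  # no self-connections

LEARNING_RATE = 0.05
DECAY = 0.01
PRUNE_THRESHOLD = 0.02

for step in range(1000):
    fired = (rng.random(n_neurons) < 0.3).astype(float)  # who fired this step?
    weights += LEARNING_RATE * np.outer(fired, fired)    # strengthen co-active pairs
    np.fill_diagonal(weights, 0.0)
    weights -= DECAY * weights                           # everything else slowly fades...
    weights[weights < PRUNE_THRESHOLD] = 0.0             # ...and faded synapses are trimmed

print(f"surviving synapses: {int((weights > 0).sum())} of {n_neurons * (n_neurons - 1)}")
```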

I like to imagine that this happens to the sound of a really pumped-up drumbeat, as my brain says things like, “We can rebuild this pathway – we have the technology! We can make it better! Faster! Stronger!”

What’s even more amazing is how delicate these adjustments can be. We’re not just talking about growing dendrites here – we’re talking about dendritic spines, the tiny knobs that branch off from dendrites and bloom into postsynaptic densities – molecular interfaces that allow one neuron to receive information from its neighbors.

Back in 2005, a team led by Yi Zuo at the University of California Santa Cruz found that as a mouse learns a new task, thousands of fresh dendritic spines blossom from the dendrites of neurons in the motor cortex (an area of the brain that helps control movement). In short, they actually observed neurons learning to communicate better.

And now Zuo’s back with another hit, the journal Nature reports. This time, Zuo and her team have shown that those new dendritic spines aren’t just popping up at random – they grow in bunches:

A third of new dendritic spines (postsynaptic structures of most excitatory synapses) formed during the acquisition phase of learning emerge in clusters, and that most such clusters are neighbouring spine pairs.

The team discovered this by studying fluorescent mouse neurons under a microscope (Oh, did you know there are mice with glowing neurons? Because there are mice with glowing neurons.). As in Zuo’s earlier study, they focused on neurons in the motor cortex:

We followed apical dendrites of layer 5 pyramidal neurons in the motor cortex while mice practised novel forelimb skills.

But as it turned out, their discovery about clustered spines was just the tip of the iceberg – the researchers also found that when a second dendritic spine formed close to one that was already there, the first spine grew larger, strengthening the connection even more. And they learned that clustered spines were much more likely to persist than non-clustered ones were, which just goes to show the importance of a solid support network. And finally, they found that the new spines don’t form when just any signal passes through – new connections only blossom when a brain is learning through repetition.

Can you imagine how many new dendritic spines were bursting to life in the researchers’ brains as they learned all this? And what about in your brain, right now?

It’s kinda strange to think about this stuff, I know – even stranger is the realization that your brain isn’t so much an object as it is a process – a constantly evolving system of interconnections. You could say that instead of human beings, we’re really human becomings – and thanks to your adaptable neurons, each moment is a new opportunity to decide who – or what – you’d like to become.

Learning Expectations

Researchers have isolated a specific pathway our brains use when learning new beliefs about others’ motivations, a new study says.

"M'lord! 'Tis improper to influence the lady's anterior cingulate!"

Though this type of learning, like many others, depends heavily on the neurotransmitter dopamine’s influence in a set of ancient brain structures called the basal ganglia, it’s also influenced by the rostral anterior cingulate cortex (rACC) – a structure that helps us weigh certain emotional reactions against others – indicating that emotions like empathy also play crucial roles.

As we play competitively against other people, our brains get to work constructing mental models that aim to predict our opponents’ future actions. This means we’re not only learning from the consequences of our own actions, but figuring out the reasons behind others’ actions as well. This ability is known as theory of mind, and it’s thought to be one of the major mental skills that separates the minds of humans – and of our closest primate cousins – from those of other animals.

Though plenty of studies have examined the neural correlates of straightforward cause-and-effect learning, the process by which we learn from the actions of other people still remains somewhat unclear – largely because complex emotions like empathy and regret seem to involve many areas of the brain, including parts of the temporal, parietal and prefrontal cortices, as well as more ancient structures like the basal ganglia and cingulate cortex.

That’s why a team led by the University of Illinois’ Kyle Mathewson set out to track exactly what happens in our brains as we learn new ideas about others’ motivations, the journal Proceedings of the National Academy of Sciences reports.

The team used functional magnetic resonance imaging (fMRI) to study activity deep within volunteers’ brains as they played a competitive betting game against one another – focusing especially on moments when players learned whether they’d won or lost a round, and how much their opponents had wagered.

The researchers then used a computational model to match up patterns of brain activity with patterns of play – and found that the volunteers’ brains learned others’ behaviors and motivations through a complex interplay of several regions:

We found that the reinforcement learning (RL) prediction error was correlated with activity in the ventral striatum.

In other words, the ventral striatum – an area of the basal ganglia – was crucial for learning by reinforcement, much as the researchers expected…

In contrast, activity in the ventral striatum, as well as the rostral anterior cingulate (rACC), was correlated with a previously uncharacterized belief-based prediction error. Furthermore, activity in rACC reflected individual differences in degree of engagement in belief learning.

…while activity in the anterior cingulate seemed to reflect how attentively players watched their opponents’ patterns of play, and how much thought they put into predicting those patterns.

Thus, it appears that theory of mind is built atop an ancient “substructure” of simple reinforcement learning, which supports layers of more emotionally complex attitudes and beliefs about others’ thoughts, feelings and motivations – many of which are influenced by our perceptions of our own internal feelings.
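If you want to see what a “prediction error” actually looks like, here’s a toy Python sketch of the two learning signals the study describes – a reward prediction error of the kind the ventral striatum tracks, and a belief-based prediction error of the kind linked to the rACC. It’s the textbook update rule, not the authors’ actual computational model, and the learning rates are arbitrary:

```python
# Toy prediction-error learning, in the spirit of the study's two signals.
# Not the authors' actual model; the learning rates are arbitrary.

ALPHA_REWARD = 0.2   # step size for the reward prediction error (ventral striatum)
ALPHA_BELIEF = 0.3   # step size for the belief prediction error (rACC)

def update_after_round(reward, opponent_bet_high, expected_reward, belief):
    # Reward prediction error: what I got minus what I expected.
    rpe = reward - expected_reward
    expected_reward += ALPHA_REWARD * rpe

    # Belief-based prediction error: what my opponent actually did,
    # minus what I believed they'd do.
    bpe = float(opponent_bet_high) - belief
    belief += ALPHA_BELIEF * bpe

    return expected_reward, belief

expected_reward = 0.0    # how much do I expect to win each round?
belief_bets_high = 0.5   # my model of the opponent's behavior

for reward, bet_high in [(1.0, True), (-1.0, True), (1.0, False)]:
    expected_reward, belief_bets_high = update_after_round(
        reward, bet_high, expected_reward, belief_bets_high)
    print(f"expected reward {expected_reward:+.2f}, "
          f"P(opponent bets high) ~= {belief_bets_high:.2f}")
```

Each surprise – in winnings or in the opponent’s behavior – nudges the corresponding expectation a little closer to reality, which is the whole trick behind this kind of learning.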

And that points back to an important aspect of subjective experience in general: many of our perceptions of the external world are extrapolated from our perceptions of our internal states. When we say, “It’s hot,” we really mean, “I feel hot”; when we say, “It’s loud in here,” we really mean, “It sounds loud to me.” In fact, the great philosopher Bertrand Russell went so far as to suggest that instead of saying, “I think,” it’d be more accurate to say “It thinks in me,” the same way we say “It’s raining.”

Anyway, no matter how you choose to phrase it, the point is that thinking isn’t a single process, but a relationship of many processes to one another. Which means that no matter how much we think we know, there’s always plenty left to learn.

Musical Learning

A new study throws some light on how musical aptitude can offset one very specific aspect of the aging process.

The question of Why Those Young Men Always Sound So Angry remains ripe for investigation.

In research comparing older adults with musical training to those without, the people who’d spent years regularly practicing or teaching music consistently displayed much faster neural reaction times to certain kinds of sounds.

The idea that the human brain has a deep relationship with music is obviously nothing new – but lately, research has been demonstrating more and more ways in which music is a major ingredient in mental health. For example, a 2007 study found that the brain reacts to music by automatically heightening attention, and one in 2010 found that an ear for harmony was correlated with a better ability to distinguish speech from noise.

The therapeutic implications of all this haven’t gone unnoticed. The neuroscientist Michael Merzenich has treated chronic tinnitus (ear-ringing) by prescribing musical training – and he’s had remarkable success using it to improve the responsiveness of autistic children.

Inspired by Merzenich’s work, a team led by Northwestern University’s Nina Kraus designed an experiment: they decided to record the reaction times of musicians’ brains when they heard certain sounds, and compare those against the reaction times of people with no musical training.

As the journal Neurobiology of Aging reports, the team used electrodes to record exactly how quickly each participant’s auditory system reacted to a variety of speech sounds.

They found that older musicians’ brains seemed to keep their youthful reaction speeds, at least when it came to a certain kind of sound: the syllable “da” – or more precisely, the rapid consonant-to-vowel shift it contains, known as a formant transition in science slang:

Although younger and older musicians exhibited equivalent response timing for the formant transition, older nonmusicians demonstrated significantly later response timing relative to younger nonmusicians … The main effect of musicianship observed for the neural response to the onset and the transition was driven solely by group differences in the older participants.

In other words, a musician’s brain responds to the “da” sound just as quickly in old age as it did in youth – but a nonmusician’s response time slows down significantly with age.

The slowdown isn’t much – only a few milliseconds – but in brain time, that can be enough to cause problems. See, we’re not talking about conscious reaction time here – this is electrophysiological reaction time – the speed at which information travels in the brain.
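For the curious, here’s roughly how that kind of timing gets measured – a toy Python sketch that finds when an averaged evoked response first clears a threshold. The waveforms (and the 3-millisecond gap between groups) are simulated stand-ins, not the study’s recordings:

```python
import numpy as np

# A sketch of how electrophysiological "reaction time" is measured:
# find when an averaged evoked response first clears a threshold.
# These waveforms are synthetic stand-ins, not the study's data.

SAMPLE_RATE = 10_000                     # samples per second (0.1 ms resolution)
t = np.arange(0, 0.05, 1 / SAMPLE_RATE)  # a 50 ms window after the sound

def evoked_response(peak_latency_s):
    # A synthetic evoked potential: a Gaussian bump plus a little noise.
    bump = np.exp(-((t - peak_latency_s) ** 2) / (2 * 0.002 ** 2))
    return bump + np.random.default_rng(1).normal(0, 0.02, t.size)

def onset_latency_ms(waveform, threshold=0.5):
    # The first moment the response clears the threshold, in milliseconds.
    return t[np.argmax(waveform > threshold)] * 1000

musician = evoked_response(peak_latency_s=0.020)     # hypothetical 20 ms peak
nonmusician = evoked_response(peak_latency_s=0.023)  # hypothetical: 3 ms later

print(f"musician onset:    {onset_latency_ms(musician):.1f} ms")
print(f"nonmusician onset: {onset_latency_ms(nonmusician):.1f} ms")
```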

Why does this matter? Because mental issues like autism, senile dementia and schizophrenia are all related to very slight timing errors in the brain’s elaborate communication patterns. An aging brain isn’t so much an old clock as an old city. Ever notice how the most ancient cities tend to be the ones with the weirdest cultures? Well, there ya go.

Just like old cities, though, autism and dementia and schizophrenia – and aging – can be scary sometimes, but they’re also the sources of great breakthroughs, and remarkable insights, and all sorts of conversations that couldn’t have happened otherwise.

What I’m saying is, the only measurable difference between a disorder and a gift is that one is helpful and the other isn’t. And in most cases, that difference really comes down to timing.

Sacred Values

Principles on which we refuse to change our stance are processed via separate neural pathways from those we’re more flexible on, says a new study.

Some of our values can be more flexible than others...

Our minds process many decisions in moral “gray areas” by weighing the risks and rewards involved – so if the risk is lessened or the reward increased, we’re sometimes willing to change our stance. However, some of our moral stances are tied to much more primal feelings – “gut reactions” that remind us of our most iron-clad principles: don’t hurt innocent children, don’t steal from the elderly, and so on.

These fundamental values – what the study calls “sacred” values (whether they’re inspired by religious views or not) – are processed heavily by the left temporoparietal junction (TPJ), which is involved in imagining others’ minds; and by the left ventrolateral prefrontal cortex (vlPFC), which is important for remembering rules. When especially strong sacred values are called into question, the amygdala – an ancient brain region crucial for processing negative “gut” reactions like disgust and fear – also shows high levels of activation.

These results provide some intriguing new wrinkles to age-old debates about how the human mind processes the concepts of right and wrong. See, in many ancient religions (and some modern ones) rightness and wrongness are believed to be self-evident rules, or declarations passed down from on high. Even schools that emphasized independent rational thought – such as Pythagoreanism in Greece and Buddhism in Asia – still had a tendency to codify their moral doctrines into lists of rules and precepts.

But as scientists and philosophers like Jeremy Bentham and David Hume began to turn more analytical eyes on these concepts, it became clear that exceptions could be found for many “absolute” moral principles – and that our decisions about rightness and wrongness are often based on our personal emotions about specific situations.

The epic battle between moral absolutism and moral relativism is still in full swing today. The absolutist arguments essentially boil down to the claim that without some bedrock set of unshakable rules, it’s impossible to know for certain whether any of our actions are right or wrong. The relativists, on the other hand, claim that without some room for practical exceptions, no moral system is adaptable enough for the complex realities of this universe.

But now, as the journal Philosophical Transactions of the Royal Society B: Biological Sciences reports, a team led by Emory University’s Gregory Berns has analysed moral decision-making from a neuroscientific perspective – and found that our minds rely on rule-based ethics in some situations, and practical ethics in others.

The team used fMRI scans to study patterns of brain activity in 32 volunteers as the subjects responded “yes” or “no” to various statements, ranging from the mundane (e.g., “You are a tea drinker”) to the incendiary (e.g., “You are pro-life”).

At the end of the questionnaire, the volunteers were offered the option of changing their stances for cash rewards. As you can imagine, many people had no problem changing their stance on, say, tea drinking for a cash reward. But when they were pressed to change their stances on hot-button issues, something very different happened in their brains:

We found that values that people refused to sell (sacred values) were associated with increased activity in the left temporoparietal junction and ventrolateral prefrontal cortex, regions previously associated with semantic rule retrieval.

In other words, people have learned to process certain moral decisions by bypassing their risk/reward pathways and directly retrieving stored “hard and fast” rules.

This suggests that sacred values affect behaviour through the retrieval and processing of deontic rules and not through a utilitarian evaluation of costs and benefits.

Of course, this makes it much easier to understand why “there’s no reasoning” with some people about certain issues – because it wasn’t reason that brought them to their stance in the first place. You might as well try to argue a person out of feeling hungry.
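Here’s one way to picture the difference – a toy Python sketch (mine, not the authors’ model) of the two decision routes the fMRI data points to. Notice that for sacred values, the size of the offer never even enters the calculation:

```python
# Toy illustration of the study's two decision routes (my sketch, not the
# authors' model): utilitarian weighing vs. deontic rule retrieval.

FLEXIBLE_VALUES = {"tea drinker": 5.0}      # illustrative price to switch stance
SACRED_RULES = {"pro-life": "keep stance"}  # a stored rule; no price attached

def respond_to_offer(issue, cash_offer):
    if issue in SACRED_RULES:
        # Deontic route: retrieve the rule; the offer is never weighed.
        return SACRED_RULES[issue]
    # Utilitarian route: weigh the reward against the cost of switching.
    cost = FLEXIBLE_VALUES.get(issue, 0.0)
    return "switch stance" if cash_offer > cost else "keep stance"

print(respond_to_offer("tea drinker", 10.0))  # -> switch stance
print(respond_to_offer("pro-life", 10.0))     # -> keep stance
print(respond_to_offer("pro-life", 1000.0))   # -> keep stance (price-insensitive)
```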

That doesn’t mean, though, that there’s no hope for intelligent discourse about “sacred” topics – what it does mean is that instead of trying to change people’s stances on them through logical argument, we need to work to understand why these values are sacred to them.

For example, the necessity of slavery was considered a sacred value all across the world for thousands of years – but today slavery is illegal (and considered morally heinous) in almost every country on earth. What changed? Quite a few things, actually – industrialization made hard manual labor less necessary for daily survival; overseas slaving expeditions became less profitable; the idea of racial equality became more popular… the list could go on and on, but it all boils down to a central concept: over time, the needs slavery had been meeting were addressed in modern, creative ways – until at last, most people felt better not owning slaves than owning them.

My point is, if we want to make moral progress, we’ve got to start by putting ourselves in the other side’s shoes – and perhaps taking a more thoughtful look at our own sacred values while we’re at it.

Taking Vision Apart

For the first time, scientists have created neuron-by-neuron maps of brain regions corresponding to specific kinds of visual information, and specific parts of the visual field, says a new study.

At age 11, Cajal landed in prison for blowing up his town's gate with a homemade cannon. Seriously. Google it.

If other labs can confirm these results, this will mean we’re very close to being able to predict exactly which neurons will fire when an animal looks at a specific object.

Our understanding of neural networks has come a very long way in a very short time. It was just a little more than 100 years ago that Santiago Ramón y Cajal first proposed the theory that individual cells – neurons – formed the basic processing units of the central nervous system (CNS). Cajal lived until 1934, so he got to glimpse the edge – but not much more – of the strange new frontier he’d discovered. As scientists like Alan Lloyd Hodgkin and Andrew Huxley – namesakes of today’s Hodgkin-Huxley neuron simulator – started studying neurons’ behavior, they began realizing that the brain’s way of processing information was much weirder and more complex than anyone had expected.

See, computers and neuroscience evolved hand-in-hand – in many ways, they still do – and throughout the twentieth century, most scientists described the brain as a sort of computer. But by the early 1970s, they were realizing that a computer and a brain are different in a very fundamental way: computers process information in bits – tiny electronic switches that say “on” or “off” – but a brain processes information in connections and gradients – degrees to which one piece of neural architecture influences others. In short, our brains aren’t digital – they’re analog. And as we all know, there’s just something warmer about analog.
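To make the contrast concrete, here’s a tiny Python sketch of a “digital” logic gate next to an “analog” neuron whose output varies by degree. The weights are illustrative, but the graded weighted-sum-plus-sigmoid logic is the standard textbook model:

```python
import numpy as np

# A bit is on or off; a neuron's influence is graded. A toy contrast:

def digital_gate(inputs):
    # A logic gate: hard 0/1, no in-between.
    return int(all(inputs))

def analog_neuron(inputs, weights, bias=-1.0):
    # Graded influence: each connection contributes by degree,
    # and the output is a smooth value between 0 and 1.
    drive = np.dot(inputs, weights) + bias
    return 1.0 / (1.0 + np.exp(-drive))  # sigmoid squashing

signals = np.array([0.9, 0.4, 0.7])
connection_strengths = np.array([1.2, 0.3, 0.8])  # illustrative weights

print(digital_gate([1, 0, 1]))                                 # -> 0
print(round(analog_neuron(signals, connection_strengths), 3))  # -> ~0.681
```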

So where does this leave us now? Well, instead of trying to chase down bits in brains, many of today’s cutting-edge neuroscientists are working to figure out what connects to what, and how those connections form and change as a brain absorbs new information. In a way, the process isn’t all that different from trying to identify all the cords tangled up under your desk – it’s just that in this case, there are trillions of plugs, and a lot of them are molecular in size. That’s why neuroscientists need supercomputers that fill whole rooms to crunch the numbers – though I’m sure you’ll laugh if you reread that sentence in 2020.

But the better we understand brains, the better we get at understanding them – and that’s why a team led by the Salk Institute’s James Marshel and Marina Garrett set out to map the exact neural pathways that correspond to specific aspects of visual data, the journal Neuron reports. (By the way, if you guys are reading this, I live in L.A. and would love to visit your lab.)

The team injected mouse brains with a special dye that’s chemically formulated to glow fluorescent when a neuron fires. This allowed them to track exactly which neurons in a mouse’s brain were active – and to what degree they were – when the mice were shown various shapes. And the researchers confirmed something wonderfully weird about the way a brain works:

Each area [of the visual cortex] contains a distinct visuotopic representation and encodes a unique combination of spatiotemporal features.

In other words, a brain doesn’t really have sets of neurons that encode specific shapes – instead, it has layers of neurons, and each layer encodes an aspect of a shape – its roundness, its largeness, its color, and so on. As signals pass through each layer, they’re influenced by the neurons they’ve connected with before. Each layer is like a section of a choir, adding its own voice to the song with perfect timing.

Now, other teams have already developed technologies that can reconstruct rough images of what a person is seeing – or even dreaming – from recorded brain activity, so what’s so amazing about this particular study? The level of detail:

Areas LM, AL, RL, and AM prefer up to three times faster temporal frequencies and significantly lower spatial frequencies than V1, while V1 and PM prefer high spatial and low temporal frequencies. LI prefers both high spatial and temporal frequencies. All extrastriate areas except LI increase orientation selectivity compared to V1, and three areas are significantly more direction selective (AL, RL, and AM). Specific combinations of spatiotemporal representations further distinguish areas.

Are you seeing this? We’re talking about tuning in to specific communication channels within the visual cortex, down at the level of individual neuronal networks.
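Here’s a toy Python sketch of what “each area encodes a unique combination of spatiotemporal features” might mean in practice – a handful of tuning curves, each peaking at a different mix of spatial and temporal frequency. The preferred values are invented to mirror the quote qualitatively; they’re not the measured ones:

```python
import numpy as np

# Toy version of "each area encodes a unique combination of spatiotemporal
# features." The preferred values below are invented to mirror the quote
# qualitatively -- they are not the measured tuning values.

AREA_PREFERENCES = {        # (spatial frequency, temporal frequency) peaks
    "V1": (0.08, 1.0),      # high spatial, low temporal
    "PM": (0.08, 1.0),
    "AL": (0.02, 3.0),      # lower spatial, faster temporal
    "RL": (0.02, 3.0),
    "AM": (0.02, 3.0),
    "LI": (0.08, 3.0),      # high spatial AND high temporal
}

def area_response(preferred, stimulus, width=0.5):
    # Gaussian tuning in log space: strongest response at the peak.
    d = np.log(np.array(stimulus)) - np.log(np.array(preferred))
    return float(np.exp(-np.sum(d ** 2) / (2 * width ** 2)))

stimulus = (0.07, 1.2)  # a slow, finely detailed drifting pattern
responses = {area: area_response(pref, stimulus)
             for area, pref in AREA_PREFERENCES.items()}
print(max(responses, key=responses.get))  # -> "V1"
```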

The gap between mind and machine is getting narrower every day. How does that make you feel?

The Memory Master

A gene that may underlie the molecular mechanisms of memory has been identified, says a new study.

Some of us feel that "yellow" and "red" are open to interpretation...

The gene’s called neuronal PAS domain protein 4 (Npas4 to its friends). When a brain has a new experience, Npas4 leaps into action, activating a whole series of other genes that modify the strength of synapses – the connections that allow neurons to pass electrochemical signals around.

You can think of synapses as being a bit like traffic lights: a very strong synapse is like a green light, allowing lots of traffic (i.e., signals) to pass down a particular neural path when the neuron fires. A weaker synapse is like a yellow light – some signals might slip through now and then, but most won’t make it. Some synapses can inhibit others, acting like red lights – stopping any signals from getting through. And if a particular synapse goes untraveled for long enough, the road starts to crumble away – until finally, there’s no synapse left.
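In code, the traffic-light analogy might look something like this quick Python sketch (the thresholds are invented purely for illustration):

```python
# The traffic-light analogy as a toy lookup; thresholds are illustrative.

def synapse_light(weight):
    """Map a synaptic weight to its traffic light in the analogy."""
    if weight < 0:
        return "red"         # inhibitory: stops signals from getting through
    if weight > 0.6:
        return "green"       # strong: lots of traffic passes
    if weight > 0.1:
        return "yellow"      # weak: some signals slip through now and then
    return "crumbling road"  # nearly gone: headed for elimination

for w in (0.9, 0.3, 0.05, -0.4):
    print(f"weight {w:+.2f} -> {synapse_light(w)}")
```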

There’s a saying in neuroscience: “Cells that fire together wire together.” (And vice versa.) In other words, synaptic plasticity – the ability of neurons to modify their connectivity patterns – is what allows neural networks to physically change as they take in new information.  It’s what gives our brains the ability to learn.

In fact, millions of neurons are delicately tinkering with their connectivity patterns right now, inside your head, as you learn this stuff. Pretty cool, huh?

Anyway, synaptic plasticity’s not exactly breaking news – scientists have been studying it in animals like squid and sea slugs since the 1970s. Neurons in those animals are pretty easy to study with electrodes and a microscope, because a) the animals are anatomically simple compared to humans, and b) some of their neurons are so huge they can be seen with the naked eye.

Studying synapses in humans isn’t quite so simple, though. For one thing, most people wouldn’t like it if you cut open their brain and started poking around while they were alive and conscious – and besides, a lot of the really interesting stuff happens down at the molecular level.

That brings up an important point: though you normally hear about genes in connection with traits – say, a “gene for baldness” and so on – these complex molecular strands actually play all sorts of roles in the body, from building cells to adjusting chemical levels to telling other genes what to do.

That’s why MIT’s Yingxi Lin and her team set out to study the functions of certain genes found in the hippocampus – a brain structure central to memory formation – the journal Science reports. The researchers taught a group of mice to avoid a little room in which they received a mild electric shock, then used a precise chemical tracking technique to isolate which genes in the mouse hippocampus were activated right when the mice learned which room to avoid.

In particular, they focused on a hippocampal region with the sci-fi-sounding name of Cornu Ammonis 3 – or CA3 for short:

We found that the activity-dependent transcription factor Npas4 regulates a transcriptional program in CA3 that is required for contextual memory formation. Npas4 was specifically expressed in CA3 after contextual learning.

By “transcriptional program,” the paper’s authors mean a series of genetic “switches” – genes that Npas4 activates – which in turn make chemical adjustments that strengthen or weaken synaptic connections. In short, Npas4 appears to be part of the master “traffic conductor program” for many of the brain’s synapses.
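You can picture a transcriptional program as a cascade of switches that flip other switches. Here’s a toy Python sketch – every gene name downstream of Npas4 here is invented, since the point is just the switching logic:

```python
# A toy "transcriptional program": one master gene switches on targets,
# which in turn adjust synapse strength. All downstream names are invented.

CASCADE = {
    "Npas4": ["geneA", "geneB"],        # hypothetical downstream targets
    "geneA": ["strengthen_synapse"],
    "geneB": ["weaken_synapse"],
}

def run_program(trigger):
    """Propagate activation from a trigger gene through the cascade."""
    active, frontier = set(), [trigger]
    while frontier:
        gene = frontier.pop()
        if gene in active:
            continue
        active.add(gene)
        frontier.extend(CASCADE.get(gene, []))
    return active

print(sorted(run_program("Npas4")))
# -> ['Npas4', 'geneA', 'geneB', 'strengthen_synapse', 'weaken_synapse']
```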

Though they were pretty excited by this discovery (who wouldn’t be?), the researchers took a deep breath, calmed down, and double-checked their results by testing memory formation in mice whose brains were unable to produce Npas4:

Global knockout or selective deletion of Npas4 in CA3 both resulted in impaired contextual memory, and restoration of Npas4 in CA3 was sufficient to reverse the deficit in global knockout mice.

In short, they make a pretty convincing argument that Npas4 is a necessary ingredient in a mouse’s ability – and probably our ability – to form certain types of new memories.

Exactly how that program relates to our experience of memory remains unclear, but it’s a promising starting point for fine-tuning future memory research. I don’t know about you, but I’d be thrilled to green-light such a project.

Saving Faces

A brain area that’s specialized to recognize faces has a unique structure in each of our brains – and mapping that area’s connectivity patterns can tell us how each of our brains uses it, says a new study.

The fusiform gyrus - home of the Brain Face Database.

The fusiform gyrus in the temporal lobe plays a part in our recognition of words, numbers, faces, colors, and other visual specifics – but it’s becoming increasingly clear that no two people’s fusiform gyrus structure is identical. By studying this region in a larger connectomic framework, though, researchers can now predict which parts of a certain person’s fusiform gyrus are specialized for face recognition.

Since the early days of neurophysiology – way back into the 1800s – scientists have been working to pinpoint certain types of brain activity to certain structures within the brain. Simple experiments and lesion studies – many of them pretty crude by today’s standards – demonstrated that, for instance, the cerebellum is necessary for coordinating bodily movement; and that the inferior frontal gyrus (IFG) is involved in speech production.

Things get trickier, though, when we try to study more abstract mental tasks. For example, debates over the possible existence of “grandmother cells” – groups of neurons whose activity might represent complex concepts like “my grandmother” – have raged for decades, with no clear resolution in sight. The story’s similar for “mirror neurons” – networks of cells that some scientists think are responsible for our ability to understand and predict the intent of another person’s action.

All these debates reflect a far more fundamental gap in our understanding – one that many scientists seem reluctant to acknowledge: To this day, no one’s been able to demonstrate exactly what a “concept” is in neurological terms – or even if it’s a single type of “thing” at all.

This is why you’ll sometimes hear theoretical psychologists talk about “engrams” – hypothetical means by which neural networks might store memories – a bit like computer files in the brain. But the fact is, no one’s sure if the brain organizes information in a way that’s at all analogous to the way a computer does. In fact, a growing body of research points toward the idea that our memories are highly interdependent and dynamic – more like ripples in a pond than files in a computer.

This is where connectomics comes in. As researchers become increasingly aware that no two brains are quite alike, they’ve begun to focus on mapping the neural networks that connect various processing hubs to one another. As an analogy, you might say they’ve begun to study traffic patterns by mapping a country’s entire highway system, rather than just focusing on the stoplights in individual cities.

And now, as the journal Nature Neuroscience reports, a team led by MIT’s David Osher has mapped a variety of connectivity patterns linking the fusiform gyrus to other brain areas.

They accomplished this through a technique called diffusion imaging, which is based on a brain-scanning technology known as diffusion MRI (dMRI). Diffusion imaging uses an MRI scanner’s magnetic fields to track how water molecules naturally diffuse through brain tissue – and since water diffuses most readily along the length of axons (the long “tails” of neurons that connect them to other areas), the scan can map which regions are physically wired to which others. As you can imagine, this technique has been revealing all sorts of surprising new facts about the brain’s wiring.
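The core trick is easy to sketch: water diffuses fastest along a fiber, so the diffusion tensor’s largest eigenvector points along the axon. Here’s a minimal Python example using an invented tensor:

```python
import numpy as np

# The core of diffusion tractography, in miniature: water diffuses fastest
# along an axon, so the diffusion tensor's largest eigenvector points along
# the fiber. The tensor below is invented for illustration.

diffusion_tensor = np.array([
    [1.7, 0.1, 0.0],   # strong diffusion along x
    [0.1, 0.3, 0.0],
    [0.0, 0.0, 0.3],
])

eigenvalues, eigenvectors = np.linalg.eigh(diffusion_tensor)
principal = eigenvectors[:, np.argmax(eigenvalues)]
print(f"fiber direction ~ {np.round(principal, 2)}")  # points along x
```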

In this particular study, the researchers found that during face-recognition tasks, certain parts of the fusiform gyrus lit up with active connections to areas like the superior and inferior temporal cortices, which are also known to be involved in face recognition. Intriguingly, they also detected connectivity with parts of the cerebellum – an ancient brain structure involved in bodily balance and movement, which no one expected to be part of any visual recognition pathway. Sounds like a Science Mystery to me!

The team even discovered that they could use the connectivity patterns they found to predict which parts of a new participant’s fusiform gyrus would activate in response to faces:

By using only structural connectivity, as measured through diffusion-weighted imaging, we were able to predict functional activation to faces in the fusiform gyrus … The structure-function relationship discovered from the initial participants was highly robust in predicting activation in a second group of participants, despite differences in acquisition parameters and stimuli.

In short, they’ve discovered patterns of structural connectivity that directly predict where face-recognition activity will show up in an individual brain.
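The prediction logic itself is surprisingly simple to sketch: learn a linear map from each brain location’s connectivity “fingerprint” to its face response, then apply that map to new brains. Here’s a minimal Python version on random stand-in data – the real study’s methods are far more elaborate:

```python
import numpy as np

# Predicting function from structure, in miniature: fit a linear map from
# each voxel's connectivity "fingerprint" to its face response, then apply
# it to new data. All numbers here are random stand-ins.

rng = np.random.default_rng(42)
n_voxels, n_connections = 200, 30

# "Training participants": connectivity fingerprints and face activation.
fingerprints = rng.normal(size=(n_voxels, n_connections))
true_map = rng.normal(size=n_connections)
activation = fingerprints @ true_map + rng.normal(0, 0.1, n_voxels)

# Fit the structure-to-function map by least squares...
learned_map, *_ = np.linalg.lstsq(fingerprints, activation, rcond=None)

# ...and predict activation for a "new participant's" voxels.
new_fingerprints = rng.normal(size=(50, n_connections))
predicted = new_fingerprints @ learned_map
print(predicted[:3].round(2))
```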

It’s still a far cry from an engram – after all, we still don’t know exactly what information these connections encode, or how the brain encodes that data, or what other conditions might need to be met for an “a-ha!” recognition to take place – but still, network mapping appears to be a very promising starting point for investigating questions like these.

The researchers plan to use this approach to study connectivity patterns in the brains of children with severe autism, and other patients who have trouble recognizing faces. They hope it’ll also be useful for understanding how we recognize scenes and other objects – eventually, a network-oriented approach may even offer clues about how we recognize familiar ideas.

In other words, for the first time in history, we’re using our brains’ natural love of connections to understand just how our brains form those connections in the first place. For those of us who love a good mystery, it’s an exciting time to be studying our own minds!

Catchin’ Some Waves

Our capacity for short-term memory depends on the synchronization of two types of brainwaves – rapid cycles of electrical activation – says a new study.

Theta and gamma waves try to get their dance steps synced up.

When the patterns of theta waves (4-7 Hz) and gamma waves (25-50 Hz) are closely synchronized, pieces of verbal information seem to be “written” into our short-term memory. But it also turns out that longer theta cycles help us remember more bits of information, while longer gamma cycles are correlated with lower recall.

These patterns are measured using electroencephalography (EEG), a lab technique with a long and successful history. Back in the 1950s, it helped scientists unravel the distinct brainwave patterns associated with REM (rapid-eye movement) and deep sleep. More recently, it’s been used to help people with disabilities control computers, and it’s even helped home users get an up-close look at their own brain activity.

Though more modern techniques like fMRI and DTI are much better at mapping tiny activity patterns deep within the brain, EEG remains a useful tool for measuring the overall patterns of synchronized electrical activity that sweep across the entire brain in various wave-like patterns – hence the term “brainwaves.”

Several types of brainwaves have been well studied since the 1950s: alpha waves, which appear during relaxed wakefulness; beta waves, which are associated with active attention and logical processing; delta waves, which dominate deep sleep; theta waves, which are associated with meditation and memory; and gamma waves, which burst rapidly across the brain when we come to a realization or an understanding.

And now, as the International Journal of Psychophysiology reports, a team led by Jan Kamiński at the Polish Academy of Sciences has discovered a new way of mapping relationships between these patterns of wave activity, arriving at a new understanding of how theta and gamma waves work together: they studied the lengths of the two cycles relative to one another – and what they found was pretty amazing:

We have observed that the longer the theta cycles, the more information ‘bites’ the subject was able to remember; the longer the gamma cycle, the less the subject remembered.

The researchers discovered this in a very straightforward way – they simply kept tabs on volunteers’ EEG activity as they sat with eyes closed and let their minds wander; then they compared these recordings against ones taken as the volunteers memorized longer and longer strings of numbers – from three digits up to nine.

The correlation between long theta cycles and greater memory for digits turned out to be quite strong – and for gamma waves, the reverse turned out to be true. This means that gamma waves are probably much more crucial for forming ideas than they are for rote memorization.
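If you’ve got a home EEG rig, the analysis idea is simple enough to sketch: estimate each person’s dominant theta cycle length from a resting recording, then correlate it with digit span. Here’s a toy Python version on simulated traces:

```python
import numpy as np

# A sketch of the analysis idea: estimate each subject's dominant theta
# cycle length from resting EEG, then correlate it with digit span.
# The EEG traces and spans below are simulated stand-ins.

FS = 250                       # sampling rate, Hz
t = np.arange(0, 10, 1 / FS)   # ten seconds of "resting" EEG
rng = np.random.default_rng(7)

def theta_cycle_ms(trace):
    # Find the strongest frequency in the theta band (4-7 Hz)
    # and report its cycle length in milliseconds.
    spectrum = np.abs(np.fft.rfft(trace))
    freqs = np.fft.rfftfreq(trace.size, 1 / FS)
    band = (freqs >= 4) & (freqs <= 7)
    peak = freqs[band][np.argmax(spectrum[band])]
    return 1000.0 / peak

# Simulated subjects: slower theta -> longer cycles -> higher digit span.
theta_freqs = rng.uniform(4.5, 6.5, size=12)
traces = [np.sin(2 * np.pi * f * t) + rng.normal(0, 0.5, t.size)
          for f in theta_freqs]
cycles = np.array([theta_cycle_ms(tr) for tr in traces])
spans = np.clip(np.round(3 + 0.03 * (cycles - cycles.min())), 3, 9)

r = np.corrcoef(cycles, spans)[0, 1]
print(f"cycle length vs. digit span: r = {r:.2f}")
```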

Though this finding might not seem all that revolutionary, it provides an elegant demonstration of how even older technologies like EEG can still be used to help us make brand-new discoveries. Which means that in the brains of those of us who keep pluggin’ away at home EEG experiments, there’s probably still a place of honor for those wonderful little gamma waves.
