Taking Vision Apart

For the first time, scientists have created neuron-by-neuron maps of brain regions corresponding to specific kinds of visual information, and specific parts of the visual field, says a new study.

At age 11, Cajal landed in prison for blowing up his town's gate with a homemade cannon. Seriously. Google it.

If other labs can confirm these results, this will mean we’re very close to being able to predict exactly which neurons will fire when an animal looks at a specific object.

Our understanding of neural networks has come a very long way in a very short time. It was just a little more than 100 years ago that Santiago Ramón y Cajal first proposed the theory that individual cells – neurons – were the basic processing units of the central nervous system (CNS). Cajal lived until 1934, so he got to glimpse the edge – but not much more – of the strange new frontier he’d discovered. As scientists like Alan Lloyd Hodgkin and Andrew Huxley – namesakes of today’s Hodgkin-Huxley neuron simulator – studied neurons’ behavior, they began to realize that the brain’s way of processing information was much weirder and more complex than anyone had expected.

See, computers and neuroscience evolved hand-in-hand – in many ways, they still do – and throughout the twentieth century, most scientists described the brain as a sort of computer. But by the early 1970s, they were realizing that a computer and a brain are different in a very fundamental way: computers process information in bits – tiny electronic switches that say “on” or “off” – but a brain processes information in connections and gradients – degrees to which one piece of neural architecture influences others. In short, our brains aren’t digital – they’re analog. And as we all know, there’s just something warmer about analog.

So where does this leave us now? Well, instead of trying to chase down bits in brains, many of today’s cutting-edge neuroscientists are working to figure out what connects to what, and how those connections form and change as a brain absorbs new information. In a way, the process isn’t all that different from trying to identify all the cords tangled up under your desk – it’s just that in this case, there are trillions of plugs, and a lot of them are molecular in size. That’s why neuroscientists need supercomputers that fill whole rooms to crunch the numbers – though I’m sure you’ll laugh if you reread that sentence in 2020.

But the better our tools become, the more precisely we can probe the brain – and that’s why a team led by the Salk Institute’s James Marshel and Marina Garrett set out to map the exact neural pathways that correspond to specific aspects of visual data, the journal Neuron reports. (By the way, if you guys are reading this, I live in L.A. and would love to visit your lab.)

The team injected mouse brains with a special dye that’s chemically formulated to glow fluorescent when a neuron fires. This allowed them to track exactly which neurons in a mouse’s brain were active – and to what degree they were – when the mice were shown various shapes. And the researchers confirmed something wonderfully weird about the way a brain works:

Each area [of the visual cortex] contains a distinct visuotopic representation and encodes a unique combination of spatiotemporal features.

In other words, a brain doesn’t really have sets of neurons that encode specific shapes – instead, it has layers of neurons, and each layer encodes an aspect of a shape – its roundness, its largeness, its color, and so on. As signals pass through each layer, they’re influenced by the neurons they’ve connected with before. Each layer is like a section of a choir, adding its own voice to the song with perfect timing.

Now, other teams have already developed technologies that can reconstruct visual imagery – even dream content – from human brain activity – so what’s so amazing about this particular study? The level of detail:

Areas LM, AL, RL, and AM prefer up to three times faster temporal frequencies and significantly lower spatial frequencies than V1, while V1 and PM prefer high spatial and low temporal frequencies. LI prefers both high spatial and temporal frequencies. All extrastriate areas except LI increase orientation selectivity compared to V1, and three areas are significantly more direction selective (AL, RL, and AM). Specific combinations of spatiotemporal representations further distinguish areas.

Are you seeing this? We’re talking about tuning in to specific communication channels within the visual cortex, down at the level of individual neuronal networks.

The gap between mind and machine is getting narrower every day. How does that make you feel?

It’s Official!

The Connectome has two new URLs:

the-connectome.com

theconnecto.me

Much easier to remember, I think.

Have a wonderful weekend, everyone – I’ve got some fun stuff coming up for you next week.

The Memory Master

A gene that may underlie the molecular mechanisms of memory has been identified, says a new study.

Some of us feel that "yellow" and "red" are open to interpretation...

The gene’s called neuronal PAS domain protein 4 (Npas4 to its friends). When a brain has a new experience, Npas4 leaps into action, activating a whole series of other genes that modify the strength of synapses – the connections that allow neurons to pass electrochemical signals around.

You can think of synapses as being a bit like traffic lights: a very strong synapse is like a green light, allowing lots of traffic (i.e., signals) to pass down a particular neural path when the neuron fires. A weaker synapse is like a yellow light – some signals might slip through now and then, but most won’t make it. Some synapses can inhibit others, acting like red lights – stopping any signals from getting through. And if a particular synapse goes untraveled for long enough, the road starts to crumble away – until finally, there’s no synapse left.

There’s a saying in neuroscience: “Cells that fire together wire together.” (And vice versa.) In other words, synaptic plasticity – the ability of neurons to modify their connectivity patterns – is what allows neural networks to physically change as they take in new information.  It’s what gives our brains the ability to learn.
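“Fire together, wire together” has a classic mathematical form: the Hebbian learning rule, in which a synapse’s weight grows in proportion to the product of pre- and postsynaptic activity. Here’s a minimal sketch – the learning rate and activity values are made up for illustration, and real synapses also involve decay, bounds, and inhibition:

```python
# Minimal Hebbian plasticity sketch: w += eta * pre * post.
# Numbers are illustrative; this is the textbook rule, not a full model.

def hebbian_update(w, pre, post, eta=0.1):
    """Strengthen the synapse when pre- and postsynaptic neurons fire together."""
    return w + eta * pre * post

w = 0.5
# Correlated firing: the synapse strengthens ("wires together").
for _ in range(3):
    w = hebbian_update(w, pre=1.0, post=1.0)
print(round(w, 2))  # 0.8

# Uncorrelated firing (the postsynaptic cell stays silent): no change.
w2 = hebbian_update(0.5, pre=1.0, post=0.0)
print(w2)  # 0.5
```

That tiny update rule, applied across millions of synapses, is one simple picture of how a network physically rewires itself as it learns.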

In fact, millions of neurons are delicately tinkering with their connectivity patterns right now, inside your head, as you learn this stuff. Pretty cool, huh?

Anyway, synaptic plasticity’s not exactly breaking news – scientists have been studying it in animals like squid and sea slugs since the 1970s. Neurons in those animals are pretty easy to study with electrodes and a microscope, because a) the animals are anatomically simple compared to humans, and b) some of their neurons are so huge they can be seen with the naked eye.

Studying synapses in humans isn’t quite so simple, though. For one thing, most people wouldn’t like it if you cut open their brain and started poking around while they were alive and conscious – and besides, a lot of the really interesting stuff happens down at the molecular level.

That brings up an important point: though you normally hear about genes in connection with traits – say, a “gene for baldness” and so on – these complex molecular strands actually play all sorts of roles in the body, from building cells to adjusting chemical levels to telling other genes what to do.

That’s why MIT’s Yingxi Lin and her team set out to study the functions of certain genes found in the hippocampus – a brain structure central to memory formation – the journal Science reports. The researchers taught a group of mice to avoid a little room in which they received a mild electric shock, then used a precise chemical tracking technique to isolate which genes in the mouse hippocampus were activated right when the mice learned which room to avoid.

In particular, they focused on a hippocampal region with the sci-fi-sounding name of Cornu Ammonis 3 – or CA3 for short:

We found that the activity-dependent transcription factor Npas4 regulates a transcriptional program in CA3 that is required for contextual memory formation. Npas4 was specifically expressed in CA3 after contextual learning.

By “transcriptional program,” the paper’s authors mean a series of genetic “switches” – genes that Npas4 activates – which in turn make chemical adjustments that strengthen or weaken synaptic connections. In short, Npas4 appears to be part of the master “traffic conductor program” for many of the brain’s synapses.
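You can picture a “transcriptional program” as a cascade of switches. In this purely illustrative toy, only Npas4 itself comes from the study – the downstream gene names and effect sizes are invented:

```python
# Toy cascade: an activity-dependent factor flips downstream "switches,"
# each of which nudges synaptic strength up or down. Gene names and
# effect sizes are made up for illustration; only Npas4 is from the study.

downstream = {"geneA": +0.2, "geneB": -0.1, "geneC": +0.3}

def run_program(weights, npas4_active):
    """If Npas4 fires, every downstream gene adjusts each synapse's strength."""
    if not npas4_active:
        return dict(weights)
    total = sum(downstream.values())
    return {syn: w + total for syn, w in weights.items()}

synapses = {"syn1": 0.5, "syn2": 0.8}
print(run_program(synapses, npas4_active=True))   # strengths shift by +0.4
print(run_program(synapses, npas4_active=False))  # unchanged
```

The real program is vastly more intricate, of course – but the logic is the same: one trigger, many switches, and the net effect is a coordinated retuning of synapses.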

Though they were pretty excited by this discovery (who wouldn’t be?), the researchers took a deep breath, calmed down, and double-checked their results by testing memory formation in mice whose brains were unable to produce Npas4:

Global knockout or selective deletion of Npas4 in CA3 both resulted in impaired contextual memory, and restoration of Npas4 in CA3 was sufficient to reverse the deficit in global knockout mice.

In short, they make a pretty convincing argument that Npas4 is a necessary ingredient in a mouse’s ability – and probably our ability – to form certain types of new memories.

Exactly how that program relates to our experience of memory remains unclear, but it’s a promising starting point for fine-tuning future memory research. I don’t know about you, but I’d be thrilled to green-light such a project.

Saving Faces

A brain area that’s specialized to recognize faces has a unique structure in each of our brains – and mapping that area’s connectivity patterns can tell us how each of our brains uses it, says a new study.

The fusiform gyrus - home of the Brain Face Database.

The fusiform gyrus in the temporal lobe plays a part in our recognition of words, numbers, faces, colors, and other visual specifics – but it’s becoming increasingly clear that no two people’s fusiform gyrus structure is identical. By studying this region in a larger connectomic framework, though, researchers can now predict which parts of a certain person’s fusiform gyrus are specialized for face recognition.

Since the early days of neurophysiology – way back into the 1800s – scientists have been working to pinpoint certain types of brain activity to certain structures within the brain. Simple experiments and lesion studies – many of them pretty crude by today’s standards – demonstrated that, for instance, the cerebellum is necessary for coordinating bodily movement; and that the inferior frontal gyrus (IFG) is involved in speech production.

Things get trickier, though, when we try to study more abstract mental tasks. For example, debates over the possible existence of “grandmother cells” – groups of neurons whose activity might represent complex concepts like “my grandmother” – have raged for decades, with no clear resolution in sight. The story’s similar for “mirror neurons” – networks of cells that some scientists think are responsible for our ability to understand and predict the intent of another person’s action.

All these debates reflect a far more fundamental gap in our understanding – one that many scientists seem reluctant to acknowledge: To this day, no one’s been able to demonstrate exactly what a “concept” is in neurological terms – or even if it’s a single type of “thing” at all.

This is why you’ll sometimes hear theoretical psychologists talk about “engrams” – hypothetical means by which neural networks might store memories – a bit like computer files in the brain. But the fact is, no one’s sure if the brain organizes information in a way that’s at all analogous to the way a computer does. In fact, a growing body of research points toward the idea that our memories are highly interdependent and dynamic – more like ripples in a pond than files in a computer.

This is where connectomics comes in. As researchers become increasingly aware that no two brains are quite alike, they’ve begun to focus on mapping the neural networks that connect various processing hubs to one another. As an analogy, you might say they’ve begun to study traffic patterns by mapping a country’s entire highway system, rather than just focusing on the stoplights in individual cities.

And now, as the journal Nature Neuroscience reports, a team led by MIT’s David Osher has mapped a variety of connectivity patterns linking the fusiform gyrus to other brain areas.

They accomplished this through a technique called diffusion imaging, which is based on a brain-scanning technology known as diffusion MRI (dMRI). Diffusion imaging uses magnetic fields to track how water naturally diffuses along axons – the long “tails” of neurons that connect them to other areas – allowing the scan to map which areas are physically wired to which. As you can imagine, this technique has been revealing all sorts of surprising new facts about the brain’s connectivity.

In this particular study, the researchers found that during face-recognition tasks, certain parts of the fusiform gyrus lit up with active connections to areas like the superior and inferior temporal cortices, which are also known to be involved in face recognition. Intriguingly, they also detected connectivity with parts of the cerebellum – an ancient brain structure involved in bodily balance and movement, which no one expected to be part of any visual recognition pathway. Sounds like a Science Mystery to me!

The team even discovered that they could use the connectivity patterns they found to predict which parts of a person’s fusiform gyrus would respond to faces:

By using only structural connectivity, as measured through diffusion-weighted imaging, we were able to predict functional activation to faces in the fusiform gyrus … The structure-function relationship discovered from the initial participants was highly robust in predicting activation in a second group of participants, despite differences in acquisition parameters and stimuli.

In short, they’ve discovered patterns of structural connectivity that directly correspond to the brain activity underlying our ability to recognize faces.

It’s still a far cry from an engram – after all, we still don’t know exactly what information these connections encode, or how the brain encodes that data, or what other conditions might need to be met for an “a-ha!” recognition to take place – but still, network mapping appears to be a very promising starting point for investigating questions like these.

The researchers plan to use this approach to study connectivity patterns in the brains of children with severe autism, and other patients who have trouble recognizing faces. They hope it’ll also be useful for understanding how we recognize scenes and other objects – eventually, a network-oriented approach may even offer clues about how we recognize familiar ideas.

In other words, for the first time in history, we’re using our brains’ natural love of connections to understand just how our brains form those connections in the first place. For those of us who love a good mystery, it’s an exciting time to be studying our own minds!

Catchin’ Some Waves

Our capacity for short-term memory depends on the synchronization of two types of brainwaves – rapid cycles of electrical activation – says a new study.

Theta and gamma waves try to get their dance steps synced up.

When the patterns of theta waves (4-7 Hz) and gamma waves (25-50 Hz) are closely synchronized, pieces of verbal information seem to be “written” into our short-term memory. But it also turns out that longer theta cycles help us remember more bits of information, while longer gamma cycles are correlated with lower recall.

These patterns are measured using electroencephalography (EEG), a lab technique with a long and successful history. Back in the 1950s, it helped scientists unravel the distinct brainwave patterns associated with REM (rapid-eye movement) and deep sleep. More recently, it’s been used to help people with disabilities control computers, and it’s even helped home users get an up-close look at their own brain activity.

Though more modern techniques like fMRI and DTI are much better at mapping tiny activity patterns deep within the brain, EEG remains a useful tool for measuring the overall patterns of synchronized electrical activity that sweep across the entire brain in various wave-like patterns – hence the term “brainwaves.”

Several types of brainwaves have been well studied since the 1950s: alpha waves, which are associated with relaxed wakefulness; beta waves, which accompany active concentration and logical processing; delta waves, which dominate deep sleep; theta waves, which are associated with drowsiness and meditation; and gamma waves, which burst rapidly across the brain when we come to a realization or an understanding.

And now, as the International Journal of Psychophysiology reports, a team led by Jan Kamiński at the Polish Academy of Sciences has found a new way to map the relationship between these patterns of wave activity, arriving at a new understanding of how theta and gamma waves work together. The researchers studied the lengths of the two cycles relative to one another – and what they found was pretty amazing:

We have observed that the longer the theta cycles, the more information ‘bites’ the subject was able to remember; the longer the gamma cycle, the less the subject remembered.

The researchers discovered this in a very straightforward way – they simply kept tabs on volunteers’ EEG activity as they sat with eyes closed and let their minds wander; then they compared these recordings against ones taken as the volunteers memorized longer and longer strings of numbers – from three digits up to nine.
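The cycle-length measurement itself is simple in principle: a wave’s average cycle length is just the time between successive zero-crossings. Here’s a hedged sketch of that calculation on a synthetic 6 Hz “theta” signal – numpy only, and not the team’s actual analysis pipeline:

```python
import numpy as np

# Estimate the average cycle length of an oscillation from its rising
# zero-crossings. A pure 6 Hz sine stands in for a theta-band EEG trace;
# the sampling rate is illustrative.

fs = 250.0                             # sampling rate (Hz)
t = np.arange(0, 4.0, 1.0 / fs)        # 4 seconds of signal
signal = np.sin(2 * np.pi * 6.0 * t)   # synthetic 6 Hz theta-band wave

def mean_cycle_length(x, fs):
    """Average time (s) between rising zero-crossings of the signal."""
    rising = np.where((x[:-1] < 0) & (x[1:] >= 0))[0]
    return float(np.mean(np.diff(rising))) / fs

cycle = mean_cycle_length(signal, fs)
print(round(cycle, 3))  # ~0.167 s, i.e. one-sixth of a second
```

Real EEG would first need band-pass filtering to isolate the theta or gamma band, but the core idea – longer cycles mean fewer, slower oscillations per second – is exactly this simple.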

The correlation between long theta cycles and greater memory for digits turned out to be quite strong – and for gamma waves, the reverse turned out to be true. This suggests that gamma waves may be more crucial for forming ideas than they are for rote memorization.

Though this finding might not seem all that revolutionary, it provides an elegant demonstration of how even older technologies like EEG can still be used to help us make brand-new discoveries. Which means that in the brains of those of us who keep pluggin’ away at home EEG experiments, there’s probably still a place of honor for those wonderful little gamma waves.


Guiding Neuron Growth

Our neurons’ growth can be shaped by tiny cues from spinning microparticles in the fluids that surround them, a new study reports.

An axon gets all friendly with a spinnin' microparticle.

The branching and growth of neurons are guided by several kinds of cues, including their chemical environment, their location within the brain, and the dense network of glial cells that support and protect them. But as it turns out, neurons are also surprisingly responsive to fluid dynamics, turning in response to the rotation of nearby microparticles – a bit like the way a vine can climb a fence-post.

Since the early days of neuroscience, researchers have dreamed of growing and shaping neurons for specific purposes – to patch gaps in damaged neural networks, for example; or just to test their workings under controlled lab conditions.

But it’s only in the past few years that technologies like microfluidic chambers and pluripotent stem cells have enabled researchers to grow healthy, living neurons according to precise specifications, and study those cells’ responses to all kinds of stimuli. In fact, it looks like it won’t be much longer ’til doctors can implant networks of artificially grown neurons directly into living adult brains.

But as the journal Nature Photonics reports, the big breakthrough this time comes from Samarendra Mohanty at The University of Texas at Arlington, who found that neuron growth can respond to physical cues – spinning particles in fluid, for instance – as well as to chemical ones.

Mohanty’s team discovered this by using a tiny laser to direct the spin of a microparticle positioned next to the axon of a growing neuron. The spinning particle generated a miniature counterclockwise vortex in the fluid – and wouldn’t ya know it, the axon started wrapping around the spinning particle as the neuron grew:

Circularly polarized light with angular momentum causes the trapped bead to spin. This creates a localized microfluidic flow … against the growth cone that turns in response to the shear.

In short, this is the first time a scientific team has used a mechanical device – a “micro-motor,” as they call it – to directly control and precisely adjust the growth of a single axon:

The direction of axonal growth can be precisely manipulated by changing the rotation direction and position of this optically driven micromotor.

So far, the micromotor only works 42 percent of the time – but the team is optimistic that future tests will lead to greater reliability and more precise control. In the near future, micromotors like this one could be used to turn the growth of an axon back and forth – or even to funnel growth through “gauntlets” of spinning particles.
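To get an intuition for why a vortex bends growth, here’s a toy 2D sketch – emphatically not the lab’s model, with every number invented – in which a growth cone’s heading drifts a little toward the local flow direction at each step, so the simulated axon curls around the particle instead of growing straight past it:

```python
import math

# Toy 2D sketch of shear-guided axon growth (illustrative only).
# A spinning particle at the origin creates a counterclockwise vortex;
# each step, the growth cone turns partway toward the local flow direction.

def vortex_flow(x, y, cx=0.0, cy=0.0):
    """Unit tangent of counterclockwise flow around a particle at (cx, cy)."""
    dx, dy = x - cx, y - cy
    r = math.hypot(dx, dy) or 1e-9
    return -dy / r, dx / r

def grow(steps=200, step_len=0.02, turn_rate=0.3):
    x, y = 1.0, 0.0           # growth cone starts to the right of the particle
    heading = math.pi / 2     # initially growing "upward"
    path = [(x, y)]
    for _ in range(steps):
        fx, fy = vortex_flow(x, y)
        flow_angle = math.atan2(fy, fx)
        # Turn the heading partway toward the flow direction (wrapped to ±pi).
        diff = math.atan2(math.sin(flow_angle - heading),
                          math.cos(flow_angle - heading))
        heading += turn_rate * diff
        x += step_len * math.cos(heading)
        y += step_len * math.sin(heading)
        path.append((x, y))
    return path

path = grow()
# The tip stays orbiting near the particle instead of growing away from it.
print(round(math.hypot(*path[-1]), 2))
```

Reversing the vortex direction in this sketch makes the path curl the other way – mirroring the paper’s point that flipping the micromotor’s rotation flips the direction of axonal turning.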

Most conveniently of all, the particles could be injected, re-positioned, and removed as needed – providing a much simpler, more modifiable architecture than any other neuron-shaping technology in use today.

And for the slightly more distant future, Mohanty’s lab is hard at work on a system for providing long-range, long-term guidance to entire neural networks through completely non-invasive optical methods.

Until then, though, isn’t it amazing to stop and think about all the neurons that are growing and reshaping themselves – all the delicate intertwining lattices relaying millions of mysterious coded messages, right now, within the lightless interior of your own head?

Call me self-centered, but I think it’s just about the coolest thing on planet Earth.
