Posts Tagged ‘neurons’

Connection Clusters

As our brains learn something, our neurons form new connections in clustered groups, says a new study.

Some clusters are juicier than others.

In other words, synapses – connections between neurons – are much more likely to form near other brand-new synapses than they are to emerge near older ones.

As our neuroscience friends like to say: “Cells that fire together wire together” – and that process of rewiring never stops. From before you were born right up until this moment, the synaptic pathways in your brain have been transforming, hooking up new electrochemical connections and trimming away the ones that aren’t needed. Even when you’re sound asleep, your brain’s still burning the midnight oil, looking for ever-sleeker ways to do its many jobs.

I like to imagine that this happens to the sound of a really pumped-up drumbeat, as my brain says things like, “We can rebuild this pathway – we have the technology! We can make it better! Faster! Stronger!”

What’s even more amazing is how delicate these adjustments can be. We’re not just talking about growing dendrites here – we’re talking about dendritic spines, the tiny knobs that branch off from dendrites and bloom into postsynaptic densities – molecular interfaces that allow one neuron to receive information from its neighbors.

Back in 2005, a team led by Yi Zuo at the University of California, Santa Cruz found that as a mouse learns a new task, thousands of fresh dendritic spines blossom from the dendrites of neurons in the motor cortex (an area of the brain that helps control movement). In short, they actually observed neurons learning to communicate better.

And now Zuo’s back with another hit, the journal Nature reports. This time, Zuo and her team have shown that those new dendritic spines aren’t just popping up at random – they grow in bunches:

[We show that] a third of new dendritic spines (postsynaptic structures of most excitatory synapses) formed during the acquisition phase of learning emerge in clusters, and that most such clusters are neighbouring spine pairs.

The team discovered this by studying fluorescent mouse neurons under a microscope. (Oh, did you know there are mice with glowing neurons? Because there are mice with glowing neurons.) As in Zuo’s earlier study, they focused on neurons in the motor cortex:

We followed apical dendrites of layer 5 pyramidal neurons in the motor cortex while mice practised novel forelimb skills.

But as it turned out, their discovery about clustered spines was just the tip of the iceberg – the researchers also found that when a second dendritic spine formed close to one that was already there, the first spine grew larger, strengthening the connection even more. And they learned that clustered spines were much more likely to persist than non-clustered ones were, which just goes to show the importance of a solid support network. And finally, they found that the new spines don’t form when just any signal passes through – new connections only blossom when a brain is learning through repetition.
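
If you like to think in code, here’s a tiny toy simulation of two of those findings – new spines preferring to sprout next to recent ones, and clustered spines out-surviving the loners. Every number in it is invented for illustration; it’s a sketch of the idea, not the team’s model:

    import random

    # Toy model of clustered spine formation (all probabilities invented
    # for illustration -- this is not the paper's model).
    random.seed(1)

    SITES = 100                 # positions along a stretch of dendrite
    P_BASE = 0.01               # chance a spine forms at a random site per session
    P_NEAR_NEW = 0.08           # boosted chance right next to a recent spine
    P_SURVIVE_CLUSTERED = 0.9   # per-session survival for spines with a neighbor
    P_SURVIVE_LONE = 0.6        # per-session survival for isolated spines

    spines = {}                 # site -> session on which the spine formed

    for session in range(50):   # repeated practice sessions
        for site in range(SITES):
            if site in spines:
                continue
            recent_neighbor = any(
                n in spines and session - spines[n] <= 2
                for n in (site - 1, site + 1)
            )
            p = P_NEAR_NEW if recent_neighbor else P_BASE
            if random.random() < p:
                spines[site] = session

        # Pruning: clustered spines persist far more often than lone ones.
        for site in list(spines):
            clustered = (site - 1 in spines) or (site + 1 in spines)
            survive = P_SURVIVE_CLUSTERED if clustered else P_SURVIVE_LONE
            if random.random() > survive:
                del spines[site]

    in_clusters = sum(1 for s in spines if s - 1 in spines or s + 1 in spines)
    print(f"{len(spines)} spines remain; {in_clusters} of them sit in clusters")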

Can you imagine how many new dendritic spines were bursting to life in the researchers’ brains as they learned all this? And what about in your brain, right now?

It’s kinda strange to think about this stuff, I know – even stranger is the realization that your brain isn’t so much an object as it is a process – a constantly evolving system of interconnections. You could say that instead of human beings, we’re really human becomings – and thanks to your adaptable neurons, each moment is a new opportunity to decide who – or what – you’d like to become.

Beyond Perfection

If you continue to practice a skill even after you’ve achieved mastery of it, your brain keeps learning to perform it more and more efficiently, says a new study.

Believing you've reached perfection can lead you to engage in some...interesting...behavior.

As we perform a task – say, dunking a basketball or playing a sweet guitar solo – over and over again, we eventually reach a point that some psychologists call “unconscious competence,” where we execute each movement perfectly without devoting any conscious attention to it at all. But even after this point, our bodies keep finding ways to perform the task more and more efficiently, burning less energy with each repetition.

This story’s got it all – brain-hacks, mysterious discoveries, robots – but to put it all in perspective, we’ve gotta start by talking about this idea we call perfection.

“Practice makes perfect,” the old saying goes – but what’s this “perfect” we’re trying to reach? Isn’t it often a matter of opinion? What I mean is, how do we judge, say, a “perfect” backflip or a “perfect” dive? We compare it to others we’ve seen, and decide that it meets certain criteria better than those examples did; that it was performed with less error.

But where do these criteria for perfection come from? Well, some have said there’s a Platonic realm of “perfect forms” that our minds are somehow tapping into – a realm that contains not only “The Perfect Chair” but “the perfect version of that chair” and “the perfect version of that other chair” and “the perfect version of that molecule” and so on, ad infinitum. Kinda weird, I know – but a lot of smart people believed in ideas like this for thousands of years, and some still do.

Science, though, works in a different way: Instead of trying to tap into a world of perfect forms, scientists (and engineers and mathematicians and programmers and so on) work to find errors and fix them.

And it turns out that the human body is quite talented at doing exactly that. A team led by Alaa Ahmed at the University of Colorado at Boulder found this out firsthand, with the help of robots, the Journal of Neuroscience reports:

Seated subjects made horizontal planar reaching movements toward a target using a robotic arm.

These researchers weren’t interested in brain activity – instead, as the volunteers practiced moving the arm, the researchers measured their oxygen consumption, their carbon dioxide output, and their muscle activity.

As you might expect, the scientists found that as people got better at moving the arm, their consumption of oxygen and production of carbon dioxide, and their overall muscle activity, steadily decreased:

Subjects decreased movement error and learned the novel dynamics. By the end of learning, net metabolic power decreased by ∼20% from initial learning. Muscle activity and coactivation also decreased with motor learning.

But the volunteers’ bodies didn’t stop there. As people kept practicing, their oxygen consumption and carbon dioxide output continued to decrease – and so did their muscle activation. In short, their bodies kept learning to move the arm with measurably less and less physical effort.
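
Here’s one way to picture that, for the computationally inclined: treat practice as minimizing a cost that charges for error and for effort. In this toy sketch (my own invented numbers, not the study’s analysis), the simulated “arm” starts out error-free but heavily co-contracted, and gradient descent keeps squeezing out wasted effort anyway:

    # Toy illustration (invented numbers, not the study's analysis): an arm
    # driven by an agonist (a) and an antagonist (b). Any pair with a - b
    # equal to the target moves perfectly, but co-contraction (large a AND b)
    # wastes energy. Gradient descent on cost = error^2 + W * effort keeps
    # cutting effort long after the error term has hit zero -- echoing the
    # subjects' falling metabolic cost.

    TARGET = 1.0
    W = 0.05     # how much the body "charges" for muscle activity (invented)
    LR = 0.1     # learning rate

    a, b = 3.0, 2.0   # co-contracted start: a - b == TARGET, so error is zero

    for step in range(201):
        error = (a - b) - TARGET
        if step % 50 == 0:
            print(f"step {step:3d}: error={error:+.4f}  effort={a*a + b*b:.3f}")
        # gradients of error**2 + W * (a**2 + b**2)
        a -= LR * (2 * error + 2 * W * a)
        b -= LR * (-2 * error + 2 * W * b)
        a, b = max(a, 0.0), max(b, 0.0)   # muscle activity can't go negative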

Though this study didn’t record any data from the subjects’ brains, it’s easy to see how this continual improvement is just one reflection of a very versatile ability. For instance, we know that when two neurons get really friendly, they become more sensitive to each other’s signals – and we also know that underused neural pathways gradually fade away, making room for new ones. Self-improvement impulses are woven deeply into our bodies – into our cells.

When I say that our brains and bodies are cities, I’m not just speaking metaphorically – you are, quite literally, a vast community – an ecosystem composed of trillions of interdependent microorganisms, each one constantly struggling for its own nourishment and safety.

And though your conscious mind is one part – a very significant part – of this great microscopic nation, it’s not the only part that can learn. At this moment, all throughout the lightless highways and chambers of your body, far below your conscious access, networks of cells are changing, adapting, learning, adjusting – finding errors and fixing them.

So, you can think about “perfection” all you want – but even at that magical moment when you achieve it, the multitudes within you are still hard at work, figuring out how to reach beyond that ideal.

What do you think they’re up to right now?

Taking Vision Apart

For the first time, scientists have created neuron-by-neuron maps of brain regions corresponding to specific kinds of visual information, and specific parts of the visual field, says a new study.

At age 11, Cajal landed in prison for blowing up his town's gate with a homemade cannon. Seriously. Google it.

If other labs can confirm these results, this will mean we’re very close to being able to predict exactly which neurons will fire when an animal looks at a specific object.

Our understanding of neural networks has come a very long way in a very short time. It was just a little more than 100 years ago that Santiago Ramón y Cajal first proposed the theory that individual cells – neurons – constituted the basic processing units of the central nervous system (CNS). Cajal lived until 1934, so he got to glimpse the edge – but not much more – of the strange new frontier he’d discovered. As scientists like Alan Lloyd Hodgkin and Andrew Huxley – namesakes of today’s Hodgkin-Huxley neuron simulator – started studying neurons’ behavior, they began realizing that the brain’s way of processing information was much weirder and more complex than anyone had expected.

See, computers and neuroscience evolved hand-in-hand – in many ways, they still do – and throughout the twentieth century, most scientists described the brain as a sort of computer. But by the early 1970s, they were realizing that a computer and a brain are different in a very fundamental way: computers process information in bits – tiny electronic switches that say “on” or “off” – but a brain processes information in connections and gradients – degrees to which one piece of neural architecture influences others. In short, our brains aren’t digital – they’re analog. And as we all know, there’s just something warmer about analog.
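
If it helps to see that contrast in code, here’s a minimal sketch (purely illustrative) of a bit-style switch next to a gradient-style connection:

    import math

    # Toy contrast (illustrative only): a digital switch either passes a
    # signal or doesn't, while an analog connection scales its influence
    # continuously -- closer in spirit to how one neuron sways another.

    def digital_gate(x, threshold=0.5):
        return 1.0 if x >= threshold else 0.0       # bit-like: all or nothing

    def graded_influence(x, steepness=4.0):
        # a smooth degree of influence (logistic curve centered at 0.5)
        return 1.0 / (1.0 + math.exp(-steepness * (x - 0.5)))

    for x in (0.0, 0.25, 0.49, 0.51, 0.75, 1.0):
        print(f"input {x:.2f} -> digital {digital_gate(x):.0f}, "
              f"graded {graded_influence(x):.3f}")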

So where does this leave us now? Well, instead of trying to chase down bits in brains, many of today’s cutting-edge neuroscientists are working to figure out what connects to what, and how those connections form and change as a brain absorbs new information. In a way, the process isn’t all that different from trying to identify all the cords tangled up under your desk – it’s just that in this case, there are trillions of plugs, and a lot of them are molecular in size. That’s why neuroscientists need supercomputers that fill whole rooms to crunch the numbers – though I’m sure you’ll laugh if you reread that sentence in 2020.

But the better we understand brains, the better we get at studying them – and that’s why a team led by the Salk Institute’s James Marshel and Marina Garrett set out to map the exact neural pathways that correspond to specific aspects of visual data, the journal Neuron reports. (By the way, if you guys are reading this, I live in L.A. and would love to visit your lab.)

The team injected mouse brains with a special dye that’s chemically formulated to glow fluorescent when a neuron fires. This allowed them to track exactly which neurons in a mouse’s brain were active – and to what degree they were – when the mice were shown various shapes. And the researchers confirmed something wonderfully weird about the way a brain works:

Each area [of the visual cortex] contains a distinct visuotopic representation and encodes a unique combination of spatiotemporal features.

In other words, a brain doesn’t really have sets of neurons that encode specific shapes – instead, it has layers of neurons, and each layer encodes an aspect of a shape – its roundness, its largeness, its color, and so on. As signals pass through each layer, they’re influenced by the neurons they’ve connected with before. Each layer is like a section of a choir, adding its own voice to the song with perfect timing.

Now, other teams have already developed technologies that can reconstruct images – even dream imagery – from activity in the human brain – so what’s so amazing about this particular study? The level of detail:

Areas LM, AL, RL, and AM prefer up to three times faster temporal frequencies and significantly lower spatial frequencies than V1, while V1 and PM prefer high spatial and low temporal frequencies. LI prefers both high spatial and temporal frequencies. All extrastriate areas except LI increase orientation selectivity compared to V1, and three areas are significantly more direction selective (AL, RL, and AM). Specific combinations of spatiotemporal representations further distinguish areas.

Are you seeing this? We’re talking about tuning in to specific communication channels within the visual cortex, down at the level of individual neuronal networks.
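
To make that concrete, here’s a toy sketch of “tuned areas” – each one responding best to its own preferred spatial and temporal frequency. The preference values are invented for illustration, loosely inspired by the quote above, not taken from the paper’s measurements:

    import math

    # Toy sketch of "tuned areas" (preference values invented, loosely
    # inspired by the quote above): each area responds best to its own
    # preferred spatial and temporal frequency.

    def tuning(preferred, actual, width=1.0):
        # Gaussian tuning curve in log-frequency space
        return math.exp(-((math.log2(actual) - math.log2(preferred)) ** 2)
                        / (2 * width ** 2))

    AREAS = {                   # (spatial freq, temporal freq) preferences
        "V1": (0.08, 1.0),      # high spatial, low temporal
        "PM": (0.06, 1.0),
        "AL": (0.02, 4.0),      # low spatial, fast temporal
        "LI": (0.12, 6.0),      # high spatial AND temporal
    }

    stimulus = {"spatial": 0.02, "temporal": 4.0}   # a big, fast-moving grating

    for area, (sf, tf) in AREAS.items():
        response = (tuning(sf, stimulus["spatial"])
                    * tuning(tf, stimulus["temporal"]))
        print(f"{area}: response {response:.3f}")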

The gap between mind and machine is getting narrower every day. How does that make you feel?

The Memory Master

A gene that may underlie the molecular mechanisms of memory has been identified, says a new study.

Some of us feel that "yellow" and "red" are open to interpretation...

The gene’s called neuronal PAS domain protein 4 (Npas4 to its friends). When a brain has a new experience, Npas4 leaps into action, activating a whole series of other genes that modify the strength of synapses – the connections that allow neurons to pass electrochemical signals around.

You can think of synapses as being a bit like traffic lights: a very strong synapse is like a green light, allowing lots of traffic (i.e., signals) to pass down a particular neural path when the neuron fires. A weaker synapse is like a yellow light – some signals might slip through now and then, but most won’t make it. Some synapses can inhibit others, acting like red lights – stopping any signals from getting through. And if a particular synapse goes untraveled for long enough, the road starts to crumble away – until finally, there’s no synapse left.

There’s a saying in neuroscience: “Cells that fire together wire together.” (And vice versa.) In other words, synaptic plasticity – the ability of neurons to modify their connectivity patterns – is what allows neural networks to physically change as they take in new information. It’s what gives our brains the ability to learn.
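
For the programmers in the audience, here’s a minimal sketch of that textbook principle – strengthen the synapses between co-active cells, decay the unused ones, and prune whatever withers away. (The numbers are invented; this is the general idea, not any particular study’s model.)

    import random

    # A minimal Hebbian sketch (textbook principle, invented numbers):
    # synapses between co-active cells strengthen; unused ones decay
    # and are eventually pruned.

    STRENGTHEN = 0.2     # weight gain when both cells fire together
    DECAY = 0.05         # weight loss per step otherwise
    PRUNE_BELOW = 0.01   # below this, the road has crumbled away

    weights = {("A", "B"): 0.5, ("A", "C"): 0.5}

    random.seed(0)
    for step in range(100):
        fired = {"A": True,
                 "B": random.random() < 0.8,    # B usually fires with A
                 "C": random.random() < 0.1}    # C almost never does
        for pre, post in list(weights):
            if fired[pre] and fired[post]:
                weights[(pre, post)] = min(1.0, weights[(pre, post)] + STRENGTHEN)
            else:
                weights[(pre, post)] -= DECAY
                if weights[(pre, post)] < PRUNE_BELOW:
                    del weights[(pre, post)]    # synapse eliminated

    print(weights)   # expect: A->B saturated near 1.0, A->C pruned away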

In fact, millions of neurons are delicately tinkering with their connectivity patterns right now, inside your head, as you learn this stuff. Pretty cool, huh?

Anyway, synaptic plasticity’s not exactly breaking news – scientists have been studying it in animals like squid and sea slugs since the 1970s. Neurons in those animals are pretty easy to study with electrodes and a microscope, because a) the animals are anatomically simple compared to humans, and b) some of their neurons are so huge they can be seen with the naked eye.

Studying synapses in humans isn’t quite so simple, though. For one thing, most people wouldn’t like it if you cut open their brain and started poking around while they were alive and conscious – and besides, a lot of the really interesting stuff happens down at the molecular level.

That brings up an important point: though you normally hear about genes in connection with traits – say, a “gene for baldness” and so on – these complex molecular strands actually play all sorts of roles in the body, from building cells to adjusting chemical levels to telling other genes what to do.

That’s why MIT’s Yingxi Lin and her team set out to study the functions of certain genes found in the hippocampus – a brain structure central to memory formation – the journal Science reports. The researchers taught a group of mice to avoid a little room in which they received a mild electric shock, then used a precise chemical tracking technique to isolate which genes in the mouse hippocampus were activated right when the mice learned which room to avoid.

In particular, they focused on a hippocampal region with the sci-fi-sounding name of Cornu Ammonis 3 – or CA3 for short:

We found that the activity-dependent transcription factor Npas4 regulates a transcriptional program in CA3 that is required for contextual memory formation. Npas4 was specifically expressed in CA3 after contextual learning.

By “transcriptional program,” the paper’s authors mean a series of genetic “switches” – genes that Npas4 activates – which in turn make chemical adjustments that strengthen or weaken synaptic connections. In short, Npas4 appears to be part of the master “traffic conductor program” for many of the brain’s synapses.
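
Here’s a cartoon of what a “transcriptional program” means, in code. Fair warning: the downstream genes and their effects below are entirely made up for illustration – the paper doesn’t spell out targets like these – but the master-switch structure is the idea:

    # A cartoon of a "transcriptional program" (the downstream genes and
    # their effects here are hypothetical -- only the master-switch
    # structure reflects the finding).

    def gene_strengthen(synapses):
        # hypothetical target gene: boosts recently active synapses
        return {name: min(1.0, w * 1.2) if name.startswith("active") else w
                for name, w in synapses.items()}

    def gene_weaken(synapses):
        # hypothetical target gene: damps idle synapses, sharpening contrast
        return {name: w if name.startswith("active") else w * 0.8
                for name, w in synapses.items()}

    def npas4(new_experience, synapses):
        if not new_experience:
            return synapses                  # Npas4 stays quiet
        for gene in (gene_strengthen, gene_weaken):   # the "program"
            synapses = gene(synapses)
        return synapses

    before = {"active_1": 0.5, "active_2": 0.4, "idle_1": 0.5}
    print(npas4(True, before))   # active synapses up, the idle one down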

Though they were pretty excited by this discovery (who wouldn’t be?), the researchers took a deep breath, calmed down, and double-checked their results by testing memory formation in mice whose brains were unable to produce Npas4:

Global knockout or selective deletion of Npas4 in CA3 both resulted in impaired contextual memory, and restoration of Npas4 in CA3 was sufficient to reverse the deficit in global knockout mice.

In short, they make a pretty convincing argument that Npas4 is a necessary ingredient in a mouse’s ability – and probably our ability – to form certain types of new memories.

Exactly how that program relates to our experience of memory remains unclear, but it’s a promising starting point for fine-tuning future memory research. I don’t know about you, but I’d be thrilled to green-light such a project.

Guiding Neuron Growth

Our neurons’ growth can be shaped by tiny cues from spinning microparticles in the fluids that surround them, a new study reports.

An axon gets all friendly with a spinnin' microparticle.

The branching and growth of neurons are guided by several kinds of cues, including their chemical environment, their location within the brain, and the dense network of glial cells that support and protect them. But as it turns out, they’re also surprisingly responsive to fluid dynamics, turning in response to the rotation of nearby microparticles – a bit like the way a vine can climb a fence-post.

Since the early days of neuroscience, researchers have dreamed of growing and shaping neurons for specific purposes – to patch gaps in damaged neural networks, for example, or just to test their workings under controlled lab conditions.

But it’s only in the past few years that technologies like microfluidic chambers and pluripotent stem cells have enabled researchers to grow healthy, living neurons according to precise specifications, and study those cells’ responses to all kinds of stimuli. In fact, it looks like it won’t be much longer ’til doctors can implant networks of artificially grown neurons directly into living adult brains.

But as the journal Nature Photonics reports, the big breakthrough this time comes from Samarendra Mohanty at The University of Texas at Arlington, who found that neuron growth can respond to physical cues – spinning particles in fluid, for instance – as well as to chemical ones.

Mohanty’s team discovered this by using a tiny laser to direct the spin of a microparticle positioned next to the axon of a growing neuron. The spinning particle generated a miniature counterclockwise vortex in the fluid – and wouldn’t ya know it, the axon started wrapping around the spinning particle as the neuron grew:

Circularly polarized light with angular momentum causes the trapped bead to spin. This creates a localized microfluidic flow … against the growth cone that turns in response to the shear.

In short, this is the first time a scientific team has used a mechanical device – a “micro-motor,” as they call it – to directly control and precisely adjust the growth of a single axon:

The direction of axonal growth can be precisely manipulated by changing the rotation direction and position of this optically driven micromotor.
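
For fun, here’s a toy two-dimensional sketch of that steering idea – a growth cone whose heading drifts with the local flow around a spinning bead, so that flipping the spin flips the turn. It’s a cartoon of the geometry, not the actual fluid physics; all the gains, step sizes, and positions are invented:

    import math

    # Toy sketch of the steering geometry (invented parameters, not the
    # experiment's physics): a growth cone's heading drifts toward the
    # local flow produced by a spinning bead.

    def flow_direction(tip, bead, spin):
        # A rotating bead drags fluid around it; near the growth cone the
        # flow runs roughly perpendicular to the bead-to-tip line.
        # spin = +1 for counterclockwise, -1 for clockwise.
        dx, dy = tip[0] - bead[0], tip[1] - bead[1]
        return math.atan2(spin * dx, -spin * dy)

    def grow(spin, steps=100):
        tip, heading = [0.0, 0.0], 0.0      # start growing along +x
        bead = (5.0, 1.0)                   # micromotor parked beside the path
        for _ in range(steps):
            target = flow_direction(tip, bead, spin)
            # relax the heading toward the flow (shortest-angle turn)
            heading += 0.05 * math.atan2(math.sin(target - heading),
                                         math.cos(target - heading))
            tip[0] += 0.1 * math.cos(heading)
            tip[1] += 0.1 * math.sin(heading)
        return tip

    print("counterclockwise spin ->", [round(v, 2) for v in grow(+1)])
    print("clockwise spin        ->", [round(v, 2) for v in grow(-1)])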

So far, the micromotor only works 42 percent of the time – but the team is optimistic that future tests will lead to greater reliability and more precise control. In the near future, micromotors like this one could be used to turn the growth of an axon back and forth – or even to funnel growth through “gauntlets” of spinning particles.

Most conveniently of all, the particles could be injected, re-positioned, and removed as needed – providing a much simpler, more modifiable architecture than any other neuron-shaping technology in use today.

And for the slightly more distant future, Mohanty’s lab is hard at work on a system for providing long-range, long-term guidance to entire neural networks through completely non-invasive optical methods.

Until then, though, isn’t it amazing to stop and think about all the neurons that are growing and reshaping themselves – all the delicate intertwining lattices relaying millions of mysterious coded messages, right now, within the lightless interior of your own head?

Call me self-centered, but I think it’s just about the coolest thing on planet Earth.

Synaptic Changes

Synapses – the junctions where neurons communicate – are constantly growing and pruning themselves – and those two processes occur independently of one another, says a new study.

A neuron shows off its fancy green glowy-ness.

As a synapse sees more and more use, it tends to grow stronger, while synapses that fall out of use tend to grow weaker and eventually die off. Collectively, these processes are known as synaptic plasticity: the ability of synapses to change their connective properties. But as it turns out, the elimination of redundant synapses isn’t directly dependent on others being strengthened – instead, it seems to be triggered by its own independent chemical messaging system.

Like many mammals, we humans are born with brains that aren’t yet particularly well-adapted for anything other than – well, adaptation. Through our brains’ extraordinary plasticity, we’re able to learn, in a few short years, everything from how to walk upright to how to argue – and as we grow older, our brains learn how to learn more quickly and think more efficiently by pruning away unneeded synapses.

But all that speed and flexibility come at a price: the balance between synaptic growth and die-back must be delicately maintained. Excessive connectivity can lead to diseases like epilepsy, while insufficient communication can result in disorders like autism.

This kind of summarizing oversimplifies the situation, of course – brain disorders rarely have just a single cause, or a clearly defined set of symptoms. Even so, they provide grim reminders of the precise electrochemical balancing act that continues in the dark backstage of our skulls throughout each moment of every day.

And now, as the Journal of Neuroscience reports, a team led by The Jackson Laboratory’s Zhong-wei Zhang has found that synaptic growth and pruning aren’t two parts of a single process after all – they’re two processes, coordinated through two different chemical signaling systems.

The team discovered this by studying a certain type of glutamatergic synapses – i.e., synapses that use the common neurotransmitter chemical glutamate – in mouse neurons. The specific type of synaptic receptor the team studied is called the AMPAR, which is short for (get ready for this) “α-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid receptor.”

The team discovered that when young mice lacked key AMPAR subunits (GluA3 and GluA4), the synapses relying on these receptors didn’t grow stronger, as they did in normal mice:

Deletion of GluA3 or GluA4 caused significant reductions of synaptic AMPAR currents in thalamic neurons at P16–P17, with a greater reduction observed in GluA3-deficient mice.

No huge shock there. But what was surprising for the team was their other discovery – that even in these subunit-deficient mice, the pruning of redundant synapses still proceeded normally:

Deletions of GluA3 or GluA4 or both had no effect on the elimination of relay inputs: the majority of thalamic neurons in these knock-out mice—as in wild-type mice—receive a single relay input.

Looking at this data, it’s hard to escape the conclusion that the pruning of these synapses depends on a separate mechanism from their strengthening. Just what that other mechanism is remains to be seen.
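
If it helps, here’s the takeaway as a toy model (mine, not the researchers’): run strengthening and elimination as two separate processes, then “knock out” the strengthening pathway and watch elimination carry on regardless:

    # The takeaway as a toy model (mine, not the paper's): strengthening
    # and elimination run as separate processes. "Knocking out"
    # strengthening -- as deleting GluA3/GluA4 did -- leaves elimination
    # untouched.

    def develop(inputs, strengthening_enabled):
        if strengthening_enabled:
            # Strengthening pathway: the busiest input gains synaptic weight.
            busiest = max(inputs, key=inputs.get)
            inputs[busiest] *= 2.0

        # Elimination pathway: runs regardless, pruning all but one relay input.
        survivor = max(inputs, key=inputs.get)
        return {survivor: inputs[survivor]}

    relay_inputs = {"input_1": 0.3, "input_2": 0.5, "input_3": 0.2}

    print("wild-type:", develop(dict(relay_inputs), strengthening_enabled=True))
    print("knockout: ", develop(dict(relay_inputs), strengthening_enabled=False))
    # Both end up with a single relay input -- but the knockout's survivor
    # carries a weaker synapse, as in the reduced AMPAR currents above.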

In short, this is another example of how even the most mundane workings of our brains continue to stun neuroscientists with brand-new revelations every day.

Silicon Synapses

A new kind of computer chip mimics the way a neuron learns, a new study reports.

Behold! The mighty Synapse Chip!

The 400-transistor chip simulates the activity of a single synapse – a connection between two neurons. Because of the chip’s complexity, it’s able to mimic a synapse’s plasticity – its ability to subtly change structure and function in response to new stimuli.

For example, a synapse that repeatedly responds to an electric shock might, over time, become less sensitive to that shock. Thus, synaptic plasticity forms the basis of neural learning, well below the level of conscious processing.

The human brain contains approximately 100 billion neurons, and more than 100 trillion synapses. Ever since the anatomist Santiago Ramón y Cajal discovered the function of neurons back in the early 1900s, scientists have dreamed of building a device that replicates the behavior of even a single synapse. For decades, they had to content themselves with mathematical models and digital simulations.

But now, as the journal Proceedings of the National Academy of Sciences reports, a team led by MIT’s Chi-Sang Poon has constructed a working silicon model of a synapse in the physical world.

The chip uses transistors to mimic the activity of ion channels – small “ports” in the cell membranes of neurons that open in response to neurotransmitter chemicals, allowing varying amounts of ions (positively or negatively charged atoms) to pass in and out of the cell. These channels form the basis for synaptic communication – and for some of the most hotly researched topics in neuroscience.

While ordinary transistors act as binary “on/off” gates, neural synapses conduct signals along fairly smooth gradients, allowing the signals to increase in strength until they finally trigger the neuron to “fire” a signal on to its neighbor(s). It was this gradient property that Poon’s team sought to mimic with their new chip:

While most chips operate in a binary, on/off mode, current flows through the transistors on the new brain chip in analog, not digital, fashion. A gradient of electrical potential drives current to flow through the transistors just as ions flow through ion channels in a cell.
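
Here’s a minimal sketch of that analog flavor in code (illustrative only – nothing like the chip’s actual circuit): graded current accumulates until the downstream “neuron” crosses threshold, fires, and resets:

    # A minimal sketch of the analog idea (illustrative only -- not the
    # chip's circuit): graded current accumulates until the downstream
    # "neuron" crosses threshold, fires, and resets.

    THRESHOLD = 1.0

    def graded_current(gradient, conductance=0.4):
        # analog: current proportional to the driving potential gradient,
        # rather than a binary on/off gate
        return conductance * gradient

    membrane = 0.0
    for t, gradient in enumerate([0.2, 0.5, 0.9, 1.2, 0.3, 1.5]):
        membrane += graded_current(gradient)
        fired = membrane >= THRESHOLD
        print(f"t={t}: drive={gradient:.1f}  accumulated={membrane:.2f}  "
              f"fired={fired}")
        if fired:
            membrane = 0.0   # reset after passing the signal along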

Since the researchers can also tweak the chip’s properties to mimic different kinds of ion channels, this provides one of the most realistic models yet for studying how individual synapses actually work.

The researchers have already used the chip to study long-term depression (LTD), the process by which certain patterns of activity gradually weaken a synapse. They also hope they’ll soon be using chips like this one in conjunction with lab-grown biological neurons, to discover all kinds of exciting new things about how cells behave “in the wild.”

Who knows – by this time next year, we may be watching nature documentaries about Neuron Cyborgs – but my guess is that the SyFy channel will get there first.

Brief and tangentially relevant side-note: Connectome posts may be somewhat spotty over the next few weeks, as I’m currently launching a new project, the details of which need to be kept under wraps (or “on the DL,” as the kids say) for the time being. I’ll do my best to report on neuroscience breakthroughs as often as I can during this period, and things should be back to (relatively) normal soon. Thanks for stickin’ with me.
