Posts Tagged ‘ neurons ’

Connection Clusters

As our brains learn something, our neurons form new connections in clustered groups, says a new study.

Some clusters are juicier than others.

In other words, synapses – connections between neurons – are much more likely to form near other brand-new synapses than they are to emerge near older ones.

As our neuroscience friends like to say: “Cells that fire together wire together” – and that process of rewiring never stops. From before you were born right up until this moment, the synaptic pathways in your brain have been transforming, hooking up new electrochemical connections and trimming away the ones that aren’t needed. Even when you’re sound asleep, your brain’s still burning the midnight oil, looking for ever-sleeker ways to do its many jobs.

I like to imagine that this happens to the sound of a really pumped-up drumbeat, as my brain says things like, “We can rebuild this pathway – we have the technology! We can make it better! Faster! Stronger!”

What’s even more amazing is how delicate these adjustments can be. We’re not just talking about growing dendrites here – we’re talking about dendritic spines, the tiny knobs that branch off from dendrites and bloom into postsynaptic densities – molecular interfaces that allow one neuron to receive information from its neighbors.

Back in 2005, a team led by Yi Zuo at the University of California Santa Cruz found that as a mouse learns a new task, thousands of fresh dendritic spines blossom from the dendrites of neurons in the motor cortex (an area of the brain that helps control movement). In short, they actually observed neurons learning to communicate better.

And now Zuo’s back with another hit, the journal Nature reports. This time, Zuo and her team have shown that those new dendritic spines aren’t just popping up at random – they grow in bunches:

A third of new dendritic spines (postsynaptic structures of most excitatory synapses) formed during the acquisition phase of learning emerge in clusters, and that most such clusters are neighbouring spine pairs.

The team discovered this by studying fluorescent mouse neurons under a microscope. (Oh, did you know there are mice with glowing neurons? Because there are mice with glowing neurons.) As in Zuo’s earlier study, they focused on neurons in the motor cortex:

We followed apical dendrites of layer 5 pyramidal neurons in the motor cortex while mice practised novel forelimb skills.

But as it turned out, their discovery about clustered spines was just the tip of the iceberg – the researchers also found that when a second dendritic spine formed close to one that was already there, the first spine grew larger, strengthening the connection even more. And they learned that clustered spines were much more likely to persist than non-clustered ones were, which just goes to show the importance of a solid support network. And finally, they found that the new spines don’t form when just any signal passes through – new connections only blossom when a brain is learning through repetition.
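If you feel like playing with the idea, here’s a minimal toy simulation – my own sketch in Python, not the team’s actual analysis – of how even a small bias toward forming near recent spines produces exactly this kind of clustering statistic:

```python
import random

# Toy model of clustered spine formation along a 1-D dendrite
# (my invention, not the paper's method). With the bias on, a new
# spine sometimes forms right beside a recently formed one; with
# it off, spines land uniformly at random.

random.seed(42)

DENDRITE_LENGTH = 500.0   # um of dendrite
NEIGHBOR_RADIUS = 1.5     # um; how close counts as a "neighbouring pair"

def grow_spines(n_new, cluster_bias):
    spines = []
    for _ in range(n_new):
        if spines and random.random() < cluster_bias:
            anchor = random.choice(spines)      # form beside a recent spine
            pos = anchor + random.uniform(-NEIGHBOR_RADIUS, NEIGHBOR_RADIUS)
        else:
            pos = random.uniform(0.0, DENDRITE_LENGTH)
        spines.append(pos)
    return spines

def clustered_fraction(spines):
    near = {i for i in range(len(spines)) for j in range(len(spines))
            if i != j and abs(spines[i] - spines[j]) <= NEIGHBOR_RADIUS}
    return len(near) / len(spines)

for bias in (0.0, 0.35):
    runs = [clustered_fraction(grow_spines(40, bias)) for _ in range(200)]
    print(f"bias {bias:.2f}: ~{sum(runs) / len(runs):.0%} of spines in clusters")
```

Turning the bias up or down shifts the clustered fraction far above what chance placement gives you – which is the flavor of evidence the team was looking for.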

Can you imagine how many new dendritic spines were bursting to life in the researchers’ brains as they learned all this? And what about in your brain, right now?

It’s kinda strange to think about this stuff, I know – even stranger is the realization that your brain isn’t so much an object as it is a process – a constantly evolving system of interconnections. You could say that instead of human beings, we’re really human becomings – and thanks to your adaptable neurons, each moment is a new opportunity to decide who – or what – you’d like to become.

Beyond Perfection

If you continue to practice a skill even after you’ve achieved mastery of it, your brain keeps learning to perform it more and more efficiently, says a new study.

Believing you've reached perfection can lead you to engage in some...interesting...behavior.

As we perform a task – say, dunking a basketball or playing a sweet guitar solo – over and over again, we eventually reach a point that some psychologists call “unconscious competence,” where we execute each movement perfectly without devoting any conscious attention to it at all. But even after this point, our bodies keep finding ways to perform the task more and more efficiently, burning less energy with each repetition.

This story’s got it all – brain-hacks, mysterious discoveries, robots – but to put it all in perspective, we’ve gotta start by talking about this idea we call perfection.

“Practice makes perfect,” the old saying goes – but what’s this “perfect” we’re trying to reach? Isn’t it often a matter of opinion? What I mean is, how do we judge, say, a “perfect” backflip or a “perfect” dive? We compare it to others we’ve seen, and decide that it meets certain criteria better than those examples did; that it was performed with less error.

But where do these criteria for perfection come from? Well, some have said there’s a Platonic realm of “perfect forms” that our minds are somehow tapping into – a realm that contains not only “The Perfect Chair” but “the perfect version of that chair” and “the perfect version of that other chair” and “the perfect version of that molecule” and so on, ad infinitum. Kinda weird, I know – but a lot of smart people believed in ideas like this for thousands of years, and some still do.

Science, though, works in a different way: Instead of trying to tap into a world of perfect forms, scientists (and engineers and mathematicians and programmers and so on) work to find errors and fix them.

And it turns out that the human body is quite talented at doing exactly that. A team led by Alaa Ahmed at the University of Colorado at Boulder found this out firsthand, with the help of robots, the Journal of Neuroscience reports:

Seated subjects made horizontal planar reaching movements toward a target using a robotic arm.

These researchers weren’t interested in brain activity – instead, as the volunteers practiced moving the arm, the researchers measured their oxygen consumption, their carbon dioxide output, and their muscle activity.

As you might expect, the scientists found that as people got better at moving the arm, their consumption of oxygen and production of carbon dioxide, and their overall muscle activity, steadily decreased:

Subjects decreased movement error and learned the novel dynamics. By the end of learning, net metabolic power decreased by ∼20% from initial learning. Muscle activity and coactivation also decreased with motor learning.

But the volunteers’ bodies didn’t stop there. As people kept practicing, their gas consumption and output continued to decrease – and so did their muscle activation. In short, their bodies kept learning to move the arm with measurably less and less physical effort.
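One way to picture that two-stage optimization is as error reduction with a weak energy penalty tagging along. Here’s a hedged back-of-the-envelope sketch – my own toy model, not the study’s – in which two opposing muscles learn a reach, the error flatlines early, and the effort keeps dropping as needless coactivation gets trimmed away:

```python
# Toy model (my sketch, not the study's): two opposing muscles
# produce a reaching force. Error vanishes once their difference
# matches the target, but metabolic effort keeps falling as
# needless coactivation is trimmed away.

TARGET = 1.0          # required net force
LEARN_RATE = 0.05
EFFORT_WEIGHT = 0.02  # weak pressure to save energy

a1, a2 = 2.0, 1.5     # start clumsy and heavily coactivated

for trial in range(1, 501):
    net = a1 - a2
    # gradient of cost = squared error + small effort penalty
    grad1 = 2 * (net - TARGET) + 2 * EFFORT_WEIGHT * a1
    grad2 = -2 * (net - TARGET) + 2 * EFFORT_WEIGHT * a2
    a1 = max(0.0, a1 - LEARN_RATE * grad1)
    a2 = max(0.0, a2 - LEARN_RATE * grad2)
    if trial in (1, 10, 100, 500):
        error = (a1 - a2 - TARGET) ** 2
        effort = a1 ** 2 + a2 ** 2
        print(f"trial {trial:3d}: error={error:.4f}  effort={effort:.3f}")
```

Run it and you’ll see the error hit rock bottom within a few dozen trials while the effort is still quietly shrinking hundreds of trials later – the same signature the metabolic measurements picked up.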

Though this study didn’t record any data from the subjects’ brains, it’s easy to see how this continual improvement is just one reflection of a very versatile ability. For instance, we know that when two neurons get really friendly, they become more sensitive to each other’s signals – and we also know that underused neural pathways gradually fade away, making room for new ones. Self-improvement impulses are woven deeply into our bodies – into our cells.

When I say that our brains and bodies are cities, I’m not just speaking metaphorically – you are, quite literally, a vast community – an ecosystem composed of trillions of interdependent microorganisms, each one constantly struggling for its own nourishment and safety.

And though your conscious mind is one part – a very significant part – of this great microscopic nation, it’s not the only part that can learn. At this moment, all throughout the lightless highways and chambers of your body, far below your conscious access, networks of cells are changing, adapting, learning, adjusting – finding errors and fixing them.

So, you can think about “perfection” all you want – but even at that magical moment when you achieve it, the multitudes within you are still hard at work, figuring out how to reach beyond that ideal.

What do you think they’re up to right now?

Taking Vision Apart

For the first time, scientists have created neuron-by-neuron maps of brain regions corresponding to specific kinds of visual information, and specific parts of the visual field, says a new study.

At age 11, Cajal landed in prison for blowing up his town's gate with a homemade cannon. Seriously. Google it.

If other labs can confirm these results, this will mean we’re very close to being able to predict exactly which neurons will fire when an animal looks at a specific object.

Our understanding of neural networks has come a very long way in a very short time. It was just a little more than 100 years ago that Santiago Ramón y Cajal first proposed the theory that individual cells – neurons – were the basic processing units of the central nervous system (CNS). Cajal lived until 1934, so he got to glimpse the edge – but not much more – of the strange new frontier he’d discovered. As scientists like Alan Lloyd Hodgkin and Andrew Huxley – namesakes of today’s Hodgkin-Huxley neuron simulator – started studying neurons’ behavior, they began realizing that the brain’s way of processing information was much weirder and more complex than anyone had expected.

See, computers and neuroscience evolved hand-in-hand – in many ways, they still do – and throughout the twentieth century, most scientists described the brain as a sort of computer. But by the early 1970s, they were realizing that a computer and a brain are different in a very fundamental way: computers process information in bits – tiny electronic switches that say “on” or “off” – but a brain processes information in connections and gradients – degrees to which one piece of neural architecture influences others. In short, our brains aren’t digital – they’re analog. And as we all know, there’s just something warmer about analog.

So where does this leave us now? Well, instead of trying to chase down bits in brains, many of today’s cutting-edge neuroscientists are working to figure out what connects to what, and how those connections form and change as a brain absorbs new information. In a way, the process isn’t all that different from trying to identify all the cords tangled up under your desk – it’s just that in this case, there are trillions of plugs, and a lot of them are molecular in size. That’s why neuroscientists need supercomputers that fill whole rooms to crunch the numbers – though I’m sure you’ll laugh if you reread that sentence in 2020.

But the better we understand brains, the better we get at understanding them – and that’s why a team led by the Salk Institute’s James Marshel and Marina Garrett set out to map the exact neural pathways that correspond to specific aspects of visual data, the journal Neuron reports. (By the way, if you guys are reading this, I live in L.A. and would love to visit your lab.)

The team injected mouse brains with a special dye that’s chemically formulated to glow fluorescent when a neuron fires. This allowed them to track exactly which neurons in a mouse’s brain were active – and how active they were – when the mice were shown various shapes. And the researchers confirmed something wonderfully weird about the way a brain works:

Each area [of the visual cortex] contains a distinct visuotopic representation and encodes a unique combination of spatiotemporal features.

In other words, a brain doesn’t really have sets of neurons that encode specific shapes – instead, it has layers of neurons, and each layer encodes an aspect of a shape – its roundness, its largeness, its color, and so on. As signals pass through each layer, they’re influenced by the neurons they’ve connected with before. Each layer is like a section of a choir, adding its own voice to the song with perfect timing.
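To make that “layers of aspects” idea concrete, here’s a purely illustrative sketch – nothing from the paper itself – where each “area” is a little function that reports one property of the stimulus:

```python
import math

# Illustration only: each "visual area" reads the same stimulus and
# reports one aspect of it, the way areas like V1, AL, or PM each
# prefer their own spatiotemporal features.

def area_size(points):
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    return (max(xs) - min(xs)) * (max(ys) - min(ys))

def area_roundness(points):
    cx = sum(x for x, _ in points) / len(points)
    cy = sum(y for _, y in points) / len(points)
    radii = [math.hypot(x - cx, y - cy) for x, y in points]
    return 1.0 / (1.0 + max(radii) - min(radii))   # 1.0 = perfectly round

def area_orientation(points):
    (x0, y0), (x1, y1) = points[0], points[-1]
    return math.degrees(math.atan2(y1 - y0, x1 - x0))

stimuli = {
    "circle": [(5 * math.cos(t / 10), 5 * math.sin(t / 10)) for t in range(63)],
    "bar": [(t / 10.0, t / 20.0) for t in range(51)],
}

# Each "area" adds its own voice to the description of the stimulus.
for stim_name, points in stimuli.items():
    print(stim_name)
    for area_name, area in [("size", area_size),
                            ("roundness", area_roundness),
                            ("orientation", area_orientation)]:
        print(f"  {area_name:11s}: {area(points):7.2f}")
```

No single function “knows” the whole shape – the description only emerges from all of them singing together, which is the choir analogy in code form.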

Now, other teams have already developed technologies that can record memories and dreams right out of the human brain – so what’s so amazing about this particular study? The level of detail:

Areas LM, AL, RL, and AM prefer up to three times faster temporal frequencies and significantly lower spatial frequencies than V1, while V1 and PM prefer high spatial and low temporal frequencies. LI prefers both high spatial and temporal frequencies. All extrastriate areas except LI increase orientation selectivity compared to V1, and three areas are significantly more direction selective (AL, RL, and AM). Specific combinations of spatiotemporal representations further distinguish areas.

Are you seeing this? We’re talking about tuning in to specific communication channels within the visual cortex, down at the level of individual neuronal networks.

The gap between mind and machine is getting narrower every day. How does that make you feel?

The Memory Master

A gene that may underlie the molecular mechanisms of memory has been identified, says a new study.

Some of us feel that "yellow" and "red" are open to interpretation...

The gene’s called neuronal PAS domain protein 4 (Npas4 to its friends). When a brain has a new experience, Npas4 leaps into action, activating a whole series of other genes that modify the strength of synapses – the connections that allow neurons to pass electrochemical signals around.

You can think of synapses as being a bit like traffic lights: a very strong synapse is like a green light, allowing lots of traffic (i.e., signals) to pass down a particular neural path when the neuron fires. A weaker synapse is like a yellow light – some signals might slip through now and then, but most won’t make it. Some synapses can inhibit others, acting like red lights – stopping any signals from getting through. And if a particular synapse goes untraveled for long enough, the road starts to crumble away – until finally, there’s no synapse left.
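If you want the traffic-light analogy in code form, here’s a quick toy sketch – mine, not anything from the study:

```python
import random

random.seed(7)

# Toy synapse: a strong weight is a green light, a weak one a
# yellow light, a negative one a red light - and an unused
# synapse decays until it disappears entirely.

class Synapse:
    def __init__(self, weight):
        self.weight = weight

    def transmit(self):
        # Chance of a signal getting through scales with the weight;
        # a negative ("red light") weight never passes anything.
        return random.random() < max(0.0, self.weight)

    def decay(self, rate=0.1):
        # The road crumbles if it goes untraveled.
        self.weight -= rate
        return self.weight > 0.0

green, yellow = Synapse(0.9), Synapse(0.3)
print("green light :", sum(green.transmit() for _ in range(100)), "/ 100 signals")
print("yellow light:", sum(yellow.transmit() for _ in range(100)), "/ 100 signals")

unused, quiet_periods = Synapse(0.5), 0
while unused.decay():
    quiet_periods += 1
print(f"unused synapse crumbled away after {quiet_periods + 1} quiet periods")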

There’s a saying in neuroscience: “Cells that fire together wire together.” (And vice versa.) In other words, synaptic plasticity – the ability of neurons to modify their connectivity patterns – is what allows neural networks to physically change as they take in new information.  It’s what gives our brains the ability to learn.

In fact, millions of neurons are delicately tinkering with their connectivity patterns right now, inside your head, as you learn this stuff. Pretty cool, huh?

Anyway, synaptic plasticity’s not exactly breaking news – scientists have been studying it in animals like squid and sea slugs since the 1970s. Neurons in those animals are pretty easy to study with electrodes and a microscope, because a) the animals are anatomically simple compared to humans, and b) some of their neurons are so huge they can be seen with the naked eye.

Studying synapses in humans isn’t quite so simple, though. For one thing, most people wouldn’t like it if you cut open their brain and started poking around while they were alive and conscious – and besides, a lot of the really interesting stuff happens down at the molecular level.

That brings up an important point: though you normally hear about genes in connection with traits – say, a “gene for baldness” and so on – these complex molecular strands actually play all sorts of roles in the body, from building cells to adjusting chemical levels to telling other genes what to do.

That’s why MIT’s Yingxi Lin and her team set out to study the functions of certain genes found in the hippocampus – a brain structure central to memory formation – the journal Science reports. The researchers taught a group of mice to avoid a little room in which they received a mild electric shock, then used a precise chemical tracking technique to isolate which genes in the mouse hippocampus were activated right when the mice learned which room to avoid.

In particular, they focused on a hippocampal region with the sci-fi-sounding name of Cornu Ammonis 3 – or CA3 for short:

We found that the activity-dependent transcription factor Npas4 regulates a transcriptional program in CA3 that is required for contextual memory formation. Npas4 was specifically expressed in CA3 after contextual learning.

By “transcriptional program,” the paper’s authors mean a series of genetic “switches” – genes that Npas4 activates – which in turn make chemical adjustments that strengthen or weaken synaptic connections. In short, Npas4 appears to be part of the master “traffic conductor program” for many of the brain’s synapses.
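As a loose illustration of what a “transcriptional program” means – with every gene name below Npas4 invented purely for the purpose – picture a master switch flipping a cascade of downstream switches:

```python
# Hypothetical regulatory cascade (all gene names below Npas4 are
# made up): the master regulator switches on downstream genes,
# whose products nudge synaptic strengths up or down.

CASCADE = {
    "Npas4": ["geneA", "geneB"],                  # fictional targets
    "geneA": ["strengthen_some_synapses"],
    "geneB": ["weaken_other_synapses"],
}

def run_program(gene, depth=0):
    print("  " * depth + gene)                    # show the chain of switches
    for target in CASCADE.get(gene, []):
        run_program(target, depth + 1)

run_program("Npas4")   # one experience -> a whole cascade of adjustments
```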

Though they were pretty excited by this discovery (who wouldn’t be?), the researchers took a deep breath, calmed down, and double-checked their results by testing memory formation in mice whose brains were unable to produce Npas4:

Global knockout or selective deletion of Npas4 in CA3 both resulted in impaired contextual memory, and restoration of Npas4 in CA3 was sufficient to reverse the deficit in global knockout mice.

In short, they make a pretty convincing argument that Npas4 is a necessary ingredient in a mouse’s ability – and probably our ability – to form certain types of new memories.

Exactly how that program relates to our experience of memory remains unclear, but it’s a promising starting point for fine-tuning future memory research. I don’t know about you, but I’d be thrilled to green-light such a project.

Guiding Neuron Growth

Our neurons’ growth can be shaped by tiny cues from spinning microparticles in the fluids that surround them, a new study reports.

An axon gets all friendly with a spinnin' microparticle.

The branching and growth of neurons are guided by several kinds of cues, including their chemical environment, their location within the brain, and the dense network of glial cells that support and protect them. But as it turns out, they’re also surprisingly responsive to fluid dynamics, turning in response to the rotation of nearby microparticles – a bit like the way a vine can climb a fence-post.

Since the early days of neuroscience, researchers have dreamed of growing and shaping neurons for specific purposes – to patch gaps in damaged neural networks, for example; or just to test their workings under controlled lab conditions.

But it’s only in the past few years that technologies like microfluidic chambers and pluripotent stem cells have enabled researchers to grow healthy, living neurons according to precise specifications, and study those cells’ responses to all kinds of stimuli. In fact, it looks like it won’t be much longer ’til doctors can implant networks of artificially grown neurons directly into living adult brains.

But as the journal Nature Photonics reports, the big breakthrough this time comes from Samarendra Mohanty at The University of Texas at Arlington, who found that neuron growth can respond to physical cues – spinning particles in fluid, for instance – as well as to chemical ones.

Mohanty’s team discovered this by using a tiny laser to direct the spin of a microparticle positioned next to the axon of a growing neuron. The spinning particle generated a miniature counterclockwise vortex in the fluid – and wouldn’t ya know it, the axon started wrapping around the spinning particle as the neuron grew:

Circularly polarized light with angular momentum causes the trapped bead to spin. This creates a localized microfluidic flow … against the growth cone that turns in response to the shear.

In short, this is the first time a scientific team has used a mechanical device – a “micro-motor,” as they call it – to directly control and precisely adjust the growth of a single axon:

The direction of axonal growth can be precisely manipulated by changing the rotation direction and position of this optically driven micromotor.
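Here’s a rough numerical cartoon of why that works – my own simplification, not the team’s actual fluid mechanics: a spinning bead sets up a vortex, and at each growth step the axon tip blends a little of the local flow direction into its heading:

```python
import math

# Cartoon model: a bead at the origin spins counterclockwise,
# creating a vortex flow. The growing tip blends the local flow
# direction into its heading at each step (the coupling constant
# is invented for illustration).

SPIN = +1.0      # +1 = counterclockwise bead, -1 = clockwise
COUPLING = 0.4   # how strongly the shear turns the growth cone
STEP = 0.2       # growth per step

x, y = 3.0, 0.0        # axon tip starts beside the bead at the origin
hx, hy = 0.0, 1.0      # initial growth heading

for step in range(40):
    r2 = x * x + y * y
    fx, fy = SPIN * -y / r2, SPIN * x / r2   # tangential vortex flow
    hx, hy = hx + COUPLING * fx, hy + COUPLING * fy
    norm = math.hypot(hx, hy)
    hx, hy = hx / norm, hy / norm            # keep the heading unit-length
    x, y = x + STEP * hx, y + STEP * hy
    if step % 10 == 0:
        angle = math.degrees(math.atan2(y, x))
        print(f"step {step:2d}: tip at ({x:+.2f}, {y:+.2f}), {angle:+4.0f} deg around the bead")
```

Flip SPIN to -1 and the tip curls the other way – which is essentially the control knob the paper describes.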

So far, the micromotor only works 42 percent of the time – but the team is optimistic that future tests will lead to greater reliability and more precise control. In the near future, micromotors like this one could be used to turn the growth of an axon back and forth – or even to funnel growth through “gauntlets” of spinning particles.

Most conveniently of all, the particles could be injected, re-positioned, and removed as needed – providing a much simpler, more modifiable architecture than any other neuron-shaping technology in use today.

And for the slightly more distant future, Mohanty’s lab is hard at work on a system for providing long-range, long-term guidance to entire neural networks through completely non-invasive optical methods.

Until then, though, isn’t it amazing to stop and think about all the neurons that are growing and reshaping themselves – all the delicate intertwining lattices relaying millions of mysterious coded messages, right now, within the lightless interior of your own head?

Call me self-centered, but I think it’s just about the coolest thing on planet Earth.

Synaptic Changes

Synapses – the junctions where neurons communicate – are constantly growing and pruning themselves – and those two processes occur independently of one another, says a new study.

A neuron shows off its fancy green glowy-ness.

As a synapse sees more and more use, it tends to grow stronger, while synapses that fall out of use tend to grow weaker and eventually die off. Collectively, these processes are known as synaptic plasticity: the ability of synapses to change their connective properties. But as it turns out, the elimination of redundant synapses isn’t directly dependent on others being strengthened – instead, it seems to be triggered by its own independent chemical messaging system.
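A hedged way to code up that independence – my own toy model, nothing from the paper: strengthening listens to one signal, elimination to a completely different one, so shutting off the first leaves the second running:

```python
import random

random.seed(1)

# Toy population of synapses. Strengthening is driven by a "use"
# signal; elimination runs on its own independent pruning signal.
# Knocking out the use signal blocks strengthening but, as in the
# study, leaves pruning untouched.

def simulate(use_signal_on):
    synapses = [{"weight": 1.0, "alive": True} for _ in range(1000)]
    for s in synapses:
        # Strengthening listens only to the "use" signal...
        if use_signal_on and random.random() < 0.5:
            s["weight"] += 0.5
        # ...while elimination fires on its own independent signal.
        if random.random() < 0.3:
            s["alive"] = False
    survivors = [s for s in synapses if s["alive"]]
    mean_w = sum(s["weight"] for s in survivors) / len(survivors)
    print(f"use signal {'on ' if use_signal_on else 'off'}: "
          f"{len(survivors)} survive, mean weight {mean_w:.2f}")

simulate(use_signal_on=True)    # normal mice: stronger and pruned
simulate(use_signal_on=False)   # "knockouts": weaker, pruning unchanged
```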

Like many mammals, we humans are born with brains that aren’t yet particularly well-adapted for anything other than – well, adaptation. Through our brains’ extraordinary plasticity, we’re able to learn, in a few short years, everything from how to walk upright to how to argue – and as we grow older, our brains learn how to learn more quickly and think more efficiently by pruning away unneeded synapses.

But all that speed and flexibility come at a price: the balance between synaptic growth and die-back must be delicately maintained. Excessive connectivity can lead to diseases like epilepsy, while insufficient communication can result in disorders like autism.

This kind of summarizing oversimplifies the situation, of course – brain disorders rarely have just a single cause, or a clearly-defined set of symptoms. Even so, they provide grim reminders of the precise electrochemical balancing act that continues in the dark backstage of our skulls throughout each moment of every day.

And now, as the Journal of Neuroscience reports, a team led by The Jackson Laboratory’s Zhong-wei Zhang has found that synaptic growth and pruning aren’t two parts of a single process after all – they’re two processes, coordinated through two different chemical signaling systems.

The team discovered this by studying a certain type of glutamatergic synapses – i.e., synapses that use the common neurotransmitter chemical glutamate – in mouse neurons. The specific type of synaptic receptor the team studied is called the AMPAR, which is short for (get ready for this) “α-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid receptor.”

The team discovered that when baby mice were engineered to lack key AMPAR subunits, the affected synapses didn’t grow stronger, as they did in normal mice:

Deletion of GluA3 or GluA4 caused significant reductions of synaptic AMPAR currents in thalamic neurons at P16–P17, with a greater reduction observed in GluA3-deficient mice.

No huge shock there. But what was surprising for the team was their other discovery – that even in these receptor-deficient mice, pruning of redundant synapses still proceeded normally:

Deletions of GluA3 or GluA4 or both had no effect on the elimination of relay inputs: the majority of thalamic neurons in these knock-out mice—as in wild-type mice—receive a single relay input.

Looking at this data, it’s hard to escape the conclusion that the pruning of these synapses depends on a separate mechanism from their strengthening. Just what that other mechanism is remains to be seen.

In short, this is another example of how even the most mundane workings of our brains continue to stun neuroscientists with brand-new revelations every day.

Silicon Synapses

A new kind of computer chip mimics the way a neuron learns, a new study reports.

Behold! The mighty Synapse Chip!

The 400-transistor chip simulates the activity of a single synapse – a connection between two neurons. Because of the chip’s complexity, it’s able to mimic a synapse’s plasticity – its ability to subtly change structure and function in response to new stimuli.

For example, a synapse that repeatedly responds to an electric shock might, over time, become less sensitive to that shock. Thus, synaptic plasticity forms the basis of neural learning, well below the level of conscious processing.

The human brain contains approximately 100 billion neurons, and more than 100 trillion synapses. Ever since the anatomist Santiago Ramón y Cajal discovered the function of neurons back in the early 1900s, scientists have dreamed of building a device that replicated the behavior of even a single synapse. For decades, they had to content themselves with mathematical models and digital simulations.

But now, as the journal Proceedings of the National Academy of Sciences reports, a team led by MIT’s Chi-Sang Poon has constructed a working silicon model of a synapse in the physical world.

The chip uses transistors to mimic the activity of ion channels – small “ports” in the cell membranes of neurons that allow various amounts of ions (positively or negatively charged atoms) to flow in and out of the cell. These channels form the basis for synaptic communication – and for some of the most hotly researched topics in neuroscience.

While ordinary transistors act as binary “on/off” gates, neural synapses conduct signals along fairly smooth gradients, allowing the signals to increase in strength until they finally trigger the neuron to “fire” a signal on to its neighbor(s). It was this gradient property that Poon’s team sought to mimic with their new chip:

While most chips operate in a binary, on/off mode, current flows through the transistors on the new brain chip in analog, not digital, fashion. A gradient of electrical potential drives current to flow through the transistors just as ions flow through ion channels in a cell.
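To see why the analog part matters, here’s a minimal sketch – mine, not the chip’s actual circuit equations – contrasting a binary gate with a graded accumulator that only fires once enough “charge” has built up:

```python
# A binary gate either passes a signal or it doesn't. The analog
# version accumulates graded input, like charge flowing through an
# ion channel, and fires only when the potential crosses threshold.

def binary_gate(signal, threshold=1.0):
    # An ordinary transistor: on or off, nothing in between.
    return 1 if signal >= threshold else 0

def analog_neuron(inputs, threshold=1.0, leak=0.05):
    # Graded charge builds up (and slowly leaks away) until the
    # potential crosses threshold and the neuron fires.
    potential = 0.0
    for t, current in enumerate(inputs):
        potential += current - leak
        if potential >= threshold:
            print(f"  fired at step {t} (potential {potential:.2f})")
            potential = 0.0          # reset after the spike

weak_drip = [0.2] * 10   # each pulse is sub-threshold on its own
print("binary gate sees:", [binary_gate(s) for s in weak_drip])
print("analog neuron sees the same drip and...")
analog_neuron(weak_drip)
```

The binary gate never budges, while the analog neuron quietly integrates the weak drip until it crosses threshold and fires – the behavior the gradient of electrical potential gives the new chip.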

Since the researchers can also tweak the chip’s properties to mimic different kinds of ion channels, this provides one of the most realistic models yet for studying how individual synapses actually work.

The researchers have already used the chip to study long-term depression (LTD), the process by which certain patterns of activity gradually weaken a synapse’s responsiveness. They also hope they’ll soon be using chips like this one in conjunction with lab-grown biological neurons, to discover all kinds of exciting new things about how cells behave “in the wild.”

Who knows – by this time next year, we may be watching nature documentaries about Neuron Cyborgs – but my guess is that the SyFy channel will get there first.

Brief and tangentially relevant side-note: Connectome posts may be somewhat spotty over the next few weeks, as I’m currently launching a new project, the details of which need to be kept under wraps (or “on the DL,” as the kids say) for the time being. I’ll do my best to report on neuroscience breakthroughs as often as I can during this period, and things should be back to (relatively) normal soon. Thanks for stickin’ with me.

Wakefulness Cells

Certain groups of neurons determine whether light keeps us awake or not, says a new study.

Just a typical day for a hypocretin-deficient mouse. Okay, I'll wait for you to finish making that squinchy "Awww!" face, and then we'll move on with the article.

In the hypothalamus – a brain structure responsible for regulating hormone levels – specific kinds of neurons release a hormone called hypocretin (also known as hcrt or orexin). Hypocretin lets light-sensitive cells in other parts of the brain – such as the visual pathway – know that they should respond to incoming light by passing along signals for us to stay awake.

Scientists have understood for centuries that most animals and plants go through regular cycles of wakefulness and sleep – they call these patterns circadian rhythms or circadian cycles. More recently, researchers have begun unraveling the various chemical messaging systems our bodies use to time and control these cycles – proteins like PER and JARID1a, which help give us an intuitive sense of how long we’ve been awake or asleep.

But now, as the Journal of Neuroscience reports, a team led by UCLA’s Jerome Siegel has isolated a neurochemical messaging system that dictates whether or not we can stay awake during the day at all. The team bred a special strain of mice whose brains were unable to produce hypocretin, and found that these mice acted like students in first-period algebra – even under bright lights, they just kept dozing off. However, they did jump awake when they received a mild electric shock:

This is the first demonstration of such specificity of arousal system function and has implications for understanding the motivational and circadian consequences of arousal system dysfunction.

What’s even more interesting, though, is that there’s a second half to this story – the dozy mice were perfectly perky in the dark:

We found that Hcrt knock-out mice were unable to work for food or water reward during the light phase. However, they were unimpaired relative to wild-type (WT) mice when working for reward during the dark phase or when working to avoid shock in the light or dark phase.

In other words, the mice without hypocretin stayed awake and worked for food just fine when the lights were out. So they probably have promising futures as bartenders or bouncers.

The takeaway here is that hypocretin isn’t so much responsible for enabling knee-jerk reactions as it is for helping mice (and us) stay alert and motivated to complete reward-based tasks when the lights are on. Without this hormone, we might act normally at night, but we just wouldn’t feel like staying awake when the sun was out.
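Boiled down to a caricature – and this is my own toy decision rule, not anything from the paper – the finding looks something like this:

```python
# Toy caricature of the findings: daytime alertness requires
# hypocretin, a shock rouses the mouse regardless, and in the
# dark phase the knockouts behave like everyone else.

def stays_alert(light_on, has_hypocretin, shock=False):
    if shock:
        return True              # knee-jerk arousal still works fine
    if light_on:
        return has_hypocretin    # daytime motivation requires hcrt
    return True                  # in the dark phase, knockouts do fine

for light_on in (True, False):
    for has_hcrt in (True, False):
        status = "alert" if stays_alert(light_on, has_hcrt) else "dozes off"
        print(f"light={light_on!s:5}  hcrt={has_hcrt!s:5}  -> {status}")
```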

And that’s exactly what Siegel’s team had found in several of their earlier studies, which linked human hypocretin deficiency with narcolepsy – a disease that causes excessive sleepiness and frequent daytime “sleep attacks.” These new results suggest that narcoleptic patients might have more success getting work done during the night, when their symptoms might be less severe.

Siegel also thinks clinically administered hypocretin might help block many effects of depression, and allow depressed patients to feel more motivated to get up and about during the day. If so, this could be a promising new form of treatment for that disease as well.

Finally, and perhaps most intriguingly of all, it’s likely that similar hormonal response “gateways” play crucial roles in other neurochemical arousal systems – like those involved in fear, anger, and sexual excitement. If so, discoveries along those lines could provide us with some staggering new insights into the ways our brains regulate their own behavior.

So, I know what you’re probably wondering: am I really advocating the use of electric shocks to keep bored math students awake? Of course not – I think releasing wild badgers into the classroom would be much more effective.

Rhythms of Memory

Our neurons learn best when they’re working on the same wavelength – literally, says a new study.

What are these neurons doing, you ask? Well, um...it's grown-up neuron stuff, OK?

For every synapse – every signaling junction between neurons – there’s a certain firing frequency that increases signal strength the most. In short, neurons work like tiny analog antennas – tuning into incoming signals and passing along the clearest, strongest ones as electrochemical messages.

This represents a huge breakthrough in our understanding of how our brains work. Neuroscientists have known for decades that we (and other animals) learn by strengthening connections between certain neurons – i.e., the more a pair of neurons communicate with each other, the more receptive and sensitive they become to each other’s electrochemical signals. But one central mystery remained: what makes some neurons more likely to “hear” signals from certain other neurons in the first place?

Now we know that every synapse has a frequency “sweet spot” – a certain frequency to which it’s most responsive. The farther away from a neuron’s nucleus a certain synapse is, the higher the frequency of its sweet spot. Thus, different parts of a neuron are tuned to different signal wavelengths.
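Here’s one hedged way to picture those sweet spots in code – a toy tuning curve of my own invention, not the researchers’ model – where each synapse’s preferred frequency rises with its distance from the soma, and the same input can strengthen one synapse while weakening another:

```python
import math

def preferred_freq(distance_um):
    # Invented mapping: farther from the soma -> higher sweet spot.
    return 10.0 + 0.5 * distance_um

def weight_change(freq_hz, distance_um, width=15.0):
    sweet = preferred_freq(distance_um)
    # Gaussian bump around the sweet spot, shifted down so that
    # badly mistuned input weakens the synapse instead.
    return math.exp(-((freq_hz - sweet) / width) ** 2) - 0.3

for distance_um in (20, 100):        # a proximal and a distal synapse
    for freq_hz in (20, 60):         # the same two input frequencies
        dw = weight_change(freq_hz, distance_um)
        verdict = "strengthens" if dw > 0 else "weakens"
        print(f"{freq_hz:3d} Hz at {distance_um:3d} um: dw={dw:+.2f}  ({verdict})")
```

Run it and the same 60 Hz input strengthens the distal synapse while weakening the proximal one – exactly the location-dependent flip the study describes.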

As the journal Frontiers in Computational Neuroscience reports, a team led by UCLA’s Arvind Kumar and Mayank Mehta set out to expand on previous studies, which had found that stimulating neurons with high-frequency electrical pulses (or “spikes”) – around 100 per second – tended to strengthen synaptic connections, while stimulating them with low-frequency pulses – around one per second – tended to weaken those connections. (Click here for a straightforward explanation of how electrical pulses help transmit signals between neurons.)

In the real world, however, neurons typically fire much quicker bursts – only about ten spikes at a time – at a rate of around 50 spikes per second. “Spike frequency refers to how fast the spikes come,” Mehta explains. “Ten spikes could be delivered at a frequency of 100 spikes a second or at a frequency of one spike per second.”

Until now, neuroscientists lacked the technology to model the results of such brief firing bursts with much accuracy – but Mehta and Kumar designed a sophisticated computer model to test spike frequencies and burst lengths closer to real-world levels:

We computed the influence of these variables on the plasticity induced at a single NMDAR containing synapse using a reduced model that was analytically tractable, and these findings were confirmed using [a] detailed, multi-compartment model.

In other words, they created a simulation of an NMDAR, a certain type of receptor for the neurotransmitter glutamate. These receptors are crucial for strengthening synapses – and thus, for memory formation and learning.

Using their model, the researchers made four major new discoveries:

1) The more distant from a neuron’s soma (main body) a synapse is, the higher the spiking frequency to which it’s “tuned.”  In fact, the same frequency can strengthen or weaken a synaptic connection, depending on where on the neuron that particular synapse is located.

2) Regular, rhythmic pulses cause greater changes in synaptic strength (i.e., greater synaptic plasticity) than irregular bursts do.

3) Short bursts of spikes significantly raise the timing dependence of synaptic plasticity – in other words, the shorter the burst, the more important it is to get the frequency spot-on.

4) Once a synapse learns to communicate with another synapse, its optimal frequency changes – say, from 30 spikes per second to 24 spikes per second.

The internet is now abuzz with chatter about the implications of these results. One intriguing idea is that gradual “detuning” among synapses could underlie the process of forgetting – and that future drugs or electrical therapies could help “retune” some of these rhythms.

The research also raises the question of how incoming rhythms from our senses – especially light and sound – might directly impact these firing frequencies. We’ve known for years that some types of epileptic seizures can be triggered by light flashing at certain frequencies, and that music playing at certain rhythms can pump us up or calm us down. Is it so far-fetched to suggest that rhythms like these might also help shape our thoughts and memories?

On that note, I’m off to stare at some strobe lights and listen to four-on-the-floor dance music…for Science!


Social Cells

Our sociability may depend on certain brain cells that are born during our adolescent years, a new study says.

"Ha-ha! Bobby has abnormal hippocampal neurogenesis!"

Healthy adult mice are generally pretty happy to interact with others of their species. But mice who don’t get much social interaction during their adolescence tend to treat other mice as alien objects, rather than as fellow rodents – in fact, they tend to actively avoid social encounters. (As I’ve pointed out a lot on this blog, mouse brains provide reasonably good models of many human brain functions, which is why neuroscientists study them.)

Though conventional wisdom used to say that adult brains don’t generate new neurons, scientists have actually known for years that neurogenesis – the birth of new neurons – is a process that begins in the womb and continues well into adulthood, especially in areas like the hippocampus, which is crucial for memory formation.

This study, however, is one of the first to link neurogenesis with social development. As the journal Neuroscience reports, a team led by Yale psychiatrist Arie Kaffman blocked the process of neurogenesis in adolescent mice. The researchers found that these mice not only had no interest in interacting socially with other mice – they actually tried to escape when normal mice tried to interact with them:

“These mice acted like they did not recognize other mice as mice,” Kaffman said.

Interestingly, mice whose neurogenesis was blocked during adulthood still showed normal social behavior, which suggests that neurogenesis during adolescence is particularly crucial for the development of healthy interpersonal instincts.

The researchers suspect that similar adolescent neurogenesis problems – either biological or environmentally triggered – may be responsible for human mental diseases that impair social functioning:

Schizophrenics have a deficit in generating new neurons in the hippocampus, one of the brain areas where new neurons are created. Given that symptoms of schizophrenia first emerge in adolescence, it is possible that deficits in generating new neurons during adolescence or even in childhood holds new insights into the development of some of the social and cognitive deficits seen in this illness.

In other words, neurogenesis in a teenager’s hippocampus may be a major determining factor in whether he or she is able to feel connections with other people, and develop normal intuitions about social behavior.

So, if you’ve been pestering your kids to “get out and make some friends,” now you have a solid neuroscientific reason to keep it up!
