Living in Flatland

Our brains are terrible at understanding height, a new study reveals – and the research also explains the evolutionary trade-offs related to our flattened sense of orientation.

Non-Flat Land.

By studying two types of brain cells that fire as we move from place to place, the researchers found that our intuitive sense of relative location hardly changes as we move vertically – and our intuitive sense of distance doesn’t seem to change at all:

It looks like the brain’s knowledge of height in space is not as detailed as its information about horizontal distance, which is very specific. It’s perhaps akin to knowing that you are “very high” versus “a little bit high” rather than knowing exact height.

And the higher we get, the harder it is to get a clear sense of just how high we are. (I’m gonna let that statement speak for itself.)

As reported in the journal Nature Neuroscience, a team led by Kathryn Jeffery at University College London studied the firing patterns of two types of neurons in the hippocampal formation as rats explored two kinds of environments: a climbing wall and a helix (as I’ve mentioned before, some types of rat brain activity are reliable predictors of how human brains behave).

This particular study focused on grid cells, which help brains get a sense of relative distance, and place cells, which fire when the animal arrives at a specific (you guessed it) place. The researchers found that place cells were largely insensitive to the rats’ vertical position – and that grid cells tracked horizontal movement only:

It seems that grid cell odometry (and by implication path integration) is impaired or absent in the vertical domain, at least when the rat itself remains horizontal. These findings suggest that the mammalian encoding of three-dimensional space is anisotropic.

In other words, rats (and probably humans) measure distance mainly on a plane that’s roughly level with their eyes. Like a broken odometer, our place and grid cells simply don’t “clock” vertical movement. As far as those cells are concerned, we might as well be living in Flatland.
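To make that concrete, here’s a minimal toy model in Python – my own sketch, not the researchers’ code – of a place cell whose firing rate forms a bump over horizontal position but stays flat along the vertical axis (the field width and peak rate are made-up numbers):

```python
import numpy as np

def place_cell_rate(pos, field_center, field_width=0.2, peak_rate=20.0):
    """Toy firing rate (Hz) as a function of 3-D position.

    Only the horizontal (x, y) distance to the field center matters;
    the z coordinate is simply ignored -- an anisotropic code.
    """
    horizontal_dist = np.linalg.norm(pos[:2] - field_center[:2])
    return peak_rate * np.exp(-(horizontal_dist / field_width) ** 2)

center = np.array([0.5, 0.5, 0.0])

# Moving horizontally away from the field center changes the rate...
print(place_cell_rate(np.array([0.5, 0.5, 0.0]), center))  # ~20 Hz
print(place_cell_rate(np.array([0.9, 0.5, 0.0]), center))  # much lower

# ...but climbing straight up doesn't: the cell can't "clock" height.
print(place_cell_rate(np.array([0.5, 0.5, 2.0]), center))  # still ~20 Hz
```

Climbing straight up leaves the simulated cell’s output unchanged – which is exactly the kind of anisotropy the recordings revealed.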

The upside of this is that when it comes to horizontal movement – such as navigating a maze – mice and men can keep a detailed memory for specific spots, and orient themselves very precisely relative to other locations. Unless you’re like me, that is, and need GPS to find your way from the front door to the driveway.

Anyway, what does all this lack of height-sensing mean for those with vertigo – or just your basic acrophobia? Well, as Jeffery points out, we clearly do have some instinctive sense of “very high” as opposed to “a little high” – but exactly how our brains encode that difference isn’t quite as well understood.

It’s likely that our sense of height has more to do with depth perception, and with an instinctive fear of visible drop-offs, than with any sense of personal distance or location – in other words, we aren’t able to use our bodies to sense how much higher we’re climbing, but we can look down/out and see that the ground ahead is farther away than we’d like it to be. Thus, one way to calm vertigo is simply to close your eyes.

Without that visual feedback, you’ll be back to Flatland in no time.

Where’s the Remote?!

A new study has revealed the neurophysiological reasons why we can remember when we last saw a lost object, but still forget where we left it. It turns out these are pretty similar to the reasons why we can remember a person’s face, but still forget his or her name.

Maybe if you'd shown it more love, it wouldn't have run away.

The research focused on the interactions between three brain regions known to be involved in different kinds of memory: the temporal lobe’s perirhinal cortex (PRC), which is involved in recognizing familiar objects we detect with our senses; the hippocampus, which helps us recognize familiar places; and the medial prefrontal cortex (mPFC), which is involved in working memory and decision-making.

As reported in the Journal of Neuroscience, Dr. Clea Warburton and Dr. Gareth Barker at Bristol University’s School of Physiology and Pharmacology tested rats with different types of brain lesions on a variety of performance tasks (the functions of many areas in rat brains have proven to be pretty reliable predictors of how corresponding areas in human brains behave). The scientists found that each type of recognition depends on a different interaction between the brain areas they studied:

The hippocampus was crucial for object location, object-in-place, and recency recognition memory, but not for the novel object preference task … [and] object-in-place and recency recognition memory performance depended on a functional interaction between the hippocampus and either the perirhinal or medial prefrontal cortices.

In other words, although the hippocampus does play a crucial part in object location memory (e.g., where the remote is) and recency recognition memory (e.g., where we last saw it), this brain structure doesn’t work alone – it needs to communicate with the PRC and the mPFC to accomplish either of these tasks:

Neither ‘object-in-place’ or ‘temporal order recognition’ memories could be formed if communication between the hippocampus and either the perirhinal cortex, or the medial prefrontal cortex, was broken. In other words, disconnecting the regions prevented the ability to remember both where objects had been, and in which order.

Though rodents…well, most rodents…don’t have to worry about remembering each other’s names, this study tells us a lot about why our memories sometimes malfunction in such odd ways – like forgetting where we left the keys or the remote, even though we can easily picture them in all sorts of places we’ve seen them before. And then there’s presque vu – the sense that a word is “right on the tip of our tongues.” Though we might devote a huge amount of mental resources to problems like these, we can still come up dry if we’re trying to remember in the wrong way – in fact, actively trying to remember in one particular way may even be counterproductive.

So the next time you’re trying to remember where you left the remote – or awkwardly fumbling for someone’s name – just slow down, give your brain a breather, and let the answer come on its own. You might be surprised how easily you find it.

Hitting the Brakes

A new driving simulator is showing how mind-controlled brakes could make the road safer for all of us – and a new study using this system shows that our instincts know when to brake before our conscious minds do.

In all fairness, that sign was hard to see.

By attaching electrodes to the skin of volunteers in a driving simulator, researchers found that people could “hit the brakes” mentally about 130 milliseconds quicker, on average, than if they had to brake with their feet. This might not seem like much, but at 60 mph (88 feet per second), it works out to more than 11 feet of stopping distance saved – definitely enough to spare some lives.
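Here’s a quick back-of-the-envelope check of that figure in Python – my own arithmetic, not numbers from the paper:

```python
# How far does a car travel during the 130 ms head start?
speed_mph = 60
reaction_advantage_s = 0.130

speed_fps = speed_mph * 5280 / 3600        # 60 mph = 88 feet per second
distance_saved_ft = speed_fps * reaction_advantage_s

print(f"{distance_saved_ft:.1f} feet saved")  # 11.4 feet saved
```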

The experiment was pretty ingenious – as reported in the Journal of Neural Engineering, a team of scientists led by Stefan Haufe at the Technische Universität Berlin measured their volunteers’ responses to a driving game through two methods: electroencephalography (EEG), which measures electrical activity across the scalp, and electromyography (EMG), which measures the electrical activity behind muscle movement.

Using the electrical data they gathered from a few fun rounds of “Oh God Hit the Brakes,” the team fine-tuned their system, making its responses even quicker for the next round of simulations:

Our EEG analysis yielded a characteristic event-related potential signature that comprised components related to the sensory registration of a critical traffic situation, mental evaluation of the sensory percept and motor preparation. While all these components should occur often during normal driving, we conjecture that it is their characteristic spatio-temporal superposition in emergency braking situations that leads to the considerable prediction performance we observed.

In other words, they figured out exactly what electrical patterns were generated just before people decided to brake, and used those as an indicator for when people were about to try to slam on the brake pedal.
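To give a rough sense of how a pipeline like that can work, here’s a hedged sketch in Python – synthetic data and assumed details, not the team’s actual code – that trains a simple linear classifier to spot a pre-braking pattern in short EEG epochs:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-in data: 400 epochs, 8 channels, 50 samples each
n_epochs, n_channels, n_samples = 400, 8, 50
X = rng.normal(size=(n_epochs, n_channels, n_samples))
y = rng.integers(0, 2, size=n_epochs)  # 1 = emergency-braking epoch

# Fake "event-related potential": a small voltage deflection that
# appears only in the braking epochs, mid-window
X[y == 1, :, 25:35] += 0.5

# Flatten each epoch into one spatio-temporal feature vector and
# classify with linear discriminant analysis, a common choice for
# detecting event-related potentials
clf = LinearDiscriminantAnalysis()
scores = cross_val_score(clf, X.reshape(n_epochs, -1), y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")
```

The real system faced the much harder job of doing this on streaming data in real time, but the core idea – learn the spatio-temporal signature, then watch for it – is the same.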

But even if systems like this end up being installed in mass-market cars, Haufe says, we probably won’t be braking with our minds alone:

If such a technology would ever enter a commercial product, it would certainly be used to complement other assistive technology to avoid the consequences of false alarms that could be both annoying and dangerous.

Yeah, I can see how false alarms in rush-hour traffic would make mental brake-slamming a less-than-popular technology. Still, it’s pretty cool to see how this and other developments in “thought control” devices are on the verge of making our lives easier. Let’s be honest here – aren’t there at least some days when you wish you could run all your errands without lifting a finger?

Skin Into Brain

Scientists have discovered a way to convert human skin cells into working brain cells. Cue the Weird Science theme song!

"The neurons! They live! They liiiiiive!"

Using strands of microRNA molecules and a few carefully chosen genes, a team led by Dr. Sheng Ding at the Scripps Research Institute reprogrammed the genetic code of skin cells taken from a 55-year-old patient, transforming them into full-fledged neurons that actually synapse with each other.

As the lab’s report in the journal Cell Stem Cell¹ explains, this new reprogramming method allows scientists to directly transform one cell type from an adult human into a normal, functional cell of a completely different type:

These human induced neurons (hiNs) exhibit typical neuronal morphology and marker gene expression, fire action potentials, and produce functional synapses between each other.

But this isn’t only a major breakthrough because the induced neurons work so well – it also represents a major leap forward in the field of cell transformation.

See, over the past few years, scientists have had some success at turning various kinds of human somatic (body) cells into artificial stem cells – known as induced pluripotent stem cells (iPSCs) – which can then be grown into various other types of cells.

This process is far from perfect – the successful conversion rate has often been less than one percent, and it’s not yet clear whether iPSCs behave and grow exactly like their authentic stem cell equivalents. Still, iPSCs have proven to be useful tools for modeling diseases and testing drugs – and they may soon offer a less controversial alternative to research on human embryonic stem cells.

More recently, though, an even cooler technique has emerged: reprogramming one type of somatic cell directly into another type, without any intermediate iPSC stage. This is the technique Ding’s team used to make their induced neurons.

And that’s only the beginning, Ding says. For one thing, he thinks this method can be used to grow neurons for patients with Alzheimer’s and other neurodegenerative diseases – and they’ll be safe to transplant, because they’ll be grown from the patient’s own body:

Rather than using models made in yeast, flies or mice for disease research, all cell-reprogramming technology allows human brain, heart and other cells to be created from the skin cells of patients with a specific disease.

It’s also likely that as the technology improves, we’ll use it to grow transplantable tissue for hearts, lungs, and other organs – not to mention using it to study the development of various diseases in a specific patient’s body, in a controlled, safe environment:

“This will help us avoid any genome modifications,” said Dr. Ding. “These cells are not ready yet for transplantation, but this work removes some of the major technical hurdles to using reprogrammed cells to create transplant-ready cells for a host of diseases.”

As this research shows, we may soon be able to grow brand-new, guaranteed-compatible body parts from just a small sample of cells from the patient’s body. It looks like solutions to some age-old problems could be right on the horizon.

___________

1. Clearly, Cell Press hired a whole team of marketing geniuses to come up with that name.

Mirror Neurons On Trial

Decades of research have suggested that certain “mirror neuron” groups are activated not only when we perform an action, but also when we see it being performed, or even when we hear it being performed in another room. But other studies have raised strong criticisms about what this data actually means – and now the dispute is heating up more than ever.

Two neuroscientists debate the relative merits of each other's stances on mirror neurons.

In this month’s issue of the journal Perspectives on Psychological Science, a group of brain research veterans debate the evidence for and against the mirror neuron hypothesis. Their consensus is that mirror neurons do seem to play a role in our ability to understand and imitate the actions of others, but that the contributions of these cells are a lot more subtle and complex than previously thought.

The article focuses on three main categories of abilities in which some neuroscientists think mirror neurons take part: understanding speech, understanding others’ actions, and understanding others’ minds. The researchers on a few different sides had plenty to say about each of these categories, so let’s break ‘em down.

1. Actions
The role of mirror neurons in understanding visually perceived actions was the first to be researched. In the 1990s, a team led by Giacomo Rizzolatti at the University of Parma was studying the neurons of macaque monkeys when they realized that certain groups of these monkey neurons (located in the ventral premotor cortex) responded in very similar ways whether the monkey picked up a piece of food or watched someone else pick it up.

A lot of subsequent research confirmed that these firing patterns were definitely happening – but what they meant was far from clear. Some scientists, such as UC Irvine’s Gregory Hickok, argued that supposed “mirror” activity might just reflect the normal behavior of the monkeys’ overall motor system, rather than being direct evidence for imitation or theory of mind.

Also, new evidence contradicts some major predictions about the role mirror neurons were thought to play in understanding actions:

The repeated finding that increased experience executing actions is associated with decreased, not increased, activation in putative MN [mirror neuron] regions, lies completely opposite that predicted … [One prominent MN researcher] predicts that executing these more familiar actions … should produce greater activity in mirror areas. However, increased experience executing actions is not associated with increased activity in putative MN regions.

So, at the very least, it seems that we need to take another look at how (or if) mirror neurons come into play when we imitate an action that we’ve seen or heard.

2. Speech
When it comes to our ability to understand spoken language, the evidence seems to strongly suggest that mirror neurons aren’t nearly as important as some of their supporters have claimed. Although our motor areas can be activated when we listen to someone speaking, no one’s been able to demonstrate a direct correlation between mirror neurons and our ability to understand speech. Not to mention that individuals who have no working motor system whatsoever can still understand spoken words:

High levels of speech perception ability have been demonstrated in (a) patients with severe motor speech deficits and damage to the mirror system; (b) individuals who failed to develop a functional motor speech system due to neurological disease; (c) infants who have not yet developed motor speech control; and (d) even chinchilla and quail, which don’t have the biological capacity for speech.

Mirror neurons might help us figure out what someone’s saying when it’s hard to hear, or under conditions where we have to lip-read or otherwise guess some of the speech’s content. But the part they play in actual speech comprehension seems to be minor, at best.

3. Minds
Last but not least, let’s talk about the idea that mirror neurons might help us understand the thoughts, feelings, and intentions of others. Some researchers have claimed that these neurons are activated when we (or monkeys) try to understand the motivations and thoughts behind another’s action – but just about all those papers cite the same four studies in support, and three of those are Rizzolatti’s studies from the ’90s. Even in those studies, Rizzolatti and his teams stressed that the mirror neuron system they observed was probably responsible only for linking observation and imitation – not for any sort of abstract comprehension.

Loads of debate has also centered on the question of whether autistic individuals might have a malfunctioning mirror neuron system – an idea known as the “broken mirror hypothesis.” Two of this view’s biggest supporters are UCLA’s Marco Iacoboni and Mirella Dapretto, who have argued for several years that there’s a link between autism and decreased activation in sites “with mirror properties.” However, other studies have failed to replicate these results – and even more importantly, a growing body of evidence suggests that the hypothesis’s whole premise may be flawed:

Much larger and more firmly established bodies of data contradict predictions made by MN theory. For example, it has been repeatedly demonstrated that autistic persons of all ages (from preverbal children to mature adults) have no difficulty understanding the intention of other people’s actions.

So, although mirror neurons may have something to do with our ability to imitate the actions of others, it doesn’t seem particularly likely that they’re an essential component of our ability to understand others’ minds.

As you can tell, a lot of these researchers are still far from waving a white flag – and there are plenty more interesting questions about mirror neurons begging to be researched – for instance, what role their firing patterns might play in learning, and whether the human brain even has a direct equivalent of the macaque mirror neuron system at all. The more research we perform on mirror neurons, the more conundrums they seem to reflect back at us – and it seems they’ll continue to provide us with intriguing puzzles.

Generosity Psychology

New research explains why it makes evolutionary (and mathematical) sense for us to be kind to strangers.

"It makes evolutionary sense for me to never let go of you...ever!"

The study, published in the Proceedings of the National Academy of Sciences, shows that people are, on average, more generous to strangers than most mathematical models predict – and that there’s a logical reason for cooperation to evolve this way: it often doesn’t cost much to be generous, but a single act of stinginess could cost you a long-term friend. In other words, petty greed just isn’t worth the risk.

This conclusion might seem face-slappingly obvious, but what’s intriguing here is the fact that it has a solid mathematical basis. A team led by psychologists Andrew Delton and Max Krasnow of the University of California, Santa Barbara built computer simulations of populations evolving under natural selection.

The “agents” (i.e., simulated individuals) used a Bayesian reasoning process to predict whether they would interact with the same partner in the future, and factored this information into their decisions about whether or not to be generous in a Prisoner’s Dilemma-type game. As it turned out, though, cooperation was a more evolutionarily stable strategy whether or not an agent reasoned that it would encounter the same partner again:

Even though their beliefs were as accurate as possible, our simulated people evolved to the point where they essentially ignored their beliefs and cooperated with others regardless. This happens even when almost 90 percent of the interactions in their social world are actually one-time rather than indefinitely continued.

This is in stark contrast to loads of previous mathematical models, which predicted that the best strategy is to be generous with one’s regular reciprocal partners, but selfish in one-time-only interactions. Instead, this research shows that the cost/benefit ratio for both these kinds of generosity is about the same:

The conditions that promote the evolution of reciprocity — numerous repeat interactions and high-benefit exchanges — tend to promote one-shot generosity as well. Consequently, one-shot generosity should commonly coevolve with reciprocity.

It’s also interesting to note that this model predicts the same sort of generosity regardless of the size of the group – what’s important isn’t how likely you are to meet the same person again, but simply that there is a chance, however slight, that they might help you in the future.
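Here’s a stripped-down sketch of that logic in Python – my own simplification, with invented payoff numbers, not the authors’ Bayesian model. Agents see only a noisy cue about whether a partner is one-shot or long-term, and snubbing a real long-term partner costs far more than wasting a little generosity on a stranger:

```python
import random

BENEFIT, COST = 3.0, 1.0    # helping is cheap relative to what it earns
P_REPEAT = 0.1              # ~90% of encounters really are one-shot
CUE_ACCURACY = 0.8          # the cue about the partner is imperfect
FUTURE_ROUNDS = 20          # value of keeping a long-term partner

def avg_payoff(always_generous, n=100_000):
    total = 0.0
    for _ in range(n):
        long_term = random.random() < P_REPEAT
        cue_correct = random.random() < CUE_ACCURACY
        cue_says_long_term = long_term if cue_correct else not long_term
        if always_generous or cue_says_long_term:
            if long_term:
                # The relationship pays off over many future exchanges
                total += (BENEFIT - COST) * FUTURE_ROUNDS
            else:
                total -= COST  # wasted generosity on a stranger: cheap
        # Defecting earns nothing -- and if the partner was actually
        # long-term, all those future exchanges are lost
    return total / n

random.seed(1)
print("always generous:", round(avg_payoff(True), 2))   # ~3.1
print("follows the cue:", round(avg_payoff(False), 2))  # ~3.0
```

Even with these made-up numbers, indiscriminate generosity edges out the choosier strategy, because a single mistaken snub forfeits a whole relationship.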

This research caught my eye because of something that happened to me the other night: my friend and I were eating at a restaurant, when we noticed the people at the next table over loudly lecturing the waitress – scolding her, even – over the allegedly poor quality of the food. They refused to pay, and finally stormed out of the place. When my friend and I asked the waitress what had happened, she rehashed the customers’ complaints for us, then mentioned that almost every other table in the restaurant had called her over to offer stern judgments of the rude customers, and supportive words for her (both of which we also did).

As this research demonstrates, it was more than just empathy or an ancestral tribe mentality that influenced our actions that night – though those factors did have their roles, and might be reflections of a deeper mathematical truth (as so many patterns in nature are). But odds are, neither we nor the rude customers will ever see that waitress again – and being kind to her only gained us a few moments of positive feelings. Nevertheless, being nice to that harmless stranger “felt right” to us – and as it happens, there’s an evolutionary reason for that.

Mind Control

A comfy new “brain cap” will soon allow users to remotely control robots with their thoughts.

The new UMD brain cap is as stylish as it is functional.

By “comfy” I mean “noninvasive” – instead of sticky electrode patches or needles, the cap uses sensors embedded in its fabric to detect electrical signals along the scalp. Just slip it on, and you can start surfing the internet – or (probably eventually) remote-control a giant battle robot – using only the power of your mind.

A study published in the Journal of Neurophysiology shows off the results of the brain cap’s latest human tests, conducted by the University of Maryland’s José ‘Pepe’ L. Contreras-Vidal and his team.

The team’s first study focused on using these electroencephalography (EEG) caps to translate brain activity into a sort of mental mouse, which allowed users to control a computer cursor with their thoughts after a short training period. The next round of experiments studied the neural correlates of hand movements, and allowed the subjects to turn and flex a 3-D rendering of a hand just by thinking about those motions.

But this new study aims to take this technology to a whole new level:

Angular kinematics of the left and right hip, knee and ankle joints and EEG were recorded, and neural decoders were designed and optimized using cross-validation procedures. Our results … suggest that EEG signals can be used to study in real-time the cortical dynamics of walking and to develop brain-machine interfaces aimed at restoring human gait function.

In other words, subjects may soon be able to control a pair of mechanical legs just by thinking about walking. This could be a helpful shortcut for the walking-robot industry, because it would allow the patient’s natural sense of balance to make the thousands of tiny adjustments needed to stay upright on uneven terrain.
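For a feel of what “decoding” gait from EEG means in practice, here’s a hypothetical sketch in Python – synthetic data and assumed details, not the team’s actual pipeline – that regresses a joint angle on a sliding window of recent EEG samples and scores the decoder with cross-validation:

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-in data: 32-channel EEG sampled over a walking session
n_samples, n_channels, lag = 5000, 32, 10
eeg = rng.normal(size=(n_samples, n_channels))

# Each decoder input is the last `lag` EEG samples, flattened
X = np.stack([eeg[t - lag:t].ravel() for t in range(lag, n_samples)])

# Fake "hip angle" that depends linearly on that recent EEG history
true_weights = rng.normal(size=lag * n_channels)
hip_angle = X @ true_weights + rng.normal(scale=5.0, size=len(X))

# A regularized linear decoder, scored by cross-validation
decoder = Ridge(alpha=1.0)
r2 = cross_val_score(decoder, X, hip_angle, cv=5, scoring="r2")
print(f"cross-validated R^2: {r2.mean():.2f}")
```

A real gait decoder would have to cope with movement artifacts and drifting signals, but the linear-regression-over-lagged-samples idea is a common starting point in this literature.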

This technology has been grabbing a lot of attention, and millions of dollars worth of grants have been pouring in from partners like the National Science Foundation (NSF), the National Institutes of Health (NIH), and respected medical schools and research centers across the country.

The benefits for people with paralysis or limb amputations are obvious, but an even more intriguing set of results focuses on patients who’ve suffered strokes:

“By decoding the motion of a normal gait,” Contreras-Vidal says, “we can then try and teach stroke victims to think in certain ways and match their own EEG signals with the normal signals.”

In other words, the team hopes to take advantage of the brain’s natural synaptic plasticity to retrain patients’ thought patterns to produce the right movements. This could allow them to regain use of their limbs, and perhaps even walk again, without needing surgery or a permanent prosthesis of any kind.

If all goes as planned, the future may look brighter than ever for patients who’ve lost the use of an area of their body. Though many of today’s therapies still depend on invasive techniques like surgery or implants, the next wave of technologies could allow intuitive systems like the UMD brain cap to reshape the brains – and even the bodies – of patients who have the will to learn a new skill.
