Posts Tagged ‘senses’

Forget Me Not

Having trouble remembering where you left your keys? You can improve with a little practice, says a new study.

"I've forgotten more than you'll ever...wait, what was I saying?"

It’s an idea that had never occurred to me before, but one that seems weirdly obvious once you think about it: people who train their brains to recall the locations of objects for a few minutes each day show greatly improved ability to remember where they’ve left things.

No matter what age you are, you’ve probably had your share of “Alzheimer’s moments,” when you’ve walked into a room only to forget why you’re there, or set something down and immediately forgotten where you put it. Attention is a limited resource, and when you’re multitasking, there’s not always enough of it to go around.

For some people, though, these little moments of forgetfulness add up to a frustrating inability to complete even simple tasks from start to finish. This condition is known as mild cognitive impairment (MCI) – often an early stage on the road to Alzheimer’s disease – and its symptoms can range from amnesia to problems with counting and logical reasoning.

That’s because all these tasks depend on memory – even if it’s just the working memory that holds our sense of the present moment together – and most of our memories are dependent on a brain structure called the hippocampus, which is one of the major areas attacked by Alzheimer’s.

What exactly the hippocampus does is still a hotly debated question, but it seems to help sync up neural activity when new memories are “written down” in the brain, as well as when they’re recalled (a process that rewrites the memory anew each time). So it makes sense that the more we associate a particular memory with other memories – and with strong emotions - the more easily even a damaged hippocampus will be able to help retrieve it.

But now, a team led by Benjamin Hampstead at the Emory University School of Medicine has made a significant breakthrough in rehabilitating people with impaired memories, the journal Hippocampus reports: the researchers have demonstrated that patients suffering from MCI can learn to remember better with practice.

The team took a group of volunteers with MCI and taught them a three-step memory-training strategy: 1) the subjects focused their attention on a visual feature of the room that was near the object they wanted to remember, 2) they memorized a short explanation for why the object was there, and 3) they imagined a mental picture that contained all that information.

Not only did the patients’ memory measurably improve after a few training sessions; fMRI scans also showed that the training physically changed their brains:

Before training, MCI patients showed reduced hippocampal activity during both encoding and retrieval, relative to HEC. Following training, the MCI MS group demonstrated increased activity during both encoding and retrieval. There were significant differences between the MCI MS and MCI XP groups during retrieval, especially within the right hippocampus.

In other words, the hippocampus in these patients became much more active during memory storage and retrieval than it had been before the training.
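If you’re curious what “more active” means in practice, here’s a toy version of that kind of before-and-after comparison in Python. The numbers are invented, and the real study’s analysis pipeline was far more involved; this just shows the shape of the comparison:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

# Hypothetical mean hippocampal activation (arbitrary units) for 20
# MCI patients, measured during a memory task before and after the
# three-step mnemonic training.
before = rng.normal(loc=1.0, scale=0.4, size=20)
after = before + rng.normal(loc=0.5, scale=0.3, size=20)  # training effect

# Paired t-test: did the same patients' hippocampi become more active?
t, p = stats.ttest_rel(after, before)
print(f"mean change: {np.mean(after - before):+.2f}, t = {t:.2f}, p = {p:.4f}")
```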

Now, it’s important to point out that that finding doesn’t necessarily imply improvement – studies have shown that decreased neural activity is often more strongly correlated with mastery of a task than increased activity is – but it does show that these people’s brains were learning to work differently as their memories improved.

So next time you experience a memory slipup, think of it as an opportunity to learn something new. You’d be surprised what you can train your brain to do with a bit of practice.

That is, as long as you remember to practice.

5 Ways to Fight the Blues…with Science!

So you’re stuck in that mid-week slump…the weekend lies on the other side of a scorching desert of work, and you have no canteen because you gave up water for Lent (in this metaphor, “water” refers to alcohol…just to be clear).

YAY SCIENCE!

But fear not! Neuroscience knows how to cheer you up! Nope, this isn’t another post about sex or drugs…though those are coming soon. This one’s about five things science says you can do right now – with your mind – to chase your cranky mood away.

1. Take a look around
Research shows that people who focus on the world around them, instead of on their own thoughts, are much more likely to resist a relapse into depression. This is easy to do – just find something interesting (or beautiful) to look at, and think about that for a few seconds…you’ll be surprised how quickly your worries fade.

2. Do some mental math
Scientists say doing a little simple arithmetic – adding up the digits of your phone number, for example – reroutes mental resources from worry to logic. Don’t worry; your emotions will still be there when you’re done…but they’re less likely to hog the spotlight if you don’t give them center stage.

3. Get out and about
Lots of studies show that physical activity raises levels of endorphins – the body’s own “feel-good” chemicals – and helps improve your mood throughout the day. You don’t have to run a marathon; even a quick walk around the block will get your blood pumping and help clear your mind.

4. Find some excitement
Some very interesting studies have found that courage – a willingness to face some of your fears – feeds on itself; in other words, the more adventurous your behavior is, the fewer things your brain considers threatening. In a way, it’s a “fake it ’til ya make it” situation…but instead of trying to be someone you’re not, you’re becoming more comfortable with the person you are.

5. Remember, it’s not always a bad thing
It sometimes helps to remember that stress is a natural phenomenon…as natural as digestion or sleep. Though stress (or sadness, or worry) can sometimes get out of hand, our bodies have evolved these responses to help us, and there’s nothing “wrong” with you just because you’re feeling annoyed or down in the dumps today. Instead of trying to make the feeling go away, sometimes the best thing to do is acknowledge it, and think about what’s triggering it. You might surprise yourself with an insight.

So, those tips are pretty simple, right? Try some of ‘em out, and let me know which ones worked best for you. After all, that’s why scientists study this stuff – to help us all understand more about what our minds are up to.

Saving Faces

A brain area that’s specialized to recognize faces has a unique structure in each of our brains – and mapping that area’s connectivity patterns can tell us how each of our brains uses it, says a new study.

The fusiform gyrus - home of the Brain Face Database.

The fusiform gyrus in the temporal lobe plays a part in our recognition of words, numbers, faces, colors, and other visual specifics – but it’s becoming increasingly clear that no two people’s fusiform gyrus structure is identical. By studying this region in a larger connectomic framework, though, researchers can now predict which parts of a certain person’s fusiform gyrus are specialized for face recognition.

Since the early days of neurophysiology – way back into the 1800s – scientists have been working to tie particular types of brain activity to particular structures within the brain. Simple experiments and lesion studies – many of them pretty crude by today’s standards – demonstrated that, for instance, the cerebellum is necessary for coordinating bodily movement; and that the inferior frontal gyrus (IFG) is involved in speech production.

Things get trickier, though, when we try to study more abstract mental tasks. For example, debates over the possible existence of “grandmother cells” – groups of neurons whose activity might represent complex concepts like “my grandmother” – have raged for decades, with no clear resolution in sight. The story’s similar for “mirror neurons” – networks of cells that some scientists think are responsible for our ability to understand and predict the intent of another person’s action.

All these debates reflect a far more fundamental gap in our understanding – one that many scientists seem reluctant to acknowledge: To this day, no one’s been able to demonstrate exactly what a “concept” is in neurological terms – or even if it’s a single type of “thing” at all.

This is why you’ll sometimes hear theoretical psychologists talk about “engrams” – hypothetical means by which neural networks might store memories – a bit like computer files in the brain. But the fact is, no one’s sure if the brain organizes information in a way that’s at all analogous to the way a computer does. In fact, a growing body of research points toward the idea that our memories are highly interdependent and dynamic - more like ripples in a pond than files in a computer.

This is where connectomics comes in. As researchers become increasingly aware that no two brains are quite alike, they’ve begun to focus on mapping the neural networks that connect various processing hubs to one another. As an analogy, you might say they’ve begun to study traffic patterns by mapping a country’s entire highway system, rather than just focusing on the stoplights in individual cities.

And now, as the journal Nature Neuroscience reports, a team led by MIT’s David Osher has mapped a variety of connectivity patterns linking the fusiform gyrus to other brain areas.

They accomplished this through a technique called diffusion imaging, which is based on a brain-scanning technology known as diffusion MRI (dMRI). Diffusion imaging tracks the natural movement of water molecules through brain tissue; because water diffuses most readily along axons – the long “tails” of neurons that connect them to other areas – the scans can trace which brain regions are physically wired to which others. As you can imagine, this technique has been revealing all sorts of surprising new facts about the brain’s wiring.

In this particular study, the researchers found that the parts of the fusiform gyrus that lit up during face-recognition tasks were strongly connected to areas like the superior and inferior temporal cortices, which are also known to be involved in face recognition. Intriguingly, they also detected connectivity with parts of the cerebellum – an ancient brain structure involved in bodily balance and movement, which no one expected to be part of any visual recognition pathway. Sounds like a Science Mystery to me!

The team even discovered that they could use the connectivity patterns they found to predict which parts of a person’s fusiform gyrus would activate in response to faces:

By using only structural connectivity, as measured through diffusion-weighted imaging, we were able to predict functional activation to faces in the fusiform gyrus … The structure-function relationship discovered from the initial participants was highly robust in predicting activation in a second group of participants, despite differences in acquisition parameters and stimuli.

In short, they’ve discovered patterns of structural connectivity that directly predict where face-recognition activity will show up in a given person’s brain.
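If you’re wondering what “predicting activation from connectivity” looks like in practice, here’s a minimal sketch in Python. Everything in it is invented for illustration (fake connectivity “fingerprints,” a plain linear model), but it captures the general shape of a structure-to-function prediction:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Hypothetical data: 500 fusiform voxels, each described by a
# "fingerprint" of structural connection strengths to 90 other brain
# regions (as dMRI tractography would estimate), plus a face-selectivity
# score per voxel (as an fMRI face task would measure).
n_voxels, n_regions = 500, 90
fingerprints = rng.random((n_voxels, n_regions))
true_weights = rng.normal(size=n_regions)
face_selectivity = fingerprints @ true_weights + rng.normal(scale=0.5, size=n_voxels)

# Fit the structure-to-function mapping on part of the data...
model = LinearRegression().fit(fingerprints[:400], face_selectivity[:400])

# ...then test whether it predicts activation in held-out voxels,
# analogous to predicting activation in a second group of participants.
predicted = model.predict(fingerprints[400:])
r = np.corrcoef(predicted, face_selectivity[400:])[0, 1]
print(f"predicted vs. actual face-selectivity: r = {r:.2f}")
```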

It’s still a far cry from an engram – after all, we still don’t know exactly what information these connections encode, or how the brain encodes that data, or what other conditions might need to be met for an “a-ha!” recognition to take place – but still, network mapping appears to be a very promising starting point for investigating questions like these.

The researchers plan to use this approach to study connectivity patterns in the brains of children with severe autism, and other patients who have trouble recognizing faces. They hope it’ll also be useful for understanding how we recognize scenes and other objects – eventually, a network-oriented approach may even offer clues about how we recognize familiar ideas.

In other words, for the first time in history, we’re using our brains’ natural love of connections to understand just how our brains form those connections in the first place. For those of us who love a good mystery, it’s an exciting time to be studying our own minds!

The Colors, Man! The Colors!

Scientists have discovered direct neural correlates of synesthesia, a new study reports.

They sound like unicorns and rainbows.

Not only have they detected activation patterns corresponding to synesthetic activity (such as “seeing” certain colors when thinking of certain numbers or sounds) – they’ve isolated an actual functional difference in the brains of synesthetic people. And what’s more, they’ve discovered a way to crank up synesthetic activity.

Let’s break this down and talk about what they’ve done here.

To understand what’s going on, let’s take a quick glance at history. Synesthesia’s fascinated artists and scientists since way back - in fact, the first people to write about it were the ancient Greeks, who composed treatises on the “colors” of various musical sounds.

Centuries later, Newton and Goethe both wrote that musical tones probably shared frequencies with color tones – and though that idea turned out to be incorrect, it did inspire the construction of “color organs” whose keyboards mapped specific notes to specific shades.

The first researcher to study synesthesia from a rigorous empirical perspective was Gustav Fechner, who performed extensive surveys of synesthetes throughout the 1870s. The topic went on to catch the interest of other influential scientists in the late 19th century – but with the rise of behaviorism in the 1930s, objective studies on subjective experiences became taboo in the psychology community, and synesthesia was left out in the cold for a few decades.

In the 1950s, the cognitive revolution made studying cognition and subjective experience cool again – but it wasn’t until the 1980s that synesthesia returned to the scientific spotlight, as neuroscientists and psychologists like Richard Cytowic and Simon Baron-Cohen began to classify and break down synesthetic experiences. For the first time, synesthetic experiences were organized into distinct types, and studied under controlled lab conditions.

Today, most synesthesia research focuses on grapheme → color synesthesia – in which numbers and letters are associated with specific colors – because it’s pretty straightforward to study. And thanks to the “insider reporting” of synesthetes like Daniel Tammet, we’re getting ever-clearer glimpses into the synesthetic experience.

But as the journal Current Biology reports, today marks a major leap forward in our understanding of synesthesia: a team led by Oxford University’s Devin Terhune has discovered that the visual cortex of grapheme → color synesthetes is more sensitive – and therefore, more responsive – than it is in people who don’t experience synesthesia.

The team demonstrated this by applying transcranial magnetic stimulation (TMS) to the visual cortices of volunteers, which led to a thrilling discovery:

Synesthetes display 3-fold lower phosphene thresholds than controls during stimulation of the primary visual cortex. … These results indicate that hyperexcitability acts as a source of noise in visual cortex that influences the availability of the neuronal signals underlying conscious awareness of synesthetic photisms.

In short, the visual cortex of a synesthete is three times more sensitive to incoming signals than that of a non-synesthete – which means tiny electrochemical signals that a non-synesthete’s brain might consider stray noise get interpreted into “mind’s-eye” experiences in a synesthete’s visual cortex. The question of what, exactly, causes this difference in the first place remains a Science Mystery, ripe for investigation.

But wait – this study gets much, much cooler.

There’s a technology called transcranial direct current stimulation (TDCS), which changes the firing thresholds of targeted neurons – making them more or less likely to fire when they get hit with a signal. The researchers applied TDCS to specific parts of the visual cortex, and found that they could “turn up” and “turn down” the intensity of the synesthetic experience:

Synesthesia can be selectively augmented with cathodal stimulation and attenuated with anodal stimulation of primary visual cortex. A control task revealed that the effect of the brain stimulation was specific to the experience of synesthesia.

In other words, they’ve discovered a technological mechanism for directly controlling the experience of synesthesia.
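To build some intuition for how nudging firing thresholds could “turn up” or “turn down” synesthetic photisms, here’s a toy simulation in Python. This is my own illustration, not the study’s model: the visual cortex is treated as awash in random background noise, and only signals that cross the firing threshold reach awareness. The lower the threshold, the more noise gets “seen”:

```python
import numpy as np

rng = np.random.default_rng(42)

def photism_rate(threshold, n_samples=100_000):
    """Fraction of random background signals that cross the firing
    threshold and (in this toy model) reach conscious awareness."""
    noise = rng.normal(loc=0.0, scale=1.0, size=n_samples)
    return (noise > threshold).mean()

# A hyperexcitable visual cortex behaves like one with a lower threshold:
# more stray noise gets interpreted as a conscious "photism". TDCS, by
# shifting thresholds up or down, would move a brain along this spectrum.
for label, threshold in [("non-synesthete (high threshold)", 3.0),
                         ("synesthete (low threshold)", 2.0),
                         ("threshold nudged down by TDCS", 1.5),
                         ("threshold nudged up by TDCS", 2.5)]:
    print(f"{label:32s} photism rate: {photism_rate(threshold):.4f}")
```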

So Burning Question #1 is, Could TDCS be used to induce synesthesia – or create hallucinations - in non-synesthetes? With the right neurological and psychological preparation, it certainly seems possible. And Burning Question #2 is, could it be used to “turn down” the intensity of hallucinations in people with schizophrenia and other psychological disorders? It’ll take a lot more lab work to answer that question with any certainty – but I’d say it merits some lookin’ into.

In the meantime, I’m going to find some nice green music to listen to.

______________

1. This means synesthesia is somewhat similar to Charles Bonnet syndrome - in which blind patients see vivid, detailed hallucinations when their under-stimulated (and thus, hyper-sensitive) visual cortices catch a stray signal – and to musical ear syndrome, in which deaf people vividly hear singing. Here’s an amazing TED talk by Oliver Sacks on that very topic.

I Know Kung Fu

New technology may soon enable us to download knowledge directly into our brains, says a new study.

"OK so there's people and planes and OH GOD what the hell am I learning??"

By decoding activation patterns from fMRI scans and then reproducing them as direct input to a precise area of the brain, the new system may be able to “teach” neural networks by example – priming them to fire in a certain way until they learn to do it on their own.

This has led everyone from io9 to the National Science Foundation to make Matrix references – and it’s hard to blame them. After all, immersive virtual reality isn’t too hard to imagine – but learning kung-fu via download, like grabbing an mp3? Sounds like pure sci-fi – especially since we know that the way muscle pathways form memories is pretty different from how we remember facts and images.

The basic idea is this: when you learn to perform a physical action – say, riding a bike or shooting a basketball – your muscles learn to coordinate through repetition. This is called procedural memory, because your muscles (and parts of your brain) are learning by repeating a procedure - in other words, a sequence of electrochemical actions – and (hopefully) improving the precision of that procedure with each run-through.

In contrast to this, we have declarative memories – memories of, say, the color of your favorite shirt, or where you had lunch today. Though declarative memories can certainly improve with practice – think of the last time you studied for an exam – there’s typically not an “awkward” stage as your brain struggles to learn how to recreate these memories. In short, once a bit of information is “downloaded” into your conscious awareness, it’s pretty much instantly available (until you forget it, of course).

Now, I could give some examples that blur the lines between these two types of memory – reciting a long list of words, for instance, seems to involve both procedural and declarative memory – but my point here is that procedural memories tend to require practice.

So it’s pretty surprising to read, in the journal Science, that a team led by Kazuhisa Shibata at Boston’s Visual Science Laboratory may have found a way to bridge the gap between these two types of memory.

The team began by taking fMRI scans of the visual cortex as volunteers looked at particular visual images – objects rotated at various angles. Once the team had isolated a precise activation pattern corresponding to a particular angle of orientation, they turned around and directly induced that same activation pattern in the volunteers’ brains:

We induced activity patterns only in early visual cortex corresponding to an orientation without stimulus presentation or participants’ awareness of what was to be learned. The induced activation caused VPL specific to the orientation.

In other words, the researchers triggered brain activity patterns corresponding to specific angles of visual orientation – without showing the volunteers any actual stimulus, and without them knowing what they were supposed to be learning.
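One way to pull off this kind of pattern induction – and, by most accounts, roughly how this study did it – is decoded neurofeedback: first train a decoder on brain patterns recorded while the subject actually views each orientation, then reward the subject whenever their spontaneous activity drifts toward the target pattern. Here’s a minimal sketch of that logic in Python, with random numbers standing in for real-time fMRI data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Phase 1 (hypothetical): voxel patterns recorded while volunteers
# actually viewed gratings at two different orientations.
n_trials, n_voxels = 200, 50
pattern_A = rng.normal(0.5, 1.0, (n_trials, n_voxels))   # orientation A
pattern_B = rng.normal(-0.5, 1.0, (n_trials, n_voxels))  # orientation B
X = np.vstack([pattern_A, pattern_B])
y = np.array([0] * n_trials + [1] * n_trials)             # 0 = A, 1 = B

decoder = LogisticRegression().fit(X, y)

# Phase 2 (simulated): on each trial, read the current brain pattern,
# decode how much it resembles the *target* orientation, and feed that
# score back to the volunteer, who never learns what it means.
target = 0  # nudge the brain toward orientation A
for trial in range(5):
    current_pattern = rng.normal(0.0, 1.0, (1, n_voxels))  # "live" sample
    score = decoder.predict_proba(current_pattern)[0, target]
    print(f"trial {trial}: feedback score = {score:.2f}")
```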

Then, when the scientists asked the volunteers what they’d “learned,” the volunteers had no idea. But when the researchers asked them to pick a “best guess” orientation that seemed “right” to them, a significant percentage chose the orientation their brains had been trained to remember.

This isn’t exactly downloadable kung-fu – but it provides some of the first conclusive evidence that visual brain networks can be sculpted not only by repeated visual stimuli, but also by directly induced activation patterns.

Could we use technology like this to teach people to feel less anxious or depressed? Or to understand concepts they have trouble grasping? And what might happen if we could harness this technology to our own mental output? What if we could literally dream our way to new skills, or more helpful beliefs?

We’re not quite there yet, but it may very well happen in our lifetime.

Musical Matchups

Our brains process music via different sensory pathways depending on what we think its source is, a new study finds.

Let me stop ya right there - "Stairway to Heaven" is off-limits.

As our brains organize information from our senses into a coherent representation of the world around us, they’re constantly hard at work associating data from one sense – say, sight – with data from another – say, hearing.

A lot of the time, this process is pretty straightforward – for instance, if we see a man talking and hear a nearby male voice, it’s typically safe for our brains to assume the voice “goes with” the man’s lip movements. But it’s also not too hard for others to trick this association process – as anyone who’s watched a good ventriloquism act knows.

Now, as the journal Proceedings of the National Academy of Sciences (PNAS) reports, a team led by HweeLing Lee and Uta Noppeney at the Max Planck Institute for Biological Cybernetics has discovered a way in which musicians’ brains are specially tuned to correlate information from different senses when their favorite instruments are involved.

Neuroscientists have known for years that the motor cortex in the brains of well-trained guitar and piano players devotes much more processing power to fingertip touch and finger movement than the same area of a non-musician’s brain does. But what this new study tells us is that the brains of pianists are also much more finely-tuned to detect whether a finger stroke is precisely synchronous with a sound produced by the touch of a piano key.

To figure this out, the team assembled 18 pianists – amateurs who practice on a regular basis – and compared their ability to tell synchronous piano tones and keystrokes from slightly asynchronous ones while they lay in an fMRI scanner (presumably by showing them this video). The researchers also tested the pianists’ ability to tell when lip movements were precisely synchronized with spoken sentences.

The team then compared the musicians’ test results against the results of equivalent tests taken by 19 non-musicians. What they found was pretty striking:

Behaviorally, musicians exhibited a narrower temporal integration window than non-musicians for music but not for speech. At the neural level, musicians showed increased audiovisual asynchrony responses and effective connectivity selectively for music in a superior temporal sulcus-premotor-cerebellar circuitry.

In short, pianists are much more sensitive to a slight asynchrony between a keystroke and a piano tone than non-musicians are – but this sensitivity doesn’t also apply to speech and lip movements. In other words, pianists’ brains are unusually sensitive to asynchrony only when it involves piano keystrokes.
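A quick gloss on “temporal integration window”: it’s the range of audiovisual lags a person still judges as “in sync.” Here’s a rough sketch of how you might estimate one, with a simulated observer standing in for real psychophysics data (the study’s actual procedure and model were more sophisticated):

```python
import numpy as np

rng = np.random.default_rng(7)

def judged_synchronous(lag_ms, window_ms):
    """Simulated observer: the chance of reporting "synchronous" falls
    off with audiovisual lag; a narrower window means a pickier observer."""
    p = np.exp(-0.5 * (lag_ms / window_ms) ** 2)
    return rng.random() < p

def estimate_window(window_ms, lags=range(-300, 301, 50), reps=100):
    """Test many lags, then measure the span of lags that the observer
    calls "synchronous" more than half the time."""
    rates = {lag: np.mean([judged_synchronous(lag, window_ms)
                           for _ in range(reps)]) for lag in lags}
    in_sync = [lag for lag, rate in rates.items() if rate > 0.5]
    return max(in_sync) - min(in_sync)

# Hypothetical result: pianists tolerate less keystroke-tone lag.
print("pianist window:     ", estimate_window(window_ms=80), "ms")
print("non-musician window:", estimate_window(window_ms=150), "ms")
```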

Another important finding is that the researchers could predict how sensitive the musicians would be to asynchrony from the activation patterns the fMRI scanner picked up in their motor-related circuitry:

Our results suggest that piano practicing fine tunes an internal forward model mapping from action plans of piano playing onto visible finger movements and sounds.

This means there’s a direct link between fine-tuned coordination among neurons and fine-tuned coordination between ear and eye. I don’t know about you, but I think that’s pretty incredible.

The researchers hope that as they study similar data from musicians who work with other instruments, they’ll come to better understand how our brains learn to associate stimuli from one sense with information from another – and maybe even how they learn when and when not to “sync up” these stimuli in our subjective experience of reality.

It’s too bad we can’t hook up the brain of, say, Mozart or Hendrix to an fMRI scanner – who knows what amazing discoveries we might make. But even so, I’m sure you can think of some living musical geniuses whose brains you’d like to see in action.

Harry Potter and the Nature of the Self

Hooray for Google Image Search!

Yup, this is what we’re doing today. I finally got to see Deathly Hallows Part 2, and it got me thinking about neuroscience like frickin’ everything always does, and I came home and wrote an essay about the nature of consciousness in the Harry Potter universe.

And we’re going to talk about it, because it’s the holidays and can we please just pull it together and act like a normal family for the length of one blog post? Thank you. I really mean it. Besides, I guarantee you that this stuff is gonna bug you too once I’ve brought it up.

So in the movie, there’s this concept of Harry and Voldemort sharing minds – mental resources: each of them can occasionally see what the other one sees, and sometimes even remember what the other one remembers.

That idea is not explored to anywhere near a respectable modicum of its full extent.

First of all, are these guys the only two wizards in history who this has happened to? Yeah, I’m sure the mythology already has an answer for this – one that I will devote hours to researching just as soon as that grant money comes through. Ahem. Anyway, the odds are overwhelming that at least some other wizards have been joined in mental pairs previously – I mean, these are guys who can store their subjective memories in pools of water to be re-experienced at will; you can’t tell me nobody’s ever experimented; bathed in another person’s memories; tried to become someone else, or be two people at once. Someone, at some point, must’ve pulled it off. Probably more than one someone.

OK, so there’ve been a few pairs of wizards who shared each others’ minds. Cool. Well, if two works fine, why not three? Hell, why not twelve, or a thousand? With enough know-how and the right set of minds to work with, the wizarding world could whip us up a Magic Consciousness Singularity by next Tuesday.

But there’s the rub: Who all should be included in this great meeting of the minds? Can centaurs and house-elves join? What about, say, dragons, or deer, or birds? Where exactly is the cutoff, where the contents of one mind are no longer useful or comprehensible to another? As a matter of fact, given the – ah – not-infrequent occurrence of miscommunication in our own societies, I’d say it’s pretty remarkable that this kind of mental communion is even possible between two individuals of the same species.

Which brings us to an intriguing wrinkle in the endless debate about qualia – those mental qualities like the “redness” of red, or the “painfulness” of pain, which are only describable in terms of other subjective experiences. Up until now, of course, it’s been impossible to prove whether Harry’s qualia for, say, redness are exactly the same as Voldemort’s – or to explain just how the concept of “exactly the same” would even apply in this particular scenario. But now Harry can magically see through Voldemort’s eyes; feel Voldemort’s feelings – he can experience Voldemort’s qualia for himself.

Ah, but can he, really? I mean, wouldn’t Harry still be experiencing Voldemort’s qualia through his own qualia? Like I said, this is a pretty intriguing wrinkle.

The more fundamental question, though, is this: What does this all tell us about the concept of the Self in Wizard Metaphysics? (It’s capitalized because it’s magical.) Do Harry and Voldemort together constitute a single person? A single self? Is there a difference between those two concepts? Should there be?

I don’t ask these questions idly – in fact, here’s a much more pointed query: What do we rely on when we ask ourselves who we are? A: Memories, of course; and our thoughts and feelings about those memories. Now, if some of Harry’s thoughts and feelings and memories are of things he experienced while “in” Voldemort’s mind (whatever that means) then don’t some of Voldemort’s thoughts and feelings and memories comprise a portion of Harry’s? You can see where we run into problems.

Just one last question, and then I promise I’ll let this drop. When you read about Harry’s and Voldemort’s thoughts and feelings and memories, and you experience them for yourself, what does that say about what your Self is made of?

I’ll be back next week to talk about neurons and stuff.

Paying Awareness

Attention and awareness are actually two fundamentally different mental processes, and they often operate separately, says a new study.

Awareness. Not pictured: attention.

Certain parts of the brain – such as the primary visual cortex (V1) – are activated in response to attention but not to awareness; and for others, the reverse is the case. In short, the processes of attention and awareness either affect distinct networks of nerve cells, or they each affect cells in different ways - or perhaps some combination of those two ideas is closer to the truth.

But in any case, the verdict is in: in neurological terms, being aware and paying attention are two different things.

Awareness of a particular sensory stimulus or thought is usually described as the ability to be conscious of it. Attention, on the other hand, is the process of bringing that stimulus or thought into the focus of conscious awareness, at the expense of others.

Several studies, including this one published in June 2011, have found that not only can you be aware of something without paying attention to it – you can also pay attention to something without being aware it’s there.

For instance, when people are shown a pattern of colorful moving shapes in their left eye, and a static pattern of dots - most green, but one bright red – in their right, their visual attention will be directed to the red dot – even though they’ll report that they weren’t aware the dot was there! Pretty weird, huh?

But now, as the journal Science reports, two teams – one from the Max Planck Institute for Biological Cybernetics and one from the University of Tokyo – have taken this research another step further, by taking fMRI scans of volunteers’ brains as the subjects watched split-eye visuals similar to the ones I just mentioned.

What they discovered shocked them:

The visibility or invisibility of a visual target led to only nonsignificant blood oxygenation level–dependent (BOLD) effects in the human primary visual cortex (V1). Directing attention toward and away from the target had much larger and robust effects across all study participants.

In other words, whether or not a person is aware of a visible object makes almost no difference to activity in the primary visual cortex – but directing attention toward or away from that object causes flurries of activity across V1:

The difference in the lower-level limit of BOLD activation between attention and awareness illustrates dissociated neural correlates of the two processes. Our results agree with previously reported V1 BOLD effects on attention, while they invite a reconsideration of the functional role of V1 in visual awareness.
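To make “dissociated neural correlates” concrete, here’s a schematic version of the comparison with invented numbers: V1 responses in a 2 × 2 design crossing target visibility (awareness) with attention. The signature the teams report is a large effect of attention alongside a negligible effect of visibility:

```python
import numpy as np

# Hypothetical mean V1 BOLD responses (% signal change), 2 x 2 design.
#                       attended  unattended
bold = np.array([[0.80, 0.30],   # target visible   (aware)
                 [0.78, 0.28]])  # target invisible (unaware)

visibility_effect = bold[0].mean() - bold[1].mean()        # aware - unaware
attention_effect = bold[:, 0].mean() - bold[:, 1].mean()   # attended - unattended

print(f"main effect of visibility: {visibility_effect:+.2f}  (negligible)")
print(f"main effect of attention:  {attention_effect:+.2f}  (large)")
```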

The researchers are understandably pumped about their breakthrough, but they also advise caution about drawing broad conclusions at this point – after all, this experiment focused only on the visual cortex, a relatively primitive part of the cerebrum. How attention and awareness are coupled or decoupled in more advanced processing areas remains to be seen – and the answers may not be nearly so straightforward.

Even so, it’s always cool to watch scientists take apart components of this deeply strange thing we call a “self,” and begin to understand how those components work together to create the moments of our conscious existence.

And though I can’t be aware of every new discovery, I’ll certainly be paying close attention to the ones I notice.

Brain Scans & Lucid Dreams

The brain activity of lucid dreamers – people who become aware that they’re in a dream state – shows some interesting similarities with that of people who are awake, says a new study.

"Ahh - nothing puts me to sleep like a roomful of bright lights!"

By studying the brain activity of lucid dreamers under electroencephalograms (EEGs) and fMRI scans, researchers have found that the somatosensory and motor cortices – regions crucial for touch and movement, respectively – show activation patterns during lucid dreams that are very similar to the ones they display when people make or imagine those same movements while awake.

Though dreams have fascinated philosophers and scientists since the dawn of history – some of the earliest written texts are dream-interpretation handbooks from ancient Egypt and Babylon – it’s only in recent years that neuroscience has begun to advance the study of dreams beyond Freudian theorizing and into the realm of hard data.

In the early 1950s, scientists identified several stages of sleep, including rapid eye movement (REM) sleep – the stage in which dreaming takes place; and in 1959, a team discovered a certain class of brain waves – ponto-geniculo-occipital (PGO) waves – which only appear during REM sleep.

Then, in 2009, an EEG study found that lucid dreams exhibit slightly different wave patterns from those associated with ordinary REM sleep – and later that year, another study proposed an astonishing theory: that REM sleep might be a form of proto-consciousness, which performs maintenance and support duty for the “full” consciousness that took over for it at some point in our evolution.

Now, as the journal Current Biology reports, a team led by Michael Czisch at Germany’s Max Planck Institute has made a new leap forward in dream research. By concentrating their research on lucid dreams, the team was able to map the neural correlates of controlled and remembered dream content:

Lucid dreamers were asked to become aware of their dream while sleeping in a magnetic resonance scanner and to report this “lucid” state to the researchers by means of eye movements. They were then asked to voluntarily “dream” that they were repeatedly clenching first their right fist and then their left one for ten seconds.

This approach has provided some surprising new insights into the ways our brains function in a dream state. By having the subjects retell their lucid dreams, the researchers were able to correlate recorded activation patterns with specific actions the subjects had “performed” while asleep:

A region in the sensorimotor cortex of the brain, which is responsible for the execution of movements, was actually activated during the dream. This is directly comparable with the brain activity that arises when the hand is moved while the person is awake. Even if the lucid dreamer just imagines the hand movement while awake, the sensorimotor cortex reacts in a similar way.

This confirms that the brain’s sensorimotor areas are actively involved in planning and executing movements in dreams, rather than just passively observing events.
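Here’s a bare-bones sketch of the comparison being described, with random numbers in place of real scans: take the sensorimotor activation pattern for an actual hand movement, and correlate it with the patterns for the imagined and dreamed versions of the same movement:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical voxel patterns from the hand area of sensorimotor cortex.
n_voxels = 200
executed = rng.normal(size=n_voxels)                        # awake, real clench
imagined = executed + rng.normal(scale=0.7, size=n_voxels)  # awake, imagined
dreamed = executed + rng.normal(scale=0.8, size=n_voxels)   # lucid dream
baseline = rng.normal(size=n_voxels)                        # unrelated rest scan

def similarity(a, b):
    """Pearson correlation between two activation patterns."""
    return np.corrcoef(a, b)[0, 1]

print("executed vs. imagined:", round(similarity(executed, imagined), 2))
print("executed vs. dreamed: ", round(similarity(executed, dreamed), 2))
print("executed vs. baseline:", round(similarity(executed, baseline), 2))
```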

What’s even more exciting is that, in light of other new technologies like the thought-video recorder, it looks like we may be able to record and play back our thoughts and dreams within the next few decades.

I think this research reflects an even more fundamental shift in thinking about neuroscience, though: as we unravel more and more of the neural correlates of phenomena like sleep and consciousness, we’re coming to realize just how vast a chasm yawns between scientific data and subjective experience.

Before long, it’s going to become essential for scanners and volunteers to be involved in the same continuous feedback loop – one in which the subjects can watch, in real time, the neural correlates of their thoughts and feelings from moment to moment, and adjust them accordingly to produce useful results.

Ambitious? I guess so. But a guy’s gotta have a dream.

Hypnotized Eyes

A state of hypnosis creates detectable changes in a person’s eye movement patterns, says a new study.

"When you awake, you will experience an overwhelming desire to Make Me a Sammich!"

The “glazed” look of a person who’s been hypnotized can be linked to measurable, quantifiable changes in the patterns of that person’s reflexive eye movements – changes that non-hypnotized people aren’t able to replicate.

The exact nature – and even the actual existence – of the hypnotic state have been controversial topics since the term was first coined in the 1840s. Some have likened it to a form of sleep (the word itself comes from the ancient Greek hypnos, meaning “sleep”), while others have described it as a state of intense focus, or of heightened suggestibility. Some have claimed that hypnotized people are faking, or at least fooling themselves – but even so, the idea of hypnosis continues to fascinate many of us.

One of the first real indicators that hypnosis might be an objectively detectable state came in 1999, when a team at Belgium’s University of Liège used positron emission tomography (PET) scans to detect altered activation patterns in the brains of volunteers under hypnosis. A 2002 study at the University of Montreal lent more detail to these results; and in 2007, a team led by Andrew Fingelkurts at Finland’s Brain and Mind Technologies Research Centre attacked the problem from another angle, using electroencephalography (EEG) to detect changes in electrical activation across the scalps of hypnotized subjects.

The research seemed to suggest that hypnosis might alter the brain’s functional connectivity, preventing certain areas – such as the anterior cingulate cortex (ACC) and the dorsolateral prefrontal cortex (dlPFC), both of which are involved in goal-directed behavior - from communicating with other areas that might inhibit or otherwise modulate their activity. Interestingly, though, these changes were only detectable in “very highly hypnotizable subjects” – implying that belief and willful participation are crucial ingredients in hypnosis.

Now, as the journal PLoS ONE reports, a multidisciplinary team of researchers from Finland’s University of Turku and Aalto University, and Sweden’s University of Skövde, has found what may be even stronger indicators of a physically detectable hypnotic state – changes in various reflexive eye movements.

The team hypnotized a single volunteer (which means these results should be taken with several large grains of salt) and compared her eye movements against those of a control group of non-hypnotized subjects. The researchers found that when their subject was hypnotized, her eyes exhibited some unique behaviors:

[She] showed a markedly reduced eye-blinking rate (0.012 ± 0.04 blinks/s) as compared to [control subjects] (1.18 ± 0.63 blinks/s). Although some control subjects could mimic rather well this external feature of the “trance stare,” at the group level the changes were far less marked.

Even stranger, though, were changes in the subject’s saccades – the rapid, often involuntary eye movements that help us focus on changes in our environment or quickly scan the details of a scene:

[When hypnotized, the subject] performed only short saccades toward the target regardless of the distance from the fixation point. This “creeping” pattern of short saccades was difficult to simulate by the control group since their fixation tended to automatically gravitate to the target.

The subject’s saccades were also much slower, shorter, and fewer than any that non-hypnotized volunteers could produce. In other words, her gaze tended to shift much less, and less often, than the gazes of people in the control group.
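For the data-minded: quantifying these measures from raw eye-tracker output is fairly straightforward. Here’s a minimal sketch with simulated gaze data; real pipelines use validated blink- and saccade-detection algorithms, so treat this as a cartoon of the bookkeeping:

```python
import numpy as np

rng = np.random.default_rng(9)
SAMPLE_RATE = 500.0  # samples per second (typical research eye tracker)

# Simulated horizontal gaze trace (degrees): mostly fixation jitter,
# three step-like saccades, and one run of lost samples (a blink).
gaze = np.cumsum(rng.normal(scale=0.01, size=5000))
for start in (800, 2400, 4100):
    gaze[start:] += 4.0           # a 4-degree saccade at each point
gaze[1500:1560] = np.nan          # ~120 ms of missing data = one blink

duration_s = len(gaze) / SAMPLE_RATE

# Blink rate: count transitions into runs of missing samples.
missing = np.isnan(gaze).astype(int)
blink_count = np.sum(np.diff(missing) == 1)
print(f"blink rate:   {blink_count / duration_s:.3f} blinks/s")

# Saccade rate: count upward crossings of a velocity threshold (deg/s).
velocity = np.abs(np.diff(gaze)) * SAMPLE_RATE  # NaN during the blink
fast = velocity > 100.0                         # NaN compares as False
saccade_count = np.sum(fast[1:] & ~fast[:-1])
print(f"saccade rate: {saccade_count / duration_s:.3f} saccades/s")
```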

Exactly what this data means is tough to say – and not just because it’s only been detected in one volunteer; it’s also not clear what changes in brain activity these unusual eye behaviors might reflect. The researchers have a suggestion, though: the ACC is known to be closely connected with visual brain areas, and to help coordinate saccades. Could these strange saccades reflect changes in the ACC’s functional connectivity?

It’s too soon to say for sure – but it seems that even if hypnosis does depend on willing belief, it still has some objective neurological correlates that are worth studying. It may be that we have much more control over our brains’ functional connectivity than we ever suspected.

Who knows – it could be that even now, someone is implanting subtle cues waiting for the moment when- SLEEEEP!!!
