Posts Tagged ‘vision’

Forget Me Not

Having trouble remembering where you left your keys? You can improve with a little practice, says a new study.

"I've forgotten more than you'll ever...wait, what was I saying?"

It’s an idea that had never occurred to me before, but one that seems weirdly obvious once you think about it: people who train their brains to recall the locations of objects for a few minutes each day show greatly improved ability to remember where they’ve left things.

No matter what age you are, you’ve probably had your share of “Alzheimer’s moments,” when you’ve walked into a room only to forget why you’re there, or set something down and immediately forgotten where you put it. Attention is a limited resource, and when you’re multitasking, there’s not always enough of it to go around.

For people in the early stages of Alzheimer’s disease, though, these little moments of forgetfulness can add up to a frustrating inability to complete even simple tasks from start to finish. This stage of persistent forgetfulness is known as mild cognitive impairment (MCI), and its symptoms can range from amnesia to problems with counting and logical reasoning.

That’s because all these tasks depend on memory – even if it’s just the working memory that holds our sense of the present moment together – and most of our memories are dependent on a brain structure called the hippocampus, which is one of the major areas attacked by Alzheimer’s.

What exactly the hippocampus does is still a hotly debated question, but it seems to help sync up neural activity when new memories are “written down” in the brain, as well as when they’re recalled (a process that rewrites the memory anew each time). So it makes sense that the more we associate a particular memory with other memories – and with strong emotions - the more easily even a damaged hippocampus will be able to help retrieve it.

But now, a team led by Benjamin Hampstead at the Emory University School of Medicine has made a significant breakthrough in rehabilitating people with impaired memories, the journal Hippocampus reports: the researchers have demonstrated that patients with MCI can learn to remember better with practice.

The team took a group of volunteers with MCI and taught them a three-step memory-training strategy: 1) the subjects focused their attention on a visual feature of the room that was near the object they wanted to remember, 2) they memorized a short explanation for why the object was there, and 3) they imagined a mental picture that contained all that information.

Not only did the patients’ memory measurably improve after a few training sessions – fMRI scans also showed that the training physically changed their brains:

Before training, MCI patients showed reduced hippocampal activity during both encoding and retrieval, relative to HEC. Following training, the MCI MS group demonstrated increased activity during both encoding and retrieval. There were significant differences between the MCI MS and MCI XP groups during retrieval, especially within the right hippocampus.

In other words, the hippocampus in these patients became much more active during memory storage and retrieval than it had been before the training.

Now, it’s important to point out that that finding doesn’t necessarily imply improvement – studies have shown that decreased neural activity is often more strongly correlated with mastery of a task than increased activity is – but it does show that these people’s brains were learning to work differently as their memories improved.

So next time you experience a memory slipup, think of it as an opportunity to learn something new. You’d be surprised what you can train your brain to do with a bit of practice.

That is, as long as you remember to practice.

Taking Vision Apart

For the first time, scientists have created neuron-by-neuron maps of brain regions corresponding to specific kinds of visual information, and specific parts of the visual field, says a new study.

At age 11, Cajal landed in prison for blowing up his town's gate with a homemade cannon. Seriously. Google it.

If other labs can confirm these results, this will mean we’re very close to being able to predict exactly which neurons will fire when an animal looks at a specific object.

Our understanding of neural networks has come a very long way in a very short time. It was just a little more than 100 years ago that Santiago Ramón y Cajal first proposed the theory that individual cells – neurons – were the basic processing units of the central nervous system (CNS). Cajal lived until 1934, so he got to glimpse the edge – but not much more – of the strange new frontier he’d discovered. As scientists like Alan Lloyd Hodgkin and Andrew Huxley – namesakes of today’s Hodgkin-Huxley neuron simulator – started studying neurons’ behavior, they began realizing that the brain’s way of processing information was much weirder and more complex than anyone had expected.

See, computers and neuroscience evolved hand-in-hand – in many ways, they still do – and throughout the twentieth century, most scientists described the brain as a sort of computer. But by the early 1970s, they were realizing that a computer and a brain are different in a very fundamental way: computers process information in bits – tiny electronic switches that say “on” or “off” – but a brain processes information in connections and gradients – degrees to which one piece of neural architecture influences others. In short, our brains aren’t digital – they’re analog. And as we all know, there’s just something warmer about analog.
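If it helps to see that contrast spelled out, here’s a tiny, purely illustrative Python sketch – a toy model I made up, not anything from the neuroscience literature – of the difference between an all-or-nothing bit and a graded, analog influence:

```python
import math

# A digital "bit": a downstream unit either gets the signal or it doesn't.
def digital_unit(input_on: bool) -> int:
    return 1 if input_on else 0

# An analog "synapse": the downstream unit is influenced by degree -
# its output is a smooth, graded function of how strongly it's driven.
def analog_unit(drive: float, weight: float = 0.8) -> float:
    return 1.0 / (1.0 + math.exp(-weight * drive))

print(digital_unit(True))    # 1 - all or nothing
print(analog_unit(0.3))      # ~0.56 - a little influence
print(analog_unit(2.0))      # ~0.83 - a stronger influence
```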

So where does this leave us now? Well, instead of trying to chase down bits in brains, many of today’s cutting-edge neuroscientists are working to figure out what connects to what, and how those connections form and change as a brain absorbs new information. In a way, the process isn’t all that different from trying to identify all the cords tangled up under your desk – it’s just that in this case, there are trillions of plugs, and a lot of them are molecular in size. That’s why neuroscientists need supercomputers that fill whole rooms to crunch the numbers – though I’m sure you’ll laugh if you reread that sentence in 2020.

But the better we understand brains, the better we get at studying them – and that’s why a team led by the Salk Institute’s James Marshel and Marina Garrett set out to map the exact neural pathways that correspond to specific aspects of visual data, the journal Neuron reports. (By the way, if you guys are reading this, I live in L.A. and would love to visit your lab.)

The team injected mouse brains with a special dye that’s chemically formulated to glow fluorescent when a neuron fires. This allowed them to track exactly which neurons in a mouse’s brain were active – and to what degree they were – when the mice were shown various shapes. And the researchers confirmed something wonderfully weird about the way a brain works:

Each area [of the visual cortex] contains a distinct visuotopic representation and encodes a unique combination of spatiotemporal features.

In other words, a brain doesn’t really have sets of neurons that encode specific shapes – instead, it has layers of neurons, and each layer encodes an aspect of a shape – its roundness, its largeness, its color, and so on. As signals pass through each layer, they’re influenced by the neurons they’ve connected with before. Each layer is like a section of a choir, adding its own voice to the song with perfect timing.

Now, other teams have already developed technologies that can record memories and dreams right out of the human brain – so what’s so amazing about this particular study? The level of detail:

Areas LM, AL, RL, and AM prefer up to three times faster temporal frequencies and significantly lower spatial frequencies than V1, while V1 and PM prefer high spatial and low temporal frequencies. LI prefers both high spatial and temporal frequencies. All extrastriate areas except LI increase orientation selectivity compared to V1, and three areas are significantly more direction selective (AL, RL, and AM). Specific combinations of spatiotemporal representations further distinguish areas.

Are you seeing this? We’re talking about tuning in to specific communication channels within the visual cortex, down at the level of individual neuronal networks.
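To make that a bit more tangible, here’s a toy Python sketch – the area names echo the quote above, but the tuning numbers are placeholders I invented, not the paper’s measurements – of how areas with different preferred spatial and temporal frequencies respond differently to the very same stimulus:

```python
import numpy as np

# Each "area" gets a preferred spatial frequency (cycles/degree) and temporal
# frequency (Hz). Area names echo the quote above; the numbers are placeholders.
AREA_PREFS = {
    "V1": {"sf": 0.10, "tf": 2.0},
    "AL": {"sf": 0.04, "tf": 6.0},
    "PM": {"sf": 0.12, "tf": 1.5},
    "LI": {"sf": 0.15, "tf": 7.0},
}

def area_response(stim_sf, stim_tf, prefs, width=0.5):
    """Gaussian tuning in log space: strongest response at the preferred combo."""
    d = (np.log(stim_sf / prefs["sf"]) ** 2 +
         np.log(stim_tf / prefs["tf"]) ** 2)
    return float(np.exp(-d / (2 * width ** 2)))

# The same fast, coarse stimulus drives each area to a different degree.
stimulus_sf, stimulus_tf = 0.05, 5.0
for area, prefs in AREA_PREFS.items():
    print(f"{area}: {area_response(stimulus_sf, stimulus_tf, prefs):.2f}")
```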

The gap between mind and machine is getting narrower every day. How does that make you feel?

Saving Faces

A brain area that’s specialized to recognize faces has a unique structure in each of our brains – and mapping that area’s connectivity patterns can tell us how each of our brains uses it, says a new study.

The fusiform gyrus - home of the Brain Face Database.

The fusiform gyrus in the temporal lobe plays a part in our recognition of words, numbers, faces, colors, and other visual specifics – but it’s becoming increasingly clear that no two people’s fusiform gyrus structure is identical. By studying this region in a larger connectomic framework, though, researchers can now predict which parts of a certain person’s fusiform gyrus are specialized for face recognition.

Since the early days of neurophysiology – way back into the 1800s – scientists have been working to pinpoint certain types of brain activity to certain structures within the brain. Simple experiments and lesion studies – many of them pretty crude by today’s standards – demonstrated that, for instance, the cerebellum is necessary for coordinating bodily movement; and that the inferior frontal gyrus (IFG) is involved in speech production.

Things get trickier, though, when we try to study more abstract mental tasks. For example, debates over the possible existence of “grandmother cells” – hypothetical neurons whose activity might represent complex concepts like “my grandmother” – have raged for decades, with no clear resolution in sight. The story’s similar for “mirror neurons” – networks of cells that some scientists think are responsible for our ability to understand and predict the intent of another person’s action.

All these debates reflect a far more fundamental gap in our understanding – one that many scientists seem reluctant to acknowledge: To this day, no one’s been able to demonstrate exactly what a “concept” is in neurological terms – or even if it’s a single type of “thing” at all.

This is why you’ll sometimes hear theoretical psychologists talk about “engrams” – hypothetical means by which neural networks might store memories – a bit like computer files in the brain. But the fact is, no one’s sure if the brain organizes information in a way that’s at all analogous to the way a computer does. In fact, a growing body of research points toward the idea that our memories are highly interdependent and dynamic – more like ripples in a pond than files in a computer.

This is where connectomics comes in. As researchers become increasingly aware that no two brains are quite alike, they’ve begun to focus on mapping the neural networks that connect various processing hubs to one another. As an analogy, you might say they’ve begun to study traffic patterns by mapping a country’s entire highway system, rather than just focusing on the stoplights in individual cities.

And now, as the journal Nature Neuroscience reports, a team led by MIT’s David Osher has mapped a variety of connectivity patterns linking the fusiform gyrus to other brain areas.

They accomplished this through a technique called diffusion imaging, which is based on a brain-scanning technology known as diffusion MRI (dMRI). Diffusion imaging uses magnetic field gradients to track how water diffuses along axons – the long “tails” of neurons that connect them to other areas – allowing the scan to map which regions are physically wired to which. As you can imagine, this technique has been revealing all sorts of surprising new facts about the brain’s functionality.

In this particular study, the researchers found that during face-recognition tasks, certain parts of the fusiform gyrus lit up with active connections to areas like the superior and inferior temporal cortices, which are also known to be involved in face recognition. Intriguingly, they also detected connectivity with parts of the cerebellum – an ancient brain structure involved in bodily balance and movement, which no one expected to be part of any visual recognition pathway. Sounds like a Science Mystery to me!

The team even discovered that they could use the connectivity patterns they found to predict which parts of a person’s fusiform gyrus would activate during face recognition:

By using only structural connectivity, as measured through diffusion-weighted imaging, we were able to predict functional activation to faces in the fusiform gyrus … The structure-function relationship discovered from the initial participants was highly robust in predicting activation in a second group of participants, despite differences in acquisition parameters and stimuli.

In short, they’ve discovered patterns of structural connectivity that predict where face-recognition activity will show up in a particular person’s brain.
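If you’re wondering what “predicting activation from structural connectivity” looks like in practice, here’s a minimal Python sketch using synthetic data and a plain ridge regression – the paper’s actual model and features are more sophisticated, and nothing below comes from their data:

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Synthetic data: each fusiform "voxel" has a vector of structural connection
# strengths to other brain regions, plus a face-selective activation value.
n_voxels, n_regions = 500, 30
connectivity = rng.random((n_voxels, n_regions))
true_weights = rng.normal(size=n_regions)
activation = connectivity @ true_weights + rng.normal(scale=0.1, size=n_voxels)

# Fit the structure -> function mapping on one set of voxels...
model = Ridge(alpha=1.0).fit(connectivity[:400], activation[:400])

# ...then predict face activation in held-out voxels from connectivity alone.
predicted = model.predict(connectivity[400:])
r = np.corrcoef(predicted, activation[400:])[0, 1]
print(f"held-out prediction accuracy: r = {r:.2f}")
```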

It’s still a far cry from an engram – after all, we still don’t know exactly what information these connections encode, or how the brain encodes that data, or what other conditions might need to be met for an “a-ha!” recognition to take place – but still, network mapping appears to be a very promising starting point for investigating questions like these.

The researchers plan to use this approach to study connectivity patterns in the brains of children with severe autism, and other patients who have trouble recognizing faces. They hope it’ll also be useful for understanding how we recognize scenes and other objects – eventually, a network-oriented approach may even offer clues about how we recognize familiar ideas.

In other words, for the first time in history, we’re using our brains’ natural love of connections to understand just how our brains form those connections in the first place. For those of us who love a good mystery, it’s an exciting time to be studying our own minds!

The Colors, Man! The Colors!

Scientists have discovered direct neural correlates of synesthesia, a new study reports.

They sound like unicorns and rainbows.

Not only have they detected activation patterns corresponding to synesthetic activity (such as “seeing” certain colors when thinking of certain numbers or sounds) – they’ve isolated an actual functional difference in the brains of synesthetic people. And what’s more, they’ve discovered a way to crank up synesthetic activity.

Let’s break this down and talk about what they’ve done here.

To understand what’s going on, let’s take a quick glance at history. Synesthesia’s fascinated artists and scientists since way back – in fact, the first people to write about it were the ancient Greeks, who composed treatises on the “colors” of various musical sounds.

Centuries later, Newton and Goethe both wrote that musical tones probably shared frequencies with color tones – and though that idea turned out to be incorrect, it did inspire the construction of “color organs” whose keyboards mapped specific notes to specific shades.

The first doctor to study synesthesia from a rigorous medical perspective was Gustav Fechner, who performed extensive surveys of synesthetes throughout the 1870s. The topic went on to catch the interest of other influential scientists in the late 19th century – but with the rise of behaviorism in the 1930s, objective studies on subjective experiences became taboo in the psychology community, and synesthesia was left out in the cold for a few decades.

In the 1950s, the cognitive revolution made studying cognition and subjective experience cool again – but it wasn’t until the 1980s that synesthesia returned to the scientific spotlight, as neuroscientists and psychologists like Richard Cytowic and Simon Baron-Cohen began to classify and break down synesthetic experiences. For the first time, synesthetic experiences were organized into distinct types, and studied under controlled lab conditions.

Today, most synesthesia research focuses on grapheme → color synesthesia – in which numbers and letters are associated with specific colors – because it’s pretty straightforward to study. And thanks to the “insider reporting” of synesthetes like Daniel Tammet, we’re getting ever-clearer glimpses into the synesthetic experience.

But as the journal Current Biology reports, today marks a major leap forward in our understanding of synesthesia: a team led by Oxford University’s Devin Terhune has discovered that the visual cortex of grapheme → color synesthetes is more sensitive – and therefore, more responsive – than it is in people who don’t experience synesthesia.

The team demonstrated this by applying transcranial magnetic stimulation (TMS) to the visual cortices of volunteers, which led to a thrilling discovery:

Synesthetes display 3-fold lower phosphene thresholds than controls during stimulation of the primary visual cortex. … These results indicate that hyperexcitability acts as a source of noise in visual cortex that influences the availability of the neuronal signals underlying conscious awareness of synesthetic photisms.

In short, the visual cortex of a synesthete is three times more sensitive to incoming signals than that of a non-synesthete – which means tiny electrochemical signals that a non-synesthete’s brain might dismiss as stray noise get interpreted into “mind’s-eye” experiences in a synesthete’s visual cortex [1]. The question of what, exactly, causes this difference in the first place remains a Science Mystery, ripe for investigation.

But wait – this study gets much, much cooler.

There’s a technology called transcranial direct current stimulation (TDCS), which changes the firing thresholds of targeted neurons – making them more or less likely to fire when they get hit with a signal. The researchers applied TDCS to specific parts of the visual cortex, and found that they could “turn up” and “turn down” the intensity of the synesthetic experience:

Synesthesia can be selectively augmented with cathodal stimulation and attenuated with anodal stimulation of primary visual cortex. A control task revealed that the effect of the brain stimulation was specific to the experience of synesthesia.

In other words, they’ve discovered a technological mechanism for directly controlling the experience of synesthesia.
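To build an intuition for how nudging a firing threshold could turn an experience up or down, here’s a toy Python sketch – invented numbers, not the study’s model – that counts how often random “noise” inputs cross a threshold as that threshold moves:

```python
import random

def photism_rate(threshold: float, trials: int = 100_000) -> float:
    """Fraction of random 'noise' inputs that cross the firing threshold."""
    rng = random.Random(42)
    return sum(rng.gauss(0.0, 1.0) > threshold for _ in range(trials)) / trials

# Illustrative thresholds only: a less excitable cortex vs. a more excitable
# one, plus the effect of nudging the threshold in either direction
# (roughly what TDCS is thought to do).
print("higher threshold (less excitable):", photism_rate(3.0))
print("lower threshold (more excitable): ", photism_rate(2.0))
print("threshold nudged down:            ", photism_rate(1.8))
print("threshold nudged up:              ", photism_rate(2.2))
```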

So Burning Question #1 is, Could TDCS be used to induce synesthesia – or create hallucinations – in non-synesthetes? With the right neurological and psychological preparation, it certainly seems possible. And Burning Question #2 is, could it be used to “turn down” the intensity of hallucinations in people with schizophrenia and other psychological disorders? It’ll take a lot more lab work to answer that question with any certainty – but I’d say it merits some lookin’ into.

In the meantime, I’m going to find some nice green music to listen to.

______________

1. This means synesthesia is somewhat similar to Charles Bonnet syndrome – in which blind patients see vivid, detailed hallucinations when their under-stimulated (and thus, hyper-sensitive) visual cortices catch a stray signal – and to musical ear syndrome, in which deaf people vividly hear singing. Here’s an amazing TED talk by Oliver Sacks on that very topic.

I Know Kung Fu

New technology may soon enable us to download knowledge directly into our brains, says a new study.

"OK so there's people and planes and OH GOD what the hell am I learning??"

By decoding activation patterns from fMRI scans and then reproducing them as direct input to a precise area of the brain, the new system may be able to “teach” neural networks by example – priming them to fire in a certain way until they learn to do it on their own.

This has led everyone from io9 to the National Science Foundation to make Matrix references – and it’s hard to blame them. After all, immersive virtual reality isn’t too hard to imagine – but learning kung-fu via download, like grabbing an mp3? Sounds like pure sci-fi – especially since we know that the way muscle pathways form memories is pretty different from how we remember facts and images.

The basic idea is this: when you learn to perform a physical action – say, riding a bike or shooting a basketball – your muscles learn to coordinate through repetition. This is called procedural memory, because your muscles (and parts of your brain) are learning by repeating a procedure – in other words, a sequence of electrochemical actions – and (hopefully) improving the precision of that procedure with each run-through.

In contrast to this, we have declarative memories – memories of, say, the color of your favorite shirt, or where you had lunch today. Though declarative memories can certainly improve with practice – think of the last time you studied for an exam – there’s typically not an “awkward” stage as your brain struggles to learn how to recreate these memories. In short, once a bit of information is “downloaded” into your conscious awareness, it’s pretty much instantly available (until you forget it, of course).

Now, I could give some examples that blur the lines between these two types of memory – reciting a long list of words, for instance, seems to involve both procedural and declarative memory – but my point here is that procedural memories tend to require practice.

So it’s pretty surprising to read, in the journal Science, that a team led by Kazuhisa Shibata at Boston University’s Visual Science Laboratory may have found a way to bridge the gap between these two types of memory.

The team began by taking fMRI scans of the visual cortex as volunteers looked at particular visual images – objects rotated at various angles. Once the team had isolated a precise activation pattern corresponding to a particular angle of orientation, they turned around and directly induced that same activation pattern in the volunteers’ brains:

We induced activity patterns only in early visual cortex corresponding to an orientation without stimulus presentation or participants’ awareness of what was to be learned. The induced activation caused VPL specific to the orientation.

In other words, the researchers triggered brain activity patterns corresponding to specific angles of visual orientation without telling the volunteers what the stimulus was going to be – and that induced activity alone was enough to produce visual perceptual learning (VPL) for those orientations.
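Before we get to what the volunteers reported, here’s a heavily simplified Python sketch of the decode-then-feedback idea – synthetic data and a generic classifier, standing in for the real-time fMRI pipeline the researchers actually used:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Synthetic "V1 voxel patterns": each orientation evokes its own noisy pattern
# across 50 voxels. Everything here is made up for illustration.
orientations = [10, 70, 130]  # degrees
templates = {o: rng.normal(size=50) for o in orientations}
X = np.vstack([templates[o] + rng.normal(scale=1.0, size=50)
               for o in orientations for _ in range(60)])
y = np.repeat(orientations, 60)

# Step 1: train a decoder that recognizes each orientation's pattern.
decoder = LogisticRegression(max_iter=1000).fit(X, y)

# Step 2 (the feedback loop, heavily simplified): the participant is rewarded
# in proportion to the decoder's confidence that their *current* brain pattern
# matches the target orientation - no stimulus, no explanation.
target = 70
current = (templates[target] + rng.normal(scale=1.0, size=50)).reshape(1, -1)
score = decoder.predict_proba(current)[0, list(decoder.classes_).index(target)]
print(f"feedback score for the {target}-degree pattern: {score:.2f}")
```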

Then, when the scientists asked the volunteers what they’d “learned,” the volunteers had no idea. But when the researchers asked them to pick a “best guess” orientation that seemed “right” to them, a significant percentage chose the orientation their brains had been trained to remember.

This isn’t exactly downloadable kung-fu – but it provides some of the first conclusive evidence that not only do repeated visual stimuli help sculpt brain networks – direct stimulation can also sculpt the way those networks learn about visual stimuli.

Could we use technology like this to teach people to feel less anxious or depressed? Or to understand concepts they have trouble grasping? And what might happen if we could harness this technology to our own mental output? What if we could literally dream our way to new skills, or more helpful beliefs?

We’re not quite there yet, but it may very well happen in our lifetime.

Musical Matchups

Our brains process music via different sensory pathways depending on what we think its source is, a new study finds.

Let me stop ya right there - "Stairway to Heaven" is off-limits.

As our brains organize information from our senses into a coherent representation of the world around us, they’re constantly hard at work associating data from one sense – say, sight – with data from another – say, hearing.

A lot of the time, this process is pretty straightforward – for instance, if we see a man talking and hear a nearby male voice, it’s typically safe for our brains to assume the voice “goes with” the man’s lip movements. But it’s also not too hard to trick this association process – as anyone who’s watched a good ventriloquism act knows.

Now, as the journal Proceedings of the National Academy of Sciences (PNAS) reports, a team led by HweeLing Lee and Uta Noppeney at the Max Planck Institute for Biological Cybernetics has discovered a way in which musicians’ brains are specially tuned to correlate information from different senses when their favorite instruments are involved.

Neuroscientists have known for years that the motor cortex in the brains of well-trained guitar and piano players devotes much more processing power to fingertip touch and finger movement than the same area of a non-musician’s brain does. But what this new study tells us is that the brains of pianists are also much more finely-tuned to detect whether a finger stroke is precisely synchronous with a sound produced by the touch of a piano key.

To figure this out, the team assembled 18 pianists – amateurs who practice on a regular basis – and compared their ability to tell synchronous piano tones and keystrokes from slightly asynchronous ones while they lay in an fMRI scanner (presumably by showing them this video). The researchers also tested the pianists’ ability to tell when lip movements were precisely synchronized with spoken sentences.

The team then compared the musicians’ test results against the results of equivalent tests taken by 19 non-musicians. What they found was pretty striking:

Behaviorally, musicians exhibited a narrower temporal integration window than non-musicians for music but not for speech. At the neural level, musicians showed increased audiovisual asynchrony responses and effective connectivity selectively for music in a superior temporal sulcus-premotor-cerebellar circuitry.

In short, pianists are much more sensitive to a slight asynchrony between a keystroke and a piano tone than non-musicians are – but this sensitivity doesn’t also apply to speech and lip movements. In other words, pianists’ brains are unusually sensitive to asynchrony only when it involves piano keystrokes.
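For a concrete (and entirely invented) illustration of what a “narrower temporal integration window” means, here’s a short Python sketch that fits a window width to simulated synchrony judgments:

```python
import numpy as np
from scipy.optimize import curve_fit

# Invented data: stimulus-onset asynchronies (ms) between keystroke and tone,
# and the proportion of trials judged "synchronous" at each asynchrony.
soa = np.array([-300, -200, -100, 0, 100, 200, 300], dtype=float)
pianists      = np.array([0.05, 0.15, 0.70, 0.95, 0.65, 0.10, 0.05])
non_musicians = np.array([0.20, 0.45, 0.85, 0.95, 0.80, 0.40, 0.15])

def window(x, sigma):
    """Gaussian-shaped synchrony judgment centered on true synchrony."""
    return np.exp(-x ** 2 / (2 * sigma ** 2))

for label, data in [("pianists", pianists), ("non-musicians", non_musicians)]:
    (sigma,), _ = curve_fit(window, soa, data, p0=[150.0])
    print(f"{label}: temporal integration window ~ {sigma:.0f} ms")
```

A narrower fitted window for the pianists would mirror the quoted finding – but only for music, not for speech.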

Another important finding is that the researchers could predict how sensitive the musicians would be to asynchrony based on the activity the fMRI scanner detected in their motor cortex:

Our results suggest that piano practicing fine tunes an internal forward model mapping from action plans of piano playing onto visible finger movements and sounds.

This means there’s a direct link between inter-neural coordination and ear-eye coordination. I don’t know about you, but I think that’s pretty incredible.

The researchers hope that as they study similar data from musicians who work with other instruments, they’ll come to better understand how our brains learn to associate stimuli from one sense with information from another – and maybe even how they learn when and when not to “sync up” these stimuli in our subjective experience of reality.

It’s too bad we can’t hook up the brain of, say, Mozart or Hendrix to an fMRI scanner – who knows what amazing discoveries we might make. But even so, I’m sure you can think of some living musical geniuses whose brains you’d like to see in action.

Harry Potter and the Nature of the Self

Hooray for Google Image Search!

Yup, this is what we’re doing today. I finally got to see Deathly Hallows Part 2, and it got me thinking about neuroscience like frickin’ everything always does, and I came home and wrote an essay about the nature of consciousness in the Harry Potter universe.

And we’re going to talk about it, because it’s the holidays and can we please just pull it together and act like a normal family for the length of one blog post? Thank you. I really mean it. Besides, I guarantee you that this stuff is gonna bug you too once I’ve brought it up.

So in the movie, there’s this concept of Harry and Voldemort sharing minds – mental resources: each of them can occasionally see what the other one sees, sometimes even remember what the other one remembers.

That idea is not explored to anywhere near a respectable fraction of its full extent.

First of all, are these guys the only two wizards in history who this has happened to? Yeah, I’m sure the mythology already has an answer for this – one that I will devote hours to researching just as soon as that grant money comes through. Ahem. Anyway, the odds are overwhelming that at least some other wizards have been joined in mental pairs previously – I mean, these are guys who can store their subjective memories in pools of water to be re-experienced at will; you can’t tell me nobody’s ever experimented; bathed in another person’s memories; tried to become someone else, or be two people at once. Someone, at some point, must’ve pulled it off. Probably more than one someone.

OK, so there’ve been a few pairs of wizards who shared each others’ minds. Cool. Well, if two works fine, why not three? Hell, why not twelve, or a thousand? With enough know-how and the right set of minds to work with, the wizarding world could whip us up a Magic Consciousness Singularity by next Tuesday.

But there’s the rub: Who all should be included in this great meeting of the minds? Can centaurs and house-elves join? What about, say, dragons, or deer, or birds? Where exactly is the cutoff, where the contents of one mind are no longer useful or comprehensible to another? As a matter of fact, given the – ah – not-infrequent occurrence of miscommunication in our own societies, I’d say it’s pretty remarkable that this kind of mental communion is even possible between two individuals of the same species.

Which brings us to an intriguing wrinkle in the endless debate about qualia – those mental qualities like the “redness” of red, or the “painfulness” of pain, which are only describable in terms of other subjective experiences. Up until now, of course, it’s been impossible to prove whether Harry’s qualia for, say, redness are exactly the same as Voldemort’s – or to explain just how the concept of “exactly the same” would even apply in this particular scenario. But now Harry can magically see through Voldemort’s eyes; feel Voldemort’s feelings – he can experience Voldemort’s qualia for himself.

Ah, but can he, really? I mean, wouldn’t Harry still be experiencing Voldemort’s qualia through his own qualia? Like I said, this is a pretty intriguing wrinkle.

The more fundamental question, though, is this: What does this all tell us about the concept of the Self in Wizard Metaphysics? (It’s capitalized because it’s magical.) Do Harry and Voldemort together constitute a single person? A single self? Is there a difference between those two concepts? Should there be?

I don’t ask these questions idly – in fact, here’s a much more pointed query: What do we rely on when we ask ourselves who we are? A: Memories, of course; and our thoughts and feelings about those memories. Now, if some of Harry’s thoughts and feelings and memories are of things he experienced while “in” Voldemort’s mind (whatever that means) then don’t some of Voldemort’s thoughts and feelings and memories comprise a portion of Harry’s? You can see where we run into problems.

Just one last question, and then I promise I’ll let this drop. When you read about Harry’s and Voldemort’s thoughts and feelings and memories, and you experience them for yourself, what does that say about what your Self is made of?

I’ll be back next week to talk about neurons and stuff.
