The Brain Lab Tour

This past weekend, I got to visit one of the coolest places I’ve ever seen: the UCLA Laboratory of Neuro Imaging (LONI). So just for today, I’m gonna take a break from news reporting, and tell you a little about what goes on inside an actual cutting-edge neuroscience lab. Sound good? OK, let’s go!

I'd be okay with just bringing a tent and camping out here.

I’m not sure quite what I was expecting to see as I stepped through the lab’s electronically locked door – certainly not the roomful of clean, open-walled work areas that greeted me. I might’ve been standing in a sleek law office, or an advertising agency – except that the flatscreens adorning the walls displayed colorful 3-D brain maps and reams of dense scientific data.

Imagine being five years old, celebrating Christmas morning at Disneyland – you get the idea.

But before I could start running around making googly-eyes at everything, it was time to meet my host – the delightful Eileen Luders, who’d offered to give me a tour of the lab when I’d gushed about my enthusiasm for her work. Eileen studies neural correlates of meditative states, and she’s also interested in isolating tiny structural variations that correspond to specific kinds of intelligence. More on this awesomeness shortly.

After introductions and a bit of happy chitchat, Eileen led me to a small screening room, whose entire front wall was an enormous wraparound high-resolution video screen. From the control booth, a lab technician dimmed the lights and played us a promo video that was ten minutes of pure heaven. We flew through huge, detailed 3-D brain atlases, watching neural pathways assemble and disassemble before our eyes. We plunged into the brains of schizophrenics and drug abusers and meditators, as data from fMRI and DTI and OTI exploded into multicolored digital sculptures of these brains’ structures and functions.

When it was over (all too soon), I asked Eileen what the lab normally uses this room for. “Mostly meetings,” she said with a chuckle.

As we strolled back to her office, she explained the principle on which this lab works: all the scans are processed by one huge supercomputer array housed in a well-locked room. All a team (or scientist) needs to perform a basic scan is the lab’s permission, about $600, and the time and technical know-how to program the scan they want and parse its results into meaningful data.

Researchers do most of this coding and data analysis from the comfort of the cubicle-esque work areas that fill most of the lab – and this non-centralization frees up more time on the scanners for other researchers, which keeps the lab more affordable and efficient for everyone.

And then we got to talking about some really cool stuff.

I asked Eileen what had gotten her so interested in the neuroscience of meditative states, and she told me she’s always been fascinated by the chasm between subjective experience (e.g., learning how to meditate) and objective science (e.g., what happens in our brains when we meditate). So, she’s been working to help narrow that gap – to study the brain activity of meditators as they meditate, give them feedback about the results – and essentially do her best to act as a fairly seamless translator between mind and machine (and vice versa).

I asked her what it was like having monks visit the lab. She told me two things: “They’re incredibly sharp; incredibly present,” she said. “And they were really excited to see if the bathroom was as cool as everything else.” In short, monks frickin’ rule.

Thinking about meditation, I wondered aloud whether singing and chanting might reflect an inherently different cognitive process from speech. There’s some (very preliminary) research suggesting that certain tones and vocalizations (such as chanting “aum” or humming a major scale) may help modulate patterns of neural activity far more directly than words can. Eileen mentioned that she’s done a bit of research on yogic chanting that might point in this direction. (Too bad we can’t time-travel back to 1968 and put these guys in one of her scanners.)

Anyway, it’s easy to see why Eileen’s also interested in finding neural correlates for specific kinds of intelligence. This got us talking about one of her other pet projects: looking for neural correlates of gender differences. She pointed out that women’s brains are, on average, a little smaller than men’s – “But when you adjust a male and female brain to equal size,” she said, “the differences aren’t nearly as obvious as they might appear at first glance.”  (A few days later, she sent me a published, peer-reviewed paper she’s written on this topic; I’m looking forward to diving into it.)

By this time, it was starting to get late, so Eileen offered me a quick tour of the lab before rushing off to do more Awesome Neuroscience Stuff. We peeked through the window of the supercomputer room, where multicolored lights flickered on rows of imposing black towers. We poked our heads into the wet lab, where neuroanatomists actually freeze and dissect brains. We stopped by a few workstations where technicians were busy designing scanning programs or analyzing their output. I wish I’d been able to take photos.

And then it was time to say goodbye. As Eileen and I shook hands at the doorway, I couldn’t help feeling that I wanted to give her a hug – to hug the whole lab, and everyone in it. It just made me so happy to think that there are other people like us – people filled with a deep yearning to understand what’s going on inside our heads, and just how it works – people driven to dedicate massive amounts of time and money to finding the pieces of this crazy puzzle, and starting to fit a few of those pieces together.

But most of all, I’m so glad that those people are kind, and friendly, and every bit as geeky as I am. Neuroscience FTW!

Babies, Brains, and Bilingualism

If you’d like to learn a new language, pay attention to the background noise, a new study suggests.

"Yo sé más idiomas que tú!" ("I know more languages than you!")

See, each person’s brain responds a little differently to different types of sounds. Yours is most sensitive to slight variations in your native tongue (which you hear as accents), and less sensitive to variations in other languages. If you’re musically inclined, your brain is highly responsive to slight changes in rhythm and pitch. But when it comes to, say, ceiling fans, it’s unlikely that you could tell them apart by sound alone.

None of this is hard-wired, though – when we’re born, we’re extraordinarily sensitive to just about every sound we hear; as we grow, our brains become “tuned in” to some types of sounds more than others.

Late in their first year of life, babies in monolingual environments become more sensitive to sound differences within their own language, while the brains of babies raised bilingually continue to respond to speech in both languages:

For instance, between 8 and 10 months of age babies exposed to English become better at detecting the difference between “r” and “l” sounds, which are prevalent in the English language. This is the same age when Japanese babies, who are not exposed to as many “r” and “l” sounds, decline in their ability to detect them.

In other words, when non-native speakers have persistent trouble pronouncing certain words, it may be because their brains actually can’t process the difference between, say, the “r” and “l” sounds. To them, it may sound like English speakers are emphasizing an imaginary difference.

As the Journal of Phonetics reports, a team led by the University of Washington’s Patricia Kuhl fitted babies with special caps that measure electrical activity across their scalps (i.e., EEG devices). Then they played randomized English and Spanish syllables to babies of various ages:

For example, a sound that is used in both Spanish and English served as the background sound and then a Spanish “da” and an English “ta” each randomly occurred 10 percent of the time as contrasting sounds. If the brain can detect the contrasting sound, there is a signature pattern called the mismatch response that can be detected with the EEG.
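For the programming-inclined, the trial structure described in that quote is a classic “oddball” design, and it’s simple to sketch. Here’s a toy Python version – the trial labels, counts, and random seed are my own placeholders for illustration, not the study’s actual stimulus setup:

```python
import random

def oddball_sequence(n_trials=300, seed=42):
    """Build a randomized oddball trial list: ~80% shared "background"
    syllable, ~10% Spanish "da" deviants, ~10% English "ta" deviants."""
    rng = random.Random(seed)
    n_deviant = n_trials // 10  # each contrasting sound occurs 10% of the time
    trials = (["background"] * (n_trials - 2 * n_deviant)
              + ["spanish_da"] * n_deviant
              + ["english_ta"] * n_deviant)
    rng.shuffle(trials)
    return trials

seq = oddball_sequence()
print(seq.count("spanish_da") / len(seq))  # → 0.1
```

The point of the rare, shuffled deviants is that a brain which can hear the contrast produces that “mismatch response” only on deviant trials – so the EEG itself tells you whether the distinction is being processed.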

The researchers found that babies less than eight months old showed equal brain activity for both languages – but by ten months of age, the brains of bilingual babies responded to both syllables, while those of monolingual babies responded only to the English sounds. So it’s pretty clear that those few months are a critical period in the brain’s language development.

This discovery has a wide range of exciting implications. For one thing, it explains why immersion is such a powerful tool for learning a new language – if your brain is bathed in a constant stream of foreign pronunciation, it’s hard not to become at least a little more sensitive to its sounds. Even if you don’t pick up a lot of new vocabulary, your pronunciation and comprehension are bound to improve.

At least, that’s what happened for the babies in the study:

The researchers followed up with the parents when the babies were about 15 months old to see how many Spanish and English words the children knew. They found that … the size of the bilingual children’s vocabulary was associated with the strength of their brain responses in discriminating languages at 10-12 months of age.

Another intriguing implication involves the treatment of autism. Some research suggests that autism-related language problems may be linked to early-life exposure to excessive background noise, which causes the brain to treat, say, ceiling fan noise as if it’s just as relevant as speech.

Even for non-autistic children, though, this research suggests there’s hope for kids who have trouble learning their native language, or who’d like to become bilingual. By combining the latest neurological data with training and exposure exercises, it may be possible to reopen the mind’s sensitivity to a wider range of sounds.

As for me, I’m planning to start my advanced Rosetta Stone Ceiling Fan course as soon as it arrives.

