Brain Videos!


For the first time ever, scientists have reconstructed moving images from recordings of the brain’s visual pathway, a new study reports.

The next step is obviously to develop contact lenses that a computer can wear.

By recording fMRI scans of volunteers’ brains as they watched various movie clips, the scientists were able to correlate patterns of brain activity with certain aspects of the visual images – like, say, coordinates or colors. A specially designed software program then looked back through the brain scans and assembled its own composite video clips that corresponded to the recorded patterns of activity.

This might sound a little confusing, so let’s break the process down step-by-step.

As the journal Current Biology reports, a team led by UC Berkeley’s Jack Gallant began by showing movie trailers to volunteers as they lay in an fMRI scanner. The scans were fed into a computer system that recorded patterns in the volunteers’ brain activity. Then the computer was shown 18 million seconds of random YouTube video, and it used the patterns it had learned from the volunteers to simulate the brain activity each new clip would evoke.
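If you like to see ideas as code, that first step is basically an “encoding model”: learn a mapping from features of the video to each voxel’s response, then run the mapping forward on new clips. Here’s a minimal sketch of that idea in Python – the array names, sizes, and random stand-in data are all mine, and the actual study used much richer motion-energy features:

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Hypothetical stand-ins: features extracted from the training movies, and the
# BOLD response recorded from each voxel while the volunteers watched them.
n_timepoints, n_features, n_voxels = 600, 50, 200
clip_features = rng.normal(size=(n_timepoints, n_features))
bold = rng.normal(size=(n_timepoints, n_voxels))

# Learn one regularized linear map per voxel, from stimulus features to response.
encoder = Ridge(alpha=10.0).fit(clip_features, bold)

# Run the map forward on features from *new* video (e.g. the YouTube clips) to
# get the kind of simulated brain activity described above.
new_clip_features = rng.normal(size=(100, n_features))
predicted_bold = encoder.predict(new_clip_features)  # shape: (100, n_voxels)
```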

The computer then chose the 100 YouTube clips that corresponded most closely to the brain activity observed in, say, “volunteer A,” and combined them into a “superclip.” Although the results are blurry and low-resolution, they match the video in the trailers pretty well.
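And here’s a toy version of that matching step – again, the helper function and array shapes are hypothetical, and a plain correlation score is standing in for the study’s fancier Bayesian averaging:

```python
import numpy as np

def build_superclip(observed_bold, predicted_bold, clip_frames, top_k=100):
    """Average the frames of the clips whose predicted activity best matches
    the observed activity.

    observed_bold:  (n_voxels,) activity while the subject watched one segment
    predicted_bold: (n_clips, n_voxels) simulated activity for each candidate
    clip_frames:    (n_clips, height, width) a representative frame per clip
    """
    # Score each candidate by correlating its predicted activity pattern
    # with the pattern actually observed in the scanner.
    obs = observed_bold - observed_bold.mean()
    pred = predicted_bold - predicted_bold.mean(axis=1, keepdims=True)
    scores = (pred @ obs) / (np.linalg.norm(pred, axis=1) * np.linalg.norm(obs))

    # Keep the top matches and blend them into one composite frame.
    best = np.argsort(scores)[-top_k:]
    return clip_frames[best].mean(axis=0)
```

Blending a hundred only-roughly-matching clips is also a big part of why the reconstructions come out so blurry.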

The human visual pathway works much like a camera in some ways – retinotopic maps, for instance, line up fairly well with the Cartesian (X,Y) coordinates used by TV screens. But in other respects – color and movement, for example – there’s no direct resemblance between the way a pattern of brain activity looks to us on a scan and what it actually encodes. So, as in so much of neuroscience, this study focused on finding correlations between certain brain activity patterns and certain aspects of visual data.

Up until recently, one major problem with finding these correlations was that even the quickest fMRI scans – which record changes in blood flow throughout the brain – couldn’t keep up with the rapid changes in neuronal activity patterns as volunteers watched video clips. The researchers got around this problem by designing a two-stage model that analyzed both neural activity patterns and changes in blood flow:

The model describes fast visual information and slow hemodynamics by separate components. We recorded BOLD signals in occipitotemporal visual cortex of human subjects who watched natural movies and fit the model separately to individual voxels. Visualization of the fit models reveals how early visual areas represent the information in movies.
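As I read it, the “separate components” are a fast, stimulus-driven part and a slow, blood-flow part. Here’s a toy illustration of why that second stage matters – the HRF below is a textbook-style single-gamma stand-in, not the paper’s actual hemodynamic model:

```python
import numpy as np

def toy_hrf(t):
    """Crude single-gamma hemodynamic response peaking around 5 s.
    An illustrative assumption, not the paper's hemodynamic model."""
    return (t ** 5) * np.exp(-t) / 120.0

t = np.arange(0.0, 20.0, 1.0)  # 20 s of HRF, sampled once per second
hrf = toy_hrf(t)

# Stage 1: fast visual information, e.g. a feature that flicks on for 2 s.
feature = np.zeros(60)
feature[10:12] = 1.0

# Stage 2: slow hemodynamics. The signal the scanner sees is a delayed,
# smeared-out version of the fast feature timecourse.
predicted_bold = np.convolve(feature, hrf)[:len(feature)]
```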

In plainer terms, the system recorded BOLD signals voxel by voxel – each voxel tying the aggregate activity of a small group of neurons to a set of 3D coordinates – which gave the scientists plenty of info to feed the computer for its reconstructions. As this wonderful Gizmodo article explains it:

Think about those 18 million seconds of random videos as a painter’s color palette. A painter sees a red rose in real life and tries to reproduce the color using the different kinds of reds available in his palette, combining them to match what he’s seeing. The software is the painter and the 18 million seconds of random video is its color palette. It analyzes how the brain reacts to certain stimuli, compares it to the brain reactions to the 18-million-second palette, and picks what more closely matches those brain reactions. Then it combines the clips into a new one that duplicates what the subject was seeing. Notice that the 18 million seconds of motion video are not what the subject is seeing. They are random bits used just to compose the brain image.

In short, the composite clip isn’t an actual video of what a volunteer saw – it’s a rough reconstruction of what the computer thinks that person saw, based on activity patterns in the volunteer’s visual cortex.

Even so, I think it’s totally feasible that (as this article suggests) systems like this could someday let us re-watch our dreams when we wake up. My guess, though, is that we’ll be surprised at how swimmy and chaotic those visuals actually are – I mean, have you ever tried to focus on a specific object in a dream?

Then again, I would totally buy a box set of Alan Moore’s dreams.
