Posts Tagged ‘ technosophy ’

Podcast 1: Our Interview With Joshua Vogelstein

Here it is – the first Connectome podcast!

Click here to subscribe in iTunes.

Join us as we talk with Joshua Vogelstein, a leading connectomics researcher, about the Open Connectome Project, an international venture to make data on neural connectivity available to everyone, all over the world. It’s like Google Maps for your brain.

Here’s a direct link to download the mp3.

We’ve learned a lot while working on this first episode, and future ones will be much cleaner and higher-fi.

Anyway, enjoy!

Clarke’s Third Law

Today I want to take a break from breaking news and tell you about the new love of my life: my Emotiv EPOC neuroheadset.

Love at first sight.

This thing costs $299, and it is worth every penny. It uses 14 sensors positioned around my scalp to create a wireless EEG interface between my brain and my computer. I can move objects onscreen by thinking about it. I can click the mouse by thinking “click.” I can watch real-time video maps of my brain activity as I think about different ideas. I can summon specific feelings to navigate through photo albums sorted by emotion.
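For the technically curious, here’s a rough idea of what’s going on under the hood when the headset draws those real-time activity maps. This is just a minimal sketch with simulated data – it doesn’t use Emotiv’s actual SDK – and the sample rate and all the numbers are assumptions for illustration: take a short buffer of 14-channel EEG and estimate how much power each channel carries in a frequency band of interest (the alpha band, in this case).

```python
# A minimal sketch (not the Emotiv SDK) of the kind of processing behind a
# real-time EEG activity map: estimate alpha-band (8-12 Hz) power on each of
# 14 channels from a short buffer of samples. The data here is simulated noise.
import numpy as np

FS = 128          # sample rate in Hz (a common consumer-EEG rate; an assumption)
N_CHANNELS = 14   # the EPOC's sensor count
WINDOW_S = 2      # length of the analysis window in seconds

def band_power(buffer, fs, lo, hi):
    """Mean spectral power between lo and hi Hz for each channel (rows)."""
    freqs = np.fft.rfftfreq(buffer.shape[1], d=1.0 / fs)
    spectrum = np.abs(np.fft.rfft(buffer, axis=1)) ** 2
    mask = (freqs >= lo) & (freqs <= hi)
    return spectrum[:, mask].mean(axis=1)

# Simulated 2-second buffer of 14-channel EEG (noise standing in for microvolts).
rng = np.random.default_rng(0)
eeg = rng.normal(0.0, 10.0, size=(N_CHANNELS, FS * WINDOW_S))

alpha = band_power(eeg, FS, 8, 12)
for ch, power in enumerate(alpha):
    print(f"channel {ch:2d}: alpha power ~ {power:.1f}")
```

A real application would run this in a loop over a live stream and paint each channel’s power onto a scalp map.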

In short, the future is here, and it is awesome.

Brain-machine interfaces aren’t exactly earth-shaking news anymore, I know. I’ve written here about thought-controlled cursors, and here about sensory feedback systems that allow monkeys to control virtual hands and literally feel virtual textures. But this device makes this technology available and (relatively) affordable for me – and for you.

And we can do anything we want with it.

Are ya with me here?

Anything.

For instance, I spent most of last night watching waves of neural communication coruscate across my brain as I meditated, imagined, planned, observed, understood, realized, believed and calculated. I watched my left and right hemispheres signal to one another, like two great whales exchanging songs across an ocean, as they worked together to complete tasks. I watched tsunamis of synchronized activation blaze across the screen as disparate thoughts coalesced into dawning realizations. I watched congeries of light dance in the darkness as I thought, “I believe” or “I trust” or “I love.”

And that was just our first night together.

This is what I was talking about when I said we need devices that create real-time feedback loops between our brains and our computers, so we can watch the patterns our thoughts generate as we’re thinking them – the most intimate link ever between human consciousness and technology. We’re hurtling toward the culmination of a process that began millions of years ago in Africa, when one ape, a little smarter than his cousins, looked down at a rock and thought, “I want to use that for something.”

So, guess what this device is being used for right now. Well, for one thing, it’s providing easy computer access to people with physical disabilities, which is fantastic – but other than that, it’s mainly being marketed as a new gimmick for controlling video games.

Come on, people – we can dream so much bigger than this.

By way of inspiration, here’s my all-time favorite short sci-fi story, Exhalation by Ted Chiang. It’ll tell you everything you want to know about my aspirations.

And here’s a little song to set the mood.

Welcome to the Age of Technosophy. Let’s see where our imaginations take us.

Brain Scans & Lucid Dreams

The brain activity of lucid dreamers – people who become aware that they’re in a dream state – shows some interesting similarities with that of people who are awake, says a new study.

"Ahh - nothing puts me to sleep like a roomful of bright lights!"

By studying the brain activity of lucid dreamers with electroencephalograms (EEGs) and fMRI scans, researchers have found that activity in the somatosensory and motor cortices – regions crucial for touch and movement, respectively – shows activation patterns during lucid dreams very similar to those these regions display when people make or imagine the same movements while awake.

Though dreams have fascinated philosophers and scientists since the dawn of history – some of the earliest written texts are dream-interpretation handbooks from ancient Egypt and Babylon – it’s only in recent years that neuroscience has begun to advance the study of dreams beyond Freudian theorizing and into the realm of hard data.

In the early 1950s, scientists identified several stages of sleep, including rapid eye movement (REM) sleep – the stage in which dreaming takes place; and in 1959, a team discovered a certain class of brain waves – ponto-geniculo-occipital (PGO) waves – which only appear during REM sleep.

Then, in 2009, an EEG study found that lucid dreams exhibit slightly different wave patterns from those associated with ordinary REM sleep – and later that year, another study proposed an astonishing theory: that REM sleep might be a form of proto-consciousness, which performs maintenance and support duty for the “full” consciousness that took over for it at some point in our evolution.

Now, as the journal Current Biology reports, a team led by Michael Czisch at Germany’s Max Planck Institute has made a new leap forward in dream research. By concentrating their research on lucid dreams, the team were able to map the neural correlates of controlled and remembered dream content:

Lucid dreamers were asked to become aware of their dream while sleeping in a magnetic resonance scanner and to report this “lucid” state to the researchers by means of eye movements. They were then asked to voluntarily “dream” that they were repeatedly clenching first their right fist and then their left one for ten seconds.

This approach has provided some surprising new insights into the ways our brains function in a dream state. By having the subjects retell their lucid dreams, the researchers were able to correlate recorded activation patterns with specific actions the subjects had “performed” while asleep:

A region in the sensorimotor cortex of the brain, which is responsible for the execution of movements, was actually activated during the dream. This is directly comparable with the brain activity that arises when the hand is moved while the person is awake. Even if the lucid dreamer just imagines the hand movement while awake, the sensorimotor cortex reacts in a similar way.

This confirms that the brain’s sensorimotor areas are actively involved in planning and executing movements in dreams, rather than just passively observing events.
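To make that comparison concrete, here’s a toy illustration – not the Max Planck team’s actual analysis pipeline – of how you can quantify “similar activation patterns”: treat each condition’s activation map as a vector of voxel values and compute a simple Pearson correlation between them. The data below is randomly generated purely for illustration.

```python
# Toy illustration (not the researchers' pipeline): quantify how similar two
# activation maps are - e.g. sensorimotor activity during a dreamed fist
# clench vs. a real one - with a simple voxelwise Pearson correlation.
import numpy as np

rng = np.random.default_rng(1)
n_voxels = 500

awake_clench = rng.normal(size=n_voxels)                               # activation while awake
dreamed_clench = awake_clench + rng.normal(scale=0.5, size=n_voxels)   # similar pattern plus noise
unrelated = rng.normal(size=n_voxels)                                  # an unrelated map for contrast

def similarity(a, b):
    """Pearson correlation between two flattened activation maps."""
    return float(np.corrcoef(a, b)[0, 1])

print("dreamed vs. awake:  ", round(similarity(dreamed_clench, awake_clench), 2))  # high
print("unrelated vs. awake:", round(similarity(unrelated, awake_clench), 2))       # near zero
```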

What’s even more exciting is that, in light of other new technologies like the thought-video recorder, it looks like we may be able to record and play back our thoughts and dreams within the next few decades.

I think this research reflects an even more fundamental shift in thinking about neuroscience, though: as we unravel more and more of the neural correlates of phenomena like sleep and consciousness, we’re coming to realize just how vast a chasm yawns between scientific data and subjective experience.

Before long, it’s going to become essential for scanners and volunteers to be involved in the same continuous feedback loop – one in which the subjects can watch, in real time, the neural correlates of their thoughts and feelings from moment to moment, and adjust them accordingly to produce useful results.

Ambitious? I guess so. But a guy’s gotta have a dream.

The Sound of Fear

A certain inaudible sound frequency may directly trigger feelings of “creepiness” and physical symptoms of fear, one scientist says.

Don't look now, but I think I see a g-g-g-gh-gh-sound wave!

A sound frequency of around 19 Hz – just below the range of human hearing – has been detected in several “haunted” places, including a laboratory where staff had reported inexplicable feelings of panic, and a pub cellar where many people have claimed to see ghosts.

Though no peer-reviewed studies have examined this phenomenon yet, I think it’s still intriguing enough to be worth talking about – and after all, it is that special time of year. So huddle up close, and let me tell you a tale – the tale of… The Frequency of Fear!

Back in the 1980s, an engineer named Vic Tandy began hearing strange stories from his otherwise-scientifically minded coworkers: whenever they spent time working in a certain laboratory, they’d experience inexplicable feelings of unease, and glimpses of ghostly apparitions.

At first, Tandy chalked these reports up to stress, or to the irritating wheeze of life-support machines that permeated the building. But one foreboding night, as Tandy toiled alone in the lab, he suddenly broke into a cold sweat, and felt the hairs on his neck stand up. He was overcome with the feeling that he was being watched. From the corner of his eye, he glimpsed a sinister gray form moving toward him – but when he turned to face it, it vanished. Tandy fled the lab for the safety of his home, his keen scientific mind churning, asking what could have triggered this bizarre episode.

The next day, Tandy happened to catch sight of a clue: in the lab, he noticed that a foil blade clamped in a vice was vibrating at a rapid rate. Fetching his trusty frequency meter, he discovered that the sound wave behind these vibrations was bouncing off the walls of the lab, and that its peak intensity was focused in the room’s center. Its frequency was 19 Hz – slightly below the minimum human-audible frequency of 20 Hz, but easy for a human body to feel as a subtle vibration.
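In effect, Tandy’s frequency meter was doing something like the following sketch: find the dominant frequency in a vibration recording and check whether it falls below the roughly 20 Hz floor of human hearing. The signal here is synthesized, with the numbers chosen just for illustration.

```python
# A rough sketch of what a frequency meter does: find the dominant frequency
# in a vibration recording and check whether it sits below the ~20 Hz
# threshold of human hearing. The signal here is synthesized.
import numpy as np

FS = 1000                      # sample rate in Hz
t = np.arange(0, 5, 1.0 / FS)  # five seconds of samples

# A 19 Hz standing-wave component buried in broadband room noise.
signal = np.sin(2 * np.pi * 19 * t) + 0.3 * np.random.default_rng(2).normal(size=t.size)

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(signal.size, d=1.0 / FS)
peak_hz = freqs[np.argmax(spectrum)]

print(f"dominant frequency: {peak_hz:.1f} Hz")
print("infrasonic (below ~20 Hz)?", peak_hz < 20)
```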

Tandy began to delve into ancient forbidden texts (OK, actually he started reading biology papers) and learned that frequencies near this range can cause animals to behave nervously, hyperventilate, stumble dizzily, and even have trouble seeing clearly.

These animals’ sensitivity to low-frequency vibrations likely evolved as an early-warning system for earthquakes, tsunamis and related disasters, which may explain why animals flee the sites of such disasters en masse long before humans suspect anything’s the matter.

Over the years, subsequent investigations have found that similar frequencies occur in other reputedly haunted spots, which seems to indicate that we humans may be sensitive to these frequencies as well.

If you ask me, though, the scariest part of this story is that as you read this, scientists with less noble purposes could potentially be developing devices to project these frequencies directly into a target’s body. Not to be paranoid here, but I’m not too keen on the idea of a fear ray. Just putting that out there.

On the whole, I think right now is a pretty awesome time to be alive – we’ve got mind-controlled computers, we’ll soon be able to record videos of our thoughts and dreams, and it won’t be long before we can see, hear and even touch virtual worlds. But we’ve also learned that magnetic stimulation can make people want to lie, that electrical stimulation can alter our decision-making processes, and that sound waves can make us feel pain and fear.

We’re on the brink of an unprecedented epoch in human history, when miracle-working may quite literally lie within any person’s grasp – but with that power also comes the potential to create truly unimaginable hells at the push of a button. All I can say is, I hope with all my might that our better nature wins out.

Because, I don’t know about you guys, but I can hardly wait to see what the future holds.

Rhythms of Memory

Our neurons learn best when they’re working on the same wavelength – literally, says a new study.

What are these neurons doing, you ask? Well, um...it's grown-up neuron stuff, OK?

For every synapse – every signaling junction between neurons – there’s a certain firing frequency that increases signal strength the most. In short, neurons work like tiny analog antennas – tuning into incoming signals and passing along the clearest, strongest ones as electrochemical messages.

This represents a huge breakthrough in our understanding of how our brains work. Neuroscientists have known for decades that we (and other animals) learn by strengthening connections between certain neurons – i.e., the more a pair of neurons communicate with each other, the more receptive and sensitive they become to each other’s electrochemical signals. But one central mystery remained: what makes some neurons more likely to “hear” signals from certain other neurons in the first place?

Now we know that every synapse has a frequency “sweet spot” – a certain frequency to which it’s most responsive. The farther away from a neuron’s nucleus a certain synapse is, the higher the frequency of its sweet spot. Thus, different parts of a neuron are tuned to different signal wavelengths.
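Here’s a toy model of that idea – emphatically not the paper’s actual equations, just assumed numbers – in which each synapse’s plasticity response peaks at a “sweet spot” frequency that climbs with its distance from the soma.

```python
# Toy model (assumed numbers, not the paper's equations): each synapse
# responds most strongly to a "sweet spot" stimulation frequency, and that
# sweet spot rises with the synapse's distance from the soma.
import numpy as np

def sweet_spot_hz(distance_um, base_hz=20.0, gain_hz_per_um=0.2):
    """Assumed linear relationship between distance from the soma and preferred frequency."""
    return base_hz + gain_hz_per_um * distance_um

def plasticity_change(stim_hz, distance_um, width_hz=15.0):
    """Gaussian-shaped response: biggest strengthening near the sweet spot."""
    peak = sweet_spot_hz(distance_um)
    return np.exp(-((stim_hz - peak) ** 2) / (2 * width_hz ** 2))

for distance in (50, 150, 300):   # micrometers from the soma (illustrative values)
    responses = {hz: plasticity_change(hz, distance) for hz in (10, 30, 50, 80)}
    best = max(responses, key=responses.get)
    print(f"synapse {distance} um from the soma: responds best to ~{best} Hz stimulation")
```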

As the journal Frontiers in Computational Neuroscience reports, a team led by UCLA’s Arvind Kumar and Mayank Mehta set out to expand on previous studies, which had found that stimulating neurons with high-frequency electrical pulses (or “spikes”) – around 100 per second – tended to strengthen synaptic connections, while stimulating them with low-frequency pulses – around one per second – tended to weaken those connections. (Click here for a straightforward explanation of how electrical pulses help transmit signals between neurons.)

In the real world, however, neurons typically fire much quicker bursts – only about ten spikes at a time – at a rate of around 50 spikes per second. “Spike frequency refers to how fast the spikes come,” Mehta explains. “Ten spikes could be delivered at a frequency of 100 spikes a second or at a frequency of one spike per second.”
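That distinction between burst size and spike frequency is easy to see with timestamps: the same ten spikes can be packed into about a tenth of a second or stretched across ten seconds. A quick sketch:

```python
# Burst size vs. spike frequency: the same ten spikes can arrive within a
# tenth of a second (100 spikes/s) or be spread over ten seconds (1 spike/s).
def spike_times(n_spikes, rate_hz):
    """Timestamps (in seconds) of n regularly spaced spikes at the given rate."""
    interval = 1.0 / rate_hz
    return [i * interval for i in range(n_spikes)]

fast_burst = spike_times(10, 100)   # ten spikes at 100 spikes per second
slow_burst = spike_times(10, 1)     # the same ten spikes at 1 spike per second

print("100 Hz burst spans", fast_burst[-1], "s")  # 0.09 s
print("  1 Hz burst spans", slow_burst[-1], "s")  # 9.0 s
```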

Until now, neuroscientists lacked the technology to model the results of such brief firing bursts with much accuracy – but Mehta and Kumar designed a sophisticated computer model to test spike frequencies and burst lengths closer to real-world levels:

We computed the influence of these variables on the plasticity induced at a single NMDAR containing synapse using a reduced model that was analytically tractable, and these findings were confirmed using a detailed, multi-compartment model.

In other words, they created a simulation of an NMDAR, a certain type of receptor for the neurotransmitter glutamate. These receptors are crucial for strengthening synapses – and thus, for memory formation and learning.

Using their model, the researchers made four major new discoveries:

1) The more distant from a neuron’s soma (main body) a synapse is, the higher the spiking frequency to which it’s “tuned.”  In fact, the same frequency can strengthen or weaken a synaptic connection, depending on where on the neuron that particular synapse is located.

2) Regular, rhythmic pulses cause greater changes in synaptic strength (i.e., greater synaptic plasticity) than irregular bursts do.

3) Short bursts of spikes significantly raise the timing dependence of synaptic plasticity – in other words, the shorter the burst, the more important it is to get the frequency spot-on.

4) Once a synapse learns to communicate with another synapse, its optimal frequency changes – say, from 30 spikes per second to 24 spikes per second.

The internet is now abuzz with chatter about the implications of these results. One intriguing idea is that gradual “detuning” among synapses could underlie the process of forgetting – and that future drugs or electrical therapies could help “retune” some of these rhythms.

The research also raises the question of how incoming rhythms from our senses – especially light and sound – might directly impact these firing frequencies. We’ve known for years that some types of epileptic seizures can be triggered by light flashing at certain frequencies, and that music playing at certain rhythms can pump us up or calm us down. Is it so far-fetched to suggest that rhythms like these might also help shape our thoughts and memories?

On that note, I’m off to stare at some strobe lights and listen to four-on-the-floor dance music…for Science!

 

Virtual Touch

A new brain-machine interface allows minds to literally feel the texture of computer-generated objects, a recent paper reports.

Only the most cutting-edge CGI was used in the new interface.

This interface not only allows a monkey to control a virtual hand remotely by willing it to move – it also routes feedback on textures and vibrations to the somatosensory cortex, where that feedback is processed as sensations of touch.

Though mind-controlled robotic hands aren’t exactly breaking news anymore, most of those devices only provide visual feedback – in other words, the users of those robotic hands can’t actually feel the objects the hands touch. One recent project did use vibration feedback to help subjects sense the placement of a cursor, but that’s about as far as the idea had been taken.

But now, as the journal Nature reports, a team led by Duke University’s Miguel Nicolelis has created a brain–machine–brain interface (BMBI) that routes movement impulses from a monkey’s brain directly to a virtual hand, and routes tactile (touch) sensations from that hand directly into touch-processing regions of the monkey’s brain:

Here we report the operation of a brain–machine–brain interface (BMBI) that both controls the exploratory reaching movements of an actuator and allows signalling of artificial tactile feedback through intracortical microstimulation (ICMS) of the primary somatosensory cortex.

At the risk of sounding repetitive (I can’t help it; I’m so awestruck by this) the BMBI doesn’t involve any robotic hands – the entire interface takes place between the monkey’s brain and a virtual world created within a computer:

Monkeys performed an active exploration task in which an actuator (a computer cursor or a virtual-reality arm) was moved using a BMBI that derived motor commands from neuronal ensemble activity recorded in the primary motor cortex. ICMS feedback occurred whenever the actuator touched virtual objects. Temporal patterns of ICMS encoded the artificial tactile properties of each object.

The computer receives movement commands from 50 to 200 neurons in the monkey’s motor cortex, translating them into a variety of movements for a virtual “avatar” hand (which I’m picturing, of course, as huge and blue and stripey). As the virtual hand feels virtual objects, the system sends electrical signals down wires implanted into the monkey’s somatosensory cortex, where those signals are processed as touch sensations.
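For a sense of the loop’s overall shape, here’s a highly simplified sketch – every weight, firing rate and pulse pattern below is invented for illustration, not taken from the paper – in which a linear decoder turns the ensemble’s firing rates into a hand velocity, and touching a virtual object triggers an ICMS pulse pattern whose timing encodes that object’s texture.

```python
# Highly simplified sketch of a brain-machine-brain loop (all weights and
# pulse patterns invented for illustration): decode hand velocity from
# motor-cortex firing rates, and when the virtual hand touches an object,
# deliver an ICMS pulse pattern whose timing encodes that object's texture.
import numpy as np

rng = np.random.default_rng(3)
N_NEURONS = 100                                         # within the reported 50-200 range

decoder = rng.normal(scale=0.01, size=(2, N_NEURONS))   # linear map: firing rates -> (vx, vy)

# Hypothetical texture codes: inter-pulse intervals (ms) for the stimulator.
TEXTURE_PULSES_MS = {"coarse": [0, 10, 20, 30], "fine": [0, 50, 100, 150]}

def decode_velocity(firing_rates):
    """Turn an ensemble's firing rates (spikes/s) into a 2-D hand velocity."""
    return decoder @ firing_rates

def feedback_for(touched_object):
    """Return the ICMS pulse-timing pattern for the touched object's texture."""
    return TEXTURE_PULSES_MS[touched_object]

rates = rng.poisson(20, size=N_NEURONS).astype(float)   # one short bin of ensemble activity
vx, vy = decode_velocity(rates)
print(f"decoded velocity: ({vx:+.2f}, {vy:+.2f})")
print("ICMS pattern for a coarse object:", feedback_for("coarse"), "ms")
```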

The researchers rewarded the monkeys for choosing virtual objects with specific textures. In trials, it only took the monkeys a few tries to learn how to feel using the BMBI – one monkey got proficient after nine attempts; another one picked it up in four.

The researchers hope this technology can be used to create touch-sensitive prostheses for people with amputated or paralyzed limbs. Which sounds awesome – but why stop there? Why not create entirely new bodies from computer-generated touch sensations? Why not place our consciousnesses into virtual birds, or fish, or swarms of bees?

Maybe it’s just me, but I feel like Sgt. Pepper’s must be playing on continual repeat in a lot of these neuroscience labs.

 

Brain Videos!

For the first time ever, scientists have recorded video images from the brain’s visual pathway, a new study reports.

The next step is obviously to develop contact lenses that a computer can wear.

By recording fMRI scans of volunteers’ brains as they watched various movie clips, the scientists were able to correlate neuronal firing patterns with certain aspects of visual images – like, say, coordinates or colors. A specially designed software program then looked back through the brain scans and assembled composite video clips that corresponded to patterns in the volunteers’ brain activity.

This might sound a little confusing, so let’s break the process down step-by-step.

As the journal Current Biology reports, a team led by UC Berkeley’s Jack Gallant began by showing movie trailers to volunteers as they lay in an fMRI scanner. The scans were fed into a computer system that recorded patterns in the volunteers’ brain activity. Then, the computer was shown 18 million seconds of random YouTube video, for which it built up patterns of simulated brain activity based on the patterns it had recorded from the volunteers.

The computer then chose 100 YouTube clips that corresponded most closely to the brain activity observed in, say “volunteer A,” and combined them into a “superclip.” Although the results are blurry and low-resolution, they match the video in the trailers pretty well.

The human visual pathway works much like a camera in some ways – retinotopic maps, for instance, correspond fairly closely to the Cartesian (x, y) coordinates used by TV screens – but for other features, such as color and movement, there’s no direct resemblance between the way a pattern of brain activity looks on a scan and what it actually encodes. So – as in so much of neuroscience – this study instead focused on finding correlations between certain brain activity patterns and certain aspects of visual data.

Up until recently, one major problem with finding these correlations was that even the quickest fMRI scans – which record changes in blood flow throughout the brain – couldn’t keep up with the rapid changes in neuronal activity patterns as volunteers watched video clips. The researchers got around this problem by designing a two-stage model that analyzed both neural activity patterns and changes in blood flow:

The model describes fast visual information and slow hemodynamics by separate components. We recorded BOLD signals in occipitotemporal visual cortex of human subjects who watched natural movies and fit the model separately to individual voxels. Visualization of the fit models reveals how early visual areas represent the information in movies.

In other words, the system recorded both blood flow (BOLD) and voxelwise data, which map firing bursts from groups of neurons to sets of 3D coordinates. This gave the scientists plenty of info to feed the computer for its reconstructions. As this wonderful Gizmodo article explains it:

Think about those 18 million seconds of random videos as a painter’s color palette. A painter sees a red rose in real life and tries to reproduce the color using the different kinds of reds available in his palette, combining them to match what he’s seeing. The software is the painter and the 18 million seconds of random video is its color palette. It analyzes how the brain reacts to certain stimuli, compares it to the brain reactions to the 18-million-second palette, and picks what more closely matches those brain reactions. Then it combines the clips into a new one that duplicates what the subject was seeing. Notice that the 18 million seconds of motion video are not what the subject is seeing. They are random bits used just to compose the brain image.

In short, the composite clip isn’t an actual video of what a volunteer saw – it’s a rough reconstruction of what the computer thinks that person saw, based on activity patterns in the volunteer’s visual cortex.
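Stripped to its core, that “painter’s palette” step is a nearest-neighbor search: predict a brain-response pattern for every clip in the library, find the clips whose predictions best correlate with the measured response, and average them together. Here’s a toy version, with random vectors standing in for voxel patterns and clip indices standing in for actual video:

```python
# Toy version of the reconstruction step (random vectors stand in for voxel
# patterns; "clips" are just indices into a pretend library): find the
# library clips whose predicted brain responses best match the measured one,
# then average them into a composite.
import numpy as np

rng = np.random.default_rng(4)
n_clips, n_voxels = 1000, 200

library_responses = rng.normal(size=(n_clips, n_voxels))  # predicted response per library clip
true_clip = 42
measured = library_responses[true_clip] + rng.normal(scale=0.8, size=n_voxels)  # noisy measurement

def top_matches(measured, library, k=100):
    """Indices of the k clips whose predicted responses best correlate with the measurement."""
    lib = library - library.mean(axis=1, keepdims=True)
    meas = measured - measured.mean()
    scores = lib @ meas / (np.linalg.norm(lib, axis=1) * np.linalg.norm(meas))
    return np.argsort(scores)[::-1][:k]

best = top_matches(measured, library_responses)
composite = library_responses[best].mean(axis=0)   # stand-in for averaging the actual video frames
print("true clip ranked in the top 100?", true_clip in best)
```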

Even so, I think it’s totally feasible that (as this article suggests) systems like this could someday let us re-watch our dreams when we wake up. My guess, though, is that we’ll be surprised at how swimmy and chaotic those visuals actually are – I mean, have you ever tried to focus on a specific object in a dream?

Then again, I would totally buy a box set of Alan Moore’s dreams.
