Posts Tagged ‘consciousness’

Neuroscience Friends!

I’ve just returned from a thrilling weekend at the BIL Conference in Long Beach, California (yes, the pun on “TED” is very intentional) where I met all kinds of smart, fun people – including lots of folks who share my love for braaaiiins!

The conference was held in... The Future!

So I thought I’d introduce you guys to some of the friends I made. I think you’ll be as surprised – and as excited – as I am.

Backyard Brains
Their motto is “neuroscience for everyone” – how cool is that? They sell affordable kits that let you experiment at home with the nervous systems of insects and other creatures. They gave a super-fun presentation where I got to help dissect a cockroach and send electrical signals through its nerves.

InteraXon
They build all kinds of cutting-edge tools that let home users study their brain activity, and even control machines and art projects with it. Their founder, Ariel Garten, has a great TED talk here – I’ve rarely met anyone else who was so excited to have weird new neuroscience adventures.

Deltaself and Dangerously Hardcore
Two blogs by the very smart Naomi Most – the first is about how scientific data is changing the way we all understand our minds and bodies; the second is about hacking your own behavior to stay healthier and live better.

Halcyon Molecular
Their aim is to put the power to sequence and modify genomes in everyone’s hands within the next few decades. They’re getting some huge funding lately, and lots of attention in major science journals.

Bonus – XCOR Aerospace
They’re building a privately-funded suborbital spacecraft for independent science missions. If there’s anybody who can help us all join the search for alien life in the near future, I bet it’s these guys.

So check those links out and let me know what you think. I’d love to get these folks involved in future videos, especially if you’re interested in any of them.

Harry Potter and the Nature of the Self

Hooray for Google Image Search!

Yup, this is what we’re doing today. I finally got to see Deathly Hallows Part 2, and it got me thinking about neuroscience like frickin’ everything always does, and I came home and wrote an essay about the nature of consciousness in the Harry Potter universe.

And we’re going to talk about it, because it’s the holidays and can we please just pull it together and act like a normal family for the length of one blog post? Thank you. I really mean it. Besides, I guarantee you that this stuff is gonna bug you too once I’ve brought it up.

So in the movie, there’s this concept of Harry and Voldemort sharing minds and mental resources – each of them can occasionally see what the other one sees, and sometimes even remember what the other one remembers.

That idea is not explored to anywhere near its full extent.

First of all, are these guys the only two wizards in history who this has happened to? Yeah, I’m sure the mythology already has an answer for this – one that I will devote hours to researching just as soon as that grant money comes through. Ahem. Anyway, the odds are overwhelming that at least some other wizards have been joined in mental pairs previously – I mean, these are guys who can store their subjective memories in pools of water to be re-experienced at will; you can’t tell me nobody’s ever experimented; bathed in another person’s memories; tried to become someone else, or be two people at once. Someone, at some point, must’ve pulled it off. Probably more than one someone.

OK, so there’ve been a few pairs of wizards who shared each other’s minds. Cool. Well, if two works fine, why not three? Hell, why not twelve, or a thousand? With enough know-how and the right set of minds to work with, the wizarding world could whip us up a Magic Consciousness Singularity by next Tuesday.

But there’s the rub: Who all should be included in this great meeting of the minds? Can centaurs and house-elves join? What about, say, dragons, or deer, or birds? Where exactly is the cutoff, where the contents of one mind are no longer useful or comprehensible to another? As a matter of fact, given the – ah – not-infrequent occurrence of miscommunication in our own societies, I’d say it’s pretty remarkable that this kind of mental communion is even possible between two individuals of the same species.

Which brings us to an intriguing wrinkle in the endless debate about qualia – those mental qualities like the “redness” of red, or the “painfulness” of pain, which are only describable in terms of other subjective experiences. Up until now, of course, it’s been impossible to prove whether Harry’s qualia for, say, redness are exactly the same as Voldemort’s – or to explain just how the concept of “exactly the same” would even apply in this particular scenario. But now Harry can magically see through Voldemort’s eyes; feel Voldemort’s feelings – he can experience Voldemort’s qualia for himself.

Ah, but can he, really? I mean, wouldn’t Harry still be experiencing Voldemort’s qualia through his own qualia? Like I said, this is a pretty intriguing wrinkle.

The more fundamental question, though, is this: What does this all tell us about the concept of the Self in Wizard Metaphysics? (It’s capitalized because it’s magical.) Do Harry and Voldemort together constitute a single person? A single self? Is there a difference between those two concepts? Should there be?

I don’t ask these questions idly – in fact, here’s a much more pointed query: What do we rely on when we ask ourselves who we are? A: Memories, of course; and our thoughts and feelings about those memories. Now, if some of Harry’s thoughts and feelings and memories are of things he experienced while “in” Voldemort’s mind (whatever that means) then don’t some of Voldemort’s thoughts and feelings and memories comprise a portion of Harry’s? You can see where we run into problems.

Just one last question, and then I promise I’ll let this drop. When you read about Harry’s and Voldemort’s thoughts and feelings and memories, and you experience them for yourself, what does that say about what your Self is made of?

I’ll be back next week to talk about neurons and stuff.

Brain Scans & Lucid Dreams

The brain activity of lucid dreamers – people who become aware that they’re in a dream state – shows some interesting similarities with that of people who are awake, says a new study.

"Ahh - nothing puts me to sleep like a roomful of bright lights!"

By studying the brain activity of lucid dreamers with electroencephalography (EEG) and fMRI scans, researchers have found that the somatosensory and motor cortices – regions crucial for touch and movement, respectively – show very similar activation patterns during lucid dreams to those they display when people make or imagine those same movements while awake.
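Concretely, “very similar activation patterns” can be quantified as a correlation between activation vectors measured over the same set of voxels in two conditions. Here’s a toy sketch – the numbers are entirely made up for illustration and come from nowhere near the actual study:

```python
# Toy illustration: quantify the similarity of two activation patterns
# with a Pearson correlation. All numbers are invented.
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical activation levels over the same five sensorimotor voxels.
awake_movement = [2.1, 0.4, 1.8, 3.0, 0.9]
lucid_dream    = [1.9, 0.5, 1.6, 2.8, 1.1]

print(f"pattern similarity r = {pearson(awake_movement, lucid_dream):.2f}")
```

A correlation near 1 would be one (crude) way of cashing out the claim that the dreaming and waking patterns “look alike.”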

Though dreams have fascinated philosophers and scientists since the dawn of history – some of the earliest written texts are dream-interpretation handbooks from ancient Egypt and Babylon – it’s only in recent years that neuroscience has begun to advance the study of dreams beyond Freudian theorizing and into the realm of hard data.

In the early 1950s, scientists identified several stages of sleep, including rapid eye movement (REM) sleep – the stage in which dreaming takes place; and in 1959, a team discovered a certain class of brain waves – ponto-geniculo-occipital (PGO) waves – which only appear during REM sleep.

Then, in 2009, an EEG study found that lucid dreams exhibit slightly different wave patterns from those associated with ordinary REM sleep – and later that year, another study proposed an astonishing theory: that REM sleep might be a form of proto-consciousness, which performs maintenance and support duty for the “full” consciousness that took over for it at some point in our evolution.

Now, as the journal Current Biology reports, a team led by Michael Czisch at Germany’s Max Planck Institute has made a new leap forward in dream research. By concentrating their research on lucid dreams, the team was able to map the neural correlates of controlled and remembered dream content:

Lucid dreamers were asked to become aware of their dream while sleeping in a magnetic resonance scanner and to report this “lucid” state to the researchers by means of eye movements. They were then asked to voluntarily “dream” that they were repeatedly clenching first their right fist and then their left one for ten seconds.

This approach has provided some surprising new insights into the ways our brains function in a dream state. By having the subjects retell their lucid dreams, the researchers were able to correlate recorded activation patterns with specific actions the subjects had “performed” while asleep:

A region in the sensorimotor cortex of the brain, which is responsible for the execution of movements, was actually activated during the dream. This is directly comparable with the brain activity that arises when the hand is moved while the person is awake. Even if the lucid dreamer just imagines the hand movement while awake, the sensorimotor cortex reacts in a similar way.

This confirms that the brain’s sensorimotor areas are actively involved in planning and executing movements in dreams, rather than just passively observing events.

What’s even more exciting is that, in light of other new technologies like the thought-video recorder, it looks like we may be able to record and play back our thoughts and dreams within the next few decades.

I think this research reflects an even more fundamental shift in thinking about neuroscience, though: as we unravel more and more of the neural correlates of phenomena like sleep and consciousness, we’re coming to realize just how vast a chasm yawns between scientific data and subjective experience.

Before long, it’s going to become essential for scanners and volunteers to be involved in the same continuous feedback loop – one in which the subjects can watch, in real time, the neural correlates of their thoughts and feelings from moment to moment, and adjust them accordingly to produce useful results.
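Just to make that loop concrete, here’s a minimal software sketch of what real-time neurofeedback might look like – the signal source, target value, and update rule are all hypothetical stand-ins, not anything from an actual scanner system:

```python
import random

def read_signal(state):
    """Stand-in for a scanner: the current 'activation' plus measurement noise."""
    return state + random.gauss(0, 0.05)

def neurofeedback_loop(target=1.0, steps=50, rate=0.2):
    """Hypothetical feedback loop: the subject sees how far their measured
    activity is from a target, and nudges it in that direction each step."""
    state = 0.0  # the subject's (simulated) neural activation
    for _ in range(steps):
        measured = read_signal(state)
        feedback = target - measured   # what the display would show
        state += rate * feedback       # subject adjusts toward the target
    return state

random.seed(0)
final = neurofeedback_loop()
print(f"final activation: {final:.2f}")  # settles near the target
```

The interesting part is the closed loop itself: measurement feeds display, display feeds behavior, behavior feeds measurement – which is exactly the structure described above.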

Ambitious? I guess so. But a guy’s gotta have a dream.

Inside the Vegetative Mind

For the first time, scientists have successfully communicated with patients trapped in vegetative bodies.

A vegetative patient, thinkin' about tennis and stuff.

As a report published in the journal NeuroImage explains, these patients’ thought patterns come through quite clearly on an fMRI scanner, and they’re able to respond to questions in ways that demonstrate that they understand what’s being asked:

To answer yes, [one patient] was told to think of playing tennis, a motor activity. To answer no, he was told to think of wandering from room to room in his home, visualising everything he would expect to see there, creating activity in the part of the brain governing spatial awareness.

Four out of 23 vegetative patients proved their responsiveness to a team led by neuroscientists Dr. Adrian Owen and Dr. Steven Laureys, by correctly answering a series of “yes” or “no” questions about their families. Based on these results, the scientists speculate that as many as one fifth of vegetative patients may be able to communicate in this way.
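The decoding step behind this kind of communication is conceptually simple: compare activity in a motor-imagery region against activity in a spatial-imagery region, and let whichever one “wins” determine the answer. Here’s a hedged sketch – the region labels and numbers are invented for illustration, not taken from the study:

```python
def mean(xs):
    return sum(xs) / len(xs)

def decode_answer(motor_activity, spatial_activity):
    """Tennis imagery drives motor areas ('yes'); imagined house
    navigation drives spatial areas ('no'). The inputs are hypothetical
    activation samples from each region."""
    return "yes" if mean(motor_activity) > mean(spatial_activity) else "no"

# One hypothetical trial: strong motor signal, weak spatial signal.
answer = decode_answer([1.8, 2.1, 1.9], [0.4, 0.6, 0.5])
print(answer)  # yes
```

The real analysis is far more involved, of course – but the logic of mapping two distinguishable brain states onto a binary answer is exactly this.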

The line between comatose and vegetative is often blurry. Patients in comas typically have no response to stimuli and no sleep/wake cycles, whereas vegetative patients return to partial arousal, exhibiting some normal brain activity – such as sleep and wakefulness – but no responsiveness to the external world.

For years, many scientists had assumed that all vegetative patients were unconscious – but this new data calls that assumption into question: at least some patients who seem vegetative on the outside may still have conscious awareness. Owen says that perhaps 40% of unresponsive patients are misdiagnosed as vegetative, when they may actually be conscious.

And consciousness is only the beginning – as Owen points out, thinking of things like tennis and one’s house requires the ability to use working memory and other high-level cognitive functions.

Laureys adds that this study is only the beginning – he and his colleagues hope to refine their system of communication soon:

It’s early days, but in the future we hope to develop this technique to allow some patients to express their feelings and thoughts, control their environment and increase their quality of life.

Another intriguing set of implications revolves around the idea of synaptic plasticity. If these patients are taught to communicate more effectively, and given real-time feedback from the external world, it’s possible that they could become more responsive overall. This is still speculation – but studies on Alzheimer’s sufferers have yielded encouraging results in this direction.

Since doctors often deny any hope of recovery for most vegetative patients, it’s exciting to think that they may finally get a chance to speak their minds.

Sleepocalypse 2011

One of the images recorded by the fEITER system.

For the first time in history, scientists have recorded functional images of brain activity as humans shift from consciousness into unconsciousness.

What they’ve learned is that the process of falling asleep involves a variety of areas within the brain. Some of these areas systematically inhibit others, until an entirely different type of functional network is created:

The images show that changes in the anesthetized brain start in the midbrain, where certain receptors for a neurotransmitter called GABA are plentiful. From the midbrain, changes move outward to affect the whole brain; as [GABAergic] messages spread from region to region, consciousness dissolves.

GABA (short for gamma-aminobutyric acid) helps initiate inhibitory activity in the brain. Interestingly, it first seems to inhibit inhibition, leading to a state of slight excitation and euphoria. Over time, though, GABA also inhibits that excitatory activity, causing various areas of the brain to decrease their communications with other areas and “shut down.”
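That biphasic effect can be caricatured in a couple of lines – a pure toy model, not anything from the paper: net activity rises a little while GABA is low (disinhibition), then collapses once GABA dominates.

```python
def net_activity(gaba, baseline=1.0, disinhibition=0.5, inhibition=1.5):
    """Toy model of GABA's biphasic effect: a small dose suppresses
    inhibitory interneurons (net excitation); a large dose suppresses
    everything. The coefficients are arbitrary illustrative numbers."""
    return baseline + disinhibition * gaba - inhibition * gaba ** 2

# Low GABA: slight excitation above baseline. High GABA: shutdown.
print(net_activity(0.1))  # a bit above 1.0
print(net_activity(1.0))  # well below 1.0
```

It’s nothing like the real pharmacology, but it captures the shape of the story: a brief euphoric bump on the way down to unconsciousness.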

Here’s what anaesthesiologist Professor Brian Pollard from the Manchester Royal Infirmary says about seeing the images for the first time:

Our jaws ricocheted off the ground. I can’t tell you the words we used as it wouldn’t be polite over the phone.

I like to imagine he yelled things like “bollocks!” and “cor blimey!”

Anyway, the scans were taken with a new imaging technique known as Functional Electrical Impedance Tomography by Evoked Response (fEITER). It’s essentially the smarter descendant of electrical impedance tomography (EIT), which estimates electrical activity inside the body by measuring signals passing through electrodes attached to people’s heads. But fEITER goes one better by measuring detailed electrical activity deep within the brain.

By the way, the Greek word ἀποκάλυψις (apokálypsis), ancestor of the English word “apocalypse,” means “revelation” – or literally, a “lifting of the veil.” Hence the post’s title.

Sharing Thoughtspace

A couple posts back, I asked what you thought it’d be like to not have a self. Today, I want to ask a related question with a very different set of implications.

Exactly where are the boundaries of your self?

Maybe you’d say its limits are defined by the physical boundaries of your nervous system. On the other hand, you might define the boundaries of your self more abstractly, and identify them with the limits of your assembled sensory perceptions, feelings, and thoughts.

I choose to imagine that the robot in the thought experiment looks like this.

The tricky thing is, no matter how we try to define those boundaries, philosophers have come up with some mind-bending thought experiments to explore their logical consequences.

Here’s an example. Imagine that in the near future, scientists design a robot that can be remotely controlled by thought alone (this isn’t very farfetched at all, actually). Let’s say the robot is equipped with senses of sight and sound – and by wearing a special virtual reality helmet, you see (in ultra-HD) what the robot’s eye-cameras see, and hear the sounds its audio sensors pick up. Furthermore, wherever your thoughts direct the robot to go, it goes. When you think “turn around,” it instantly obeys.

It’s not hard to imagine that, after an hour or so of wearing the special VR helmet, you’d get the distinct impression that you were somehow in the robot. Your body would still be seated in a chair or reclining on the bed, of course – but where would your self be?

When I say “self,” I’m not talking about the traditional concept of a soul, but something more abstract – the unnamed sense that enables you to distinguish between “a hand” and “my hand;” and to know where you end and another person begins.

In fact, in I Am a Strange Loop, Douglas Hofstadter brings up the idea that in a long-term intimate relationship, it can become difficult to distinguish your preferences and ideas – and sometimes even memories – from the other person’s. In a sense, both of you begin to “think with another person’s brain,” to use Hofstadter’s phrase – or, one might say that both of your selves, to some degree, are represented throughout both brains. It can be blissful or catastrophic, depending on the situation.

This is all a bit abstract, but let’s take a look at a more down-to-earth example. Last week, I was fascinated to read this story of 4-year-old twins with a conjoined brain. Conjoined twins are rare enough as it is, and ones with craniopagus (i.e., who are conjoined at the head) are even rarer – but these particular twins seem to be unique in all of medical history:

Their brain images reveal what looks like an attenuated line stretching between their two brains – a piece of anatomy their neurosurgeon, Douglas Cochrane of British Columbia Children’s Hospital, has called a thalamic bridge, because he believes it links the thalamus of one girl to the thalamus of her sister.

The twins, thinking some deep twin thoughts.

As I’ve mentioned before, the thalamus acts as a relay station for signals entering the brain from the peripheral nervous system – and it also forms a critical part of the thalamo-cortico-thalamic circuits that enable the cerebral cortex to process and respond to sensory data. In short, some doctors think these twins may share – and even actively participate in – each other’s subjective sensory experiences. One might say they’re “experiential roommates” in a way none of us will ever be.

Though the story admits that no controlled clinical trials of the girls’ behavior or neurophysiology have been conducted (their family is understandably wary), anecdotal evidence suggests that the twins exhibit an extraordinary level of nonverbal coordination. Here are some examples:

Krista reached for a cup with a straw in the corner of the crib. “I am drinking really, really, really, really fast,” she announced and started to power-slurp her juice, her face screwed up with the effort.

Tatiana was, as always, sitting beside her but not looking at her, and suddenly her eyes went wide. She put her hand right below her sternum, and then she uttered one small word that suggested a world of possibility: “Whoa!”

“Now I do it,” Tatiana said, reaching for the cup from which her sister was just drinking. She started to chug. Krista’s hand flew to her own stomach. “Whoa!” she said.

“I have two pieces of paper,” Krista announced.

The girls sat at a small table in the living room, drawing, their faces, as always, angled away from each other. Each had one piece of paper. So I was surprised by Krista’s certainty: She had two pieces of paper?

“Yeah,” the girls affirmed in their frequent singsong unison, nodding together.

On the rare occasions when the girls fight, it’s painful to watch: they reach their fingers into each other’s mouths and eyes, scratching, slapping, hands simultaneously flying to their own cheeks to soothe the pain.

Without hard neurological data, it’s hard to speculate in detail about what to make of all this. But one thing seems clear: although the twins have distinct personalities, motivations and preferences, a large amount of their sensory experience is shared – through nonverbal communication and perhaps even direct CNS connections. Reading the story, one gets the sense that the girls’ developing senses-of-self are struggling to define themselves amid a sea of shared perceptions – possibly even shared thoughts. I can’t resist bringing up Hofstadter’s famous “Twinwirld” thought experiment:

[In Twinwirld], instead of people normally giving birth to one person, the normal birth, in fact almost all births, are identical twins. And so the identical twins grow up just basically hanging around together all the time, and they become what I call a ‘pairson’. And they’re two halves; they have the same name, so not even Greta and Freda, but just the same name, but if you want you can append an ‘l’ and an ‘r’, you can call that a left and a right if you want to them.

So here is Karen for example. Karen consists of two pieces, Karen L and Karen R. And here is another ‘pairson’, Greg L and Greg R. And Karen and Greg get married and they have ‘twildren’, Lucas and Natalie. Lucas is a boys and Natalie is a girls. And this is totally normal. This is the way it is in twinworld, and Lucas considers himself to be one thing; he does know that he has two parts, but he doesn’t feel as if he’s divided in two.

This guy's mind is just all over the place.

When I first read about Twinwirld, it seemed like a very strange fantasy. But after reading about the real twins – Krista and Tatiana – it seems that the only really implausible part is the idea that two human minds (i.e., a “pairson”) could ever have completely identical tastes and motivations. But then again, maybe the two halves of a pairson would have debates and fights, just as we all sometimes furiously debate within our own minds – and just as the twin girls occasionally escalate their arguments to physical violence. Bodily limitations aside, it’s not so implausible to conceive of a whole society composed of such pairs.

The fact that a human mind can still develop at all in such an unorthodox environment as the twin girls’ mind(s?) is – I think – a ringing endorsement of the human connectome’s versatility. But most of us are “wired” with a sense of ourselves that assumes a discrete, bounded, and spatially contiguous container for that self: “One Self, One Brain, One Body.” And this message is subtly reaffirmed at every level of human interaction, from the verbal to the societal.

Are these boundaries as sharp as we assume they are, though? In certain situations, such as the relationship example above, another person’s brain can help us think our thoughts – and even form a major part of our self-identity. Some of us may live to see a level of technology that challenges the necessity of a direct correlation between self and body. And as the twins’ story shows, those ideas may not apply in every case even today.

So, next time you find yourself engrossed in a live video feed of a place in another city, or notice that you’re finishing your best friend’s sentences, you might try asking yourself where, exactly, your self is located. I’ll leave you with a little song for thinking about this.

Kinds of Selves

What do you think it’d be like to not have a self?

Take a moment and actually try to imagine it: a subjective experience that’s just as rich with sensory experiences, thoughts, and feelings as your life is now – but entirely devoid of an “I” to assign them to.

A self, as rendered by Natalie Dee.

If your mind works anything like mine, you probably found it difficult – if not impossible – to conceive of such a thing. After all, in any such experience, there’d still need to be someone perceiving those thoughts and feelings. It’s fairly straightforward to imagine life without a continuing abstract sense of self, but it’s much harder to imagine life without any “I” at all.

This points to an important distinction: your abstract concept of “I” is not the same thing as the subjective experience of “I”-ness. One is an idea; the other is an entire mode of perception. Let’s translate that into neurophysiological terms.

The human brain actually has an entire region – the left temporoparietal junction (TPJ) – that’s crucial to processing self/other distinctions. When the TPJ is stimulated with electrical current, patients experience a “shadow self” that lurks behind them:

For the few seconds that the electrical stimulation was occurring, [a patient] described a sensation of a shadowy man hovering behind her. And when she was asked to lean forward and hug her knees, she said it felt as if the man was (unpleasantly) reaching around to grasp her.

In other words, under artificial stimulation of the TPJ, the patient’s mind projects its own body’s movements – and even its sense of its position and shape in physical space – onto an imagined other.

So as we can see, having a self requires several distinct types of processes:

a) raw subjective experience
b) the TPJ’s ability to help make distinctions between the self and others
c) a continuing and self-reinforcing abstract concept of “I.”

The third of these – the abstract concept of “I” – might be considered a complex symbol rendered by a connectome. But the distinction between “me” and “not me” seems to be rooted in much more fundamental brain processes – after the occipital lobe/visual cortex, the temporal lobe is one of the most primitive (i.e., oldest) cerebral structures.

The temporoparietal junction.

I bring all this up because I’ve been re-reading Douglas Hofstadter’s book I Am a Strange Loop this week (if you’ve never had the pleasure, I highly recommend it) and I ran across a passage that stopped me in my tracks. See, Hofstadter’s work is the first place I encountered the distinction between the subjective self that actually experiences the present moment, and the abstract conceptual “I” to which we ascribe the sensations and motivations encountered in that subjective experience. In the passage that captured my attention, Hofstadter is explaining how human consciousness synthesizes this abstract “I”-concept from raw experience:

We are powerfully driven to create a term that summarizes the presumed unity, internal coherence, and temporary stability of all the hopes and beliefs and desires that are found inside our own cranium – and that term, as we all learn very early on, is “I.” And pretty soon this high abstraction behind the scenes comes to feel like the maximally real entity in the universe. (p. 179)

I’m in complete agreement with Hofstadter as far as the idea that concepts like “brown,” “ball,” “between,” and “big” all have some sort of neural correlates – in other words, that they’re all symbols within the representational architecture rendered by a connectome.

But the subjective self and the “other,” it seems to me, are far more than just two symbols – they represent two entirely different categories of symbols, whose inherent differences are hardwired into the brain. If the self is just another concept we invent, why are our brains specifically geared toward differentiating that self from the external world? It seems to make more sense to say that the self is not just a concept, but an entire mode of perception.

Then again, Hofstadter seems to have planned for this objection; he navigates these treacherous waters by keeping his definition of “symbol” fairly vague – in his explanation, any pattern in brain activity that corresponds to a concept is a symbol. Thus, brains have symbols for “dog,” “running,” “excitement,” and so on. We even carry around a symbol that corresponds to “subjective experience” – a complex Gödelian loop, as Hofstadter puts it, but a symbol nonetheless.

This idea may be helpful for understanding some aspects of human self-consciousness, but it also seems to be the sort of reasoning Hilary Putnam was objecting to with his famous “There are a lot of cats in the neighborhood” analogy.

Without delving into too many details, the heart of the argument is this: there can be no empirically detectable mental state (i.e., symbol) that corresponds to the concept, “There are a lot of cats in the neighborhood,” because each individual brain has its own unique ways of rendering concepts like “a lot of,” “cats,” and “neighborhood.” We might agree on the overall meanings of these words, but the precise ranges of concepts attached to each of them are dynamic and unique to each brain.

It seems to logically follow from this that while subjective experience (i.e., self-hood) seems to be an inherent aspect of connectomic functionality, a concept like “me” is as dynamic – and as dependent on the previous experiences of the brain that renders it – as are concepts about cats and neighborhoods. It also makes sense that the more any symbol grows in complexity, the less likely it is to correspond directly to a roughly equivalent symbol in another brain (see also: Wittgenstein).

So even if the abstract concept “me” is a highly complex symbol, it’s a symbol whose very structure is unique in every individual. I don’t think Hofstadter would disagree with that.

But what do you think about the subjective experience of self-hood? Is it just another complex symbol too, or is it a different category of perception altogether?

Hermes and Aphrodite

It’s time for another edition of Comics by People Cleverer Than Me. Enjoy the following introductory XKCD. Click it to embiggen!

Today, I’m going to talk about what exactly it is that we call “me.”

But first – let’s talk about feelings.

Have you ever wondered why the English language has hundreds (if not thousands) of words for precise shades of colors, and comparatively few terms for precise emotions?

Why are we often limited to vague words like “rapport” and “energy” when we talk about the emotion of a conversation? I mean, they’re fine for ordinary conversation, but they don’t tell us much about exactly how it felt. It’s easy enough to say something like, “his lighthearted tone made us all laugh,” but where’s the laconic precision of terms like “dark magenta” and “burnt umber”?

Authors often convey emotions with metaphors – “A cold knife twisted in his gut as he heard the dreadful news” – but these kinds of descriptions are meant to convey a sensation the reader can empathize with; they’re not intended as precise terms for cataloged, objective emotions. I’m not even talking about scientific precision here, necessarily – just enough to provide the listener with a clearly defined idea of what emotion the speaker was feeling.

It seems that emotions often exceed and transcend words. That’s because they affect and involve many more areas of the brain. In fact, they may involve a functional network that’s much older than the one that thinks in words. But I’m getting ahead of myself. First, I’ve got to explain what I mean by “networks.”

While the left and right hemispheres of the brain are physically divided, they’re connected by a bridge of neural fibers known as the corpus callosum (Latin for “tough body”), which allows signals to pass between them. Scientific evidence from functional magnetic resonance imaging (fMRI) research shows that the left and right hemispheres don’t work as separate processors, but as one overall functional unit composed of many subunits heavily dependent on one hemisphere or the other.

The fissures and sulci (grooves) of the brain exhibit some intriguing anatomical asymmetries between the left- and right-hand sides, and some neurophysiological similarities – and differences – have also been identified. This means there are no exclusively left-brained or right-brained people. However, each hemisphere does seem to specialize in a few particular types of processing.

In an earlier post on connectome hacking, I talked about switching between your mind’s two basic modes of processing: rational vs. experiential; sequential vs. timeless. One of these modes is that of subjective perception of the present moment – full of feelings, sight, and sound; but devoid of interpretation or meaning. The second mode, which can be run in addition to the first, is that of associations - dividing the subjective experience into parts, and placing those parts in a sequence of causes and effects, attaching words to them, and calling up memories to compare them to. By choosing to inhibit the sequential, verbally-oriented components, we can just experience existence.

In my beloved TED video, Dr. Jill Bolte Taylor describes the left hemisphere as a serial processor (one that processes tasks sequentially) and the right hemisphere as a parallel processor (one that processes multiple tasks at the same time). Now, I doubt that Dr. Taylor would be a fan of any neurophysiological theory that treated the left and right hemispheres as independent systems. Still, she does make a convincing case for a somewhat similar idea – that the two hemispheres are semi-independent components of a single system.

When Dr. Taylor suffered a stroke in her left frontal cortex, words, sequential reasoning, and finally even her perception of her own body gave way to an experience detached from physical reality. As she describes it, her subjective consciousness was freed entirely from associations, from sequential planning, and even from subject/object relativity, and simply bathed in awareness itself. It seems she was self-conscious, and yet not conscious of any abstract “self” beyond the present moment and place.

Her journey reveals some intriguing features of consciousness, which is clearly composed of more than just a single self-referential loop of some kind. We can start by looking at the break with external reality that took place in Dr. Taylor’s brain. Between the two types of processes she describes – raw subjective experience, and thoughts about subjective experience – we have a possible (very simplified) model for the basic structure of consciousness. But within this fundamental functional structure, a more complex interplay of communication circuits is at work.

Some scientists are discussing the idea that the brains of human infants come pre-wired with at least five intrinsic connectivity networks (ICNs), each of which utilizes neurocircuitry across both brain hemispheres.

The five networks are specialized, and activity in each one corresponds to a certain type of processing:

1. Sensory (visual cortex / occipital lobe)
2. Motor (somatosensory / motor cortex)
3. Memory (temporal / auditory cortex)
4. Language/spatial – depending on lateralization and hemisphere (superior parietal cortex, cerebellum)
5. Cognition (frontal cortex)

Though language processing does involve both sides of the brain, precise speech and verbal “mental chatter” seem to be heavily dependent on the left hemisphere (the right hemisphere is active in language processing, but seems to deal more with the tone and rhythm of speech than the words themselves). Sensory and cognitive processes involve neural networks distributed throughout both hemispheres, and emotions are rooted in even older structures in the lower parts of the brain, like the amygdala and the cingulate cortex. In other words, language is a newer and much more specialized phenomenon than emotions are.

There’s another important difference between emotions and language: we’re able to use precise words for colors because their shades can be measured and quantified fairly easily – they correspond to wavelengths of light. But emotions are more like symphonies – they’re formed through a complex interaction of connectome networks over time, and they can’t be directly perceived (i.e., experienced) by an external observer the way colors in the physical world can. Thus, we describe emotions with words the listener can empathize with, rather than trying to quantify them in any objective sense.
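To make that contrast concrete, here’s a toy Python sketch of just how mechanical color naming can be – a lookup from measured wavelength to everyday color name. The band edges are my own rough illustrative values, not a colorimetric standard; the point is that no analogous lookup table exists for emotions.

```python
# Toy lookup: approximate visible-light wavelength bands (in nanometers)
# mapped to everyday color names. Band edges are rough illustrative
# values, not a formal colorimetric standard.
COLOR_BANDS = [
    (380, 450, "violet"),
    (450, 495, "blue"),
    (495, 570, "green"),
    (570, 590, "yellow"),
    (590, 620, "orange"),
    (620, 750, "red"),
]

def name_color(wavelength_nm):
    """Return a color name for a visible wavelength, or None if out of range."""
    for low, high, name in COLOR_BANDS:
        if low <= wavelength_nm < high:
            return name
    return None

print(name_color(520))  # a wavelength in the green band
print(name_color(640))  # a wavelength in the red band
```

An emotion has no single measurable axis to index on – which is exactly why we fall back on empathy instead of lookup tables.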

In fact, the more we focus cognitive attention on understanding an emotion rationally, the less attention and processing power is available for feeling that emotion. This is the principle behind doing math problems to head off a panic attack, and it’s a technique used in many meditation schools for dealing with negative feelings.

Now, another intriguing thing about ICNs is that as we age, they continue to grow, change, and interact in new ways. By adulthood, entirely new networks have formed:

  1. Visual (occipital)
  2. Sensorimotor (pre- and post-central gyrus)
  3. Auditory/memory (auditory/temporal cortex)
  4. Language/spatial (fronto-parietal; strongly lateralized across the two hemispheres)
  5. Salience, a.k.a. “SAL” (anterior insula + anterior cingulate)
  6. Balance and coordination (cerebellum)
  7. Default mode network (medial frontal, posterior cingulate, angular gyrus)
  8. Executive control network (dorsolateral prefrontal + superior parietal)

Even as the interactions between our ICNs become more complex, our connectomes still manage to sustain a continued sense of self (“ego”) – or at least the subjective illusion of one. The underlying idea here seems to be that the subjective consciousness – that pure awareness that draws heavily on right-hemisphere resources – can focus a “spotlight” or “active grip” of attention on one or more of these networks at any given time, thus causing the subjective self to experience that network’s activity. This is how my subjective self can go from “drifting” in a daydream to being “absorbed” in a conversation to being “focused” on a sequential task.

Meanwhile, all these ICNs are independently active, to some degree or another, during most of our waking hours – but the subjective self’s spotlight can only focus on a certain amount of neural activity at a time. This creates the (easily permeable) division between consciously perceived reality, memories, and feelings, and those said to be processed by the “subconscious” – i.e., the majority of non-spotlighted nervous processing taking place at any given moment. It’s even possible that some of the networks and sub-networks constitute independent “selves” of their own.
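Here’s a purely illustrative way to picture the spotlight idea in code – the network names come from the adult ICN list above, but the activity numbers and the capacity limit are invented for the sketch, not measurements.

```python
# Toy model of an attentional "spotlight" over intrinsic connectivity
# networks (ICNs). Activity levels and the capacity limit are invented
# numbers for illustration only.
current_activity = {
    "default_mode": 0.8,      # daydreaming
    "language": 0.5,
    "executive_control": 0.3,
    "sensorimotor": 0.2,
}

SPOTLIGHT_CAPACITY = 2  # the spotlight can only hold so much at once

def spotlight(activity, capacity=SPOTLIGHT_CAPACITY):
    """Return the most active networks; everything else stays 'subconscious'."""
    ranked = sorted(activity, key=activity.get, reverse=True)
    return ranked[:capacity], ranked[capacity:]

conscious, subconscious = spotlight(current_activity)
print(conscious)     # the networks the subjective self is focused on
print(subconscious)  # still active, but outside the spotlight
```

In this cartoon, “becoming absorbed in a conversation” would just be the language network’s activity rising until it displaces the default mode network from the top of the ranking.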

I suspect that some budding neuropsychologists of the ancient world may have had a sense of this idea of specialized networks. In Greek mythology, for instance, Hermes was the gods’ messenger – the one who conveyed information from Mount Olympus to listeners throughout the world. He was also the god of words, and of symbols. He was known for being a charmer and a schemer – the “silver-tongued” and the “many-cloaked.” Aphrodite was the goddess of love, beauty, eroticism, and sensuality.

These two archetypes provide an intriguing look at the way ancient thinkers classified aspects of the mind – because, after all, what are gods if not reflections and personifications of aspects of ourselves? Now, I’m not suggesting that these two gods each represent a hemisphere of the brain, or a particular ICN. Rather, I think they each reflect certain abilities and attributes of certain functional networks within the brain – not unlike software that can be run on a particular type of hardware.

I’ll get into more of this in future posts. For now, I’ll sign off with this: when words turn back from your thoughts, chances are you’ve stumbled on something worth thinking about.

The Watson Situation

Or, “Logistical Problems in Subsymbolic Computation: A Neurophysiological Perspective.”

Because that title’s fun - that’s why.

Today, I want to take a little break from connectome hacking and talk about a question that’s been generating some lively debate lately: what exactly is IBM’s Watson doing?

Watson's grandfather in a cheeky pose, mocking its creator.

I don’t mean “doing,” so much, in the sense that can be answered, “Crushing every human opponent with a silicon fist,” but in a more abstract, philosophical sense: “Is Watson thinking? Does it have awareness? Does it display any of the components of conscious thought?”

I think we can learn a lot about consciousness by comparing some things Watson does with some things that happen in a biological connectome. So let’s dig into this debate, and see if we can come to a more precise understanding of some important differences – and similarities – between the two.

A good place to start is 1972, because it’s the year Neil Young’s “Heart of Gold” broke the top ten. More relevant to this discussion, it’s the year the philosophy professor Hubert Dreyfus released the first edition of his book “What Computers Can’t Do.”

As a little background, the late 1960s were an exciting time for artificial intelligence (A.I.) research, because the development of the integrated circuit allowed computers the size of a filing cabinet (which was stunningly compact for the time) to solve mathematical equations and win games like chess – and their speed and complexity were leaping ahead every year. Some researchers predicted that by the early ’70s, computers would be proving new mathematical theorems and diagnosing people’s psychological problems.

Not so much ackshully, said Dreyfus. By the ’70s, some unanticipated hurdles on the path to Strong A.I. had been discovered. Dreyfus pointed out four false assumptions in particular:

1) The biological assumption: that the brain processes information via on/off switches, similar to a computer. But neurons are much more like “fuzzy” analog signal carriers (such as antennas) than digital gates.

2) The psychological assumption: that brain activity is based on equivalents of “bits” of information, and follows discrete rules for moving these “bits” around. But a connectome is much more holistic than this – each “operation” it performs seems to be a dynamic congruence of a multitude of perceptions and biases.

3) The epistemological assumption: that all knowledge can be formalized and represented. But plenty of human knowledge – especially things like “what really matters” about a situation – can only be formalized in a unique experiential context.

4) The ontological assumption: that the universe consists of discrete facts that can be translated into pure information. But some subjective experiences may lie outside the realm of objectivity, simply because they’re subjective. For example, we can all agree on some objective definition of what the phrase “in love” means, but we can’t define exactly what it feels like to be in love in any objective sense – we can only experience it subjectively.

All four of these objections point back to the same essential idea: that the vast majority of human knowledge – the knowledge we’re acting on when we make instinctive decisions, or “lean” one way or the other – is inherently experiential. Every connectome participates in a ceaseless feedback loop with its environment, and that feedback – especially from others of the same species – is crucial to understanding (for example) what “really matters” about a certain situation, or what counts as a “weird” answer.

Watson, on the other hand, can only compare mathematical probabilities of correctness (or “weirdness”) based on rather strict formulas – which is why it guessed “Toronto” in a category called “U.S. Cities.”

During its training phase, Watson had learned that categories are only a weak indicator of the answer type … The problem is that lack of attention to such a mismatch will sometimes produce a howler. Knowing when it’s relevant to pay attention to the mismatch and when it’s not is trivial for a human being. But Watson doesn’t understand relevance at all. It only measures statistical frequencies.

The temptation at this point is to slip into a debate about what we really mean when we say “understand” – that is, whether understanding is simply a perception of a perception, or if it’s a different order of phenomenon altogether. But I think there’s an even more obvious reason why Watson can’t be said to truly understand or think anything: it’s not designed to factor new experiences into its analytical process.

Oh, it can factor in new facts at a dazzling rate. What it can’t do is abstractly assess the reason it got a question wrong – it can’t look for errors in its own thought process, and recalibrate its own analysis methods to avoid errors of that type in the future. Animals (including humans) do this automatically all the time – operant and classical (Pavlovian) conditioning are two examples.

But these types of learning depend not on data points and formulas, but on the strength of connections between certain neurons and groups of neurons. Feelings like pain and pleasure play a major part in forming or weakening those connections. And as the philosopher John Haugeland put it, “The problem with computers is that they just don’t give a damn.”
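A toy sketch of that kind of learning – connection strengths nudged by felt reward rather than by rules over symbols. The numbers and the update rate are invented for illustration; real synaptic plasticity is enormously more complicated.

```python
# Toy "conditioning": a connection strength updated by reward, not by
# formulas over symbols. All numbers are invented for illustration.
connection = {"bell -> salivate": 0.1}

def reinforce(link, reward, rate=0.5):
    """Nudge a connection strength toward the felt reward."""
    connection[link] += rate * (reward - connection[link])

# Bell repeatedly paired with food (reward = 1.0): the association
# strengthens on its own, with no explicit rule ever being taught.
for _ in range(5):
    reinforce("bell -> salivate", reward=1.0)

print(round(connection["bell -> salivate"], 3))
```

Watson’s scores, by contrast, only move when an engineer retrains it – nothing in the system “feels” the miss and adjusts itself.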

The writer of this New York Times article sums up this contrast neatly:

What computers can’t do, we don’t have to do because the worlds we live in are already built; we don’t walk around putting discrete items together until they add up to a context; we walk around with a contextual sense — a sense of where we are and what’s at stake and what our resources are — already in place; we inhabit worldly spaces already organized by purposes, projects and expectations.

In other words, context isn’t a piece of information fed into a connectome, or a conclusion drawn by one – context is inherent in each connectome’s unique identity. Every connectome evolves ceaselessly from one moment to the next, and recalibrates its thoughts and behavior in response to new stimuli. Watson, on the other hand, has no framework for knowing when its own rules can – or should – be bent.

So symbol manipulation can only take Watson so far – to make the leap into what we’d call “understanding,” an A.I. would need to somehow carry around an awareness of what it’s like to be that unique A.I. This is where we move from the symbolic to the subsymbolic.

Symbols. Not pictured: meanings.

As an analogy for subsymbolism, think of pixels in an image – an individual pixel doesn’t symbolize anything; it just is what it is: a dot of color. A grouping of pixels can collectively symbolize a letter, a shape, or a whole photo; but even then, the symbol itself has no inherent meaning – the meaning is in the mind of the interpreter(s). Meaning is a subjective process, rather than an objective fact.1
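The pixel analogy can be made literal with a tiny sketch: a grid of on/off “pixels” that individually mean nothing, but collectively form a letter – to an interpreter who can read it. The grid is a hand-drawn “T”, invented for illustration.

```python
# A 5x5 grid of "pixels". Each entry is just a dot: 1 (ink) or 0 (blank).
# No single pixel symbolizes anything; only the whole pattern does -
# and only to an interpreter who knows the letter shapes.
pixels = [
    [1, 1, 1, 1, 1],
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
]

def render(grid):
    """Turn the subsymbolic dots into marks a human eye can interpret."""
    return "\n".join("".join("#" if p else "." for p in row) for row in grid)

print(render(pixels))  # prints a shape a reader will interpret as "T"
```

Nothing in the program “knows” this is a T; the meaning happens entirely in whoever reads the output.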

No one’s questioning that our thought processes rely heavily on symbols – they obviously do. The objection is that symbol manipulation is insufficient to explain consciousness itself – that is, the subjective experience in which those symbols have meaning. Just as the subsymbolic pixels in a line of text only have significance to an interpreter who understands the letters they collectively signify, there must be a “language,” of sorts, in which the symbols of the mind have significance – and that “language” is the experience of being subjectively conscious.

This isn’t just philosophy – researchers in fields like computational neuroscience have learned that although consciousness makes use of symbolic systems, it isn’t based on symbol manipulation, but apparently on interactivity patterns that develop in complex neural networks over time. If anything, the question “What is consciousness?” seems to look blurrier the harder we stare at it.

Maybe that’s why Dreyfus followed up with an updated edition: “What Computers Still Can’t Do.”

So, what is Watson doing? It’s correlating data, just as your word processor checks its dictionary for words spelled similarly to the one you just misspelled – and like your word processor, Watson doesn’t learn from its mistakes unless it’s explicitly taught that they’re mistakes. Because Watson doesn’t have subjective experiences, it doesn’t carry around feelings like “success” or “failure” or “difficulty,” so it can’t factor such ideas into its reasoning. It simply computes probabilities mathematically, and selects the highest one.
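As a cartoon of that last point – and emphatically not IBM’s actual pipeline – the final selection step might amount to nothing more than picking the highest-scoring candidate. The candidates and confidence scores here are invented.

```python
# Cartoon of answer selection as pure probability comparison.
# Candidate answers and confidence scores are invented for illustration;
# this is not IBM's actual scoring pipeline.
candidates = {
    "Chicago": 0.71,
    "Toronto": 0.14,   # a "howler" is just a mis-scored candidate
    "Omaha": 0.09,
}

def pick_answer(scores):
    """Select the candidate with the highest confidence - nothing more."""
    return max(scores, key=scores.get)

print(pick_answer(candidates))
```

Notice there’s no step where the system asks whether its top answer makes sense in context – if the scores are wrong, “Toronto” wins, and nothing in the machine winces.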

And what’s a connectome doing? Experiencing, of course. Every thought, feeling, stimulus, and reaction is a part of that experience. It’s not just context that separates biological minds from machines – it’s the fact that subjective experience itself is the context.


1. There’s a Heraclitus quote that seems à propos here: “Ever-newer waters flow on those who step into the same rivers.” Or, as another translator put it, “We both step and do not step in the same rivers. We are and are not.”

Opening a Terminal

…otherwise known as “What It’s Like to Be a Fish, Part 2.”

I have to start by showing you this TED talk that just grips me every time I watch it. It’s by Dr. Jill Bolte Taylor, a neuroanatomist who experienced a stroke in her left hemisphere, and…well, I’ll let it speak for itself.

Dr. Taylor, having a staring contest with a human brain.

On the morning of December 10, 1996, I woke up to discover that I had a brain disorder of my own. A blood vessel exploded in the left half of my brain. And in the course of four hours, I watched my brain completely deteriorate in its ability to process all information. On the morning of the hemorrhage, I could not walk, talk, read, write or recall any of my life. I essentially became an infant in a woman’s body.

Would you mind spending 15 minutes or so with her story, if I promise you that it will make you think long and hard about your connectome – about yourself?

During her stroke, Dr. Taylor experienced a series of events that took her to the edge of consciousness – to what some might call the “fringes” of science itself. She brought back a story that is, to say the least, strange and intriguing.

Whatever your thoughts about her out-of-body experience, it’s easy to see that Dr. Taylor is trying her best to report, as accurately as she knows how, the subjective experience of watching her own brain shut down piece by piece. The interpretation is hers, of course – but even a skeptical science hero like me can’t ignore how unusual it is that Dr. Taylor’s consciousness was preserved intact through such a devastating chain of cerebral hardware failures.

What I want to talk about is her inner voice – that “mental chatter” she mentions. She calls it the voice of the “left brain,” which is accurate enough – as soon as I say it’s the left dorsolateral prefrontal cortex, someone’s going to come along and say I define the area too generally or too specifically – and she describes a point when she could literally hear that voice speaking to her muscles:

I could actually hear the dialogue inside of my body. I heard a little voice saying, “OK. You muscles, you gotta contract. You muscles, you relax.”

Is this really going on all the time – this chatter to our muscles?

Can I just toss an idea out, here? Maybe the verbal chatter is a component – in human left prefrontal cortexes, at least – of sequential reasoning. In people like me, it’s in words. In people who use sign language, it’s in sign. In any case, maybe the chatter is about whatever the attention of the subjective consciousness is focusing on at any given moment.

But that doesn’t mean the chatter doesn’t affect our brain’s non-conscious processing. Just the opposite, actually. Self-talk is a major influence on how we interpret our experiences, which ones we remember most readily, and how we assemble those into a concept of a self – not the subjective self of the “now” moment, but the story we tell ourselves, constantly, about who we are.

This endless interface of the subjective consciousness with “the script” – and the sequential mode of thinking it represents – breaks down to a basic pattern:

1) You have an experience (A mime hands you a peanut-butter sandwich.)

2) Your connectome searches memory for a similar situation. (“I’ve never been handed a peanut butter sandwich on the street before…but I remember that one time a mime accosted me with an invisible scythe.”)

3) Your connectome “sorts” the new experience, and comes up with a few possible responses. (“I should run like hell, lest he start wielding an invisible scythe! Unless…well, that peanut butter does look delicious.”)

4) The motivation that gathers the most brainpower to its cause gains access to enough action selection circuits in the basal ganglia to override all suppression signals and initiate a response in the peripheral nervous system. At some point, the subjective consciousness is alerted that the conceptual “self” has come to a decision.

OK, that last one involved some speculation, I admit.
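The four steps above can be caricatured in code – every “memory,” score, and response here is invented for the joke, and real action selection in the basal ganglia is vastly messier than a sorted list.

```python
# Caricature of the experience -> memory search -> sort -> action pattern.
# All memories, motivation scores, and responses are invented.
memories = {
    "mime wielding invisible scythe": ("run like hell", 0.6),
    "someone offers sandwich": ("accept and eat", 0.8),
}

def respond(experience):
    # 2) search memory for anything resembling the new experience
    matches = [m for m in memories
               if any(word in experience.split() for word in m.split())]
    # 3) "sort" candidate responses by how much brainpower each
    #    motivation gathers to its cause
    candidates = sorted((memories[m] for m in matches),
                        key=lambda c: c[1], reverse=True)
    # 4) the strongest motivation wins access to the action circuits
    return candidates[0][0] if candidates else "stare blankly"

print(respond("a mime hands you a peanut-butter sandwich"))
```

In this run, the delicious-peanut-butter motivation outscores the scythe memory – which is roughly how it would go for me, too.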

But it’s clear, at any rate, that this inner chatter is heavily involved in the ongoing narration that helps maintain a coherent sense of a “self” in the abstract. Consciously altering that narration – rewriting the script as you go along – can cause changes in self-perception, which translate to concrete changes in behavior. I’m sure you’ve heard about self-talk before, but are you familiar with scripting?

A related – and controversial – concept is biofeedback: the idea that when you can see the results of your own physiological processes in real time on a monitor, your connectome can be taught to regulate those processes differently – to slow down your heart rate, erase your migraines, and so on.

The thing about biofeedback is, it mostly seems to work on stress-related disorders. So it would be easy to say biofeedback just teaches people to manage stress, which allows their bodies to function more healthily. And I think that’s completely true – but I think there’s another, more useful concept at work here.

See, the brain isn’t isolated up inside the skull – it’s wired into millions of neurons, all throughout the body, in a constant feedback loop. Control any part of the loop, and you control that input to the brain. Another way to say it is, the human brain may be the first organ capable of consciously programming (and reprogramming) itself and its body. This will be old news to anyone who practices meditation regularly.

So, scripts and feedback – where’s the take-home gift here, right? Coming up.

I think when a lot of people imagine a mental script, they’re probably thinking about a script for a movie or a play – one with lines and bits of description. And I’m sure those are part of anyone’s mental chatter. But I’d rather get beneath that surface, wouldn’t you?

A Linux terminal. Not pictured: women, social life.

I’m going to show my geek colors now: I think Linux is awesome. A lot of things about UNIX architecture kinda make my heart flutter. One thing I love is the terminal1, which is basically all that computer screens used to show: a prompt where you can type commands for the processor to execute. Or you can give the computer a command to execute a script – a pre-written set of instructions to perform a set of actions. Through the terminal, you can even tell the system to execute a certain script at a certain time. You can gain access to – and modify – files and folders that are normally invisible. You can run any program, and appear to be any user you want.

And this can all be done in a connectome.

One key lies in becoming attuned to – then reprogramming – the mental chatter of the left hemisphere. With practice, the mind’s scripts can be linked to sensory or emotional triggers, and automated. They can be safely tested in sandboxes. Inputs and outputs can be rerouted. Combined with bodily feedback, scripts can be used not only to consciously alter any mood, but to erase or create entire behavior patterns.
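To stretch the terminal analogy one step further, here’s a toy Python sketch of “mental scripting”: pre-written scripts bound to triggers, then fired automatically. The triggers and script steps are invented examples, not a clinical protocol.

```python
# Toy "mental scripting": named scripts bound to sensory or emotional
# triggers. Triggers, steps, and outputs are invented for illustration.
scripts = {}

def bind(trigger, steps):
    """Link a pre-written script (a list of steps) to a trigger."""
    scripts[trigger] = steps

def fire(trigger):
    """Run the script bound to a trigger, if any."""
    return scripts.get(trigger, ["no script bound"])

# Rewriting the script: the same trigger can be rebound to a new routine.
bind("rising panic", ["notice the feeling", "slow the breath", "do some mental math"])
print(fire("rising panic"))
```

The interesting part, of course, is the rebinding – in a connectome, attaching a new routine to an old trigger is exactly what “rewriting the script as you go along” means.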

Now that I’ve explained the basic idea, I’d say I’m about ready to break open the toolbox. So next time, I’m going to talk about three specific examples of using conscious bodily feedback and mental scripting to create concrete changes in behavior and circumstances. And I’m going to explain the neurochemical reasons why they work.

Until then, I’ll sign off with this: your connectome may not be a computer, but it’s a pretty fun thing to hack!


1. Yes, I know Windows can run a DOS shell and use batch scripts. My scientific opinion is that Linux is more fun. Can’t we all get along?

