Opening a Terminal

…otherwise known as “What It’s Like to Be a Fish, Part 2.”

I have to start by showing you this TED talk that just grips me every time I watch it. It’s by Dr. Jill Bolte Taylor, a neuroanatomist who experienced a stroke in her left hemisphere, and…well, I’ll let it speak for itself.

Dr. Taylor, having a staring contest with a human brain.

On the morning of December 10, 1996, I woke up to discover that I had a brain disorder of my own. A blood vessel exploded in the left half of my brain. And in the course of four hours, I watched my brain completely deteriorate in its ability to process all information. On the morning of the hemorrhage, I could not walk, talk, read, write or recall any of my life. I essentially became an infant in a woman’s body.

Would you mind spending 15 minutes or so with her story, if I promise you that it will make you think long and hard about your connectome – about yourself?

During her stroke, Dr. Taylor experienced a series of events that took her to the edge of consciousness – to what some might call the “fringes” of science itself. She brought back a story that is, to say the least, strange and intriguing.

Whatever your thoughts about her out-of-body experience, it’s easy to see that Dr. Taylor is trying her best to report, as accurately as she knows how, the subjective experience of watching her own brain shut down piece by piece. The interpretation is hers, of course – but even a skeptical science hero like me can’t ignore how unusual it is that Dr. Taylor’s consciousness was preserved intact through such a devastating chain of cerebral hardware failures.

What I want to talk about is her inner voice – that “mental chatter” she mentions. She calls it the voice of the “left brain,” which is accurate enough – as soon as I say it’s the left dorsolateral prefrontal cortex, someone’s going to come along and say I define the area too generally or too specifically – and she describes a point when she could literally hear that voice speaking to her muscles:

I could actually hear the dialogue inside of my body. I heard a little voice saying, “OK. You muscles, you gotta contract. You muscles, you relax.”

Is this really going on all the time – this chatter to our muscles?

Can I just toss an idea out, here? Maybe the verbal chatter is a component – in human left prefrontal cortexes, at least – of sequential reasoning. In people like me, it’s in words. In people who use sign language, it’s in sign. In any case, maybe the chatter is about whatever the attention of the subjective consciousness is focusing on at any given moment.

But that doesn’t mean the chatter doesn’t affect our brain’s non-conscious processing. Just the opposite, actually. Self-talk is a major influence on how we interpret our experiences, which ones we remember most readily, and how we assemble those into a concept of a self – not the subjective self of the “now” moment, but the story we tell ourselves, constantly, about who we are.

This endless interface of the subjective consciousness with “the script” – and the sequential mode of thinking it represents – breaks down to a basic pattern:

1) You have an experience (A mime hands you a peanut-butter sandwich.)

2) Your connectome searches memory for a similar situation. (“I’ve never been handed a peanut butter sandwich on the street before…but I remember that one time a mime accosted me with an invisible scythe.”)

3) Your connectome “sorts” the new experience, and comes up with a few possible responses. (“I should run like hell, lest he start wielding an invisible scythe! Unless…well, that peanut butter does look delicious.”)

4) The motivation that gathers the most brainpower to its cause gains access to enough action selection circuits in the basal ganglia to override all suppression signals and initiate a response in the peripheral nervous system. At some point, the subjective consciousness is alerted that the conceptual “self” has come to a decision.
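Purely as a cartoon, the four steps above can be sketched in code. Everything here is invented for illustration – the function names, the "strength" numbers, the memory entries – so treat it as a winner-take-all toy, not a model of the basal ganglia.

```python
# A toy sketch of the four-step pattern: experience -> memory search ->
# candidate responses -> winner-take-all selection. All names/values invented.

def respond(experience, memory):
    # 1) You have an experience (a list of cues).
    # 2) Search memory for situations sharing any cue with this one.
    similar = [m for m in memory if set(m["cues"]) & set(experience)]
    # 3) "Sort" the experience into candidate responses, each carrying a
    #    motivation strength borrowed from the remembered outcome.
    candidates = [(m["response"], m["strength"]) for m in similar]
    if not candidates:
        return "freeze and stare"
    # 4) The strongest motivation overrides all the others.
    return max(candidates, key=lambda c: c[1])[0]

memory = [
    {"cues": ["mime"], "response": "run like hell", "strength": 0.7},
    {"cues": ["peanut butter"], "response": "eat the sandwich", "strength": 0.9},
]

print(respond(["mime", "peanut butter", "sandwich"], memory))  # eat the sandwich
```

Note that the "decision" just falls out of whichever stored motivation is strongest – nothing in the loop ever consults a central "self."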

OK, that last one involved some speculation, I admit.

But it’s clear, at any rate, that this inner chatter is heavily involved in the ongoing narration that helps maintain a coherent sense of a “self” in the abstract. Consciously altering that narration – rewriting the script as you go along – can cause changes in self-perception, which translate to concrete changes in behavior. I’m sure you’ve heard about self-talk before, but are you familiar with scripting?

A related – and controversial – concept is biofeedback: the idea that when you can see the results of your own physiological processes in real time on a monitor, your connectome can be taught to regulate those processes differently – to slow down your heart rate, erase your migraines, and so on.

The thing about biofeedback is, it mostly seems to work on stress-related disorders. So it would be easy to say biofeedback just teaches people to manage stress, which allows their bodies to function more healthily. And I think that’s completely true – but I think there’s another, more useful concept at work here.

See, the brain isn’t isolated up inside the skull – it’s wired into millions of neurons, all throughout the body, in a constant feedback loop. Control any part of the loop, and you control that input to the brain. Another way to say it is, the human brain may be the first organ capable of consciously programming (and reprogramming) itself and its body. This will be old news to anyone who practices meditation regularly.

So, scripts and feedback – where’s the take-home gift here, right? Coming up.

I think when a lot of people imagine a mental script, they’re probably thinking about a script for a movie or a play – one with lines and bits of description. And I’m sure those are part of anyone’s mental chatter. But I’d rather get beneath that surface, wouldn’t you?

A Linux terminal. Not pictured: women, social life.

I’m going to show my geek colors now: I think Linux is awesome. A lot of things about UNIX architecture kinda make my heart flutter. One thing I love is the terminal¹, which is basically all that computer screens used to show: a prompt where you can type commands for the processor to execute. Or you can give the computer a command to execute a script – a pre-written set of instructions that performs a series of actions. Through the terminal, you can even tell the system to execute a certain script at a certain time. You can gain access to – and modify – files and folders that are normally invisible. You can run any program, and appear to be any user you want.
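To make the "script" idea concrete, here's a minimal Python sketch that writes a tiny shell script to disk, marks it executable, and asks the system to run it – the same moves you'd make at a prompt. (Assumes a Linux/UNIX system; the filename and message are made up.)

```python
import os
import stat
import subprocess
import tempfile

# Write a tiny shell script to disk: a pre-written set of instructions.
with tempfile.NamedTemporaryFile("w", suffix=".sh", delete=False) as f:
    f.write("#!/bin/sh\necho 'Hello from a script'\n")
    path = f.name

# Mark it executable, just like `chmod +x` at the prompt...
os.chmod(path, os.stat(path).st_mode | stat.S_IXUSR)

# ...then tell the system to execute it and capture what it says.
result = subprocess.run([path], capture_output=True, text=True)
output = result.stdout.strip()
print(output)  # Hello from a script

os.unlink(path)  # clean up after ourselves
```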

And this can all be done in a connectome.

One key lies in becoming attuned to – then reprogramming – the mental chatter of the left hemisphere. With practice, the mind’s scripts can be linked to sensory or emotional triggers, and automated. They can be safely tested in sandboxes. Inputs and outputs can be rerouted. Combined with bodily feedback, scripts can be used not only to consciously alter any mood, but to erase or create entire behavior patterns.

Now that I’ve explained the basic idea, I’d say I’m about ready to break open the toolbox. So next time, I’m going to talk about three specific examples of using conscious bodily feedback and mental scripting to create concrete changes in behavior and circumstances. And I’m going to explain the neurochemical reasons why they work.

Until then, I’ll sign off with this: your connectome may not be a computer, but it’s a pretty fun thing to hack!


1. Yes, I know Windows can run a DOS shell and use batch scripts. My scientific opinion is that Linux is more fun. Can’t we all get along?

The Splort Hormone

At the end of my last post, I promised I’d explain more about inner dialogue, and get into some practical tips on self-programming. A draft of that write-up is almost finished [SCIENCE UPDATE! It's here.] but I came across an article today that brought up some intriguing points – and some common misconceptions – about neurochemistry. I couldn’t resist such a perfect opportunity to explain some concepts more clearly.

Eye (and nose) contact.

The article is mainly about the chemistry of eye contact, and…well, I’d better let the author speak for herself.

A loved one’s lingering look can trigger a rush of happiness, but too much eye contact with an acquaintance or a stranger can bring on sudden discomfort. How, exactly, does eye contact affect us, anyway?

Sounds pretty frickin’ fascinating so far, right? So let’s dive in. After a few introductory paragraphs, the author gets to the good part – the neuroscience. She explains that authentic expressions affect other individuals’ emotional responses differently than faked ones do – which is accurate, and backed up by some intriguing scientific research.

But the next bit was what made my Science Radar bleep a warning:

Oxytocin, also known as the “love” or “cuddle” hormone, plays a big part in [making our hearts flutter]. It’s a feel-good chemical that’s released when we feel bonded with someone, either emotionally or physically. The release is prompted by a warm hug, holding hands, falling in love, and so forth.

Well… that’s only sort of true. And sort-of-true statements often lead to confusion, which is why I want to explain the oxytocin situation as clearly as I know how.

Oxytocin is often called the “love hormone” or the “cuddle hormone.” That’s probably because it can be detected at raised levels in human blood plasma around the time of orgasm. If you Google oxytocin, most of the results will contain words like “love” or “cuddle.” You’ll also see a lot of articles asking, “Can oxytocin do [X]?” and “Does it do [Y]?” There’s plenty of speculation floating around – but the fact is, scientists are still unraveling the complex relationship between oxytocin and human emotions.

For example, oxytocin levels seem to rise during physical sexual arousal in women, and “spike” around the time of orgasm. But women’s bodies also release high levels of oxytocin during cervical dilation (i.e., the gradual opening of the cervix) in labor, as well as when their nipples are stimulated by a breastfeeding infant.

Meanwhile, in men’s bodies, oxytocin levels seem to just rise and then level off during sexual arousal. Some studies have found a mild spike around the time of orgasm, while others haven’t. Oddly enough, at least one study has found that oxytocin levels rise highest in men who stimulate themselves to orgasm. So we might just call oxytocin the “splort hormone” and be done with it – but (as usually happens with science) there’s a lot more to it than that.

A sexy, naked oxytocin molecule.

First of all, who wants to know what oxytocin is? It’s a peptide hormone (i.e., a hormone created when a string of amino acids joins together in a specific way). It’s produced in the hypothalamus and released from the posterior pituitary gland of mammals. In very general terms, oxytocin is related to changes in the contractile properties of reproductive tissue. Some studies seem to show that oxytocin induces or promotes those changes; other scientists think its presence just reflects that they’re happening.

But even that’s just the tip of the oxytocin iceberg. Over the past few years, scientists have learned a lot about this hormone by studying some of the effects it can produce.

In mice, oxytocin in taste buds has been shown to inhibit the desire to keep eating. In the hypothalamus, it helps rodents time their birth cycles. In rats, a raised oxytocin level in the hippocampus decreases responsiveness to stress, and allows wounds to heal more quickly. In humans, it’s been shown to increase generosity toward strangers. Then again, it also makes people racist:

The love and trust oxytocin promotes are not toward the world in general, just toward a person’s in-group. It turns out to be the hormone of the clan, not of universal brotherhood.

Here’s where we finally come around to those ideas about oxytocin being a “cuddle hormone.” Oxytocin in blood plasma has been shown to spike when a person receives a friendly hug, or even an extended gaze. But whether the hormone is rising because we’re feeling loved – or if it’s just because we’re responding emotionally to the behaviors of other individuals in our species – is a question that’s still open.

So, like I said: sort of true.

To be honest, I’m glad I found that article, and I hope others do too – anything that helps get people excited about neurophysiology is awesome as far as I’m concerned. But I like to present things clearly, with plenty of specifics. I think that, as a journalist, if I can’t sit down and describe any given detail of my subject clearly and succinctly, I haven’t done enough research, and it’s time to hit the books again.

That’s just one guy’s opinion, obviously. But it’s the way my connectome is wired.

What It’s Like to Be a Fish

Have you ever wondered what it’s like to be a fish? Not a person in a fish body – I mean, have you ever wondered what it would be like to have a fish brain – a fish mind?

A disoriented goldfish.

Here’s another question: do you have a self in your connectome that tends to…well, to whine a lot? Maybe it doesn’t always whine in words – a lot of the time it’s in emotions – but when it’s an inner voice, it says things like, “Let’s just stay here – it’s safer here,” or, “Sure that’s great to think about, but actually doing it would be too hard.”

Now, let me ask you: do you think fish have an inner critic like that? What about reptiles?

I want to talk about where that inner voice comes from. I want to talk about taming it. Eventually, I want to help you learn to reprogram it.

But it’s going to take a couple of posts. To really understand what’s going on, we have to go back to fish.

A delicious-looking fish brain.

See, fish seem to have been the first animals to develop a cerebrum – what we think of when we think “brain.” But the cerebrum is just one part of fish brains…

- the brainstem (myelencephalon) regulates muscle reflexes, respiration, and other dull-but-crucial survival tasks

- the hindbrain (metencephalon) is mostly composed of the cerebellum, which deals with balance; it sits just on top of the brainstem

- the midbrain (mesencephalon) is mostly made up of the optic lobe; it first appeared in primitive fish, to deal with processing complex visual data

- the cerebrum is composed of two main parts:

1) the interbrain (diencephalon), which monitors and relays incoming information from sensory and muscle neurons, as well as from neurons throughout the brain

2) the forebrain (telencephalon) was at first mainly used for smelling things; it’s made up of the pallium – a distant ancestor of the cerebral cortex – and in some fish, a smaller subdivision known as the olfactory lobe (subpallium)

Like I mentioned, fish used their pallium – known as an archipallium – mostly for processing smells. But the cerebrums of a few fishes and amphibians evolved more ambitiously. Their interbrains began to specialize into structures like the thalamus, which regulates sleep rhythms, and plays a major role in coordinating motor function. On the whole, the thalamus seems to receive incoming signals from neurons, and route them – selectively – to other brain areas through a set of structures called the basal nuclei.

Here’s where we get into the most mysterious role of the thalamus – its interplay with the cerebrum. The basal nuclei allow the thalamus to receive signals, not from outside sensory input, but from another area of the brain, then respond by routing signals to yet another area – enabling a brain to be aware of (and respond to) its own states. This may not be consciousness, exactly – but it represents the leap from just having the feeling of anger, to being aware that you are feeling anger. That’s about as far as most reptiles got.

Meanwhile, the basal nuclei were developing into functional clusters called the basal ganglia. In mammals, these are a component of the thalamo-cortico-thalamic circuits which may be responsible for producing the feedback loops associated with subjective consciousness. As mammals got smarter, the cerebral cortex got larger, and eventually developed the lobes that allow us to speak, memorize lines, and perform the other sorts of tasks usually associated with sentience.

So, what does all this have to do with the inner critic?

Well, there’s a pattern that starts to become clear throughout all this analysis of the brain’s history: it’s a lot easier for evolution to co-opt an existing structure for a new purpose than to develop an entirely new one. Some theories, like the Triune Brain hypothesis, actually group the brain’s structures into semi-separate “systems” or “layers” of consciousness. At any rate, it’s clear that in mammalian brains, a wide variety of instincts, emotions, and thoughts are constantly competing for the spotlight.

Now, in the forebrain there’s an area called the cingulate cortex that probably plays a major part in this ongoing brain-chatter. The cingulate cortex is folded on top of and around the interbrain, and seems to help us reconsider our instinctual desires, especially in social situations. (If you’re into the Triune Brain hypothesis, the cingulate cortex is considered a major part of the limbic system.) It’s not particularly associated with rationality – instead, it seems to help some emotional reactions “debate” with other ones before the body takes action.

The anterior cingulate cortex helps modulate the brain’s responses to stress, including social pressures. A deficiency or suppression of the synaptic protein neurabin in the anterior cingulate cortex has been implicated in anxiety disorders and paranoid schizophrenia, where patients often feel that “inner voices” are constantly judging and mocking them.


Which is pretty interesting, because the posterior cingulate cortex may be helping to dream up some of those voices. The area has been linked with the brain’s ability to model the behavior of others – to imagine what they probably believe, and how they’ll probably act. An overactive posterior cingulate cortex, combined with an anterior cingulate cortex that’s not modulating those signals effectively, might contribute to a feeling that one is under constant “inner criticism.”

Some patients say drugs that suppress signals coming from the posterior cingulate cortex (dissociatives, for example) help silence their inner critic, allowing them to just live “in the moment” without worrying about what others think. This is an easy slope to slide down, though: people who abuse dissociative drugs may find that they lose the capacity to self-regulate, or even to assemble a coherent sense of self.

In short, a connectome has to maintain a delicate balance between gratification and regulation – between acting heedlessly on instinct and self-analyzing to the point of paralysis.

Luckily for fish and amphibians, their brains don’t seem to have the neural hardware to support this sort of an inner critic. Now, I do want to point out that some neuroscientists, such as Oxford’s Gero Miesenböck, have identified simple “critic” neurons that modify fruit fly behavior, and even used those neurons to remotely control these animals. But even if these neurons have some functionality in common with the human cingulate cortex, they operate at a much more instinctive level of direct feedback – they’re much too simple to create an actual internal dialogue or debate. For that, we need at least some measure of what we might call self-consciousness.

In this same vein, mice can definitely get paranoid about pain, but they don’t seem to worry about much beyond their basic physical comfort. In other words, anxiety about abstract ideas – or about one’s own characteristics – seems to be the price our mind pays for modeling a “self,” and analyzing complex social environments.

Of course, it would be overly simplistic to say that the inner critic is the voice of a cingulate cortex gone haywire. Science doesn’t seem to support the idea that such a “voice” really originates, or is localized, in any one area of the brain. But here’s a thought that I think is worth considering: maybe these “selves” we keep hearing about are actually the desires and intentions of various communication systems in the brain.

Here’s what I mean: evolution hasn’t changed the essential structure of the brain since the days of fish – it’s just woven in layer after layer of more complex structures. What if most of those relay systems generate selves of their own? What if, as the brain evolved, none of those selves was ever silenced, even as new ones were added on? Maybe they constantly interrupt each other. Maybe they disagree with one another, and send out signals that they’re upset. Maybe a bunch of them are constantly scrambling for a spot in the consciousness seat – for a moment of control.

I mean, just give the idea some thought.

You might also try a bit of an experiment: the next time you’re gearing up for a new and potentially scary experience, and a little voice in the back of your mind pipes up and says, “I’m scared,” ask it (nicely), “Who are you?” You may be surprised at the response.

In the next post, I’m going to get into some more practical stuff: we’ll talk about the human connectome’s feedback systems, and how consciousness uses speech to self-modify.

Other Selves

The idea that everyone (sort of) carries around multiple selves doesn’t seem to be getting as much of a surprised reaction as you’d think it would. If anything, people are shrugging their shoulders, then pointing out some stuff worth pointing out.

A less-smart phone.

When the pump was invented, the brain was like a pump. When plumbing was the new thing on the block, the brain was a series of pipes. When telephones came into their own, the brain was a giant switchboard. When electronics became mainstream, the brain was a circuit board, and then a computer. Now the brain is a smartphone. Go figure.

Or, as the writer of this excellent New Yorker article puts it,

When there were telephone exchanges, the mind was like a telephone exchange, and, in the same period, since the nickelodeon reigned, moving pictures were making us dumb. When mainframe computers arrived and television was what kids liked, the mind was like a mainframe and television was the engine of our idiocy. Some machine is always showing us Mind; some entertainment derived from the machine is always showing us Non-Mind.

Somewhere I read that in the Middle Ages, a philosopher compared the brain to a catapult.

But I think the analogy about self-apps is less meant to describe the mind than to make a point about a connectome’s overall organization: the “self” isn’t one solitary dude who rides around in your head. It’s more like a school of fish, or a swarm of bees – a whole bunch of individual motivations running around, constantly scrabbling for attention.

Whose attention?

Good question. That’s where the global workspace theory comes in. The basic idea is that a “self” isn’t so much a thing (like an app or a fish) but a point of view. I picture it kinda like a theater with one seat, and a huge wraparound stage. The self isn’t a person sitting in the chair – selfhood is the experience of sitting in the chair. And the chair doesn’t always have the same occupant.
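For fun, here's a tiny sketch of that one-seat theater. The names and "salience" numbers are all invented; the only real idea on display is that whichever process shouts loudest gets the seat for a moment, and "the self" is just whoever happens to be sitting in it.

```python
# A toy global-workspace: specialist "selves" bid for the single seat,
# and only the winner's message gets broadcast this moment.

def who_gets_the_seat(selves):
    winner = max(selves, key=lambda s: s["salience"])
    return f"{winner['name']} says: {winner['message']}"

selves = [
    {"name": "hunger",  "salience": 0.4, "message": "find food"},
    {"name": "critic",  "salience": 0.8, "message": "that was embarrassing"},
    {"name": "curious", "salience": 0.6, "message": "what's that noise?"},
]

print(who_gets_the_seat(selves))  # critic says: that was embarrassing

# A moment later the saliences shift, and the chair changes occupants.
selves[0]["salience"] = 0.9
print(who_gets_the_seat(selves))  # hunger says: find food
```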

It’s funny – I remember fights I had with girlfriends like ten years ago where they’d say, “If you have multiple selves, how can I know I’m getting close to the real you?” I still don’t know how to answer that.

Magical Mice

Before I get into the magic mice, I should probably explain about the magic ravens.

Even if you don’t follow the old Nordic religion of Asatru, you’ve probably heard about the one-eyed god Odin (maybe from Thor or Sandman comics, both of which are epically fun to read). Rack your connectome a little, and you might remember that Odin had two super-intelligent ravens, Huginn and Muninn* (Thought and Memory). While Odin chilled on his throne in Valhalla, Huginn and Muninn would fly around the world, then return to tell him everything they saw.

Odin chillin' with Huginn and Muninn, evidently making some kind of medieval Trollface.

I think the stories about Huginn and Muninn are trying to explain that thought and memory seem almost magical, when you stop to think about it. Do they just fly around each other in endless strange loops, or are they reporting back to something else…?

It’s questions like these that keep neuroscientists (and philosophers, and nerds like me) up at night. That’s why I do a happy dance whenever I read about research like this, because it means I get to learn a little more about how memory works.

It breaks down a little sum’n like this: a research team at Princeton has genetically engineered mice with ultra-powerful memories. These furry little dudes can coast through the AP honors program in Mouse School (which seems, mainly, to involve a lot of Not Getting Lost in Confusing Places) in about half the time it takes an average mouse. One of these rodent geniuses – the star of the program, you might say – is a mouse named (seriously) “Doogie.” He can solve complex mazes in a flash, and he remembers his previous solutions effortlessly. A Sudoku addiction clearly lies in his future.

To show why this particular article gets me all hot ‘n’ bothered, I’m gonna quote a quote that was quoted in another quote:

“There’s something magical about taking a mind and making it work better,” says Alcino Silva, a professor of neuroscience at the University of California, Los Angeles and one of the pioneers of the enhanced cognition field. “In neuroscience, we’ve learned so much from loss-of-function mutants. But we’re only beginning to learn from these smart animals.”

Meanwhile, the smart animals continue to study the scientists, watching and plotting…always plotting…

If that's a raven, this may be a depiction of the ultimate animal super-team.

There’s a clear upshot to all this mouse engineering: the more scientists learn to isolate genes related to memory and learning, the brighter the light at the end of the tunnel for people suffering from dyslexia, ADHD, and a slew of other mental troubles.

And that’s only the beginning. I mean, imagine what it’d be like to have a truly perfect memory…to always be able to find the word that’s on the tip of your tongue… to never forget where you left your keys… to never forget anything… Ohhh wait; this is headed into Cautionary Tale of Hubris territory now, isn’t it?

Yep. Because it raises the question of what, exactly, a “deficit” in memory is. You don’t need me to tell you that there are some things we’d all rather forget. I mean, one of my very favorite movies is about that exact need. So you kinda have to feel some sympathy for the mouse described here:

When placed in an enclosure where days before it had received a mild electric shock – a jolt so minor, most mice don’t even react to it – this mouse cowers in the corner frozen with fear. Its enhanced memory is both blessing and burden.

So, basically, he has lifelong post-traumatic stress. How delightful.

The article also mentions a medical case that really got me thinking (or maybe remembering).

Sherashevsky had such a perfect memory that he often struggled to forget irrelevant details. For instance, [he] was almost entirely unable to grasp metaphors, since his mind was so fixated on particulars.

…which sounds a little weird, until you think about the last time you tried to explain a joke that your audience just didn’t “get.” The more you explain the details, the less funny the joke becomes – the more the humor seems to vanish behind the words. Here’s where it gets absolutely mind-blowingly fascinating (for me, anyway):

Other researchers have used computer models of memory to demonstrate that memory is actually optimized by slight imperfections, which allow us to see connections between different but related events. “The brain appears to have made a compromise in that having a more accurate memory interferes with the ability to generalize,” Farah says. “You need a little noise in order to be able to think abstractly, to get beyond the concrete and literal.”

So, in short, we need to forget some of what we experience in order to think about what we’ve experienced. Even a magical mouse needs to forget in order to learn.
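Here's a quick toy version of that "noise helps you generalize" idea. The events and the tolerance value are invented; the point is just that a perfectly literal memory only recognizes exact repeats, while a slightly blurry one also matches similar-but-new events.

```python
# A literal memory vs. a "blurry" one. Events are sets of features.

def literal_match(memory, event):
    # Only an exact repeat counts as familiar.
    return event in memory

def blurry_match(memory, event, tolerance=1):
    # Familiar if the event differs from some memory by at most
    # `tolerance` features (symmetric difference of the feature sets).
    return any(len(set(m) ^ set(event)) <= tolerance for m in memory)

memory = [frozenset({"mouse", "maze", "cheese"})]
new_event = frozenset({"mouse", "maze", "peanut"})  # similar, not identical

print(literal_match(memory, new_event))              # False: literal memory is stumped
print(blurry_match(memory, new_event, tolerance=2))  # True: close enough to generalize
```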

Now, if he could just get that other mouse named “Pinky” to stop asking him so many inane questions…


*On a random tangent, I realized that the word “Muninn” sounds almost exactly like the first word of the Iliad, Μῆνιν (“rage”…or “of rage” if you’re gonna get all grammatical). Anyway, I found this so completely awesome that my hands spontaneously typed this footnote, then burst into flames.


What’s a Connectome?

I’ll give you two hints:

1) You cannot buy one.

2) You cannot eat one.

Despite these two facts (and the fact that my word processor strongly suspects “connectome” is not a word at all), I will bravely attempt to blog about what a connectome is.

Because the concept of a connectome, I think, is about to radically change the way we talk about consciousness. And unconsciousness. And death. In fact, I’m going to go ahead and say that ten years from now, it’s going to be hard to remember how we talked about any of those things without the concept of a connectome.

A digitally rendered map of a connectome. Not as sexy as the real thing, but still...

If the word “connectome” were a person, though, it would have just barely graduated from kindergarten. The neuroscientist Olaf Sporns coined it in a jaw-dropping (for me, anyway) 2005 paper titled, “The Human Connectome: A Structural Description of the Human Brain.”

While “The Human Connectome” sounds like an awesome title for a horror movie, the title of the paper conveys a fairly clear idea of what Sporns meant by the term. Still, I like how he defines it in his own words:

The connectome is the complete description of the structural connectivity (the physical wiring) of an organism’s nervous system.

It’s as simple (and as face-smashingly complex) as that.

In fact, it’s so complex that Sporns himself said, “The full connectome, at cellular resolution, seems presently out of reach, at least for the human brain.”

But, this being science, it wasn’t long before other researchers started taking that statement as a challenge.

In 2008, the MIT neuroscientist (and cognitive scientist; and theoretical physicist; and mathematician; and undefeated master of 7 martial-arts styles named after poisonous animals) Sebastian Seung tackled the concept with his paper “Connectomics: Tracing the Wires of the Brain.” After a few introductory paragraphs, Seung got down to business:

The structure of the brain is extraordinarily complex. … But I and others are now optimistic that the connectome will eventually be transformed from dream into reality. A new field of neuroscience will be created: “connectomics.”

Now, even if you’re not a professional genius like Seung, you can tell that what he’s saying boils down to, “Yeah, that thing we’ve all been hearing is out of reach? I’m actually reaching for it right now; and, oh yeah, I’m gonna have to invent a new scientific field to deal with it.”

So, what have Seung and his team been up to since 2008? Oh, nothing much; just taking thousands of slices of mouse brains and reconstructing every single synapse in 3-D.

He admits it’s going to take a while: the human brain is roughly estimated to have about 100 billion (10^11) neurons, linked by about 100 trillion (10^14) synaptic connections. Yeah, that’s a 1 followed by 14 zeroes – far more than the number of genes in a human genome.
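Just to put that scale in perspective, here's the back-of-the-envelope arithmetic (the gene count is a rough round number):

```python
neurons  = 10**11      # ~100 billion neurons
synapses = 10**14      # ~100 trillion synaptic connections
genes    = 2 * 10**4   # ~20,000 protein-coding genes, very roughly

print(synapses // genes)    # 5000000000 -- synapses outnumber genes ~5-billion-fold
print(synapses // neurons)  # 1000 -- about a thousand synapses per neuron, on average
```

In other words, the wiring diagram can't possibly be spelled out gene-by-gene – which is part of why it has to be mapped.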

But Seung isn’t just going to wait around for his lab assistants to hand-dye every neuron in a mouse brain – he and his colleagues are also designing computer systems that will be powerful enough to map billions of neural connections simultaneously. This, he says, will allow them to explore complex synaptic chains in real time.

Also, it will pretty much be the neuroscientific equivalent of Real Steel.

