
Neuroscience Friends!

I’ve just returned from a thrilling weekend at the BIL Conference in Long Beach, California (yes, the pun on “TED” is very intentional) where I met all kinds of smart, fun people – including lots of folks who share my love for braaaiiins!

The conference was held in... The Future!

So I thought I’d introduce you guys to some of the friends I made. I think you’ll be as surprised – and as excited – as I am.

Backyard Brains
Their motto is “neuroscience for everyone” – how cool is that? They sell affordable kits that let you experiment at home with the nervous systems of insects and other creatures. They gave a super-fun presentation where I got to help dissect a cockroach and send electrical signals through its nerves.

InteraXon
They build all kinds of cutting-edge tools that let home users study their brain activity, and even control machines and art projects with it. Their founder, Ariel Garten, has a great TED talk here – I’ve rarely met anyone else who was so excited to have weird new neuroscience adventures.

Deltaself and Dangerously Hardcore
Two blogs by the very smart Naomi Most – the first is about how scientific data is changing the way we all understand our minds and bodies; the second is about hacking your own behavior to stay healthier and live better.

Halcyon Molecular
Their aim is to put the power to sequence and modify genomes in everyone’s hands within the next few decades. They’re getting some huge funding lately, and lots of attention in major science journals.

Bonus – XCOR Aerospace
They’re building a privately-funded suborbital spacecraft for independent science missions. If there’s anybody who can help us all join the search for alien life in the near future, I bet it’s these guys.

So check those links out and let me know what you think. I’d love to get these folks involved in future videos, especially if you’re interested in any of them.

Consider This an Invitation

This photo got me thinking. Only 24 percent? Really?

We’re finding weird new exoplanets every day – hell, NASA hasn’t even ruled out the possibility that there could be life on Europa and Titan, two moons in our own solar system – yet so many people have lost faith in space’s limitless potential to surprise us.

But we’re entering an age when that potential is no longer the exclusive domain of first-world governments and media conglomerates. The fact that we even have a contest like the Google Lunar X Prize proves that independent space exploration is becoming a very real possibility for each one of us.

The question isn’t whether a private company is going to mount an alien-hunting expedition – it’s who’s gonna be the first to try?

Crazy? Of course it’s crazy! Every awesome expedition is!

So what do you guys say? I say it’s possible if we put our resources and our heads together. Even if we don’t find E.T., we’ll have one hell of a story to tell our grandkids.

5 Ways to Fight the Blues…with Science!

So you’re stuck in that mid-week slump…the weekend lies on the other side of a scorching desert of work, and you have no canteen because you gave up water for Lent (in this metaphor, “water” refers to alcohol…just to be clear).


But fear not! Neuroscience knows how to cheer you up! Nope, this isn’t another post about sex or drugs…though those are coming soon. This one’s about five things science says you can do right now – with your mind – to chase your cranky mood away.

1. Take a look around
Research shows that people who focus on the world around them, instead of on their own thoughts, are much more likely to resist a relapse into depression. This is easy to do – just find something interesting (or beautiful) to look at, and think about that for a few seconds…you’ll be surprised how quickly your worries fade.

2. Do some mental math
Scientists say doing a little simple arithmetic – adding up the digits of your phone number, for example – reroutes mental resources from worry to logic. Don’t worry; your emotions will still be there when you’re done…but they’re less likely to hog the spotlight if you don’t give them center stage.
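For the programmers out there, the digit-adding trick is simple enough to fit in a few lines of Python – a throwaway sketch, just for fun (the phone number is made up, obviously):

```python
def digit_sum(phone_number: str) -> int:
    """Add up the digits of a phone number, ignoring dashes, spaces, etc."""
    return sum(int(ch) for ch in phone_number if ch.isdigit())

# The kind of simple arithmetic that reroutes attention from worry to logic:
print(digit_sum("555-867-5309"))  # 53
```

Any little rule-based task like this works – the point is just to hand your brain something logical to chew on for a few seconds.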

3. Get out and about
Lots of studies show that physical activity raises levels of endorphins – the body’s own “feel-good” chemicals – and helps improve your mood throughout the day. You don’t have to run a marathon; even a quick walk around the block will get your blood pumping and help clear your mind.

4. Find some excitement
Some very interesting studies have found that courage – a willingness to face some of your fears – feeds on itself; in other words, the more adventurous your behavior is, the fewer things your brain considers threatening. In a way, it’s a “fake it ’til ya make it” situation…but instead of trying to be someone you’re not, you’re becoming more comfortable with the person you are.

5. Remember, it’s not always a bad thing
It sometimes helps to remember that stress is a natural phenomenon…as natural as digestion or sleep. Though stress (or sadness, or worry) can sometimes get out of hand, our bodies have evolved these responses to help us, and there’s nothing “wrong” with you just because you’re feeling annoyed or down in the dumps today. Instead of trying to make the feeling go away, sometimes the best thing to do is acknowledge it, and think about what’s triggering it. You might surprise yourself with an insight.

So, those tips are pretty simple, right? Try some of ‘em out, and let me know which ones worked best for you. After all, that’s why scientists study this stuff – to help us all understand more about what our minds are up to.

Sacred Values

Principles on which we refuse to change our stance are processed via separate neural pathways from those we’re more flexible on, says a new study.

Some of our values can be more flexible than others...

Our minds process many decisions in moral “gray areas” by weighing the risks and rewards involved – so if the risk is lessened or the reward increased, we’re sometimes willing to change our stance. However, some of our moral stances are tied to much more primal feelings – “gut reactions” that remind us of our most iron-clad principles: don’t hurt innocent children, don’t steal from the elderly, and so on.

These fundamental values – what the study calls “sacred” values (whether they’re inspired by religious views or not) – are processed heavily by the left temporoparietal junction (TPJ), which is involved in imagining others’ minds; and by the left ventrolateral prefrontal cortex (vlPFC), which is important for remembering rules. When especially strong sacred values are called into question, the amygdala – an ancient brain region crucial for processing negative “gut” reactions like disgust and fear – also shows high levels of activation.

These results provide some intriguing new wrinkles to age-old debates about how the human mind processes the concepts of right and wrong. See, in many ancient religions (and some modern ones) rightness and wrongness are believed to be self-evident rules, or declarations passed down from on high. Even schools that emphasized independent rational thought – such as Pythagoreanism in Greece and Buddhism in Asia – still had a tendency to codify their moral doctrines into lists of rules and precepts.

But as scientists and philosophers like Jeremy Bentham and David Hume began to turn more analytical eyes on these concepts, it became clear that exceptions could be found for many “absolute” moral principles – and that our decisions about rightness and wrongness are often based on our personal emotions about specific situations.

The epic battle between moral absolutism and moral relativism is still in full swing today. The absolutist arguments essentially boil down to the claim that without some bedrock set of unshakable rules, it’s impossible to know for certain whether any of our actions are right or wrong. The relativists, on the other hand, claim that without some room for practical exceptions, no moral system is adaptable enough for the complex realities of this universe.

But now, as the journal Philosophical Transactions of the Royal Society B: Biological Sciences reports, a team led by Emory University’s Gregory Berns has analyzed moral decision-making from a neuroscientific perspective – and found that our minds rely on rule-based ethics in some situations, and practical ethics in others.

The team used fMRI scans to study patterns of brain activity in 32 volunteers as the subjects responded “yes” or “no” to various statements, ranging from the mundane (e.g., “You are a tea drinker”) to the incendiary (e.g., “You are pro-life”).

At the end of the questionnaire, the volunteers were offered the option of changing their stances for cash rewards. As you can imagine, many people had no problem changing their stance on, say, tea drinking for a cash reward. But when they were pressed to change their stances on hot-button issues, something very different happened in their brains:

We found that values that people refused to sell (sacred values) were associated with increased activity in the left temporoparietal junction and ventrolateral prefrontal cortex, regions previously associated with semantic rule retrieval.

In other words, people have learned to process certain moral decisions by bypassing their risk/reward pathways and directly retrieving stored “hard and fast” rules.

This suggests that sacred values affect behaviour through the retrieval and processing of deontic rules and not through a utilitarian evaluation of costs and benefits.

Of course, this makes it much easier to understand why “there’s no reasoning” with some people about certain issues – because it wasn’t reason that brought them to their stance in the first place. You might as well try to argue a person out of feeling hungry.

That doesn’t mean, though, that there’s no hope for intelligent discourse about “sacred” topics – what it does mean is that instead of trying to change people’s stances on them through logical argument, we need to work to understand why these values are sacred to them.

For example, the necessity of slavery was considered a sacred value all across the world for thousands of years - but today slavery is illegal (and considered morally heinous) in almost every country on earth. What changed? Quite a few things, actually – industrialization made hard manual labor less necessary for daily survival; overseas slaving expeditions became less profitable; the idea of racial equality became more popular…the list could go on and on, but it all boils down to a central concept: over time, the needs slavery had been meeting were addressed in modern, creative ways – until at last, most people felt better not owning slaves than owning them.

My point is, if we want to make moral progress, we’ve got to start by putting ourselves in the other side’s shoes – and perhaps taking a more thoughtful look at our own sacred values while we’re at it.

A Memory Menagerie

What we call “memory” isn’t just one process – or even one type of process. In fact, neuropsychologists classify memories using a system that can be a little bewildering – which is why I’m going to do my best to clear up some distinctions.

Buckle up - this is gonna be a blog post to remember!

So, without further ado, let’s take a tour of this memory zoo.

Part I. Memory time ranges
Scientists today usually divide memory into a few basic ranges of time: working memory, instantaneous memory, short-term memory, and long-term memory. However, these distinctions are hotly debated, as I’ll explain below.

1. Working memory
Strange as it might seem, we need a certain kind of memory to assemble a coherent sense of what we think of as “the present.” The exact nature of working memory is still argued quite a bit, but it’s generally agreed that our overall sense of “the present” refreshes about every 20 to 30 seconds, and contains a very limited number of “items” on which our attention can simultaneously focus. These items can be auditory/phonological, visual/spatial, or conceptual/abstract – and they can be triggered by external stimuli or synthesized from stored data.

Working memory is crucial for tasks like math and reading, which require us to “hold” a representation of an item in our focus of attention for several seconds or more. In short, working memory is roughly synonymous with “what you’re thinking about” in the current moment.

2. Instantaneous memory
Unlike working memory, which by its nature forms a part of our conscious attention, immediate (or “instantaneous”) memories are sensory memories that form almost instantly, fade in a few hundred milliseconds, and may or may not enter our conscious awareness. In the visual domain, this is sometimes known as “iconic memory,” while in the world of sound it’s called “echoic memory.”

Because instantaneous memory allows informational patterns to persist in our brains after an actual stimulus is no longer present, it helps our brains assemble a cohesive flow of subjective experience. Thus, it allows us to detect changes in visual or auditory information, which lets us perceive movement in animation and film, and detect musical harmonies in sets of tones.

A brief aside: Priming
A topic that often comes up in connection with working memory and instantaneous memory is that of priming – when exposure to a certain stimulus influences our perceptions of later stimuli. One simple example of priming is that when people are shown a word – say, “table” – then asked to complete a word starting with “tab__,” they’re more likely to answer “table” than people who weren’t primed with that word.

A more intriguing example is that when people are shown a set of dots moving in a clockwise direction, and then look at a set of dots moving in a counterclockwise direction, they’re likely to report that the second set of dots was moving more dramatically away from the direction of the first set than it actually was.

The general idea here is that the present moment isn’t discrete from the past – our recent events and ideas leave impressions on our senses, and those impressions are constantly influencing what we perceive at any given instant.

3. Short-term memory
It’s important to clear up a bit of confusion right from the start – many laypeople (and even some scientists) use the terms “working memory” and “short-term memory” interchangeably, but today’s neuroscientists and psychologists typically classify working memory as just one subset or type of short-term memory.

Keep in mind that the term “working memory” refers to the attention processes – involved in tasks such as reading and math – that temporarily store and manipulate the information that composes our sense of “the present moment.” Though some scientific papers do refer to working memory as short-term memory, the phrase “short-term memory” is more widely used to describe any information that’s kept “on hand” for instant recall – such as a sentence you read a minute ago, or what room you just left.

A lot of debate surrounds the exact capacity of short-term memory, but the range of 7±2 items has been highly influential. More recent research points to a number around 3 or 4. The rate at which short-term memories “decay” is also debated, but it’s widely agreed that without repetition, most of these memories are eventually lost for good.

4. Long-term memory
A lot of scientists think these memories are encoded (written into long-term storage) from short-term ones during sleep. They often stick around for a person’s entire life – at the very least, they’re much harder to forget than short-term or working memories are. They can also be harder to recall, but once they’re loaded into working memory, it’s easier to recall them in the future. One interesting feature is that the mere act of recalling a long-term memory seems to change it.

Part II: Two memory models
Before we go any further, there’s a very important point to make: the exact boundary between short-term and long-term memory is (you guessed it) the cause of a lot of debate. In fact, not all scientists agree that there’s a clear distinction between the three memory systems at all. The two main competing theories about this distinction are known as the dual-store and single-store memory models:

1. The Dual-store memory model
This is the more conventional one – it draws a distinction between short-term and long-term memories (without making any particular statement about the distinction between working memory and short-term memory). One argument in favor of the dual-store model is that long-term memory’s capacity seems to be much larger than that of the short-term and working memory systems.

Objections against the dual-store model mainly center around two arguments:
a) even if experimental subjects are distracted from recalling a recently-performed task, they’ll still perceive the task’s various steps as “recent” and “contiguous” with one another;
b) the length of time an item spends in short-term memory isn’t a direct predictor of its strength in long-term memory.

2. The Single-store memory model
This one posits that there’s only one type of memory, in which context provides the sense of recency. In the single-store model, short-term memory and long-term memory (and, presumably, working memory) are all just different ways of perceiving memories encoded in a single system.

Objections against the single-store model point out that although this theory helps explain some features of memory – such as the fact that recall fades gradually until about 10 minutes have passed, and then fades much more gradually over the next few months – it still doesn’t provide a very clear explanation for some brain phenomena, such as why we apparently need sleep to organize our short-term memories into long-term ones, and why more permanent memories appear to be encoded in separate synaptic maps from short-term ones.

These arguments help to demonstrate that even if short-term and long-term memory are different systems, we may not be defining their boundaries correctly. And so, the discussion continues.

Part III. Memory data types
Within the ranges listed above, many scientists also divide memories along another set of lines. They mainly have to do with what type of information the memories primarily focus on:

1. Explicit memory
In general, it’s easiest to think of explicit memories as “memories you’re conscious of having.” Facts you memorize, events you recall, and numbers you manipulate to solve a math problem are all explicit memories.

Some scientists classify explicit memories into several sub-types:

a) Visuo-spatial memories – working memory items dealing with visual images, real or imagined; they seem to refresh about every 10 seconds
b) Phonological memories – working memory items dealing with sound and speech, real or imagined; they seem to refresh on a loop that’s about 3 to 4 seconds long
c) Declarative memories – memories for specific facts and events. In the dual-store model, declarative memories are typically considered part of the long-term memory system, and they fall into two sub-sub-categories (yeah, I know, I know…)
     i) Semantic memories – memories of facts/understandings
     ii) Episodic memories – memories of occurrences/events

2. Implicit memory
It’s easiest to think of these memories as “memories for how to do things,” or “how things happened.” They’re stored in a very different way from explicit ones – instead of being consciously learned and recalled, they’re stored through experience and/or practice, and come into play when we involuntarily remember how to ride a bike, how to swim, how falling in love feels, or how embarrassing it was to spill a drink. Instead of focusing on specific facts, they’re focused on associations and environmental stimuli. In the dual-store model, they’re considered part of the long-term memory system.

Studies of implicit memory are mainly concerned with procedural memories – memories for how to perform a task. A procedural memory could be something as simple as how to tie a shoe, or as complex as how to play a sonata. It might even be a cognitive skill. The unifying characteristic is that it’s learned through practice.

The wild card: Emotional memory
These are memories about how a past event felt, or about emotional associations with an explicit memory. Scientists haven’t resolved the question of whether these memories are actually part of the implicit memory system, or represent a system of their own.

Emotional memories are an unusual breed – unlike procedural memories, they seem to form almost instantly; but unlike explicit memories, recalling or suppressing them isn’t under our conscious control. They’re also very hard (but not impossible) to forget. Some scientists have proposed that emotional memories represent an entirely separate – and more primitive – memory system: one that involves the amygdala. The amygdala also helps strengthen explicit and procedural memories, though, so its exact overall role in memory remains unclear.

Part IV: Memory time directions
There’s one last important memory distinction that needs to be mentioned. Like the ones described in Part I above, they’re also related to time – but instead of pertaining to a time range, they’re related to time’s direction.

1. Retrospective memory
This just means a memory for any event, fact, or procedure encountered in the past and recalled after some delay. It can be explicit or implicit, episodic or semantic, long-term or short-term, or even emotional. This term doesn’t apply to working memory, though, because it implies an interruption between the experience that triggered the memory and the act of recalling it.

2. Prospective memory
These are memories involving the timing of events and actions that haven’t happened yet, such as an appointment that’s coming up, a chore that needs to be performed, or a shower you’re about to take. These memories are sometimes called memories for the future (which I think is insanely confusing), but the term basically just refers to our ability to think and plan about events we haven’t actually experienced.

Well, there you have it. I hope I’ve helped make things clearer instead of thoroughly confusing you. If you’ve got any questions, or would like me to tidy any of this up, feel free to drop me a line and I’ll do what I can. But I hope this has piqued your interest in memory research, and shown you how far the field still has to go before anyone agrees on…well, on much of anything.

In all honesty, it seems silly to me that the different types of memory are so hard to remember – if that’s not evidence that the universe is absurd, I don’t know what is!

Perfect Memory and the Ten Percent Myth

This story begins in a bar.

On the wall-mounted TV, the popular channel Sports Channel was taking a break from the popular show Men in Suits Awkwardly Attempting Rapport to show a trailer for the movie Limitless.

"I haven't seen a shot like that since the last time a sports player took a shot like that!"

I’m not gonna lie – I was intrigued by the plot at first; and not only because literally anything on TV is made more awesome by alcohol consumption.

Plus, I’ve always been fascinated by Flowers for Algernon-type stories, where human minds contain vast potential for genius, just waiting to be unleashed by the right combination of drugs or electrical signals or meditation techniques. As you might guess, it’s one of the major reasons why I like to study the human mind so much – and why I launched this blog.

But a few seconds into the trailer, I heard a line that makes me cringe (or do a whisky-spitting double-take) every time I hear it: “You know, they say we only use ten percent of our brains.” Then the narrator explains he’s taking pills that give him super-human intelligence, which boils down to an ability to remember everything he’s ever seen or heard.

[SCIENCE UPDATE! #1 - I watched Limitless this weekend, and it turns out I wasn't quite right about that last point. At one point, the movie's narrator/protagonist describes his experience on the pill by saying that his memories of everything he'd ever seen or heard were incredibly accessible and organized. So presumably the drug did act as a sort of associative and attentive filter (see #2 below).]

Now, I understand that a movie is a work of fiction. But Limitless provides a great opportunity to clear up some confusion that surrounds this ten percent myth. I also want to talk about why the idea that a flawless memory leads to a genius intellect is… well, not so flawless.

Not to point fingers, but psychics are mainly the ones behind this “ten percent” nonsense. One modern version of the claim appears in a 1997 book called Reasons to Believe: A Practical Guide to Psychic Phenomena. Since I know this thanks to Benjamin Radford, I’ll let him tell the rest:

Author Michael Clark mentions a man named Craig Karges. Karges charges a lot of money for his “Intuitive Edge” program, designed to develop natural psychic abilities. Clark quotes Karges as saying: “We normally use only 10 to 20 percent of our minds. Think how different your life would be if you could utilize that other 80 to 90 percent known as the subconscious mind.”

I don’t want to put words in anyone’s mouth, but I think what Karges may have been trying to say is that the majority of the human mind’s processing takes place outside the “spotlight” of the subjective consciousness – in other words, that at any given time, we’re only consciously devoting around ten percent of our connectome’s total processing capacity to cognitive tasks, while the rest is being used by the unconscious for its own ends.

Now, at the conceptual level, there’s probably some merit in considering an idea like this. Studying dreams and other subconscious processes can help us acknowledge and confront our true desires and feelings. It’s also important, I think, to acknowledge the authority that emotions and other semi-conscious feelings often hold over our cognitive minds.

But the problem is, a confusion tends to sneak in when people repeat the claim: instead of Karges’ rather vague word “mind,” the word “brain” is usually the one that ends up in the statement – and that’s a more serious inaccuracy. None of this Freudian theory has anything to do with what percent of our brain our conscious mind is capable of using. Nor does it have anything to do with any hidden abilities that can be unleashed with the right code-words or pills. And on a neurophysiological level, such a statement reveals a major lack of understanding about how the human brain actually works.

Now that we’ve got a better grip on what the ten percent myth is, I’m going to explain two reasons why any claims of this nature are just untrue. I’m also going to explain why a perfect memory isn’t as great as it might sound. But I’m going to end on a hopeful note, by talking about how every connectome actually does contain untapped potential – for those willing to put in the effort.

So to start, let’s talk about some reasons why this ten percent idea makes no sense neurologically.

1. There’s no such thing as an “unused part” of the brain.
Brain scanning technologies like functional magnetic resonance imaging (fMRI) show that throughout an ordinary day, every structure in the brain is active in some kind of processing. Although various kinds of tasks are related to increased activity in certain areas of the brain, there are no areas that just sit around unused. And processing-heavy tasks like social interaction bring vast regions of the brain together at once in a synchronized symphony, often many times a day.

Now, this is not to say there’s no signal redundancy built into the nervous system – on the contrary, there’s quite a bit. Even so, every single neuron in your connectome plays a part in some function or other, on a daily basis. Well, in normal, healthy human brains, anyway – and that’s what I’m going to talk about next.

2. If a part of the brain does fall out of use temporarily, it gets reconnected.
There’s another important reason we know the ten percent myth can’t be true: the scientific literature is full of examples of patients who were temporarily using less than 100 percent of their brains – those with severe cerebral damage, or who were deprived of sensory stimulation during their developmental years – and scientists have watched as those brains either quickly recruited the unused neurons for other purposes, or allowed them to die off and free up space.

Neurons bein' all friendly.

You can imagine the neurons in your connectome as little social-butterfly cells, because they’re always looking for new connections to make – and gradually ignoring the ones they don’t talk to much. This process is called synaptic plasticity. In some cases, neurons can form whole new pathways when they’re craving some interaction.

This works because every neuron has a response threshold – a level of a certain neurotransmitter chemical it needs to receive in order to pass a signal on. When a neuron sits around unstimulated for a while, it eventually downregulates its own threshold, allowing it to receive signals from neurons that used to be too “quiet” or distant for it to sense. This means that even in those rare cases where some neurons aren’t receiving or responding to signals, they’ll start to look for new connections – and if they can’t find any, they’ll typically die. “Unemployed” neurons never stay unemployed for more than a few weeks, at most.
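If it helps to picture the downregulation idea, here’s a toy Python sketch of it. To be clear, this is not a real neuron model – every number in it is invented purely for illustration:

```python
# Toy model of threshold downregulation: a neuron that hears nothing above
# its threshold gradually lowers that threshold until a weak input can
# finally drive it. All values are invented for illustration only.

def steps_until_firing(threshold: float, input_level: float,
                       decay: float = 0.9, max_steps: int = 20) -> int:
    """Return the step at which the neuron first fires, or -1 if it never does."""
    for step in range(max_steps):
        if input_level >= threshold:
            return step          # the "quiet" signal finally gets through
        threshold *= decay       # unstimulated: downregulate the threshold
    return -1

print(steps_until_firing(threshold=1.0, input_level=0.5))  # 7
```

The pattern is the point: the longer the neuron goes without input above its threshold, the lower the bar drops, until formerly “quiet” neighbors can finally get a signal through.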

“OK,” you might say, “so every neuron in the brain has a use, and no neuron stays unstimulated for long. I’m with you so far. But what if our brains are so used to being ten-percent effective, they’ve learned to rewire themselves that way? What if those connections could be somehow… perfected?”

Well, that’s pretty close to the claim made in Limitless: that a near-perfect memory equates to super-intelligence. At first glance, this seems like it must be true – after all, wouldn’t we all be better at our jobs or classes if we never forgot anything we saw or heard? Not so much.

3. “Perfect” memory ain’t so perfect.
Actually, scientists have found that the reverse is more accurate: intelligence has less to do with the ability to remember, and more to do with the ability to distinguish between relevant and irrelevant details, and forget the irrelevant ones. If you read my post on magical mice, you’ll remember the story of the patient Sherashevsky:

Sherashevsky had such a perfect memory that he often struggled to forget irrelevant details. For instance, [he] was almost entirely unable to grasp metaphors, since his mind was so fixated on particulars.

He tried to read poetry, but the obstacles to his understanding were overwhelming. Each expression gave rise to a remembered image; this, in turn, would conflict with another image that had been evoked.

In short, a good memory is only as helpful as the filters applied to it. As a matter of fact, that’s how drugs like Adderall help memory and concentration – by helping the brain filter out irrelevant input and focus on what’s important to the task at hand. Without the ability to selectively forget, a “perfect” memory is as likely to cause total confusion as it is to offer up useful details.
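To stretch that filter metaphor into code (a toy sketch with made-up “memories” – nothing neurological about it), an unfiltered “perfect” recall hands you everything at once, while a filtered recall hands you only what matters:

```python
# Toy illustration of the "filter" idea: perfect recall returns every stored
# detail (mostly noise); filtered recall keeps only task-relevant items.
# The memories and their relevance flags are invented for this example.
memories = [
    ("meeting at 3pm", True),
    ("the barista wore a red scarf", False),
    ("quarterly report due Friday", True),
    ("song stuck in my head", False),
]

def recall(filtered: bool) -> list[str]:
    return [item for item, relevant in memories if relevant or not filtered]

print(len(recall(filtered=False)))  # 4 -- everything, useful and not
print(len(recall(filtered=True)))   # 2 -- just what matters right now
```

Sherashevsky, in this metaphor, was stuck with `filtered=False` all the time.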

“OK,” you might say, “but what if you had a super memory and took a drug like Adderall?”

Well, even that wouldn’t be much of a help if your mind wasn’t well-practiced at correlating and iterating your thoughts. If you’ve got a few minutes, check out the short story “Understand” by Ted Chiang. Its plot emphasizes that brilliance has less to do with physical synaptic connectivity (or perfect memory, which quickly becomes a burden), and much more to do with conceptual connectivity – the ability of a mind to form creative connections between abstract concepts – and iterative reasoning – the ability to view complex systems as elements within an even more complex system, which is itself an element within an even more complex system, and so on.

[SCIENCE UPDATE! #2 - The protagonist of Limitless actually does exhibit (or reference) most of these abilities throughout the course of the plot. He describes a clarity of purpose, and a level of correlative and sequential planning cognition, that seem... exhausting. Still, as with the protagonist of "Understand," synaptic ultraplasticity and manic energy seem to work out fine for Our Hero - for a few Ferris Bueller-esque weeks, anyway. Also, despite a (presumably) more precise grasp of his place within the universe as a whole, his new-found sense of purpose seems to direct him toward life goals that are distinctly, shall we say, American. I found myself wondering what Bertrand Russell, or Isaac Newton, or the Dalai Lama (or, hell, Muammar Qaddafi) might accomplish on these pills. In that sense, it's actually kind of a thought-provoking movie.]

In “Understand,” the patient gains both of these abilities through an experimental brain treatment that’s never explained in much detail. But the upshot is that both abilities can be learned and practiced by anyone. All it takes is dedication. It might come as a surprise to some people that intelligence can be improved with practice, but the fact is, it’s just another set of skills, like those that contribute to physical fitness, or mastery of a musical instrument. We might not all have the potential to be Einsteins – or Olympic athletes or award-winning composers – but any skill-set can be improved with practice.

So it is true that some people only realize ten percent of their connectome’s cognitive potential – just as many people only realize a fraction of the athletic or musical potential they could develop if they chose to put in years of practice.

On the other hand, it’s easy to see why the “ten percent of the brain” myth is so enduringly popular – it’s sort of like telling a crowd of people that they each have the potential to turn into Professor X, if they just… concentrate really hard, or something. As Dr. Barry Beyerstein puts it in this article, “It would be so darn nice if it were true.” But intelligence isn’t a box of magic that can just be opened with some secret key – it’s a reward that can only be earned through years of practice.

That may be a harder pill to swallow, but it’s the one that actually contains medicine.

* This is one of the reasons neural signals are more like “fuzzy” analog signals, like radio, than “on/off” digital signals. Another reason is that even when the threshold is reached, the neuron doesn’t automatically fire – it just becomes more likely to.
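To make that footnote concrete, here’s a minimal Python sketch of a “leaky” threshold neuron. It’s an illustration, not a biological model – the sigmoid curve and its steepness parameter are my own simplifying assumptions – but it captures the key point: crossing the threshold makes firing more *likely*, never guaranteed.

```python
import math
import random

def firing_probability(input_sum, threshold, steepness=1.0):
    """Map total synaptic input to a firing probability.

    Below the threshold, firing is unlikely; above it, firing becomes
    more likely -- a smooth sigmoid, not a hard on/off digital step.
    """
    return 1.0 / (1.0 + math.exp(-steepness * (input_sum - threshold)))

def neuron_fires(input_sum, threshold):
    """Simulate one firing 'decision', probabilistically."""
    return random.random() < firing_probability(input_sum, threshold)

# Weak input: low chance of firing. Strong input: high chance, but still a chance.
print(round(firing_probability(0.5, threshold=1.0), 3))  # → 0.378
print(round(firing_probability(3.0, threshold=1.0), 3))  # → 0.881
```

Run `neuron_fires(...)` many times with the same input and you’ll get different answers – which is the “fuzzy,” radio-like character of the signal.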

The Splort Hormone

At the end of my last post, I promised I’d explain more about inner dialogue, and get into some practical tips on self-programming. A draft of that write-up is almost finished [SCIENCE UPDATE! It's here.] but I came across an article today that brought up some intriguing points – and some common misconceptions – about neurochemistry. I couldn’t resist such a perfect opportunity to explain some concepts more clearly.

Eye (and nose) contact.

The article is mainly about the chemistry of eye contact, and…well, I’d better let the author speak for herself.

A loved one’s lingering look can trigger a rush of happiness, but too much eye contact with an acquaintance or a stranger can bring on sudden discomfort. How, exactly, does eye contact affect us, anyway?

Sounds pretty frickin’ fascinating so far, right? So let’s dive in. After a few introductory paragraphs, the author gets to the good part – the neuroscience. She explains that authentic expressions affect other individuals’ emotional responses differently than faked ones do – which is accurate, and backed up by some intriguing scientific research.

But the next bit was what made my Science Radar bleep a warning:

Oxytocin, also known as the “love” or “cuddle” hormone, plays a big part in [making our hearts flutter]. It’s a feel-good chemical that’s released when we feel bonded with someone, either emotionally or physically. The release is prompted by a warm hug, holding hands, falling in love, and so forth.

Well… that’s only sort of true. And sort-of-true statements often lead to confusion, which is why I want to explain the oxytocin situation as clearly as I know how.

Oxytocin is often called the “love hormone” or the “cuddle hormone.” That’s probably because it can be detected at raised levels in human blood plasma around the time of orgasm. If you Google oxytocin, most of the results will contain words like “love” or “cuddle.” You’ll also see a lot of articles asking, “Can oxytocin do [X]?” and “Does it do [Y]?” There’s plenty of speculation floating around – but the fact is, scientists are still unraveling the complex relationship between oxytocin and human emotions.

For example, oxytocin levels seem to rise during physical sexual arousal in women, and “spike” around the time of orgasm. But women’s bodies also release high levels of oxytocin during cervical dilation (i.e., the opening of the cervix) in labor, as well as when their nipples are physically stimulated for breastfeeding by an infant.

Meanwhile, in men’s bodies, oxytocin levels seem to just rise and then level off during sexual arousal. Some studies have found a mild spike around the time of orgasm, while others haven’t. Oddly enough, at least one study has found that oxytocin levels rise highest in men who stimulate themselves to orgasm. So we might just call oxytocin the “splort hormone” and be done with it – but (as usually happens with science) there’s a lot more to it than that.

A sexy, naked oxytocin molecule.

First of all, what exactly is oxytocin? It’s a polypeptide hormone (i.e., a hormone created when a string of amino acids joins together in a specific sequence). It’s synthesized in the hypothalamus of mammals and released from the posterior pituitary gland. In very general terms, oxytocin is related to changes in the contractile properties of reproductive tissue. Some studies seem to show that oxytocin induces or promotes those changes; other scientists think its presence is just a reflection that they’re happening.
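Since “a string of amino acids” can sound abstract, here’s a tiny Python sketch spelling out oxytocin’s actual structure. It’s a nonapeptide – just nine residues – with a disulfide bond between its two cysteines closing the molecule into a ring:

```python
# Oxytocin's nine amino acids, in order. A disulfide bond links the
# cysteines at positions 1 and 6 into a ring, and the terminal glycine
# is amidated (Gly-NH2).
OXYTOCIN = ["Cys", "Tyr", "Ile", "Gln", "Asn", "Cys", "Pro", "Leu", "Gly"]

def cysteine_positions(peptide):
    """Return the 1-based positions of the cysteines forming the disulfide bridge."""
    return [i + 1 for i, residue in enumerate(peptide) if residue == "Cys"]

print(len(OXYTOCIN))                 # → 9
print(cysteine_positions(OXYTOCIN))  # → [1, 6]
```

Nine residues is tiny as proteins go – which is part of why this one hormone can slip into so many different receptor-mediated roles.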

But even that’s just the tip of the oxytocin iceberg. Over the past few years, scientists have learned a lot about this hormone by studying some of the effects it can produce.

In mice, oxytocin in taste buds has been shown to inhibit the desire to keep eating. In the hypothalamus, it helps rodents time their birth cycles. In rats, a raised oxytocin level in the hippocampus decreases responsiveness to stress, and allows wounds to heal more quickly. In humans, it’s been shown to increase generosity toward strangers. Then again, it also makes people racist:

The love and trust oxytocin promotes are not toward the world in general, just toward a person’s in-group. It turns out to be the hormone of the clan, not of universal brotherhood.

Here’s where we finally come around to those ideas about oxytocin being a “cuddle hormone.” Oxytocin in blood plasma has been shown to spike when a person receives a friendly hug, or even an extended gaze. But whether the hormone rises because we’re feeling loved, or whether it’s just because we’re responding emotionally to the behavior of other members of our species, is a question that’s still open.

So, like I said: sort of true.

To be honest, I’m glad I found that article, and I hope others do too – anything that helps get people excited about neurophysiology is awesome as far as I’m concerned. But I like to present things clearly, with plenty of specifics. I think that, as a journalist, if I can’t sit down and describe any given detail of my subject clearly and succinctly, I haven’t done enough research, and it’s time to hit the books again.

That’s just one guy’s opinion, obviously. But it’s the way my connectome is wired.
