
Why I Love and Hate “Game”

Yes, it’s that special time of year again – time for flamboyant bouquets and chalky candy to appear at office desks – time for Facebook pages to drown in cloying iconography – time for self-labeled “forever aloners” to dredge the back alleys of OKCupid in last-ditch desperation – and time for me to load up my trusty gatling crossbow with oxytocin-tipped darts and hit the streets.

Valentine's Day also means it's time to enjoy the traditional dish of Earlobe.

Oh, and it’s time for everyone to complain about how misogynistic all this “Game” stuff is.

So, while I guess I could write about, say, a new study that says cutting your romantic partner some slack can make him or her more capable of actual change, or this one that says love and chocolate are good for cardiovascular health, I think it’ll be much more interesting to talk about what’s really on most of our minds today:

What does science have to say about “getting the girl” (or guy) of your dreams? And what do actual girls (and guys) think about it?

Let’s start with some full disclosure: about this time last year, I decided to see what all the fuss was about, and I read The Game for myself – and then I read some of the other works it cites, too. And I started talking to my friends (both male and female) about what they thought of the ideas in those books – and I tested a lot of the ideas I read, the same way I’d test any hypothesis: I wrote down the predictions various authors made, and checked how well those predictions lined up with my own real-world experiences.

In short, I went Full Geek on the topic.

What I learned is that, on the spectrum of scientific rigor – a scale from, say, astrology (0) to molecular chemistry (10) – most of this stuff falls somewhere in the 4-to-6 range: it tends to be more evidence-based than, say, ghost-hunting, but it still falls firmly into the realm of the “softer” sciences, like psychotherapy and so on.

The reason for this is that – as many pick-up artists freely admit – their craft is at least as much an artistic pursuit as a scientific one. Much like, say, Aristotle and Hobbes and Descartes, PUAs do their best to ground their conclusions logically in real-world data that anyone is free to test and refute – but at the same time, like those great philosophers of old, PUAs tend to be more intent on constructing elaborate thought systems than on presenting their “ugly” raw data for independent labs to crunch through.

This means pick-up manuals tend to read more like philosophical treatises than scientific papers.

And I think it’s this very feature of pick-up art that explains why it’s such a polarizing topic – why many women (and plenty of men) find the very concept insulting and distasteful, while other men swear that it’s transformed them from self-loathing losers into sexually fulfilled alpha males.

See, many women will tell you in no uncertain terms that pickup “tricks” don’t work on someone as intelligent and experienced as them; and that even if such tricks did work, they don’t want to be “picked up” – instead, they want to fall in love (or at least in lust) with a man who’s honest about his real self and his real feelings. Many men, too, would agree that crafty seduction techniques somehow cheapen the process – that it’s better to be “forever alone” than to be surrounded by adoring women who were manipulated into their romantic feelings.

Meanwhile, men who’ve had “success” (however they choose to define it) as a result of a pick-up system’s techniques will often defend that system to the death – much like how a person who’s found inner peace thanks to, say, Buddhism will often defend it passionately against anti-Buddhist viewpoints.

What I’m arguing here, though, is that none of these reactions pertain directly to the underlying process of seduction at all – rather, they’re reactions to the (often sleazy-sounding) thought-systems that various writers have constructed around their experiences with that process.

Because – let’s get right down to it – in all our interactions with other humans, we’re hoping to manipulate the outcome somehow. Double entendres, pop-cultural references, stylish clothes and makeup, kind gestures, subtle dishonesty – even honesty itself – all these are tools and techniques that we hope will garner us a certain response.

For example, if you choose to callously manipulate the people around you, you may get a lot more sex than you would otherwise - but you’ll also end up with a lot of shallow relationships, which you’ll probably come to regret eventually. If you choose to be completely honest all the time, you may repel some people – but you’ll probably also find that those who stick around end up respecting you for who you really are.

It’s Game Theory 101: Players who “win” are those who understand the rules, risks and rewards of the game - and play accordingly. All the sleazy lingo and tricks – all the elaborate systems – are just various people’s attempts to explain these dynamics as they play out in gender relations, and to sell their vision of the process to a demographic of sex-starved men, whose desires they understand quite well.

But still – the underlying process itself is no more and no less sleazy than the mind of the person using it.

In other words, when you read between the lines of these PUA systems, most of them turn out to be geared toward the same premises: That to grow as a person, you need to 1) be fully honest with yourself about what you want from the people around you, 2) acknowledge the personal changes that need to be made in order to achieve those results, and 3) steadily work to make those changes in yourself.

From an evolutionary psychology perspective, it’s hard for me to see how that’s inherently more “cheap” than, say, a woman learning how to dress and speak seductively in order to get what she wants.

Yes, there are a lot of sleazy men out there who objectify women and sweet-talk them into one-night stands. There are also plenty of sweet-talking women out there who milk men for the contents of their wallets, then move on. And so we label each other “douchebags” and “bitches,” and keep engaging in the same defensive behaviors, and no one’s really happy.

And I hate that Game. I despise it.

At the same time, though, it’s clear that we humans, like many other animals, have evolved to play competitive social games – there’s no getting around that fact. But unlike many animals, we don’t have to play the game exactly as our instincts tell us to – we’re metacognitive, so we can learn to play using strategies that don’t result in zero-sum outcomes: We can develop tactics that help both sides get more of what they want. We can harness our evolutionary drives to mutually-beneficial behavior patterns.
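To make the zero-sum/non-zero-sum distinction concrete, here’s a toy pair of payoff matrices in Python – every number below is invented purely for illustration, not taken from any study:

```python
# Toy illustration of zero-sum vs. non-zero-sum social games.
# All payoff numbers are made up for demonstration purposes.

# Zero-sum: whatever one player wins, the other loses.
zero_sum = {
    ("compete", "compete"): (0, 0),
    ("compete", "yield"):   (3, -3),
    ("yield", "compete"):   (-3, 3),
    ("yield", "yield"):     (0, 0),
}

# Non-zero-sum: cooperation can leave BOTH players better off.
non_zero_sum = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (-1, 2),
    ("defect", "cooperate"):    (2, -1),
    ("defect", "defect"):       (0, 0),
}

def total_payoff(game, moves):
    """Combined payoff for both players for a given pair of moves."""
    a, b = game[moves]
    return a + b

# In the zero-sum game, the pie never grows...
assert all(sum(payoffs) == 0 for payoffs in zero_sum.values())

# ...but in the non-zero-sum game, mutual cooperation grows it.
print(total_payoff(non_zero_sum, ("cooperate", "cooperate")))  # 6
print(total_payoff(non_zero_sum, ("defect", "defect")))        # 0
```

The point of the sketch: in the second game, “playing creatively” means steering both players toward the (cooperate, cooperate) cell, where the total payoff is larger than anything mutual defection can produce.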

Doesn’t that make you want to learn to play more creatively, instead of trying not to play at all?

I mean, at the end of the day, it kinda fills me with love for the Game.

What do you think?

Sacred Values

Principles on which we refuse to change our stance are processed via separate neural pathways from those we’re more flexible on, says a new study.

Some of our values can be more flexible than others...

Our minds process many decisions in moral “gray areas” by weighing the risks and rewards involved – so if the risk is lessened or the reward increased, we’re sometimes willing to change our stance. However, some of our moral stances are tied to much more primal feelings – “gut reactions” that remind us of our most iron-clad principles: don’t hurt innocent children, don’t steal from the elderly, and so on.

These fundamental values – what the study calls “sacred” values (whether they’re inspired by religious views or not) – are processed heavily by the left temporoparietal junction (TPJ), which is involved in imagining others’ minds; and by the left ventrolateral prefrontal cortex (vlPFC), which is important for remembering rules. When especially strong sacred values are called into question, the amygdala – an ancient brain region crucial for processing negative “gut” reactions like disgust and fear – also shows high levels of activation.

These results provide some intriguing new wrinkles to age-old debates about how the human mind processes the concepts of right and wrong. See, in many ancient religions (and some modern ones) rightness and wrongness are believed to be self-evident rules, or declarations passed down from on high. Even schools that emphasized independent rational thought – such as Pythagoreanism in Greece and Buddhism in Asia – still had a tendency to codify their moral doctrines into lists of rules and precepts.

But as scientists and philosophers like Jeremy Bentham and David Hume began to turn more analytical eyes on these concepts, it became clear that exceptions could be found for many “absolute” moral principles – and that our decisions about rightness and wrongness are often based on our personal emotions about specific situations.

The epic battle between moral absolutism and moral relativism is still in full swing today. The absolutist arguments essentially boil down to the claim that without some bedrock set of unshakable rules, it’s impossible to know for certain whether any of our actions are right or wrong. The relativists, on the other hand, claim that without some room for practical exceptions, no moral system is adaptable enough for the complex realities of this universe.

But now, as the journal Philosophical Transactions of the Royal Society B: Biological Sciences reports, a team led by Emory University’s Gregory Berns has analysed moral decision-making from a neuroscientific perspective – and found that our minds rely on rule-based ethics in some situations, and practical ethics in others.

The team used fMRI scans to study patterns of brain activity in 32 volunteers as the subjects responded “yes” or “no” to various statements, ranging from the mundane (e.g., “You are a tea drinker”) to the incendiary (e.g., “You are pro-life”).

At the end of the questionnaire, the volunteers were offered the option of changing their stances for cash rewards. As you can imagine, many people had no problem changing their stance on, say, tea drinking for a cash reward. But when they were pressed to change their stances on hot-button issues, something very different happened in their brains:

We found that values that people refused to sell (sacred values) were associated with increased activity in the left temporoparietal junction and ventrolateral prefrontal cortex, regions previously associated with semantic rule retrieval.

In other words, people have learned to process certain moral decisions by bypassing their risk/reward pathways and directly retrieving stored “hard and fast” rules.

This suggests that sacred values affect behaviour through the retrieval and processing of deontic rules and not through a utilitarian evaluation of costs and benefits.

Of course, this makes it much easier to understand why “there’s no reasoning” with some people about certain issues – because it wasn’t reason that brought them to their stance in the first place. You might as well try to argue a person out of feeling hungry.
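In code terms, the finding reads like a branch taken before any cost-benefit computation even starts. Here’s a cartoon sketch of that dual-pathway idea – the rules and payoff numbers are invented for illustration, not the study’s actual model:

```python
# Cartoon of the study's finding: "sacred" values are handled by direct
# rule retrieval, not by weighing costs against benefits.
# The rules and numbers here are invented for illustration only.

SACRED_RULES = {
    "harm_a_child": "refuse",
    "steal_from_elderly": "refuse",
}

def decide(action, benefit, cost):
    # Deontic pathway: if a stored rule covers the action, retrieve it
    # directly -- no amount of offered reward changes the answer.
    if action in SACRED_RULES:
        return SACRED_RULES[action]
    # Utilitarian pathway: ordinary gray-area choices weigh risk vs. reward.
    return "accept" if benefit > cost else "refuse"

print(decide("drink_tea", benefit=10, cost=1))        # "accept"
print(decide("harm_a_child", benefit=10**6, cost=1))  # "refuse"
```

Notice that raising `benefit` sways the second pathway but never the first – which is the cartoon version of why offering cash changed people’s stance on tea but not on their sacred values.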

That doesn’t mean, though, that there’s no hope for intelligent discourse about “sacred” topics – what it does mean is that instead of trying to change people’s stances on them through logical argument, we need to work to understand why these values are sacred to them.

For example, the necessity of slavery was considered a sacred value all across the world for thousands of years - but today slavery is illegal (and considered morally heinous) in almost every country on earth. What changed? Quite a few things, actually – industrialization made hard manual labor less necessary for daily survival; overseas slaving expeditions became less profitable; the idea of racial equality became more popular…the list could go on and on, but it all boils down to a central concept: over time, the needs slavery had been meeting were addressed in modern, creative ways – until at last, most people felt better not owning slaves than owning them.

My point is, if we want to make moral progress, we’ve got to start by putting ourselves in the other side’s shoes – and perhaps taking a more thoughtful look at our own sacred values while we’re at it.

Stress Intervention

Scientists have discovered a way to shut down the brain’s “stress process” before it gets going, says a new study.

Stress, or just a very acute case of the munchies? It's hard to say.

By blocking the brain’s ability to manufacture certain chemicals called neurosteroids, researchers have managed to temporarily cut off a biological process crucial for stressful behavior – and for many stressful feelings as well.

Animals from amphibians all the way up to humans produce a hormone called corticosterone in their adrenal glands. Corticosterone levels become elevated under stress, and this hormone is a major ingredient in a number of stress-related biological processes, from feelings of nervousness to aggressive behavior.

Corticosterone does most of its direct work within a brain pathway known as the hypothalamic-pituitary-adrenal axis (also called the HPA or HTPA axis). To be honest, the word “pathway” is a bit of an oversimplification – the HPA is actually a whole set of neurochemical feedback circuits involved in regulating digestion, immune response, and mood, among other things.

The HPA’s activity is mostly regulated by a neurotransmitter chemical called gamma-Aminobutyric acid (GABA to its friends). GABA is typically an inhibitory neurotransmitter, which means it prevents electrochemical signals from being passed beyond a certain point. It often works closely with a neurosteroid called tetrahydrodeoxycorticosterone (THDOC), which helps its inhibitory effects spread even more widely throughout the HPA.

But when we come under stress, everything changes: the adrenal glands start cranking out extra-large doses of THDOC and sending them up into the HPA. And here’s where things get weird – those conditions trigger a certain electrochemical shift that causes GABA and THDOC to activate the HPA rather than inhibit it.

As the Journal of Neuroscience reports, the discovery of that neurochemical mechanism is the first half of a two-part breakthrough made by Jamie Maguire’s team at Tufts University:

We have identified a novel mechanism regulating the body’s response to stress by determining that neurosteroids are required to mount the physiological response to stress.

But how did they discover this mechanism, you ask? Well, since the team suspected that neurosteroidogenesis – the production of neurosteroids like THDOC – was a crucial component in stress-related HPA activation, they got a bright idea: they wondered if a drug that blocked neurosteroidogenesis might be able to stop the brain’s stress response before it could even get into gear.

As it turned out, they were right – they cut off the THDOC rush by administering a drug called finasteride – which you might’ve heard of under the brand name Propecia. Yep, the baldness drug:

Blocking neurosteroidogenesis with finasteride is sufficient to block the stress-induced elevations in corticosterone and prevent stress-induced anxiety-like behaviors in mice.

In other words, the researchers found that finasteride does more than just control stress – it blocks the chemical cascade that causes stress-related feelings and behavior. As far as they can tell, it prevents animals from experiencing stress at all - at least temporarily.

This has the potential to develop into a far more powerful treatment than benzodiazepines like Xanax and Ativan, which work by helping GABA inhibit more activity than it normally would. By contrast, a finasteride-like drug would make it almost impossible to feel stressed, even if you tried – meaning this drug might also be used to treat diseases like epilepsy and major depression, which have been linked to excessive activation of the HPA.

Right now, Maguire’s team is focused on isolating more of the exact neural connections that play roles in disorders like these. That means it may be a few years before this “wonder drug” becomes available. In the meantime, I wouldn’t recommend swallowing handfuls of Propecia when you’re feeling stressed – the drug needs to be applied in a pretty targeted way to make this work, which means a major part of pharmaceutical development will be the creation of an effective chemical delivery system.

Even so, it’s exciting to think that before long, depression and anxiety may be as easy to prevent as, say, polio and malaria are today. The thought’s enough to get my hormones pumping, anyway.

Harry Potter and the Nature of the Self

Hooray for Google Image Search!

Yup, this is what we’re doing today. I finally got to see Deathly Hallows Part 2, and it got me thinking about neuroscience like frickin’ everything always does, and I came home and wrote an essay about the nature of consciousness in the Harry Potter universe.

And we’re going to talk about it, because it’s the holidays and can we please just pull it together and act like a normal family for the length of one blog post? Thank you. I really mean it. Besides, I guarantee you that this stuff is gonna bug you too once I’ve brought it up.

So in the movie, there’s this concept of Harry and Voldemort sharing minds; mental resources – each of them can occasionally see what the other one sees; sometimes even remember what the other one remembers.

That idea is not explored to anywhere near a respectable modicum of its full extent.

First of all, are these guys the only two wizards in history who this has happened to? Yeah, I’m sure the mythology already has an answer for this – one that I will devote hours to researching just as soon as that grant money comes through. Ahem. Anyway, the odds are overwhelming that at least some other wizards have been joined in mental pairs previously – I mean, these are guys who can store their subjective memories in pools of water to be re-experienced at will; you can’t tell me nobody’s ever experimented; bathed in another person’s memories; tried to become someone else, or be two people at once. Someone, at some point, must’ve pulled it off. Probably more than one someone.

OK, so there’ve been a few pairs of wizards who shared each other’s minds. Cool. Well, if two works fine, why not three? Hell, why not twelve, or a thousand? With enough know-how and the right set of minds to work with, the wizarding world could whip us up a Magic Consciousness Singularity by next Tuesday.

But there’s the rub: Who all should be included in this great meeting of the minds? Can centaurs and house-elves join? What about, say, dragons, or deer, or birds? Where exactly is the cutoff, where the contents of one mind are no longer useful or comprehensible to another? As a matter of fact, given the – ah – not-infrequent occurrence of miscommunication in our own societies, I’d say it’s pretty remarkable that this kind of mental communion is even possible between two individuals of the same species.

Which brings us to an intriguing wrinkle in the endless debate about qualia – those mental qualities like the “redness” of red, or the “painfulness” of pain, which are only describable in terms of other subjective experiences. Up until now, of course, it’s been impossible to prove whether Harry’s qualia for, say, redness are exactly the same as Voldemort’s – or to explain just how the concept of “exactly the same” would even apply in this particular scenario. But now Harry can magically see through Voldemort’s eyes; feel Voldemort’s feelings – he can experience Voldemort’s qualia for himself.

Ah, but can he, really? I mean, wouldn’t Harry still be experiencing Voldemort’s qualia through his own qualia? Like I said, this is a pretty intriguing wrinkle.

The more fundamental question, though, is this: What does all this tell us about the concept of the Self in Wizard Metaphysics? (It’s capitalized because it’s magical.) Do Harry and Voldemort together constitute a single person? A single self? Is there a difference between those two concepts? Should there be?

I don’t ask these questions idly – in fact, here’s a much more pointed query: What do we rely on when we ask ourselves who we are? A: Memories, of course; and our thoughts and feelings about those memories. Now, if some of Harry’s thoughts and feelings and memories are of things he experienced while “in” Voldemort’s mind (whatever that means) then don’t some of Voldemort’s thoughts and feelings and memories comprise a portion of Harry’s? You can see where we run into problems.

Just one last question, and then I promise I’ll let this drop. When you read about Harry’s and Voldemort’s thoughts and feelings and memories, and you experience them for yourself, what does that say about what your Self is made of?

I’ll be back next week to talk about neurons and stuff.

Chemical Parasites

A certain brain parasite actually turns off people’s feelings of fear by increasing levels of the neurotransmitter chemical dopamine, says a new study.

T. gondii, gettin' ready to blow your %@&#$ mind.

Toxoplasma gondii, a parasitic protozoan (a kind of single-celled organism), mostly likes to live in the brains of cats - but it also infects birds, mice, and about 10 to 20 percent of people in the U.S. and U.K. This might sound like science fiction, but plenty of microbiologists will assure you it’s very real.

In fact, T. gondii isn’t the only parasite that controls its hosts’ behavior – a fungus called Ophiocordyceps unilateralis makes infected ants climb to the highest point they can find, sprout fungal spore pods from their heads, then stay there and starve to death; at which point the spores are unleashed to recruit more ants for the fungus’s zombie army. Other microbes force spiders to weave cocoons for them, or make roaches lie immobile while larvae grow inside their bodies, then chew their way out. Um, yeah, so… nature is pretty frickin’ hardcore.

Anyway, back to the parasite at hand. Throughout the past few years, a University of Leeds microbiologist named Glenn McConkey has worked at the forefront of T. gondii research – in 2009, his team made the astonishing discovery that the microbe’s genome encodes instructions for producing dopamine: in essence, this bug is living cocaine, and it’s bending the minds of millions of people at this very moment.

And now, as the journal PLoS ONE reports, McConkey’s team has made a breakthrough that is, if anything, even more incredible: once the parasite has taken up residence in a brain, it triggers the production and release of dopamine at a much greater level than normal, causing infected animals (including people) to engage in impulsive, compulsive and/or fearless behavior:

In this study, infection of mammalian dopaminergic cells with T. gondii enhanced the levels of K+-induced release of dopamine several-fold, with a direct correlation between the number of infected cells and the quantity of dopamine released … Based on these analyses, T. gondii orchestrates a significant increase in dopamine metabolism in neural cells.

In short, by changing the electrochemical properties of dopaminergic neurons (those that deal with dopamine transmission and reception), T. gondii basically causes its host’s brain to shout “I’m awesome!” ceaselessly at top volume. You can imagine the havoc this wreaks.

If the host is, say, a mouse or a bird, impulsive and fearless behavior will typically get it gobbled up by a predator, which allows the parasite to move into a new host and spawn a new generation. But if the host happens to be a human being – well, there’s no telling what might happen. For one thing, studies have found a strong link between T. gondii infection and schizophrenia.

Thanks to Science, though, there’s hope – McConkey’s team is optimistic that these new results will help doctors diagnose T. gondii infections more quickly and accurately, and perhaps use dopamine antagonists – drugs that block dopaminergic activity – to fight some of the psychotic symptoms these crazy little guys cause.

So, I guess one big question remains: why the hell isn’t this story making front-page news? Your guess is as good as mine. Kinda spooky, isn’t it?

Digital Friendships

Those of us who have loads of Facebook friends tend to have greater development in several specific brain regions, says a new study.

"Guys, guys - I just felt my entorhinal cortex triple in size!"

Researchers have found a strong correlation between large numbers of Facebook connections and increased development of gray matter – tissue containing neuron cell bodies, where dense communication occurs – in several regions crucial for social interaction: the amygdala, the right superior temporal sulcus (STS), the left middle temporal gyrus (MTG), and the right entorhinal cortex (EC).

Intriguingly, the size of some of these regions seems to correlate only with the size of people’s online social networks – not their real-world ones. It’s not clear yet, though, which factor is cause and which is effect – whether increased development in these regions enables people to develop larger online social networks, or vice versa. Even so, this is one of the first studies to directly link neuroanatomical data with online behavior.

As the journal Proceedings of the Royal Society B reports, a team led by Geraint Rees at University College London (UCL) performed MRI scans of the brains of 125 university students, and correlated this data with information on the size of these students’ Facebook friend groups:

The number of social contacts declared publicly on a major web-based social networking site was strongly associated with the structure of focal regions of the human brain. Specifically, we found that variation in the number of friends on Facebook strongly and significantly predicted grey matter volume in left MTG, right STS and right entorhinal cortex.

The exact links between these regions and online communication remain to be studied – but many of them have been correlated with social interaction in other studies.

The amygdala helps us process negative emotions like fear and sadness, both in ourselves and others – and people with larger amygdalas tend to have larger social networks overall, both online and otherwise.

Though no studies so far have correlated the size of the right STS with social network size, this structure is known to be involved in our ability to think of some objects as alive, as well as in helping us understand what others are looking at, and what emotions they’re expressing. A malfunctioning STS is thought to be a major factor in autism.

The exact role of the left MTG isn’t precisely understood, but many neuroscientists think this structure is involved in our ability to recognize familiar faces, and to process the semantic associations (i.e., meanings) of words.

The right EC works closely with the hippocampus to help us form and consolidate explicit/declarative memories – i.e., memories of specific facts and events, and specific associations between them (e.g., “kitties and bunnies are both mammals”). The EC is also one of the first areas attacked by Alzheimer’s. The researchers found that this region correlated especially strongly with online friend group size, but not particularly with real-world friend group size:

The right entorhinal cortex is implicated in associative memory formation for pairs of items including pairs of names and faces. Such memory capacity for name–face associates would constitute an important function for maintaining a large social network as observed in social network websites.

In short, the ability to mentally “tag” photos and posts with the correct associations is a central skill for maintaining digital friendships.

It’s easy to see how all the brain regions above could play central roles in a person’s ability to maintain a wide-ranging network of online friends. What’s especially interesting, though, is that all of them deal with features of social interaction that port from the real world to the online interaction space in straightforward ways - social hierarchies, facial expressions, repeatable facts, and so on – but that many other vital aspects of real-world social interaction – such as body language, and tone of voice – don’t appear to be nearly as crucial in an online social network. It’s enough to make you wonder what natural selection may have in store for our brains.

I don’t know about you, but I’m picturing a future where men compete in flame-wars for the right to woo attractive females. So while you, my competitors, are out hitting the bars and clubs this weekend, I’ll be – ah – honing my skills on 4chan and Reddit. Which is basically what I’d be doing this weekend anyway.

Modified Memories

Each time we retell a story, our actual memories of its events change, says a new study.

"I don't remember being an alcoholic...but I guess if you guys say so..."

When we receive hints – true or not-so-true – about a story’s details from our friends, we often revise our version if what they say makes sense to us. But what’s incredible is, it isn’t just our retelling of the story that changes – fMRI scans show that our brains actually rewrite our memories, and we end up remembering the new version as “what really happened.”

To understand how this can work at a neurological level, a team led by Micah Edelson at Israel’s Weizmann Institute of Science gathered thirty adult volunteers, split them into groups of five, and showed those groups a short film about police arresting people. Three days later, the volunteers returned to the lab and answered a questionnaire that tested their memory of the film; and four days after that, they returned and lay in an fMRI scanner while answering more questions.

For the second questionnaire, though, the volunteers got to see “lifeline” answers, which were supposedly taken from correct responses by other participants. What the subjects didn’t know, though, was that these lifeline answers were actually incorrect answers to questions they themselves had answered correctly and confidently on the first questionnaire.

How often would you guess they revised their answers? As it turned out, nearly 70 percent of the time:

Our behavioral data revealed that our manipulation induced memory errors. Strikingly, participants conformed to the majority opinion in 68.3 ± 2.9% of manipulation trials, giving a false answer to questions they had previously answered correctly with relatively high confidence.
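As an aside, a “mean ± SEM” figure like that 68.3 ± 2.9% is simple to compute: it’s the average conformity rate across participants, plus or minus the standard error of that average. Here’s the arithmetic with made-up per-participant rates (not the study’s actual data):

```python
# How a "mean ± SEM" figure is computed.
# These per-participant conformity rates are synthetic, for illustration only.
import statistics

# Fraction of manipulation trials on which each participant conformed.
rates = [0.55, 0.72, 0.81, 0.60, 0.69]

mean = statistics.mean(rates)
# Standard error of the mean = sample standard deviation / sqrt(n).
sem = statistics.stdev(rates) / len(rates) ** 0.5

print(f"{mean * 100:.1f} ± {sem * 100:.1f}%")  # prints "67.4 ± 4.6%"
```

The SEM shrinks as the number of participants grows, which is why a figure like ±2.9% signals a fairly consistent effect across the group rather than a few extreme conformers.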

But the researchers wanted to know if these changes reflected something deeper than a willingness to buckle to peer pressure – so they performed two more tests to check.

First, the researchers brought the subjects back into the lab a fourth time, told them the incorrect lifelines were a mix of truths and falsehoods that had been generated at random by a computer, and asked if the volunteers would like to change back to their original answers. As the subjects checked their false answers against their memories, 40 percent chose to remain with the incorrect answers to questions they’d originally answered correctly – even when they’d been sure of their correct answers to begin with.

And finally, by correlating the subjects’ answers with their fMRI data, the researchers noticed an intriguing phenomenon: as the volunteers were changing their answers from correct ones to incorrect ones, their brains showed a strong co-activation between the hippocampus – a structure known to be involved in the consolidation of long-term memories from short-term ones – and the amygdala – a structure crucial for processing strong negative emotions, such as fear, embarrassment, and sadness:

Enhanced activation in the bilateral amygdala and heightened functional connectivity with the anterior hippocampus were a signature of longterm memory change induced by the social environment. This indicates that the incorporation of external social information into memory may involve the amygdala’s intercedence, in accordance with its special position at the crossroads of social cognition and memory.

In short, it seems that as the amygdala processes scary feelings of social pressure, it signals the hippocampus to wipe and rewrite our long-term memories to fit the socially agreed-upon version of events.

You don’t need me to tell you how huge the implications of this are. How many of your own long-term memories do you think coincide with what actually happened? How much do the stories you tell yourself and others about your past shape your and their actions in the present? In a court of law, would you stake your future on the testimony of an eyewitness?

It’s hard not to be reminded of Alan Moore’s oft-repeated comments to the effect that art – in his case, writing in particular – is magic: it reshapes people’s thoughts and memories, which reshape their perceptions of the past, present, and future – and those perceptions, in turn, reshape reality itself. It’s pretty amazing to think that you have such abilities.

But what’s really going to bake your noodle later on is, Would your brain still be doing all this if I hadn’t told you this story?

The Sound of Fear

A certain inaudible sound frequency may directly trigger feelings of “creepiness” and physical symptoms of fear, one scientist says.

Don't look now, but I think I see a g-g-g-gh-gh-sound wave!

A sound frequency of around 19 Hz – just below the range of human hearing – has been detected in several “haunted” places, including a laboratory where staff had reported inexplicable feelings of panic, and a pub cellar where many people have claimed to see ghosts.

Though no peer-reviewed studies have examined this phenomenon yet, I think it’s still intriguing enough to be worth talking about – and after all, it is that special time of year. So huddle up close, and let me tell you a tale – the tale of… The Frequency of Fear!

Back in the 1980s, an engineer named Vic Tandy began hearing strange stories from his otherwise scientifically minded coworkers: whenever they spent time working in a certain laboratory, they’d experience inexplicable feelings of unease, and glimpses of ghostly apparitions.

At first, Tandy chalked these reports up to stress, or to the irritating wheeze of life-support machines that permeated the building. But one foreboding night, as Tandy toiled alone in the lab, he suddenly broke into a cold sweat, and felt the hairs on his neck stand up. He was overcome with the feeling that he was being watched. From the corner of his eye, he glimpsed a sinister gray form moving toward him – but when he turned to face it, it vanished. Tandy fled the lab for the safety of his home, his keen scientific mind churning, asking what could have triggered this bizarre episode.

The next day, Tandy happened to catch sight of a clue: in the lab, he noticed that a foil blade clamped in a vice was vibrating at a rapid rate. Fetching his trusty frequency meter, he discovered that the sound wave behind these vibrations was bouncing off the walls of the lab, and that its peak intensity was focused in the room’s center. Its frequency was 19 Hz – slightly below the minimum human-audible frequency of 20 Hz, but easy for a human body to feel as a subtle vibration.
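For the curious, here’s a minimal sketch of what a 19 Hz tone looks like as digital audio. (The 44,100 Hz sample rate and one-second duration are standard-but-arbitrary choices of mine, not details from Tandy’s account.)

```python
import math

SAMPLE_RATE = 44100        # samples per second, a standard audio rate
FREQ_HZ = 19.0             # the frequency Tandy measured
HEARING_FLOOR_HZ = 20.0    # conventional lower limit of human hearing

def sine_wave(freq_hz, duration_s, sample_rate=SAMPLE_RATE):
    """Generate a pure sine tone as a list of floats in [-1, 1]."""
    n_samples = int(duration_s * sample_rate)
    return [math.sin(2 * math.pi * freq_hz * t / sample_rate)
            for t in range(n_samples)]

wave = sine_wave(FREQ_HZ, duration_s=1.0)
print(FREQ_HZ < HEARING_FLOOR_HZ)    # True: below the audible floor
print(len(wave))                     # 44100 samples in one second
print(round(SAMPLE_RATE / FREQ_HZ))  # ~2321 samples per full cycle
```

The point of the arithmetic: at 19 cycles per second, each oscillation is slow enough that the ear can’t resolve it as a tone, yet the pressure wave is still very much physically present.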

Tandy began to delve into ancient forbidden texts (OK, actually he started reading biology papers) and learned that frequencies near this range can cause animals to behave nervously, hyperventilate, stumble dizzily, and even have trouble seeing clearly.

It’s likely that these animals’ sensitivity to these vibrations evolved as an early-warning system for earthquakes, tsunamis and related disasters, and may explain why animals flee the sites of these disasters en masse long before humans suspect anything’s the matter.

Over the years, subsequent investigations have found that similar frequencies occur in other reputedly haunted spots, which seems to indicate that we humans may be sensitive to these frequencies as well.

If you ask me, though, the scariest part of this story is that as you read this, scientists with less noble purposes could potentially be developing devices to project these frequencies directly into a target’s body. Not to be paranoid here, but I’m not too keen on the idea of a fear ray. Just putting that out there.

On the whole, I think right now is a pretty awesome time to be alive – we’ve got mind-controlled computers, we’ll soon be able to record videos of our thoughts and dreams, and it won’t be long before we can see, hear and even touch virtual worlds. But we’ve also learned that magnetic stimulation can make people want to lie, that electrical stimulation can alter our decision-making processes, and that sound waves can make us feel pain and fear.

We’re on the brink of an unprecedented epoch in human history, when miracle-working may quite literally lie within any person’s grasp – but with that power also comes the potential to create truly unimaginable hells at the push of a button. All I can say is, I hope with all my might that our better nature wins out.

Because, I don’t know about you guys, but I can hardly wait to see what the future holds.

Gimme Some Sugar

Hunger weakens the ability of regulatory brain areas to put the brakes on reward-oriented ones, a new study has found.

Possession by an insectoid alien, or just hypoglycemia? You decide!

When our brains have enough glucose to go around, a region called the medial prefrontal cortex (mPFC) helps regulate our emotions and hold our attention in a particular spot. But brain scans show that when glucose levels drop, the mPFC loses its control over areas involved in feelings of hunger and reward.

We all know what it’s like to feel cranky from hunger, or to experience a mid-afternoon energy crash – but scientists are only now beginning to fit together the neurological pieces that compose processes like these. As it turns out, hunger and emotion are even more closely interrelated than anyone expected.

As the Journal of Clinical Investigation reports, a team led by Yale’s Rajita Sinha and USC’s Kathleen Page monitored the blood glucose levels of volunteers sitting in fMRI scanners, while showing them photos of high-calorie food, low-calorie food and non-food items. By correlating brain responses to each picture with each subject’s glucose levels at that moment, the researchers found some intriguing changes in the brain’s activity patterns:

Mild hypoglycemia [low glucose] preferentially activates limbic-striatal brain regions in response to food cues to produce a greater desire for high-calorie foods. In contrast, euglycemia [normal glucose] preferentially activated the medial prefrontal cortex and resulted in less interest in food stimuli.

In other words, the healthier a person’s blood glucose level is, the more likely that person’s mPFC is to take control.

But what’s happening the rest of the time? Well, a brain structure called the hypothalamus – which detects and automatically responds to basic needs like hunger and thirst – triggers activity in areas like the insula, which helps provide emotional context for our bodily sensations. Another reaction, which seems to be closely related, is an increase of the stress hormone cortisol, which acts on regions like the striatum to contribute to feelings of anticipation and reward.

What this all means is that as you get hungrier, you gradually lose conscious control over your emotions – especially those related to your desire to eat.

The good news is, higher glucose levels reverse these effects, and lead to greater activation of the mPFC.

One interesting twist to this study is that the brains of obese people seem to respond to high-calorie foods as though they never have enough glucose:

These findings … suggest that this glucose-linked restraining influence is lost in obesity. Obese individuals may have a limited ability to inhibit the impulsive drive to eat, especially when glucose levels drop below normal.

The researchers are hopeful that future studies may help us understand where this vicious cycle of obesity and unhealthy eating begins – and how to break it.

For now, though, the takeaway is that our brains function best when we keep our glucose levels on a fairly even keel:

Strategies that temper postprandial reductions in glucose levels might reduce the risk of overeating, particularly in environments inundated with visual cues of high-calorie foods.

In other words, eat snacks that don’t have too many empty calories, but that still provide plenty of energy – especially if ads for junk food tend to give you cravings. When it comes to glucose, a good offense is the best defense.

Take that, Oreos! (That’s a phrase I don’t get to use very often.)

Optimistic Genetics

For the first time, scientists have pinpointed a particular gene variation linked with optimism and self-esteem, a new study reports.

A Genuine Scientific Image of the OXTR gene's G/G allele.

Two different versions – alleles – of the oxytocin receptor gene (OXTR) exist: an allele with the nucleotide “A” (adenine) at a certain location, and an allele with “G” (guanine) at that same location. Previous studies had found that people with at least one “A” at that location tended to have heightened sensitivity to stress, and worse social skills.

But as the journal Proceedings of the National Academy of Sciences (PNAS) reports, a team led by UCLA’s Shelley E. Taylor was able to correlate certain alleles of OXTR with specific psychological symptoms:

We report a link between the oxytocin receptor (OXTR) SNP rs53576 and psychological resources, such that carriers of the “A” allele have lower levels of optimism, mastery, and self-esteem, relative to G/G homozygotes. OXTR was also associated with depressive symptomatology.

In other words, people who have either two “A” nucleotides, or one “A” and one “G,” at that specific location have lower-than-normal levels of optimism and self-esteem, and much higher levels of depressive symptoms, than people with two “G” nucleotides at that location on the gene.

Meanwhile, people with two “G” nucleotides at that location are more likely to be able to buffer themselves against stress. This is among the most precise correlations between nucleotide differences and psychological traits yet discovered. And while this correlation isn’t a determiner of behavior, it may well prove to be an accurate predictor.
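As a toy illustration, the genotype-to-tendency mapping the study reports could be sketched as a simple lookup. (The function name and the labels are my own shorthand for the paper’s findings, not its terminology – and remember, these are group averages, not individual destinies.)

```python
def rs53576_profile(genotype: str) -> str:
    """Map an OXTR rs53576 genotype ("AA", "AG", "GA", or "GG")
    to the average tendency the study reports."""
    if set(genotype) == {"G"}:
        return "G/G: higher average optimism, mastery, and self-esteem"
    if "A" in genotype and set(genotype) <= {"A", "G"}:
        return "A carrier: lower average optimism, more depressive symptoms"
    raise ValueError(f"unexpected genotype: {genotype!r}")

print(rs53576_profile("GG"))
print(rs53576_profile("AG"))  # heterozygotes pattern with A/A, per the study
```

The design point the code makes explicit is that the study groups A/A and A/G carriers together: a single “A” is enough to shift the averages.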

To figure this out, Taylor’s team studied the DNA from the saliva of 326 volunteers, and examined this data along with questionnaires each subject had completed. The questionnaires measured subjective feelings like self-worth, confidence, and positivity. The subjects also completed a set of questionnaires used to diagnose depression.

As is usual when stories like this – that is, about genes linked with certain traits – hit the press, there’ll probably be a flurry of articles with titles like “The Happy Gene,” making vague claims that the “gene for optimism” has been isolated. And that’s not what this study is about, at all – it’s about a connection between certain versions of a gene and the availability of certain psychological resources:

Some people think genes are destiny, that if you have a specific gene, then you will have a particular outcome. That is definitely not the case. This gene is one factor that influences psychological resources and depression, but there is plenty of room for environmental factors as well. Even people with the “A” variant can overcome depression and manage stress.

In short, what these allele differences mainly predict is a person’s susceptibility to certain psychological disorders if they encounter certain types of stress – not the likelihood that they’ll actually develop a given disorder.

It also remains to be seen what role, exactly, the neurotransmitter oxytocin, and its receptors, play in managing psychological troubles. As I’ve mentioned before, it’s been shown to lower stress and increase generosity – and it’s also involved in timing birth, encouraging hunger, and… heightening racist feelings.

Still, studies like this continue to bring us more accurate and precise methods of diagnosing mental disorders – and even of discovering whether a person might be at risk for them. It’s also more evidence that our minds, like our bodies, are not all created equal – each of them is a unique neurochemical environment with its own thresholds of responsiveness.

So, next time somebody’s getting on your nerves, just tell them, “You better hope my oxytocin receptor genes are G/G alleles.” Take it from me: they’ll know exactly what you mean, and will probably back off and offer an apology.

