Yes, it’s that special time of year again – time for flamboyant bouquets and chalky candy to appear at office desks – time for Facebook pages to drown in cloying iconography – time for self-labeled “forever aloners” to dredge the back alleys of OKCupid in last-ditch desperation – and time for me to load up my trusty gatling crossbow with oxytocin-tipped darts and hit the streets.
So, while I guess I could write about, say, a new study that says cutting your romantic partner some slack can make him or her more capable of actual change, or this one that says love and chocolate are good for cardiovascular health, I think it’ll be much more interesting to talk about what’s really on most of our minds today:
What does science have to say about “getting the girl” (or guy) of your dreams? And what do actual girls (and guys) think about it?
Let’s start with some full disclosure: about this time last year, I decided to see what all the fuss was about, and I read The Game for myself – and then I read some of the other works it cites, too. And I started talking to my friends (both male and female) about what they thought of the ideas in those books – and I tested a lot of the ideas I read, the same way I’d test any hypothesis: I wrote down the predictions various authors made, and checked how well those predictions lined up with my own real-world experiences.
In short, I went Full Geek on the topic.
What I learned is that, on the spectrum of scientific rigor – a scale from, say, astrology (0) to molecular chemistry (10) – most of this stuff falls somewhere in the 4-to-6 range: It tends to be more evidence-based than, say, ghost-hunting; but it still falls firmly into the realm of the “softer” sciences, like psychotherapy and so on.
The reason for this is that – as many pick-up artists freely admit – their craft is at least as much an artistic pursuit as a scientific one. Much like, say, Aristotle and Hobbes and Descartes, PUAs do their best to ground their conclusions logically in real-world data that anyone is free to test and refute – but at the same time, like those great philosophers of old, PUAs tend to be more intent on constructing elaborate thought systems than on presenting their “ugly” raw data for independent labs to crunch through.
This means pick-up manuals tend to read more like philosophical treatises than scientific papers.
And I think it’s this very feature of pick-up art that explains why it’s such a polarizing topic – why many women (and plenty of men) find the very concept insulting and distasteful, while other men swear that it’s transformed them from self-loathing losers into sexually fulfilled alpha males.
See, many women will tell you in no uncertain terms that pickup “tricks” don’t work on someone as intelligent and experienced as them; and that even if such tricks did work, they wouldn’t want to be “picked up” – instead, they want to fall in love (or at least in lust) with a man who’s honest about his real self and his real feelings. Many men, too, would agree that crafty seduction techniques somehow cheapen the process – that it’s better to be “forever alone” than to be surrounded by adoring women who were manipulated into their romantic feelings.
Meanwhile, men who’ve had “success” (however they choose to define it) as a result of a pick-up system’s techniques will often defend that system to the death – much like how a person who’s found inner peace thanks to, say, Buddhism will often defend it passionately against anti-Buddhist viewpoints.
What I’m arguing here, though, is that none of these reactions pertain directly to the underlying process of seduction at all – rather, they’re reactions to the (often sleazy-sounding) thought-systems that various writers have constructed around their experiences with that process.
Because – let’s get right down to it – in all our interactions with other humans, we’re hoping to manipulate the outcome somehow. Double entendres, pop-cultural references, stylish clothes and makeup, kind gestures, subtle dishonesty – even honesty itself – all these are tools and techniques that we hope will garner us a certain response.
For example, if you choose to callously manipulate the people around you, you may get a lot more sex than you would otherwise – but you’ll also end up with a lot of shallow relationships, which you’ll probably come to regret eventually. If you choose to be completely honest all the time, you may repel some people – but you’ll probably also find that those who stick around end up respecting you for who you really are.
It’s Game Theory 101: Players who “win” are those who understand the rules, risks and rewards of the game – and play accordingly. All the sleazy lingo and tricks – all the elaborate systems – are just various people’s attempts to explain these dynamics as they play out in gender relations, and to sell their vision of the process to a demographic of sex-starved men, whose desires they understand quite well.
But still – the underlying process itself is no more and no less sleazy than the mind of the person using it.
In other words, when you read between the lines of these PUA systems, most of them turn out to be geared toward the same premises: That to grow as a person, you need to 1) be fully honest with yourself about what you want from the people around you, 2) acknowledge the personal changes that need to be made in order to achieve those results, and 3) steadily work to make those changes in yourself.
From an evolutionary psychology perspective, it’s hard for me to see how that’s inherently more “cheap” than, say, a woman learning how to dress and speak seductively in order to get what she wants.
Yes, there are a lot of sleazy men out there who objectify women and sweet-talk them into one-night stands. There are also plenty of sweet-talking women out there who milk men for the contents of their wallets, then move on. And so we label each other “douchebags” and “bitches,” and keep engaging in the same defensive behaviors, and no one’s really happy.
And I hate that Game. I despise it.
At the same time, though, it’s clear that we humans, like many other animals, have evolved to play competitive social games – there’s no getting around that fact. But unlike many animals, we don’t have to play the game exactly as our instincts tell us to – we’re metacognitive, so we can learn to play using strategies that don’t result in zero-sum outcomes: We can develop tactics that help both sides get more of what they want. We can harness our evolutionary drives to mutually-beneficial behavior patterns.
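Since I keep leaning on game-theory language, here’s a minimal sketch of the distinction I mean, with entirely made-up payoffs: in a zero-sum game, every outcome’s payoffs cancel – one player’s gain is exactly the other’s loss – while in a positive-sum game, cooperation grows the pie for both players.

```python
# Illustrative payoff matrices: each entry maps a pair of strategies
# to (player A's payoff, player B's payoff). The numbers are invented.

# Zero-sum game: one player's gain is exactly the other's loss.
zero_sum = {
    ("bluff", "call"):  (-1, 1),
    ("bluff", "fold"):  (1, -1),
    ("honest", "call"): (1, -1),
    ("honest", "fold"): (-1, 1),
}

# Positive-sum game (a Prisoner's Dilemma shape): mutual cooperation
# yields a bigger total payoff than any other outcome.
positive_sum = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 4),
    ("defect", "cooperate"):    (4, 0),
    ("defect", "defect"):       (1, 1),
}

def total_payoff(game):
    """Sum both players' payoffs for every outcome in the game."""
    return {moves: a + b for moves, (a, b) in game.items()}

print(total_payoff(zero_sum))      # every outcome sums to 0
print(total_payoff(positive_sum))  # mutual cooperation sums to 6
```

The point of the sketch: “playing creatively” means steering toward games of the second kind, where the strategy that serves you can also serve the other player.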
Doesn’t that make you want to learn to play more creatively, instead of trying not to play at all?
I mean, at the end of the day, it kinda fills me with love for the Game.
What do you think?
If you continue to practice a skill even after you’ve achieved mastery of it, your brain keeps learning to perform it more and more efficiently, says a new study.
As we perform a task – say, dunking a basketball or playing a sweet guitar solo – over and over again, we eventually reach a point that some psychologists call “unconscious competence,” where we execute each movement perfectly without devoting any conscious attention to it at all. But even after this point, our bodies keep finding ways to perform the task more and more efficiently, burning less energy with each repetition.
This story’s got it all – brain-hacks, mysterious discoveries, robots – but to put it all in perspective, we’ve gotta start by talking about this idea we call perfection.
“Practice makes perfect,” the old saying goes – but what’s this “perfect” we’re trying to reach? Isn’t it often a matter of opinion? What I mean is, how do we judge, say, a “perfect” backflip or a “perfect” dive? We compare it to others we’ve seen, and decide that it meets certain criteria better than those examples did; that it was performed with less error.
But where do these criteria for perfection come from? Well, some have said there’s a Platonic realm of “perfect forms” that our minds are somehow tapping into – a realm that contains not only “The Perfect Chair” but “the perfect version of that chair” and “the perfect version of that other chair” and “the perfect version of that molecule” and so on, ad infinitum. Kinda weird, I know – but a lot of smart people believed in ideas like this for thousands of years, and some still do.
Science, though, works in a different way: Instead of trying to tap into a world of perfect forms, scientists (and engineers and mathematicians and programmers and so on) work to find errors and fix them.
And it turns out that the human body is quite talented at doing exactly that. A team led by Alaa Ahmed at the University of Colorado at Boulder found this out firsthand, with the help of robots, the Journal of Neuroscience reports:
Seated subjects made horizontal planar reaching movements toward a target using a robotic arm.
These researchers weren’t interested in brain activity – instead, as the volunteers practiced moving the arm, the researchers measured their oxygen consumption, their carbon dioxide output, and their muscle activity.
As you might expect, the scientists found that as people got better at moving the arm, their consumption of oxygen and production of carbon dioxide, and their overall muscle activity, steadily decreased:
Subjects decreased movement error and learned the novel dynamics. By the end of learning, net metabolic power decreased by ∼20% from initial learning. Muscle activity and coactivation also decreased with motor learning.
But the volunteers’ bodies didn’t stop there. As people kept practicing, their gas consumption and output continued to decrease – and so did their muscle activation. In short, their bodies kept learning to move the arm with measurably less and less physical effort.
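To make the shape of that continued improvement concrete, here’s a toy model (mine, not the researchers’) in which metabolic cost decays exponentially toward a floor about 20% below baseline – the ballpark drop reported above. Every parameter value is made up for illustration:

```python
import math

def metabolic_cost(trial, baseline=1.0, savings=0.2, rate=0.01):
    """Toy model of practice: cost decays exponentially from `baseline`
    toward (baseline - savings) as trials accumulate. The ~20% ceiling
    on savings echoes the study's reported drop; the decay rate is
    invented for illustration."""
    return baseline - savings * (1 - math.exp(-rate * trial))

# Cost keeps falling long after the early trials where movement
# error has already plateaued.
print(metabolic_cost(10))   # still near baseline
print(metabolic_cost(500))  # approaching the ~20% savings floor
```

The key feature the model captures is the asymmetry in timescales: accuracy plateaus quickly, but energetic cost keeps creeping downward for hundreds of trials afterward.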
Though this study didn’t record any data from the subjects’ brains, it’s easy to see how this continual improvement is just one reflection of a very versatile ability. For instance, we know that when two neurons get really friendly, they become more sensitive to each other’s signals – and we also know that underused neural pathways gradually fade away, making room for new ones. Self-improvement impulses are woven deeply into our bodies – into our cells.
When I say that our brains and bodies are cities, I’m not just speaking metaphorically – you are, quite literally, a vast community – an ecosystem composed of trillions of interdependent microorganisms, each one constantly struggling for its own nourishment and safety.
And though your conscious mind is one part – a very significant part – of this great microscopic nation, it’s not the only part that can learn. At this moment, all throughout the lightless highways and chambers of your body, far below your conscious access, networks of cells are changing, adapting, learning, adjusting – finding errors and fixing them.
So, you can think about “perfection” all you want – but even at that magical moment when you achieve it, the multitudes within you are still hard at work, figuring out how to reach beyond that ideal.
What do you think they’re up to right now?
Researchers have isolated a specific pathway our brains use when learning new beliefs about others’ motivations, a new study says.
Though this type of learning, like many others, depends heavily on the neurotransmitter dopamine’s influence in a set of ancient brain structures called the basal ganglia, it’s also influenced by the rostral anterior cingulate cortex (rACC) – a structure that helps us weigh certain emotional reactions against others – indicating that emotions like empathy also play crucial roles.
As we play competitively against other people, our brains get to work constructing mental models that aim to predict our opponents’ future actions. This means we’re not only learning from the consequences of our own actions, but figuring out the reasons behind others’ actions as well. This ability is known as theory of mind, and it’s thought to be one of the major mental skills that separates the minds of humans – and of our closest primate cousins – from those of other animals.
Though plenty of studies have examined the neural correlates of straightforward cause-and-effect learning, the process by which we learn from the actions of other people still remains somewhat unclear – largely because complex emotions like empathy and regret seem to involve many areas of the brain, including parts of the temporal, parietal and prefrontal cortices, as well as more ancient structures like the basal ganglia and cingulate cortex.
That’s why a team led by the University of Illinois’ Kyle Mathewson set out to track exactly what happens in our brains as we learn new ideas about others’ motivations, the journal Proceedings of the National Academy of Sciences reports.
The team used functional magnetic resonance imaging (fMRI) to study activity deep within volunteers’ brains as they played a competitive betting game against one another – focusing especially on moments when players learned whether they’d won or lost a round, and how much their opponents had wagered.
The researchers then used a computational model to match up patterns of brain activity with patterns of play – and found that the volunteers’ brains learned others’ behaviors and motivations through a complex interplay of several regions:
We found that the reinforcement learning (RL) prediction error was correlated with activity in the ventral striatum.
In other words, the ventral striatum – an area of the basal ganglia – was crucial for learning by reinforcement, much as the researchers expected…
In contrast, activity in the ventral striatum, as well as the rostral anterior cingulate (rACC), was correlated with a previously uncharacterized belief-based prediction error. Furthermore, activity in rACC reflected individual differences in degree of engagement in belief learning.
…while the anterior cingulate, on the other hand, seemed to dictate how attentively players watched their opponents’ patterns of play, and how much thought they put into predicting those patterns.
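The two learning signals in those quoted passages can be sketched as simple update rules. To be clear, this is a generic illustration of prediction-error learning, not the paper’s actual computational model – the learning rate and example numbers here are made up:

```python
def rl_update(value, reward, alpha=0.1):
    """Reinforcement-learning prediction error: the gap between the
    reward we received and the reward we expected. Returns the updated
    value estimate and the error itself."""
    prediction_error = reward - value
    return value + alpha * prediction_error, prediction_error

def belief_update(belief, opponent_action, alpha=0.1):
    """Belief-based prediction error: the gap between what we believed
    the opponent would do and what they actually did."""
    prediction_error = opponent_action - belief
    return belief + alpha * prediction_error, prediction_error

# Expected a payoff of 0.5, received 1.0 -> positive reward surprise.
v, delta = rl_update(0.5, 1.0)
print(round(v, 2), round(delta, 2))    # 0.55 0.5

# Believed the opponent would bet 0.3 of the pot; they bet 0.8.
b, delta_b = belief_update(0.3, 0.8)
print(round(b, 2), round(delta_b, 2))  # 0.35 0.5
```

Both rules have the same skeleton – nudge your estimate toward what actually happened – which fits the idea that belief learning is layered atop the older reinforcement-learning machinery.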
Thus, it appears that theory of mind is built atop an ancient “substructure” of simple reinforcement learning, which supports layers of more emotionally complex attitudes and beliefs about others’ thoughts, feelings and motivations – many of which are influenced by our perceptions of our own internal feelings.
And that points back to an important aspect of subjective experience in general: Many of our perceptions of the external world are extrapolated from our perceptions of our internal states. When we say, “It’s hot,” we really mean, “I feel hot”; when we say, “It’s loud in here,” we really mean, “It sounds loud to me.” In fact, the great philosopher Bertrand Russell went so far as to suggest that instead of saying, “I think,” it’d be more accurate to say “It thinks in me,” the same way we say “It’s raining.”
Anyway, no matter how you choose to phrase it, the point is that thinking isn’t a single process, but a relationship of many processes to one another. Which means that no matter how much we think we know, there’s always plenty left to learn.
Here it is – the first Connectome podcast!
Click here to subscribe in iTunes.
Join us as we talk with Joshua Vogelstein, a leading connectomics researcher, about the Open Connectome Project, an international venture to make data on neural connectivity available to everyone, all over the world. It’s like Google Maps for your brain.
Here’s a direct link to download the mp3.
We’ve learned a lot while working on this first episode, and future ones will be much cleaner and higher-fi.
A new study throws some light on how musical aptitude can offset one very specific aspect of the aging process.
In research comparing older adults with musical training to those without, the people who’d spent time regularly practicing or teaching music consistently displayed much faster neural reaction times to certain kinds of sounds.
The idea that the human brain has a deep relationship with music is obviously nothing new – but lately, research has been demonstrating more and more ways in which music is a major ingredient in mental health. For example, a 2007 study found that the brain reacts to music by automatically heightening attention, and one in 2010 found that an ear for harmony was correlated with a better ability to distinguish speech from noise.
The therapeutic implications of all this haven’t gone unnoticed. The neuroscientist Michael Merzenich has cured patients of chronic tinnitus (ear-ringing) by prescribing them musical training – and he’s had remarkable success using it to improve the responsiveness of autistic children.
Inspired by Merzenich’s work, a team led by Northwestern University’s Nina Kraus designed an experiment: They decided to record the reaction times of musicians’ brains when they heard certain sounds, and compare those against the reaction times of people with no musical training.
As the journal Neurobiology of Aging reports, the team used electrodes placed on the participants’ scalps to record exactly how quickly their auditory systems reacted to a variety of speech sounds.
They found that older musicians’ brains seemed to keep their youthful reaction speeds – at least when it came to one particular sound: the syllable “da,” whose rapid consonant-to-vowel shift is known as a formant transition in science slang:
Although younger and older musicians exhibited equivalent response timing for the formant transition, older nonmusicians demonstrated significantly later response timing relative to younger nonmusicians … The main effect of musicianship observed for the neural response to the onset and the transition was driven solely by group differences in the older participants.
In other words, a musician’s brain responds to the “da” sound just as quickly as it did in youth – but a nonmusician’s response time slows down significantly with age.
The slowdown isn’t much – only a few milliseconds – but in brain time, that can be enough to cause problems. See, we’re not talking about conscious reaction time here – this is electrophysiological reaction time – the speed at which information travels in the brain.
Why does this matter? Because mental issues like autism, senile dementia and schizophrenia are all related to very slight timing errors in the brain’s elaborate communication patterns. An aging brain isn’t so much an old clock as an old city. Ever notice how the most ancient cities tend to be the ones with the weirdest cultures? Well, there ya go.
Just like old cities, though, autism and dementia and schizophrenia – and aging – can be scary sometimes, but they’re also the sources of great breakthroughs, and remarkable insights, and all sorts of conversations that couldn’t have happened otherwise.
What I’m saying is, the only measurable difference between a disorder and a gift is that one is helpful and the other isn’t. And in most cases, that difference really comes down to timing.
Principles on which we refuse to change our stance are processed via separate neural pathways from those we’re more flexible on, says a new study.
Our minds process many decisions in moral “gray areas” by weighing the risks and rewards involved – so if the risk is lessened or the reward increased, we’re sometimes willing to change our stance. However, some of our moral stances are tied to much more primal feelings – “gut reactions” that remind us of our most iron-clad principles: don’t hurt innocent children, don’t steal from the elderly, and so on.
These fundamental values – what the study calls “sacred” values (whether they’re inspired by religious views or not) – are processed heavily by the left temporoparietal junction (TPJ), which is involved in imagining others’ minds; and by the left ventrolateral prefrontal cortex (vlPFC), which is important for remembering rules. When especially strong sacred values are called into question, the amygdala – an ancient brain region crucial for processing negative “gut” reactions like disgust and fear – also shows high levels of activation.
These results provide some intriguing new wrinkles to age-old debates about how the human mind processes the concepts of right and wrong. See, in many ancient religions (and some modern ones) rightness and wrongness are believed to be self-evident rules, or declarations passed down from on high. Even schools that emphasized independent rational thought – such as Pythagoreanism in Greece and Buddhism in Asia – still had a tendency to codify their moral doctrines into lists of rules and precepts.
But as scientists and philosophers like Jeremy Bentham and David Hume began to turn more analytical eyes on these concepts, it became clear that exceptions could be found for many “absolute” moral principles – and that our decisions about rightness and wrongness are often based on our personal emotions about specific situations.
The epic battle between moral absolutism and moral relativism is still in full swing today. The absolutist arguments essentially boil down to the claim that without some bedrock set of unshakable rules, it’s impossible to know for certain whether any of our actions are right or wrong. The relativists, on the other hand, claim that without some room for practical exceptions, no moral system is adaptable enough for the complex realities of this universe.
But now, as the journal Philosophical Transactions of the Royal Society B: Biological Sciences reports, a team led by Emory University’s Gregory Berns has analysed moral decision-making from a neuroscientific perspective – and found that our minds rely on rule-based ethics in some situations, and practical ethics in others.
The team used fMRI scans to study patterns of brain activity in 32 volunteers as the subjects responded “yes” or “no” to various statements, ranging from the mundane (e.g., “You are a tea drinker”) to the incendiary (e.g., “You are pro-life.”).
At the end of the questionnaire, the volunteers were offered the option of changing their stances for cash rewards. As you can imagine, many people had no problem changing their stance on, say, tea drinking for a cash reward. But when they were pressed to change their stances on hot-button issues, something very different happened in their brains:
We found that values that people refused to sell (sacred values) were associated with increased activity in the left temporoparietal junction and ventrolateral prefrontal cortex, regions previously associated with semantic rule retrieval.
In other words, people have learned to process certain moral decisions by bypassing their risk/reward pathways and directly retrieving stored “hard and fast” rules.
This suggests that sacred values affect behaviour through the retrieval and processing of deontic rules and not through a utilitarian evaluation of costs and benefits.
Of course, this makes it much easier to understand why “there’s no reasoning” with some people about certain issues – because it wasn’t reason that brought them to their stance in the first place. You might as well try to argue a person out of feeling hungry.
That doesn’t mean, though, that there’s no hope for intelligent discourse about “sacred” topics – what it does mean is that instead of trying to change people’s stances on them through logical argument, we need to work to understand why these values are sacred to them.
For example, the necessity of slavery was considered a sacred value all across the world for thousands of years – but today slavery is illegal (and considered morally heinous) in almost every country on earth. What changed? Quite a few things, actually – industrialization made hard manual labor less necessary for daily survival; overseas slaving expeditions became less profitable; the idea of racial equality became more popular…the list could go on and on, but it all boils down to a central concept: over time, the needs slavery had been meeting were addressed in modern, creative ways – until at last, most people felt better not owning slaves than owning them.
My point is, if we want to make moral progress, we’ve got to start by putting ourselves in the other side’s shoes – and perhaps taking a more thoughtful look at our own sacred values while we’re at it.