Principles we refuse to compromise on are processed via separate neural pathways from those we hold more flexibly, says a new study.
Some of our values can be more flexible than others...
Our minds process many decisions in moral “gray areas” by weighing the risks and rewards involved – so if the risk is lessened or the reward increased, we’re sometimes willing to change our stance. However, some of our moral stances are tied to much more primal feelings – “gut reactions” that remind us of our most iron-clad principles: don’t hurt innocent children, don’t steal from the elderly, and so on.
These fundamental values – what the study calls “sacred” values (whether they’re inspired by religious views or not) – are processed heavily by the left temporoparietal junction (TPJ), which is involved in imagining others’ minds; and by the left ventrolateral prefrontal cortex (vlPFC), which is important for remembering rules. When especially strong sacred values are called into question, the amygdala – an ancient brain region crucial for processing negative “gut” reactions like disgust and fear – also shows high levels of activation.
These results provide some intriguing new wrinkles to age-old debates about how the human mind processes the concepts of right and wrong. See, in many ancient religions (and some modern ones) rightness and wrongness are believed to be self-evident rules, or declarations passed down from on high. Even schools that emphasized independent rational thought – such as Pythagoreanism in Greece and Buddhism in Asia – still had a tendency to codify their moral doctrines into lists of rules and precepts.
But as philosophers like Jeremy Bentham and David Hume began to turn more analytical eyes on these concepts, it became clear that exceptions could be found for many “absolute” moral principles – and that our decisions about rightness and wrongness are often based on our personal emotions about specific situations.
The epic battle between moral absolutism and moral relativism is still in full swing today. The absolutist arguments essentially boil down to the claim that without some bedrock set of unshakable rules, it’s impossible to know for certain whether any of our actions are right or wrong. The relativists, on the other hand, claim that without some room for practical exceptions, no moral system is adaptable enough for the complex realities of this universe.
But now, as the journal Philosophical Transactions of the Royal Society B: Biological Sciences reports, a team led by Emory University’s Gregory Berns has analysed moral decision-making from a neuroscientific perspective – and found that our minds rely on rule-based ethics in some situations, and practical ethics in others.
The team used fMRI scans to study patterns of brain activity in 32 volunteers as the subjects responded “yes” or “no” to various statements, ranging from the mundane (e.g., “You are a tea drinker”) to the incendiary (e.g., “You are pro-life”).
At the end of the questionnaire, the volunteers were offered the option of changing their stances for cash rewards. As you can imagine, many people had no problem changing their stance on, say, tea drinking for a cash reward. But when they were pressed to change their stances on hot-button issues, something very different happened in their brains:
We found that values that people refused to sell (sacred values) were associated with increased activity in the left temporoparietal junction and ventrolateral prefrontal cortex, regions previously associated with semantic rule retrieval.
In other words, people have learned to process certain moral decisions by bypassing their risk/reward pathways and directly retrieving stored “hard and fast” rules.
This suggests that sacred values affect behaviour through the retrieval and processing of deontic rules and not through a utilitarian evaluation of costs and benefits.
Of course, this makes it much easier to understand why “there’s no reasoning” with some people about certain issues – because it wasn’t reason that brought them to their stance in the first place. You might as well try to argue a person out of feeling hungry.
That doesn’t mean, though, that there’s no hope for intelligent discourse about “sacred” topics – what it does mean is that instead of trying to change people’s stances on them through logical argument, we need to work to understand why these values are sacred to them.
For example, the necessity of slavery was treated as a sacred value across much of the world for thousands of years – yet today slavery is illegal (and considered morally heinous) in almost every country on earth. What changed? Quite a few things, actually – industrialization made hard manual labor less necessary for daily survival; overseas slaving expeditions became less profitable; the idea of racial equality gained ground… the list could go on and on, but it all boils down to a central concept: over time, the needs slavery had been meeting were addressed in new, creative ways – until at last, most people felt better off not owning slaves than owning them.
My point is, if we want to make moral progress, we’ve got to start by putting ourselves in the other side’s shoes – and perhaps taking a more thoughtful look at our own sacred values while we’re at it.