When it comes to some problems, our logic and intuition alone aren’t enough to reach an accurate answer. Sometimes, we’ve got to **train** our logic to run through specific **steps**.

I’ll explain what I mean by this in just a second – but first, a quick recap: in Part 1 of this little **series**, I talked about how **intuitive** and **aesthetic** judgments shape our expectations about the outcomes of our chains of reasoning. In yesterday’s Part 2, I showed you some ways that even our most seemingly logical intuitions can be **misled**.

At the end of that post, I hinted that certain reasoning **techniques** can help us overcome linguistic **traps** that tend to trip up our best logical intentions – and it’s those techniques that I’ll explain today.

Let’s start with an example from the World Science Festival’s panel on “Risk, Probability and Chance.” Panelist Gerd Gigerenzer started his talk by pointing out that **language** often biases probabilistic judgments, such as the biases Mlodinow demonstrated in the African countries example from yesterday. Then Gigerenzer posed a **question** to test the audience’s ability to intuitively assess **complex probabilities**:

He described the probabilities associated with a breast cancer test: 1% of women tested have the disease, and the test is 90% accurate, with a 9% false positive rate. With all that information, what do you tell a woman who tests positive about the likelihood that she has the disease?

If it helps, here’s the **same** question phrased another way:

100 out of 10,000 women at age forty who participate in routine screening have breast cancer. 90 of every 100 women with breast cancer will get a positive mammography. 891 out of 9,900 women without breast cancer will also get a positive mammography. If 10,000 women in this age group undergo a routine screening, about what percentage of women with positive mammographies will actually have breast cancer?

Take a moment and think about how **you’d** figure out the answer to this question. If you look at both **phrasings**, you’ll probably realize that your **intuition** about the correct answer leans slightly **higher** for the second phrasing than it does for the first one – even though the two questions are mathematically **identical**. That should be your first clue that our intuition is pretty easy to **trick** when we’re dealing with complex probabilities.

Anyway, now that I’ve told you that, just try to **estimate** a reasonable answer, using whichever phrasing makes more sense to you. We’ll come back to this question in a little bit.

But before we go any further, I should mention that **studies** by Gigerenzer and others confirm that questions phrased like this stump most medical professionals around the world – most of them use their rational **intuition** about probabilities and guess pretty high, around **70%** or **80%**. But that’s actually **way** off.

Wouldn’t you like to know how to **out-think** all those doctors? Shouldn’t there be a way to retrain our intuition to point toward more **accurate** answers to this sort of question? Well, as it turns out, there is.

Welcome… to the world of **Bayesian** inference!

Bayesian thinking is based on **Bayes’ theorem**, named for Thomas Bayes, an English mathematician/theologian who lived in the 1700s. This theorem (and the principles based on it) is used in **statistical** analysis, **philosophical** logic, and a wide variety of other fields – some math-heavy; others not so much.

In general, Bayes’ theorem tells us that the likelihood of a particular **hypothesis** being true, given some observed **evidence**, is dependent **both** on the probability of the hypothesis being true **at all**, *and* on the probability of that hypothesis being **compatible** with that particular piece of evidence. That’s just about all there is to it.
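For the mathematically inclined, that verbal statement corresponds to the standard formula (using the usual notation for conditional probability – this is the textbook form, not something from the panel itself):

```latex
P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)}
```

Here \(P(H)\) is the probability of the hypothesis being true at all, \(P(E \mid H)\) is the probability of seeing that evidence if the hypothesis is true, and \(P(E)\) is the probability of seeing that evidence at all.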

But if you’re **curious** to learn more about how this works (and I hope you are), let’s go back to our breast cancer example. The likelihood that a patient has breast cancer, **given** that the patient has tested positive for it, depends **not only** on the accuracy of mammograms (i.e., the likelihood that a positive test result will happen if the patient actually has breast cancer), but **also** on the likelihood of the patient having breast cancer **at all**, *and* on the likelihood of that patient getting a positive test result **at all**.

Our connectomes don’t seem to be able to handle **complex probabilities** like these with as much accuracy as more straightforward estimates. As we saw above, even the intuitions of well-trained doctors are easily **tricked** by questions like this. Those doctors (and, in fact, most of us) tend to assume that if the false positive rate is **low**, and the mammogram’s accuracy is **high** overall, then the majority of women who test positive will actually have the disease. Seems obvious, doesn’t it?

Except… that’s **not true**. And now that we’re equipped to handle these kinds of tricky questions, we’re about to see just **how** not true it is. Let’s work through that same breast cancer question step by step with our newfound powers of **Bayesian inference**:^{1}

**1.** Out of every **10,000** patients, **100** actually **have** breast cancer (**1%**).

**2.** Out of those **100** patients, only **90** will get positive test results (**90%** accuracy).

**3.** Out of the total **10,000** patients, **9,900** will **not** have breast cancer (**10,000** – **100**).

With me so far? Cool – now here’s where the **Bayesian** thinking leaps into action, to save us from our **intuition** that a low false positive rate and a high accuracy mean that most patients who test positive for breast cancer will actually have it. Watch as Bayes slices and dices:

**4.** Out of our **9,900** healthy patients, **891** (**9%**) will get **false** positive test results.

**5.** A grand total of **981** patients (**891** + **90**) out of every 10,000 will get positive test results.

Now there’s only one question left to ask – and it’s the question on which this whole example hinges. Pay close attention, because you’re about to feel your previous intuition get **flipped** upside-down. The last question is, “Of all those **981** total patients who got positive test results, what percentage **actually** have breast cancer?” Well, we already know from step **#2** that only **90** out of **10,000** patients will both **a**) have breast cancer *and* **b**) get positive test results. So if you remember your junior-high math, this last step is a simple percentage problem:

**6.** Of the **981** patients with positive test results, only about **9%** (**90** ÷ **981**, expressed as a **percentage**) actually have breast cancer. This means **91%** of patients who test positive will actually be **healthy**!
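If you’d rather let a computer walk through those six steps, here’s a minimal sketch in Python (the variable names are my own, not Gigerenzer’s):

```python
# Natural-frequency version of the mammography problem, following steps 1-6.
population = 10_000
prevalence = 0.01            # step 1: 1% of women screened have breast cancer
sensitivity = 0.90           # step 2: 90% of those with cancer test positive
false_positive_rate = 0.09   # step 4: 9% of healthy women also test positive

with_cancer = population * prevalence                   # 100 women
true_positives = with_cancer * sensitivity              # 90 women
without_cancer = population - with_cancer               # step 3: 9,900 women
false_positives = without_cancer * false_positive_rate  # 891 women

total_positives = true_positives + false_positives      # step 5: 981 women
posterior = true_positives / total_positives            # step 6: ~0.092

print(f"P(cancer | positive test) = {posterior:.1%}")
```

Running it prints a probability of about 9.2% – nowhere near the 70–80% most doctors guess.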

Can you tell why this is? It’s because the number of patients who **actually** have breast cancer is such a **small** percentage of the patients who **test positive** for it – essentially the **opposite** of what our logic and intuition told us!

If you need to work through that problem a few times for it to make sense to you, don’t worry – I had to work it out on paper about ten times before I really felt I understood it. But once you grasp the **principle** behind Bayes’ theorem, you’ll find that it influences your logic – and your intuition – in all sorts of ways.

If there’s one take-home gift from this whole post, it’s this: when you’re considering a new scientific **hypothesis** – or even an **idea** someone’s telling you about – stop and think about not only **1**) the likelihood of that idea being true **at all**, but also **2**) the likelihood that it **applies**, fully and accurately, to the whole **range** of data they’re applying it to. You may find that your new Bayesian instincts guide your **intuition** in some unexpected directions.

____________

1. I should point out that this is purely a mathematical example – random patient samples in the real world (obviously) don’t correspond to statistics with perfect precision.
