Bayesian Inference Lab

Explore how prior beliefs combine with new evidence to produce updated probabilities. Discover why a 95%-accurate test can still be wrong most of the time, and build intuition for conditional probability through tree diagrams, natural frequencies, and sequential updating.

Guided Experiment: The Base Rate Fallacy

If a medical test has 95% sensitivity and 5% false positive rate, and 1% of the population has the disease, what is the probability that a person who tests positive actually has the disease? Write your intuitive guess before calculating.

Write your hypothesis in the Lab Report panel, then click Next.

Controls

Prior and Test Parameters

Prior P(A): base rate of the condition

Sensitivity P(B|A): probability of testing positive when the condition is present

False positive rate P(B|A'): probability of testing positive when the condition is absent

Bayes' Theorem

P(A|B) = \frac{P(B|A) \cdot P(A)}{P(B|A) \cdot P(A) + P(B|A') \cdot P(A')}
Posterior Probability
Given a positive test, the probability of having the condition is 16.10%.

P(A|B) = \frac{0.95 \times 0.01}{0.059} = 0.161
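This calculation can be checked with a short sketch (the function name `posterior` is my own, not part of the lab):

```python
def posterior(prior, sensitivity, false_pos_rate):
    """P(A|B): probability of having the condition given a positive test."""
    # Total probability of a positive test, P(B), by the law of total probability
    p_b = sensitivity * prior + false_pos_rate * (1 - prior)
    return sensitivity * prior / p_b

print(round(posterior(0.01, 0.95, 0.05), 3))  # 0.161
```

Changing `prior` in this call is the quickest way to see how strongly the base rate drives the answer.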
P(B) total: 5.90%
LR+: 19
Specificity: 95.00%
PPV: 16.67%
NPV: 100.00%
Sensitivity: 95.00%

(PPV and NPV here are computed from the tree's rounded counts, 10/60 and 940/940, which is why PPV differs slightly from the exact posterior of 16.10%.)

Probability Tree

Branch probabilities: P(A) = 1.0%, P(A') = 99.0%; P(B|A) = 95.0%, P(B'|A) = 5.0%; P(B|A') = 5.0%, P(B'|A') = 95.0%.
Out of 1000 people: 10 true positives, 0 false negatives, 50 false positives, 940 true negatives, giving P(A|B) = 16.10%.
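The tree's branch counts can be reproduced by splitting a hypothetical population along each branch (a sketch; `tree_counts` is an illustrative name, and the lab's tree rounds these expected counts to whole people):

```python
def tree_counts(n, prior, sensitivity, false_pos_rate):
    """Expected number of people on each branch of the probability tree."""
    has_a = n * prior   # people with the condition
    no_a = n - has_a    # people without it
    return {
        "true_pos":  round(has_a * sensitivity, 1),
        "false_neg": round(has_a * (1 - sensitivity), 1),
        "false_pos": round(no_a * false_pos_rate, 1),
        "true_neg":  round(no_a * (1 - false_pos_rate), 1),
    }

print(tree_counts(1000, 0.01, 0.95, 0.05))
# {'true_pos': 9.5, 'false_neg': 0.5, 'false_pos': 49.5, 'true_neg': 940.5}
```

Rounding 9.5 up to 10 and 0.5 down to 0 yields the whole-person counts shown in the tree.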

Data Table

(0 rows recorded)

Columns: # | Trial | Prior P(A) | Sensitivity | False Pos Rate | Posterior P(A|B) | PPV | NPV

Reference Guide

Bayes' Theorem

Bayes' theorem tells us how to update the probability of a hypothesis after observing evidence.

P(A|B) = \frac{P(B|A) \cdot P(A)}{P(B)}

The posterior P(A|B) depends on three ingredients: the prior P(A), the likelihood P(B|A), and the total probability of the evidence P(B).

Natural Frequencies

Rather than juggling abstract probabilities, think in terms of concrete counts.

Out of 1000 people: 10 sick, 9 test +; 990 healthy, 50 test +.

Of 59 positive results, only 9 (about 15%) actually have the condition. Counting makes Bayes intuitive.
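The counting argument can be written out directly, using the section's rounded whole-person counts:

```python
# Natural frequencies: reason with whole people instead of probabilities.
population = 1000
sick = 10          # 1% base rate
sick_pos = 9       # ~95% of the 10 sick people test positive
healthy_pos = 50   # ~5% of the 990 healthy people test positive

ppv = sick_pos / (sick_pos + healthy_pos)  # 9 of 59 positives are real
print(f"{ppv:.0%}")  # 15%
```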

Sensitivity and Specificity

Sensitivity measures how well a test catches true positives. Specificity measures how well it avoids false positives.

\text{Sensitivity} = P(B|A), \quad \text{Specificity} = P(B'|A') = 1 - \text{FPR}

A highly sensitive test rarely misses a case. A highly specific test rarely gives a false alarm. The best tests score high on both.
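These relationships, plus the positive likelihood ratio (LR+) shown in the stats panel, amount to two one-line formulas. A minimal sketch using the lab's default parameters:

```python
sensitivity = 0.95     # P(B|A): test positive given the condition
false_pos_rate = 0.05  # P(B|A'): test positive without the condition

specificity = 1 - false_pos_rate        # P(B'|A')
lr_plus = sensitivity / false_pos_rate  # how strongly a positive result shifts the odds

print(specificity, round(lr_plus))  # 0.95 19
```

An LR+ of 19 means a positive result multiplies the prior odds by 19, which still leaves low posterior odds when the prior is 1 in 99.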

The Base Rate Fallacy

People tend to ignore the base rate (prior) when interpreting test results. A 95%-sensitive test sounds nearly perfect, but if the condition is rare, most positives are false positives.

P(\text{disease}) = 1\%, \; \text{Sensitivity} = 95\% \implies P(\text{disease}|+) \approx 16\%

This is why screening tests for rare conditions often require a second, more specific follow-up test.
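Sequential updating, mentioned in the lab intro, makes the follow-up-test logic concrete: the posterior after the first positive becomes the prior for the second. A sketch, assuming (for illustration) that the second test has the same accuracy as the first:

```python
def update(prior, sensitivity, false_pos_rate):
    """One Bayesian update after a positive test result."""
    p_b = sensitivity * prior + false_pos_rate * (1 - prior)
    return sensitivity * prior / p_b

p = 0.01                   # 1% base rate
p = update(p, 0.95, 0.05)  # after the first positive: ~0.161
p = update(p, 0.95, 0.05)  # after a second positive: ~0.785
print(round(p, 3))  # 0.785
```

Two positives raise the probability from 1% to about 79%; in practice the follow-up test is usually chosen to be more specific, which pushes the second posterior higher still.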