Bayesian Inference Lab
Explore how prior beliefs combine with new evidence to produce updated probabilities. Discover why a 95%-accurate test can still be wrong most of the time, and build intuition for conditional probability through tree diagrams, natural frequencies, and sequential updating.
Guided Experiment: The Base Rate Fallacy
If a medical test has 95% sensitivity and 5% false positive rate, and 1% of the population has the disease, what is the probability that a person who tests positive actually has the disease? Write your intuitive guess before calculating.
Write your hypothesis in the Lab Report panel, then click Next.
Controls
Base rate of the condition
Probability of testing positive when condition is present
Probability of testing positive when condition is absent
Bayes' Theorem
Probability Tree
Data Table
(0 rows)

| # | Trial | Prior P(A) | Sensitivity | False Pos Rate | Posterior P(A|B) | PPV | NPV |
|---|---|---|---|---|---|---|---|
Reference Guide
Bayes' Theorem
Bayes' theorem tells us how to update the probability of a hypothesis after observing evidence.
The posterior P(A|B) depends on three ingredients: the prior P(A), the likelihood P(B|A), and the total probability of the evidence P(B).
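In symbols: P(A|B) = P(B|A) · P(A) / P(B), where the evidence term expands as P(B) = P(B|A)P(A) + P(B|¬A)P(¬A). As a minimal Python sketch (the function name is illustrative, not part of the lab):

```python
def posterior(prior, sensitivity, false_pos_rate):
    """P(A|B): probability the condition is present given a positive test."""
    # Total probability of a positive result, P(B):
    p_positive = sensitivity * prior + false_pos_rate * (1 - prior)
    return sensitivity * prior / p_positive

# The guided experiment's values: 1% base rate, 95% sensitivity, 5% false positives.
print(round(posterior(0.01, 0.95, 0.05), 3))  # → 0.161
```

Even with a seemingly accurate test, the posterior is only about 16% because the prior is so small.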
Natural Frequencies
Rather than juggling abstract probabilities, think in terms of concrete counts.
Of 1,000 people, 10 have the condition (1% base rate). The test catches about 9 of them (95% sensitivity) but also falsely flags about 50 of the 990 healthy people (5% false positive rate). So of the roughly 59 positive results, only about 9 (about 15%) actually have the condition. Counting makes Bayes intuitive.
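The same count-based reasoning can be reproduced directly (the 1,000-person cohort is illustrative):

```python
population = 1000
base_rate, sensitivity, false_pos_rate = 0.01, 0.95, 0.05

with_condition = population * base_rate                           # 10 people
true_positives = with_condition * sensitivity                     # ~9.5 people
false_positives = (population - with_condition) * false_pos_rate  # ~49.5 people

# Fraction of positive results that are genuine (the PPV):
ppv = true_positives / (true_positives + false_positives)
print(round(ppv, 3))  # → 0.161
```

The counts give exactly the same answer as Bayes' theorem; they are just the theorem with the denominators made visible.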
Sensitivity and Specificity
Sensitivity is the probability of a positive test when the condition is present, P(B|A). Specificity is the probability of a negative test when the condition is absent; the false positive rate used in the controls above is 1 − specificity.
A highly sensitive test rarely misses a case. A highly specific test rarely gives a false alarm. The best tests score high on both.
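The data table's PPV and NPV columns follow from these two numbers plus the prevalence. A sketch of how they could be computed (not necessarily the lab's actual code):

```python
def ppv_npv(prevalence, sensitivity, specificity):
    """Positive and negative predictive values of a test."""
    tp = sensitivity * prevalence              # true positives
    fn = (1 - sensitivity) * prevalence        # missed cases
    tn = specificity * (1 - prevalence)        # true negatives
    fp = (1 - specificity) * (1 - prevalence)  # false alarms
    return tp / (tp + fp), tn / (tn + fn)

# Specificity 0.95 corresponds to the lab's 5% false positive rate.
ppv, npv = ppv_npv(0.01, 0.95, 0.95)
print(round(ppv, 3), round(npv, 4))  # → 0.161 0.9995
```

Note the asymmetry: for a rare condition, a negative result is highly trustworthy (NPV ≈ 1) even while a positive result is not (PPV ≈ 0.16).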
The Base Rate Fallacy
People tend to ignore the base rate (prior) when interpreting test results. A 95%-sensitive test sounds nearly perfect, but if the condition is rare, most positives are false positives.
This is why screening tests for rare conditions often require a second, more specific follow-up test.
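Sequential updating makes the follow-up logic concrete: the posterior from the first test becomes the prior for the second. A sketch, assuming a hypothetical second test with a 1% false positive rate:

```python
def update(prior, sensitivity, false_pos_rate):
    """One Bayesian update on a positive test result."""
    evidence = sensitivity * prior + false_pos_rate * (1 - prior)
    return sensitivity * prior / evidence

p = 0.01                     # base rate of the condition
p = update(p, 0.95, 0.05)    # first (screening) test positive: p ≈ 0.16
p = update(p, 0.95, 0.01)    # second, more specific test also positive
print(round(p, 2))  # → 0.95
```

Two positives in a row move the probability from 1% to about 95%, which is why the screen-then-confirm pattern works.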