Type I and Type II Errors infographic - False Positives and False Negatives


In hypothesis testing, statisticians make decisions using sample data, but those decisions can be wrong. Type I and Type II errors describe the two main ways a test can fail when deciding whether to reject a null hypothesis. These ideas matter because every real test, from medical screening to quality control, balances the risk of false alarms against the risk of missed effects. Understanding both errors helps students interpret test results more carefully.

A Type I error happens when the null hypothesis is actually true, but the test rejects it anyway. A Type II error happens when the null hypothesis is actually false, but the test fails to reject it. The probability of a Type I error is called α, and the probability of a Type II error is called β. Test power is 1 − β, so reducing missed detections usually means increasing power through better design, larger samples, or stronger effects.
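The meaning of α can be made concrete with a quick simulation. The sketch below (the numbers are illustrative: samples of n = 30 from a standard normal, a two-sided z-test with known σ = 1) repeatedly tests a null hypothesis that is actually true and counts how often it gets rejected anyway. The observed false-positive rate should land near the chosen α of 0.05:

```python
import math
import random
from statistics import NormalDist

def z_test_rejects(sample, alpha=0.05):
    """Two-sided z-test of H0: mu = 0, assuming known sigma = 1."""
    n = len(sample)
    z = sum(sample) / math.sqrt(n)          # sample mean divided by sigma/sqrt(n), with sigma = 1
    crit = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided critical value
    return abs(z) > crit

random.seed(42)
sims = 10_000
# H0 is TRUE here: every sample really comes from a mean-zero distribution,
# so every rejection is a Type I error (a false positive).
false_positives = sum(
    z_test_rejects([random.gauss(0, 1) for _ in range(30)])
    for _ in range(sims)
)
print(f"Observed Type I error rate: {false_positives / sims:.3f}")  # close to alpha = 0.05
```

The key point the simulation illustrates: α is not a property of any single test run, but the long-run rate of false alarms when the null hypothesis holds.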

Key Facts

  • Type I error: reject H₀ when H₀ is true.
  • Type II error: fail to reject H₀ when H₀ is false.
  • P(Type I error) = α.
  • P(Type II error) = β.
  • Power = 1 − β.
  • Lowering α usually makes rejecting H₀ harder and can increase β if sample size stays fixed.
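The trade-off in the last fact can be computed directly for a simple case. The sketch below uses a hypothetical scenario (a one-sided z-test of H₀: μ = 0 against a true mean of 0.5, with σ = 1 and n = 30) to show that lowering α raises β when nothing else changes:

```python
import math
from statistics import NormalDist

def type_ii_error(alpha, true_mu, n, sigma=1.0):
    """Beta for a one-sided upper-tail z-test of H0: mu = 0 vs Ha: mu > 0."""
    nd = NormalDist()
    z_crit = nd.inv_cdf(1 - alpha)           # rejection cutoff under H0
    shift = true_mu * math.sqrt(n) / sigma   # how far the true mean moves the z statistic
    return nd.cdf(z_crit - shift)            # P(fail to reject | H0 is false)

# Same study design, two choices of alpha: stricter alpha -> larger beta, lower power.
for alpha in (0.05, 0.01):
    beta = type_ii_error(alpha, true_mu=0.5, n=30)
    print(f"alpha = {alpha:.2f} -> beta = {beta:.3f}, power = {1 - beta:.3f}")
```

With these illustrative numbers, moving from α = 0.05 to α = 0.01 roughly doubles β: the test makes fewer false alarms but misses the real effect much more often.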

Vocabulary

Null hypothesis
The default claim, usually written as H₀, that says there is no effect, no difference, or no change.
Alternative hypothesis
The competing claim, usually written as Hₐ or H₁, that says an effect, difference, or change exists.
Significance level
The chosen cutoff α that sets the maximum tolerated probability of a Type I error.
Type I error
A false positive in which the test rejects the null hypothesis even though it is actually true.
Type II error
A false negative in which the test does not reject the null hypothesis even though it is actually false.

Common Mistakes to Avoid

  • Saying a Type I error means the null hypothesis is false, because a Type I error actually happens when the null hypothesis is true but gets rejected anyway.
  • Thinking failing to reject H₀ proves H₀ is true, because the test may simply lack enough evidence or power to detect a real effect.
  • Confusing α with β, because α is the probability of a false positive while β is the probability of a false negative.
  • Assuming lowering α improves everything, because making rejection harder can increase β and reduce power when sample size does not change.
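The second mistake, treating "fail to reject" as proof of H₀, can be seen in a short simulation. In the sketch below (illustrative numbers: a small sample of n = 10, a real but modest true mean of 0.3, σ = 1), the null hypothesis is false in every run, yet an underpowered test fails to reject it most of the time:

```python
import math
import random
from statistics import NormalDist

random.seed(7)
crit = NormalDist().inv_cdf(0.975)   # two-sided z-test at alpha = 0.05
sims, n, true_mu = 5_000, 10, 0.3    # small sample, real but modest effect

fail_to_reject = 0
for _ in range(sims):
    sample = [random.gauss(true_mu, 1) for _ in range(n)]
    z = (sum(sample) / n) * math.sqrt(n)   # z statistic with known sigma = 1
    if abs(z) <= crit:
        fail_to_reject += 1                # a Type II error on every one of these runs

print(f"H0 is false, yet the test fails to reject it "
      f"{fail_to_reject / sims:.0%} of the time")
```

A non-rejection here says nothing about H₀ being true; it mostly reflects the test's low power at this sample size.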

Practice Questions

  1. A medical test uses α = 0.05. What is the probability of a Type I error, and what does that mean in words?
  2. A hypothesis test has β = 0.18. Calculate the power of the test.
  3. A researcher lowers α from 0.05 to 0.01 without increasing sample size. Explain how this change is likely to affect the chances of Type I and Type II errors.