Big-O & Algorithm Analysis Lab

Explore how different complexity classes grow, compare sorting and searching algorithms side by side, and apply the Master Theorem to analyze divide-and-conquer recurrences. Record your findings and build a lab report.

Guided Experiment: When Does Complexity Matter?

At what input size does a quadratic algorithm become 10x slower than a linear one? How does the gap grow as n increases?

Write your hypothesis in the Lab Report panel, then click Next.

Growth Curves

[Growth curve chart: operations vs. n (0–100) for O(n) (linear) and O(n²) (quadratic).]

Results

At n = 100, O(n) performs 100 operations while O(n²) performs 10,000.

Comparison at key input sizes

n        O(n)     O(n²)
10       10       100
100      100      10,000
1,000    1,000    1.00e+6
10,000   10,000   1.00e+8
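The comparison table above can be reproduced directly. This is a minimal sketch that evaluates each complexity function at the key input sizes; the function names are illustrative, not part of the lab interface.

```python
# Evaluate the linear and quadratic operation counts at the
# input sizes used in the comparison table above.

def ops_linear(n):
    return n

def ops_quadratic(n):
    return n * n

for n in (10, 100, 1_000, 10_000):
    print(f"n={n:>6}  O(n)={ops_linear(n):>6}  O(n²)={ops_quadratic(n):.2e}")
```

Note how the quadratic column grows by a factor of 100 each time n grows by a factor of 10.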

Data Table

Record one row per experiment with the columns: #, Scenario, Complexity, n, Operations, Notes.

Reference Guide

Common Complexities

How many operations an algorithm needs as input size grows.

Notation     Name           Example
O(1)         Constant       Array index
O(log n)     Logarithmic    Binary search
O(√n)        Square root    Trial division
O(n)         Linear         Linear search
O(n log n)   Linearithmic   Merge sort
O(n²)        Quadratic      Bubble sort
O(n³)        Cubic          Matrix multiply
O(2ⁿ)        Exponential    Subset generation
O(n!)        Factorial      Permutations

Master Theorem

For divide-and-conquer recurrences of the form

T(n) = aT(n/b) + Θ(n^d)

where a is the number of subproblems, b is the factor by which the input shrinks, and Θ(n^d) is the cost of splitting and combining at each level.

Case 1 (d < log_b a): T(n) = Θ(n^(log_b a))

Case 2 (d = log_b a): T(n) = Θ(n^d log n)

Case 3 (d > log_b a): T(n) = Θ(n^d)
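The three cases reduce to comparing d against log_b a, which is easy to mechanize. A minimal sketch (the function name and output strings are illustrative):

```python
import math

def master_theorem(a, b, d):
    """Classify T(n) = a*T(n/b) + Theta(n^d) by comparing d to log_b(a)."""
    critical = math.log(a, b)  # the critical exponent log_b(a)
    if math.isclose(d, critical):
        return f"Theta(n^{d} log n)"     # Case 2
    if d < critical:
        return f"Theta(n^{critical:g})"  # Case 1
    return f"Theta(n^{d})"               # Case 3

# Merge sort: T(n) = 2T(n/2) + Theta(n)   -> Theta(n^1 log n)
print(master_theorem(2, 2, 1))
# Binary search: T(n) = T(n/2) + Theta(1) -> Theta(n^0 log n) = Theta(log n)
print(master_theorem(1, 2, 0))
# Karatsuba: T(n) = 3T(n/2) + Theta(n)    -> Theta(n^1.585)
print(master_theorem(3, 2, 1))
```

Try it on the recurrences you meet in the lab before checking against the three cases by hand.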

Best, Average, and Worst Case

Best case is the minimum number of operations for any input of size n. For bubble sort on an already-sorted array, this is O(n).

Average case is the expected number of operations over all possible inputs. For bubble sort this is O(n²).

Worst case is the maximum number of operations. For quicksort with consistently bad pivot choices (e.g., always picking the smallest element), this is O(n²), while the average and best cases are O(n log n).

Worst-case analysis is most common because it provides a guarantee. Average-case is useful when inputs are random. Best-case is rarely useful on its own.
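The best/worst gap for bubble sort is easy to observe by counting comparisons. A sketch assuming an early-exit bubble sort (the instrumented function is illustrative):

```python
# Count comparisons in an early-exit bubble sort to contrast
# the best case O(n) (sorted input) with the worst case O(n²)
# (reverse-sorted input).

def bubble_sort_comparisons(arr):
    a = list(arr)
    comparisons = 0
    n = len(a)
    for i in range(n - 1):
        swapped = False
        for j in range(n - 1 - i):
            comparisons += 1
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
                swapped = True
        if not swapped:  # no swaps in a full pass: already sorted, stop early
            break
    return comparisons

n = 100
print(bubble_sort_comparisons(range(n)))         # sorted: n - 1 = 99
print(bubble_sort_comparisons(range(n, 0, -1)))  # reversed: n(n-1)/2 = 4950
```

On sorted input the first pass finds no swaps and exits after n − 1 comparisons; on reversed input every pass runs in full, giving n(n − 1)/2.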

Amortized Analysis

Amortized analysis averages the cost of a sequence of operations, even when individual operations vary in cost.

Example: a dynamic array doubles in size when full. Most insertions are O(1), but a resize copies all existing elements, which is O(n). Over n insertions, the total work is at most about 2n element writes, i.e. O(n), so the amortized cost per insertion is O(1).
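The doubling argument can be checked empirically with the aggregate method. A minimal simulation (the counter function is illustrative, not a real list implementation):

```python
# Simulate a doubling dynamic array and count element copies.
# Total copies stay below ~2n, so the amortized cost per append is O(1).

def total_copies(n_appends):
    capacity = 1
    size = 0
    copies = 0
    for _ in range(n_appends):
        if size == capacity:
            copies += size   # resize: copy every existing element over
            capacity *= 2
        size += 1
        copies += 1          # write the new element
    return copies

n = 1_000_000
print(total_copies(n) / n)   # ratio stays bounded by a small constant
```

The resize copies form the geometric series 1 + 2 + 4 + … < 2n, which is why the per-append average never exceeds a small constant no matter how large n gets.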

Three methods are commonly used: the aggregate method (total cost divided by operations), the accounting method (assign credits to cheap operations), and the potential method (track a potential function over the data structure state).