Big-O & Algorithm Analysis Lab
Explore how different complexity classes grow, compare sorting and searching algorithms side by side, and apply the Master Theorem to analyze divide-and-conquer recurrences. Record your findings and build a lab report.
Guided Experiment: When Does Complexity Matter?
At what input size does a quadratic algorithm become 10x slower than a linear one? How does the gap grow as n increases?
Write your hypothesis in the Lab Report panel, then click Next.
Growth Curves
Controls
Select 2-4 complexity classes
Presets
Results
Operations at n = 100
Comparison at key input sizes
| n | O(n) | O(n²) |
|---|---|---|
| 10 | 10 | 100 |
| 100 | 100 | 10,000 |
| 1,000 | 1,000 | 1,000,000 |
| 10,000 | 10,000 | 100,000,000 |
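The table above can be reproduced with a minimal Python sketch (the loop and variable names are illustrative, not part of the lab's own code). Note that the gap column directly answers the guided-experiment question: the quadratic algorithm is 10x slower exactly when n = 10.

```python
# Operation counts for O(n) vs O(n^2) at the key input sizes from the
# comparison table. gap = quadratic / linear = n, so it grows linearly.
for n in (10, 100, 1_000, 10_000):
    linear, quadratic = n, n * n
    print(f"n={n:>6}  O(n)={linear:>8}  O(n^2)={quadratic:>12}  gap={quadratic // linear}x")
```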
Data Table
(0 rows)
| # | Scenario | Complexity | n | Operations | Notes |
|---|---|---|---|---|---|
Reference Guide
Common Complexities
How many operations an algorithm needs as input size grows.
| Notation | Name | Example |
|---|---|---|
| O(1) | Constant | Array index |
| O(log n) | Logarithmic | Binary search |
| O(√n) | Square root | Trial division |
| O(n) | Linear | Linear search |
| O(n log n) | Linearithmic | Merge sort |
| O(n²) | Quadratic | Bubble sort |
| O(n³) | Cubic | Matrix multiply |
| O(2ⁿ) | Exponential | Subset generation |
| O(n!) | Factorial | Permutations |
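Two rows of this table can be contrasted directly by counting comparisons on the same sorted input. This is a hedged Python sketch (the function names and comparison counters are for illustration only): linear search is O(n), binary search is O(log n).

```python
def linear_search(arr, target):
    # Scan left to right, counting every comparison made.
    comparisons = 0
    for i, x in enumerate(arr):
        comparisons += 1
        if x == target:
            return i, comparisons
    return -1, comparisons

def binary_search(arr, target):
    # Halve the search interval each step; requires a sorted array.
    lo, hi, comparisons = 0, len(arr) - 1, 0
    while lo <= hi:
        mid = (lo + hi) // 2
        comparisons += 1
        if arr[mid] == target:
            return mid, comparisons
        if arr[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1, comparisons

data = list(range(1024))
_, lin = linear_search(data, 1023)   # worst case: scans all 1024 elements
_, log = binary_search(data, 1023)   # at most about log2(1024) + 1 = 11 probes
```

On 1024 sorted elements the gap is already roughly 100x, and it widens as n grows.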
Master Theorem
For divide-and-conquer recurrences of the form T(n) = a·T(n/b) + O(n^d),
where a is the number of subproblems, b is the factor by which the input shrinks, and n^d is the cost of dividing and combining.
Case 1 (d < log_b a): T(n) = Θ(n^(log_b a))
Case 2 (d = log_b a): T(n) = Θ(n^d · log n)
Case 3 (d > log_b a): T(n) = Θ(n^d)
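The three cases can be applied mechanically. Below is a hedged Python sketch (the function name and output strings are assumptions, not part of the lab): it compares d to the critical exponent log_b a for a recurrence T(n) = a·T(n/b) + O(n^d).

```python
import math

def master_theorem(a, b, d):
    # crit is log_b(a), the critical exponent the three cases compare against.
    crit = math.log(a, b)
    if abs(d - crit) < 1e-9:
        return f"Theta(n^{d} log n)"    # Case 2
    if d < crit:
        return f"Theta(n^{crit:g})"     # Case 1: recursion dominates
    return f"Theta(n^{d})"              # Case 3: combine step dominates

# Merge sort: T(n) = 2T(n/2) + O(n)   -> Case 2 -> Theta(n log n)
print(master_theorem(2, 2, 1))
# Binary search: T(n) = T(n/2) + O(1) -> Case 2 -> Theta(log n)
print(master_theorem(1, 2, 0))
# Karatsuba: T(n) = 3T(n/2) + O(n)    -> Case 1 -> Theta(n^log2(3))
print(master_theorem(3, 2, 1))
```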
Best, Average, and Worst Case
Best case is the minimum number of operations for any input of size n. For bubble sort on an already-sorted array, this is O(n).
Average case is the expected number of operations over all possible inputs. For bubble sort this is O(n²).
Worst case is the maximum number of operations. For quick sort with a bad pivot, this is O(n²), while average and best are O(n log n).
Worst-case analysis is most common because it provides a guarantee. Average-case is useful when inputs are random. Best-case is rarely useful on its own.
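The bubble sort figures above can be checked empirically. This is a minimal sketch, assuming the standard early-exit variant of bubble sort (the implementation below is illustrative, not the lab's own): a sorted input finishes in one O(n) pass, while a reversed input does the full O(n²) work.

```python
def bubble_sort(arr):
    # Early-exit bubble sort: stop as soon as a full pass makes no swaps.
    arr = list(arr)
    comparisons = 0
    for i in range(len(arr) - 1):
        swapped = False
        for j in range(len(arr) - 1 - i):
            comparisons += 1
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]
                swapped = True
        if not swapped:
            break  # best case: no swaps means the array was already sorted
    return arr, comparisons

best_out, best = bubble_sort(range(100))           # sorted input: one pass, 99 comparisons
worst_out, worst = bubble_sort(range(99, -1, -1))  # reversed input: 99+98+...+1 = 4950
```

For n = 100 the best case costs n−1 = 99 comparisons and the worst case n(n−1)/2 = 4,950, matching O(n) and O(n²).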
Amortized Analysis
Amortized analysis averages the cost of a sequence of operations, even when individual operations vary in cost.
Example: a dynamic array doubles its capacity when full. Most insertions are O(1), but a resize copies every existing element, which is O(n). Over n insertions the total copying is less than 2n = O(n), so the amortized cost per insertion is O(1).
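This is the aggregate method in action, sketched in Python (the simulation below is an assumption for illustration, not the lab's own code): count element copies across n appends into a doubling array and confirm the total stays below 2n.

```python
def count_copies(n):
    # Simulate n appends into an array that doubles its capacity when full,
    # counting only the element copies performed during resizes.
    capacity, size, copies = 1, 0, 0
    for _ in range(n):
        if size == capacity:
            copies += size      # resize: copy every existing element
            capacity *= 2
        size += 1
    return copies

total = count_copies(1_000)     # 1 + 2 + 4 + ... + 512 = 1023 copies < 2n
```

Total resize work is 1 + 2 + 4 + … + 512 = 1,023 copies for 1,000 appends, i.e. about one extra copy per insertion, so amortized O(1).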
Three methods are commonly used: the aggregate method (total cost divided by operations), the accounting method (assign credits to cheap operations), and the potential method (track a potential function over the data structure state).