Big-O Complexity Visualizer
Drag the slider to change n and see how different time complexities compare. The table shows operation counts and flags which complexities are practical at your chosen input size.
Growth Comparison (interactive chart: operation count vs. input size n for each complexity class)
Operations at n = 100
| Complexity | Name | Operations | Practical? | Example |
|---|---|---|---|---|
| O(1) | Constant | 1 | Yes | Array index lookup, hash table get |
| O(log n) | Logarithmic | 6.64 | Yes | Binary search, balanced BST lookup |
| O(n) | Linear | 100 | Yes | Linear search, single loop |
| O(n log n) | Linearithmic | 664.39 | Yes | Merge sort, heap sort |
| O(n^2) | Quadratic | 10,000 | Yes | Bubble sort, nested loops |
| O(n^3) | Cubic | 1.00e+6 | Yes | Naive matrix multiply, triple loops |
| O(2^n) | Exponential | 1.27e+30 | No | Recursive Fibonacci, power set |
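The counts in the table come from plugging n into each class's growth formula. A minimal Python sketch (the function and dictionary names are assumed for illustration, not taken from the visualizer's source):

```python
import math

# Growth formulas for each complexity class shown in the table.
COMPLEXITIES = {
    "O(1)": lambda n: 1,
    "O(log n)": lambda n: math.log2(n),
    "O(n)": lambda n: n,
    "O(n log n)": lambda n: n * math.log2(n),
    "O(n^2)": lambda n: n ** 2,
    "O(n^3)": lambda n: n ** 3,
    "O(2^n)": lambda n: 2 ** n,
}

def operation_counts(n):
    """Return the estimated operation count for each complexity at input size n."""
    return {name: f(n) for name, f in COMPLEXITIES.items()}

counts = operation_counts(100)
print(f"{counts['O(log n)']:.2f}")    # ~6.64
print(f"{counts['O(n log n)']:.2f}")  # ~664.39
```

Note that 2^100 is astronomically large (~1.27e+30), which is why the exponential row is the only one flagged impractical at n = 100.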
Reference Guide
What is Big-O?
Big-O notation describes the upper bound of an algorithm's growth rate as input size increases. It tells you how an algorithm scales, not its exact speed.
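One consequence of describing growth rate rather than exact speed is that constants and lower-order terms are dropped. A quick demonstration with a made-up cost function (f is hypothetical, chosen only to illustrate the point):

```python
# Hypothetical cost function: f(n) = 3n^2 + 5n.
# Big-O keeps only the dominant term, so f is O(n^2): the ratio
# f(n) / n^2 approaches the constant 3 as n grows, which is why
# the leading coefficient and the 5n term are ignored.
def f(n):
    return 3 * n**2 + 5 * n

for n in [10, 1_000, 100_000]:
    print(n, f(n) / n**2)  # ratio shrinks toward 3
```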
Efficient Complexities
O(1), O(log n), and O(n) are considered efficient. O(n log n) is optimal for comparison-based sorting: no comparison sort can do better in the worst case. These complexities scale well to millions of elements.
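Binary search, the table's logarithmic example, shows where O(log n) comes from: each comparison halves the remaining range, so about log2(n) iterations suffice. A standard sketch:

```python
def binary_search(items, target):
    """O(log n) lookup in a sorted list: each comparison halves the
    remaining search range, so at most ~log2(n) iterations run."""
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid
        elif items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1  # not found

data = list(range(1_000_000))
print(binary_search(data, 765_432))  # locates the target in ~20 comparisons
```

Even at a million elements, roughly 20 comparisons are enough, which is why logarithmic algorithms scale so well.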
Polynomial Complexities
O(n^2) and O(n^3) are polynomial. They work for small inputs but become slow for large ones. Nested loops are the typical cause.
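Bubble sort, the table's quadratic example, makes the nested-loop cost concrete: the two loops perform n(n-1)/2 comparisons. A sketch that counts them (this basic variant has no early-exit optimization, so the count is the same for any input order):

```python
def bubble_sort(items):
    """O(n^2) sort: the nested loops always perform n*(n-1)/2
    comparisons in this basic variant."""
    items = list(items)  # sort a copy
    comparisons = 0
    n = len(items)
    for i in range(n):
        for j in range(n - 1 - i):
            comparisons += 1
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
    return items, comparisons

sorted_items, ops = bubble_sort([5, 1, 4, 2, 3])
print(sorted_items, ops)  # [1, 2, 3, 4, 5] after 10 comparisons (5*4/2)
```

At n = 100 that is 4,950 comparisons, still trivial; at n = 1,000,000 it is roughly 5e11, which is where quadratic algorithms stop being usable.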
Exponential Growth
O(2^n) doubles with each additional input element. Even n=30 requires over a billion operations. Brute-force approaches often fall into this category.
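Naive recursive Fibonacci, the table's exponential example, shows the blow-up directly: each call spawns two more, so the call tree doubles with depth. A sketch that counts the calls, alongside a memoized version that collapses the tree to O(n) distinct subproblems:

```python
from functools import lru_cache

def fib_calls(n):
    """Naive recursive Fibonacci, counting calls. The call count grows
    exponentially, since each call below the base case spawns two more."""
    calls = 0
    def fib(k):
        nonlocal calls
        calls += 1
        if k < 2:
            return k
        return fib(k - 1) + fib(k - 2)
    return fib(n), calls

@lru_cache(maxsize=None)
def fib_memo(n):
    """Memoization reduces the work to O(n): each subproblem is solved once."""
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

value, calls = fib_calls(25)
print(value, calls)  # 75025, reached via hundreds of thousands of calls
print(fib_memo(25))  # same value from only 26 distinct subproblems
```

This is the usual escape hatch: when a brute-force recursion repeats subproblems, memoization or dynamic programming can often bring exponential cost down to polynomial.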