
Big-O Complexity Visualizer

Drag the slider to change n and see how different time complexities compare. The table shows operation counts and flags which complexities are practical at your chosen input size.

Growth Comparison

[Interactive chart: operation count vs. n (input size) for O(1) Constant, O(log n) Logarithmic, O(n) Linear, O(n log n) Linearithmic, O(n²) Quadratic, O(n³) Cubic, and O(2ⁿ) Exponential.]

Operations at n = 100

| Complexity | Name | Operations | Example |
|---|---|---|---|
| O(1) | Constant | 1 | Array index lookup, hash table get |
| O(log n) | Logarithmic | 6.64 | Binary search, balanced BST lookup |
| O(n) | Linear | 100 | Linear search, single loop |
| O(n log n) | Linearithmic | 664.39 | Merge sort, heap sort |
| O(n²) | Quadratic | 10,000 | Bubble sort, nested loops |
| O(n³) | Cubic | 1.00e+6 | Naive matrix multiply, triple loops |
| O(2ⁿ) | Exponential | 1.27e+30 | Recursive Fibonacci, power set |
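The table's operation counts come directly from evaluating each growth function at n = 100. A minimal sketch (the function name `operation_counts` is illustrative, not part of the tool):

```python
import math

def operation_counts(n):
    """Operation counts for common complexity classes at input size n."""
    return {
        "O(1)": 1,
        "O(log n)": math.log2(n),
        "O(n)": n,
        "O(n log n)": n * math.log2(n),
        "O(n^2)": n ** 2,
        "O(n^3)": n ** 3,
        "O(2^n)": 2 ** n,
    }

counts = operation_counts(100)
print(f"{counts['O(log n)']:.2f}")    # 6.64
print(f"{counts['O(n log n)']:.2f}")  # 664.39
```

Base-2 logarithms are used here, matching the table's values; any base only shifts the curve by a constant factor, which Big-O ignores.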

Reference Guide

What is Big-O?

Big-O notation describes the upper bound of an algorithm's growth rate as input size increases. It tells you how an algorithm scales, not its exact speed.

O(f(n)) means growth is at most proportional to f(n).
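One way to see "scaling, not exact speed" is to double the input and compare operation counts rather than timings. A rough sketch (the `ops_*` functions are illustrative stand-ins for counting an algorithm's steps):

```python
def ops_linear(n):
    return n        # one pass over the input: O(n)

def ops_quadratic(n):
    return n * n    # nested passes over the input: O(n^2)

# Doubling n doubles O(n) work but quadruples O(n^2) work,
# regardless of any constant factors.
for f in (ops_linear, ops_quadratic):
    print(f.__name__, f(2000) / f(1000))  # 2.0, then 4.0
```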

Efficient Complexities

O(1), O(log n), and O(n) are considered efficient. O(n log n) is the best possible for comparison-based sorting. These scale well to millions of elements.
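Binary search is the classic O(log n) example from the table: each comparison halves the remaining search range, so a million sorted elements need only about 20 steps. A minimal sketch:

```python
def binary_search(items, target):
    """O(log n): halve the sorted search range on every comparison."""
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid
        if items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1  # not found

print(binary_search(list(range(1_000_000)), 765_432))  # 765432
```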

Polynomial Complexities

O(n²) and O(n³) are polynomial. They work for small inputs but become slow for large ones. Nested loops are the typical cause.
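The nested-loop pattern is easy to count directly: two loops over the same input visit every pair, giving n(n−1)/2 ≈ n²/2 operations. A small sketch (the function name `count_pairs` is illustrative):

```python
def count_pairs(items):
    """O(n^2): nested loops touch every unordered pair once."""
    ops = 0
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            ops += 1  # e.g. one comparison in bubble sort
    return ops

print(count_pairs(range(100)))  # 4950, i.e. 100 * 99 / 2
```

Adding a third nested loop over the same input pushes this to O(n³), the shape of naive matrix multiplication.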

Exponential Growth

O(2ⁿ) doubles with each additional input element. Even n = 30 requires over a billion operations. Brute-force approaches often fall into this category.