A brief intro to algorithmic time complexity

The other day I was struck by the thought that something that is extremely obvious to computer scientists may, in fact, be completely unintuitive to other folks. It was this:

The running time of an algorithm does not necessarily scale linearly with the size of the input

That is, if a program* (see note below) takes one minute to perform some function on one thousand items, it is not guaranteed to take ten minutes to perform the same function on ten thousand items - it could take much, much longer.

*Here, I am actually talking about algorithms, not programs. An algorithm is a step-by-step outline of how to solve a problem - sort of like a recipe. It isn't the actual program you'd use to solve the problem. We use algorithms every day without ever applying that term to them! Every time you follow a recipe, a guide or tutorial, a process or routine, you're following an algorithm to accomplish a task.

In everyday life, we are used to dealing with linear relationships: if it takes one hour to drive a hundred kilometers, it will take two hours to drive two hundred kilometers; if it takes a day to read a 200 page novel, it will take a week to read a 1400 page book. As one factor increases, the other factor increases by a constant, linear proportion. (e.g. one hour per 100 kilometers; one day per 200 pages) Linearity is very natural, and we conceptualize it easily.

There are, however, many other kinds of relationships. It is possible - and quite common - to have an algorithm with a quadratic relationship between the size of the input and the time taken to run. That is, for every n-fold increase in the size of the input, there is an n^2-fold increase in the time it takes! Some other possible relationships between runtime and input size are cubic (n^3), factorial (n!), exponential (2^n, 3^n, ...), and logarithmic (log n). (See "Examples," below)

This relationship between runtime and input size is called the time complexity of an algorithm. We use the time complexity as a way to compare the performance of different algorithms. An algorithm could run very quickly when it performs a function on only one hundred items, but its runtime could balloon out to an unreasonable amount of time when given a million. When evaluating the time complexity of an algorithm, we do not care about the exact amount of time it takes an algorithm to run; at no point do we make any actual time measurements. The only thing that comes close to being "counted" is the number of instructions executed, and even that is a very fuzzy "count." What we're interested in is the growth rate of the runtime compared to the size of the input.

I've made a simple chart depicting the curves of several common functions, to demonstrate the drastic difference in the rates of growth of these functions:

To express this relationship between runtime and input size, we use "Big-O" notation. An algorithm is said to be O(n) if there is a linear relationship between its runtime and its input size in the worst case. It is O(n^2) if the relationship is quadratic, O(log n) if it's logarithmic, and so on. Big O is not intended to be exact: it describes the worst-case growth of the runtime, regardless of how the algorithm can be expected to perform most of the time.

I'm not going to go too far into the details of how one finds the Big-O. I will state that in seeking the time complexity, we analyze an algorithm and (loosely) count the number of times that loops will run, instructions will be executed, and recursive calls will be made, relative to the input n. Then, we throw away everything but the largest term (relative to the input size) in the equation. For example, on an input of size n, if an algorithm is found to perform at most (5n^2 + 3n + log n + 15) instructions, we throw everything but the n^2 term away and say it is an O(n^2) algorithm.

A more detailed explanation of big-O and time complexity can be found here (Stack Overflow)
A video explanation of Big-O, and an introduction to Big-Theta and Big-Omega, can be found here (Xoax: YouTube)


Examples

It's hard to imagine algorithms that can perform worse (or better) than linear time, but with some examples it becomes clear that they're a common occurrence.


Constant time: O(1)

An algorithm that executes in constant time is one that takes the same amount of time regardless of the size of the input. If it takes one second on an input of size n, it will still take one second on an input that is a million times that size!

As you can imagine, there aren't many non-trivial algorithms that operate in constant time... but one that is well-known is determining whether a number is odd or even. It doesn't matter how many digits the number has; you need only look at the very last digit to see if it is odd or even. Therefore, the algorithm to test whether a number is odd or even runs in constant time.
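As a sketch (in Python - my choice, since the post doesn't commit to a language), the last-digit test looks like this:

```python
def is_even(number):
    # Only the final digit decides parity, no matter how many digits
    # the number has - so this check is constant time.
    return str(number)[-1] in "02468"

print(is_even(7))           # a one-digit number...
print(is_even(1234567890))  # ...and a ten-digit number take the same work
```

(In practice you'd write `number % 2 == 0`, which is also constant time; the string version just mirrors the "look at the last digit" description above.)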


Logarithmic time: O(log n)

Logarithms show up an awful lot in math and computer science, considering how rarely the function appears in our everyday lives. The logarithm is the inverse of the exponential. An exponent a^x gives the value of consecutively multiplying the base (a) by itself (x) times, while a logarithm log_b(Y) gives the number of times a value (Y) can be consecutively divided by the logarithm's base (b) until it reaches 1.

2^4 = 2 * 2 * 2 * 2 = 16
log_2(16): (16/2 = 8) (8/2 = 4) (4/2 = 2) (2/2 = 1)
There are four divisions by two until we reach 1; therefore log_2(16) = 4

Many algorithms use approaches that result in a logarithmic big-O. Typically, if an algorithm can ignore huge portions of the input by dividing it into pieces and discarding most of them, it will be logarithmic. Interestingly, the base of the logarithm doesn't matter: a base-10 logarithm and a base-2 logarithm are exactly the same in big-O (something that isn't true for every big-O function).

As an example, say you want to look up the number for Alan Turing in a phone book. One way you could search the book is to open it to the halfway mark. If the name you find at the halfway point is Alan Turing, you're done! (and incredibly lucky.) The chances are high that it is not Alan Turing, but since the phone book is sorted in alphabetical order, you now know if you will find Turing's number in the first half or the second half of the book. You've just halved the number of names you have to search!

You could then perform the same operation again: find the halfway point of half the phone book, to see which subportion of the phone book you have to search. You now only need to evaluate one quarter of the phone book! You can keep doing this halving process until you've narrowed down the number of names you need to search through to just one - Alan Turing, or the spot where Alan Turing would be listed if he were living in your city.

This is called a binary search. At each step, you break your input in half and reapply the algorithm to just one half. The number of steps the algorithm needs is bounded by the number of times you can consecutively divide the input (n) by 2. In the worst case, it performs log_2(n) steps, which is O(log n).
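The phone-book search above can be sketched in a few lines of Python (names are mine):

```python
def binary_search(sorted_list, target):
    """Return the index of target in sorted_list, or None if it's absent."""
    low, high = 0, len(sorted_list) - 1
    while low <= high:
        mid = (low + high) // 2       # open the book at the halfway mark
        if sorted_list[mid] == target:
            return mid                # incredibly lucky!
        elif sorted_list[mid] < target:
            low = mid + 1             # discard the first half
        else:
            high = mid - 1            # discard the second half
    return None                       # the spot where target would be

phone_book = ["Babbage", "Hopper", "Lovelace", "Turing"]
print(binary_search(phone_book, "Turing"))  # found at index 3
```

Each pass through the loop halves the search range, so a book of a million names needs only about 20 comparisons.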


Linear time: O(n)

Linear-time algorithms don't need much explaining! They are algorithms where the time taken to execute scales linearly with the size of the input. A very simple and common example is the problem of finding the smallest value in a list. We need to check each value in the list at least once to see if it is smaller than the value we currently think is the smallest. This simple algorithm is linear time, as we only check each value once.
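The find-the-smallest scan, as a minimal Python sketch:

```python
def smallest(items):
    """Linear scan: every item is examined exactly once, so n items
    mean n comparisons - O(n)."""
    current_min = items[0]
    for value in items[1:]:
        if value < current_min:
            current_min = value  # found a new smallest-so-far
    return current_min

print(smallest([3, 1, 4, 1, 5, 9, 2, 6]))  # 1
```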


Polynomial time

Polynomial time is a quite common algorithmic time complexity. Any algorithm that runs in O(n^c), where c is some constant (like 2, 3, 4), is said to be "polynomial." Unlike the base of a logarithm, the exponent matters here: an algorithm that is O(n^2) is not in the same time complexity class as an algorithm that is O(n^3).

Polynomial time complexity usually arises if, for each element in the input, you do some computation with the chosen element and each of the other elements. You end up doing, for example, n computations a total of n times. The computations could be as simple as a comparison! An easy example is selection sort.

Since it is a sorting algorithm, you are given a list of numbers and the goal is to return the sorted list. In selection sort, we begin by searching the list for the smallest number. We then swap the smallest number and the number that was in the first place in the list. Then, we search the rest of the list (now containing (n-1) items), find the smallest number in that portion, and swap it with the second number in the list. This process continues until we have exhausted all the elements, and the list we are left with is sorted!

How many numbers total do we examine? The first step examines n numbers; the second step examines n-1 numbers; the third step examines n-2 numbers... this is clearly the arithmetic progression,
(n) + (n-1) + (n-2) + ... + 2 + 1
Which, if you remember your high school math at all, simplifies to,
(n^2 + n) / 2
Which is on the order of O(n^2)
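The selection sort just described, as a minimal Python sketch - the nested loops are where the n * (n-1) * ... comparisons come from:

```python
def selection_sort(items):
    """Sort a list in place by repeatedly selecting the smallest
    remaining value and swapping it into position - O(n^2)."""
    n = len(items)
    for i in range(n):
        # Search the unsorted portion items[i:] for its smallest value.
        smallest_index = i
        for j in range(i + 1, n):
            if items[j] < items[smallest_index]:
                smallest_index = j
        # Swap it into place; items[:i+1] is now sorted.
        items[i], items[smallest_index] = items[smallest_index], items[i]
    return items

print(selection_sort([5, 3, 1, 4, 2]))  # [1, 2, 3, 4, 5]
```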


Exponential time

An exponential-time Big-O is any function where the input size n appears in the exponent. In Big-O, the base of the exponent matters, so an O(2^n) algorithm is in a different time complexity class than an O(3^n) algorithm.

A problem a friend of mine recently faced is that of counting the maximum value of a cribbage hand. In a crib hand, you are dealt six cards, and you get to keep four of them. There are two problems here: the immediate problem of determining which cards to keep, and the problem of totaling the points. Determining which cards are best to keep is a challenging problem in itself, since the points for that round depend on what card comes up on the cut (a question of probability) and on how well the cards perform on the pegging step (which depends on what the other players have in their hands!) in addition to the worth of the cards you hold.

However, even totaling the points is tricky! There are several different points-giving combinations, and they can overlap each other. A single card can contribute to multiple points-giving combinations, such as a flush, a run and adding up to fifteen.

One possible way to systematically count the points is to calculate the set of all possible groupings of cards in a five-card hand (four cards in your hand, plus one card for the cut), and check the points value of each grouping. That is, in the five-card grouping, consider all five of the cards. (Are they a run? Do they add up to fifteen?) In the four-card groupings, consider all four cards. (Are they a run? Do they add up to fifteen?)

The hand's value will be the sum of the points found for each grouping. Each set of cards in a group should be unique (that is, you won't end up looking at both (5♦, 10♣) and (10♣, 5♦) - this pairing will only be evaluated once), so you will be able to look at every possible combination of cards, of size 1 through 5. If we label our five cards as a,b,c,d,e we will need to calculate:

{a}, {b}, {c}, {d}, {e},
{a,b}, {a,c}, {a,d}, {a,e}, {b,c}, {b,d}, {b,e}, {c,d}, {c,e}, {d,e},
{a,b,c}, {a,b,d}, {a,b,e}, {a,c,d}, {a,c,e}, {a,d,e}, {b,c,d}, {b,c,e}, {b,d,e}, {c,d,e},
{a,b,c,d}, {a,b,c,e}, {a,b,d,e}, {a,c,d,e}, {b,c,d,e},
{a,b,c,d,e}

This is called the power set. The power set of a set of size n contains 2^n elements (counting the empty set), meaning that any algorithm that calculates every element of the power set must operate on the order of 2^n.

A nice article on how to actually calculate the power set can be found here
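One straightforward way to generate it (a Python sketch using the standard library's `itertools.combinations`; not necessarily the method from the linked article):

```python
from itertools import combinations

def power_set(items):
    """All subsets of items, from the empty set up to the full set.

    There are 2**len(items) of them, which is why any algorithm that
    visits every subset is O(2^n)."""
    subsets = []
    for size in range(len(items) + 1):
        subsets.extend(combinations(items, size))
    return subsets

cards = ["a", "b", "c", "d", "e"]
print(len(power_set(cards)))  # 2**5 = 32 subsets (31 non-empty groupings)
```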


Factorial time

Like logarithms, factorials are a function that rarely appears in day-to-day life. A factorial is the product of every whole number from n down to 1; at each multiplication step, the value decreases by 1:

n! = n * (n-1) * (n-2) * ... * 3 * 2 * 1

These numbers get very large, very fast.

5! = 5 * 4 * 3 * 2 * 1
= 20 * 3 * 2 * 1
= 60 * 2 * 1
= 120

6! = 6 * 5 * 4 * 3 * 2 * 1
= 30 * 4 * 3 * 2 * 1
= 120 * 3 * 2 * 1
= 360 * 2 * 1
= 720
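The definition above is a one-loop function (a minimal Python sketch):

```python
def factorial(n):
    """n! = n * (n-1) * ... * 2 * 1. These values outgrow even
    exponentials like 2**n as n increases."""
    result = 1
    for value in range(2, n + 1):
        result *= value
    return result

print(factorial(5))  # 120
print(factorial(6))  # 720
```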

Because they grow so quickly, factorial-time algorithms are religiously avoided and tend to only come about as the result of naive solutions to problems. A naive solution is the "first glance" attempt at a solution that often fails to take advantage of patterns or clever shortcuts. Such a naive or brute force solution is sometimes presented to solve the Travelling Salesman Problem, a well-known problem in computer science.

In a common version of the Travelling Salesman Problem, we are given a map of cities that a salesman wants to visit. Each road in the map is assigned a cost. We want to determine a route for the salesman to take that will allow him to visit each of the cities on the map exactly once, while minimizing the total cost of the trip.

The naive algorithm is to exhaustively calculate every possible route for the salesman to take, and then choose the route that costs the least. What is the worst-case runtime of this approach? Consider a map where every city is connected by a road to every other city. We will say the input-size n is equal to the number of cities.

[Figure: the calculation of a single path; every red line shows a valid choice.]

We randomly pick a city to start with. When leaving the first city, we can choose from (n-1) roads - one road to each of the cities we haven't visited yet. So, we start with (n-1) potential routes. From each of these secondary cities, we can choose to leave on one of (n-2) roads - one to each remaining city, excluding the first city and the one we're currently visiting. Now, we are considering [(n-1) * (n-2)] possible routes. At the third city, we can choose to travel to one of the remaining (n-3) cities. Our list of all possible routes grows to size [(n-1) * (n-2) * (n-3)].

Continuing this way down to the last city, there are (n-1)! possible routes from our starting city in this particular map - and if we also try every possible starting city, n! routes in total. We now know that an algorithm that calculates all possible routes to solve the Travelling Salesman Problem is O(n!).
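The naive route enumeration can be sketched in Python with `itertools.permutations` (the cost-matrix representation and the function name are my choices):

```python
from itertools import permutations

def cheapest_tour(cost, start=0):
    """Brute-force TSP: try every ordering of the remaining cities.

    cost[i][j] is the cost of the road between city i and city j.
    There are (n-1)! orderings to check - factorial time."""
    n = len(cost)
    other_cities = [c for c in range(n) if c != start]
    best_route, best_cost = None, float("inf")
    for route in permutations(other_cities):  # (n-1)! permutations
        total, here = 0, start
        for city in route:                    # walk the route, summing costs
            total += cost[here][city]
            here = city
        if total < best_cost:
            best_route, best_cost = (start,) + route, total
    return best_route, best_cost

# Three fully connected cities; road 0-2 is expensive.
roads = [[0, 1, 4],
         [1, 0, 2],
         [4, 2, 0]]
print(cheapest_tour(roads))  # ((0, 1, 2), 3)
```

Even at a modest 15 cities, this is already 14! (about 87 billion) routes - which is exactly why factorial-time algorithms are avoided.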

Continuous-line artwork generated by solving the traveling salesman problem for carefully-arranged cities - unrelated to time complexity, but just damn cool.

Copyright 2012 Phile not Found. See About