Understanding Complexity (Asymptotic Notations)

Big O Notation

Big O notation is used in computer science to describe the performance or complexity of an algorithm.
Big O is most often used to describe the worst-case scenario (though, strictly speaking, it is an upper bound rather than a worst-case measure), and it can describe the execution time required or the space used (e.g. in memory or on disk) by an algorithm.

O(1) describes an algorithm that will always execute in the same time (or space) regardless of the size of the input data set.

bool IsFirstElementNull(IList<string> elements)  
{
    return elements[0] == null;
}

O(n) describes an algorithm whose performance will grow linearly and in direct proportion to the size of the input data set.

bool ContainsValue(IList<string> elements, string value)  
{
    foreach (var element in elements)
    {
        if (element == value) return true;
    }

    return false;
}

O(n²) represents an algorithm whose performance is directly proportional to the square of the size of the input data set. This is common with algorithms that involve nested iterations over the data set.

bool ContainsDuplicates(IList<string> elements)  
{
    for (var outer = 0; outer < elements.Count; outer++)
    {
        for (var inner = 0; inner < elements.Count; inner++)
        {
            // Don't compare with self
            if (outer == inner) continue;

            if (elements[outer] == elements[inner]) return true;
        }
    }

    return false;
}

O(2ⁿ) denotes an algorithm whose running time doubles with each addition to the input data set.

int Fibonacci(int number)  
{
    // Base cases: Fibonacci(0) = 0, Fibonacci(1) = 1
    if (number <= 1) return number;

    // Each call spawns two further calls, so the call tree
    // roughly doubles in size with every increment of number.
    return Fibonacci(number - 2) + Fibonacci(number - 1);
}

O(log n) means that the running time grows linearly while n grows exponentially. So if it takes 1 second to compute 10 elements, it will take 2 seconds to compute 100 elements, 3 seconds to compute 1,000 elements, and so on.

We usually find logarithmic complexity in divide-and-conquer algorithms. In the case of binary search, the time complexity is on the order of the maximum number of iterations, and therefore the maximum number of times the search space can be split in half. This is accomplished by repeatedly dividing the size of the search space, n, by 2 until you get to 1. (We are trying to narrow down the exact place the element would be if it existed in the search space. Since we can only check one position at a time, we can only make a final conclusion about whether or not the element is in the search space once we have that precise location. That's why we reduce the search space to 1: after we do that, we check that one spot to see if it contains the element.)
Let's give the number of times you need to divide n by 2 the label x. Since dividing by 2 a total of x times is equivalent to dividing by 2ˣ, you end up having to solve this equation:
$$ \frac{n}{2^{x}}= 1 $$

To reiterate: x is the number of times you can split a space of size n in half before it is narrowed down to size 1.
The solution to this equation can be found by taking the logarithm of both sides:

$$ 2^{x} = n $$

$$ \log 2^{x} = \log n $$

$$ x\log 2 = \log n $$

$$ x = \frac{\log n}{\log 2} = \log_{2} n $$

So from this, we've determined that the maximum number of steps needed to search a space of n elements is log₂(n). Since the base of the logarithm only changes the result by a constant factor (the change-of-base rule above), binary search is in O(log n).
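To make this concrete, here is a minimal binary search sketch in C# (the method name and signature are my own, not part of the discussion above). Each pass of the loop discards half of the remaining range, which is exactly the repeated halving the equation describes:

int BinarySearch(IList<int> sortedElements, int value)  
{
    var low = 0;
    var high = sortedElements.Count - 1;

    while (low <= high)
    {
        // Each iteration halves the remaining range,
        // so the loop runs at most about log2(n) times.
        var mid = low + (high - low) / 2;

        if (sortedElements[mid] == value) return mid;

        if (sortedElements[mid] < value) low = mid + 1;
        else high = mid - 1;
    }

    return -1; // Not found
}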

An important detail is that it doesn't matter what constant you're dividing n by (as long as it's greater than one); if you divide by the constant k, it will take logₖ(n) steps to reach 1. Thus any algorithm that repeatedly divides the input size by some constant factor will need O(log n) iterations to terminate. Those iterations might take a lot of time, so the net runtime needn't be O(log n), but the number of steps will be logarithmic.
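A tiny sketch (my own illustration, not from the text) makes this visible: count how many divisions by a constant k it takes to reduce n to 1. For n = 1,000,000 and k = 2 it returns 19, close to log₂(1,000,000) ≈ 19.9.

int CountDivisions(int n, int k)  
{
    var steps = 0;

    // Each pass shrinks n by a constant factor k,
    // so the loop runs about log_k(n) times.
    while (n > 1)
    {
        n /= k;
        steps++;
    }

    return steps;
}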

Processing values one digit at a time can also result in logarithmic complexity. How many digits are in the base-10 number n? If n has k digits, then n is at least 10ᵏ⁻¹ (the smallest k-digit number, 100…0) and at most 10ᵏ - 1 (the largest k-digit number, 999…9, with k nines). If we want to bound k in terms of n, we can start from the lower bound:

$$ 10^{k-1} \leq n $$

$$ k-1 \leq \log_{10} n $$

$$ k \leq \log_{10} n + 1 $$

So k, the number of digits in n, is O(log n).
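A hypothetical helper (my own sketch) shows the same thing in code: counting the digits of n by repeated division by 10 takes one step per digit, i.e. O(log n) steps.

int CountDigits(int n)  
{
    if (n == 0) return 1;

    var digits = 0;

    // Each division by 10 strips one digit, so the loop
    // runs k times for a k-digit number: O(log n).
    while (n > 0)
    {
        n /= 10;
        digits++;
    }

    return digits;
}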

For example, let's think about the complexity of adding two large numbers that are too big to fit into a machine word. Suppose those numbers are represented in base 10, and call them m and n. One way to add them is to add one digit from each at a time, working from right to left. We do a total of O(1) work per digit (that is, a constant amount of work), and there are O(max{log n, log m}) digits to process. This gives a total complexity of O(max{log n, log m}); assuming log n is the greater, this is O(log n). Other examples include radix sort, the Ford-Fulkerson algorithm, etc.
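Here is a rough sketch of that digit-by-digit addition (the representation is my own assumption: digits stored least-significant first in int arrays). The loop body is O(1), and it runs once per digit position.

int[] AddDigits(int[] m, int[] n)  
{
    // Digits are least-significant first, e.g. 123 -> [3, 2, 1].
    var length = Math.Max(m.Length, n.Length);
    var result = new List<int>();
    var carry = 0;

    // One constant-time step per digit position:
    // O(max{log m, log n}) steps in total.
    for (var i = 0; i < length; i++)
    {
        var sum = carry
                + (i < m.Length ? m[i] : 0)
                + (i < n.Length ? n[i] : 0);
        result.Add(sum % 10);
        carry = sum / 10;
    }

    if (carry > 0) result.Add(carry);

    return result.ToArray();
}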

Formal Definition

In mathematics, let f and g be two functions defined on some subset of the real numbers. One writes

$$ f(x) = O(g(x)) \text{ as } {x}\rightarrow \infty $$

if and only if there is a positive constant M such that for all sufficiently large values of x, the absolute value of f(x) is at most M multiplied by the absolute value of g(x).
That is, f(x) = O(g(x)) if and only if there exists a positive real number M and a real number x₀ such that

$$ \left | f(x) \right | \leq M\left | g(x) \right | \text{ for all } {x}\geq {x_0} $$
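As a quick check of the definition (a worked example of my own, not from the original text): take f(x) = 3x² + 5x and g(x) = x². Choosing M = 4 and x₀ = 5 satisfies the definition, since 5x ≤ x² whenever x ≥ 5:

$$ \left | f(x) \right | = 3x^{2} + 5x \leq 3x^{2} + x^{2} = 4\left | g(x) \right | \text{ for all } {x}\geq {5} $$

Hence 3x² + 5x = O(x²).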

In many contexts, the assumption that we are interested in the growth rate as the variable x goes to infinity is left unstated, and one writes more simply that

$$ f(x) = O(g(x)) $$

The notation can also be used to describe the behavior of f near some real number a (often, a = 0): we say

$$ f(x) = O(g(x)) \text{ as } {x}\rightarrow {a} $$

if and only if there exist positive numbers δ and M such that

$$ \left | f(x) \right | \leq M\left | g(x) \right | \text{ when } 0< \left | x-a \right |< \delta $$

In typical usage, the formal definition of O notation is not used directly; rather, the O notation for a function f is derived by the following simplification rules (a worked example follows the list):

  • If f(x) is a sum of several terms, if there is one with largest growth rate, it can be kept, and all others omitted.
  • If f(x) is a product of several factors, any constants (factors in the product that do not depend on x) can be omitted.
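Applying these rules to a concrete polynomial (my own example):

$$ f(n) = 6n^{3} + 15n + 100 $$

$$ \Rightarrow 6n^{3} \text{ (keep only the term with the largest growth rate)} $$

$$ \Rightarrow n^{3} \text{ (omit the constant factor 6)} $$

$$ f(n) = O(n^{3}) $$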

So, in short, Big O notation just describes an asymptotic upper bound, which means it is technically correct to say something like "Quicksort is in O(n!)," even though Quicksort's actual worst-case running time is O(n²) and will never exceed that. All Big O is saying is: "beyond some input size, Quicksort will always take fewer than n! steps to complete."

Just as Big O notation provides an asymptotic upper bound on a function, Ω notation provides an asymptotic lower bound.
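For completeness (the definition below is the standard one, stated by analogy with the Big O definition above rather than taken from the original text): f(x) = Ω(g(x)) if and only if there exists a positive real number M and a real number x₀ such that

$$ \left | f(x) \right | \geq M\left | g(x) \right | \text{ for all } {x}\geq {x_0} $$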
