
The main question is, how exactly is the big O analysis calculated on routines? Is there a specific formula that relates what each function in a program does to a big O calculation?

Also, what about more complex iterations, such as colour conversions, etc.?

I would like to point out that this is not a homework question, rather, it is a question from my own research/programming learning curve. I have code that I am working on, but would like to know how this analysis is carried out.

Raphael

2 Answers


There is no such formula. If there were, it would solve the halting problem: we could take an algorithm (Turing machine), compute its big-$\Theta$ complexity, then return true or false depending on whether it was $\Theta(\infty)$.

For Big-O (upper bounds), there's technically always a trivial formula, since every function is in $O(\infty)$. However, there's no general way to find a tight upper bound, nor to decide whether an algorithm has a bound tighter than $O(\infty)$.

That said, there are certainly tools and heuristics for determining these things, as well as limited models of computation for which the problem is decidable.

For example, if you can express your routine using recursion, often you can find a recurrence relation for the running time, which you can solve to find the Big-O time. However, none of these techniques will work all the time.

Joey Eremondi

There is no specific formula for calculating it, as jmite already mentioned. You have to realize that $\mathcal{O}$ notation describes how an algorithm's running time grows with the size of its input; it is an estimate of that growth rate, not an exact count of operations.
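For completeness, the formal definition these estimates rest on (the standard one, not stated explicitly in this answer) is:

```latex
f(n) \in O(g(n)) \iff \exists\, c > 0,\ n_0 \ \text{such that}\ f(n) \le c \cdot g(n) \ \text{for all}\ n \ge n_0
```

For example, $3n + 5 \in O(n)$, witnessed by $c = 4$ and $n_0 = 5$, since $3n + 5 \le 4n$ whenever $n \ge 5$. This is why constant factors and lower-order terms drop out below.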

For instance, let's say you had the following function:

int add(int a, int b)
{
  return a+b;
}

This function would be $O(1)$ because it runs in constant time. There are no loops, tree or list traversals (if you've gotten up to those topics already), etc. It's a simple single-instruction function.

Now suppose I modified that function as follows:

int redundantAdd(int a, int b)
{
  int c = a+b;
  c += 0;
  b+= c;
  return c;
}

you'll notice that I now have 3 more instructions than the previous snippet. However, it is still $O(1)$. Again, we are just trying to give a general idea of the amount of time spent on a process, not an exact number.

Now suppose I created another function, like the one below.

int summation(int array[], int length)
{
  int sum = 0;
  /* Note: sizeof can't recover the length here, because an array
     parameter decays to a pointer, so we pass the length explicitly. */
  for(int i = 0; i < length; ++i)
  {
    sum += array[i];
  }
  return sum;
}

This block of code is $O(n)$, where $n$ is the length of the array, since we execute roughly $n$ additions. We do not take into consideration the initialization of sum at the beginning; constant setup work like that doesn't affect the growth rate.

Now suppose I modified that code to add all the elements in a matrix

int summation(int **matrix, int width, int height)
{
  int sum = 0;
  for(int i = 0; i < width; ++i)
  {
    for(int j = 0; j < height; ++j)
    {
      sum += matrix[i][j];
    }
  }
  return sum;
}

This time we have $O(n^2)$ if width = height = $n$, since we essentially execute the inner loop body $n^2$ times. If the width and height differ, the count is width $\times$ height; again, we're only estimating here, and the square case just makes for easy representation. I won't bother you with logarithmic big-O notation yet, as you need to drill down on the key purpose of the notation first; doing so now would only confuse you.

audiFanatic