    i <-- 2
    while (i < n)
        someWork(...)
        i <-- power(i, 2)
    done
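For concreteness, here is a direct Python transcription of the pseudocode, a sketch in which `someWork` is a hypothetical stub standing in for any O(n) subroutine (the original doesn't say what it does):

```python
def someWork(n):
    # Hypothetical stand-in for the O(n) subroutine: does a linear pass.
    total = 0
    for _ in range(n):
        total += 1
    return total

def run(n):
    trace = []            # record each value of i the loop body sees
    i = 2
    while i < n:
        someWork(n)       # O(n) work per iteration
        trace.append(i)
        i = i ** 2        # power(i, 2)
    return trace

print(run(100))           # the loop body sees i = 2, 4, 16; then 256 >= 100
```

Printing the trace makes it easy to see how quickly `i` grows toward `n`.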

Given that someWork(...) is an O(n) algorithm, what is the worst-case time complexity of this code?

I've found this question answered on this site with a solution of O(n log n), but I don't quite understand why. I know that the power function is O(log n), but I don't understand why the overall Big-O of the loop becomes O(n log n) instead of just O(n). Can someone please explain this to me?

2 Answers


The loop runs O(log n) times, since it continues until i >= n and each iteration sets i := i^2.

Assuming someWork() takes O(n), the total running time is O(n log n), since you're performing a task that takes O(n) a total of O(log n) times (once per loop iteration).

Nir Alfasi

You're missing a crucial component of Big-O analysis. The question it asks is: given some input size n and some code p, how does the maximum running time of p(n) grow with respect to n; that is, what bounds p(n) in terms of n? When tracing code, keep a running tally.

Here, we see that i is initialized to 2. The code then enters a while loop. While i < n, we perform some work that costs O(n), since someWork(...) has a maximum time complexity of n, that is, linear time. We add n to our tally on each pass.

The next line assigns i the value of i raised to the power of 2. This variable i directly drives the running time of the loop, since comparing i against n governs when the loop is entered and exited. We now know that the loop will execute log(n) times.

Putting it all together: since someWork(...) takes linear time, n, and we perform someWork(...) log(n) times, the tally for the Big-O analysis of the function comes out to n*log(n).
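The tally idea can be sketched in Python. The cost model below is an assumption made for illustration: each call to the hypothetical someWork is charged n units of work, and the squaring of i is charged nothing. Whatever the iteration count turns out to be, the grand total is always that per-iteration cost times the number of iterations:

```python
def tally(n):
    # Count work units under a simple cost model:
    # someWork(...) is assumed to cost n units; i := i^2 is free.
    iterations = 0
    work_units = 0
    i = 2
    while i < n:
        work_units += n   # charge one someWork(...) call
        iterations += 1
        i = i ** 2
    return iterations, work_units

its, units = tally(1_000_000)
# total work = (cost per iteration) x (number of iterations)
assert units == its * 1_000_000
```

Running the tally for a few values of n and comparing the iteration count against n is a good way to convince yourself how the loop variable's growth controls the total cost.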

Does this make more sense now?

alvonellos