
An algorithm contains a calculation with worst-case time complexity $T(x)=O(x^2)$ that outputs solutions $y$ and $z$ for input $x$ (so the algorithm's overall worst-case time complexity is greater than quadratic):

for (i = 1; i <= x; i++) {
    for (j = 1; j <= i; j++) {
        // Some quadratic-time calculation of x that gives y and z
    }
}

A different algorithm, using successive loops, is created for the same $x$ and outputs the same $y$ and $z$:

for (i = 1; i <= x; i++) {
    // Some quadratic-time calculation of x that gives y
}
for (i = 1; i <= x; i++) {
    // Some quadratic-time calculation of x that gives z
}

Is the time complexity of the second algorithm $T_s(x)=O(x^2)$?

Raphael
Jeff

3 Answers


I think that the first algorithm's running time is actually $\mathcal{O}(x^4)$.

  • The outer for loop executes exactly $x$ times,
  • the inner for loop executes at most $x$ times during each of those $x$ iterations of the outer loop.

In fact, the inner loop executes exactly $\frac{x(x+1)}{2}$ times, and $\frac{x(x+1)}{2} = \frac{x^2 + x}{2} = \mathcal{O}(x^2)$, since our asymptotic notation allows us to essentially ignore the constant factors.

So we already have $\mathcal{O}(x^2)$ iterations. Now consider the body of the inner for loop, which you say is a quadratic-time calculation of $x$. I read this as follows: the complexity of the inner loop's body is a quadratic function of $x$ $\rightarrow \mathcal{O}(x^2)$. My answer is based largely upon this assumption.

If my assumption is correct, then a time complexity of $\mathcal{O}(x^4)$ easily follows from this reasoning: the outer and inner for loop collectively produce $\mathcal{O}(x^2)$ iterations of some procedure that takes $\mathcal{O}(x^2)$ steps to complete. An extremely informal (read: not quite right, but right enough to demonstrate a point) mathematical argument for this might be:

($x$ outer loop iterations) $\cdot$ ($x$ inner loop iterations) $\cdot$ ($x^2$ loop body steps) $= x^4$ total steps.

With the second algorithm, the same reasoning can be used. Again, we have some procedure, which takes $\mathcal{O}(x^2)$ steps to run, which is being executed $x$ times. This gives us $\mathcal{O}(x^3)$ steps in total. Since we have two loops of this form, we can conclude that algorithm 2 has a time complexity of:

$\mathcal{O}(x^3) + \mathcal{O}(x^3) = \mathcal{O}(x^3)$

Note that in this case we have two consecutive loops that, asymptotically speaking, do essentially the same amount of work. In reality it might be that our first loop, which calculates $y$, performs a constant factor more or fewer steps than our second, which calculates $z$, but here that doesn't matter, since we're focusing on the rate of growth of our running time. One way of thinking about it is as follows:

$\mathcal{O}(x^3) + \mathcal{O}(x^3) = \mathcal{O}(x^3 + x^3) = \mathcal{O}(2 \cdot (x^3)) = \mathcal{O}(x^3)$.

Here we can drop the 2, since it's a constant factor.


At first look, Algorithm 2 seems better in running time, provided that the body inside the loops takes the same amount of time per iteration in both algorithms. That is, if I take the loop body to run in quadratic time in each iteration for both algorithms, then Algorithm 1 takes $O(x^4)$ time and Algorithm 2 takes $O(x^3)$ time (as suggested in the other answers).

But what we see at first is not always the whole story. There is also amortised analysis, which should be considered when the loop body takes worst-case time for a few iterations and a very different asymptotic time for the large remainder of iterations.

(If we had to consider amortised analysis here, more information about the operations within the loop would be needed to be sure.)

If more detailed information on amortised analysis is needed, you can refer to Chapter 17, "Amortized Analysis", in the book CLRS, Introduction to Algorithms. It'll be an awesome read.

Freemn

The time complexity of the second algorithm would be $T_s(x)=O(x)$. This is because the algorithm's loops run a total of $2x$ times, which is $O(x)$.

The first algorithm's inner loop would run $1$ time on the first outer iteration, $2$ on the second, and so on, so you get:

$\text{Algorithm 1} = 1 + 2 + \dots + (x-1) + x = O(x^2)$

The difference between the two algorithms is as follows:

  • Algorithm 1: will run the outer loop $x$ times, and the inner loop will run $i$ times on the $i$-th outer iteration. This is equivalent to $O(x^2)$, since $\frac{x(x+1)}{2} = O(x^2)$.

  • Algorithm 2: will always run the first loop $x$ times and the second loop $x$ times, giving $O(2x) = O(x)$.

That said, your stated complexity is still correct by the definition of $O$-notation: if Algorithm 2's running time is $g(x) = O(x)$, then also $g(x) = O(x^2)$, since $O(x) \subseteq O(x^2)$.
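A quick way to see that containment (my own wording, a standard argument from the definition of $O$):

```latex
% If g(x) \le c\,x for all x \ge x_0, then for all x \ge \max(x_0, 1)
% we have x \le x^2, hence
%     g(x) \le c\,x \le c\,x^2,
% so the same constant c witnesses g(x) = O(x^2).
```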

This has to do with the actual definition of O-notation. Take a look at this link for more information.

Edited to reflect quadratic function in body loop:

Where the quadratic body takes some amount of work (the exponent is $2$ for quadratic, but the same reasoning works for any exponent $k \geq 1$):

$\text{Work per body execution} = O(n^k)$

$\text{Algorithm 1 (loops only, without the body)} = O(n^2)$

$\text{Algorithm 1 complete} = n^k \cdot n^2 = n^{k+2} = O(n^{k+2})$

For Algorithm 2, we can assume

$\text{Algorithm 2} = n^k \cdot n + n^k \cdot n = n^{k+1} + n^{k+1} = 2n^{k+1} = O(n^{k+1})$

So in your example $k = 2$, and the second algorithm would actually be $O(n^3)$ and not $O(n^2)$.

SDhaliwal