
This is the question:

Let $A[1,\ldots,n]$ be an array storing a bit ($1$ or $0$) at each location, and let $f(m)$ be a function whose time complexity is $\Theta(m)$. Consider the following program:

counter = 0;
for (i = 1; i <= n; i++)
{
    if (A[i] == 1)
        counter++;      /* extend the current run of 1s */
    else
    {
        f(counter);     /* costs Theta(counter) */
        counter = 0;    /* reset for the next run */
    }
}


The options (for the program's time complexity) are:

(A) $\Omega(n^2)$
(B) $\Omega(n\log n)$ and $O(n^2)$
(C) $\Theta(n)$
(D) $O(n)$

The solution says: "Since it contains only one loop, it computes with linear complexity; hence the complexity will be $\Theta(n)$."

My doubt is: what if the time complexity of the function $f(m)$ were given as $\Theta(2^m)$, or some other function that grows exponentially? Would the solution given in the book still be correct in that case?


1 Answer


This can be handled by amortized analysis, but a direct argument is even easier. Write $A$ as $A = 1^{n_1}\,0\,1^{n_2}\,0\cdots 0\,1^{n_k}$, where the $n_i \geq 0$ are the lengths of the maximal runs of 1s. You can check that the function $f$ is called exactly with the arguments $n_1, n_2, \ldots, n_{k-1}$. Since $n_1 + \cdots + n_k = n - (k-1)$, in particular $n_1 + \cdots + n_{k-1} \leq n$, and so the total time spent running $f$ is $O(n_1 + \cdots + n_{k-1}) = O(n)$. The time spent executing the rest of the program is $\Theta(n)$, and so the total running time is $\Theta(n)$.
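As a sanity check, here is a minimal sketch (my own, not from the original answer) that models $f(m)$ as costing exactly $m$ unit steps, runs the program on random bit arrays of increasing size, and reports the total steps per element; under this assumption the ratio stays bounded by a small constant, as the argument predicts.

#include <stdio.h>
#include <stdlib.h>

static long steps = 0;              /* global counter of "unit steps" */

static void f(long m)
{
    for (long j = 0; j < m; j++)
        steps++;                    /* assumption: f(m) costs exactly m steps */
}

int main(void)
{
    for (long n = 1000; n <= 1000000; n *= 10) {
        char *A = malloc(n);
        if (A == NULL)
            return 1;
        for (long i = 0; i < n; i++)
            A[i] = rand() % 2;      /* random bit array */

        steps = 0;
        long counter = 0;
        for (long i = 0; i < n; i++) {
            steps++;                /* one unit per loop iteration */
            if (A[i] == 1) {
                counter++;
            } else {
                f(counter);
                counter = 0;
            }
        }
        /* total work is at most 2n: n loop steps plus at most n inside f */
        printf("n = %7ld   total steps = %8ld   steps/n = %.2f\n",
               n, steps, (double)steps / (double)n);
        free(A);
    }
    return 0;
}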

If we did want to use amortized analysis, the idea would be as follows. Suppose for simplicity that $f(m)$ runs in time $m$ (exactly). Each time the counter is incremented, we set aside another "time unit", and withdraw the time units set aside when $f$ is called. You can check that we always have enough time units in store. Under this scheme of accounting, incrementing the counter takes time $O(1)$, and calling $f(m)$ takes no time; this can only overestimate the actual running time (since we might have leftover time units). Therefore the total running time is $O(n)$.
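For concreteness, here is a small sketch of that accounting scheme (again under the simplifying assumption that $f(m)$ costs exactly $m$ units): every increment of the counter deposits one time unit, every call to $f(\texttt{counter})$ withdraws $\texttt{counter}$ units, and an assertion checks that the balance never goes negative.

#include <assert.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const long n = 100000;
    long credits = 0;               /* banked time units */
    long counter = 0;

    srand(42);                      /* fixed seed: reproducible random bits */
    for (long i = 0; i < n; i++) {
        if (rand() % 2 == 1) {
            counter++;
            credits++;              /* deposit one unit per increment */
        } else {
            credits -= counter;     /* f(counter) withdraws counter units */
            assert(credits >= 0);   /* the deposits always cover the cost */
            counter = 0;
        }
    }
    /* any leftover credits correspond to a trailing run of 1s on which
       f was never called, which is why the scheme only overestimates */
    printf("leftover credits at the end: %ld\n", credits);
    return 0;
}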

Yuval Filmus