
This might be very basic, but I am interested in evaluating the algorithmic complexity of an estimator of the form:

$$\hat{\theta} = \text{argmin}_{\theta} \;\; Q_n (\theta)$$

where $Q_n(\theta)$ denotes some objective function of interest (e.g., the negative log-likelihood) computed on a sample of length $n$. $\hat{\theta}$ is assumed to be obtained through some numerical optimization method (typically a stepwise procedure). Under this setting, how could I compute the algorithmic complexity of $\hat{\theta}$?
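To make this concrete, here is a minimal sketch of the kind of procedure I have in mind (the Gaussian negative log-likelihood and the choice of `scipy.optimize.minimize` with Nelder–Mead are illustrative assumptions on my part, not part of the question):

```python
import numpy as np
from scipy.optimize import minimize

def Q_n(theta, x):
    """Toy objective: Gaussian negative log-likelihood; one evaluation is O(n)."""
    mu, log_sigma = theta  # parametrize sigma = exp(log_sigma) so that sigma > 0
    return np.sum(log_sigma + 0.5 * ((x - mu) / np.exp(log_sigma)) ** 2)

x = np.random.default_rng(0).normal(loc=2.0, scale=1.5, size=10_000)  # n = 10^4
res = minimize(Q_n, x0=np.array([0.0, 0.0]), args=(x,), method="Nelder-Mead")

print(res.x)     # theta_hat = (mu_hat, log(sigma_hat))
print(res.nfev)  # number of Q_n evaluations the procedure actually used
```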

I am really not sure if this makes sense, but here is how I approach this problem:

  • Suppose that the numerical procedure used to compute $\hat{\theta}$ requires $S$ steps to converge.
  • Assume that there exists a deterministic function, say $f(p)$, where $p$ denotes the dimension of $\theta$, such that $S \leq f(p)$ and $f(p) < \infty$ for $p < \infty$.
  • Assume that one evaluation of $Q_n(\theta)$ costs $\mathcal{O}(g(n))$, for all $\theta$.
  • Then $\mathcal{O}(\hat{\theta})$ is simply given by

$$\mathcal{O}(\hat{\theta}) = \mathcal{O}\big(S \cdot g(n)\big) = \mathcal{O}(g(n)),$$

since $p$ is assumed to be fixed (and bounded). This seems really too simple... I sketch a small empirical sanity check below; any comments would be more than welcome. Thank you very much.
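Here is that sanity check, reusing the toy Gaussian `Q_n` from above (whether the step count really stays stable as $n$ grows is exactly the kind of assumption I am unsure about):

```python
import time
import numpy as np
from scipy.optimize import minimize

def Q_n(theta, x):
    """Same toy objective as above: Gaussian negative log-likelihood, O(n) per call."""
    mu, log_sigma = theta
    return np.sum(log_sigma + 0.5 * ((x - mu) / np.exp(log_sigma)) ** 2)

for n in (10_000, 100_000, 1_000_000):
    x = np.random.default_rng(0).normal(loc=2.0, scale=1.5, size=n)
    t0 = time.perf_counter()
    res = minimize(Q_n, x0=np.array([0.0, 0.0]), args=(x,), method="Nelder-Mead")
    elapsed = time.perf_counter() - t0
    # If the number of evaluations stays roughly constant across n (fixed p),
    # wall time should grow about linearly, matching O(S * g(n)) = O(g(n)).
    print(f"n={n:>9}  evaluations={res.nfev:>4}  time={elapsed:.3f}s")
```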

dskjhsk

1 Answer


The bound $\mathcal{O}(g(n) \cdot S(p))$ represents only the cost of evaluating $Q_n$ once per step of the "numerical optimization method"; you ignore all the other costs the method incurs.

Without looking at the whole algorithm, little more can be done.

Note: I very deliberately replaced $S$ with $S(p)$. That is because there is no reason, per se, to believe that $S$ is a constant. You need to be more careful about setting up your cost functions; our reference question may be helpful.
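For instance, if the method happened to be Newton's method (an illustrative assumption; the question does not specify the method), each step does considerably more work than one evaluation of $Q_n$. A minimal sketch of the per-step cost:

```python
import numpy as np

def newton_step(theta, grad_fn, hess_fn):
    """One step of (hypothetical) Newton's method; none of this work is
    captured by counting evaluations of Q_n alone."""
    g = grad_fn(theta)                    # gradient: one pass over the sample, O(n * p)
    H = hess_fn(theta)                    # Hessian: all second derivatives, O(n * p^2)
    return theta - np.linalg.solve(H, g)  # step direction: linear solve, O(p^3)
```

Even ignoring line search and convergence checks, the total cost is then $\mathcal{O}\big(S(p) \cdot (np^2 + p^3)\big)$, which is not the same as $\mathcal{O}(S(p) \cdot g(n))$.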

Raphael