
From what I know of complexity measures in CS, they are aimed at rather large problems. With today's computing power, most people don't care about comparing the complexity of small problems, since they would all be solved in about the same time.

Yet I happen to be interested in cost measures for small problems. For instance, although both 12 + 14 and 123503 + 589034 can be solved in the blink of an eye by today's computers, there seems to be a sense in which 123503 + 589034 is still more "costly" than 12 + 14.

So my question is: do you know of any cost measures that would be appropriate for such small problems?

Note 1: I am interested in a rather general answer that covers all kinds of problems; addition is just an example.

Note 2: the best I have found so far is Kolmogorov complexity, but from what I understand of it, it seems awfully hard to operationalize. First, it depends on the reference language, and second, once a language is chosen, it seems very hard to prove that a description of a string is of minimal length.

  • Do you know of any measure which would be easier to implement?
  • Am I missing something about Kolmogorov's complexity and is it easier to implement than I suggest?

1 Answer


When we talk about complexity in computer science, we are usually talking about asymptotic complexity. We are interested in characterizing the complexity of a process as the size of its inputs increases. This is useful because it abstracts away implementation-specific characteristics, such as processor speed, architecture, or available memory. So while asymptotic analysis does, in this sense, concern itself with "large problems," its intent is not so much to treat large problems as it is to provide a meaningful metric for comparing algorithms in terms of time and space efficiency.
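To make that concrete, here is a minimal sketch (my own illustration, not part of any standard library) of what asymptotic analysis abstracts over: we count elementary operations rather than measuring wall-clock time, and it is the growth of those counts with n that the analysis characterizes.

```python
# Rough illustration: count elementary comparisons instead of timing.
# Asymptotic analysis describes how these counts grow with n,
# independent of processor speed or memory.

def count_linear_search(xs, target):
    """Return (found, comparisons) for a straightforward scan: O(n)."""
    comparisons = 0
    for x in xs:
        comparisons += 1
        if x == target:
            return True, comparisons
    return False, comparisons

def count_pairwise_duplicates(xs):
    """Return (has_duplicate, comparisons) for an all-pairs check: O(n^2)."""
    comparisons = 0
    for i in range(len(xs)):
        for j in range(i + 1, len(xs)):
            comparisons += 1
            if xs[i] == xs[j]:
                return True, comparisons
    return False, comparisons

for n in (10, 100, 1000):
    xs = list(range(n))
    print(n, count_linear_search(xs, -1)[1], count_pairwise_duplicates(xs)[1])
```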

Kolmogorov complexity does not, to my knowledge, find applications in the practical analysis of programs or computational processes. It is an information-theoretic measure concerned with the minimal length of a description of data in a computational system. As for using it in any sort of "operational" analysis like the one you've proposed, the big roadblock is that the Kolmogorov complexity of an arbitrary string is not, in general, computable!
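(If you only want something in that spirit, one crude stand-in people sometimes use, and which I offer here as my own suggestion rather than a real substitute, is the length of the string after running it through an off-the-shelf compressor. It is only an upper-bound proxy, it inherits a dependence on the choice of compressor just as Kolmogorov complexity depends on the reference language, and for very short strings the compressor's fixed overhead dominates.)

```python
import zlib

def compressed_length(s: str) -> int:
    """Length of the zlib-compressed encoding of s: a crude upper-bound
    proxy for descriptive complexity, NOT the true Kolmogorov complexity.
    For very short strings the fixed zlib overhead dominates the result."""
    return len(zlib.compress(s.encode("utf-8"), 9))

print(compressed_length("ab" * 50))        # highly repetitive: compresses a lot
print(compressed_length("12 + 14"))
print(compressed_length("123503 + 589034"))
```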

It's difficult to pin down exactly what you're asking for, but it sounds more akin to counting CPU cycles required to execute a series of instructions. Back in the old days, when assembly language on punch-cards was the state-of-the-art, this was a fairly common technique used for optimization.

As it applies to your example, adding two numbers would involve loading them, bitwise, into registers, performing the addition, and storing the result. If the addends required more bits to represent than the size of the registers, you'd have to do it piecewise, loading and storing intermediate results as needed. You can count the cycles needed to execute tasks of varying 'difficulties' (adding two 1-byte numbers vs. adding two 2-byte numbers, for example) and compare them to give some sense of how much more or less 'complicated' the execution of a given sequence of instructions is.
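Here is a toy sketch of that idea in Python (my own, with made-up per-step "cycle" costs that don't correspond to any real CPU): split the operands into register-sized words, add them word by word with carries, and charge a fixed cost for each load, add, and store. Wider operands take more words and therefore cost more "cycles," even though the wall-clock time is negligible either way.

```python
# Toy cost model: word-by-word addition with assumed per-step costs.
WORD_BITS = 8                 # pretend our registers hold 8-bit words
LOAD, ADD, STORE = 1, 1, 1    # assumed cost of each step, in "cycles"

def add_with_cost(a: int, b: int):
    """Return (a + b, cycles) for non-negative ints under the toy model."""
    base = 1 << WORD_BITS
    result, carry, shift, cycles = 0, 0, 0, 0
    while a or b or carry:
        wa, wb = a % base, b % base          # low word of each operand
        cycles += 2 * LOAD                   # load both words
        total = wa + wb + carry
        cycles += ADD                        # one word-sized addition
        result |= (total % base) << shift
        carry = total // base                # carry into the next word
        cycles += STORE                      # store the partial result
        a, b, shift = a // base, b // base, shift + WORD_BITS
    return result, cycles

print(add_with_cost(12, 14))            # narrow operands: few "cycles"
print(add_with_cost(123503, 589034))    # wider operands: more "cycles"
```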

It's not complexity in the traditional CS sense, but it would be a quantifiable, implementable metric. Donald Knuth has a program here that counts the cycles required to execute MMIX programs, as an example of such an implementation.

Qalnut