5

I'm wondering whether there can be algorithms whose runtime decreases monotonically with the input size - just as a fun mental exercise. If not, can the claim be disproved? I haven't been able to come up with an example or a counterexample so far, and this sounds like an interesting problem.

P.S. Something like $O(\frac{1}{n})$, I guess (if it exists)

stoic-santiago
  • 423
  • 4
  • 7

4 Answers

10

Try brute-force search for a cryptographic key. The more of the key you are given to start with, the less you have to search for. Admittedly, that trend stops at the limit of the key size (but the runtime is still monotonically non-increasing), and there are probably other examples in the field of exhaustive search where the more input data you have, the easier it is to prune branches of the search tree. A sketch of the idea follows below.
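
A minimal sketch of this idea (the names and the repeating-key XOR stand-in for a real cipher are my own illustrative assumptions): the more key bytes you are given, the fewer candidates remain, so the search loop shrinks as the input grows.

from itertools import product

KEY_LEN = 3  # toy key length in bytes

def encrypt(plaintext, key):
  # Toy "cipher": repeating-key XOR, standing in for a real algorithm.
  return bytes(b ^ key[i % len(key)] for i, b in enumerate(plaintext))

def crack(ciphertext, plaintext, known_prefix):
  # Brute-force only the unknown suffix of the key: the loop runs over
  # 256 ** (KEY_LEN - len(known_prefix)) candidates, so the runtime
  # decreases monotonically as the known part of the key grows.
  unknown = KEY_LEN - len(known_prefix)
  for guess in product(range(256), repeat=unknown):
    key = known_prefix + bytes(guess)
    if encrypt(plaintext, key) == ciphertext:
      return key
  raise ValueError("no key found")

secret_key = b"\x13\x37\xbe"
ct = encrypt(b"attack at dawn", secret_key)
print(crack(ct, b"attack at dawn", secret_key[:1]))  # searches 256**2 keys
print(crack(ct, b"attack at dawn", secret_key[:2]))  # searches 256**1 keys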

cinut
  • 101
  • 2
8

Well, an algorithm with runtime $O(0)$ fulfils the criterion: it does nothing at all. As soon as your algorithm performs at least one operation on every execution, it has a runtime cost $t(n) > 0$. Since $$t(n)\in O(1/n) \Leftrightarrow \exists c,n_0\ \forall n >n_0: t(n) \leq c\cdot\frac 1 n,$$ an algorithm with constant runtime $t(n) = c_0 > 0$ is not in $O(1/n)$: the bound $c_0 \leq c/n$ fails for every $n > c/c_0$. This means that, under a cost measure where every operation costs at least $1$, only the empty algorithm has runtime $O(1/n)$. If, however, you declare that checking an if-condition costs zero, you can build algorithms whose runtime cost is $0$ once a certain input size is reached, e.g.:

def algo(n):
  if n < 100:
    do_something_very_expensive()  # placeholder for an arbitrarily expensive computation

If condition checking is declared a zero-cost operation, this algorithm has runtime $O(0)$, and thus also runtime $O(1/n)$, even though it performs a very expensive operation for the first hundred values.

Generally, a decreasing complexity is not very meaningful, because you can always express it as either $O(1)$ or $O(0)$ (e.g. $O(1/n + 10) = O(1)$).

plshelp
  • 1,679
  • 6
  • 15
1

Just to mention something in addition to the other (correct) answers: such complexities can arise when the runtime of the algorithm depends on more than one parameter, or if one does not care about the input size. For example, finding the minimum of $n$ elements is clearly in $O(n)$; however, if you do this in parallel using $p$ processors, the complexity is in $O(\frac{n}{p} + \log{p})$, which decreases in $p$ (a sketch follows below).
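
A minimal sketch of this (the function name and the use of Python's process pool are my own assumptions, not part of the answer): each of the $p$ workers scans a chunk of about $n/p$ elements, and the partial minima are then combined.

from concurrent.futures import ProcessPoolExecutor

def parallel_min(data, p):
  # Each worker scans roughly n/p elements: the O(n/p) part.
  chunk = (len(data) + p - 1) // p
  chunks = [data[i:i + chunk] for i in range(0, len(data), chunk)]
  with ProcessPoolExecutor(max_workers=p) as pool:
    partial_minima = list(pool.map(min, chunks))
  # Combining the p partial results is O(p) here; a tree-shaped
  # reduction would bring this step down to O(log p).
  return min(partial_minima)

if __name__ == "__main__":
  print(parallel_min(list(range(10**6, 0, -1)), p=4))  # -> 1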

Another instance where this phenomenon can arise is when a precision parameter $\epsilon$ is present, which you usually want to be as close to $0$ as possible. In such cases, $O(\epsilon^{-1})$ is actually "worse" than $O(\epsilon)$, because you want the error to be as small as possible, and the cost grows as $\epsilon$ shrinks.
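
A small sketch of the $O(\epsilon^{-1})$ case (the integrand and step-size scheme are illustrative assumptions only): approximating an integral with a Riemann sum of step size $\epsilon$ needs about $1/\epsilon$ function evaluations, so halving $\epsilon$ doubles the work.

def riemann_sum(f, a, b, eps):
  # The loop performs about (b - a) / eps evaluations of f,
  # so the runtime is O(1/eps): the smaller eps, the more work.
  total = 0.0
  x = a
  while x < b:
    total += f(x) * eps
    x += eps
  return total

print(riemann_sum(lambda x: x * x, 0.0, 1.0, eps=1e-4))  # roughly 1/3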

Fabian
  • 21
  • 2
0

Just reading in the (full) input already takes $\Theta(n)$ time for input of size $n$, so any algorithm that has to look at all of its input cannot have decreasing runtime.

vonbrand
  • 14,204
  • 3
  • 42
  • 52