
Given some function, and assuming no concern for the time needed to compute its values over its domain, when might it be preferable to compute and tabulate the function's values in advance, and then retrieve them from the table rather than compute them on demand?

For example, if the domain of $f(x)=x^2$ were sufficiently small, why, if ever, would one wish to compute all of its values and then, for a given $x$, retrieve the stored value of $f(x)$?

Raphael
gallygator

3 Answers


This sort of computation can be used for optimization purposes.

A classic simple example is computing the Fibonacci sequence. Apart from the base cases, $f(n) = f(n - 1) + f(n - 2)$, but then $f(n+1) = f(n) + f(n-1)$ and $f(n+2) = f(n+1) + f(n)$, and so on. So the value of $f(n)$ for each $n$ is used repeatedly. The effect is more dramatic if you use the naïve recursive algorithm.

This computation can be significantly optimized if we keep the values of $f$ that we have already computed and simply access the result as we need it.

If all we want to do is compute $f(n)$ for some $n$ once, this is already an observable improvement in the asymptotic running time of the algorithm (and it shows up very quickly in practice too). If we need to repeatedly use values of $f$ in a larger program, the improvement is even more effective.
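As a minimal sketch (in Python; the names `fib_naive` and `fib_memo` are illustrative, not from the original answer), compare the naïve recursion with a memoized version that keeps already-computed values in a cache:

```python
from functools import lru_cache

# Naïve recursion: the same values f(n-1), f(n-2), ... are recomputed
# over and over again, giving exponential running time.
def fib_naive(n):
    if n < 2:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

# Memoized version: each value is computed once and then looked up,
# giving linear time at the cost of linearly more memory.
@lru_cache(maxsize=None)
def fib_memo(n):
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)
```

Here `fib_memo(100)` returns immediately, while `fib_naive(100)` would take an astronomically long time.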

Doing things like this forms part of a technique called memoization, which is closely related to dynamic programming.

Luke Mathieson

If your domain $X$ is ordered and small, the values in $f(X)$ are small as well and $f$ is expensive to compute, then there's a simple reason: efficiency.

For instance, if $X$ is a range of natural numbers, you can store the image of $f$ in an array and obtain $O(1)$-time "computation" of $f$ at the cost of $\Theta(|X|)$ memory (assuming that the size of $f(x)$ is bounded by some constant).
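A rough sketch of this (in Python; the function `f`, the helper `tabulate`, and the domain size are illustrative assumptions, not part of the answer):

```python
# Precompute f over a small ordered domain X = {0, ..., n-1} once,
# paying Theta(|X|) memory for O(1) lookups afterwards.
def tabulate(f, n):
    return [f(x) for x in range(n)]

# A stand-in for a function that is expensive to compute.
def f(x):
    return sum(i * i for i in range(x))

table = tabulate(f, 1000)

# Every later use of f is now a single array access instead of a recomputation.
assert table[42] == f(42)
```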

Of course, this only pays off if you need values of $f$ often, $f$ is particularly expensive, and/or you can access the values quickly¹. A compromise can sometimes be to compute the values lazily, but you'd still have to pay with memory up-front.

Note how the parameters of the scenario that determine how you resolve this trade-off can change over time:

  1. In times without (universal) computers at every workstation, pre-computed tables for often-used functions were indispensable. Logarithm tables, for example, were used in engineering disciplines.
  2. With ever-faster computers in most offices, storage (both analogue and digital) became the more sensitive resource. Efficient algorithms made it possible to re-compute values quickly enough.
  3. Today, fast storage is cheaper than time and energy. Big players keep their data (which they concurrently update, all the time) in memory at all times in order to fulfill client queries as fast as possible. On the other end of the spectrum, mobile devices have to use their limited amount of energy conservatively, so storing (or pre-computing in times of energy abundance) beats re-computing.

So it's the same as always: inspect your situation, define your priorities, and pick the tool for the job.


  1. Consider the memory hierarchy. If you traverse the values and they lie unordered on the heap, you are in for a hellpit of cache misses, which can nullify the advantage (or turn it around on you).
Raphael

This is actually a centuries-old technique that was largely killed off by computers, but it could conceivably be revived, or may still survive in some technical niches. It is even known to have been used by ancient Greek mathematicians and physicists.

The question asks why one would prefer to tabulate the results of a function, in a table indexed by its parameters, so as to replace computation by table lookup.

Of course, the minimum requirement is that table lookup be cheaper than computing the result, but that is often the case with properly chosen data structures.

The answer given by Luke Mathieson describes the best known case of function tabulation, which is memoisation, i.e. simply the preservation in a table of results that had to be computed previously, in case they are needed again.

Raphael argues that systematically precomputing over a small domain can make computations more efficient when the values are needed often, though, since he also proposes computing the table lazily, the difference from the previous answer is not entirely clear.

In a comment, I also suggested filling the table in advance, even with values not yet needed, when there is free/cheap computer time available for it.

But all this seems somewhat restricted to a single program, which limits the usefulness of the effort.

However, the problem should probably be considered in a more general context, and it has been in the past, before computers existed.

Complex calculations are an ancient problem in mathematics, science and engineering, and for a long time they were done by hand. These computations were used for all kinds of purposes, including astronomical (and astrological) predictions (including the discovery of planets), computing tides, cartography and triangulation, and compound interest. In particular, logarithms were used to replace multiplication by addition, which required applying the exponential to the result afterwards.

All these computations made use of hard-to-compute functions, such as trigonometric functions, logarithms, and others. So it became a business (and a very tedious job) to produce precise tables of these functions, which were printed as books and sold to engineers and anyone else who needed them to carry out calculations. These were functions on the reals, and the tables were designed to reach a given precision, with interpolation techniques used to improve it. The tables were also designed so that they could be used reversibly: the same table served for the logarithm and for the exponential. These books were extremely valuable tools that one kept around all the time. And they survived until the 1980s, when microcomputers and, especially, sophisticated hand-held calculators became available, i.e. not much more than twenty to thirty years ago.
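To make that workflow concrete, here is a small, purely illustrative sketch (in Python; the table resolution and the names are my own assumptions) of how multiplication was reduced to two lookups and an addition, with the same table read forwards (number → logarithm) and backwards (logarithm → number):

```python
import math

# A coarse base-10 logarithm table for mantissas 1.00, 1.01, ..., 9.99,
# rounded to four decimal places, much like a printed table.
MANTISSAS = [x / 100 for x in range(100, 1000)]
LOGS = [round(math.log10(m), 4) for m in MANTISSAS]

def log_of(a):
    """Forward use of the table: number -> logarithm."""
    return LOGS[MANTISSAS.index(round(a, 2))]

def antilog_of(s):
    """Reverse use of the same table: logarithm -> nearest tabulated number."""
    i = min(range(len(LOGS)), key=lambda j: abs(LOGS[j] - s))
    return MANTISSAS[i]

def multiply_by_table(a, b):
    # Multiplication becomes: two lookups, one addition, one reverse lookup.
    return antilog_of(log_of(a) + log_of(b))

print(multiply_by_table(2.0, 3.5))  # ~7.0, up to the table's precision
```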

Using these tabulated functions efficiently was part of the standard engineering curriculum in universities and engineering schools.

Another way of tabulating functions was in graphical form, called a nomogram, nomograph, chart or abaque. These tabulated a variety of complex functions used in the exact sciences, whether computed or obtained experimentally.

The formerly ubiquitous slide rule of engineers was yet another way of tabulating functions.

It could be that there are still useful functions that are too costly to compute with good precision, even on a standard computer. Then it can make sense to have them computed on powerful machines, tabulated, and made accessible, either on some memory device or over the Internet. But I am not close enough to that kind of scientific work to know, and my search on the Internet was not fruitful.

babou