
I'm just starting my second year in computer science, and one of my classes briefly touched upon deterministic vs. non-deterministic algorithms. This got me thinking: is there any use for algorithms that return deterministic output for certain inputs, but non-deterministic output for other inputs? Say, something along the lines of

$$
\begin{array}{l}
s \leftarrow n \in \mathbb{R}\\
\text{if } s \le 5:\ \text{return } 2 \times s\\
\text{else } x \leftarrow \text{generateRandomNumber()};\ \text{return } s \times x
\end{array}
$$

This algorithm returns the input, doubled, if the input is less than or equal to five, and otherwise returns it multiplied by a random number (assuming the RNG returns a truly random number and is itself non-deterministic).
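
For concreteness, here is a rough Python sketch of what I mean (the name `mixed_algorithm` is just for illustration, and `random.random()` is only a pseudorandom stand-in for the truly random, non-deterministic generator assumed above):

```python
import random

def mixed_algorithm(s: float) -> float:
    """Double s if s <= 5; otherwise multiply s by a random factor."""
    if s <= 5:
        return 2 * s               # deterministic branch
    x = random.random()            # randomized branch (pseudorandom stand-in)
    return s * x

print(mixed_algorithm(3))    # always prints 6
print(mixed_algorithm(10))   # varies from run to run
```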

So once again, my main question: are algorithms of this nature useful in any practical application? Additionally, what would this class of algorithm be called? Semi-deterministic? Pseudo-deterministic?

Jules

2 Answers


First of all, your terminology is at odds with computability and complexity theory. What you are asking is whether it makes sense for an algorithm to have both randomized and non-randomized components. A good example is algorithms which have to be unpredictable (say for cryptographic reasons), but for the purpose of testing have to be predictable. In this case, we substitute a pseudorandom number generator for the true (say, physical) random number generator during testing.
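
As a rough illustration of this substitution (the deck-shuffling example and the particular generators are my own choice, not tied to any specific cryptographic application):

```python
import random
import secrets

def shuffled_deck(rng) -> list:
    """Shuffle a 52-card deck using whatever random source is passed in."""
    deck = list(range(52))
    rng.shuffle(deck)
    return deck

# Production: an unpredictable generator backed by the OS entropy source.
unpredictable = shuffled_deck(secrets.SystemRandom())

# Testing: a seeded pseudorandom generator makes the output reproducible.
assert shuffled_deck(random.Random(42)) == shuffled_deck(random.Random(42))
```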

That said, the class of algorithms you are describing is just that of randomized algorithms. Nothing forces a randomized algorithm to use randomization; it is just a possibility in the model, in the same way that a non-deterministic Turing machine can behave deterministically, or that a real number can be rational.

Yuval Filmus

Such algorithms certainly exist and have practical uses. Algorithms that are deterministic for some input instances and non-deterministic for others are still simply called non-deterministic. When I say "practical uses", it should be added that true non-determinism still does not exist in computers. Yet, the study of non-deterministic branching makes sense anyway, as it is a helpful computational model.

Note that many non-deterministic algorithms for inputs of arbitrary length will work in a deterministic manner on an empty input string. So that is one rather trivial example.

But coming to a more practical example, consider modern SAT (satisfiability) solving. Here, decisions are made seemingly non-deterministically (in reality, advanced heuristics make them) and are back-tracked whenever needed (so that a non-deterministic reasoning scheme is still complete on a deterministic computer). To make reasoning more efficient, so-called clause learning and unit propagation are applied, which reduce the number of decisions the algorithm has to make. There are simple input instances for which no non-deterministic decision ever has to be made, because unit propagation alone already solves the instance.
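
To make the unit-propagation part concrete, here is a toy Python sketch (a deliberately simplified illustration, not how a real solver such as MiniSat implements it; clauses are lists of non-zero integers in DIMACS style, where -3 means "not x3"):

```python
def unit_propagate(clauses):
    """Repeatedly assign the literals forced by unit clauses.

    Returns (assignment, remaining_clauses), or None on a conflict.
    """
    assignment = {}
    changed = True
    while changed:
        changed = False
        remaining = []
        for clause in clauses:
            # Skip clauses that are already satisfied by the assignment.
            if any(assignment.get(abs(l)) == (l > 0) for l in clause):
                continue
            # Remove literals that the assignment has falsified.
            clause = [l for l in clause if abs(l) not in assignment]
            if not clause:
                return None                  # empty clause: conflict
            if len(clause) == 1:             # unit clause forces a value
                assignment[abs(clause[0])] = clause[0] > 0
                changed = True
            else:
                remaining.append(clause)
        clauses = remaining
    return assignment, clauses

# Each unit clause forces the next assignment; no decision is ever needed.
print(unit_propagate([[1], [-1, 2], [-2, 3]]))   # ({1: True, 2: True, 3: True}, [])
```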

The SAT solving case may not convince everyone, since I am essentially arguing that a non-deterministic search algorithm with unit propagation is executed on a deterministic machine by making decisions and back-tracking. But since you are asking for practical applications of a non-deterministic algorithm (and we do not have "guessing computers"), I think this is a fair answer. It should be added, however, that the clause learning applied in SAT solving nowadays does not fit into this scheme.

DCTLib