22

When considering machine models of computation, the Chomsky hierarchy is normally characterised by (in order): finite automata, push-down automata, linear bounded automata and Turing machines.

For the first and last levels¹ (regular languages and recursively enumerable languages), it makes no difference to the power of the model whether we consider deterministic or nondeterministic machines, i.e. DFAs are equivalent to NFAs, and DTMs are equivalent to NTMs².
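
For concreteness, here is a minimal sketch of the classic subset construction that underlies the DFA/NFA equivalence; the toy NFA at the bottom is my own illustration, and ε-moves are omitted for brevity.

    from itertools import chain

    def determinize(nfa_delta, start, accepting):
        """Subset construction: each DFA state is a set of NFA states.

        nfa_delta maps (state, symbol) -> set of successor states.
        """
        alphabet = {sym for (_, sym) in nfa_delta}
        start_set = frozenset([start])
        dfa_delta, seen, todo = {}, {start_set}, [start_set]
        while todo:
            current = todo.pop()
            for sym in alphabet:
                # Union of everywhere the NFA could go from here on sym.
                target = frozenset(chain.from_iterable(
                    nfa_delta.get((q, sym), ()) for q in current))
                dfa_delta[(current, sym)] = target
                if target not in seen:
                    seen.add(target)
                    todo.append(target)
        # A subset is accepting iff it contains some NFA accepting state.
        return dfa_delta, start_set, {s for s in seen if s & accepting}

    # Toy NFA: strings over {0, 1} whose second-to-last symbol is 1.
    nfa = {("p", "0"): {"p"}, ("p", "1"): {"p", "q"},
           ("q", "0"): {"r"}, ("q", "1"): {"r"}}
    delta, start, accept = determinize(nfa, "p", {"r"})
    print(len({s for s, _ in delta}))  # 4 DFA states for this 3-state NFA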

However, for PDAs and LBAs the situation is different. Deterministic PDAs recognise a strictly smaller class of languages than nondeterministic PDAs; for example, the language of even-length palindromes is context-free but not deterministic context-free. It is also a significant open question whether deterministic LBAs are as powerful as nondeterministic LBAs [1].

This prompts my question:

Is there a machine model that characterises the context-free languages, but for which non-determinism adds no extra power? (If not, is there some property of CFLs which suggests a reason for this?)

It seems unlikely (to me) that it would be provable that context-free languages somehow need nondeterminism, but there doesn't seem to be a (known) machine model for which deterministic machines are sufficient.

The extension question is the same, but for context-sensitive languages.

References

  1. S.-Y. Kuroda, "Classes of Languages and Linear-Bounded Automata", Information and Control, 7(2):207–223, 1964.

Footnotes

  1. Side question for the comments: is there a reason for the levels (ordered by set inclusion) of the Chomsky hierarchy to be numbered 3 to 0, instead of 0 to 3?
  2. To be clear, I'm talking only about the languages that can be recognised. Obviously questions of complexity are radically affected by such a change.
Luke Mathieson

3 Answers

2

A nondeterministic machine leaves choice points open: points at which multiple continuations can be taken.

Two ways to 'determinize' this:

  • backtracking: follow the tree of possibilities in depth-first order
  • GLR: follow all branches at once, in parallel, breadth-first order

The resulting parsers are deterministic; they can be defined as automata. Do they satisfy your request? If not, why not?
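
As a minimal sketch of the breadth-first option (my own toy encoding, not a full GLR parser): deterministically simulate a nondeterministic PDA for even-length palindromes by carrying the whole set of reachable (state, stack) configurations forward, one input symbol at a time.

    def step(configs, symbol):
        """Advance every live (state, stack) configuration on one symbol."""
        nxt = set()
        for state, stack in configs:        # stacks are tuples, so hashable
            if state == "push":
                nxt.add(("push", stack + (symbol,)))   # keep reading first half
                if stack and stack[-1] == symbol:      # or guess: midpoint passed
                    nxt.add(("pop", stack[:-1]))
            elif state == "pop":
                if stack and stack[-1] == symbol:      # match the second half
                    nxt.add(("pop", stack[:-1]))
        return nxt

    def accepts(word):
        # Start in both modes so the empty word (a palindrome) is accepted.
        configs = {("push", ()), ("pop", ())}
        for symbol in word:
            configs = step(configs, symbol)
            if not configs:                 # every branch died: reject early
                return False
        return any(state == "pop" and not stack for state, stack in configs)

    for w in ["", "abba", "abab"]:
        print(repr(w), accepts(w))          # True, True, False

In the worst case the configuration set can grow with the input length, which is why a real GLR parser merges shared stack suffixes into a graph-structured stack instead of keeping every stack separately.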

reinierpost
0

As far as I know, we do not know whether nondeterministic LBAs are more powerful than deterministic LBAs. The simplest way to put it: so far, no one has been able to exhibit a language that is accepted by some nondeterministic LBA but by no deterministic LBA. Producing a single such witness language would prove the separation; that is proof by counterexample, not proof by contradiction. Conversely, proving the two models equivalent would require showing that every nondeterministic LBA can be simulated by a deterministic one, which no one has managed either. Refuting the equivalence needs only one instance where it fails, while establishing it needs an argument covering every machine, which is part of why the question remains open.

-2

In my understanding of the theory of computation, the main situation where nondeterminism does not give you extra flexibility (i.e. power) is at the recursive/recursively enumerable level (besides the regular languages, as you note). This is because a deterministic TM can simulate a nondeterministic one by systematically exploring all of its computation branches, which I believe also speaks to one of the questions in your footnotes. The Chomsky hierarchy is a logical representation of moving up the flexibility ladder (if I might say), allowing more power to the machine. Does this help with your question/thoughts?
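
As a minimal sketch of that simulation (the configuration encoding is my own, illustrative choice, with a step cap so the snippet always terminates): explore the NTM's computation tree breadth-first, accepting as soon as any branch accepts.

    from collections import deque

    def ntm_accepts(delta, start, accept, word, max_steps=10_000):
        """delta maps (state, symbol) -> list of (state, write, move)."""
        initial = (start, tuple(word) or ("_",), 0)   # "_" is the blank symbol
        queue, seen = deque([initial]), {initial}
        for _ in range(max_steps):                    # cap the search (sketch only)
            if not queue:
                return False                          # every branch halted, none accepted
            state, tape, head = queue.popleft()
            if state == accept:
                return True
            for nstate, write, move in delta.get((state, tape[head]), []):
                ntape = tape[:head] + (write,) + tape[head + 1:]
                nhead = head + (1 if move == "R" else -1)
                if nhead < 0:                         # fell off the left end: drop branch
                    continue
                if nhead == len(ntape):               # extend the tape with a blank
                    ntape += ("_",)
                config = (nstate, ntape, nhead)
                if config not in seen:                # avoid revisiting configurations
                    seen.add(config)
                    queue.append(config)
        return False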

As far as the PDAs and LBAs go, I'll let the other accomplished folks here in the community help with those; my experience has been more with TMs and the theory associated with the higher (more RE) part of the hierarchy (at least as taught in my undergrad).

Reference: Peter Linz, An Introduction to Formal Languages and Automata.

https://www.amazon.com/Introduction-Formal-Languages-Automata/dp/1284077241/ref=pd_sbs_14_img_0?_encoding=UTF8&psc=1&refRID=6AA9FQJWZZNZDTQ6Z3K4

bmc