
Over a year ago, I worked out a classical-Kleene combination logic that preserves intuitionistic tautologies over the fragment with operators $\{ \neg, \vee, \wedge \}$. It works as follows:

I use $\{ 0, 1, 2 \}$ to correspond to false, unsure, and true, respectively. Kleene logic has no tautologies, so the move that grants it only intuitionistic tautologies is one I call a "Glivenko switch" (after the lemma in the theorem that bears his name). Propositional variables receive a unique truth-value column per the normal decision procedure for truth-table semantics for classical logic. When the proposition yields a classical contradiction, $1$ can switch to $0$.

It's a compositional procedure, with a structure containing a head Kleene value and a tail truth-table column over the unique atoms. Here's a relevant intuitionistic example (a code sketch follows it):

Procedure for $\neg \neg (A \vee \neg A)$:

  • $\neg \neg ([1,20] \vee \neg [1,20])$, Assignment
  • $\neg \neg ([1,20] \vee [1,02])$
  • $\neg \neg [1,22]$, preserving that the LEM is not an intuitionistic tautology.
  • $\neg \neg [1,2]$, Halving
  • $\neg [1,0]$
  • $\neg [0,0]$, Switch
  • $[2,2]$, maintaining, true to Glivenko's theorem, that the double-negated LEM is an intuitionistic tautology.
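For concreteness, here's a minimal Python sketch of my reading of that procedure (the function names are mine, and I apply the switch after every connective, which reproduces the steps above):

```python
# A value is (head, col): head is the Kleene value in {0, 1, 2};
# col is a tuple of classical truth-table entries in {0, 2}.

def halve(col):
    # The "Halving" step: drop a redundant repeated half-column.
    while len(col) > 1 and col[:len(col) // 2] == col[len(col) // 2:]:
        col = col[:len(col) // 2]
    return col

def fit(c1, c2):
    # Re-tile halved columns to a common length before combining.
    n = max(len(c1), len(c2))
    return c1 * (n // len(c1)), c2 * (n // len(c2))

def switch(head, col):
    # The "Glivenko switch": a classical contradiction demotes unsure to false.
    if head == 1 and all(c == 0 for c in col):
        head = 0
    return (head, halve(col))

def neg(x):
    h, col = x  # Kleene negation on the head, classical negation on the column
    return switch(2 - h, tuple(2 - c for c in col))

def disj(x, y):
    (h1, c1), (h2, c2) = x, y
    c1, c2 = fit(c1, c2)
    return switch(max(h1, h2), tuple(max(a, b) for a, b in zip(c1, c2)))

def conj(x, y):
    (h1, c1), (h2, c2) = x, y
    c1, c2 = fit(c1, c2)
    return switch(min(h1, h2), tuple(min(a, b) for a, b in zip(c1, c2)))

A = (1, (2, 0))                   # one atom: head 1, column "20"
print(disj(A, neg(A)))            # (1, (2,)): LEM is not a tautology here
print(neg(neg(disj(A, neg(A)))))  # (2, (2,)): but its double negation is
```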

The problem is that this method can't prove $P \rightarrow P$ unless it's interpreted as $\neg (P \wedge \neg P)$. But that translation also admits DNE, $\neg \neg P \rightarrow P$, translated as $\neg (\neg \neg P \wedge \neg P)$, as the sketch below shows.
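Using the sketch above, the translated implication $X \rightarrow Y := \neg (X \wedge \neg Y)$ indeed proves both:

```python
imp = lambda x, y: neg(conj(x, neg(y)))  # X -> Y translated as not(X and not Y)

P = (1, (2, 0))
print(imp(P, P))            # (2, (2,)): P -> P comes out true...
print(imp(neg(neg(P)), P))  # (2, (2,)): ...but so does DNE
```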

That's all to be expected, since $\rightarrow$ is independent of the other operators in intuitionistic logic, except that $\neg A$ can be defined as $A \rightarrow \bot$ when $\bot$ is also operable, which it is in the above formulation (it's just $[0,0]$).

Almost every intuitionistic decision procedure I've found is "emergent" (propositions-to-atomics) rather than compositional (atomics-to-propositions): Beth, the Gödel–McKinsey–Tarski translation to S4, lax logic, HA, G4ip. Also, half of them are full-on theorem provers, which are massive overkill for my use case: building a (barely started) programming language with an unsure value that adheres to intuitionistic constraints.

So, is there some means of amending another algorithm to perform the inverse of the aforementioned "Glivenko switch", meaning criteria that would perform an analogous reduction and move $[1,2]$ to $[2,2]$? Is there any literature on anything resembling such a proposal?

The most efficient decision procedure I've found is an $O(n \log n)$ one, here. The closest bottom-up approach I've seen is here.

1 Answer

Edit: I posted an article here covering the optimizations for the Leme-Coniglio-Lopes IPL decision procedure. Chief among them is the observation that, since there are direct intuitionistic truth-tables, there are also indirect intuitionistic truth-tables.

Yes, there is such a way to do it, but it involves a fair amount of extra machinery, which comes from Leme, Coniglio, and Lopes (2024). That work adapts Grätz's non-deterministic matrix for S4 into the following restricted, non-deterministic matrix tables (I've relabeled them from the original paper's $\{F, U, T\}$ to $\{0, 1, 2\}$ for consistency with the notation above).

$$ \begin{array}{|c|c|} \hline A & \neg A \\ \hline 0 & \{1,2\} \\ \hline 1 & \{1,2\} \\ \hline 2 & \{0\} \\ \hline \end{array} ~ \begin{array}{|c|c|c|} \hline A & B & A \to B \\ \hline 0 & 0 & \{1,2\} \\ \hline 0 & 1 & \{1,2\} \\ \hline 0 & 2 & \{2\} \\ \hline 1 & 0 & \{1,2\} \\ \hline 1 & 1 & \{1,2\} \\ \hline 1 & 2 & \{2\} \\ \hline 2 & 0 & \{0\} \\ \hline 2 & 1 & \{0\} \\ \hline 2 & 2 & \{2\} \\ \hline \end{array} ~ \begin{array}{|c|c|c|} \hline A & B & A \vee B \\ \hline 0 & 0 & \{0\} \\ \hline 0 & 1 & \{0\} \\ \hline 0 & 2 & \{2\} \\ \hline 1 & 0 & \{0\} \\ \hline 1 & 1 & \{0\} \\ \hline 1 & 2 & \{2\} \\ \hline 2 & 0 & \{2\} \\ \hline 2 & 1 & \{2\} \\ \hline 2 & 2 & \{2\} \\ \hline \end{array} ~ \begin{array}{|c|c|c|} \hline A & B & A \wedge B \\ \hline 0 & 0 & \{0\} \\ \hline 0 & 1 & \{0\} \\ \hline 0 & 2 & \{0\} \\ \hline 1 & 0 & \{0\} \\ \hline 1 & 1 & \{0\} \\ \hline 1 & 2 & \{0\} \\ \hline 2 & 0 & \{0\} \\ \hline 2 & 1 & \{0\} \\ \hline 2 & 2 & \{2\} \\ \hline \end{array} $$
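In code (Python again; the dictionary names are mine), the four tables are just maps from input values to sets of candidate outputs:

```python
# The restricted Nmatrix tables above, as maps to sets of candidate values.
NEG = {0: {1, 2}, 1: {1, 2}, 2: {0}}
IMP = {(0, 0): {1, 2}, (0, 1): {1, 2}, (0, 2): {2},
       (1, 0): {1, 2}, (1, 1): {1, 2}, (1, 2): {2},
       (2, 0): {0},    (2, 1): {0},    (2, 2): {2}}
OR  = {(0, 0): {0}, (0, 1): {0}, (0, 2): {2},
       (1, 0): {0}, (1, 1): {0}, (1, 2): {2},
       (2, 0): {2}, (2, 1): {2}, (2, 2): {2}}
AND = {(a, b): {2} if (a, b) == (2, 2) else {0}  # {2} only at (2, 2)
       for a in (0, 1, 2) for b in (0, 1, 2)}
TABLES = {'neg': NEG, 'imp': IMP, 'or': OR, 'and': AND}
```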

The process first involves building a deceptively truth-table-looking layout (really more of a directed graph of evaluations, as is to be expected with non-deterministic semantics). Then there's a validation process that checks the $1$-valued nodes for validators, and a reduction process that removes nodes from the graph. Validation and reduction alternate until there are no more reductions to make (or, preferably, until further reductions would have no impact on the final evaluation). Sketches of the build and reduce steps follow below.


Example (also in the paper):

$$ \begin{array}{c} \text{Create} \dots \\ \begin{array}{|c|c|c|c|c|c|} \hline & a & b & c & d & e \\ \hline \text{ID} & A & \neg A & A \vee \neg A & \neg (A \vee \neg A) & \neg \neg (A \vee \neg A) \\ \hline 1000 & 0 & 1 & 0 & 1 & 1 \\ \hline 1001 & 0 & 1 & 0 & 1 & 2 \\ \hline 101 & 0 & 1 & 0 & 2 & 0 \\ \hline 110 & 0 & 2 & 2 & 0 & 1 \\ \hline 111 & 0 & 2 & 2 & 0 & 2 \\ \hline 20 & 2 & 0 & 2 & 0 & 1 \\ \hline 21 & 2 & 0 & 2 & 0 & 2 \\ \hline \end{array} \\\\ \text{Validate} \dots \\ \begin{array}{|c|c|c|c|c|c|} \hline & a & b & c & d & e \\ \hline \text{ID} & A & \neg A & A \vee \neg A & \neg (A \vee \neg A) & \neg \neg (A \vee \neg A) \\ \hline 1000 & 0 & 1 & 0 & 1 & \not 1:(e,101) \\ \hline 1001 & 0 & 1 & 0 & 1 & 2 \\ \hline 101 & 0 & \not 1 & 0 & 2 & 0 \\ \hline 110 & 0 & 2 & 2 & 0 & \not 1 \\ \hline 111 & 0 & 2 & 2 & 0 & 2 \\ \hline 20 & 2 & 0 & 2 & 0 & \not 1 \\ \hline 21 & 2 & 0 & 2 & 0 & 2 \\ \hline \end{array} \\\\ \text{Reduce} \dots \\ \begin{array}{|c|c|c|c|c|c|} \hline & a & b & c & d & e \\ \hline \text{ID} & A & \neg A & A \vee \neg A & \neg (A \vee \neg A) & \neg \neg (A \vee \neg A) \\ \hline 1001 & 0 & 1 & 0 & 1 & 2 \\ \hline 111 & 0 & 2 & 2 & 0 & 2 \\ \hline 21 & 2 & 0 & 2 & 0 & 2 \\ \hline \end{array} \end{array} $$

The ID numbering scheme I used just multiplies the ID by 10 and adds 0 for the upper row and 1 for the lower (newly inserted) row whenever the valuation table admits two possible values.
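Here's a hedged sketch of the build step (formulas as nested tuples, atoms seeded with the classical values $\{0, 2\}$ as in the example's $A$ column, the ID scheme as just described; `build` and `subformulas` are my names, using the `TABLES` dictionaries above):

```python
from itertools import product

def subformulas(f, acc=None):
    # Postorder list of distinct subformulas: the columns of the layout.
    if acc is None:
        acc = []
    if f[0] != 'atom':
        for arg in f[1:]:
            subformulas(arg, acc)
    if f not in acc:
        acc.append(f)
    return acc

def build(f):
    cols = subformulas(f)
    atoms = [c for c in cols if c[0] == 'atom']
    # Seed one row per classical assignment of the atoms, as in the example.
    rows = [(i, dict(zip(atoms, vs)))
            for i, vs in enumerate(product((0, 2), repeat=len(atoms)), start=1)]
    for c in cols:
        if c[0] == 'atom':
            continue
        nxt = []
        for rid, vals in rows:
            key = vals[c[1]] if c[0] == 'neg' else (vals[c[1]], vals[c[2]])
            outs = sorted(TABLES[c[0]][key])
            if len(outs) == 1:
                nxt.append((rid, {**vals, c: outs[0]}))
            else:  # two candidates: split the row, extending the ID by 0 / 1
                nxt += [(rid * 10 + bit, {**vals, c: v})
                        for bit, v in enumerate(outs)]
        rows = nxt
    return cols, rows

A = ('atom', 'A')
cols, rows = build(('neg', ('neg', ('or', A, ('neg', A)))))
for rid, vals in rows:
    print(rid, [vals[c] for c in cols])  # reproduces the seven "Create" rows
```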

Once built, we scan every 1-valued cell, preferably starting from the final column, to seek validators. As best I can make out (I'll confirm this with Leme and others in this somewhat new space), a row $r_v$ is a validator for a checked, 1-valued row $r_c$ if, and only if:

  • The value in the same column of $r_v$ is $0$ (as it is with $(e,101)$ at $(e,1000)$).
  • For every other cell in $r_c$, if its value is $2$, then the corresponding value in $r_v$ is also $2$.
    • In the example, $(b,101)$ has no such validators: any candidate row is either already marked for deletion or fails this condition; for instance, $(d,101)=2$, but $(d,21)=0$.

Once we delete the offending rows, every cell in column $e$ is $2$, so the formula is a tautology, and we can stop there.
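And here's a sketch of my reading of the validate-and-reduce loop, continuing from `build` above: it deletes rows whose $1$-valued cells lack a surviving validator, to a fixpoint, then checks the final column.

```python
def validates(rv, rc, k, cols):
    # rv validates the 1 at column k of rc: rv is 0 in that column, and
    # every column that is 2 in rc is also 2 in rv.
    return rv[k] == 0 and all(rv[j] == 2 for j in cols if j != k and rc[j] == 2)

def reduce_rows(cols, rows):
    # Repeatedly delete rows holding a 1 with no surviving validator.
    changed = True
    while changed:
        kept = [(rid, vals) for rid, vals in rows
                if all(any(validates(rv, vals, k, cols) for _, rv in rows)
                       for k in cols if vals[k] == 1)]
        changed = len(kept) < len(rows)
        rows = kept
    return rows

def is_tautology(f):
    cols, rows = build(f)
    rows = reduce_rows(cols, rows)
    return bool(rows) and all(vals[cols[-1]] == 2 for _, vals in rows)

A = ('atom', 'A')
print(is_tautology(('or', A, ('neg', A))))                    # False: LEM
print(is_tautology(('neg', ('neg', ('or', A, ('neg', A))))))  # True
print(is_tautology(('imp', ('neg', ('neg', A)), A)))          # False: DNE
```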


The completeness proof of this method for IPL is in the paper (Section 4.2.3), and I encourage people to try to read it.

Now, there are a boatload of optimizations for this. One of them is that there's no need to continue building once a classical contradiction arises (the Glivenko switch still works). That goes hand-in-hand with my only concern with this method: the tables themselves assert prima facie strange things, like $v(\neg 0)=\{1,2\}$. We're meant to view these partial valuations very distinctly from how we'd normally view truth-values, so it's okay, but it's a bit unnerving in the example because of $(c,1001)$. There are alternative non-deterministic matrix tables in Solares-Rojas (2021) (Section 5.7), called 3NI semantics, with more intuitionistically expected outputs. That section does not contain or explain an analogous table-based decision procedure, though some of its example models and counter-models suggest that one could implement an analogous process. Leme has also built a 3NI decision procedure in Coq.