Edit: I posted an article here covering the optimizations for the Leme-Coniglio-Lopes IPL decision procedure. Chief among them is that, since there are direct intuitionistic truth-tables, there are also indirect intuitionistic truth-tables.
Yes, there is such a way to do it, but it involves a fair amount of extra machinery that comes from Leme, Coniglio, and Lopes (2024). That work adapts Grätz's non-deterministic matrix for S4 into the following restricted non-deterministic matrix tables (I've relabeled them from the original paper's $\{F, U, T\}$ to $\{0, 1, 2\}$ for consistency with the notation above.)
$$
\begin{array}{|c|c|}
\hline A & \neg A \\
\hline 0 & \{1,2\} \\
\hline 1 & \{1,2\} \\
\hline 2 & \{0\} \\
\hline
\end{array} ~
\begin{array}{|c|c|c|}
\hline A & B & A \to B \\
\hline 0 & 0 & \{1,2\} \\
\hline 0 & 1 & \{1,2\} \\
\hline 0 & 2 & \{2\} \\
\hline 1 & 0 & \{1,2\} \\
\hline 1 & 1 & \{1,2\} \\
\hline 1 & 2 & \{2\} \\
\hline 2 & 0 & \{0\} \\
\hline 2 & 1 & \{0\} \\
\hline 2 & 2 & \{2\} \\
\hline
\end{array} ~
\begin{array}{|c|c|c|}
\hline A & B & A \vee B \\
\hline 0 & 0 & \{0\} \\
\hline 0 & 1 & \{0\} \\
\hline 0 & 2 & \{2\} \\
\hline 1 & 0 & \{0\} \\
\hline 1 & 1 & \{0\} \\
\hline 1 & 2 & \{2\} \\
\hline 2 & 0 & \{2\} \\
\hline 2 & 1 & \{2\} \\
\hline 2 & 2 & \{2\} \\
\hline
\end{array} ~
\begin{array}{|c|c|c|}
\hline A & B & A \wedge B \\
\hline 0 & 0 & \{0\} \\
\hline 0 & 1 & \{0\} \\
\hline 0 & 2 & \{0\} \\
\hline 1 & 0 & \{0\} \\
\hline 1 & 1 & \{0\} \\
\hline 1 & 2 & \{0\} \\
\hline 2 & 0 & \{0\} \\
\hline 2 & 1 & \{0\} \\
\hline 2 & 2 & \{2\} \\
\hline
\end{array}
$$
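For concreteness, here's a minimal sketch of these tables as Python dictionaries, mapping each input (or input pair) to the set of admissible outputs; the encoding is my own, not the paper's:

```python
# Each connective maps its input value(s) to a SET of admissible outputs,
# with 0, 1, 2 standing in for the paper's F, U, T.
NEG = {0: {1, 2}, 1: {1, 2}, 2: {0}}

IMP = {(a, b): {2} if b == 2 else {0} if a == 2 else {1, 2}
       for a in (0, 1, 2) for b in (0, 1, 2)}

OR = {(a, b): {2} if 2 in (a, b) else {0}
      for a in (0, 1, 2) for b in (0, 1, 2)}

AND = {(a, b): {2} if a == b == 2 else {0}
       for a in (0, 1, 2) for b in (0, 1, 2)}
```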
The process first involves building a deceptively truth-table-looking layout (really more like a directed graph of evaluations, as you'd expect from a non-deterministic semantics). Then there's a validation process that checks the $1$-valued nodes for validators, and a reduction process that removes nodes from the graph. Validation and reduction alternate until there are no more reductions to make (or, better, until further reductions would have no impact on the final evaluation).
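Schematically, the control flow might look like the following (my own sketch of the loop, not the paper's pseudocode; `build_rows` and `validated` are hypothetical placeholders fleshed out in the sketches further down):

```python
def decide(columns):
    # build_rows and validated are hypothetical names for the steps below.
    rows = build_rows(columns)            # the truth-table-looking layout
    while True:                           # validate, reduce, repeat
        bad = [r for r in rows if not validated(r, rows)]
        if not bad:                       # fixpoint: nothing left to remove
            break
        rows = [r for r in rows if r not in bad]
    # Tautology iff the formula's column reads 2 (T) in every surviving row.
    return all(row[-1] == 2 for row in rows)
```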
Example (also in the paper):
$$
\begin{array}{c}
\text{Create} \dots \\
\begin{array}{|c|c|c|c|c|c|}
\hline & a & b & c & d & e \\
\hline \text{ID} & A & \neg A & A \vee \neg A & \neg (A \vee \neg A) & \neg \neg (A \vee \neg A) \\
\hline 1000 & 0 & 1 & 0 & 1 & 1 \\
\hline 1001 & 0 & 1 & 0 & 1 & 2 \\
\hline 101 & 0 & 1 & 0 & 2 & 0 \\
\hline 110 & 0 & 2 & 2 & 0 & 1 \\
\hline 111 & 0 & 2 & 2 & 0 & 2 \\
\hline 20 & 2 & 0 & 2 & 0 & 1 \\
\hline 21 & 2 & 0 & 2 & 0 & 2 \\
\hline
\end{array}
\\\\ \text{Validate} \dots \\
\begin{array}{|c|c|c|c|c|c|}
\hline & a & b & c & d & e \\
\hline \text{ID} & A & \neg A & A \vee \neg A & \neg (A \vee \neg A) & \neg \neg (A \vee \neg A) \\
\hline 1000 & 0 & 1 & 0 & 1 & \not 1:(e,101) \\
\hline 1001 & 0 & 1 & 0 & 1 & 2 \\
\hline 101 & 0 & \not 1 & 0 & 2 & 0 \\
\hline 110 & 0 & 2 & 2 & 0 & \not 1 \\
\hline 111 & 0 & 2 & 2 & 0 & 2 \\
\hline 20 & 2 & 0 & 2 & 0 & \not 1 \\
\hline 21 & 2 & 0 & 2 & 0 & 2 \\
\hline
\end{array}
\\\\ \text{Reduce} \dots \\
\begin{array}{|c|c|c|c|c|c|}
\hline & a & b & c & d & e \\
\hline \text{ID} & A & \neg A & A \vee \neg A & \neg (A \vee \neg A) & \neg \neg (A \vee \neg A) \\
\hline 1001 & 0 & 1 & 0 & 1 & 2 \\
\hline 111 & 0 & 2 & 2 & 0 & 2 \\
\hline 21 & 2 & 0 & 2 & 0 & 2 \\
\hline
\end{array}
\end{array}
$$
The ID numbering scheme I used is simple: whenever a cell's table admits two possible values, the row splits in two, and each new ID is the old ID multiplied by 10, plus 0 for the upper row and plus 1 for the lower (newly inserted) row.
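Here's that branching step as a sketch, under my reading of the scheme (it reuses the `NEG` and `OR` dictionaries from above; `admissible` is a callback of my own devising that looks up the relevant table for the new column):

```python
def extend(rows, admissible):
    # rows: list of (id, values) pairs; admissible(values) returns the set
    # of values the relevant table allows for this row's next cell.
    out = []
    for rid, vals in rows:
        options = sorted(admissible(vals))
        if len(options) == 1:                     # deterministic: no split
            out.append((rid, vals + [options[0]]))
        else:                                     # split: id*10+0, id*10+1
            for i, v in enumerate(options):
                out.append((rid * 10 + i, vals + [v]))
    return out

# Building columns b through e of the example, starting from the two rows
# the paper's table starts with (A = 0 and A = 2):
rows = [(1, [0]), (2, [2])]
rows = extend(rows, lambda v: NEG[v[0]])          # b: not-A
rows = extend(rows, lambda v: OR[(v[0], v[1])])   # c: A or not-A
rows = extend(rows, lambda v: NEG[v[2]])          # d: not-(A or not-A)
rows = extend(rows, lambda v: NEG[v[3]])          # e: the double negation
# Reproduces exactly the IDs 1000, 1001, 101, 110, 111, 20, 21 above.
```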
Once built, we scan every $1$-valued cell, preferably starting from the final column, to seek validators. As best I can make out (I'll confirm this with Leme and others in this somewhat new space), a row $r_v$ is a validator for a row $r_c$ at a checked $1$-valued cell if, and only if (a code sketch follows this list):
- The value in the same column of $r_v$ is $0$ (as it is with $(e,101)$ at $(e,1000)$).
- For every other cell in $r_c$ whose value is $2$, the corresponding cell in $r_v$ is also $2$.
- In the example, $(b,101)$ has no such validator: the only candidate rows (those with $0$ in column $b$) are 20 and 21, and each is either already marked for deletion or fails the second condition (for instance, $(d,101)=2$ but $(d,21)=0$).
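As a sketch, the check might look like this (again my reading of the condition, pending confirmation):

```python
def has_validator(rc, rows, col):
    # rc has value 1 in column col; a validator rv must have 0 there
    # while preserving every 2-valued cell of rc.
    return any(
        rv[col] == 0 and all(rv[j] == 2 for j, v in enumerate(rc) if v == 2)
        for rv in rows if rv is not rc
    )
```

On the example's seven rows, this accepts $(e,1000)$ (row 101 validates it) and rejects $(e,110)$, $(e,20)$, and $(b,101)$.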
In this example, rows 101, 110, and 20 fall on the first pass; deleting row 101 removes the only validator for $(e,1000)$, so row 1000 falls on the next pass. Once we delete the offending rows, every cell in column $e$ is $2$, so $\neg \neg (A \vee \neg A)$ is a tautology, and we can stop here.
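Putting the pieces together (using `has_validator` from above), a small fixpoint loop reproduces the reduced table; this is my own reconstruction, not the authors' code:

```python
def reduce_rows(rows):
    # Delete every row holding an unvalidated 1-cell, then repeat: a
    # deletion can strip the validator from a row that survived earlier.
    while True:
        keep = [rc for rc in rows
                if all(v != 1 or has_validator(rc, rows, j)
                       for j, v in enumerate(rc))]
        if len(keep) == len(rows):
            return rows
        rows = keep

table = [[0, 1, 0, 1, 1], [0, 1, 0, 1, 2], [0, 1, 0, 2, 0],
         [0, 2, 2, 0, 1], [0, 2, 2, 0, 2],
         [2, 0, 2, 0, 1], [2, 0, 2, 0, 2]]
survivors = reduce_rows(table)            # rows 1001, 111, and 21 remain
print(all(r[4] == 2 for r in survivors))  # True: column e is all 2 (T)
```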
The completeness proof of this method for IPL is in the paper (Section 4.2.3), and I encourage people to try reading it.
Now, there are a boatload of optimizations for this. One of them is that there's no need to continue building once a classical contradiction arises (the Glivenko switch still works).

That goes hand-in-hand with my only concern with this method: the tables themselves assert prima facie strange things, like $v(\neg 0) = \{1,2\}$. We're meant to view these partial valuations very differently from how we'd normally view truth values, so it's okay, but it's a bit unnerving in the example because of $(c,1001)$. There are alternative non-deterministic matrix tables in Solares-Rojas (2021) (Section 5.7), called 3NI semantics, with more intuitionistically expected outputs. That section does not contain or explain an analogous table-based decision procedure, though some of the example models and counter-models suggest that one could implement a similar process. Leme has also built a 3NI decision procedure in Coq.