
I'm trying to prove (using natural deduction):

$$\forall x\forall y(\alpha(x)\vee\beta(y))\implies\forall x(\alpha(x)\vee\forall x\beta(x))$$

But I'm struggling with this point:

The restriction on free variables in the hypotheses: to introduce $\forall$, every assumption in which the generalized variable occurs free must already have been discharged.

Sorry for the image, but I don't know how to write this type of thing in LaTeX.

[Image: my attempted derivation]

Any help will be appreciated.

Carinha logo ali

2 Answers


The formula is provable in natural deduction, but its derivation is not "easy" to find because it necessarily uses the rule of reductio ad absurdum.

The intuitive way to try to prove it in natural deduction is just below. Still, it is not a derivation because the rule ${\color{red}{\forall_I}}$ in red is not valid: it universally quantifies the variable $y$ that occurs free in the assumption $\beta(y)$. $$ \dfrac{ \dfrac{ \dfrac {[\forall x \forall y(\alpha(x) \lor \beta(y))]^*} {\forall y(\alpha(x) \lor \beta(y))} \forall_E } { \alpha(x) \lor \beta(y) } \forall_E \qquad \dfrac {[\alpha(x)]^\circ} {\alpha(x) \lor \forall x \beta(x)} \lor_{I_1} \qquad \dfrac { \dfrac {[\beta(y)]^\circ} {\forall x \beta(x)} {\color{red}{\forall_I}} } {\alpha(x) \lor \forall x \beta(x)} \lor_{I_2} } { \dfrac{ \dfrac{ \alpha(x) \lor \forall x \beta(x) } { \forall x (\alpha(x) \lor \forall x \beta(x)) } \forall_I } { \forall x \forall y (\alpha(x) \lor \beta(y)) \to \forall x (\alpha(x) \lor \forall x \beta(x)) } \to_I^* }\lor_E^\circ $$

There is a way to fix the issue. The intuition is that the formula $\alpha(x) \lor \forall x \beta(x)$ is (classically) equivalent to $\lnot \alpha(x) \to \forall x \beta(x)$. So, under the assumption $\forall x \forall y(\alpha(x) \lor \beta(y))$, let us first derive $\lnot \alpha(x) \to \forall x \beta(x)$; this yields the derivation $\pi_1$ below, which instantiates the rule ${\color{green}{\forall_I}}$ in a valid way: it universally quantifies the variable $y$, which does not occur free in any undischarged assumption.

$$ \pi_1 = \dfrac{ \dfrac{ \forall x \forall y(\alpha(x) \lor \beta(y)) } { \dfrac {\forall y(\alpha(x) \lor \beta(y))} {\alpha(x) \lor \beta(y)} \forall_E } \forall_E \qquad \dfrac {[\lnot \alpha(x)]^\bullet \qquad [\alpha(x)]^\circ} {\dfrac{\bot}{\beta(y)}\bot_E} \lnot_E \qquad \displaystyle{{}\atop{\displaystyle{{}\atop{[\beta(y)]^\circ}}}} } { \dfrac{ \dfrac{ \beta(y) } { \forall x \beta(x) } {\color{green}{\forall_I}} } { \lnot \alpha(x) \to \forall x \beta(x) } \to_I^\bullet } \lor_E^\circ $$

The derivation we are looking for is then the one below, where we use $\pi_1$ (under the assumption $\forall x \forall y (\alpha(x) \lor \beta(y))$, which we discharge at the end) and a derivation, without assumptions, of $\alpha(x) \lor \lnot \alpha(x)$, which in turn crucially uses reductio ad absurdum (one such derivation is spelled out right after the tree below):

$$ \dfrac { \displaystyle{ {} \atop {\displaystyle\genfrac{}{}{0pt}{}{\vdots}{\alpha(x) \lor \lnot \alpha(x)}} } \qquad \displaystyle{ {} \atop {\dfrac {[\alpha(x)]^\dagger} {\alpha(x) \lor \forall x \beta(x)} \lor_{I_1}} } \quad \dfrac {\displaystyle \genfrac{}{}{0pt}{}{\displaystyle\genfrac{}{}{0pt}{}{[\forall x \forall y(\alpha(x) \lor \beta(y))]^*}{\vdots \pi_1}} {\lnot \alpha(x) \to \forall x \beta(x)} \qquad \displaystyle{{}\atop{[\lnot \alpha(x)]^\dagger}} } { \dfrac {\forall x \beta(x)} {\alpha(x) \lor \forall x \beta(x)} \lor_{I_2} } \to_E } { \dfrac {\dfrac{\alpha(x) \lor \forall x \beta(x)} {\forall x (\alpha(x) \lor \forall x \beta(x))} \forall_I} {\forall x \forall y (\alpha(x) \lor \beta(y)) \to \forall x (\alpha(x) \lor \forall x \beta(x))} \to_I^* } \lor_E^\dagger $$
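For completeness, the left branch above (the $\vdots$ over $\alpha(x) \lor \lnot \alpha(x)$) can be filled in as follows (a standard derivation of the excluded middle via reductio ad absurdum; the discharge marks $\circ$ and $\bullet$ are local to this tree):

$$ \dfrac{ \dfrac{ \dfrac{ \dfrac{ \dfrac{[\alpha(x)]^\circ}{\alpha(x) \lor \lnot \alpha(x)} \lor_{I_1} \qquad [\lnot(\alpha(x) \lor \lnot \alpha(x))]^\bullet }{\bot} \lnot_E }{\lnot \alpha(x)} \lnot_I^\circ }{\alpha(x) \lor \lnot \alpha(x)} \lor_{I_2} \qquad [\lnot(\alpha(x) \lor \lnot \alpha(x))]^\bullet }{ \dfrac{\bot}{\alpha(x) \lor \lnot \alpha(x)} \mathrm{RAA}^\bullet } \lnot_E $$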

  • In the derivation of $\pi_1$, is the introduction of $\forall$ allowed while the assumption $\neg\alpha(x)$ is still open? – Carinha logo ali Nov 04 '24 at 22:46
  • @Carinhalogoali - That instance of the rule $\forall_I$ is valid because it universally quantifies the variable $y$, which occurs free in $\beta(y)$ but does not occur free in any undischarged assumption (unlike the variable $x$, which indeed occurs free in the assumption $\lnot\alpha(x)$). The conclusion of that rule can equivalently be written as $\forall x\,\beta(x)$ or $\forall y\,\beta(y)$ or $\forall z\,\beta(z)$; it does not matter, because bound variables are dummy. Note that in $\alpha(x) \lor \beta(y)$ in $\pi_1$, the variables $x$ and $y$ are distinct. – Taroccoesbrocco Nov 05 '24 at 01:12

Sorry for the image, but I don't know how to write this type of thing in LaTeX.

Just use nested \dfrac{numerator}{denominator}, writing the rule label directly after each closing brace, perhaps with some \hspace to tidy the alignment.
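For example, a chain of two $\forall_E$ steps can be typeset with nested fractions, the rule name appended right after each closing brace (a minimal sketch that renders in MathJax or with amsmath):

    $$
    \dfrac
      {\dfrac
         {\forall x \forall y (\alpha(x) \lor \beta(y))}
         {\forall y (\alpha(x) \lor \beta(y))} \forall_E}
      {\alpha(x) \lor \beta(y)} \forall_E
    $$

Premises go in the numerator (separated by \qquad when there are several) and the conclusion in the denominator; in a standalone LaTeX document, the bussproofs or ebproof packages may be a more comfortable alternative.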


In any case, your idea to start with disjunction elimination and then prove by reduction to absurdity was a good approach.

However, repeating three RAA steps that discharge the same assumption is a clear hint that you could vastly simplify things.

So, seek to avoid that repetition by pushing the universal introduction downwards and lifting its assumption, $\lnot(\alpha(s)\vee\forall x~\beta(x))$, up to somewhere earlier.

To where? Well, on the branch where you assume $\alpha(s)$ you may use disjunction introduction to contradict that assumption. Whenever a contradiction is derivable in one case of a disjunction elimination, you may use explosion to turn it into a disjunctive syllogism (see the schematic step just below). So put it there and do that.
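Schematically, the disjunctive-syllogism-via-explosion pattern looks like this (a sketch with placeholder formulas $A$ and $B$ and a premise $\lnot A$; in the proof below, the contradiction in the $\alpha(s)$ case is obtained from the lifted assumption instead):

$$ \dfrac{ A \lor B \qquad \dfrac{\dfrac{[A]^\circ \qquad \lnot A}{\bot}\lnot_\mathrm E}{B}\bot_\mathrm E \qquad [B]^\circ }{ B } \lor_\mathrm E^\circ $$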

Thus, you may derive $\beta(t)$, and from there universal introduction and disjunction introduction will set things up to complete your reduction to absurdity.

Then it is a cinch to wrap things up.

$\def\D#1#2#3{\hspace{-0.4ex}\dfrac{#1}{#2}{#3 }\hspace{-3ex}} \D{\D{\D{\D{{\lower{1.5ex}{[\lnot(\alpha(s)\lor\forall x~\beta(x))]^\bullet}\hspace{-14ex}\D{\D{\D{\D{\D{[\forall x~\forall y~(\alpha(x)\vee\beta(y))]^\star}{\forall y~(\alpha(s)\lor\beta(y))}{\forall_\mathrm E}}{\alpha(s)\lor\beta(t)}{\forall_\mathrm E}\qquad\D{\D{\lower{1.5ex}{[\lnot(\alpha(s)\lor\forall x~\beta(x))]^\bullet}\quad\D{[\alpha(s)]^\circ}{\alpha(s)\lor\forall x~\beta(x)}{\lor_\mathrm I}}{\bot}{\lnot_\mathrm E}}{\beta(t)}{\bot_\mathrm E}\qquad\lower{1.5ex}{[\beta(t)]^\circ}}{\beta(t)}{\lor_\mathrm E^\circ}}{\forall x~\beta(x)}{\forall_\mathrm I^t}}{\alpha(s)\lor\forall x~\beta(x)}{\lor_\mathrm I}}}{\bot}{\lnot_\mathrm E}}{\alpha(s)\lor\forall x~\beta(x)}{{\small\mathrm {RAA}}^\bullet\hspace{-2ex}}}{\forall x~(\alpha(x)\lor\forall x~\beta(x))}{\forall_\mathrm I^s}}{\forall x~\forall y~(\alpha(x)\lor\beta(y))\to\forall x~(\alpha(x)\lor\forall x~\beta(x))}{{\to}^\star_\mathrm I}\\\blacksquare$
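Read off line by line, the same derivation looks as follows (my own linearisation of the tree above; $s$ and $t$ are the eigenvariables, and the marks match the discharge labels):

1. $\forall x~\forall y~(\alpha(x)\vee\beta(y))$ (assumption $\star$, discharged at 16)
2. $\lnot(\alpha(s)\lor\forall x~\beta(x))$ (assumption $\bullet$ for RAA, discharged at 14)
3. $\forall y~(\alpha(s)\lor\beta(y))$ ($\forall_\mathrm E$, 1)
4. $\alpha(s)\lor\beta(t)$ ($\forall_\mathrm E$, 3)
5. $\alpha(s)$ (assumption $\circ$, first case)
6. $\alpha(s)\lor\forall x~\beta(x)$ ($\lor_\mathrm I$, 5)
7. $\bot$ ($\lnot_\mathrm E$, 2, 6)
8. $\beta(t)$ ($\bot_\mathrm E$, 7)
9. $\beta(t)$ (assumption $\circ$, second case)
10. $\beta(t)$ ($\lor_\mathrm E$, 4, 5–8, 9, discharging $\circ$)
11. $\forall x~\beta(x)$ ($\forall_\mathrm I$ generalising $t$, from 10; $t$ does not occur free in the open assumptions 1 and 2)
12. $\alpha(s)\lor\forall x~\beta(x)$ ($\lor_\mathrm I$, 11)
13. $\bot$ ($\lnot_\mathrm E$, 2, 12)
14. $\alpha(s)\lor\forall x~\beta(x)$ (RAA, 13, discharging $\bullet$)
15. $\forall x~(\alpha(x)\lor\forall x~\beta(x))$ ($\forall_\mathrm I$ generalising $s$, from 14; $s$ does not occur free in 1)
16. $\forall x~\forall y~(\alpha(x)\lor\beta(y))\to\forall x~(\alpha(x)\lor\forall x~\beta(x))$ ($\to_\mathrm I$, 15, discharging $\star$)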

Graham Kemp