
Context in Crossley's book: PC, a predicate calculus consisting of

  • " a denumerable set of individual variables"
  • one predicate : $P(x,y) $

  • a quantifier : $\exists$

  • two connectives : $\land$ , $\neg$

  • formation rules

  • truth conditions

  • axioms

  • one inference rule : modus ponens.


  • I'm currently trying to capture the main features of the proof of predicate logic's completeness, more precisely of the $\Leftarrow$ part, namely: $\vDash \phi \implies \vdash \phi$.

  • To show this, it is sufficient (maybe also necessary) to prove the "Gödel-Henkin Completeness Theorem", which reads as follows: if $\Sigma$ is a consistent set of formulas, then $\Sigma$ has a model (that is, there exists an interpretation $\mathcal{A}$ such that every formula $\phi$ belonging to $\Sigma$ is true in $\mathcal{A}$).
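  • Spelling out why the theorem is sufficient, as I understand it (this step is my own reasoning, not a quotation from the book): if $\not\vdash \phi$, then $\{\neg\phi\}$ is consistent, so by the theorem $\{\neg\phi\}$ has a model $\mathcal{A}$, in which $\phi$ is false; hence $\not\vDash \phi$. Contraposing,
    $$\vDash \phi \implies \vdash \phi.$$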

  • As I understand the proof of the Gödel-Henkin Completeness Theorem (as it is presented by Crossley et al. in the little volume What is Mathematical Logic?, Chapter 1, OUP 1972), the reasoning goes as follows:

(1) $\Sigma$ is a consistent theory / set of formulas in PC, a Predicate Logic system.

(2) By Lindenbaum's Lemma, we are guaranteed that there is at least one full extension of $\Sigma$.

(3) If we can find such a full extension that has a model, then $\Sigma$ has a model, and the proof is complete.

(4) So the whole task is to define and build a set of formulas that (a) is a full extension of $\Sigma$ and (b) has a model.

(5) This full extension that has a model, denoted $\Sigma^{\star}$ by Crossley, can be built by adding new variables to the language of PC and by defining a model $\langle U, R\rangle$ for $\Sigma^{\star}$ such that the universe $U$ of the model is the set of new variables $b_1, b_2, b_3, \ldots$ and such that $R$ (the interpretation of $P$) holds between $b_i$ and $b_j$ iff $\Sigma^{\star} \vdash P(b_i, b_j)$.
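In symbols (my own compact paraphrase of (5), not Crossley's exact notation), the intended interpretation is
$$\mathcal{A} = \langle U, R \rangle, \qquad U = \{b_1, b_2, b_3, \ldots\}, \qquad R = \{(b_i, b_j) : \Sigma^{\star} \vdash P(b_i, b_j)\}.$$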

Is this outline correct? I mean, is (4) really the key point of the proof?

Another way to put my question: the desired result does not follow directly from Lindenbaum's Lemma alone, but why exactly?

A further question would be: how does the (complicated) construction of $\Sigma^{\star}$ ensure that this full extension meets the desired condition of having a model? But this may be too much for a single post.

  • "Godel-Henkin's proof" is a non-sequitur: Godel's proof was totally different from Henkin's proof. – Noah Schweber Apr 28 '20 at 00:53
  • @Noah Schweber - Thanks for this point. The post is edited. –  Apr 28 '20 at 00:56
  • To answer your clarified question, it does not follow directly from Lindenbaum's lemma simply because Lindenbaum's lemma does not produce a model and on its face says nothing about models. –  Apr 28 '20 at 01:04

1 Answer


I wouldn't quite outline the proof that way. There are really two new ideas: the main one is that of term structures, and the secondary one is the witness property (which is how we apply the term structure idea). The whole "Lindenbaum+" machinery is the boring predictable part, and should only be brought into the picture once those key ideas are understood.


The first key idea is that of term structures. Specifically, given a theory $T$ in a language $L$, there's a natural way to try to build a model of $T$ - namely, look at the set $Term_T$ of closed $L$-terms modulo $T$-provable equality and interpret $L$ over that set in the obvious way:

  • We set $f^{Term_T}([t_1],...,[t_n])=[s]$ iff $T\vdash f(t_1,...,t_n)=s$.

  • We set $R^{Term_T}=\{([t_1],...,[t_n]): T\vdash R(t_1,...,t_n)\}$.

For example, taking $T=PA$ we have terms like $(1+0)$, $(1+(1\cdot 1))$, $(1+1)+(1+1)$, etc., and $T$ proves all the relevant equalities. So $Term_{PA}\models PA$.
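To see why this works out (a sketch, assuming the language of $PA$ here includes $0$, $1$, $+$, $\cdot$ as in the terms above): every closed term is provably equal to a numeral, e.g.
$$PA \vdash (1+(1\cdot 1)) = (1+1),$$
so $[(1+(1\cdot 1))] = [(1+1)]$ in $Term_{PA}$, and $Term_{PA}$ ends up being a copy of the standard model $\mathbb{N}$.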

However, in general $Term_T$ is not a model of $T$: even ignoring the situation where there are no closed $L$-terms in the first place (which is somehow the "boring" obstacle), consider the case where $L$ consists of a single constant symbol $c$ and a unary relation $U$ and $T=\{\exists xU(x)\}$. Then $Term_T$ has a single element, namely (the one-element equivalence class of) the term $c$, but since $T\not\vdash U(c)$ we have $Term_T\not\models U(c)$ and so $U^{Term_T}=\emptyset$. In particular $Term_T\models\neg\exists xU(x)$, so $Term_T\not\models T$.


So this reduces the problem to the following question:

When does $Term_S\models S$?

The goal of course is to prove "Every $T$ can be 'embedded' in some $S$ such that $Term_S\models S$" (since then the reduct of $Term_S$ to the language of $T$ should be a model of $T$). This takes us to our second key idea: the witness property. A theory $S$ has the witness property iff whenever $S\vdash\exists x\varphi(x)$ there is some closed term $t$ such that $S\vdash\varphi(t)$. You can think of this as a cousin of completeness: a complete theory leaves no disjunction unclarified ($T$ is complete iff whenever $T\vdash \varphi\vee\psi$ we have $T\vdash\varphi$ or $T\vdash\psi$), and a theory with the witness property leaves no existential claim unclarified (if $T$ has the witness property then $T$ never tells us something exists without giving us an explicit "named" example).
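For concreteness, the standard way to arrange the witness property (a sketch of the usual "Henkin constants" device, which the bullet points below don't spell out) is to add, for each formula $\varphi(x)$ with one free variable, a fresh constant symbol $c_\varphi$ together with the axiom
$$\exists x\,\varphi(x) \rightarrow \varphi(c_\varphi),$$
repeating this over the enlarged language (countably many times) so that every existential formula of the final language has a witnessing closed term.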


At this point the rest of the proof is basically mechanical:

  • Prove that if $T$ is a consistent $L$-theory then there is a language $L'\supseteq L$ and a complete consistent $L'$-theory $T'$ such that $T\subseteq T'$ and $T'$ has the witness property. (This is an elaboration on Lindenbaum's lemma; note that LL doesn't give you the witness property.)

  • Prove that if $S$ is a complete consistent theory with the witness property, then $Term_S\models S$. (This is a straightforward induction on formula complexity.)
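To see where the two hypotheses on $S$ actually get used in that induction, here is a sketch of the two interesting cases (the atomic and conjunction cases are immediate from the definitions):

  • Negation (uses completeness and consistency): $Term_S\models\neg\psi$ iff $Term_S\not\models\psi$ iff (by the induction hypothesis) $S\not\vdash\psi$ iff (by completeness plus consistency of $S$) $S\vdash\neg\psi$.

  • Existential (uses the witness property): $Term_S\models\exists x\,\varphi(x)$ iff $Term_S\models\varphi(t)$ for some closed term $t$, iff (by the induction hypothesis) $S\vdash\varphi(t)$ for some closed term $t$, iff $S\vdash\exists x\,\varphi(x)$; the last equivalence uses existential generalization in one direction and the witness property in the other.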

Noah Schweber