
I have some questions about classical logic (I am not a logician, so please be indulgent).

First, for propositional logic:

$(I)$ Can you explain to me the difference between the symbols "$\vdash$" and "$\models$" ?

$(II)$ I would like to understand how syntax and semantics are linked, and more precisely, how rules of inference (for example: https://en.wikipedia.org/wiki/List_of_rules_of_inference) and truth tables (of the logical operators $\vee$, $\wedge$, etc.) are linked. I'm wondering "what comes first", and I will continue with an example (modus ponens) to keep it simple:

$(1)$ If we consider "first" truth tables, we have that $[p \ \wedge \ (p \rightarrow q)] \rightarrow q$ is a tautology. Can we conclude that $p, (p \rightarrow q) \vdash q$ (or that $p, (p \rightarrow q) \models q$) ?

$(2)$ If we consider "first" rules of inference, we have that $p, (p \rightarrow q) \vdash q$ (or $p, (p \rightarrow q) \models q$ ?). Can we conclude that $[p \ \wedge \ (p \rightarrow q)] \rightarrow q$ is a tautology ?

Basically, if we imagine how we should begin to create propositional logic, what would come to mind first ? Also, let's say for example that rules of inference are the most natural thing to begin with: are the truth tables built in order to respect these rules (i.e., to obtain a tautology, as in my previous example) ?
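As a small sanity check on the tautology claim in my example, the truth-table computation can be mechanized in a few lines of Python (the function names are mine):

```python
from itertools import product

def implies(a, b):
    """Material conditional: a -> b is false only when a is true and b is false."""
    return (not a) or b

def is_tautology():
    """Check that [p & (p -> q)] -> q is true under all four truth assignments."""
    return all(
        implies(p and implies(p, q), q)
        for p, q in product([True, False], repeat=2)
    )

print(is_tautology())  # True: the formula holds in every row of its truth table
```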

Now, consider predicate logic:

$(I)$ For example, where does the universal generalization rule (again: https://en.wikipedia.org/wiki/List_of_rules_of_inference) come from ? Is it assumed to be "true", as modus ponens could be assumed to be "true" in the case of propositional logic ?

Because we don't have truth tables in the case of predicate logic, I have the feeling that it is less "balanced" than propositional logic (as if, from now on, we need rules of inference such as universal generalization (so, a more syntactic approach in a sense) in order to perform proofs, unlike before).

$(II)$ Do we have $\neg \ (\forall x, P(x)) \vdash \exists x, \neg \ P(x)$ or $\neg \ (\forall x, P(x)) \models \exists x, \neg \ P(x)$ ?
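For what it's worth, here is a quick finite-domain check of this equivalence (my own sketch; exhausting all predicates on a small domain is only a sanity check, not a proof for arbitrary domains):

```python
from itertools import product

# Over every predicate P on a small finite domain, compare the truth values of
# "not (forall x, P(x))" and "exists x, not P(x)".
domain = range(4)

def agree_on_all_predicates():
    for values in product([True, False], repeat=len(domain)):
        P = dict(zip(domain, values))
        lhs = not all(P[x] for x in domain)   # ~(forall x, P(x))
        rhs = any(not P[x] for x in domain)   # exists x, ~P(x)
        if lhs != rhs:
            return False
    return True

print(agree_on_all_predicates())  # True: the two sides agree on this domain
```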

Finally, for both propositional logic and predicate logic:

$(I)$ Are there simple examples of proof using only a syntactic approach ?

I am sorry if certain passages are not very clear; I will try to be more explicit if needed. Thank you for your help.

1 Answer

See "Logical Consequence" in the Internet Encyclopedia of Philosophy (3 excellent articles); also, for a basic approach (on syntax, semantics and metalogical notions such as soundness and completeness), Papineau, Philosophical Devices.


1) The two symbols mean "[the set $\Gamma$ of premises] has, as logical consequence, [proposition $P$]". The first one ($\vdash$) means "... has, as logical consequence from a syntactic point of view ...". The second ($\models$): "... from a semantic point of view ...".

Note: these symbols denote a relation (the logical consequence relation); this relation is not a function, since a given set of premises can have more than one logical consequence.

2) Semantics deals with questions such as: "which formulas are true in all possible interpretations? which in none? which only in some?" or "is there a possible interpretation in which all the given premises are true and the alleged consequence false?". Syntax deals with questions such as: "is this string of symbols / formulas conformable to the syntactic rules?", "is there a way that leads from this set of formulas to this other one by using a rule, and only one, at each step of the process?"

Note: in the same way, English syntax tells you that out of "I wonder whether I will take the money. Will I run?" one cannot construct "I wonder whether I will take the money & will I run?". Not because of the meaning / interpretation / semantics of the sentences, but because the form is not grammatically correct.

For example, having written "(A-->(BvC)), ~(BvC)", can I write after that the formula "~A"? Syntax answers that the string of formulas

                     " (A--> (BvC)), ~(BvC), ~ A " 

is correct ("grammatically"), being allowed by the modus tollens rule of inference: "from (X --> Y) and ~Y, infer ~X".

Semantics is often considered more basic than syntax: syntax is supposed to reflect semantics, and the semantic notion of logical consequence is standardly considered the ground of syntactic logical consequence.

Analogously, (almost) every rule of inference has a corresponding tautology (having the form of a conditional) which has the value "true" in all the rows of its truth table. One will say that the rule of inference "from (X --> Y) and ~Y, infer ~X" is a good rule because the formula "[(X --> Y) & ~Y] --> ~X" is logically true (true in all possible interpretations).

Although the rule by itself totally abstracts from interpretations (truth values), the tautology guarantees, so to speak, that the rule never leads from true premises to a false conclusion.
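The row-by-row check behind this claim can be sketched in Python (a minimal verification of my own, not part of any logic library):

```python
from itertools import product

def implies(a, b):
    """Material conditional: a -> b is false only when a is true and b is false."""
    return (not a) or b

# Verify the modus tollens tautology [(X --> Y) & ~Y] --> ~X:
# the conditional must come out true in every row of its truth table.
rows_ok = all(
    implies(implies(x, y) and (not y), not x)
    for x, y in product([True, False], repeat=2)
)
print(rows_ok)  # True: no row leads from true premises to a false conclusion
```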

A caveat: it is not absolutely true that logical consequence can be reduced to logical implication via tautological conditionals; see the answer I got to the question "How to show precisely that the conditional definition of validity is equivalent to the standard semantic definition".