7

Remark: I assume the equality symbol "$=$", with its standard semantics, is always included in first-order logic.

Suppose we want to formalize some first-order theory. For our first-order language, we need to fix some set of variables and some set of constants, e.g. one constant in group theory and Peano arithmetic, and no constants at all in ZFC. The choice of the set of constants is usually made quite canonically.

Also, the cardinality of the set of constants can have a direct influence on the semantics: if the set of constants has cardinality $\kappa$ and we add, for any two distinct constants $c,d$, the axiom $c\neq d$, then each model must have cardinality at least $\kappa$.
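Spelled out (as a worked example of my own, with the constants indexed as $(c_\alpha)_{\alpha<\kappa}$), the axiom scheme in question is:

```latex
% Axiom scheme forcing every model to have size at least \kappa,
% for a family of constants (c_\alpha)_{\alpha < \kappa}:
\[
  \{\, c_\alpha \neq c_\beta \;:\; \alpha < \beta < \kappa \,\}
\]
% Any model M must interpret the c_\alpha injectively,
% so |M| \ge \kappa.
```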

What consequences does the choice of the set of variables have?

I can think of these cases:

  • One usually takes an infinite set of variables in order to be able to nest arbitrarily many quantifiers. Example: Let $P$ be a 1-ary predicate and $F$ a 2-ary function; then the formula $$ \forall x_1 \forall x_2 \forall x_3 \dots \forall x_n : P(F(x_1,F(x_2,F(x_3, \dots ,F(x_{n-1},x_n)))))$$ only makes sense if one has at least $n$ distinct variables.
  • If one has at least $n$ distinct variables, one can add axioms like $$ \exists^{\ge n} x : P(x) \qquad \text{or} \qquad \exists^{\le n} x : P(x)$$ which force each model to contain at least $n$, respectively at most $n$, elements satisfying $P$ (taking $x = x$ for $P(x)$ bounds the size of the whole model).
  • If one wants to enumerate the set of all strings and do some Gödelian reasoning, one must take a countable set of variables.
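The counting quantifiers in the second point are abbreviations; written out in plain first-order syntax they consume $n$ distinct variables, which is why the lower bound on the variable supply matters:

```latex
% Expansion of the counting quantifier \exists^{\ge n}
% into plain first-order syntax:
\[
  \exists^{\ge n} x : P(x) \;:\equiv\;
  \exists x_1 \dots \exists x_n
  \Big( \bigwedge_{1 \le i < j \le n} x_i \neq x_j
        \;\wedge\; \bigwedge_{i=1}^{n} P(x_i) \Big)
\]
```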

Do you know any more cases where the cardinality of the set of variables matters? Does it, as long as it is infinite, affect the class of models of our formal theory?

Lucina
  • Actually, there's a (nontrivial) theorem that $3$ variables suffice in a sense, for ZF(C) type theories. – Berci Sep 27 '19 at 19:48

2 Answers


In standard first-order logic, there is always an infinite set of variables available. As you point out, if there were only finitely many variables, that would severely diminish the expressiveness of the language. I've never seen such a thing studied, and it seems like more of a syntactic curiosity than something that is really of interest as a logic.

Assuming you have infinitely many variables, it doesn't matter how many you have (so, the standard setup is just to say you have countably infinitely many). Any formula only uses finitely many variables, and the semantic value of a sentence (i.e., which models satisfy it) is unchanged if you replace all of its variables with different variables. Similarly, if you want to talk about proofs, any proof will involve only finitely many formulas with finitely many variables, which you can replace freely with other variables without changing the validity of the proof. So, if you have some fixed countably infinite set of variables, you can always just replace all the variables in a given formula by variables from that set.
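To illustrate the renaming point, here is a minimal sketch of my own (not part of the answer): formulas are encoded as nested tuples, and a helper substitutes variable names throughout. The encoding and the name `rename_vars` are assumptions made for the example; the point is only that renaming is a purely syntactic, mechanical operation, so it cannot change which models satisfy the formula.

```python
# Illustrative sketch: formulas as nested tuples (an ad-hoc encoding),
# with a helper that substitutes variable names throughout.

def rename_vars(formula, mapping):
    """Replace every string found in `mapping` by its image, recursing
    through nested tuples; everything else is left untouched."""
    if isinstance(formula, str):
        return mapping.get(formula, formula)
    return tuple(rename_vars(part, mapping) for part in formula)

# forall x1 : P(F(x1, x2))  -- ad-hoc encoding, for illustration only
phi = ("forall", "x1", ("P", ("F", "x1", "x2")))
psi = rename_vars(phi, {"x1": "y1", "x2": "y2"})
print(psi)  # ('forall', 'y1', ('P', ('F', 'y1', 'y2')))
```

Since the substitution touches only the syntax, `phi` and `psi` have the same semantic value, which is why a fixed countably infinite stock of variables is always enough.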

Note that despite their syntactic similarity, constants and variables are in some sense very different sorts of things and you should not think of them as being on an equal footing. Variables are just part of the basic underlying syntactic machinery of the logic, analogous to connectives and quantifiers. No matter what theory you are studying, you still use the same set of variables (not that you have to, but there's just no reason to change it). Constant symbols, on the other hand, are non-logical symbols which are chosen as part of the specific theory you want to talk about and which receive interpretations in models, along with relation and function symbols (in fact, I would consider them to be a special case of function symbols, namely the $0$-ary function symbols).

Eric Wofsey

EDIT: Note that Eric Wofsey's answer here differs from mine. This is not a contradiction; we're describing two different, totally satisfactory ways of making things precise. This reflects part of why this isn't generally talked about (not saying that's a good thing) - there's not really a "best implementation," and there are many "good enough" implementations with no significant differences (in my opinion) between them.

This is a good question, and one which is often passed over (unfortunately). Generally the answer you'll see is "all that you could want," and formally what that means is that we have a proper class of every relevant type of thing. In particular, there is no "set of all first-order formulas."

A formal presentation is a bit messy, but here is one approach:

To get as many of everything as we want, we'll proceed as follows:

  • An $n$-ary relation symbol is any set of the form $\langle \langle 0,n\rangle,x\rangle$ for $x$ a set.

  • An $n$-ary function symbol (and we'll treat constant symbols as nullary function symbols) is any set of the form $\langle \langle 1,n\rangle,x\rangle$ for $x$ a set.

  • A variable is any set of the form $\langle \langle 2,2\rangle,x\rangle$ for $x$ a set.

Here $\langle\cdot,\cdot\rangle$ is your favorite ordered pairing notion.

We can then build, on this basis, the proper classes of all first-order terms/formulas/sentences. Note that the intrusion of proper classes here isn't really making things any worse, since we already have proper classes kicking in on the semantics side (how many structures are there?).

Theorems about first-order logic are then phrased carefully by talking about sets of symbols - e.g. the compactness theorem is phrased as

For any set $\Sigma$ of relation symbols, function symbols, and variables, and any set $X$ of sentences using only the logical symbols and the symbols from $\Sigma$, if every finite subset of $X$ has a model then $X$ has a model.

This can raise the worry that some natural results about first-order logic are actually ill-posed, in that when we try to formalize them naively we wind up talking about classes in an illegal way. This can be gotten around by just being careful about phrasing them; alternatively, we can work in a broader theory like NBG which trivializes the issue.


This is not to say that restricting to smaller collections of symbols is uninteresting (and for example you're right that Gödel numbering doesn't work when our language gets too big); however, it's not something we do a priori. The general development of first-order logic does take this proper class approach.

It's also worth noting that all proper classes need not be equivalent (see e.g. here) - and so to avoid any possible issues I've really gone for the biggest possible type (e.g. the class of $2$-ary relation symbols is in definable bijection with the class of all sets).

Noah Schweber
  • I guess this makes sense if you want to eventually talk about infinitary logic. But for ordinary first-order logic, surely there's no reason to ever want more than a countably infinite set of variables? I don't think of variables as being in the same "category" as things like relation symbols--they are part of the underlying apparatus of first-order logic (like connectives), not something you specify in order to talk about a specific theory. – Eric Wofsey Sep 27 '19 at 20:49
  • @EricWofsey What if you want to talk about $\kappa$-types? :P You can certainly get away with only countably many variables, but I think it's better to have a proper class of them, at least in the background somewhere (and since we definitely do need proper classes of non-logical symbols it doesn't cost us anything). – Noah Schweber Sep 27 '19 at 21:34
  • Well, I prefer to think of types by adding constant symbols, not using variables. I also think it's misleading to say we need proper classes of non-logical symbols--that's kind of like saying you need proper classes of elements to do group theory, since groups can be arbitrarily large. Of course, there are a proper class of things that could be non-logical symbols, but you're always studying some particular set of them at a time. – Eric Wofsey Sep 27 '19 at 21:45
  • And there's no need to set aside certain sets ahead of time to represent non-logical symbols--you can use any set at all as a non-logical symbol (except for the ones we reserve for logical symbols), just like you can use any set at all as an element of a group. (This means that when you specify a signature, you need to not just list the symbols themselves but also explicitly state their arity, but really that seems like the better approach to take anyways.) – Eric Wofsey Sep 27 '19 at 21:46
  • @EricWofsey I'm not going for parsimony, I'm going for the simplest framework that trivializes everything. Setting things up as I did resolves everything once and for all, you never have to tweak anything. I don't see a drawback. – Noah Schweber Sep 27 '19 at 22:00
  • It also, as you said in your first comment, has the advantage of not needing to be changed when we shift to infinitary logic. I really like a one-and-done approach, it lets me ignore unimportant issues while focusing on things that actually matter. – Noah Schweber Sep 27 '19 at 22:01
  • (As to setting aside specific symbol-classes, in my limited experience I've found that this tends to clarify things - including for myself, way back when.) Finally, one advantage to having a proper class of variables is that there's no obstacle to talking about infinite-arity relation and function symbols (although of course those'd have to be folded into the set-aside-classes); even though this is a thing we don't do generally, I think it is something we should be able to do - grasping how "finite-arity first-order logic" is better than "arbitrary-arity first-order logic" is fun and valuable. – Noah Schweber Sep 27 '19 at 22:24
  • Thanks, Noah, for that extensive answer! But I don't understand why the informal statement "you have as many variables as you could want" drives you to a proper class of variables. You mentioned "$\kappa$-types" and "infinite-arity function symbols", so that may be your reason. But I am more interested in the classical theory of first-order predicate logic (with equality). Is there any reason to want a proper class of variables for classical first-order logic? – Lucina Sep 28 '19 at 21:29
  • It seems to me that the very concept of "language" should be a countable one (before it gets perverted by set theorists :D). – Lucina Sep 28 '19 at 21:31
  • @Lucina Well, $\kappa$-types are part of classical first-order logic. As Eric Wofsey says we can treat them with constant symbols instead of variables, but either way we need $\kappa$-many "free symbols" to deal with them. Eric is right that we can get away with merely countably many variables. However, since proper classes creep into logic in much more direct ways (how many languages are there? how many structures up to isomorphism in a given language are there? etc.), I don't see the advantage of doing that. The above approach costs nothing and preemptively resolves a ton of silly stuff. – Noah Schweber Sep 28 '19 at 21:32
  • Now if you want to restrict attention to countable languages, that's certainly something you can do. However, even in that case I'd argue it's a good idea to develop first a framework for handling arbitrary stuff: it just gives you sure footing later on that basically anything you want to do, you can do. And keep in mind that uncountable languages are useful even if you only want to talk about countable languages. (If you really object linguistically, just call them "signatures.") – Noah Schweber Sep 28 '19 at 21:35
  • Basically, the approach in my answer is designed to give the user maximal freedom to ignore unimportant details. Afterwards of course one can restrict attention in many different ways. If you're looking for the narrowest framework that lets you implement some part of logic, that's quite a different story, but I really like starting with something that I know I'll never have to alter later on. – Noah Schweber Sep 28 '19 at 21:41
  • I'm an undergraduate student who has never heard about $\kappa$-types, so I cannot really appreciate that argument. But concerning the other two appearances of proper classes in logic you mentioned: if you try to avoid thinking set-theoretically about formal languages, both questions are solved. Because there is only a potentially infinite collection of letters, there are only potentially infinitely many (i.e. countably many) languages. – Lucina Sep 30 '19 at 13:41
  • And from the viewpoint of formal logic, it seems to me that it does not make sense to distinguish between elementarily equivalent models. If we identify a model with the collection of all formulas it satisfies, even the class of all models is no larger than the power class of the set of formulas. – Lucina Sep 30 '19 at 13:44
  • ...Well, that is only my uneducated opinion. I am writing this in the hope that you can tell me better :D – Lucina Sep 30 '19 at 13:45
  • @Lucina "If we identify a model with the collection of all formulas it satisfies" But per my answer that's not satisfactory, since that collection of formulas will not in general have the strong witness property! Even restricting everything to the countable case, the right analogue of "model" is "complete theory with the strong witness property." And even at the countable level a countable complete theory may have many inequivalent countable complete extensions with the strong witness property, corresponding to how a countable complete theory can have many nonisomorphic countable models. – Noah Schweber Oct 02 '19 at 17:06
  • This addresses your claim that "it does not make sense to distinguish between elementarily equivalent models." But it gets worse: the number of uncountable models a countable theory has impacts the number of countable models it has! Specifically, suppose $T$ is a countable theory with exactly one model of size $\aleph_1$ up to isomorphism. Then $T$ has only countably many countable models, and moreover those models are "indexed by $\mathbb{N}$" in a natural way (if there are infinitely many in the first place). – Noah Schweber Oct 02 '19 at 17:09
  • So even if we reject uncountable sets as meaningless, the ideas around them are useful for purely countable arguments. (Indeed, a huge theme in modern logic is that higher infinities have implications even at the finite level!) So not only is your proposed "reduction" of models to complete theories fundamentally missing the key point even at the countable level, but closing off discussion of set theory means we lose a valuable perspective on concrete objects. – Noah Schweber Oct 02 '19 at 17:12
  • (Sorry, "my answer" in the last three comments refers to my answer to your other question, asking how models and complete theories relate - the point being that they do not correspond well until we add the further condition of the strong witness property.) – Noah Schweber Oct 02 '19 at 17:13