
The book I'm reading on model theory ('Model Theory' by Maria Manzano) offers no explanation for why we need an assignment function in addition to the interpretation function. The interpretation function assigns interpretations to constant, function and relation symbols. Why does it stop there? Why not assign interpretations to the variables as well? This would "bundle up" the interpretation function (as currently defined) and the assignment function (as currently defined). I guess this involves adding all the variables to the signature - but what's wrong with that?

An intuitive explanation would be preferable to a technical one (if feasible).

Thank you!

EDIT 1: OK, after thinking some more, I agree that this would make it hard to talk about quantification. For example, $ (\forall x) R(x,y,z)$. Here $x$ might get assigned to some fixed element in the domain, and we could not change its assigned element without changing the interpretation function itself. If we go that route, then to test whether $ (\forall x) R(x,y,z)$ is true, we'd have to run through variants of the interpretation function, each assigning a different element to $x$ but otherwise the same. Of course, each interpretation function would come from a different structure.

So we'd be quantifying over structures, rather than the more aesthetically pleasing option of quantifying over elements of the domain, via variables, within a fixed structure.

EDIT 2: Also wanted to add that if we do combine the interpretation function and assignment function into one as above, then there is no difference between a constant and a variable anymore. So the question is also asking, by implication, why we need variables. Variables are required for the convenience they offer as pointed out in Edit 1.
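The point of Edit 1 can be made concrete with a small sketch (the domain, the relation, and all names here are invented for illustration): keeping the interpretation and the assignment separate, $(\forall x)\, R(x,y,z)$ is evaluated by running through $x$-variants of a single assignment, while the structure itself is never touched.

```python
# Illustrative sketch (all names invented): a fixed structure with
# domain {0, 1, 2} and one ternary relation R, plus a separate
# assignment function mapping variables to domain elements.

DOMAIN = {0, 1, 2}

# The interpretation fixes R once and for all (here: "a <= b <= c").
R = {(a, b, c)
     for a in DOMAIN for b in DOMAIN for c in DOMAIN
     if a <= b <= c}

def forall_x(assignment):
    """Evaluate (forall x) R(x, y, z): run through the x-variants
    s[x := d] of the assignment; the interpretation of R stays fixed."""
    return all(
        (d, assignment["y"], assignment["z"]) in R
        for d in DOMAIN  # the assignment variant s[x := d]
    )

print(forall_x({"y": 2, "z": 2}))  # True: every d satisfies d <= 2 <= 2
print(forall_x({"y": 0, "z": 2}))  # False: d = 1 gives 1 <= 0, which fails
```

Only the assignment varies between the two calls; the relation $R$ — the structure — is fixed, which is exactly the convenience the edit describes.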

  • The interpretation defines the model as a whole. Assignment applies to specific formulae. – Henno Brandsma Dec 21 '14 at 07:28
  • @HennoBrandsma So I thought a bit more and added an edit to the question. Is this what you're referring to, the issue of quantifying within a fixed structure as opposed to having to quantify over structures themselves? – Anonymous Dec 21 '14 at 11:10
  • Yes. You don't want to interpret your example formula as one where the relation can vary together with the $x$, $y$, $z$. You fix the relation, and then afterwards vary the variables. So you need to fix the model (i.e. the interpretation of relations, constants etc.) and interpret formulae within that fixed model. Afterwards you can talk about formulae that stay true regardless of the model interpretation. – Henno Brandsma Dec 21 '14 at 12:06
  • Notice that, while in general the truth value of a formula must be checked given an interpretation and an assignment function (written $\mathcal{I},\mathcal{\alpha}\models\phi$), for closed formulas (i.e. formulas without free variables) you can define what it means to be true in an interpretation ($\mathcal{I}\models\phi$), without mentioning the assignment. This is another good reason for keeping them separated. – logi-kal May 06 '21 at 15:03

1 Answer


The variables "vary over" the elements of the domain.

If we consider the domain $\mathbb N$ of natural numbers with the "usual" interpretation for the (binary) predicate $<$, the (unary) function $S$ ("successor"), the (binary) functions $+$ ("sum") and $\times$ ("product"), and for the (individual) constant $\overline 0$, what is the "meaning" in this interpretation of a formula like:

$x = \overline 0$ ?

Clearly, if we "assign" as reference to the variable $x$ the element $0$ of the domain, we obtain:

$(x = \overline 0)[x := 0]$

which is true, while if we "assign" as reference to the variable $x$ the element $1$ of the domain, we obtain:

$(x = \overline 0)[x := 1]$

which is false.

The assignment function $s : Var \to \mathbb N$ "formalizes" this fact: for a "specific" domain of interpretation, it assigns a "reference" to the (individual) variables, allowing the inductive truth definition to "calculate" the truth value of formulae.
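A minimal sketch of this (the helper names are mine, not the answer's): the interpretation fixes the constant symbol $\overline 0$ as the number $0$ once and for all, while the assignment $s$ supplies a reference for the variable $x$; changing $s$ at $x$ flips the truth value of $x = \overline 0$ without touching the interpretation.

```python
# Minimal sketch (helper names invented): interpretation vs. assignment
# for the formula  x = 0bar  over the natural numbers.

ZERO_BAR = 0  # the interpretation: the constant symbol 0bar denotes the number 0

def eval_eq_zero(s):
    """Truth value of  x = 0bar  under an assignment s : Var -> N."""
    return s["x"] == ZERO_BAR

def variant(s, var, d):
    """The assignment s[var := d]: agrees with s except that var maps to d."""
    return {**s, var: d}

s = {"x": 5}
print(eval_eq_zero(variant(s, "x", 0)))  # True:  (x = 0bar)[x := 0]
print(eval_eq_zero(variant(s, "x", 1)))  # False: (x = 0bar)[x := 1]
```

The inductive truth definition works the same way for compound formulae: quantifiers are handled by evaluating the subformula under variants $s[x := d]$ of the one assignment, with the interpretation held fixed.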