The book I'm reading on model theory ('Model Theory' by Maria Manzano) offers no explanation for why we need an assignment function in addition to the interpretation function. The interpretation function assigns interpretations to constant, function and relation symbols. Why does it stop there? Why not assign interpretations to the variables as well? This would "bundle up" the interpretation function (as currently defined) and the assignment function (as currently defined). I guess this involves adding all the variables to the signature - but what's wrong with that?
An intuitive explanation would be preferable to a technical one (if feasible).
Thank you!
EDIT 1: OK, after thinking some more, I agree that this would make it hard to talk about quantification. For example, consider $ (\forall x) R(x,y,z)$. Here $x$ would be assigned some fixed element of the domain, and we couldn't change its assigned element without changing the interpretation function itself. If we go that route, then to test whether $ (\forall x) R(x,y,z)$ is true, we'd have to run through variants of the interpretation function, each assigning a different element to $x$ but otherwise the same. Of course, each such interpretation function would come from a different structure.
So we'd be quantifying over structures, rather than the more aesthetically pleasing option of varying only the assignment to a variable within a single fixed structure.
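The argument in Edit 1 can be made concrete with a small sketch (hypothetical names; this is not from Manzano's book). The structure, i.e. the domain together with the interpretation of $R$, stays fixed throughout; only the assignment varies when we evaluate the quantifier:

```python
# A fixed structure: a domain and an interpretation for the relation symbol R.
domain = {0, 1, 2}
interpretation = {
    # R holds of (a, b, c) exactly when a <= b <= c (an arbitrary choice for illustration)
    "R": {(a, b, c) for a in domain for b in domain for c in domain if a <= b <= c}
}

def satisfies_forall_x_R(assignment):
    """Does the structure satisfy (forall x) R(x, y, z) under `assignment`?

    We run through the x-variants of the assignment -- same function except
    possibly at x.  The structure (domain, interpretation) never changes.
    """
    for d in domain:
        variant = {**assignment, "x": d}  # reassign x, leave y and z alone
        triple = (variant["x"], variant["y"], variant["z"])
        if triple not in interpretation["R"]:
            return False
    return True

# The free variables y and z get their values from the assignment:
print(satisfies_forall_x_R({"x": 0, "y": 2, "z": 2}))  # True: a <= 2 <= 2 for all a
print(satisfies_forall_x_R({"x": 0, "y": 1, "z": 0}))  # False: 1 <= 0 fails
```

If variables lived in the interpretation function instead, the loop above would have to construct a new interpretation (hence a new structure) for each value of $x$, which is exactly the awkwardness Edit 1 describes.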
EDIT 2: I also wanted to add that if we do combine the interpretation function and the assignment function into one, as above, then there is no longer any difference between a constant and a variable. So the question is also asking, by implication, why we need variables at all. The answer is that variables earn their keep through the convenience pointed out in Edit 1: they let us quantify by varying the assignment while the structure stays fixed.