"Old" programming languages like Fortran, Cobol and LISP arose before serious mathematical theory of programming languages was developed. They inspired the development of such theory, but are full of idiosyncracies and features which from a mathematical point of view can best be described as "warts". However, each of the old languages in essence has a mathematical core. We can extract their cores and see what mathematical models those have.
For LISP the core would be a small subset of Scheme. Let us focus just on the list-processing part, and set aside the fact that in LISP and Scheme one can mutate state with setq and the like.
A small core for LISP could contain the atoms, the S-expressions, and the functions -- by which we mean some basic functions, $\lambda$-abstractions, and recursively defined functions. There are several ways to model this much of LISP, but perhaps the neatest is by domain theory.
Consider the following simplified statement about (a core of) LISP. Every value is
- either an atom (a literal),
- or a cons,
- or a function.
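As a loose illustration (my own, not part of the mathematics), this three-way taxonomy can be mirrored directly as a tagged union; the names `Atom`, `Cons` and `Fn` are invented for the sketch:

```python
from dataclasses import dataclass
from typing import Callable, Union

# A LISP value is an atom, a cons pair, or a function -- nothing else.
@dataclass(frozen=True)
class Atom:
    name: str          # e.g. "nil", "foo", a number's printed form

@dataclass(frozen=True)
class Cons:
    car: "Value"
    cdr: "Value"

@dataclass(frozen=True)
class Fn:
    apply: Callable[["Value"], "Value"]

Value = Union[Atom, Cons, Fn]

nil = Atom("nil")
# A pair and the identity function are both first-class values:
pair = Cons(Atom("1"), Atom("2"))
identity = Fn(lambda v: v)
```

Note that `Fn` makes the third branch of the union a genuine function space, which is exactly where the mathematical trouble discussed below arises.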
Let us try to express this idea mathematically. We should find a set $D$ of LISP values which
- contains the set $A$ of all atoms (whatever they are)
- contains $D \times D$ because every cons is a pair of values (car and cdr)
- contains $D \to D$ because "functions are data"
We can express this idea as a requirement that
$$A + D \times D + (D \to D) \subseteq D.$$
Unfortunately, this is not possible: by Cantor's theorem the set $D \to D$ of all functions is strictly larger than $D$, unless $D$ contains just one element (and it cannot, because there are many atoms).
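To see why, here is the diagonal argument in a couple of lines. Suppose $e : D \to (D \to D)$ were a surjection, and pick two distinct atoms $a \neq b$ in $D$. Define
$$g(x) = \begin{cases} b & \text{if } e(x)(x) = a, \\ a & \text{otherwise.} \end{cases}$$
Then $g : D \to D$, so $g = e(d)$ for some $d \in D$ by surjectivity. But $g(d) \neq e(d)(d) = g(d)$ by construction, a contradiction.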
However, there is a way out: the set $D \to D$ of functions is larger than $D$ only if we take all functions -- but we only need the computable functions, or some slightly larger set of them. Indeed, as Dana Scott showed long ago, we should look for a space $D$ (rather than a mere set) and take $D \to D$ to be the space of continuous functions (every computable map is continuous). As it turns out, there are several kinds of "spaces" that will do, and what they all have in common is that they capture the idea of information content and information processing. I cannot go into technical details on how this is done, but you can read about domain theory to find out more.
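One concrete payoff of the continuity view can be sketched in plain Python (this is my own illustration, not Scott's construction): a recursive definition is the limit of its finite approximations. Starting from the "bottom" function defined nowhere, each application of the functional `F` yields a function defined on more inputs, and the recursive factorial is the limit of this increasing chain.

```python
# The "bottom" element of the function space: defined nowhere.
def bottom(n):
    raise ValueError("undefined")

# The functional whose least fixed point is factorial.
def F(f):
    return lambda n: 1 if n == 0 else n * f(n - 1)

def iterate(k):
    """The k-th approximation F^k(bottom), defined on inputs 0..k-1."""
    f = bottom
    for _ in range(k):
        f = F(f)
    return f

fact5 = iterate(6)(5)   # at least 6 iterations are needed to be defined at 5
```

Evaluating `iterate(3)(5)` still raises the "undefined" error, while `iterate(6)(5)` succeeds: the chain of approximations really does converge input by input, which is the intuition behind Kleene's fixed-point theorem.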
Let us just think about what $D$ must contain. Of course, it contains various atoms, including an element corresponding to nil. Then it contains all finite lists of elements, since a list (x1 ... xn) is just nested pairs (x1, (x2, (..., (xn, nil)))). Next, it contains lots and lots of functions: anything that can be defined by a lambda, and, with a bit of care, also anything that can be defined by recursion.
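The "a list is just nested pairs" remark can be checked mechanically. In this Python sketch (the names `cons`, `car`, `cdr` are chosen to echo LISP; the encoding via tuples is my own), a list is built by consing onto nil, and a recursive function such as `length` walks the pairs:

```python
nil = ("nil",)                       # a distinguished atom playing the role of nil

def cons(x, y): return ("cons", x, y)
def car(p): return p[1]
def cdr(p): return p[2]

def lisp_list(*xs):
    """Build (x1 ... xn) as the nested pairs (x1, (x2, (..., (xn, nil))))."""
    out = nil
    for x in reversed(xs):
        out = cons(x, out)
    return out

def length(p):
    """A recursive function on lists, as promised by the function part of D."""
    return 0 if p == nil else 1 + length(cdr(p))

xs = lisp_list(1, 2, 3)
```

Here `car(xs)` is `1`, `car(cdr(xs))` is `2`, and `length(xs)` is `3`, so atoms, pairs and functions suffice to recover list processing.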
In summary, the answer to your question is that the mathematical essence of LISP is a space $D$ which contains the atoms, its own product $D \times D$ and its own function space $D \to D$.
As for the stuff about logic and propositional connectives: forget about it on a first pass through the mathematical models of LISP. Come back to it later, and look up the Curry-Howard correspondence and realizability.