
If you open any textbook (or video, or notes) on how to write proofs in math (not just in formal logic), it will say that to prove a conditional $A \Rightarrow B$, one needs to assume that $A$ is true and prove that $B$ is true. While I understand that this is true if we work in logic, this doesn't seem to be true in mathematics in general: in mathematics more broadly, $A$ may stand for the Fundamental Theorem of Calculus, and $B$ may stand for the statement that $4$ is even. In this scenario, we may assume that $A$ is true and, after forgetting about this assumption, independently prove that $B$ is true. But I suppose no one would say that the Fundamental Theorem of Calculus implies that $4$ is even. It seems that in mathematics in general (outside logic), proving $A \Rightarrow B$ amounts to assuming $A$ is true and proving that $B$ is true using the assumption $A$. So it seems that the principle "to prove $A\Rightarrow B$, assume $A$ and prove $B$" does not work in mathematics more broadly.

So my questions are:

  1. If in math more broadly the principle "to prove $A\Rightarrow B$, assume $A$ is true and prove $B$ is true" does not always work, why do people teach proofs of conditionals (in broader math, outside logic) in this way? They don't just teach how to prove conditionals in propositional logic; they say (or implicitly suggest) that the same principle works for all mathematical arguments, which seems to be incorrect based on what I said above. I don't see why it is even useful to consider the conditional from propositional logic in the context of an "intro to proof" textbook if such a conditional is not used in real mathematical arguments. Or am I missing something? Is there a way to "rescue" these textbooks and explain how to "go" from the formal conditional in logic to the conditional that mathematicians use?

  2. I've read in other answers on this website (for example, here) that in the context of mathematics $A\Rightarrow B$ means that in any model, if $A$ is true, then $B$ is true. (This also raises the question of why the mentioned textbooks do not explain conditionals in terms of models, if this is the right approach.) Does this somehow account for the fact that the assumption $A$ must be used in proving $B$? (Intuitively, this fact ought to be true if we want to exclude situations like the one I mentioned above.)

  3. Given the two particular statements $A$ and $B$ mentioned above (or any other completely unrelated theorems), we know that $A\not \Rightarrow B$ in the sense of broader mathematics. Then there should exist a model where $A$ is true and $B$ is false. But to talk about models, we first need some kind of set of axioms and some logical language in which we can write $A$ and $B$. Given that my $A$ and $B$ come from pretty different areas of math, it's not obvious that a common set of logical axioms and a reasonable "common logical language" exists, and therefore it's not clear whether the required model exists. Does there really exist a model where $A$ is true and $B$ is false?

2 Answers


$A \implies B$ in formal logic does reflect the meaning it has in "real" mathematics. It's just that in real life we don't typically deal with conditionals where the condition has no relevance to the conclusion at all and thus wouldn't be needed in the proof. We don't encounter statements like $A \implies (B \lor \neg B)$ in practice. But if we did, we could apply just the same techniques, and prove the conclusion without needing the assumption.
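To see this concretely, here is a minimal sketch in Lean 4 (the name `FTC` is just a hypothetical placeholder for any unrelated statement, such as the Fundamental Theorem of Calculus): the hypothesis is introduced and then simply never used, and the proof is still accepted.

```lean
-- A minimal sketch: assume the antecedent, then prove the consequent
-- without ever using the assumption. `FTC` is a placeholder proposition.
example (FTC : Prop) : FTC → 4 % 2 = 0 := by
  intro _hFTC   -- "assume A"
  decide        -- prove "4 is even" independently of the assumption
```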

In mathematics, conditionals often occur (implicitly) universally quantified: "If $x$ is ..., then $x$ is ...", meaning that the conditional holds for all things $x$ of the relevant domain. A proof of the falsity of such a statement then proceeds by finding a counterexample: an object for which the left-hand side of the statement is true but the right-hand side is not. This again corresponds to how we would carry out such a proof more formally, with the extra step of disassembling the universal generalization.
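As an illustration (the particular claim is made up for the example), here is how refuting such a universally quantified conditional by a counterexample looks when spelled out in Lean 4:

```lean
-- The (false) claim "for every natural number n, if 0 < n then 1 < n"
-- is refuted by the counterexample n = 1: the left-hand side holds there,
-- but the right-hand side fails.
example : ¬ (∀ n : Nat, 0 < n → 1 < n) := by
  intro h
  exact absurd (h 1 (by decide)) (by decide)
```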

Regarding your second point, models are a precise formalization of what we might think of as "situations" or "universes". To take a non-mathematical example, refuting the validity of the inference "My feet are wet" $\vDash$ "It's raining" would amount to finding a situation where my feet are wet but it's not raining, for example because I stepped into a puddle.

Also be careful about the difference between logical inference $\vDash$ and conditional statements $\implies$. Conditional statements don't involve reasoning about all models. Instead, we are working in some particular mathematical model, say the structure of the natural numbers, and find that the statement is true or false in that particular model by analyzing the truth of the condition and the conclusion in that model. Or, in formalized proofs, we'd be working in a formal theory and show that the conditional statement is derivable from the axioms of the theory. But we wouldn't be talking about logical inference involving arbitrary models outside the subject of logic itself.

  • To add to the last paragraph: outside of formal logic, reasoning that A ⇒ B generally involves tacit assumptions. – ryang Jan 24 '25 at 04:47

Long comment

See the Weakening rule (and Monotonicity): if $B$ follows from $A$, it also follows from $A$ together with some further assumption $C$.

This means that we can add "unused" premises to a proof without destroying the relation of consequence holding between premises and conclusion.
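In Lean 4, for instance, this is just the observation that a proof using only $A$ still type-checks when an extra premise $C$ is carried along unused (a small sketch, not anyone's particular formalization):

```lean
-- Weakening in miniature: a proof of B from A alone also yields a proof of B
-- from A together with an extra, unused premise C.
example (A B C : Prop) (h : A → B) : A ∧ C → B :=
  fun ⟨ha, _hc⟩ => h ha
```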

Of course, if Theorem B has been proved from Axiom A, we have that "if A, then B", and if C is an extraneous statement, due to the above property, we also have "if (A and C), then B".

But this does not amount to saying: "if C, then B".

Mathematical textbooks are full of examples. See e.g. Ethan Bloch, The Real Numbers and Real Analysis (Springer, 2011), page 66. Having stated the axioms and definitions for the real number system, the author proves some basic properties: Lemma 2.3.2. Let $a,b,c \in \mathbb R$. If $a+c = b+c$, then $a = b$ (Cancellation Law for Addition).

The proof starts: "Suppose that $a+c = b+c$. Then $(a+c)+(-c) = (b+c)+(-c)$. By the Associative Law for Addition it follows that ..."

Here you can see how the basic machinery of proof works: the author uses the Associative Law for Addition (ALA, an axiom) to prove the Cancellation Law for Addition (CLA, a theorem). The "formal result" will be "if ALA, then CLA".

But ALA is an axiom, and thus the proof amounts to saying that we have proved CLA as a theorem of the theory of the reals.
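For comparison, here is a rough sketch of the same argument in Lean 4 using Mathlib's reals (the exact lemma names and simp call may vary between Mathlib versions; this is only meant to mirror the shape of Bloch's proof):

```lean
import Mathlib

-- Mirror of Bloch's argument: add -c to both sides, then use associativity
-- together with the additive-inverse and identity axioms to cancel c.
example (a b c : ℝ) (h : a + c = b + c) : a = b := by
  have h' : (a + c) + (-c) = (b + c) + (-c) := by rw [h]
  simpa [add_assoc] using h'
```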

Regarding point 3 above, we can compare arithmetic with Boolean algebra: both use the symbols $0, 1$ and $+$, but while in Boolean algebra $1+1=1$, this is not true in arithmetic.

In terms of consequence, we have that $1+1=1$ can be proved using the axioms of Boolean algebra. The fact that $1+1=1$ does not hold in arithmetic implies that the usual structure of the natural numbers, $\mathbb N$, is not a model of the axioms of Boolean algebra.
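A toy version of this in Lean 4, taking `Bool` with `||` as a stand-in for the two-element Boolean algebra:

```lean
-- "1 + 1 = 1" holds in the Boolean algebra (with `true` as 1 and `||` as +) ...
example : true || true = true := rfl
-- ... but the corresponding equation fails in the natural numbers.
example : (1 : Nat) + 1 ≠ 1 := by decide
```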