I'm writing a test suite that checks the correctness of a fixed-point arithmetic library that I wrote. Specifically, it deals with Q4.4 numbers, i.e. 4 integer bits and 4 fractional bits, so its precision (the value of one least-significant bit) is 1/16.
The test suite generates two rational numbers and computes their exact product, then converts both numbers to Q4.4 and computes their Q4.4 product, and finally checks that the Q4.4 product is within the expected absolute error of the exact product.
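Assuming the library rounds to nearest (and ignoring overflow of the 4 integer bits), the check looks roughly like this; `to_q44`, `q44_mul` and `check` are illustrative names here, not my actual API:

```python
from fractions import Fraction

SCALE = 16  # Q4.4: 4 fractional bits, so one step is 1/16

def to_q44(x: Fraction) -> Fraction:
    """Round x to the nearest multiple of 1/16 (round-to-nearest assumed)."""
    return Fraction(round(x * SCALE), SCALE)

def q44_mul(a: Fraction, b: Fraction) -> Fraction:
    """Q4.4 product: the exact product, re-rounded to Q4.4."""
    return to_q44(a * b)

def check(a: Fraction, b: Fraction) -> bool:
    """Is the Q4.4 product within the derived absolute-error bound?"""
    exact = a * b
    approx = q44_mul(to_q44(a), to_q44(b))
    h = Fraction(1, 32)  # half the precision, the assumed max input error
    bound = (abs(h / a) + abs(h / b)) * abs(exact)
    return abs(approx - exact) <= bound
```

Exact rational arithmetic (`fractions.Fraction`) keeps the comparison itself free of floating-point noise.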
Let $a$, $b$ be the true numbers (assumed for simplicity to be nonzero), and for a quantity $x$ let $\Delta x$ be its absolute error and $\epsilon_x$ its relative error. I computed the absolute error of $ab$ to be $\Delta(ab) = \epsilon_{ab} * |ab| = (\epsilon_a + \epsilon_b) * |ab| = (|\frac{\Delta a}{a}| + |\frac{\Delta b}{b}|) * |ab|$, with the upper bound $\Delta(ab) \leq (|\frac{1/32}{a}| + |\frac{1/32}{b}|) * |ab|$, since $|\Delta a|$ and $|\Delta b|$ can each be at most half the precision, i.e. $1/32$.
This reasoning seemingly fails when computing $1.3488 * 3.2238 = 4.34826144$. Converted to Q4.4, the operands become $1.375$ and $3.250$, and the Q4.4 multiplication yields $4.500$, so the actual absolute error is $\Delta(ab) = |4.500 - 4.34826144| = 0.15173856$. However, the expected bound per the formula above is $4.34826144 * (\frac{1/32}{1.3488} + \frac{1/32}{3.2238}) \approx 4.34826144 * (0.02316874 + 0.00969353) = 4.34826144 * 0.03286227 \approx 0.14289374$.
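The numbers above can be reproduced with exact rational arithmetic, independently of the library (the hand computation above rounds intermediates, so its last digit differs slightly from the exact bound):

```python
from fractions import Fraction

a, b = Fraction("1.3488"), Fraction("3.2238")
qa, qb = Fraction(22, 16), Fraction(52, 16)  # 1.375 and 3.250 in Q4.4
q_prod = Fraction(round(qa * qb * 16), 16)   # exact product 4.46875 rounds to 4.500

actual = abs(q_prod - a * b)  # absolute error of the Q4.4 result
bound = (abs(Fraction(1, 32) / a) + abs(Fraction(1, 32) / b)) * abs(a * b)

print(float(actual))  # 0.15173856
print(float(bound))   # 0.14289375 (exact value of the bound)
assert actual > bound  # the observed error exceeds the derived bound
```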
To make it clear: the actual error is $0.152\ldots$, while the expected bound is $0.143\ldots$
Where am I wrong?