Revised -- I had made a convenient typo that provided me with more structure. Thanks to the correction, I think the answer is actually better (not just by being correct, but by avoiding an unnecessary detour). I appreciate the careful reading that caught my typo.
For any block eigenvector $\binom{x_1}{x_2}$ with eigenvalue $\lambda$, we have:
$$B\left(\begin{array}{c}x_1\\ x_2\end{array}\right) =
\left(\begin{array}{c}A(x_1+x_2)\\ Ax_2\end{array}\right) =
\left(\begin{array}{c}\lambda x_1\\ \lambda x_2\end{array}\right).$$
If $x_2 = 0$, then $\lambda$ is an eigenvalue of $A$ and $x_1$ is an eigenvector.
If $x_2 \ne 0$, then $\lambda$ is an eigenvalue of $A$ and $x_2$ is an eigenvector.
Notice this means the eigenvalues of $B$ are exactly the eigenvalues of $A$, which is also clear from the determinant formulation: $B-\lambda I$ is block upper triangular, so $\det(B-\lambda I)=\det(A-\lambda I)^2$.
However, among $2n$ linearly independent composite eigenvectors $\binom{x_1}{x_2}$, it cannot be true that all the $x_2=0$. In fact, $n$ of the $x_2$s (and likewise $n$ of the $x_1$s) must be linearly independent: the $2n$ composites span the whole $2n$-dimensional space, and projecting a spanning set onto the second block yields a spanning set of the $n$-dimensional space. Each nonzero $x_2$ is an eigenvector of $A$, so from the eigenvectors of $B$ we can derive $n$ linearly independent eigenvectors of $A$.
That's enough to conclude that $A$ is diagonalizable. So without loss of generality, $A$ is diagonal: if $A=P^{-1}DP$, replace $B$ with $\binom{P\ \ \ 0}{0\ \ \ P}B\binom{P^{-1}\ \ \ \ 0}{0\ \ \ \ \ \ \ P^{-1}} = \binom{D\ \ \ D}{0\ \ \ D}$, which is conjugate to $B$ and hence diagonalizable exactly when $B$ is.
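The change of basis above can be checked numerically. A hedged sketch: I use a symmetric $A$ here purely so that a diagonalization is guaranteed to exist (that choice is mine, not forced by the problem), and verify that conjugating $B$ by $\mathrm{diag}(P,P)$ produces $\binom{D\ \ D}{0\ \ \ D}$.

```python
import numpy as np

# If A = P^{-1} D P, then diag(P, P) B diag(P^{-1}, P^{-1}) = [[D, D], [0, D]].
rng = np.random.default_rng(1)
n = 3
M = rng.standard_normal((n, n))
A = M + M.T                      # symmetric, hence certainly diagonalizable
evals, Q = np.linalg.eigh(A)     # A = Q diag(evals) Q^T
D = np.diag(evals)
P = np.linalg.inv(Q)             # so A = P^{-1} D P

Z = np.zeros((n, n))
B = np.block([[A, A], [Z, A]])
PP = np.block([[P, Z], [Z, P]])
PPinv = np.block([[np.linalg.inv(P), Z], [Z, np.linalg.inv(P)]])

conj = PP @ B @ PPinv
assert np.allclose(conj, np.block([[D, D], [Z, D]]))
```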
However, if we have $A=D$ diagonal, we now have:
$$B\left(\begin{array}{c}x_1\\ x_2\end{array}\right) =
\left(\begin{array}{c}D(x_1+x_2)\\ Dx_2\end{array}\right) =
\left(\begin{array}{c}\lambda x_1\\ \lambda x_2\end{array}\right).$$
For an index $i$ with $d_{i,i}=\lambda$, rows $i$ and $i+n$ of this equation say:
$$\lambda x_{1,i} + \lambda x_{2,i} = \lambda x_{1,i}$$
$$\lambda x_{2,i} = \lambda x_{2,i}$$
which of course (now that $d_{i,i}$ is a scalar, not the matrix $A$ as before) implies that $\lambda x_{2,i}=0$, i.e. $\lambda=0$ or $x_{2,i}=0$. If $\lambda$ has multiplicity $m>1$, occurring at indices $i_1,\dots,i_m$, this becomes: $\lambda=0$ or $x_{2,i_1}=x_{2,i_2}=\dots=x_{2,i_m}=0$.
However, those $x_{2,i_k}$ cannot all be forced to zero, because we need $2m$ independent eigenvectors here: $\lambda$ has algebraic multiplicity $2m$ in $B$, and all of the other entries of $x_1$ and $x_2$ are already forced to be zero now that we've diagonalized $A$ as $D$. So $\lambda=0$ must be true instead.
Contrapositively, if you assumed $\lambda\ne 0$, you'd know that $x_1$ can only have entries in rows $i$ such that $d_{i,i}=\lambda$, while $x_2$ would be identically zero (instead of having any of the corresponding entries). This would only give you half as many dimensions in the $\lambda$-eigenspace for $B$ as required ($m$ instead of $2m$, where $m$ is the multiplicity of $\lambda$ with respect to $A$, as above).
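The dimension count is concrete enough to verify directly. A sketch (the particular diagonal entries are my arbitrary choice): for a diagonal $D$ with a nonzero eigenvalue $\lambda$ of multiplicity $m$, the $\lambda$-eigenspace of $B=\binom{D\ \ D}{0\ \ \ D}$ comes out $m$-dimensional, half of the $2m$ that diagonalizability of $B$ would require.

```python
import numpy as np

# D has lam = 2 with multiplicity m = 2 (and 5 with multiplicity 1).
d = np.array([2.0, 2.0, 5.0])
D = np.diag(d)
n = len(d)
Z = np.zeros((n, n))
B = np.block([[D, D], [Z, D]])

lam, m = 2.0, 2
# Geometric multiplicity = nullity of (B - lam I).
geom_mult = 2 * n - np.linalg.matrix_rank(B - lam * np.eye(2 * n))
assert geom_mult == m        # only m dimensions, so B is not diagonalizable
```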
So every $\lambda$ must be $0$, which implies that $D=0$, meaning $A=0$, and finally $B=0$.
This is analogous to how you prove $\binom{a\ a}{0\ a}$ has no second eigenvector unless $a=0$. You can find $\binom{1}{0}$ first, but if you try to get any other eigenvector $\binom{v_1}{v_2}$, the first row immediately gives $av_2=0$: either $a=0$ (which is what we want) or $v_2=0$, which is not allowed, because $v_2=0$ means we haven't really found a second independent eigenvector.
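The $2\times 2$ toy case can be checked the same way as the block version; a small sketch with an arbitrary nonzero $a$:

```python
import numpy as np

# [[a, a], [0, a]] with a != 0 has its sole eigenvalue a with a
# one-dimensional eigenspace, so there is no second eigenvector.
a = 3.0
M = np.array([[a, a], [0.0, a]])
eigenspace_dim = 2 - np.linalg.matrix_rank(M - a * np.eye(2))
assert eigenspace_dim == 1   # only span{(1, 0)}; a = 0 would give the full plane
```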
I see the key to this problem as showing that $A$ and $B$ have some of the same eigenvectors and precisely the same eigenvalues, and so $A$ is also diagonalizable -- and then you basically repeat the argument for $\binom{1\ 1}{0\ 1}$ for $\binom{A\ A}{0\ A}$ by diagonalizing $A$ (with a little care about repeated eigenvalues).