The first part is clearly not true, as $(\begin{smallmatrix}0&1\\0&0\end{smallmatrix})$ shows.
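To see concretely why this matrix is a counterexample, here is a small pure-Python check (a sketch; the matrix and helper are just for illustration): its diagonal $1\times1$ blocks are zero, hence trivially diagonalisable, yet the matrix itself is nonzero with zero square, and a diagonalisable matrix whose square is zero must itself be zero (all its eigenvalues are $0$).

```python
# Check the counterexample N = [[0,1],[0,0]]: its diagonal blocks are the
# 1x1 zero matrices (diagonalisable), but N is nonzero while N^2 = 0,
# so N is nilpotent and cannot be diagonalisable.

def matmul(X, Y):
    """Multiply two square matrices given as lists of rows."""
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

N = [[0, 1],
     [0, 0]]

N_squared = matmul(N, N)
assert N != [[0, 0], [0, 0]]          # N is not the zero matrix...
assert N_squared == [[0, 0], [0, 0]]  # ...but its square is
```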
For the second part, the key is the fact that the restriction of a diagonalisable operator to a stable (invariant) subspace is again diagonalisable. This directly gives the result for $A$, which describes the restriction of $M$ to the invariant subspace$~W$ spanned by the first $n$ standard basis vectors. That argument does not apply as directly to $B$, but by transposition symmetry it is clear that $B$ must be diagonalisable as well: $M^\top$ is diagonalisable along with $M$, it is block lower triangular, and its restriction to the (invariant) span of the last standard basis vectors is given by $B^\top$.
In fact the mentioned fact is proved most easily by an argument that also directly shows $B$ is diagonalisable. It uses the following property: if $\Lambda=\{\lambda_1,\ldots,\lambda_k\}$ is a finite set (no repetitions among the $\lambda_i$), then an operator $\phi$ is diagonalisable with all eigenvalues contained in$~\Lambda$ if and only if the polynomial $P=\prod_{\lambda\in\Lambda}(X-\lambda)$ annihilates$~\phi$, in other words if and only if $P[\phi]=(\phi-\lambda_1I)\circ\cdots\circ(\phi-\lambda_kI)$ is the zero operator. One applies this with $\Lambda$ the set of eigenvalues of$~M$, together with the fact that $P[M]$ has diagonal blocks $P[A]$ and $P[B]$ for any polynomial$~P$ (powers of a block upper triangular matrix are again block upper triangular, with the corresponding powers of $A$ and $B$ as diagonal blocks). The property itself is easy to show: in one direction the composed operator clearly kills each eigenspace for the $\lambda_i$; conversely, the dimension of the kernel of the composition cannot exceed the sum of the dimensions of the kernels of the individual factors $\phi-\lambda_iI$, and that sum is the sum of the dimensions of the eigenspaces for the $\lambda_i$, which equals the dimension of their direct sum; so $P[\phi]=0$ forces the eigenspaces to span the whole space.
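The annihilation argument can be checked concretely; here is a sketch on a small example chosen for illustration (the specific matrices $M$, $A$, $B$ below are hypothetical, not taken from the question). $M$ is block upper triangular with eigenvalues $\{1,2\}$, so $P=(X-1)(X-2)$ annihilates $M$; reading off the diagonal blocks of $P[M]$ shows $P[A]=0$ and $P[B]=0$, hence $A$ and $B$ are diagonalisable.

```python
# Check the annihilating-polynomial criterion on a small example.
# M is block upper triangular with diagonal blocks A = [[1,0],[0,2]] and
# B = [[2]]; its eigenvalues are {1, 2}, so P = (X-1)(X-2) annihilates M,
# and the diagonal blocks of P[M] are P[A] and P[B].

def matmul(X, Y):
    """Multiply two square matrices given as lists of rows."""
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def shift(X, c):
    """Return X - c*I."""
    n = len(X)
    return [[X[i][j] - (c if i == j else 0) for j in range(n)]
            for i in range(n)]

def apply_P(X):
    """Evaluate P = (X - 1)(X - 2) at the square matrix X."""
    return matmul(shift(X, 1), shift(X, 2))

M = [[1, 0, 1],
     [0, 2, 0],
     [0, 0, 2]]
A = [[1, 0],
     [0, 2]]
B = [[2]]

assert apply_P(M) == [[0] * 3 for _ in range(3)]  # P annihilates M
assert apply_P(A) == [[0, 0], [0, 0]]             # the A-block of P[M]
assert apply_P(B) == [[0]]                        # the B-block of P[M]
```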
Concretely, you get that the sets of eigenvalues of $A$ and of $B$ are contained in that of$~M$, that the eigenspaces of $A$ are obtained by intersecting the eigenspaces of$~M$ with the invariant subspace$~W$, and that those of$~B$ are obtained by projecting the eigenspaces of$~M$ parallel to$~W$ onto a complementary subspace.
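This concrete description can be illustrated numerically; the matrices below are again a hypothetical example chosen for the sketch. Eigenvectors of $M$ lying in $W$ truncate to eigenvectors of $A$, while an eigenvector of $M$ outside $W$ projects parallel to $W$ (here: keep the last coordinate) to an eigenvector of $B$.

```python
# M = [[1,0,1],[0,2,0],[0,0,2]] has blocks A = [[1,0],[0,2]], B = [[2]],
# with W the span of the first two standard basis vectors.

def matvec(X, v):
    """Apply the matrix X (list of rows) to the vector v."""
    return [sum(X[i][j] * v[j] for j in range(len(v)))
            for i in range(len(X))]

M = [[1, 0, 1],
     [0, 2, 0],
     [0, 0, 2]]
A = [[1, 0],
     [0, 2]]
B = [[2]]

# Eigenvectors of M: (1,0,0) and (0,1,0) lie in W; (1,0,1) does not.
assert matvec(M, [1, 0, 0]) == [1, 0, 0]   # eigenvalue 1
assert matvec(M, [0, 1, 0]) == [0, 2, 0]   # eigenvalue 2
assert matvec(M, [1, 0, 1]) == [2, 0, 2]   # eigenvalue 2

# Intersecting with W and truncating gives eigenvectors of A.
assert matvec(A, [1, 0]) == [1, 0]         # eigenvalue 1
assert matvec(A, [0, 1]) == [0, 2]         # eigenvalue 2

# Projecting (1,0,1) parallel to W keeps (1): an eigenvector of B.
assert matvec(B, [1]) == [2]               # eigenvalue 2
```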