You need to require that the matrices $A$ and $B$ commute (i.e., that
$AB=BA$). Otherwise, for example, $A=\begin{pmatrix}
1 & 1\\
0 & 1
\end{pmatrix}$ and $B=\begin{pmatrix}
1 & 0\\
1 & 1
\end{pmatrix}$ yield a counterexample (since $\det\begin{pmatrix}
A & B\\
-B & A
\end{pmatrix}=\det\begin{pmatrix}
1 & 1 & 1 & 0\\
0 & 1 & 1 & 1\\
-1 & 0 & 1 & 1\\
-1 & -1 & 0 & 1
\end{pmatrix}=1\neq0=\det\left( A^2 +B^2 \right) $ in this case).
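(If you would like to double-check this arithmetic, here is a quick sanity check using SymPy; the variable names `A`, `B`, `M` are of course just labels for the matrices above, and `row_join`/`col_join` are used to assemble the $4\times4$ block matrix.)

```python
from sympy import Matrix

A = Matrix([[1, 1], [0, 1]])
B = Matrix([[1, 0], [1, 1]])

# Assemble the block matrix [[A, B], [-B, A]].
M = (A.row_join(B)).col_join((-B).row_join(A))

print(A * B == B * A)          # False: A and B do not commute
print(M.det())                 # 1
print((A**2 + B**2).det())     # 0
```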
But if $A$ and $B$ do commute, then your claim holds:
Theorem 1. Let $n\in\mathbb{N}$. Let $A$ and $B$ be two $n\times
n$-matrices over a commutative ring $\mathbb{K}$ such that $AB=BA$. Then, the
block matrix $\begin{pmatrix}
A & B\\
-B & A
\end{pmatrix}$ satisfies
\begin{align}
\det\begin{pmatrix}
A & B\\
-B & A
\end{pmatrix}=\det\left( A^2 +B^2 \right) .
\end{align}
First proof of Theorem 1 (sketched). One fact about block matrices is the
following: If $A$, $B$, $C$ and $D$ are four $n\times n$-matrices over
$\mathbb{K}$ such that $AB=BA$, then
\begin{align}
\det\begin{pmatrix}
A & B\\
C & D
\end{pmatrix}=\det\left( DA-CB\right) .
\label{darij1.pf.t1.1st.1}
\tag{1}
\end{align}
(This is mentioned in
https://math.stackexchange.com/a/548487/ and can be proven using the Schur
complement in the case when $A$ is invertible. When $A$ is not invertible,
replace $A$ by $A+xI_{n}$, where $x$ is a polynomial indeterminate; note that
$A+xI_{n}$ still commutes with $B$, that its determinant is a monic polynomial
in $x$ and hence a nonzerodivisor (so $A+xI_{n}$ becomes invertible after
localizing at this determinant), and that both sides of the resulting identity
are polynomials in $x$, so substituting $x=0$ recovers the claim. This
argument is probably all over math.stackexchange. For a specific reference,
see (16) in John R. Silvester, Determinants of Block Matrices, The
Mathematical Gazette, Vol. 84, No. 501 (Nov., 2000), pp. 460--467.)
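For the reader's convenience, here is the Schur-complement computation behind
\eqref{darij1.pf.t1.1st.1} in the case when $A$ is invertible (the assumption
$AB=BA$ is used only in the very last step):
\begin{align*}
\begin{pmatrix}
A & B\\
C & D
\end{pmatrix}
=
\begin{pmatrix}
A & 0\\
C & I_{n}
\end{pmatrix}
\begin{pmatrix}
I_{n} & A^{-1}B\\
0 & D-CA^{-1}B
\end{pmatrix},
\end{align*}
so that
\begin{align*}
\det\begin{pmatrix}
A & B\\
C & D
\end{pmatrix}
=\det A\cdot\det\left( D-CA^{-1}B\right)
=\det\left( \left( D-CA^{-1}B\right) A\right)
=\det\left( DA-C\underbrace{A^{-1}BA}_{=B}\right)
=\det\left( DA-CB\right) .
\end{align*}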
Applying \eqref{darij1.pf.t1.1st.1} to $C=-B$ and $D=A$, we find
\begin{align}
\det\begin{pmatrix}
A & B\\
-B & A
\end{pmatrix}=\det\underbrace{\left( AA-\left( -B\right) B\right) }
_{=A^2 +B^2 }=\det\left( A^2 +B^2 \right) .
\end{align}
This proves Theorem 1. $\blacksquare$
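(Here, too, a quick SymPy sanity check may be reassuring: take a generic
$2\times2$ matrix $A$ and let $B$ be a polynomial in $A$, so that $A$ and $B$
automatically commute. The concrete choice $B=A^2-3A+2I_2$ below is arbitrary.)

```python
from sympy import Matrix, eye, symbols, expand

a, b, c, d = symbols('a b c d')
A = Matrix([[a, b], [c, d]])      # a generic 2x2 matrix
B = A**2 - 3*A + 2*eye(2)         # a polynomial in A, hence AB = BA

# Assemble the block matrix [[A, B], [-B, A]].
M = (A.row_join(B)).col_join((-B).row_join(A))

print((A*B - B*A).expand())                       # the zero matrix
print(expand(M.det() - (A**2 + B**2).det()))      # 0
```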
A second proof of Theorem 1 will follow from a somewhat more general
result, which, however, relies on the existence of an "imaginary unit" in
our ring $\mathbb{K}$ (that is, an element $i$ such that $i^2 = -1$):
Theorem 2. Let $n\in\mathbb{N}$. Let $A$ and $B$ be two $n\times
n$-matrices over a commutative ring $\mathbb{K}$.
Let $i \in \mathbb{K}$ be such that $i^2 = -1$.
Then, the
block matrix $\begin{pmatrix}
A & B\\
-B & A
\end{pmatrix}$ satisfies
\begin{align}
\det\begin{pmatrix}
A & B\\
-B & A
\end{pmatrix}=\det\left( A-iB\right) \det \left( A+iB\right).
\end{align}
Proof of Theorem 2. It is straightforward to see
that the block matrix $\begin{pmatrix}
I_{n} & iI_{n}\\
0_{n\times n} & I_{n}
\end{pmatrix}$
(where $0_{n\times n}$ denotes the $n\times n$ zero matrix)
is invertible (with inverse $\begin{pmatrix}
I_{n} & -iI_{n}\\
0_{n\times n} & I_{n}
\end{pmatrix} $) and satisfies
\begin{align}
\begin{pmatrix}
I_{n} & iI_{n}\\
0_{n\times n} & I_{n}
\end{pmatrix}
\begin{pmatrix}
A & B \\
-B & A
\end{pmatrix}
=
\begin{pmatrix}
A-iB & 0\\
-B & A+iB
\end{pmatrix}
\begin{pmatrix}
I_{n} & iI_{n}\\
0_{n\times n} & I_{n}
\end{pmatrix} .
\end{align}
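Indeed, both products equal $\begin{pmatrix}
A-iB & B+iA\\
-B & A
\end{pmatrix}$; the only block requiring a computation is the top-right one,
which is $I_{n}B+iI_{n}A=B+iA$ on the left-hand side and $\left( A-iB\right)
\cdot iI_{n}=iA-i^2 B=B+iA$ on the right-hand side.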
Hence,
\begin{equation}
\begin{pmatrix}
A & B\\
-B & A
\end{pmatrix}
=
\begin{pmatrix}
I_{n} & iI_{n}\\
0_{n\times n} & I_{n}
\end{pmatrix} ^{-1}
\begin{pmatrix}
A-iB & 0\\
-B & A+iB
\end{pmatrix}
\begin{pmatrix}
I_{n} & iI_{n}\\
0_{n\times n} & I_{n}
\end{pmatrix} .
\end{equation}
Thus, the matrices $\begin{pmatrix}
A & B\\
-B & A
\end{pmatrix}$ and $\begin{pmatrix}
A-iB & 0\\
-B & A+iB
\end{pmatrix}$ are similar, and therefore have the same determinant. Hence,
\begin{align*}
\det\begin{pmatrix}
A & B\\
-B & A
\end{pmatrix} & =\det\begin{pmatrix}
A-iB & 0\\
-B & A+iB
\end{pmatrix}\\
& =\det\left( A-iB\right) \cdot\det\left( A+iB\right)
\end{align*}
(because the determinant of any block-triangular matrix whose diagonal blocks
are square matrices always equals the product of the determinants of these
diagonal blocks).
This proves Theorem 2. $\blacksquare$
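(Theorem 2 is easy to test symbolically as well; here is a quick SymPy check
using the non-commuting pair $A,B$ from the counterexample at the top, with
$\mathbb{K}=\mathbb{C}$ and SymPy's built-in imaginary unit playing the role of $i$.)

```python
from sympy import Matrix, I, expand

A = Matrix([[1, 1], [0, 1]])
B = Matrix([[1, 0], [1, 1]])      # this pair does NOT commute

# Assemble the block matrix [[A, B], [-B, A]].
M = (A.row_join(B)).col_join((-B).row_join(A))

print(M.det())                                       # 1
print(expand((A - I*B).det() * (A + I*B).det()))     # 1
```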
Second proof of Theorem 1 (sketched).
We can find a commutative ring $\mathbb{L}$ such that $\mathbb{K}$ is a
subring of $\mathbb{L}$ and such that there exists some $i\in\mathbb{L}$
satisfying $i^2 =-1$. (For example, if $\mathbb{K}=\mathbb{R}$ or
$\mathbb{K}=\mathbb{C}$, then we can take $\mathbb{L}=\mathbb{C}$. In the
general case, we can let $\mathbb{L}$ be the quotient ring $\mathbb{K}\left[
x\right] /\left( x^2 +1\right) $, which is a free $\mathbb{K}$-module with
basis $\left( \overline{1},\overline{x}\right) $ because $x^2 +1$ is a
monic polynomial; in particular, $\mathbb{K}$ embeds into $\mathbb{L}$ via
the constant polynomials. The element $i$ should then be taken to be the
residue class $\overline{x}$ of the indeterminate $x$.)
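(As a toy illustration of this construction, here is how one could model
$\mathbb{L}=\mathbb{K}\left[ x\right] /\left( x^2 +1\right) $ for
$\mathbb{K}=\mathbb{Z}$ in a few lines of Python; the class name `Quot` and the
choice $\mathbb{K}=\mathbb{Z}$ are mine and purely illustrative, not part of the argument.)

```python
class Quot:
    """An element a + b*xbar of Z[x]/(x^2 + 1), stored in the basis (1bar, xbar)."""
    def __init__(self, a, b=0):
        self.a, self.b = a, b
    def __add__(self, other):
        return Quot(self.a + other.a, self.b + other.b)
    def __mul__(self, other):
        # (a + b*xbar)(c + d*xbar) = (ac - bd) + (ad + bc)*xbar, since xbar^2 = -1
        return Quot(self.a * other.a - self.b * other.b,
                    self.a * other.b + self.b * other.a)
    def __repr__(self):
        return f"{self.a} + {self.b}*xbar"

i = Quot(0, 1)       # the residue class xbar plays the role of i
print(i * i)         # -1 + 0*xbar, i.e. i^2 = -1 in L
```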
Anyway, having picked our ring $\mathbb{L}$ and element $i$, let us now regard
our matrices as matrices over $\mathbb{L}$. Now, Theorem 2 (applied to
$\mathbb{L}$ instead of $\mathbb{K}$) yields
\begin{align*}
\det\begin{pmatrix}
A & B\\
-B & A
\end{pmatrix} & =\det\left( A-iB\right) \cdot\det\left( A+iB\right) \\
& =\det\left( \underbrace{\left( A-iB\right) \left( A+iB\right)
}_{=AA+iAB-iBA-i^2 BB}\right) \\
& =\det\left( \underbrace{AA}_{=A^2 }+i\underbrace{AB}_{=BA}
-iBA-\underbrace{i^2 }_{=-1}\underbrace{BB}_{=B^2 }\right) \\
& =\det\underbrace{\left( A^2 +iBA-iBA-\left( -1\right) B^2 \right)
}_{=A^2 +B^2 }=\det\left( A^2 +B^2 \right) .
\end{align*}
This proves Theorem 1. $\blacksquare$