If the covariance matrices $\Sigma_k$ are diagonal, then each component of the random vector $X$ also follows a mixture of univariate Gaussian distributions with the same mixing weights, and the marginal variances are given by the corresponding diagonal elements of the $\Sigma_k$. The reason is the following (the location parameters are set to zero to simplify the notation, without loss of generality):
Lemma. If $Z = (Z_1, \ldots, Z_n)'$ is a Gaussian random vector such that $Z \sim N(0, \Sigma)$ with $\Sigma = \text{diag}(\sigma_1^2, \ldots, \sigma_n^2)$, then $Z_1, \ldots, Z_n$ are mutually independent and $Z_j \sim N(0, \sigma_j^2)$ for every $j \in \{1, \ldots, n\}$.
Proof. Both the independence and the marginal Gaussianity follow from the fact that the joint density factors into the product of the marginal densities:
$$
f_{Z}(z ; \Sigma) = (2\pi)^{-n/2} |\Sigma|^{-1/2}\exp\left(-\frac{1}{2} z'\Sigma^{-1}z \right) = \prod^n_{j=1}\varphi(z_j; \sigma_j^2), \ z \in \mathbb{R}^n, \ z_j \in \mathbb{R},
$$
where $$\varphi(z_j; \sigma_j^2) = (2\pi)^{-1/2} (\sigma_j^2)^{-1/2} \exp\left(-\frac{1}{2} \frac{ z_j^2}{\sigma_j^2} \right).$$ The equalities above follow directly from $|\Sigma| = \prod^n_{j=1}\sigma_j^2$, so that $|\Sigma|^{-1/2} = \prod^n_{j=1}(\sigma_j^2)^{-1/2}$, and from $z'\Sigma^{-1}z = \sum^{n}_{j=1}z_j^2/\sigma_j^2$.
$\square$
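The factorization in the Lemma can be checked numerically. Below is a minimal sketch using NumPy/SciPy; the dimension and the variances are arbitrary illustrative values, not part of the argument:

```python
import numpy as np
from scipy.stats import multivariate_normal, norm

# Illustrative diagonal covariance (arbitrary variances).
sigma2 = np.array([0.5, 2.0, 1.5])               # diagonal of Sigma
Sigma = np.diag(sigma2)

rng = np.random.default_rng(0)
z = rng.normal(size=(4, 3))                      # a few arbitrary test points

# Joint density of N(0, Sigma) at each test point ...
joint = multivariate_normal(mean=np.zeros(3), cov=Sigma).pdf(z)
# ... equals the product of the univariate densities phi(z_j; sigma_j^2).
marginal_product = np.prod(norm.pdf(z, loc=0.0, scale=np.sqrt(sigma2)), axis=1)

assert np.allclose(joint, marginal_product)
```

The two arrays agree to floating-point precision, mirroring the algebraic identity $f_Z(z;\Sigma) = \prod_j \varphi(z_j;\sigma_j^2)$.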
Proof of the claim. Without loss of generality, suppose $X$ is a mixture of two $n$-variate Gaussian distributions $N(0, \Sigma_1)$ (with weight $p \in [0,1]$) and $N(0, \Sigma_2)$ (with weight $1-p$),
with joint p.d.f.
$$
f_X(x; \Sigma_1, \Sigma_2, p) = pf_{Z}(x ; \Sigma_1) + (1-p)f_{Z}(x ; \Sigma_2), \ x \in \mathbb{R}^n.
$$
If $\Sigma_1$ and $\Sigma_2$ are both diagonal such that $\Sigma_k = \text{diag}(\sigma_{1,k}^2, \ldots, \sigma_{n,k}^2)$, $k = 1, 2$, then by the preceding Lemma,
$$
f_X(x; \Sigma_1, \Sigma_2, p) = p\prod^n_{j=1}\varphi(x_j; \sigma_{j,1}^2)+ (1-p)\prod^n_{j=1}\varphi(x_j; \sigma_{j,2}^2).
$$
Thus, for any $j \in \{1, \ldots, n\}$ the marginal density of the random variable $X_j$ is given by
$$
\begin{align}
f_{X_j}(x_j) &= p\varphi(x_j; \sigma_{j,1}^2) \prod_{i\neq j}
\underbrace{\int\varphi(x_i; \sigma_{i,1}^2) dx_i}_{=1} +
(1-p)\varphi(x_j; \sigma_{j,2}^2)\prod_{i\neq j}
\underbrace{\int\varphi(x_i; \sigma_{i,2}^2) dx_i}_{=1}\\
&= p\varphi(x_j; \sigma_{j,1}^2) +(1-p)\varphi(x_j; \sigma_{j,2}^2), \ x_j \in \mathbb{R}.
\end{align}
$$ In other words, $X_j$ is a mixture of two univariate Gaussian distributions $N(0, \sigma_{j,1}^2)$ (with weight $p$) and $N(0, \sigma_{j,2}^2)$.
$\square$
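The conclusion can also be checked by simulation: if each coordinate $X_j$ is a univariate mixture $p\,N(0,\sigma_{j,1}^2) + (1-p)\,N(0,\sigma_{j,2}^2)$, then in particular $\operatorname{Var}(X_j) = p\sigma_{j,1}^2 + (1-p)\sigma_{j,2}^2$. A sketch with arbitrary illustrative parameters (the weights and variances below are not from the text):

```python
import numpy as np

rng = np.random.default_rng(1)
p = 0.3                                # mixing weight of component 1
# Diagonals of Sigma_1 and Sigma_2 (illustrative variances).
s1 = np.array([0.5, 1.0, 2.0])
s2 = np.array([3.0, 0.2, 1.0])

N = 200_000
labels = rng.random(N) < p             # True -> draw from component 1
std = np.where(labels[:, None], np.sqrt(s1), np.sqrt(s2))
X = rng.normal(size=(N, 3)) * std      # samples from the mixture

# Each coordinate's sample variance should be close to p*s1[j] + (1-p)*s2[j].
target = p * s1 + (1 - p) * s2
print(np.round(X.var(axis=0), 2))
print(np.round(target, 2))
```

The empirical coordinate-wise variances match the mixture variances up to Monte Carlo error, as the marginal-mixture claim predicts.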