Suppose $L=L_1\oplus L_2$, where $L_1,L_2$ are finite-dimensional complex semisimple Lie algebras, and suppose $V$ is a finite-dimensional irreducible representation of $L$ (via $\rho:L\to \mathfrak{gl}(V)$). I need to show that either $L_1$ or $L_2$ acts on $V$ trivially.
My proof goes like this: first, for $X_1\in L_1$ and $X_2\in L_2$, compute $$ \rho(X_2)\rho(X_1)=\rho(X_1)\rho(X_2)-[\rho(X_1),\rho(X_2)]=\rho(X_1)\rho(X_2)-\rho[X_1,X_2]=\rho(X_1)\rho(X_2). $$ The last step follows from the structure theorem for semisimple Lie algebras, which gives $[X_1,X_2]=0$. But how do I proceed from here?
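As a sanity check (not part of the proof), the commutation identity above can be verified numerically in the standard outer tensor product construction, where $\rho(X_1)=X_1\otimes I$ and $\rho(X_2)=I\otimes X_2$; these always commute. The matrix helpers below are hypothetical scaffolding for the illustration, with $e,h\in\mathfrak{sl}_2$ as sample elements:

```python
# Sketch: in the outer tensor product construction, rho(X1) = X1 (x) I
# and rho(X2) = I (x) X2 commute, mirroring rho(X1)rho(X2) = rho(X2)rho(X1).

def matmul(A, B):
    """Multiply two square matrices given as lists of lists."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def kron(A, B):
    """Kronecker (tensor) product of two square matrices."""
    m = len(B)
    size = len(A) * m
    return [[A[i // m][j // m] * B[i % m][j % m] for j in range(size)]
            for i in range(size)]

def commutator(A, B):
    """[A, B] = AB - BA."""
    AB, BA = matmul(A, B), matmul(B, A)
    n = len(A)
    return [[AB[i][j] - BA[i][j] for j in range(n)] for i in range(n)]

# Sample elements: e in L1 = sl2, h in L2 = sl2 (2-dimensional rep each).
e = [[0, 1], [0, 0]]
h = [[1, 0], [0, -1]]
I2 = [[1, 0], [0, 1]]

rho_X1 = kron(e, I2)   # X1 acting on the first tensor factor
rho_X2 = kron(I2, h)   # X2 acting on the second tensor factor

print(all(v == 0 for row in commutator(rho_X1, rho_X2) for v in row))  # True
```

Of course, this only illustrates that the two images commute; it says nothing yet about which factor acts trivially.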
I am also curious whether the step where I used semisimplicity is correct. My reasoning comes from the decomposition of a semisimple Lie algebra into a direct sum of simple ideals. Is this how the semisimplicity assumption is used?
EDIT: I just revisited the definition of a direct sum of Lie algebras, and it seems that the last equality follows directly from that definition, not from semisimplicity. How does one apply the semisimplicity condition, then?
EDIT (Sept. 26): It seems one could prove that $V$ is irreducible as an $L_1$-module. To prove this, we use Weyl's theorem and Schur's lemma. The former uses the semisimplicity of $L_1$ to deduce that $V$ is completely reducible as an $L_1$-module. The latter, together with the fact that $\rho(X_1)$ and $\rho(X_2)$ commute, shows that $\rho(X_2)$ acts on each irreducible $L_1$-submodule as a scalar. This would show that $V$ is irreducible as an $L_1$-module. Is this argument correct? And how should I continue?
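The Schur's lemma step used here, that any operator commuting with an irreducible set of matrices is a scalar, can be checked numerically in a small case: the commutant of a set of generators is the null space of a linear system in the entries of $M$, and irreducibility forces that null space to be one-dimensional (scalars only). The sketch below, with a hypothetical helper `commutant_dimension` and exact rational arithmetic, tests this for the generators $e,f,h$ of $\mathfrak{sl}_2$ in its 2-dimensional irreducible representation:

```python
from fractions import Fraction

def commutant_dimension(gens, n):
    """Dimension of {M : MG = GM for all G in gens}, for n x n matrices,
    computed as the nullity of the linear system [M, G] = 0 in the
    n*n unknown entries M[p][q]."""
    rows = []
    for G in gens:
        for i in range(n):
            for j in range(n):
                # (MG - GM)[i][j] = 0 as a linear equation in M[p][q]
                row = [Fraction(0)] * (n * n)
                for k in range(n):
                    row[i * n + k] += Fraction(G[k][j])   # from M[i][k]*G[k][j]
                    row[k * n + j] -= Fraction(G[i][k])   # from -G[i][k]*M[k][j]
                rows.append(row)
    # Gauss-Jordan elimination to find the rank of the constraint matrix
    m, rank, col = n * n, 0, 0
    while rank < len(rows) and col < m:
        pivot = next((r for r in range(rank, len(rows)) if rows[r][col] != 0), None)
        if pivot is None:
            col += 1
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for r in range(len(rows)):
            if r != rank and rows[r][col] != 0:
                f = rows[r][col] / rows[rank][col]
                rows[r] = [a - f * b for a, b in zip(rows[r], rows[rank])]
        rank += 1
        col += 1
    return m - rank  # nullity = dimension of the commutant

# Generators e, f, h of sl2 in its 2-dimensional irreducible representation
e = [[0, 1], [0, 0]]
f = [[0, 0], [1, 0]]
h = [[1, 0], [0, -1]]

print(commutant_dimension([e, f, h], 2))  # 1: only scalar matrices commute
```

For contrast, restricting to the single generator $h$ alone (which acts reducibly, as the two coordinate lines are invariant) gives a 2-dimensional commutant, so the computation really does detect irreducibility in the way Schur's lemma predicts.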