3

Let's say I have a unitary matrix $U$. How do I know whether $U$ is the tensor product of two other unitary matrices $U_1$ and $U_2$ (each of dimension $\geq 2$), i.e. $U_1\otimes U_2=U$?

More specifically, I am concerned with the almost trivial case where $U_1$ and $U_2$ are $2\times 2$ (so $U$ is $4\times 4$).

Do I necessarily need to find a basis in which $U$ is block diagonal? Or do I need to fully diagonalize $U$ and look for common factors? Are there other strategies to verify this without doing either?

I tried to find some direction, but I do not know what the right term is here: is it reducibility? Separability?

Mauricio
  • 447
  • Since you specified that $U$ is given, I will assume you have fixed your basis, and it's with respect to this basis you want to check this property. I would first show that $U$ maps tensor products of vectors to tensor products of vectors iff $U$ is of the form $U_1 \otimes U_2$. Then, I would show that it suffices to check it for basis vectors that are tensor products. Then one needs to check only that these $d^2$ basis vectors are mapped to tensor products: https://quantumcomputing.stackexchange.com/questions/2263/how-do-i-show-that-a-two-qubit-state-is-an-entangled-state – Kenneth Goodenough Jul 17 '22 at 17:09
  • @KennethGoodenough could you provide an example? How is a basis of tensor product of basis vectors helpful, isn’t this the same as checking some property of the elements of $U$? – Mauricio Jul 17 '22 at 17:32
  • Yes, this should reduce to checking whether $U$ has the form of a $U_1\otimes U_2$ matrix (which is conceptually a lot easier). – Kenneth Goodenough Jul 17 '22 at 17:40
  • 1
    @KennethGoodenough this is probably trivial but I am blocking a bit. Isn’t that the whole question? – Mauricio Jul 17 '22 at 17:48
  • @KennethGoodenough are you suggesting to take the traces with respect to the two non-tensor spaces? – Mauricio Jul 17 '22 at 18:21
  • If performing a partial trace is easy, then you could always do that and check whether the resultant matrix is unitary. One first has to prove that a partial trace of a unitary on a $d_1 d_2$-dimensional tensor product space results in a unitary iff it was a tensor product to begin with, which is not hard to do. – Kenneth Goodenough Jul 17 '22 at 19:29

2 Answers

4

In the smallest interesting dimension you can use the exceptional morphism $\mathrm{SL}_2(\mathbb{C}) \times \mathrm{SL}_2(\mathbb{C}) \to \mathrm{SO}_4(\mathbb{C})$ obtained by viewing $\mathrm{SL}_2$ as a symplectic group, and using the fact that the tensor product of two vector spaces endowed with alternating forms is naturally endowed with a symmetric form. This morphism is onto and has kernel $\{(1,1),(-1,-1)\}$.

I believe that the image of $\mathrm{SU}(2) \times \mathrm{SU}(2)$ is isomorphic to $\mathrm{SO}(4)$ (the compact Lie group). This may be deduced from $\mathrm{SU}(2) = \mathrm{SL}_2(\mathbb{C})^\sigma$ where $\sigma(g) = A \overline{g} A^{-1}$ and $A = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}$. Working out a basis of $\mathbb{C}^2 \otimes \mathbb{C}^2$ in which the image is the usual $\mathrm{SO}(4)$ is an exercise in Galois cohomology (for $\mathbb{C}/\mathbb{R}$).

Choosing as basis of $\mathbb{C}^2 \otimes \mathbb{C}^2$ the family $(e_1 \otimes e_1, e_1 \otimes e_2, e_2 \otimes e_1, e_2 \otimes e_2)$ where $(e_1, e_2)$ is the standard basis of $\mathbb{C}^2$, I find that the image of the tensor product map $\mathrm{SU}(2) \times \mathrm{SU}(2) \to \mathrm{GL}_4(\mathbb{C})$ is $h \mathrm{SO}(4) h^{-1}$ where $$ \mathrm{SO}(4) = \{ g \in \mathrm{SL}_4(\mathbb{R}) \,|\, g^T = g^{-1} \}$$ and $$h = \begin{pmatrix} 1 & 0 & 0 & -i \\ 0 & i & 1 & 0 \\ 0 & i & -1 & 0 \\ 1 & 0 & 0 & i \end{pmatrix}.$$

So the image of $\mathrm{U}(2) \times \mathrm{U}(2)$ in $\mathrm{U}(4)$ is $\mathrm{U}(1) \cdot h \mathrm{SO}(4) h^{-1}$. To check if $g \in \mathrm{U}(4)$ is in the image, first find out if there exists $z \in \mathrm{U}(1)$ such that $z^{-1} h^{-1} g h$ has real entries (since $g \neq 0$ there is at most one such $z$ up to multiplication by $-1$), and if so check if $z^{-1} h^{-1} g h$ belongs to $\mathrm{SO}(4)$.
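This check is easy to carry out numerically. Here is a hedged sketch (the helper name `is_local_unitary` and the NumPy phrasing are mine, not from the answer); $h$ is the matrix above, rescaled by $1/\sqrt{2}$ so that it is unitary and $h^{-1} = h^\dagger$:

```python
import numpy as np

# h from the answer, rescaled so that h is unitary (h^{-1} = h^dagger).
h = np.array([[1,  0,  0, -1j],
              [0, 1j,  1,  0],
              [0, 1j, -1,  0],
              [1,  0,  0, 1j]]) / np.sqrt(2)

def is_local_unitary(g, tol=1e-10):
    """Check whether g in U(4) lies in U(1) * h SO(4) h^{-1},
    i.e. whether g is a global phase times a tensor product U1 (x) U2."""
    m = h.conj().T @ g @ h
    # Every entry of m should be z times a real number for a single phase z,
    # so the phase of an entry of largest modulus recovers z up to sign
    # (the sign does not matter: -R is in SO(4) whenever R is, in dimension 4).
    idx = np.unravel_index(np.argmax(np.abs(m)), m.shape)
    z = m[idx] / np.abs(m[idx])
    r = m / z
    if np.max(np.abs(r.imag)) > tol:
        return False
    r = r.real
    return (np.allclose(r @ r.T, np.eye(4), atol=tol)
            and np.isclose(np.linalg.det(r), 1.0, atol=tol))
```

For example, `np.kron(u1, u2)` for any unitaries `u1, u2` should pass (even after multiplying by a global phase), while an entangling gate such as CNOT should fail.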

Edit: a few indications on how $h$ is found: $A \in \mathrm{SL}_2(\mathbb{C})$ corresponds to a $1$-cocycle $c: \mathrm{Gal}(\mathbb{C}/\mathbb{R}) \to \mathrm{PGL}_2(\mathbb{C})$ defined by $c(1) = 1$ and $c(\overline{\cdot}) = A$ (well, its image in $\mathrm{PGL}_2(\mathbb{C})$) because we have $A \overline{A} = A^2 = -1 \in \ker(\mathrm{SL}_2(\mathbb{C}) \to \mathrm{PGL}_2(\mathbb{C}))$. Similarly $A \otimes A \in \mathrm{SO}_4(\mathbb{C})$ (with the choice of basis above, this $\mathrm{SO}_4$ is the one for the symmetric matrix $S = \begin{pmatrix} & & & 1 \\ & & -1 & \\ & -1 & & \\ 1 & & & \end{pmatrix}$, which also happens to be $A \otimes A$) defines a $1$-cocycle $c': \mathrm{Gal}(\mathbb{C}/\mathbb{R}) \to \mathrm{SO}_4(\mathbb{C})$.

Now by Hilbert 90 we know that any $1$-cocycle $\mathrm{Gal}(\mathbb{C}/\mathbb{R}) \to \mathrm{GL}_4(\mathbb{C})$ is a coboundary, in other words $A \otimes A$ may be written as $h \overline{h}^{-1}$ for some $h \in \mathrm{GL}_4(\mathbb{C})$. For the explicit computation we can use the standard proof of Hilbert 90: $h$ may be chosen of the form $\sum_{\tau \in \mathrm{Gal}(\mathbb{C}/\mathbb{R})} c'(\tau) \tau(a)$ for "generic" $a \in \mathrm{GL}_4(\mathbb{C})$. In fact we can make the computation slightly simpler by replacing $A \otimes A$ by $\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$ because $A$ is made up of this $2 \times 2$ matrix and its opposite. Finally, translating the above (split) $\mathrm{SO}_4$ for $S$ via the change of basis corresponding to $h$, we find the "usual" special orthogonal group for the symmetric matrix $I_4$.

3

Here's how you do it while dropping "unitary." You can actually vectorize all the matrices and treat this as a question about vectors: given a vector in $\mathbb{C}^{16}$, how do you know if it's the tensor product of two vectors in $\mathbb{C}^4$?

More abstractly, let's consider two vector spaces $V, W$ and ask when an element of the tensor product $V \otimes W$ is a tensor product $v \otimes w$. These are known as pure tensors. The answer turns out to be quite nice and can be written in coordinates as follows: if $\{ v_i \}$ is a basis of $V$ and $\{ w_j \}$ is a basis of $W$, and we write an element of $V \otimes W$ as $\sum a_{ij} v_i \otimes w_j$, then this element is a pure tensor iff all $2 \times 2$ minors of the matrix $a_{ij}$ vanish, so

$$a_{ij} a_{k \ell} - a_{i \ell} a_{kj} = 0$$

for all $i, j, k, \ell$. These are the homogeneous equations cutting out the Segre embedding of $\mathbb{P}(V) \times \mathbb{P}(W)$ into $\mathbb{P}(V \otimes W)$.
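These equations are straightforward to test numerically. A small sketch (the function name `is_pure_tensor` is mine) that checks every $2 \times 2$ minor of the coefficient matrix $a_{ij}$:

```python
import numpy as np

def is_pure_tensor(a, tol=1e-10):
    """a: the coefficient matrix a_{ij} of sum_{ij} a_{ij} v_i (x) w_j.
    The element is a pure tensor iff all 2x2 minors of a vanish
    (equivalently, a has rank at most 1)."""
    a = np.asarray(a, dtype=complex)
    n, m = a.shape
    return all(abs(a[i, j] * a[k, l] - a[i, l] * a[k, j]) < tol
               for i in range(n) for k in range(i + 1, n)
               for j in range(m) for l in range(j + 1, m))
```

For instance, the coefficient matrix of $v \otimes w$ is the rank-one outer product $v w^T$ and passes, while the Bell-type coefficient matrix $\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$ fails.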

To see this abstractly, we can assume WLOG that $V$ and $W$ are finite-dimensional. Then an element of $V \otimes W$ is the same thing as a linear map $V^{\ast} \to W$, and the condition that it's a pure tensor is the same as the condition that this linear map has rank at most $1$. And it's a general fact that a matrix has rank less than $k$ iff its $k \times k$ minors vanish; see, for example, this answer.

So this condition can be used to determine whether $U$ is a tensor product of two matrices. These matrices may not be unitary, but I think you should be able to check whether they are by taking partial traces as Kenneth says in the comments (unless by bad luck one of the traces happens to be zero).
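Concretely, for the $4 \times 4$ case one can reshuffle the entries of $U$ so that $U = U_1 \otimes U_2$ becomes a rank-$1$ condition on a rearranged matrix, then read the factors off its SVD. A sketch under that rearrangement (the name `kron_factor` is mine, not from the answer):

```python
import numpy as np

def kron_factor(u, tol=1e-10):
    """If the 4x4 matrix u equals u1 (x) u2, return (u1, u2); else None.
    Reshuffle u so that the tensor-product condition becomes rank 1:
        r[2i+j, 2k+l] = u[2i+k, 2j+l] = u1[i,j] * u2[k,l],
    i.e. r is the outer product of vec(u1) and vec(u2)."""
    r = u.reshape(2, 2, 2, 2).transpose(0, 2, 1, 3).reshape(4, 4)
    w, s, vh = np.linalg.svd(r)
    if s[1] > tol:          # rank > 1: not a tensor product
        return None
    u1 = (np.sqrt(s[0]) * w[:, 0]).reshape(2, 2)
    u2 = (np.sqrt(s[0]) * vh[0, :]).reshape(2, 2)
    return u1, u2
```

A pleasant side effect: if $U$ is unitary and the factorization exists, the factors recovered with this normalization appear to come out unitary automatically, since $U_1 \otimes U_2$ unitary forces $U_1^\dagger U_1 = cI$, $U_2^\dagger U_2 = c^{-1}I$, and the SVD normalization $\|U_1\|_F = \|U_2\|_F = \sqrt{2}$ fixes $c = 1$ (only a harmless phase split between the two factors remains).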

Qiaochu Yuan
  • 468,795
  • There should still be a reduction here by imposing unitarity as it has to be easier than to check $4^4$ equations. Also are you suggesting that this works for higher dimensions, isn't this supposed to be NP-hard? – Mauricio Jul 18 '22 at 11:46
  • @Mauricio: here the indices $i, j, k, \ell$ only have two different values so that's $16$ equations. Unitarity probably makes some of them redundant but I haven't thought about it. This works in any dimension; I'm not aware of any NP-hardness result here. – Qiaochu Yuan Jul 18 '22 at 11:49
  • Thanks for clarifying. I was thinking of this problem with density matrices : https://en.wikipedia.org/wiki/Separable_state – Mauricio Jul 18 '22 at 11:50
  • 1
    @Mauricio: I see. I guess the problem in higher dimensions is exactly that the number of equations blows up... – Qiaochu Yuan Jul 18 '22 at 11:54