Here's how to do it if we drop the "unitary" condition. You can vectorize the matrices and treat this as a question about vectors: given a vector in $\mathbb{C}^{16}$, how do you know whether it's the tensor product of two vectors in $\mathbb{C}^4$?
More abstractly, let's consider two vector spaces $V, W$ and ask when an element of the tensor product $V \otimes W$ has the form $v \otimes w$; such elements are known as pure tensors. The answer turns out to be quite nice and can be written in coordinates as follows: if $\{ v_i \}$ is a basis of $V$ and $\{ w_j \}$ is a basis of $W$, and we write an element of $V \otimes W$ as $\sum a_{ij} \, v_i \otimes w_j$, then this element is a pure tensor iff all $2 \times 2$ minors of the matrix $a_{ij}$ vanish, i.e.
$$a_{ij} a_{k \ell} - a_{i \ell} a_{kj} = 0$$
for all $i, j, k, \ell$. These are the homogeneous equations cutting out the Segre embedding of $\mathbb{P}(V) \times \mathbb{P}(W)$ into $\mathbb{P}(V \otimes W)$.
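For concreteness, here is a sketch in NumPy (the function name `is_pure_tensor` and the tolerance are my choices) that tests the minor-vanishing criterion directly on a coefficient matrix $a_{ij}$:

```python
import numpy as np
from itertools import combinations

def is_pure_tensor(a, tol=1e-10):
    """Return True iff every 2x2 minor of the coefficient matrix a_ij is
    (numerically) zero, i.e. sum_ij a_ij v_i ⊗ w_j is a pure tensor."""
    m, n = a.shape
    for i, k in combinations(range(m), 2):
        for j, l in combinations(range(n), 2):
            if abs(a[i, j] * a[k, l] - a[i, l] * a[k, j]) > tol:
                return False
    return True

# The outer product of two vectors in C^4 is a pure tensor...
v = np.array([1, 2j, 0, -1])
w = np.array([3, 1, 1j, 2])
assert is_pure_tensor(np.outer(v, w))

# ...while a generic random 4x4 matrix has full rank and is not.
rng = np.random.default_rng(0)
assert not is_pure_tensor(rng.standard_normal((4, 4)))
```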
To see this abstractly, we can assume WLOG that $V$ and $W$ are finite-dimensional. Then an element of $V \otimes W$ is the same thing as a linear map $V^{\ast} \to W$, and the condition that it's a pure tensor is the same as the condition that this linear map has rank at most $1$. And it's a general fact that a matrix has rank less than $k$ iff its $k \times k$ minors vanish; see, for example, this answer.
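In floating point, comparing every minor to an exact zero is fragile; an equivalent and more robust test (my suggestion, not part of the original answer) uses the rank characterization directly: a matrix has rank at most $1$ iff its second-largest singular value is (numerically) zero.

```python
import numpy as np

def is_rank_at_most_one(a, tol=1e-10):
    """Rank <= 1 iff the second singular value is (numerically) zero;
    equivalent to all 2x2 minors vanishing, but more robust numerically."""
    s = np.linalg.svd(a, compute_uv=False)
    return s.size < 2 or s[1] <= tol * max(s[0], 1.0)

# An outer product of two vectors has rank 1...
assert is_rank_at_most_one(np.outer([1, 1j, 0, 2], [1, -1, 3, 0]))
# ...while the 4x4 identity has rank 4.
assert not is_rank_at_most_one(np.eye(4))
```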
So this condition can be used to determine whether $U$ is a tensor product $A \otimes B$ of two matrices: writing $U_{(i,j),(k,\ell)} = A_{ik} B_{j\ell}$, rearrange the entries of $U$ into the matrix $M_{(i,k),(j,\ell)} = U_{(i,j),(k,\ell)}$; then $U$ is a tensor product iff $M$ has rank at most $1$. The factors $A$ and $B$ (determined only up to scalars $\lambda, \lambda^{-1}$) may not be unitary, but I think you should be able to check whether they are by taking partial traces as Kenneth says in the comments (unless by bad luck one of the traces happens to be zero).
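As a concrete sketch (the helper name `tensor_factors` and the $2 \times 2$ shapes are my choices, taking $U$ to act on $\mathbb{C}^2 \otimes \mathbb{C}^2$ for brevity): if $U = A \otimes B$ then $U_{(i,j),(k,\ell)} = A_{ik} B_{j\ell}$, so rearranging the entries into $M_{(i,k),(j,\ell)}$ gives a rank-$1$ matrix exactly when $U$ is a tensor product, and an SVD both detects this and reads the factors off the top singular pair:

```python
import numpy as np

def tensor_factors(U, tol=1e-10):
    """Try to factor a 4x4 matrix U as A ⊗ B with A, B both 2x2.
    Rearranging U[(i,j),(k,l)] = A[i,k]*B[j,l] into M[(i,k),(j,l)] turns
    "U is a tensor product" into "M has rank 1"; the top singular pair of
    M then yields the factors. Returns (A, B), or None if no factorization."""
    M = U.reshape(2, 2, 2, 2).transpose(0, 2, 1, 3).reshape(4, 4)
    u, s, vh = np.linalg.svd(M)
    if s[1] > tol * max(s[0], 1.0):
        return None  # rank > 1: not a tensor product
    A = (s[0] * u[:, 0]).reshape(2, 2)
    B = vh[0, :].reshape(2, 2)
    return A, B

# Example: X ⊗ S (both unitary) factors; CNOT does not.
X = np.array([[0, 1], [1, 0]], dtype=complex)
S = np.array([[1, 0], [0, 1j]], dtype=complex)
A, B = tensor_factors(np.kron(X, S))
assert np.allclose(np.kron(A, B), np.kron(X, S))

# The factors are only determined up to scalars lam, 1/lam, so check
# unitarity "up to scale": A†A should be a multiple of the identity.
G = A.conj().T @ A
assert np.allclose(G, G[0, 0] * np.eye(2))

CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                 [0, 0, 0, 1], [0, 0, 1, 0]], dtype=complex)
assert tensor_factors(CNOT) is None
```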