4

Irreducible decompositions

Say that a decomposition $f(x,y) = \sum_i U_i(x)V_i(y)$ is irreducible if the $U_i$ are all linearly independent, as are the $V_i$. The rank of a decomposition is the number of terms in the sum.

In a previous question, I established that any two irreducible decompositions $f(x,y)=\sum_i U_i(x)V_i(y) = \sum_i P_i(x)Q_i(y)$ have the same rank. Moreover, the set $\{U_1, \ldots, U_n, P_1, \ldots, P_n\}$ is linearly dependent, as is the set $\{V_1,\ldots,V_n,Q_1,\ldots, Q_n\}$. (Linear independence when writing a function as a sum of functions.)

Question:

I am trying to establish whether/when a stronger result might hold, namely that if the decompositions are irreducible, then the $\{U_i\}$ and the $\{P_i\}$ necessarily span the same space, i.e. that each $P_i$ is a linear combination of the $U_i$:

If $f(x,y)=\sum_i U_i(x)V_i(y) = \sum_i P_i(x)Q_i(y)$ and both decompositions are irreducible, then $\text{span}(\{U_i\}) = \text{span}(\{P_i\})$.

I know that if $n$ functions $f_i$ are linearly independent, then there exist $n$ points $x_j$ such that the matrix $[f_i(x_j)]$ is invertible. So I could prove this result if there were a set of points $x_j$ such that both $[U_i(x_j)]$ and $[P_i(x_j)]$ are invertible. Or, if the statement is false, perhaps there is a simple example of a particular $f$ with two decompositions where the conjecture fails. Alternatively, there might be a way to represent $f(x,y)$ as a decomposition involving both the $U_i$ and the $P_i$ and then use the minimality property to winnow it down, but I haven't had much luck there. Any help is appreciated.

I also tried defining $$D(\vec{\alpha})\equiv \det\big([U_i(\alpha_j)]_{i,j}\cdot [P_i(\alpha_j)]_{i,j}\big),$$ which, by the multiplicative property of determinants, is identically zero unless there exists a collection of $n$ points $\alpha_1,\ldots,\alpha_n$ that simultaneously makes both matrices invertible.
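As a quick numerical sketch of this determinant idea (my own toy example, not from the question): take $U_i(x) = x^{i-1}$ spanning the quadratics, and $P_i$ obtained from the $U_i$ by an invertible change of basis $A$, so that $\text{span}(U_i)=\text{span}(P_i)$ by construction. Generic sample points then make both matrices invertible at once, so $D(\vec\alpha)\neq 0$:

```python
import numpy as np

# Toy instance: U_i span the polynomials of degree < 3,
# and P = A U for an invertible change of basis A.
rng = np.random.default_rng(0)
alpha = rng.uniform(-1, 1, size=3)           # three candidate points

MU = np.vander(alpha, 3, increasing=True).T  # MU[i, j] = U_i(alpha_j) = alpha_j**i
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, -1.0],
              [2.0, 0.0, 0.0]])              # invertible change of basis
MP = A @ MU                                  # MP[i, j] = P_i(alpha_j)

D = np.linalg.det(MU @ MP)                   # D(alpha) = det(MU) * det(MP)
print(abs(D) > 1e-9)   # generic distinct points make both matrices invertible
```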

In short:

Just how unique are irreducible decompositions? Are any two irreducible decompositions $\sum_i U_i(x)V_i(y) = \sum_i P_i(x)Q_i(y) $ linearly related to each other with $\text{span}(U_i)=\text{span}(P_i)$, or are there decompositions that are significantly different from one another?

user326210
  • 19,274

2 Answers

4

Note: the notations are a bit different from the ones in the question.

Let $k$ be a field and $X$, $Y$ nonempty sets. Let $V$ be the space of $k$-valued functions on $X$, and $W$ the space of $k$-valued functions on $Y$. Then we have an embedding of $V\otimes W$ into the space of functions on $X\times Y$

$$V\otimes W\to \mathcal{F}(X\times Y)$$

given by $$\sum f_i(x) \otimes g_i(y) \mapsto \sum f_i(x) g_i(y)$$

Now we can forget that $V$, $W$ are spaces of functions. The question is about elements of the space $V\otimes W$.

Now, every element $u \in V\otimes W$ can be written as a finite sum $$u = \sum_{i \in I} v_i \otimes w_i$$

Choosing a basis of the system $w_i$, we may assume that the $w_i$ are linearly independent. (Even further, we may also assume that the $v_i$ are linearly independent, although we do not need that at this point.)

Note that for every linear functional $\phi \colon W\to k$ we have an element $u \cdot (1 \otimes \phi) \in V$

$$u \cdot (1 \otimes \phi) = \sum_{i\in I} v_i \cdot \phi(w_i)$$

In this way we get a linear map from $W^{\star}$ to $V$. If we take an expression of $u$ with the $w_i$ linearly independent, we see that the image of this map equals the span of the $v_i$.

Let us now take an expression of $u$ in which both systems $v_i$ and $w_i$ are linearly independent. We see that the image $$\{ u \cdot (1 \otimes \phi) \ | \ \phi \in W^{\star}\}$$ equals the vector space with basis $v_i$.

Now we see easily that the span and the cardinality do not depend on the irreducible expression of $u$.

Note: The spans in $V$ and $W$ of an element $u \in V\otimes W$ are the smallest subspaces $V'\subset V$, $W'\subset W$ such that $u \in V'\otimes W'$.

$\bf{Added:}$ Why is the above map $V\otimes W \to \mathcal{F}(X\times Y)$ injective? Consider a linearly independent system $v_i$ in $V$ and a linearly independent system $w_j$ in $W$. We have to show that the function $$\sum \alpha_{ij} v_i(x) w_j(y)$$ on $X\times Y$ is the zero function only if all the $\alpha_{ij}$ are $0$.

Fix an $x\in X$. Then we have $$\sum_{j\in J} \Big( \sum_{i\in I} \alpha_{ij} v_i(x)\Big) w_j(y) = 0$$ for all $y \in Y$. Since the functions $w_j$ are linearly independent, this means $$\sum_{i\in I} \alpha_{ij} v_i(x)= 0$$ for all $j$. Since this holds for every $x$, we have $$\sum_{i\in I} \alpha_{ij} v_i(\cdot)= 0$$ for each $j$. Now use that the $v_i(\cdot)$ are linearly independent.
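A small numerical sketch of the contraction argument (my own illustration, not part of the answer): for $u=\sum_i v_i\otimes w_i$, the contractions $u\cdot(1\otimes\phi_b)$ with evaluation functionals $\phi_b(w)=w(b)$ are the slices $x\mapsto f(x,b)$, and their span equals $\text{span}(v_i)$:

```python
import numpy as np

xs = np.linspace(0, 1, 50)          # sample grid standing in for X
bs = np.linspace(0, 1, 7)           # points b giving functionals phi_b

v = np.stack([np.ones_like(xs), xs, np.exp(xs)])   # independent v_i, sampled on xs
w = lambda b: np.array([np.cos(b), b**2, 1 + b])   # independent w_i, evaluated at b

# slices u.(1 (x) phi_b) = sum_i w_i(b) v_i, one column per b
slices = np.stack([w(b) @ v for b in bs], axis=1)  # shape (50, 7)

# rank of the slice family = 3 = number of independent v_i,
# and every slice lies in the span of the v_i
print(np.linalg.matrix_rank(slices))                    # 3
print(np.linalg.matrix_rank(np.vstack([v, slices.T])))  # still 3
```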

orangeskid
  • 56,630
  • Thanks for your help. Can you explain more how it connects to my question? It sounds like you're saying that because subsets $A_1,A_2 \subset V$ and $B\subset W$ have $n$ linearly independent vectors apiece, you can find invertible linear maps between their spans. True, but I don't yet see the big picture. Does this answer whether every two minimal decomps $\sum u_iv_i = \sum p_iq_i$ have $\text{span}(u_i) = \text{span}(p_i)$, equal as vector spaces not just in dimension? – user326210 Jun 19 '21 at 07:52
  • @user326210: It also describes that span as being defined only in terms of the tensor $u$, and not of the particular minimal decomposition. – orangeskid Jun 19 '21 at 21:29
  • That might be what I'm missing. Concretely, if you have a function $f = \sum_{i} u_i(x) v_i(y) = \sum_{i} p_i(x) q_i(y)$, and a linear functional $\phi$ defined on single-variable functions, how do you know that $\sum_i \phi(p_i) q_i$ and $\sum_i \phi(u_i)v_i$ are equal as functions of $y$? (So that the map from functionals to functions of $y$ is well-defined.) I can see why it's true for particular linear functionals, like the eval functionals, but I don't see why it's true in general. – user326210 Jun 20 '21 at 05:48
  • @user326210: an important fact is that the map $V\otimes W$ to the space of functions on $X\times Y$ is injective. This is not obvious, but also not hard to prove. I will add some explanation in the solution – orangeskid Jun 20 '21 at 07:34
1

The conjecture is true: if $f(x,y)$ has two minimal decompositions $\sum_i u_i(x) v_i(y) = \sum_i p_i(x)q_i(y)$, then each $p_i$ is a linear combination of the $u_i$, and each $q_i$ is a linear combination of the $v_i$. Specifically, if we write the linear combination as $\vec{p} = A\vec{u}$, we find that $\vec{q}=(A^{\top})^{-1}\vec{v}$. So there is, up to invertible transformation $A$, exactly one way to decompose any given function $f(x,y)$.

Proof. We need the following lemma:

Lemma: Functions $f_1(x),\ldots,f_n(x)$ are linearly independent if and only if there exist points $x_1,\ldots, x_n$ such that the matrix $[f_i(x_j)]_{i,j}$ is invertible.
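A concrete instance of the lemma (my own illustration): for the independent functions $f_i(x)=x^{i-1}$, any distinct points $x_j$ make $[f_i(x_j)]_{i,j}$ a Vandermonde matrix, which is invertible:

```python
import numpy as np

x = np.array([0.0, 0.5, 2.0, -1.0])     # distinct sample points
M = np.stack([x**i for i in range(4)])  # M[i, j] = f_{i+1}(x_j) = x_j**i
print(abs(np.linalg.det(M)) > 1e-9)     # True: Vandermonde det = prod of differences
```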

Now, suppose $f$ has two decompositions $\sum_{i=1}^n u_i(x)v_i(y)$ and $\sum_{i=1}^n p_i(x)q_i(y)$, which have the same number of terms but are not both assumed to be fully reduced.

If $\sum_i u_i(x)v_i(y)$ is fully reduced so that $u_1,\ldots,u_n$ are linearly independent as are $v_1,\ldots,v_n$, then by the lemma there exist points $a_1,\ldots, a_n$ and $b_1,\ldots,b_n$ such that the matrices $U\equiv [u_i(a_j)]_{i,j}$ and $V\equiv [v_i(b_j)]_{i,j}$ are invertible.

A direct calculation shows that the product $U^{\top}V$ is just the matrix $[f(a_i, b_j)]_{i,j}$, since $(U^{\top}V)_{ij} = \sum_k u_k(a_i)v_k(b_j) = f(a_i,b_j)$; in particular it is invertible. By the same calculation this matrix also equals $P^{\top}Q$, where $P\equiv [p_i(a_j)]_{i,j}$ and $Q\equiv [q_i(b_j)]_{i,j}$. Because the product $P^{\top}Q$ is invertible, each of the square matrices $P$ and $Q$ must itself be invertible.

Now, $U^{\top}\begin{bmatrix}v_1\\\vdots\\v_n\end{bmatrix} = \begin{bmatrix}f(a_1,y)\\\vdots\\f(a_n,y)\end{bmatrix} = P^{\top}\begin{bmatrix}q_1\\\vdots\\q_n\end{bmatrix}$, since the $j$th entry of each side equals $f(a_j,y)$. Multiply on the left by $(U^{\top})^{-1}$ to find that:

$$\vec{v} = \left((U^{\top})^{-1}P^{\top}\right)\vec{q},$$

so each entry of $\vec{v}$ is a linear combination of the entries of $\vec{q}$. The case for $\vec{u}$ is analogous.
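The conclusion can be sanity-checked numerically (my own sketch, with arbitrary sample functions): build one $f$ from $u_i, v_i$, form a second decomposition via $\vec p = A\vec u$, $\vec q = (A^{\top})^{-1}\vec v$ for a generic invertible $A$, and verify that the two decompositions agree and that $\text{span}(v_i)=\text{span}(q_i)$:

```python
import numpy as np

rng = np.random.default_rng(1)
xs = np.linspace(0, 1, 40)
ys = np.linspace(0, 1, 40)

u = np.stack([np.ones_like(xs), xs, xs**2])        # u_i sampled on xs
v = np.stack([np.exp(ys), ys, np.ones_like(ys)])   # v_i sampled on ys

A = rng.normal(size=(3, 3))                        # generic, hence invertible
p = A @ u                                          # p = A u
q = np.linalg.inv(A.T) @ v                         # q = (A^T)^{-1} v

F1 = u.T @ v          # f(x,y) = sum_i u_i(x) v_i(y) on the grid
F2 = p.T @ q          # the same f from the second decomposition
print(np.allclose(F1, F2))                          # True

# span(v_i) == span(q_i): stacking the two systems does not raise the rank
print(np.linalg.matrix_rank(np.vstack([v, q])))     # 3
```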

user326210
  • 19,274