
Let $V$ be a vector space with basis $(e_{1}, \cdots , e_{d})$. The $n$-fold symmetric tensor product $\operatorname{Sym}^n(V)\subset V^{\otimes n}$ is the subspace of symmetric tensors. It can be obtained as the image of the symmetrization projection $$ S:\left\lbrace \begin{aligned} V^{\otimes n}\quad & \longrightarrow \quad V^{\otimes n}\\ e_{i_1}\otimes \cdots \otimes e_{i_n} & \longmapsto \frac{1}{n!} \sum_{\sigma\in\mathfrak{S}_n} e_{i_{\sigma(1)}}\otimes \cdots \otimes e_{i_{\sigma(n)}}\end{aligned} \right. $$ For example, with $n=3$ and $d\geq 3$, $$ S(e_{1}\otimes e_2 \otimes e_{3})= \frac{1}{6}\big(e_{1}\otimes e_2 \otimes e_{3} + e_{2}\otimes e_3 \otimes e_{1} + e_{3}\otimes e_1 \otimes e_{2} + e_{1}\otimes e_3 \otimes e_{2} + e_{3}\otimes e_2 \otimes e_{1} + e_{2}\otimes e_1 \otimes e_{3} \big). $$
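For concreteness, here is a minimal numerical sketch of $S$ (assuming $V=\mathbb{R}^d$, with $n$-fold tensors stored as $d\times\cdots\times d$ numpy arrays; `symmetrize` is my own helper name, not a library function):

```python
import itertools
import math
import numpy as np

def symmetrize(T):
    """Project an n-fold tensor (numpy array of shape (d,)*n) onto the symmetric subspace."""
    n = T.ndim
    return sum(np.transpose(T, p) for p in itertools.permutations(range(n))) / math.factorial(n)

d = 3
e = np.eye(d)
T = np.einsum('i,j,k->ijk', e[0], e[1], e[2])  # e_1 ⊗ e_2 ⊗ e_3
S_T = symmetrize(T)
# S(e_1 ⊗ e_2 ⊗ e_3) puts weight 1/6 on each of the 6 permuted slots:
assert all(np.isclose(S_T[p], 1/6) for p in itertools.permutations((0, 1, 2)))
```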

A basis of $\operatorname{Sym}^n(V)$ is given by (cf. p. 33 of the link in this question) $$ \Big\lbrace S(e_{i_1}\otimes \cdots \otimes e_{i_n}),\ 1\leq i_1 \leq i_2 \leq \cdots \leq i_n \leq d \Big\rbrace \tag{1} \label{1}$$ so that (cf. this other post) $$\operatorname{dim}\big(\operatorname{Sym}^n(V) \big) = { d+n-1 \choose n}. \tag{2} \label{2}$$
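A quick sanity check of (\ref{1}) and (\ref{2}), as a self-contained sketch for $n=2$ (where $2\,S(e_i\otimes e_j)$ is represented by the matrix $E_{ij}+E_{ji}$; the scaling does not affect the rank):

```python
import math
from itertools import combinations_with_replacement
import numpy as np

d = 4
e = np.eye(d)
# The vectors of (1) for n = 2, up to a harmless factor of 2, flattened.
basis = [np.outer(e[i], e[j]) + np.outer(e[j], e[i])
         for i, j in combinations_with_replacement(range(d), 2)]
M = np.stack([B.ravel() for B in basis])
assert np.linalg.matrix_rank(M) == len(basis) == math.comb(d + 2 - 1, 2)  # = 10
```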

Question: Show that $\big\lbrace \mathbf{x}\otimes \mathbf{x} \otimes \cdots \otimes \mathbf{x},\ \mathbf{x} \in V \big\rbrace $ is another generating set, or even better, give a basis consisting of vectors of this form.


My motivation for this question came from the sentence around equation (3.48), p. 38, of these lecture notes on quantum field theory. A representation of $SU(2)$ is then (completely) determined by its action on vectors of the form $\mathbf{x}\otimes \mathbf{x} \otimes \cdots \otimes \mathbf{x}$. In the notes, $V:= \mathbb{C}^2$, so that $d=2$ and $n=2s$ in (\ref{2}), i.e. $\operatorname{dim}\big(\operatorname{Sym}^n(V) \big)=2s +1$.

Additional keywords: symmetric tensor product, representation of $SU(2)$, spin.


1 Answer


In fact, this question has already been addressed. Let me nevertheless give first my line of thought, then in a second part a misleading detour, and finally an equivalent problem.

  1. I first took the "finite difference" $$ (\mathbf{x}+\boldsymbol{\delta})\otimes (\mathbf{x}+\boldsymbol{\delta}) \otimes \cdots \otimes (\mathbf{x}+\boldsymbol{\delta})- \mathbf{x}\otimes \mathbf{x} \otimes \cdots \otimes \mathbf{x}= \sum_{k=1}^n {n\choose k} S\big(\boldsymbol{\delta}^{\otimes k} \otimes \mathbf{x}^{\otimes (n-k)} \big) $$ which imitates differentiation ($f:x\mapsto x^n\ \Rightarrow f'(x)=nx^{n-1}$), but since the equivalent of the third derivative always vanishes, this procedure cannot produce the basis vectors of (1).

  2. The correct linear combination is an adaptation of the polarization formulae, which we recall: they usually relate quadratic forms to bilinear forms, and more generally, if $\alpha: V \to \mathbb{C}$ is such that $\forall\ \lambda \in \mathbb{C},\ \forall\ \mathbf{x}\in V,\ \alpha(\lambda \mathbf{x})= \lambda^n \alpha(\mathbf{x})$, then the $n^{\text{th}}$ derived form (or defect) $$\Delta^n\alpha\ (\mathbf{x}_1, \mathbf{x}_2,\cdots , \mathbf{x}_n):= \frac{1}{n!}\sum_{k=1}^{n}\ \sum_{1\leq i_1 < i_2 < \cdots < i_k \leq n} (-1)^{n-k} \alpha (\mathbf{x}_{i_1} + \mathbf{x}_{i_2} + \cdots + \mathbf{x}_{i_k}) \tag{Polar} \label{Polar}$$ is $n$-linear and symmetric. (I took it from Drapala, Vojtechovsky, (2.1) p. 4; I'll write a proof later.)

  3. In our problem, we should thus have (if the formula is correct) $$S\big(\mathbf{x}_1 \otimes \mathbf{x}_2 \otimes \cdots \otimes \mathbf{x}_n\big) = \frac{1}{n!}\sum_{k=1}^{n}\ \sum_{1\leq i_1 < i_2 < \cdots < i_k \leq n} (-1)^{n-k} (\mathbf{x}_{i_1} + \mathbf{x}_{i_2} + \cdots + \mathbf{x}_{i_k})^{\otimes n} \tag{Sol} \label{Sol}$$ (the symmetrizer $S$ is needed on the left-hand side, since the right-hand side is symmetric in the $\mathbf{x}_i$). Replacing $(\mathbf{x}_1,\ldots,\mathbf{x}_n)$ by the tuples $(e_{i_1},\ldots,e_{i_n})$ appearing in (1) then expresses each basis vector of (1) as a linear combination of $n^{\text{th}}$ tensor powers; a numerical check is sketched below.
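A quick numerical sanity check of (\ref{Sol}) (a minimal sketch, assuming $V=\mathbb{R}^d$ with numpy and random vectors; `outer_chain` is an ad-hoc helper, not a library function):

```python
# Check S(x1 ⊗ x2 ⊗ x3) against the signed subset sum of (Sol).
import itertools
import math
import numpy as np

rng = np.random.default_rng(0)
d, n = 4, 3
xs = [rng.standard_normal(d) for _ in range(n)]

def outer_chain(vectors):
    """v_1 ⊗ v_2 ⊗ ... ⊗ v_m as a numpy array of shape (d,)*m."""
    T = vectors[0]
    for v in vectors[1:]:
        T = np.tensordot(T, v, axes=0)
    return T

# Left-hand side: explicit symmetrization of x1 ⊗ ... ⊗ xn.
lhs = sum(outer_chain([xs[i] for i in p])
          for p in itertools.permutations(range(n))) / math.factorial(n)

# Right-hand side: signed sum of n-th tensor powers over nonempty subsets.
rhs = sum((-1) ** (n - k) * outer_chain([sum(xs[i] for i in s)] * n)
          for k in range(1, n + 1)
          for s in itertools.combinations(range(n), k)) / math.factorial(n)

assert np.allclose(lhs, rhs)
```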



A misleading connection between (\ref{Polar}) and our initial problem is the "realization" of a tensor $T\in V^{\otimes n}$ as an $n$-linear map $T: V^* \times V^* \times \cdots \times V^* \longrightarrow \mathbb{C}$ (cf. e.g. here, at least for finite-dimensional spaces). For example, $e_{1}\otimes e_2 \otimes \cdots \otimes e_{2}$ can be thought of as $$e_{1}\otimes e_2 \otimes \cdots \otimes e_{2}:\left\lbrace \begin{aligned} V^* \times V^* \times \cdots \times V^* & \longrightarrow \quad \mathbb{C}\\ (\lambda_{1}, \lambda_2, \cdots , \lambda_{n})\quad & \longmapsto \lambda_{1}(e_{1}) \lambda_2(e_2) \cdots \lambda_{n}(e_2) \end{aligned} \right. $$ to which one associates the following homogeneous map of degree $n$ $$\alpha: \left\lbrace \begin{aligned} V^* & \longrightarrow \quad \mathbb{C}\\ \lambda \enspace & \longmapsto \lambda(e_{1})\ \lambda(e_2)^{n-1} \end{aligned} \right. \tag{$\alpha$} \label{alpha}$$ whose $n^{\text{th}}$ derived form is, by the symmetry of (\ref{Polar}), the multilinear map associated with $S(e_{1}\otimes e_2 \otimes \cdots \otimes e_{2})$. The problem is that $\alpha$ is not of the form $\lambda \longmapsto \lambda(\mathbf{x})^n$, i.e. it does not come from a tensor $\mathbf{x}\otimes \mathbf{x} \otimes \cdots \otimes \mathbf{x}$.
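To make the symmetry visible, take $n=2$ and $\alpha(\lambda)=\lambda(e_1)\,\lambda(e_2)$; then $$\Delta^2\alpha\,(\lambda,\mu)=\tfrac{1}{2}\big(\alpha(\lambda+\mu)-\alpha(\lambda)-\alpha(\mu)\big)=\tfrac{1}{2}\big(\lambda(e_1)\mu(e_2)+\mu(e_1)\lambda(e_2)\big),$$ which is exactly the bilinear map associated with $S(e_1\otimes e_2)$, not with $e_1\otimes e_2$ itself.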


The question is analogous to the generalization of the "Gauss reduction" (no English article... it is the one used in Sylvester's law of inertia), i.e. expressing a general homogeneous polynomial of degree $n$ $$ P(x_1,x_2,\cdots, x_d)= \sum_{i=1}^d a_i x_i^n + \sum_{i\neq j} b_{i,j} x_i^{n-1} x_j + \sum_{i\neq j,k} c_{i,j,k} x_i^{n-2} x_j x_k + \cdots \tag{Poly}\label{Poly} $$ as a sum of $n^{\text{th}}$ powers of linear forms, i.e. $\exists\ (\alpha_1,\cdots , \alpha_r)\in \mathbb{R}^r$ and linear forms $ (l_1,\cdots , l_r)$ s.t. $$ P(x_1,x_2,\cdots, x_d)= \sum_{p=1}^r \alpha_p\, l_p(x_1,x_2,\cdots, x_d)^n \tag{nPower}\label{nPower}$$ (A somewhat formal correspondence with our problem is given by $$P\ \longleftrightarrow\ \sum_{i=1}^d a_i S\big(e_i^{\otimes n}\big) + \sum_{i\neq j} b_{i,j} S\big( e_i^{\otimes (n-1)} \otimes e_j \big) + \sum_{i\neq j,k} c_{i,j,k} S\big(e_i^{\otimes (n-2)}\otimes e_j \otimes e_k\big) + \cdots$$ where $P(x_1,x_2,\cdots, x_d)= P(\mathbf{x})$ plays the role of the $\alpha$ in (\ref{Polar}) or (\ref{alpha}).)
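For instance, for $n=3$ and $d=2$, expanding $(x_1+x_2)^3-(x_1-x_2)^3=6x_1^2x_2+2x_2^3$ gives the decomposition $$x_1^2\,x_2=\tfrac{1}{6}(x_1+x_2)^3-\tfrac{1}{6}(x_1-x_2)^3-\tfrac{1}{3}\,x_2^3,$$ a sum of third powers of linear forms as in (\ref{nPower}).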

This problem probably admits different solutions (this is already the case for the decomposition of a quadratic form as a sum of squares: the parallelogram identity $(x+y)^2 + (x-y)^2 = 2x^2+2y^2$ is in fact an equality between two sums of squares!):

  • Applying (\ref{Polar}) to $\alpha: \mathbb{R}^n \to \mathbb{R},\ (y_1,\cdots, y_n) \mapsto \prod_{j=1}^n y_j$ yields $$ y_1 \cdots y_n= \frac{1}{n!}\sum_{k=1}^{n}\ \sum_{1\leq i_1 < i_2 < \cdots < i_k \leq n} (-1)^{n-k} \big(y_{i_1} + y_{i_2} + \cdots + y_{i_k}\big)^n \tag{Polar2}\label{Polar2}$$ and successively replacing $y_1 \cdots y_n$ by the monomials $x_i^n,\ x_i^{n-1} x_j,\ x_i^{n-2} x_j x_k$, etc. of (\ref{Poly}) yields (\ref{nPower}); a numerical check of (\ref{Polar2}) is sketched after this list. This is what seems to be done here, but this other answer looks much more interesting.

  • Instead of doing it for each monomial, one can try to treat the problem one variable $x_i$ at a time: assume that one of the $a_i$ is non-zero (otherwise, jump to the other cases, which have to be treated anyway). Let's assume it is $a_1$; then $$P(x_1,x_2,\cdots, x_d)= a_1 x_1^n + x_1^{n-1} B(x_2,\cdots, x_d) + x_1^{n-2} C(x_2,\cdots, x_d) + \cdots \tag{a}\label{Fctze}$$ where $B$ is a polynomial of degree 1, $C$ of degree 2, etc., in the $d-1$ other variables. Completing the $n^{\text{th}}$ power, $$ \ref{Fctze} = a_1 \left(x_1 + \frac{B(x_2,\cdots, x_d)}{na_1}\right)^n + x_1^{n-2}\left( C(x_2,\cdots, x_d) - a_1{n\choose 2} \Big(\frac{B(x_2,\cdots, x_d)}{na_1}\Big)^2 \right) + \cdots $$ The second term is of the form $ x_1^{n-2}\ \tilde{C}(x_2,\cdots, x_d)$ with $\tilde{C}$ quadratic. Use a decomposition of it as a sum of squares, $\tilde{C}= \sum_p c_p\, l_p(x_2,\cdots, x_d)^2$. Inspired by (\ref{Polar2}), one guesses something like $$x_1^{n-2} l_p^2 = \big((n-2)x_1 + 2 l_p\big)^n - 2\big((n-2)x_1 + l_p \big)^n -(n-2) \big((n-3)x_1 + 2 l_p \big)^n + \cdots $$ but I am not sure one can get an explicit formula this way...
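For what it's worth, here is a small numerical sketch checking (\ref{Polar2}) itself, for $n=4$ with random reals (a sketch only; the seed and size are arbitrary):

```python
# Verify (Polar2): n! * y1*...*yn equals the signed sum of n-th powers of subset sums.
import itertools
import math
import numpy as np

rng = np.random.default_rng(1)
n = 4
y = rng.standard_normal(n)

rhs = sum((-1) ** (n - k) * sum(y[i] for i in s) ** n
          for k in range(1, n + 1)
          for s in itertools.combinations(range(n), k)) / math.factorial(n)

assert np.isclose(np.prod(y), rhs)
```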
