7

$ \newcommand{\real}{\mathbb{R}} $ Let's consider a multivariate polynomial $p\in \real[x_1,\dots,x_d]$ of order $n$. We can now take $1$-dimensional slices of this polynomial $$ p_v(\lambda) := p(\lambda v) $$ for $v\in \real^d$ and $\lambda\in \real$.

How many of these slices are necessary to fully determine the original polynomial $p$? This is equivalent to the question: If all $p_v \equiv 0$, under what conditions do we have $p\equiv 0$?

Example

Let $p(x) = x_1 x_2$, then $p_{e_1}\equiv 0$ and $p_{e_2}\equiv 0$, yet the polynomial is obviously not zero.
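A minimal numeric sketch of this example (plain Python, names illustrative): both coordinate-axis slices vanish identically, yet $p$ itself is nonzero.

```python
# p(x) = x1 * x2 vanishes along both coordinate axes,
# yet p is not the zero polynomial.

def p(x1, x2):
    return x1 * x2

# slice along e1: p(lambda * e1) = p(lambda, 0)
slice_e1 = [p(lam, 0.0) for lam in range(-5, 6)]
# slice along e2: p(lambda * e2) = p(0, lambda)
slice_e2 = [p(0.0, lam) for lam in range(-5, 6)]

print(all(v == 0 for v in slice_e1))  # True
print(all(v == 0 for v in slice_e2))  # True
print(p(1.0, 1.0))                    # 1.0 -- p itself is nonzero
```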

Towards an answer

A similar question is: how many evaluation points of a multivariate polynomial are necessary to ensure that $p$ is zero (e.g. Proving a multivariate polynomial is zero in $N$ easy steps. But what is $N$?). For any point $x\in \real^d$ and order $n$ there is a corresponding monomial vector $$ m(x) = (\underbrace{x_1^n, x_1^{n-1}x_2,\dots, x_d^n}_{\text{order } n}, x_1^{n-1}, \dots, x_d, 1) \in \real^N $$ for $N = {n+d \choose n}$ (Number of coefficients of a multivariable polynomial). And the trick is now to pick points $x_1,\dots,x_N$ such that the $m(x_i)$ are linearly independent.
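To sanity-check the count $N = {n+d \choose n}$, one can enumerate exponent tuples directly; a short sketch (the helper name `num_monomials` is illustrative, not from the source):

```python
# Count all monomials of total degree <= n in d variables by brute-force
# enumeration of exponent tuples, and compare with C(n+d, n).
from itertools import product
from math import comb

def num_monomials(d, n):
    """Count exponent tuples (k_1, ..., k_d) with k_1 + ... + k_d <= n."""
    return sum(1 for ks in product(range(n + 1), repeat=d) if sum(ks) <= n)

for d, n in [(2, 3), (3, 4), (4, 2)]:
    assert num_monomials(d, n) == comb(n + d, n)
print(num_monomials(3, 4))  # 35 = C(7, 4)
```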

In the case of our 1D slices, we do not get single point evaluations, but rather a continuum of point evaluations. In some sense our example is very unlucky, since our continuum of point evaluations, mapped via $m$, is contained in a low-dimensional subspace of $\real^N$ and is therefore not sufficient.

The question is now: is there a good criterion on the vectors $v_1,v_2,\dots$ ensuring that the monomial vectors along those lines are independent with the least number of vectors? And what is that number of vectors?

  • Is $v$ handed to us or can we pick $v$ to have algebraically independent transcendental coordinates? – Eric Towers Oct 14 '24 at 10:01
  • @EricTowers you can pick it but I would ideally like a general sufficient (necessary) condition how to pick $v$ such that their number is minimal so that I can compare this condition to other things I need from them in the application – Felix Benning Oct 14 '24 at 10:27

1 Answer

4

The number of vectors $v_i$ must be at least ${n + d - 1 \choose n}$

$\newcommand{\real}{\mathbb{R}}$ Let $m_n$ be the map to the monomials of $x$ of exactly order $n$: $$ m_n(x) := (x_1^n, x_1^{n-1}x_2,\dots, x_d^n) $$ then we have $$ m_n(\lambda v) = \lambda^n m_n(v). \tag{1} $$ For vectors $v_1,\dots, v_k$ consider the matrix of $n$-th order monomials $$ M_v := \begin{pmatrix} | & & | \\ m_n(v_1)& \cdots & m_n(v_k) \\ | & & | \end{pmatrix}. $$ If we can find a vector $q$ of length $N = {n+d-1 \choose n}$ (the number of order-$n$ monomials) with $$ q^T M_v = 0, $$ then for the polynomial $p(x) = q^T m_n(x)$ we have $p(v_i)=0$ for all $i$.

Finally, observe that knowledge of the polynomials $p_{v_1},\dots,p_{v_k}$ is equivalent to knowledge of the values $p(v_1),\dots, p(v_k)$, since by $(1)$ $$ p_{v_i}(\lambda)= \lambda^n p(v_i). $$ The slices therefore only carry the information of $k$ point evaluations; in particular, $p(v_i) = 0$ for all $i$ implies $p_{v_i}\equiv 0$.

As the matrix $M_v$ is of shape $N\times k$, we can always find such a $q$ unless $k\ge N$.
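A numeric sketch of this lower bound (the helper `m_n` and all names are illustrative): with $k = N-1$ random directions, an exact left null vector $q$ of $M_v$ exists, and the homogeneous polynomial $q^T m_n$ then vanishes at every $v_i$.

```python
# With k < N = C(n+d-1, n) directions, find q with q^T M_v = 0 via SVD
# and check that q^T m_n(v_i) = 0 for every chosen direction v_i.
import numpy as np
from itertools import combinations_with_replacement
from math import comb

def m_n(x, n):
    """Vector of all monomials of x with exact total degree n."""
    return np.array([np.prod([x[i] for i in idx])
                     for idx in combinations_with_replacement(range(len(x)), n)])

rng = np.random.default_rng(0)
d, n = 3, 2
N = comb(n + d - 1, n)                          # 6 degree-2 monomials in 3 vars
k = N - 1                                       # one direction too few
vs = rng.standard_normal((k, d))
M_v = np.column_stack([m_n(v, n) for v in vs])  # shape (N, k)

# q spans the left null space of M_v, i.e. the null space of M_v^T;
# it exists because rank(M_v^T) <= k < N.
q = np.linalg.svd(M_v.T)[2][-1]
residual = np.max(np.abs(q @ M_v))              # p(v_i) = q^T m_n(v_i) for each i
print(residual)                                 # numerically zero
```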

There exist $N={n + d - 1 \choose n}$ vectors $v_1,\dots,v_N$ such that knowledge of the $p_{v_i}$ is sufficient

Prop 1: We can choose $v_1,\dots, v_N$ such that $m_n(v_1),\dots, m_n(v_N)$ are linearly independent.

With this proposition we have:

Theorem: Let $p$ be a polynomial of degree at most $n$, let $v_1,\dots, v_N$ be selected such that $m_n(v_1),\dots, m_n(v_N)$ span the space, and assume $p_{v_i} \equiv 0$ for all $i$. Then we have $p\equiv 0$.

Proof: The polynomial can be written in the form $$ p(x) = \sum_{k=0}^n p^{(k)}(x), $$ where the $p^{(k)}$ are polynomials that only consist of monomials of degree $k$. For all $k<n$ we have $$ \lim_{\lambda\to\infty} \frac{p^{(k)}(\lambda v_i)}{\lambda^n} = 0 $$ and thus $$ 0 = \lim_{\lambda\to\infty} \frac{p_{v_i}(\lambda)}{\lambda^n} = p^{(n)}(v_i). $$ Since $p^{(n)}(x) = q^T m_n(x)$ for some $q$, this yields $q^T M_v = 0$; as $M_v$ has full rank by assumption, we get $q=0$ and thus $p^{(n)}=0$. So the original polynomial $p$ is in fact of degree at most $n-1$.
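The limit step can be checked numerically on a concrete polynomial (a sketch with illustrative values, not from the source): for large $\lambda$, $p_{v}(\lambda)/\lambda^n$ approaches the top-degree part evaluated at $v$.

```python
# For p of degree n = 3, p(lam*v)/lam^n approaches the top-degree
# part p^(3)(v) as lam grows, since lower-order terms are damped.
def p(x1, x2):
    return x1**2 * x2 + 3 * x1 * x2 + 5   # top-degree part: x1^2 * x2

v = (1.0, 2.0)
n = 3
lam = 1e6
approx = p(lam * v[0], lam * v[1]) / lam**n
top = v[0]**2 * v[1]                       # p^(3)(v) = 2
print(abs(approx - top) < 1e-4)            # True
```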

With the following lemma, we can finish the proof by induction.

Lemma 1: If $m_n(v_1),\dots, m_n(v_N)$ span the space, then $m_k(v_1),\dots, m_k(v_N)$ also span their space for all $k\le n$.

Proof: Without loss of generality let $k = n-1$. Observe that $$ x_1 \cdot m_k(x) = x_1 \cdot (x_1^k, x_1^{k-1} x_2,\dots, x_d^k) = (x_1^n, x_1^{n-1} x_2, \dots, x_1 x_d^{n-1}) $$ is a specific subset of the entries of $m_n(x)$. For $M_v$ as defined above, this corresponds to a specific selection of rows. Since $M_v$ is invertible by assumption, any selection of its rows is linearly independent, so the columns of the selected submatrix span. These columns are $(v_i)_1\, m_k(v_i)$, i.e. scalar multiples of the $m_k(v_i)$, so the $m_k(v_i)$ span their space as well.
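A quick numeric sketch of Lemma 1 with random vectors (helper names illustrative): when the degree-$n$ monomial matrix has full rank, the degree-$(n-1)$ monomial matrix of the same vectors has full (row) rank too.

```python
# If the m_n(v_i) span their space, the m_{n-1}(v_i) span their
# (smaller) space as well -- checked here on a random sample.
import numpy as np
from itertools import combinations_with_replacement
from math import comb

def m_n(x, n):
    return np.array([np.prod([x[i] for i in idx])
                     for idx in combinations_with_replacement(range(len(x)), n)])

rng = np.random.default_rng(1)
d, n = 3, 3
N = comb(n + d - 1, n)                                # 10 degree-3 monomials
vs = rng.standard_normal((N, d))
M_top = np.column_stack([m_n(v, n) for v in vs])      # 10 x 10
M_low = np.column_stack([m_n(v, n - 1) for v in vs])  # 6 x 10
print(np.linalg.matrix_rank(M_top), np.linalg.matrix_rank(M_low))
```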

Proof of Prop 1.

We need to show that there exist $v_1,\dots, v_N$ such that $m_n(v_1),\dots,m_n(v_N)$ are linearly independent. For this we consider the bijective map between the $d$ digits of a base-$(n+1)$ unsigned integer encoding and its numeric value, i.e. $$ \phi: \begin{cases} \{0,\dots,n\}^d &\to \{0,\dots, (n+1)^d-1\} \\ x &\mapsto \sum_{k=0}^{d-1} x_{k+1} (n+1)^k. \end{cases} $$ Observe that a tuple $(k_1,\dots,k_d) \in \{0,\dots,n\}^d$ encodes a monomial $x_1^{k_1}\cdots x_d^{k_d}$. We now assume that the monomials in $m_n(x)$ are ordered in such a way that $\phi$ maps their power tuples to an increasing sequence of integers. Since we only want monomials of order exactly $n$, we leave out all tuples which do not satisfy $k_1+\dots+k_d=n$.

With this setup, consider the vectors $$ v_i := (a_i, a_i^{n+1}, a_i^{(n+1)^2},\dots, a_i^{(n+1)^{d-1}}) $$ for $a_1,\dots ,a_N\in \real$ with $a_i \neq a_j$ for $i\neq j$. Then we have

$$ M_v = \begin{pmatrix} | & & | \\ m_n(v_1)& \cdots & m_n(v_N) \\ | & & | \end{pmatrix} = \begin{pmatrix} a_1^{\lambda_1} & \dots & a_N^{\lambda_1} \\ \vdots & & \vdots \\ a_1^{\lambda_N} & \dots & a_N^{\lambda_N} \end{pmatrix} $$ where $\lambda_1 < ... < \lambda_N$ and $\lambda_i \in \mathbb{N}$, as the $\lambda_i$ are the images of the monomial powers under $\phi$.

Assuming we select $a_i > 0$, the matrix $M_v$ is a generalized Vandermonde matrix as in https://math.stackexchange.com/a/4549001/445105 and is therefore invertible. This finishes the proof that there exist $v_1,\dots, v_N$ such that the $m_n(v_i)$ are linearly independent.
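The construction can be checked numerically for small $d, n$ (a sketch, all names illustrative): build $v_i = (a_i, a_i^{n+1}, \dots, a_i^{(n+1)^{d-1}})$ for distinct positive $a_i$ and verify that the monomial matrix is invertible.

```python
# Prop 1's construction for d = 2, n = 3: the columns become pure
# powers a_i^3, a_i^6, a_i^9, a_i^12, a generalized Vandermonde matrix.
import numpy as np
from itertools import combinations_with_replacement
from math import comb

def m_n(x, n):
    return np.array([np.prod([x[i] for i in idx])
                     for idx in combinations_with_replacement(range(len(x)), n)])

d, n = 2, 3
N = comb(n + d - 1, n)                           # 4 degree-3 monomials in 2 vars
a = np.linspace(0.5, 1.5, N)                     # distinct positive a_i
vs = [np.array([ai ** ((n + 1) ** j) for j in range(d)]) for ai in a]
M_v = np.column_stack([m_n(v, n) for v in vs])   # entries a_i^3, a_i^6, a_i^9, a_i^12

det = np.linalg.det(M_v)
print(abs(det) > 1e-12)                          # True: M_v is invertible
```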

Corollary: Almost all selections of $v_1,\dots,v_N$ work.

As $\det(M_v)$ is a multivariate polynomial in the entries of the $v_i$, it is an analytic function. But the zero set of an analytic function that is not identically zero has Lebesgue measure zero; thus if $v_1,\dots, v_N$ are sampled from a distribution that has a Lebesgue density, the sample will almost surely result in linearly independent $m_n(v_i)$.
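A sketch of this corollary (helper names illustrative): repeatedly sampling $v_1,\dots,v_N$ from a standard normal distribution, which has a Lebesgue density, should yield a full-rank monomial matrix every time.

```python
# Sample N standard-normal vectors and count how often the monomial
# matrix M_v has full rank N; almost surely this is every trial.
import numpy as np
from itertools import combinations_with_replacement
from math import comb

def m_n(x, n):
    return np.array([np.prod([x[i] for i in idx])
                     for idx in combinations_with_replacement(range(len(x)), n)])

rng = np.random.default_rng(42)
d, n = 3, 2
N = comb(n + d - 1, n)                           # 6
full_rank = sum(
    np.linalg.matrix_rank(
        np.column_stack([m_n(v, n) for v in rng.standard_normal((N, d))])
    ) == N
    for _ in range(20)
)
print(full_rank)  # almost surely all 20 trials
```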

  • There is only one point which I do not quite get; how do you know you can choose $v_1,\dots,v_N$ such that $m_n(v_1),\dots,m_n(v_N)$ spans the space? This is probably not hard to prove; maybe you could even choose $v_1,\dots,v_N$ to be random vectors and have it work. – Mike Earnest Oct 14 '24 at 18:16
  • @MikeEarnest managed to prove it now see Prop 1. – Felix Benning Nov 01 '24 at 15:40