The number of vectors $v$ must be at least ${n + d - 1 \choose n}$
$\newcommand{\real}{\mathbb{R}}$
Let $m_n$ be the map sending $x$ to the monomials of $x$ of order exactly $n$:
$$
m_n(x) := (x_1^n, x_1^{n-1}x_2,\dots, x_d^n)
$$
then we have
$$
m_n(\lambda v) = \lambda^n m_n(v). \tag{1}
$$
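As a quick numerical illustration (a minimal sketch, not part of the argument; it assumes numpy and fixes one particular ordering of the monomials, with helper names chosen only for this note), we can enumerate the order-$n$ monomials and check the homogeneity identity $(1)$:

```python
import itertools
import numpy as np

def monomial_exponents(n, d):
    """All exponent tuples (k_1, ..., k_d) with k_1 + ... + k_d = n."""
    return [k for k in itertools.product(range(n + 1), repeat=d) if sum(k) == n]

def m_n(x, n):
    """Evaluate every order-n monomial of x, in the order fixed above."""
    d = len(x)
    return np.array([np.prod(np.asarray(x, dtype=float) ** np.array(k))
                     for k in monomial_exponents(n, d)])

n, d = 3, 2
v = np.array([1.5, -0.7])
lam = 2.3
# identity (1): m_n(lambda * v) = lambda**n * m_n(v)
assert np.allclose(m_n(lam * v, n), lam ** n * m_n(v, n))
# the number of order-n monomials is C(n + d - 1, n) = C(4, 3) = 4 here
assert len(m_n(v, n)) == 4
```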
For vectors $v_1,\dots, v_k$ consider the matrix of $n$-th order monomials
$$
M_v := \begin{pmatrix}
| & & |
\\
m_n(v_1)& \cdots & m_n(v_k)
\\
| & & |
\end{pmatrix}
$$
If we can find a nonzero vector $q$ of length $N = {n+d-1 \choose n}$ (the number of order-$n$ monomials) with
$$
q^T M_v = 0,
$$
then for the polynomial $p(x) = q^T m_n(x)$ we have $p(v_i)=0$ for all $i$.
Finally, observe that knowing the polynomials $p_{v_1},\dots,p_{v_k}$ is equivalent to knowing the values $p(v_1),\dots, p(v_k)$, since by $(1)$
$$
p_{v_i}(\lambda)= \lambda^n p(v_i).
$$
Knowledge of the $p_{v_i}$ therefore only provides the information of $k$ point evaluations; in other words, we have $p_{v_i}\equiv 0$ for all $i$ even though $p\not\equiv 0$, so the $p_{v_i}$ cannot distinguish $p$ from the zero polynomial.
As the matrix $M_v$ is of shape $N\times k$, we can always find such a $q$ unless $k\ge N$.
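The count argument can be played through numerically; the sketch below (assuming numpy and scipy, and reusing the `monomial_exponents` / `m_n` helpers from the snippet above) takes $k = N-1$ random vectors, extracts a nonzero $q$ from the left null space of $M_v$, and checks that the resulting $p$ vanishes at every $v_i$:

```python
import numpy as np
from scipy.linalg import null_space  # assumes scipy is available

n, d = 3, 3
N = len(monomial_exponents(n, d))          # N = C(n + d - 1, n) = 10 here
k = N - 1                                  # strictly fewer vectors than N
rng = np.random.default_rng(0)
vs = rng.normal(size=(k, d))

M_v = np.column_stack([m_n(v, n) for v in vs])   # shape N x k
q = null_space(M_v.T)[:, 0]                      # nonzero q with q^T M_v = 0

p = lambda x: q @ m_n(x, n)                      # p(x) = q^T m_n(x), not the zero polynomial
print(max(abs(p(v)) for v in vs))                # numerically zero: p vanishes at every v_i
```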
There exist $N={n + d - 1 \choose n}$ vectors $v_1,\dots,v_N$ such that knowledge of $p_{v_i}$ is sufficient
Prop 1: We can choose $v_1,\dots, v_N$ such that $m_n(v_1),\dots, m_n(v_N)$ are linearly independent.
With this proposition we have:
Theorem: Let $p$ be a polynomial of degree at most $n$, let $v_1,\dots, v_N$ be chosen such that $m_n(v_1),\dots, m_n(v_N)$ span $\real^N$, and assume $p_{v_i} \equiv 0$ for all $i$. Then $p\equiv 0$.
Proof: The polynomial can be written in the form
$$
p(x) = \sum_{k=0}^n p^{(k)}(x),
$$
where each $p^{(k)}$ is a polynomial consisting only of monomials of degree $k$. For all $k<n$ we have
$$
\lim_{\lambda\to\infty} \frac{p^{(k)}(\lambda v_i)}{\lambda^n} = 0
$$
and thus, since $p_{v_i}\equiv 0$ and $p_{v_i}(\lambda)=\sum_{k=0}^n \lambda^k p^{(k)}(v_i)$,
$$
0 = \lim_{\lambda\to\infty} \frac{p_{v_i}(\lambda)}{\lambda^n} = p^{(n)}(v_i)
$$
Since $p^{(n)}(x) = q^T m_n(x)$ for some $q$, the equation above says $q^T M_v = 0$. As $M_v$ has full rank by assumption, we get $q=0$ and thus $p^{(n)}\equiv 0$. Hence the original polynomial $p$ has degree at most $n-1$.
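The limit step can be checked symbolically. The sketch below (assuming sympy, with a made-up example polynomial) illustrates that $\lim_{\lambda\to\infty} p(\lambda v)/\lambda^n$ recovers the top-degree part $p^{(n)}(v)$:

```python
import sympy as sp

x1, x2, lam = sp.symbols("x1 x2 lam")
n = 2
p = 3 * x1**2 - x1 * x2 + 5 * x2 - 7      # a degree-2 polynomial with lower-order terms
p_top = 3 * x1**2 - x1 * x2               # its homogeneous degree-2 part p^(2)

v = (2, -1)                               # an arbitrary direction v_i
p_v = p.subs({x1: lam * v[0], x2: lam * v[1]})    # p_{v_i}(lambda) = p(lambda * v_i)

limit = sp.limit(p_v / lam**n, lam, sp.oo)
assert limit == p_top.subs({x1: v[0], x2: v[1]})  # the limit equals p^(n)(v_i)
```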
With the following lemma, we can finish the proof by induction.
Lemma 1: If $m_n(v_1),\dots, m_n(v_N)$ span $\real^N$, then $m_k(v_1),\dots, m_k(v_N)$ also span their target space $\real^{\binom{k+d-1}{k}}$ for all $k\le n$.
Proof: Without loss of generality, let $k = n-1$.
Observe that
$$
x_1 \cdot m_k(x)
= x_1 \cdot (x_1^k, x_1^{k-1} x_2,\dots, x_d^k)
= (x_1^n, x_1^{n-1} x_2, \dots, x_1 x_d^{n-1})
$$
consists of a subset of the entries of $m_n(x)$, namely the monomials divisible by $x_1$. For $M_v$ as defined above, the matrix with columns $(v_i)_1\, m_{n-1}(v_i)$ is therefore a selection of rows of $M_v$. Selecting rows of a matrix whose columns span the full space yields a matrix whose columns again span, and since each $(v_i)_1\, m_{n-1}(v_i)$ lies in the span of $m_{n-1}(v_i)$, the vectors $m_{n-1}(v_1),\dots, m_{n-1}(v_N)$ span as well.
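A small numerical sanity check of this row-selection argument (a sketch reusing the `monomial_exponents` / `m_n` helpers from above; the random vectors stand in for a spanning choice):

```python
import numpy as np

n, d = 3, 2
exps_n = monomial_exponents(n, d)
rows_div_x1 = [i for i, k in enumerate(exps_n) if k[0] >= 1]   # monomials divisible by x_1

N = len(exps_n)
rng = np.random.default_rng(1)
vs = rng.normal(size=(N, d))                                   # generic choice: M_v has full rank
M_v = np.column_stack([m_n(v, n) for v in vs])                 # N x N

sub = M_v[rows_div_x1, :]                                      # the selected rows
scaled = np.column_stack([v[0] * m_n(v, n - 1) for v in vs])   # columns (v_i)_1 * m_{n-1}(v_i)
assert np.allclose(sub, scaled)                                # same matrix
assert np.linalg.matrix_rank(sub) == len(rows_div_x1)          # its columns still span
```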
Proof of Prop 1.
We need to show that there exist $v_1,\dots, v_N$ such that
$m_n(v_1),\dots,m_n(v_N)$ are linearly independent. For this
we consider the bijection between $d$-digit base-$(n+1)$ representations and their numeric values, i.e.
$$
\phi: \begin{cases}
\{0,\dots,n\}^d &\to \{0,\dots, (n+1)^d-1\}
\\
x &\mapsto \sum_{k=0}^{d-1} x_{k+1} (n+1)^k
\end{cases}
$$
Observe that a tuple $(k_1,...,k_d) \in \{0,\dots,n\}^d$ can encode
a monomial $x_1^{k_1}\cdot \ldots \cdot x_d^{k_d}$.
We now assume that the monomials in $m_n(x)$ are ordered in such a way that $\phi$ maps their exponent tuples to an increasing sequence of integers. Since we only want monomials of order exactly $n$, we leave out all tuples which do not satisfy $k_1+\dots+k_d=n$.
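A short sketch of this setup (reusing `monomial_exponents` from above; `phi` mirrors the definition of $\phi$): sorting the degree-$n$ exponent tuples by $\phi$ yields strictly increasing, pairwise distinct integers.

```python
n, d = 2, 3

def phi(k, n):
    """Read the exponent tuple k as the digits of a base-(n+1) integer."""
    return sum(k_j * (n + 1) ** j for j, k_j in enumerate(k))

exps = sorted(monomial_exponents(n, d), key=lambda k: phi(k, n))
lambdas = [phi(k, n) for k in exps]
print(lambdas)                           # [2, 4, 6, 10, 12, 18]
assert lambdas == sorted(set(lambdas))   # strictly increasing, pairwise distinct
```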
With this setup, consider the vectors
$$
v_i := (a_i, a_i^{n+1}, a_i^{(n+1)^2},\dots, a_i^{(n+1)^{d-1}})
$$
for $a_1,\dots ,a_N\in \real$ with $a_i \neq a_j$ for $i\neq j$. Then we have
$$
M_v = \begin{pmatrix}
| & & |
\\
m_n(v_1)& \cdots & m_n(v_N)
\\
| & & |
\end{pmatrix}
= \begin{pmatrix}
a_1^{\lambda_1} & \dots & a_N^{\lambda_1}
\\
\vdots & & \vdots
\\
a_1^{\lambda_N} & \dots & a_N^{\lambda_N}
\end{pmatrix}
$$
where $\lambda_1 < \dots < \lambda_N$, $\lambda_i \in \mathbb{N}$, are the images of the exponent tuples of the monomials under $\phi$.
If we additionally select $a_i >0$, the matrix $M_v$ is a generalized Vandermonde matrix as in https://math.stackexchange.com/a/4549001/445105
and is therefore invertible. This finishes the proof that there exist $v_1,\dots, v_N$ such that the $m_n(v_i)$ are linearly independent.
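For small $n$ and $d$ the construction can be verified with exact arithmetic; the sketch below (assuming sympy, and reusing `monomial_exponents` and `phi` from above) builds the $v_i$ from distinct positive rationals $a_i$ and checks that $M_v$ is invertible:

```python
import sympy as sp

n, d = 2, 3
exps = sorted(monomial_exponents(n, d), key=lambda k: phi(k, n))
N = len(exps)                                        # N = C(n + d - 1, n) = 6 here

a = [sp.Rational(i + 1) for i in range(N)]           # distinct positive a_1, ..., a_N
vs = [[a_i ** ((n + 1) ** j) for j in range(d)]      # v_i = (a_i, a_i^{n+1}, a_i^{(n+1)^2})
      for a_i in a]

def m_n_exact(v, exps):
    """Exact evaluation of the order-n monomials of v, in the phi-sorted order."""
    return [sp.Mul(*[v_j ** k_j for v_j, k_j in zip(v, k)]) for k in exps]

M_v = sp.Matrix([m_n_exact(v, exps) for v in vs]).T  # columns are m_n(v_i)
assert M_v.det() != 0                                # the generalized Vandermonde is invertible
```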
Corollary: Almost all selections of $v_1,\dots,v_N$ work.
As $\det(M_v)$ is a multivariate polynomial in the entries of the $v_i$, it is an analytic function, and by Prop 1 it is not identically zero. But the zero set of an analytic function that is not identically zero has Lebesgue measure zero. Thus, if $v_1,\dots, v_N$ are sampled from a distribution with a Lebesgue density, the sample will almost surely result in linearly independent $m_n(v_i)$.
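The corollary can likewise be observed numerically (a sketch reusing the helpers from above): vectors drawn from a Gaussian, which has a Lebesgue density, give a full-rank $M_v$.

```python
import numpy as np

n, d = 3, 3
N = len(monomial_exponents(n, d))
rng = np.random.default_rng(42)
vs = rng.normal(size=(N, d))                      # i.i.d. Gaussian entries: Lebesgue density
M_v = np.column_stack([m_n(v, n) for v in vs])
print(np.linalg.matrix_rank(M_v) == N)            # True (almost surely)
```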