Let $R$ be a ring. A multivariate polynomial $p(x_1,\ldots,x_n)$ over $R$ is a finite sum of products of powers of the $x_i$, each multiplied by a coefficient in $R$.
More precisely, a multivariate polynomial $p(x_1,\ldots,x_n)\in R[x_1,\ldots,x_n]$ is a generalization of the one-variable version, namely a finite sum of terms of the form $c\,x_1^{d_1}\cdots x_n^{d_n}$, where $c\in R$ and each $d_i$ is a nonnegative integer. The degree of such a term is $\sum_i d_i$, and the degree of $p$ is the maximal degree among its nonzero terms. This notation is somewhat cumbersome, so multi-index notation is usually used for the general case.
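In multi-index notation, one abbreviates $x^\alpha = x_1^{\alpha_1}\cdots x_n^{\alpha_n}$ and $|\alpha| = \alpha_1+\cdots+\alpha_n$ for a tuple $\alpha=(\alpha_1,\ldots,\alpha_n)$ of nonnegative integers, so a general polynomial of degree $d$ can be written compactly as $$ p(x_1,\ldots,x_n) = \sum_{|\alpha|\le d} c_\alpha\, x^\alpha, \qquad c_\alpha \in R, $$ with $\deg p$ equal to the largest $|\alpha|$ for which $c_\alpha \neq 0$.
For concreteness, here is a minimal SymPy sketch (purely illustrative; any computer algebra system has an equivalent) of the total degree of a polynomial in three variables:

```python
from sympy import symbols, Poly

x, y, z = symbols("x y z")

# Each term is a coefficient times a monomial x^a * y^b * z^c;
# the (total) degree of the polynomial is the largest a + b + c.
p = Poly(3*x**2*y + x*y*z**2 - 7*z + 1, x, y, z)
print(p.total_degree())  # 4, coming from the term x*y*z**2
```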
Over $\mathbb{R}$ and $\mathbb{C}$, multivariate polynomials are analytic, as one would expect; over other fields, where limits are unavailable, they can still be differentiated formally.
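The formal partial derivative is defined term by term by the usual power rule, $$ \frac{\partial}{\partial x_i}\Bigl(c\,x_1^{d_1}\cdots x_i^{d_i}\cdots x_n^{d_n}\Bigr) = d_i\,c\,x_1^{d_1}\cdots x_i^{d_i-1}\cdots x_n^{d_n}, $$ extended additively; over $\mathbb{R}$ and $\mathbb{C}$ this agrees with the analytic derivative. In positive characteristic it can behave unexpectedly: over $\mathbb{F}_p$, for instance, the formal derivative of $x^p$ is $p\,x^{p-1}=0$.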
Many, but interestingly not all, results about single-variable polynomials generalize to the multivariate case. For instance, compare:
The Fundamental Theorem of Algebra: Every polynomial of degree $d\ge 1$ in $\mathbb{C}[x]$ has exactly $d$ roots, counted with multiplicity.
Bezout's Theorem (paraphrased): If $C,D$ are complex projective plane curves with no common components, and $i(C\cap D,p)$ denotes the intersection multiplicity of $C$ and $D$ at $p$, then $$ \sum_{p\in C\cap D} i(C\cap D,p)= \deg(C)\deg(D). $$ The two theorems are closely analogous, provided we extend our notion of where the roots live ($\mathbb{C}$ must be replaced by the complex projective plane) and of what they are (defining intersection multiplicity is slightly involved).
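As a small worked example of how the two statements line up, take the conic $C = \{yz = x^2\}$ (the projective closure of the parabola $y=x^2$) and the line $D = \{y=0\}$. Substituting $y=0$ into $yz=x^2$ forces $x^2=0$, so the curves meet only at $[0:0:1]$, with intersection multiplicity $2 = \deg(C)\deg(D)$. This is the projective counterpart of a tangent line meeting a parabola in a "double" point, just as a double root of a one-variable polynomial is counted twice.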
An interesting example where the analogous property of one-variable polynomials fails to generalize comes from the 1969 Putnam exam:
$\mathbb{R}^2$ represents the usual plane $(x, y)$ with $-\infty<x,y<\infty$. $p: \mathbb{R}^2 \to \mathbb{R}$ is a polynomial with real coefficients. What are the possibilities for the image $p(\mathbb{R}^2)$?
For $a\in\mathbb{R}$, the images possible in one variable, namely $\{a\}$, $\mathbb{R}$, $[a,\infty)$, and $(-\infty,a]$, all still occur. However, two-variable polynomials can also have images of the form $(a,\infty)$ or $(-\infty,a)$; that is, they can be bounded below (or above) without attaining a minimum (or maximum). For example, $p(x,y)=x^2+(1-xy)^2$ is strictly positive, but along the hyperbola $xy=1$ it equals $x^2$, which can be made arbitrarily small, so its image is $(0,\infty)$.
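As a quick numerical sanity check (a minimal NumPy sketch, not part of the original exam solution), one can watch the values shrink toward the unattained infimum:

```python
import numpy as np

def p(x, y):
    """Strictly positive polynomial whose infimum 0 is never attained."""
    return x**2 + (1 - x*y)**2

# Along the hyperbola x*y = 1 the second term vanishes, so p(t, 1/t) = t^2,
# which approaches 0 as t -> 0 ...
for t in [1.0, 0.1, 0.01, 0.001]:
    print(t, p(t, 1.0 / t))

# ... yet p is never exactly 0: that would require x = 0 and x*y = 1 at once.
xs, ys = np.meshgrid(np.linspace(-5, 5, 201), np.linspace(-5, 5, 201))
print(p(xs, ys).min() > 0)  # True on this sample grid
```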