You can interpret the inverse of an operation as solving an equation (as in: the inverse of $x \mapsto x^2$ is solving $x^2 = a$), and apply this to differentiation of functions in $\mathbb{R}^n \to \mathbb{R}$ (or to broader generalizations, e.g. $\mathbb{R}^n \to \mathbb{R}^m$, at the cost of having to deal with more details).
Inverse of partial differentiation
The inverse of partial differentiation $f \mapsto \frac {\partial f} {\partial x_i}$ of functions in $\mathbb{R}^n \to \mathbb{R}$ is solving the equation
$$\frac {\partial f} {\partial x_i} = f_{x_i} \tag 1$$
for $f$, given $f_{x_i}: \mathbb{R}^n \to \mathbb{R}$. Despite the partial-derivative notation on the LHS, you can treat (1) essentially as a first-order ordinary differential equation: each $x_j$ with $j \ne i$ is held constant, so, for the purpose of solving the equation, $f_{x_i}$ and $f$ can be treated as univariate functions of $x_i$, parameterized by the remaining variables.
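Incidentally, this parameterized-univariate view is also how one can coax a solution of (1) out of a computer algebra system. A minimal sympy sketch, where the right-hand side $2xy$ is a made-up example:

```python
import sympy as sp

x, y = sp.symbols('x y')

# Hypothetical given RHS of (1), with n = 2 and i = 1: f_{x_1}(x, y) = 2xy.
f_x1 = 2*x*y

# integrate(..., x) holds y constant, i.e. it solves (1) as a
# parameterized univariate (ordinary) integration problem.
f = sp.integrate(f_x1, x)
print(f)  # x**2*y, one particular solution of (1)

# Sanity check: differentiating back recovers the given f_{x_1}.
assert sp.diff(f, x) - f_x1 == 0
```

Note that sympy returns a single particular solution and silently drops the "constant" of integration, which, as discussed next, is here not a plain constant at all.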
Since solutions to the univariate case, if any exist, are casually denoted (any or all of them) by the indefinite integral $\int f(x) \, dx$, one may be tempted to (re-/ab-)use the same notation for solutions of (1) as well and write $\int f(\mathbf x) \, dx_i$, though any such use out of context is questionable. The antiderivatives of a univariate function differ by a plain constant, which happens to be mostly irrelevant wherever they are used, so keeping them all together under the indefinite integral is convenient and mostly harmless. In contrast, solutions to (1) will in general differ by
$$g(\mathbf x) = h(x_1, \dots, x_{i - 1}, x_{i + 1}, \dots, x_n)$$
where $h$ is an arbitrary function in $\mathbb{R}^{n - 1} \to \mathbb{R}$.
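To make the ambiguity concrete, continuing the made-up example above with $n = 2$: for $f_{x_1}(x, y) = 2xy$, both $x^2 y$ and $x^2 y + \sin y$ solve (1), and their difference $\sin y = h(y)$ is exactly of the above form; no single additive constant can account for it.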
Inverse of "total differentiation"
If total differentiation is defined as the mapping of a function to its total derivative with respect to $x_i$, i.e.
$$f: \mathbb{R}^n \to \mathbb{R} \quad \mapsto \quad \frac {df} {dx_i} = \sum_j {\frac {\partial f} {\partial x_j} \frac {dx_j} {dx_i}}$$
then its inverse is solving the equation
$$\sum_j {\frac {\partial f} {\partial x_j} \frac {dx_j} {dx_i}} = f_{x_i} \tag 2$$
for $f$, given $f_{x_i}: \mathbb{R}^n \to \mathbb{R}$ and each $\frac {dx_j} {dx_i} : \mathbb R \to \mathbb R$ for $j \ne i$. Note that when $\frac {dx_j} {dx_i} = 0$ for all $j \ne i$, which can be seen as a (rather ambiguous) way of stating that the $x_j$ are independent of $x_i$, the sum collapses to its $j = i$ term, and since $\frac {dx_i} {dx_i} = 1$, (2) reduces to (1).
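For a taste of the simplest nontrivial case (standard, with $n = 2$, writing $x = x_1$, $y = x_2$): choosing $\frac {dy} {dx} = 1$ and the given right-hand side $f_{x_1} \equiv 0$ turns (2) into
$$\frac {\partial f} {\partial x} + \frac {\partial f} {\partial y} = 0$$
whose solutions are exactly $f(x, y) = F(x - y)$ for an arbitrary differentiable $F: \mathbb R \to \mathbb R$; the "constant of integration" is now an entire arbitrary function.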
Equation (2) is a first-order partial differential equation, and there is no standard notation for its solutions, possibly owing to its solution space being even less orderly than that of (1), and thus even less likely to be of any use when considered as a whole, under a common denominator.¹
A family of implicit solutions of (2) of a certain form is commonly² referred to as its complete integral, for which there is no standard notation either.
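For a concrete, textbook instance, continuing the transport-equation example above: the two-parameter family
$$f(x, y) = a(x - y) + b, \qquad a, b \in \mathbb R$$
is a complete integral of $\frac {\partial f} {\partial x} + \frac {\partial f} {\partial y} = 0$: a solution family with as many parameters as independent variables, from which further solutions can be built via envelope constructions.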
¹ Another plausible explanation may be that PDEs at large are considered a first-class royal zoo, poorly understood except in a handful of special cases, and usually avoided at all costs.
² To the extent to which *commonly* applies in PDE-related contexts.