So if you want a more entry-level answer rather than one coming from an experienced mathematician, a different intuition for integration comes from viewing the continuous calculus as an extension of a discrete calculus.
Consider infinite sequences $A_\bullet, B_\bullet$. We can define term-by-term operations on them, like multiplying by a scalar, sums, and products,
$$
\begin{align}
C = k A &~~\leftrightarrow~~ C_i = k A_i,\\
D = A + B &~~\leftrightarrow~~ D_i = A_i + B_i,\\
E = A \cdot B &~~\leftrightarrow~~ E_i = A_i B_i.
\end{align}
$$
We can also define shifting them left or right (with an implicit zero added in front when shifting right),
$$
\begin{align}
({\downarrow}A)_i &= A_{i+1}, \\
({\uparrow}A)_i &= \begin{cases} 0, & \text{ if } i = 0,\\ A_{i-1} & \text{otherwise.}\end{cases}
\end{align}
$$
Control question: is ${\downarrow}{\uparrow} A = A$? What about ${\uparrow}{\downarrow}A = A$?
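If you'd rather check than prove, here is a minimal sketch in Python (entirely my own illustration; the infinite sequences get truncated to finite lists, so watch the edge effects):

```python
# A minimal sketch, truncating infinite sequences to finite Python lists.
# All names are my own; zip() silently truncates to the shorter operand.

def scale(k, A):        # (kA)_i = k * A_i
    return [k * a for a in A]

def add(A, B):          # (A+B)_i = A_i + B_i
    return [a + b for a, b in zip(A, B)]

def mul(A, B):          # (A.B)_i = A_i * B_i
    return [a * b for a, b in zip(A, B)]

def down(A):            # (down A)_i = A_{i+1}: drop the head
    return A[1:]

def up(A):              # (up A)_i = A_{i-1}, with the implicit 0 in front
    return [0] + A

A = [3, 1, 4, 1, 5]
print(down(up(A)) == A)   # True:  the 0 we pushed on is popped right off
print(up(down(A)) == A)   # False: A_0 is lost and replaced by 0
```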
Now this allows us to define the term-by-term difference that losslessly encapsulates the original sequence,
$$
\begin{align}
\Delta A &= A - {\uparrow} A,\\
(\Delta A)_i &= \begin{cases} A_0, & \text{ if } i = 0,\\ A_i - A_{i-1}, & \text{otherwise.}\end{cases}
\end{align}
$$
If you think about how to undo this, you would do it incrementally: start with $A_0 = (\Delta A)_0,$ then form $A_1 = A_0 + (\Delta A)_1$, then form $A_2 = A_1 + (\Delta A)_2,$ and so on. So this invites the inverse operator,
$$
(\Sigma A)_i = A_0 + \dots + A_i. $$
And again these are inverses, $\Sigma \Delta X = \Delta \Sigma X = X.$ (This is nearly obvious for $\Delta\Sigma X$ from the definition above; the $\Sigma \Delta X$ case again requires induction or some hand-waving about "telescoping series.")
These operators also commute with $\uparrow$: $\Sigma{\uparrow}A = {\uparrow}\Sigma A$ and $\Delta{\uparrow}A = {\uparrow}\Delta A.$ But you can see that if you try $\Sigma{\downarrow}A$, you have to remove $A_0$ from the whole series; here it helps to define the multiplicative identity $I = (1, 1, 1, \dots)$, so that $\Sigma{\downarrow}A = {\downarrow}\Sigma A - A_0 I.$ It is also sometimes helpful to have an indicator term $\iota(n) = {\uparrow}^n \Delta I$, which has a 1 right at the $n^\text{th}$ position and 0 everywhere else.
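All of the identities so far are one-liners to verify numerically; a sketch, with the shifts restated so the block runs on its own:

```python
from itertools import accumulate

def up(A):      # (up A)_i = A_{i-1}, implicit 0 in front
    return [0] + A

def down(A):    # (down A)_i = A_{i+1}
    return A[1:]

def delta(A):   # (ΔA)_0 = A_0, (ΔA)_i = A_i - A_{i-1}
    return [a - ua for a, ua in zip(A, up(A))]

def sigma(A):   # (ΣA)_i = A_0 + ... + A_i, the running sum
    return list(accumulate(A))

A = [3, 1, 4, 1, 5]
print(sigma(delta(A)) == A == delta(sigma(A)))  # True: ΣΔA = ΔΣA = A

# The shift identity Σ(down A) = down(ΣA) - A_0·I, checked termwise:
print(sigma(down(A)) == [s - A[0] for s in down(sigma(A))])  # True
```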
And at this point one would do a bunch of examples, like showing that if $N = (0, 1, 2, \dots)$ then $\Delta N = {\uparrow}I$, or that $$
\Delta(A \cdot B) = A \cdot \Delta B + \Delta A \cdot B - \Delta A \cdot \Delta B,\\
\Delta(N \cdot N) = 2 N - {\uparrow}I,
$$
Applying our inverse $\Sigma$ to both sides of the second line gives $N \cdot N = 2\,\Sigma N - \Sigma{\uparrow}I = 2\,\Sigma N - N$: the sum of the first $N$ odd numbers is $N^2$, or if you prefer, $\Sigma N = \frac12 N \cdot (N + I).$
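These examples are again easy to check numerically; a standalone sketch truncated to ten terms (here I use a fixed-length version of ${\uparrow}$ so all the lists stay the same size):

```python
from itertools import accumulate

n_terms = 10
N = list(range(n_terms))                 # N = (0, 1, 2, ...)
I = [1] * n_terms                        # I = (1, 1, 1, ...)

up    = lambda A: [0] + A[:-1]           # fixed-length truncation of the shift
delta = lambda A: [a - u for a, u in zip(A, up(A))]
sigma = lambda A: list(accumulate(A))

print(delta(N) == up(I))                          # True: ΔN = ↑I
odds = [2 * n - u for n, u in zip(N, up(I))]      # 2N - ↑I = (0, 1, 3, 5, ...)
print(sigma(odds) == [n * n for n in N])          # True: Σ(2N - ↑I) = N·N
print(sigma(N) == [n * (n + 1) // 2 for n in N])  # True: ΣN = N(N+I)/2
```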
Well, that's a discrete version of calculus. When we make it continuous, the idea is that for some small $\epsilon$ we form $X = \epsilon N$ and $f(X) = (f(0), f(\epsilon), f(2\epsilon), \dots)$, sampling the function from 0 to infinity. For continuous functions this has the effect of making the sequence vary extremely slowly, $f_{i+1} \approx f_i,$ which means ${\downarrow} A \approx {\uparrow} A \approx A$ while $\Delta A \approx 0$ and $\Sigma A \approx A_0 N$, at least to start. Those aren't rigorous, but they get your brain in the mood for the steps that we would then take:
- We don't use just $\Delta$ but $\mathrm d = \epsilon^{-1} \Delta$ for our differences.
- Dividing our Leibniz rule by $\epsilon$, we find $\mathrm d (A \cdot B) = A \cdot \mathrm d B + \mathrm d A \cdot B - \epsilon~\mathrm d A \cdot \mathrm d B$, and for continuous functions that $\epsilon$ prefactor nukes the third term... this also implies that $\mathrm d (X^n) = n X^{n-1} + \epsilon E$ for some error term $E$.
- The difference $\mathrm d f(X)$ blows up to infinity (it is of order $\epsilon^{-1}$) when $f$ is not continuous at some argument $x$; consider defining $\delta(x) = \epsilon^{-1} ~\iota(\lceil x/\epsilon\rceil)$ as the Dirac delta sequence, to have an explicit representation of these terms in the continuous version of the discrete calculus. This also handily sidesteps the fact that the delta is not a function, without needing to introduce to kids the notion of a functional.
- The inverse of $\mathrm d$ is now $\int = \epsilon ~\Sigma$, which gives you Riemann sums directly.
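To make this dictionary concrete, here is a sketch of the continuized operators acting on a sampled function; taking $f = \sin$ on $[0, 1]$ is purely my own choice of example:

```python
import math
from itertools import accumulate

eps = 1e-4
X = [eps * n for n in range(round(1 / eps) + 1)]  # X = εN, sampled on [0, 1]
F = [math.sin(x) for x in X]                      # f(X) for f = sin

d   = lambda A: [(a - u) / eps for a, u in zip(A, [0] + A[:-1])]  # d = Δ/ε
itg = lambda A: [eps * s for s in accumulate(A)]                  # ∫ = εΣ

# ∫df recovers f exactly: the ε's cancel and the sum telescopes,
print(max(abs(a - b) for a, b in zip(itg(d(F)), F)))   # tiny, rounding only
# and ∫cos over [0, 1] is literally a Riemann sum for sin(1):
print(itg([math.cos(x) for x in X])[-1], math.sin(1))  # both ≈ 0.8415, off by O(ε)
```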
And then, as an example of how the same thinking is mirrored here: if you want to calculate the volume of a square pyramid, and you think of an actual pyramid in Egypt, you would say in the discrete calculus, "well, the pyramid is made of layers, each layer is made of a certain number of bricks of volume $\epsilon$, and the number of bricks per layer scales like $k N^2$, so we clearly want to do some sort of $\Sigma N^2$ operation." Whereas in the continuous calculus you can fudge out the error term to simultaneously get something easier and more precise: "the slope of the pyramid is some $\tan\theta$, the pyramid has height $h$ and base area $A$, so the volume of one layer of thickness $\epsilon$ at distance $y$ down from the tip of the pyramid is $\epsilon ~A~(y/h)^2.$ Applying $\Sigma$, hey, we get an $\int$, so this is $$V = \frac{A}{h^2} \left[ \int X^2 \right ]_{\lceil h/\epsilon\rceil} = \frac{1}{3} ~A~ h + \epsilon E,$$ for some error term $E$."
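And the two answers agree numerically, which makes a nice sanity check ($h$, $A$, and $\epsilon$ below are arbitrary choices):

```python
# Summing the pyramid layers directly: each slice of thickness eps at depth
# y = n*eps below the tip contributes eps * A * (y/h)**2.
h, A, eps = 10.0, 4.0, 1e-4
V = sum(eps * A * (n * eps / h) ** 2 for n in range(round(h / eps) + 1))
print(V, A * h / 3)   # 13.3335... vs 13.3333...: equal up to an O(eps) error
```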
Viewed this way the basic idea of integration is just "Cut something up into little slices that I can sum together, and I have helpful rules for how to do these sums really quickly." The area under the curve just happens to be easy to slice vertically into little rectangles of height $f(X)$ and width $\epsilon$.