My view is that it's best to focus on what the math itself is telling us. Series arise in mathematics when the value of a function at some point is computed using a method that yields a series, e.g. a Taylor expansion.
Whatever method you use to generate the series, the answer the math itself presents for the value of the function is not that you should add up an infinite number of terms of the series. In fact, addition is only defined for finitely many terms, so the math couldn't possibly tell you to add up infinitely many!
What the math tells you is to add up a finite number of terms of the series and then add a remainder term to that finite sum. This is true for both convergent and divergent series; the math itself doesn't distinguish between the two types. Theorems that allow you to bound the remainder term then yield approximations of the sum, but to evaluate the sum exactly, you need to be able to calculate the remainder term itself.
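As a concrete illustration of this "finite sum plus remainder" structure, here is a hedged sketch using the geometric series for $f(x) = 1/(1-x)$, for which the remainder term is known in closed form (the function, the choice $n = 4$, and the use of sympy are my own for illustration):

```python
import sympy as sp

x = sp.symbols('x')
n = 4  # number of terms kept; any finite n works

# Geometric expansion of f(x) = 1/(1-x):
# finite partial sum plus an exact remainder term.
f = 1 / (1 - x)
S = sum(x**k for k in range(n))   # finite partial sum
R = x**n / (1 - x)                # exact remainder term

# The finite sum plus the remainder reproduces f exactly, for any
# x != 1 -- convergence of the series plays no role in this identity.
print(sp.simplify(S + R - f))  # 0
```

The identity $S + R = f$ holds whether $|x| < 1$ (convergent case) or $|x| > 1$ (divergent case), which is the point being made above.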
When we're presented with a series without any specified function whose expansion yields it, we should assume that such a function exists and that it is maximally analytic, meaning that any nonanalytic behavior is dictated by the series itself. The value of the series is then the value of this function at the point corresponding to the value of the expansion parameter.
This is consistent with the standard definition of the value of a convergent series as the limit of its partial sums, because maximal analyticity implies that the remainder term must tend to zero for convergent series.
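For the geometric series this vanishing of the remainder can be checked directly; a minimal sketch, assuming the sample point $x = 1/2$ inside the disk of convergence:

```python
import sympy as sp

n = sp.symbols('n', positive=True)
xval = sp.Rational(1, 2)  # sample point inside the disk of convergence

# Geometric remainder R(n) = x^n/(1-x) evaluated at x = 1/2.
R = xval**n / (1 - xval)

# The remainder vanishes as n -> oo, so the limit of the partial
# sums recovers the function value f(1/2) = 2.
print(sp.limit(R, n, sp.oo))  # 0
```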
In the case of divergent series, one can argue as follows. If the partial sum of the first $n$ terms is denoted by $S(n)$ and the corresponding remainder term by $R(n)$, then the value of the infinite series is $S(n) + R(n)$ for any $n$. Since this is independent of $n$, the value of the series equals the constant term in $S(n)$ plus the constant term in $R(n)$. As I've explained here, the constant term in the expansion around infinity of the expression:
$$\int_{N-1}^N R(x) dx\tag{1}$$
must be zero. This fixes the constant term in $R(n)$. We then obtain the expression for the value of the infinite series as:
$$\operatorname*{con}_{N}\int_{N-1}^NS(x)dx\tag{2}$$
where $\displaystyle \operatorname*{con}_N$ denotes taking the constant term from the large $N$ expansion. I give a number of examples of calculations of the sums of divergent series using (2) in this other, more elaborate answer.
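Formula (2) can be carried out symbolically. A sketch for two hypothetical example series, $1 + 1 + 1 + \cdots$ with $S(n) = n$ and $1 + 2 + 3 + \cdots$ with $S(n) = n(n+1)/2$ (the series choices and the sympy machinery are mine, not from the argument above):

```python
import sympy as sp

N, x = sp.symbols('N x', positive=True)

def series_value(S):
    """Apply formula (2): integrate the partial sum S(x) over [N-1, N]
    and keep the constant term of the large-N expansion. The integrals
    here are polynomials in N, so that is just the N-free coefficient."""
    I = sp.expand(sp.integrate(S, (x, N - 1, N)))
    return I.coeff(N, 0)

# S(n) = n for 1 + 1 + 1 + ...
print(series_value(x))                # -1/2,  matching zeta(0)

# S(n) = n(n+1)/2 for 1 + 2 + 3 + ...
print(series_value(x * (x + 1) / 2))  # -1/12, matching zeta(-1)
```

Both values agree with the zeta-regularized sums $\zeta(0) = -1/2$ and $\zeta(-1) = -1/12$, as one would hope for a sensible summation prescription.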