3

While reading answers here, I often see the argument that a term in an integral does not go to zero fast enough, so the result is $\infty$.

Some examples: 1, 2, 3, ...

  • Is there an intuitive explanation for why the speed of going to zero matters?
  • Is there a boundary beyond which terms do tend to zero fast enough?

Thank you in advance.

PS: I am aware of this similar question, but it seems to deal with a more specific problem.

4 Answers

5

Consider the p-series $$\sum_{n=1}^\infty \frac{1}{n^p}.$$ One can prove that this series converges for $p > 1$ but diverges for $p \le 1$.

It is obvious that if the individual summands do not decrease to zero as $n \to \infty$ (e.g., the case $p=0$), then the partial sums $\sum_{n=1}^N \frac{1}{n^p}$ will increase to infinity.

It is also possible that the summands do decrease to zero, but not quickly enough (e.g., the cases $0 < p \le 1$), so that the partial sums still increase to infinity. Here, the addends are in some sense still too big when everything is added together.

On the other hand, if the summands get tiny very quickly (e.g., the cases $p > 1$), then the partial sums increase less and less, such that the series converges to a number.

The phrasing "quickly enough" is referring to this phenomenon, and an example of a concrete definition of what is "quick" is as given above; in particular the "boundary" is at $p=1$.


The same sort of thing happens for integrals as well (see the integral test). That is, $$\int_1^\infty \frac{1}{x^p} \mathop{dx}$$ converges for $p > 1$ but diverges for $p \le 1$.
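The same experiment can be run for the integral using the elementary antiderivative (again just an illustrative sketch with my own naming; $p=1$ is handled separately because the antiderivative there is $\ln x$):

```python
# ∫_1^X x^(-p) dx = (X^(1-p) - 1) / (1 - p) for p != 1, and ln(X) for p = 1.
import math

def integral_to(p, X):
    return math.log(X) if p == 1 else (X**(1 - p) - 1) / (1 - p)

for p in (0.5, 1.0, 2.0):
    values = [round(integral_to(p, X), 3) for X in (10, 1e3, 1e6)]
    print(f"p = {p}: integral up to X = 10, 1e3, 1e6 -> {values}")

# p <= 1: the value keeps growing as X increases (the improper integral diverges).
# p = 2:  the value approaches 1/(p-1) = 1 (the improper integral converges).
```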

angryavian
  • 93,534
4

For exactly the same reason that sums whose terms go to zero don't necessarily converge. The famous example is the harmonic series $\sum_n \frac{1}{n}$, which diverges to infinity despite the fact that its terms decrease to zero. The intuitive reason it doesn't converge is that $$ 1 + 1/2 + 1/3+\ldots = 1 + 1/2 +( 1/3 + 1/4)+ (1/5 + 1/6 + 1/7 + 1/8) + (1/9+\ldots + 1/16)+\ldots \\ \ge 1 + 1/2 + 1/2+1/2+\ldots = \infty.$$

In other words, if we keep taking more terms, we can always make the sum as large as we want. The number of terms we need grows quickly, roughly doubling every time we want the sum to go up by another $1/2$, so the growth of the partial sums is logarithmic.

By contrast, if we look at something that decays more quickly, like the geometric series $$ 1 + 1/2 + 1/4+1/8+\ldots$$ we can see that the sum always remains less than two, no matter how many terms you take. At each step the next term shrinks enough so that we can't quite make it to two.
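A small sketch of both observations (the helper name is mine): count how many harmonic terms are needed to pass each whole-number threshold, and check that the geometric partial sums stay below two.

```python
# How many terms of 1 + 1/2 + 1/3 + ... are needed to exceed a threshold?
# The count grows roughly exponentially in the threshold, i.e. the partial
# sums grow only logarithmically in the number of terms.
def terms_needed(threshold):
    total, n = 0.0, 0
    while total <= threshold:
        n += 1
        total += 1.0 / n
    return n

for t in range(1, 8):
    print(f"harmonic sum exceeds {t} after {terms_needed(t)} terms")

# By contrast, the geometric partial sums 1 + 1/2 + 1/4 + ... stay below 2:
print(sum(0.5**k for k in range(30)))   # close to 2, but never reaches it
```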

The convergence of sums and integrals is related by the integral test, which roughly says that $\sum_n f(n)$ converges/diverges if and only if $\int_a^x f(t)\,dt$ converges/diverges as $x\to\infty$. In particular, we have $$ \int_1^x \frac{1}{t} dt = \ln (x),$$ which grows logarithmically just like the harmonic series. Using the rules of calculus we see that for $p>1$ $$ \int_1^x \frac{1}{t^p}dt = \frac{1}{p-1}\left(1-\frac{1}{x^{p-1}}\right) \to \frac{1}{p-1},$$ so power-law decay with power greater than one is "fast enough", while decaying like $1/x$ is "too slow".

So the boundary is, roughly speaking, $1/x$. More accurately, we say it's $1/x^{1+\epsilon}$, since there are functions that decay strictly faster than $1/x$ for which the integral still diverges. For instance, $$ \int_e^x \frac{1}{t\ln t}dt =\ln(\ln(x))\to \infty,$$ so $\frac{1}{t\ln t}$ decays too slowly, whereas $$ \int_e^x \frac{1}{t(\ln t)^2}dt =1-\frac{1}{\ln(x)}\to 1,$$ so $\frac{1}{t(\ln t)^2}$ decays fast enough.
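A quick numerical check of those two boundary cases, using the antiderivatives stated above (a purely illustrative sketch):

```python
# Compare the two boundary cases as x grows:
#   ∫_e^x dt/(t ln t)     = ln(ln x)     -> grows without bound
#   ∫_e^x dt/(t (ln t)^2) = 1 - 1/ln(x)  -> approaches 1
import math

for x in (1e2, 1e6, 1e12, 1e24):
    slow = math.log(math.log(x))      # diverges, but extremely slowly
    fast = 1 - 1 / math.log(x)        # converges to 1
    print(f"x = {x:.0e}:  ln(ln x) = {slow:.3f},  1 - 1/ln(x) = {fast:.3f}")
```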

  • I appreciate the many good answers to this question. I accepted this one because the addition in the beginning of your answer helped me the most to get an intuitive idea of convergence. – Frans Rodenburg Nov 01 '17 at 01:57
2

A definite integral can be represented as a sum:

Take the closed interval $[a,b]$ and a partition $a=a_0<a_1<\cdots<a_n=b$. Now you have $n$ new intervals $[a_{i-1},a_i]$ whose union is the original interval. Define $\Delta_i=a_i-a_{i-1}$ and pick $x_i^*\in(a_{i-1},a_i)$.

Now we have $$\int_a^b f(x)\ dx\approx\sum_{i=1}^n f\left(x_i^*\right)\Delta_i.$$ So what happens when $b$ goes to infinity?

By the way, this is a rough explanation; I recommend reading more about Riemann sums if this interests you.
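Here is a rough sketch of that idea in code (the names are my own): approximate the integral by such a sum, then let the upper limit $b$ grow and watch whether the values settle down.

```python
# Midpoint Riemann sum for ∫_a^b f(x) dx with n equal-width subintervals.
def riemann_sum(f, a, b, n):
    delta = (b - a) / n                               # Δ_i (all equal here)
    return sum(f(a + (i + 0.5) * delta) * delta for i in range(n))

# Example: ∫_1^b 1/x^2 dx = 1 - 1/b, so the values approach 1 as b grows.
f = lambda x: 1.0 / x**2
for b in (10, 100, 1000):
    print(b, round(riemann_sum(f, 1, b, 100_000), 4))

# With f(x) = 1/x instead, the same experiment keeps growing like ln(b).
```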

Holo
  • 10,246
  • If you want to know more you can also read about the Darboux integral. A little side note: $x_i^*$ can actually be equal to $a_i$ or $a_{i-1}$, but it is irrelevant in this case – Holo Oct 31 '17 at 05:15
1

The idea of "rapid" decay is non-sense if you do not put it in context. In particular, we generally find rapid decay to be a big part of convergence of sums or convergence of integrals as many people noted. Frankly, for convergence of sums, the estimates for how fast something must decay are well-known. For example, the $p$-series test you learn in calculus. Now keep in mind these are fixed in the sense there is no function here. In particular, if we have a power series, a grand question would be when does it converge for a given $x$. More generally, if we are given a power series with the coefficients in some random or arranged pattern, when would this series converge? If the coefficients decay fast enough, such as $p$-series, it will converge for some range of $x$.

A computational question one might associate with decay is how regular the decay is: is it decaying smoothly, or according to some random distribution?

Now to answer your questions more precisely. For the first question, the intuition is roughly this: if the term decays like the graph of $1/x^2$, we are happy; if it is something else, we need to compare it with known decays (which will make us happy) or prove convergence some other way (which will make us angry and then motivated). For your second question, the boundary you speak of is not so much a single boundary as a family of boundaries (if you can even call them that). For example, $1/x$ is the boundary at which a series no longer converges, but adding even a little bit to that exponent $1$ easily guarantees convergence. Another "boundary" would be $\frac{1}{x\log x}$, which also does not give convergence, but raising the $\log x$ factor to a power greater than $1$ makes it converge.

Now for all intents and purposes, you shouldn't really think of these as boundaries, but rather just as speeds of decay. If you have pen and paper with you, draw a curve such as $1/x$, then draw $1/x^2$, and examine the difference in height for large $x$. That should give you some sense of what makes or breaks a convergent series.
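To put numbers on that pen-and-paper picture, here is a small sketch (purely illustrative, variable names are mine) comparing pointwise heights and running sums of $1/n$ and $1/n^2$:

```python
# Pointwise heights vs. accumulated totals for 1/n and 1/n^2.
# Both heights shrink to zero, but only one running total stays bounded.
total_slow, total_fast = 0.0, 0.0
checkpoints = {10, 1_000, 100_000}
for n in range(1, 100_001):
    total_slow += 1.0 / n        # terms like 1/x: decay "too slow"
    total_fast += 1.0 / n**2     # terms like 1/x^2: decay "fast enough"
    if n in checkpoints:
        print(f"n = {n}: heights {1/n:.2e} vs {1/n**2:.2e}, "
              f"running sums {total_slow:.3f} vs {total_fast:.4f}")

# The 1/n total keeps climbing (roughly like ln n), while the 1/n^2 total
# settles near pi^2/6 ≈ 1.645.
```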

  • Thank you, I appreciate the concrete examples! I don't fully understand how something could decay according to a random distribution, though. Could you give an example? – Frans Rodenburg Nov 01 '17 at 01:53
  • For example, consider a graph with random points placed at every x>0. If you see that the points are tending to 0, then you can see that this is a kind of decay which by virtue decays randomly. If there are outliers, say countably many, we can "ignore" them and we may find some notion of "essential decay". –  Nov 03 '17 at 09:29