
I'd like to be able to construct polynomials $p$ whose graphs look like this:

[figure: graph of a degree-12 polynomial that equi-oscillates on $[-1,1]$ and is flat at both endpoints]

We can assume that the interval of interest is $[-1, 1]$. The requirements on $p$ are:

(1) Equi-oscillation (or roughly equal, anyway) between two extremes. A variation of 10% or so in the values of the extrema would be OK.

(2) Zero values and derivatives at the ends of the interval, i.e. $p(-1) = p(1) = p'(-1) = p'(1) = 0$

I want to do this for degrees up to around 30 or so. Just even degrees would be OK.

If it helps, these things are a bit like Chebyshev polynomials (but different at the ends).

The one in the picture has equation $0.00086992073067855669451 - 0.056750328789339152999 t^2 + 0.60002383910750621904 t^4 - 2.3217878459074773378 t^6 + 4.0661558859963998471 t^8 - 3.288511471137768132 t^{10} + t^{12}$

I got this through brute-force numerical methods (solving a system of non-linear equations, after first doing a lot of work to find good starting points for iteration). I'm looking for an approach that's more intelligent and easier to implement in code.
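As a sanity check, the endpoint conditions on this degree-12 polynomial can be verified numerically. A quick Python sketch, with the coefficients copied from the equation above (function names are mine):

```python
# Coefficients c[k] of p(t) = sum_k c[k] * t^(2k), copied from the
# degree-12 polynomial given above (even powers only).
c = [0.00086992073067855669451, -0.056750328789339152999,
     0.60002383910750621904, -2.3217878459074773378,
     4.0661558859963998471, -3.288511471137768132, 1.0]

def p(t):
    # Horner evaluation in u = t^2, since only even powers occur
    u, s = t * t, 0.0
    for ck in reversed(c):
        s = s * u + ck
    return s

def dp(t):
    # p'(t) = t * sum_k 2k * c[k] * (t^2)^(k-1)
    u, s = t * t, 0.0
    for k in range(len(c) - 1, 0, -1):
        s = s * u + 2 * k * c[k]
    return s * t

print(p(1.0), dp(1.0), p(-1.0), dp(-1.0))  # all ~0, up to rounding in the coefficients
```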

Here is one idea that might work. Suppose we want a polynomial of degree $n$. Start with the Chebyshev polynomial $T_{n-2}(x)$. Let $Q(x) = T_{n-2}(sx)$, where the scale factor $s$ is chosen so that $Q(-1) = Q(1) = 0$. Then let $R(x) = (1-x^2)Q(x)$. This satisfies all the requirements except that its oscillations are too uneven -- they're very small near $\pm1$ and too large near zero. Redistribute the roots of $R$ a bit (somehow??) to level out the oscillations.
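The first half of this idea is straightforward to code. Here is a sketch (NumPy, variable names my own, $n = 12$) that builds $R(x) = (1-x^2)\,T_{n-2}(sx)$ and checks the endpoint conditions; note that $R'(\pm 1) = 0$ comes for free, since $R'(x) = -2xQ(x) + (1-x^2)Q'(x)$ and $Q(\pm 1) = 0$:

```python
import numpy as np
from numpy.polynomial import chebyshev as C, polynomial as P

n = 12                               # target degree
m = n - 2
s = np.cos(np.pi / (2 * m))          # largest root of T_m, so T_m(s * (+-1)) = 0
T = C.cheb2poly([0] * m + [1])       # power-basis coefficients of T_m(x)
Q = T * s ** np.arange(m + 1)        # coefficients of Q(x) = T_m(s x)
R = P.polymul([1.0, 0.0, -1.0], Q)   # R(x) = (1 - x^2) Q(x), degree n

dR = P.polyder(R)
print([P.polyval(x0, R) for x0 in (-1.0, 1.0)])   # both ~0
print([P.polyval(x0, dR) for x0 in (-1.0, 1.0)])  # both ~0
```

The oscillations of this $R$ are indeed uneven (tiny near $\pm 1$, largest near $0$), which is the part that still needs the root redistribution.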

Comments on answers

Using the technique suggested by achille hui in an answer below, we can very easily construct a polynomial with the desired shape. Here is one:

[figure: a polynomial with the desired shape, constructed via achille hui's technique]

The only problem is that I was hoping for a polynomial of degree 12, and this one has degree 30.

Also, I was expecting the solution to grow monotonically outside the interval $[-1,1]$, and this one doesn't, as you can see here:

[figure: the behaviour of that polynomial beyond the unit interval; it is not monotonic there]

bubba
    From your context, where you have $p(0)=\ldots$ should be $p(-1)=\ldots$ correct? Also, you apparently want oscillation of some sort, correct? Since otherwise you could just do an easy fourth order poly to get one "hump" in the middle with your endpoint requirements, using $(x-1)^2(x+1)^2$ scaled appropriately. – adam W Jan 21 '13 at 15:11
  • I would assume some approach similar to the one of Lagrange polynomials could give you what you're after, but you'd have to change the method to focus more on local extrema. – Arthur Jan 21 '13 at 15:28
  • Ah, you want your curve to be tangent to the circle at the end points right? ;-) – WimC Jan 21 '13 at 20:51
  • @WimC -- that's right. Getting greedy :-) BTW, I still can't find your e-mail address. – bubba Jan 22 '13 at 01:06
  • @adam -- yes, I want to be able to specify the degree (and therefore the number of oscillations). Degrees 4 and 6 are easy, as you observed. – bubba Jan 22 '13 at 01:09
  • @Amzoti -- Sorry, but I don't see how Legendre polynomials are relevant. They have p(-1) = p(1) = 1 (not zero) and the end derivatives are not zero, either. – bubba Jan 22 '13 at 01:14
  • @Arthur -- Yes. I can get any polynomial I want by Lagrange interpolation of some set of points. But, in this case, I don't know what points to interpolate. I don't know either the abscissae or the ordinates of the extrema. – bubba Jan 22 '13 at 01:16
  • @bubba A very small modification of this answer results in a curve that is tangent at all points on the circle. Maybe that's interesting for you? – WimC Jan 22 '13 at 14:23
  • @WimC -- sounds somewhat interesting. I suppose you would use the square of the Chebyshev polynomial, rather than the polynomial itself. That has some value, but it will result in larger-than-optimal deviations for a given degree, I think. Not sure how much larger. About 2x, maybe ?? – bubba Jan 23 '13 at 04:42
  • @bubba No need for a square. Just note that the "Chebyshev curve" already touches two circles with radius slightly less and slightly greater than one. You have to modify the parameter $t$ a bit to exactly cover a quarter circle. It also shows that it indeed costs you about a factor two in maximal error. – WimC Jan 23 '13 at 06:14
  • @WimC -- I think I see what you mean. Just scale the Chebyshev curve so that its "peaks" coincide with the ends of the arc. That way, you'll get tangency. But the approximation will lie either completely outside the circle, or completely inside it, depending on which peaks you choose. Is that your idea? That approach is useful, but a 2x increase in max error is painful. – bubba Jan 23 '13 at 12:36
  • something like in this article http://go.helms-net.de/math/pascal/bernoulli_en.pdf ? On the title page there is the complete family of that polynomials and at the end (pg 17/18 the G-functions) they are separated into 4 groups. One of that groups seem to fit -at least roughly- the bill ? – Gottfried Helms Jan 24 '13 at 14:04
  • Looks like some massive amount of cancellation going on, especially toward the endpoints of the interval. Is that OK? – Ron Gordon Jan 24 '13 at 16:03

2 Answers


Start with any Chebyshev polynomial of the first kind $T_n(x)$, $n > 2$, and any positive root $a$ of it. Compose $T_n(ax)$ with any polynomial $q(x)$ that is monotonic over $[-1,1]$ and satisfies $q(-1) = -1$, $q(1) = 1$ and $q'(-1) = q'(1) = 0$; then $T_n(a q(x))$ is something you want. For example, $$ T_n\left(\cos\left(\frac{\pi}{2n}\right) \frac{x(3-x^2)}{2}\right) $$
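For a concrete check, here is a sketch in Python of the example above with $n = 10$, $a = \cos(\pi/(2n))$ and $q(x) = x(3-x^2)/2$ (helper names are mine); the composite $T_n(a\,q(x))$ has degree $3n = 30$:

```python
import numpy as np

n = 10                       # degree of the Chebyshev factor; composite degree is 3n = 30
a = np.cos(np.pi / (2 * n))  # largest positive root of T_n, so T_n(+-a) = 0

def q(x):
    # monotonic on [-1,1], with q(+-1) = +-1 and q'(+-1) = 0
    return x * (3 - x * x) / 2

def p(x):
    # T_n(a q(x)); since |a q(x)| <= a < 1 on [-1,1], we may evaluate
    # the (polynomial!) composite via T_n(c) = cos(n arccos c)
    return np.cos(n * np.arccos(a * q(x)))

h = 1e-6
print(p(1.0), p(-1.0))            # both ~0
print((p(1.0) - p(1.0 - h)) / h)  # one-sided difference ~0, so p'(1) ~ 0 as well
```

$p(\pm 1) = T_n(\pm a) = 0$ because $a$ is a root, and $p'(\pm 1) = 0$ by the chain rule because $q'(\pm 1) = 0$.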

achille hui
  • $q(x)$ may be monotonic but it's not really nearly-linear over the unit interval; this will lead to 'bunching' of the zeroes, since the roughly evenly-spaced zeroes of the Chebyshev polynomial will be mapped to unevenly-spaced locations. – Steven Stadnicki Jan 24 '13 at 16:56
  • The zeros of $T_n(x)$ are not that uniform in $x$. The density of zeroes blows up like $\frac{1}{\sqrt{1-x^2}}$ as $x \rightarrow \pm 1$. $q(x)$ actually makes the zeroes more evenly-spaced. – achille hui Jan 24 '13 at 18:30
  • Well, you answered the question I asked, though not really the one I intended to ask. I wanted a polynomial of lowest possible degree, but I didn't say that. My example has degree 12. Using your technique, I can get a polynomial that looks quite a lot like mine, but it has degree 30. I added a picture to the question. Not really what I wanted, but your result might be useful, anyway, and you did answer the question – bubba Feb 13 '13 at 11:36

Here is the Newton version of the Asymptote code that does the job. It now runs really fast in the requested range. The polynomial is encoded by its roots (stored in the x array). Thanks to the kind soul who had the patience to type 4 spaces at the beginning of each line :).

Edit. Since bubba wanted to know where the main iteration step came from, here is a short explanation.

First of all, as I said, it is convenient to work with the sum $f(x)=\sum_k\log|x-x_k|$ instead of the product $\prod_k (x-x_k)$ (for both numerical and analytic reasons). This sum dives to $-\infty$ at each root $x_k$ and attains local maxima $y_{k-1}$ and $y_k$ at some points $z_{k-1}\in(x_{k-1},x_k)$ and $z_k\in(x_k,x_{k+1})$. Now, let's make a leap of faith and assume that $y_{k-1}$ and $y_k$ are not very different and also that the $z$-points are approximately in the middle of the corresponding intervals (the first assumption is bound to happen sooner or later if our algorithm works at all; the second one is poorly justified by itself but more reasonable than any other wild guess about the location).

We want to shift the root $x_k$ so that the maxima $y_{k-1}$ and $y_k$ become equal. Linearizing and assuming that the intervals $(x_{k-1},x_k)$ and $(x_k,x_{k+1})$ are of approximately equal length, we see that the partial derivatives of $f(z_{k-1})$ and $f(z_k)$ with respect to $x_k$ are about $\frac{4}{x_{k+1}-x_{k-1}}$ and $-\frac{4}{x_{k+1}-x_{k-1}}$ respectively, so, solving the corresponding linear equation, we get the correction $p=(y_k-y_{k-1})(x_{k+1}-x_{k-1})/8$ to apply to $x_k$. That explains the main formula.

Unfortunately, if we just use it literally, a lot of crazy things may happen, up to the loss of the order of the roots, so we should make sure that our shifts are not too large at the early stages, when the imbalances are still large. The second line is a mere truncation of the shifts to a size which, we believe, will constitute only a fraction of the distance between roots (which initially is $2/n$), so that no rearrangement of roots or screwing up of the Newton iteration scheme will occur.
The exact choice of the constant $0.3$ is empirical (i.e., I just checked that it worked in the requested range and that, say, $0.5$ did not). A formal justification of the algorithm would require quite a lot of careful estimates (you can easily notice that some of the assumptions I made border on wishful thinking) but, since all we need is one sequence of polynomials of not too high degree, I thought it worth trying without worrying too much about formal proofs: the final result is verifiable, and once you get it, who cares what steps led to it? I know, this is a dismal thing to say for a mathematician, but since I'm wearing a programmer's hat here, I decided I could try to get away with it :).

int n=51;
//The number of intermediate roots plus 1 (n equals degree - 3)

real[] x,y,z;
//The root array, the maximum array, and the critical point array

for(int k=0;k<n+1;++k) {x[k]=2*k/n-1; if(k>0) z[k-1]=(x[k-1]+x[k])/2;} 
//Just initialized the roots to an arithmetic progression
//and the critical points to midpoints between roots

real f(real t)
{
real s=2*log(1-t^2);
for(int k=1;k<n;++k) s+=log(abs(t-x[k]));
return s;
}
//This is just the logarithm of the absolute value of the polynomial
//with double roots at +1,-1 and simple roots at x[k], k=1,...,n-1

real g(real t)
{
real s=2*(1/(t+1)+1/(t-1));
for(int k=1;k<n;++k) 
s+=1/(t-x[k]);
return s;
}
//This is the derivative of f

real G(real t)
{
real s=-2*(1/(t+1)^2+1/(t-1)^2);
for(int k=1;k<n;++k) 
s-=1/(t-x[k])^2;
return s;
}
//This is the derivative of g

for(int m=0;m<15*n+30;++m)
//The outer cycle; the number of iterations is sufficient 
//to cover n up to 70 with the roots found with machine precision (1E-15) 
{
  for(int k=0;k<n/2;++k) 
  {
    real a=z[k];
    a-=g(a)/G(a); 
    y[k]=f(a); y[n-1-k]=y[k];
    z[k]=a; z[n-1-k]=-a;
  }
  //Newton update of critical points with symmetry taken into account

  real[] xx=copy(x);
  for (int k=1;k<n/2;++k) 
  { 
    real p=(y[k]-y[k-1])*(xx[k+1]-xx[k-1])/8; 
    if (abs(p)>0.3/n) p*=0.3/n/abs(p);
    x[k]+=p; x[n-k]-=p;   
  }
//The main iteration step: move each root to balance the maxima
//adjacent to it. It could be done better if we looked beyond the
//adjacent maxima to evaluate what will happen but, since it works
//in a reasonable time, I was too lazy to bother.

}

write(x);
write(y);
//outputs the roots and the maxima of f. Note that the roots at +1 and -1
//are double and the rest are simple.

pause();
//Just doesn't let the window close before you look at it.
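For those without Asymptote, the scheme ports directly. Below is a rough Python translation of the code above (my own sketch, not fedja's), run with $n = 9$, i.e. the degree-12 case from the question. The loop bound `(n + 1) // 2` is chosen so that odd $n$ is handled; otherwise it follows the Asymptote source line by line.

```python
import numpy as np

n = 9  # n = degree - 3; interior roots x[1..n-1], double roots at +-1

x = np.array([2 * k / n - 1 for k in range(n + 1)])  # roots, arithmetic progression
z = (x[:-1] + x[1:]) / 2                             # critical-point guesses (midpoints)
y = np.zeros(n)                                      # local maxima of f

def f(t):  # log|p(t)| where p(t) = (1-t^2)^2 * prod(t - x[k])
    return 2 * np.log(1 - t * t) + sum(np.log(abs(t - x[k])) for k in range(1, n))

def g(t):  # f'(t)
    return 2 * (1 / (t + 1) + 1 / (t - 1)) + sum(1 / (t - x[k]) for k in range(1, n))

def G(t):  # f''(t); always negative, so Newton for critical points is safe
    return -2 * (1 / (t + 1) ** 2 + 1 / (t - 1) ** 2) - sum(1 / (t - x[k]) ** 2 for k in range(1, n))

for _ in range(15 * n + 30):
    for k in range((n + 1) // 2):       # Newton update of critical points, using symmetry
        a = z[k]
        a -= g(a) / G(a)
        y[k] = f(a); y[n - 1 - k] = y[k]
        z[k] = a; z[n - 1 - k] = -a
    xx = x.copy()
    for k in range(1, (n + 1) // 2):    # balance the maxima adjacent to each root
        p = (y[k] - y[k - 1]) * (xx[k + 1] - xx[k - 1]) / 8
        if abs(p) > 0.3 / n:
            p *= 0.3 / n / abs(p)
        x[k] += p; x[n - k] -= p

print(np.ptp(y))  # spread of the maxima of log|p|; we want this tiny
```

Converting the resulting roots into coefficients should reproduce the degree-12 coefficients in the question, as fedja confirms in the comments.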
fedja
  • Thanks. Looks interesting. I'm not familiar with Asymptote, and the only info I could find referred to a 3D vector graphics language. Is that the "Asymptote" you're using? If not, can you provide a link, please. – bubba Feb 13 '13 at 10:36
  • Yeah, that's it. I used it because I also made some pictures when testing the code and it was very convenient to write the program in a language that has good graphic capabilities. The syntax is essentially the same as that of C++ except a couple of different names for standard types (like real instead of double, etc.) and the general syntax and the treatment of functions and arrays being a bit more flexible. You should be able to convert the code into any other language without any trouble if you need to do it. – fedja Feb 13 '13 at 12:14
  • Thanks. I asked a couple more questions, but they seem to have disappeared. I'll try again: (1) Why logarithms? To combat numerical problems? (2) I don't understand the reasoning behind the main iteration step -- the calculation of "p" and the way it's used. (3) Do your results agree with mine for the degree 12 case? – bubba Feb 13 '13 at 14:39
  • 1) Because addition is easier to differentiate and to handle than multiplication. 2) Ah, that is a long story. I'll elaborate on it later today when I have time. Just run it and check that everything works now. 3) Yes, up to $10^{-15}$ as promised (you need to convert roots into coefficients of course to see it). – fedja Feb 13 '13 at 19:30
  • OK, I added an "explanation" of the formulae in the main iteration step. – fedja Feb 14 '13 at 02:22
  • Thanks very much for the explanation. The formal proof is of no use to me, so I'm happy with what you wrote. I wish I could accept your answer, in addition to the other one. They have roughly equal value (now that you explained a few things). – bubba Feb 14 '13 at 03:28
  • I'm completely fine with your having accepted the other answer :). I played a bit with an analytical approach but convinced myself that no easy formula is possible for the exact solution (if you want the minimal degree polynomial, that is, otherwise you have plenty of options, as you can see from the answer you accepted). So, I just designed a code that generates the first 40 polynomials in decent time with decent precision and made sure it worked. If you want more than that, just let me and other people know what it is and we'll be back to the drawing board ;). – fedja Feb 14 '13 at 04:21
  • @bubba, fedja's plot looks pretty good. Since it really does what you wish to have, you can unaccept my answer and accept his/her instead. – achille hui Feb 16 '13 at 11:49
  • @bubba Nah, leave it as it is. However, if you could comment a bit on why you need such stuff and whether it all works for you now, I will have my idle curiosity satisfied, which will be the best reward I can get from you :). – fedja Feb 16 '13 at 12:03
  • The two answers have roughly equal value, as I said. One gives an analytical method that had not occurred to me, and I may be able to find a way to reduce the degrees of the polynomials it produces. The other gives a numerical approach that is new (to me) and seems to work pretty well. The application is minimax polynomial approximation of curves. I want approximations that match the given curves exactly at their end-points, both in position and tangency. This is a related question: http://math.stackexchange.com/questions/271319/polynomial-approximation-of-circle-or-ellipse/276969#276969 – bubba Feb 17 '13 at 05:04