
Say that we know that $$\sum_{i=1}^n x_i = x_1+x_2+...+x_n = 1$$ for some positive integer $n$, with $x_1 \le x_2 \le x_3 \le ... \le x_n$. The values of $x_1$ and $x_n$ are also known. How can the minimum and maximum values of $$\sum_{i=1}^n x_i^2$$ be found?

My attempt:

I found the minimum value by setting all the $x_i$ other than $x_1$ and $x_n$ equal to each other. This means that $(n-2)x_i + x_1 + x_n = 1 \rightarrow x_i = \frac{1-x_1-x_n}{n-2}$. Therefore, $$\sum_{i=1}^n x_i^2 = \frac{(1-x_1-x_n)^2}{n-2}+x_1^2+x_n^2$$
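As a quick sanity check (not part of the derivation itself), here is a minimal Python sketch with hypothetical values $n=6$, $x_1=0.05$, $x_n=0.45$: it samples random feasible choices of the middle coordinates and confirms that none of them goes below this candidate minimum.

```python
import numpy as np

rng = np.random.default_rng(0)
n, x1, xn = 6, 0.05, 0.45                     # hypothetical values
m, S = n - 2, 1.0 - x1 - xn                   # middle coordinates and their required sum
candidate_min = x1**2 + xn**2 + (1 - x1 - xn)**2 / (n - 2)

best_seen = np.inf
for _ in range(100_000):
    mid = rng.uniform(x1, xn, m)
    mid += (S - mid.sum()) / m                # shift so the middle coordinates sum to S
    if x1 <= mid.min() and mid.max() <= xn:   # keep only feasible samples
        best_seen = min(best_seen, x1**2 + xn**2 + np.sum(mid**2))

print(candidate_min, best_seen)               # best_seen never drops below candidate_min
```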

However, I do not know how to find the maximum. The hard part is that $x_1 \le x_i \le x_n$ must be satisfied.

  • $x_1$ and $x_n$ are known and fixed. – Varun Vejalla Aug 30 '19 at 04:06
  • by the maximum principle, the maximum of a convex function over a bounded polyhedral set occurs at an extreme point of that set – LinAlg Sep 04 '19 at 06:20
  • In 1981 Slater proved an interesting companion inequality to Jensen’s inequality.

    Theorem: Suppose that $\phi:I\subseteq \mathbb{R} \to \mathbb{R}$ is an increasing convex function on an interval $I$, let $x_1,x_2,\cdots,x_n \in I^{\circ}$ (where $I^{\circ}$ is the interior of $I$), and let $p_1,p_2,\cdots,p_n\geq 0$ with $P_n=\sum_{i=1}^{n}p_i>0$. If $\sum_{i=1}^{n}p_i\phi'_{+}(x_i)>0$, then: $$\frac{1}{P_n}\sum_{i=1}^{n}p_i\phi(x_i)\leq\phi\Big(\frac{\sum_{i=1}^{n}p_i\phi'_{+}(x_i)x_i}{\sum_{i=1}^{n}p_i\phi'_{+}(x_i)}\Big) $$

    – Barackouda Sep 06 '19 at 12:22

2 Answers


$f(x)=x^2$ is a convex function.

Also, $$(x_1+x_2+...+x_{n-1}-(n-2)x_1,x_1,...,x_1)\succ(x_{n-1},x_{n-2},...,x_1),$$ and assume that $x_n\geq x_1+x_2+...+x_{n-1}-(n-2)x_1.$

Thus, by Karamata $$(x_1+x_2+...+x_{n-1}-(n-2)x_1)^2+x_1^2+...+x_1^2\geq x_{n-1}^2+...+x_1^2,$$ which gives $$\max\sum_{k=1}^nx_k^2=(n-2)x_1^2+x_n^2+(1-x_n-(n-2)x_1)^2.$$

Id est, it's enough to solve our problem for $x_1\leq x_n<x_1+x_2+...+x_{n-1}-(n-2)x_1$ or $$x_1\leq x_n<\frac{1-(n-2)x_1}{2}.$$
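Not part of the original answer, but a minimal numerical sanity check of the closed form above, using LinAlg's observation that a convex function attains its maximum over a bounded polyhedral set at a vertex. The values $n=6$, $x_1=0.05$, $x_n=0.45$ are hypothetical and satisfy $x_n\geq\frac{1-(n-2)x_1}{2}$, so the formula applies.

```python
import numpy as np

def max_sum_squares_vertices(n, x1, xn):
    """Enumerate the vertices of {x1 <= y_k <= xn, sum(y) = 1 - x1 - xn}
    for the n-2 middle coordinates: at a vertex at most one coordinate
    is strictly between x1 and xn.  Return the largest sum of squares."""
    m, S = n - 2, 1.0 - x1 - xn
    best = -np.inf
    for k in range(m):                      # k middle coordinates pinned at xn
        r = S - k * xn - (m - 1 - k) * x1   # the single remaining coordinate
        if x1 - 1e-12 <= r <= xn + 1e-12:   # keep only feasible vertices
            val = x1**2 + xn**2 + k * xn**2 + (m - 1 - k) * x1**2 + r**2
            best = max(best, val)
    return best

# hypothetical values with x_n >= (1 - (n-2) x_1) / 2
n, x1, xn = 6, 0.05, 0.45
karamata = (n - 2) * x1**2 + xn**2 + (1 - xn - (n - 2) * x1)**2
print(karamata, max_sum_squares_vertices(n, x1, xn))   # both equal 0.335
```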

I hope it will help.

The minimum we can get by C-S: $$\sum_{k=1}^nx_k^2=x_1^2+x_n^2+\frac{1}{n-2}\sum_{k=1}^{n-2}1^2\sum_{k=2}^{n-1}x_k^2\geq x_1^2+x_n^2+\frac{1}{n-2}\left(\sum_{k=2}^{n-1}x_k\right)^2=$$ $$=x_1^2+x_n^2+\frac{(1-x_1-x_n)^2}{n-2}.$$ The equality occurs for $x_2=...=x_{n-1}=\frac{1-x_1-x_n}{n-2},$ which says that we got a minimal value.

  • The Karamata solution is cool, but does it satisfy $x_{n-1} \leq x_n$? If I'm reading the solution right, we have $x_{n-1} = 1 - x_n - (n-2)x_1$. Is this always $\leq x_n$? –  Aug 30 '19 at 05:07
  • @Bungo I see it now. There is a problem with when the equality occurs. – Michael Rozenberg Aug 30 '19 at 05:44

For the maximum: Suppose we have fixed values $x_1 \leq \frac{1}{n}$ and $x_n \geq \frac{1}{n}$. Then there is a unique point $x^*=(x_1, x_2, \dots, x_n)$ satisfying $\sum x_i=1$ with at most one index $j$ satisfying $x_1 < x_j < x_n$ (imagine starting with all the variables equal to $x_1$, then increasing them one by one to $x_n$). I claim this is where the unique maximum of your function is.

Consider any other point in the domain, and suppose it has $x_1<x_i\leq x_j<x_n$ for some $i \neq j$.

Let $\epsilon = \min\{x_i-x_1, x_n-x_j\}$. Replacing $x_i$ by $x_i'=x_i-\epsilon$ and $x_j$ by $x_j'=x_j+\epsilon$ maintains the $\sum x_i=1$ constraint, while decreasing the number of "interior to $(x_1, x_n)$" variables by one. Furthermore, the new point is better for our objective function: In the sum of squares objective we've replaced $x_i^2+x_j^2$ by $$x_i'^2+x_j'^2=(x_i-\epsilon)^2+(x_j+\epsilon)^2 = x_i^2+x_j^2 + 2 \epsilon^2 + 2 \epsilon(x_j-x_i) > x_i^2+x_j^2.$$

Repeatedly following this process, we'll eventually reach the point $x^*$ from our arbitrary point, increasing the objective at every step.
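Here is a minimal sketch of this exchange process (the function name and the starting values are my own, not from the answer): beginning at an arbitrary feasible point, it repeatedly pushes a pair of interior coordinates apart and records the objective, which increases at every step until at most one interior coordinate remains.

```python
import numpy as np

def push_to_extremes(x, x1, xn):
    """Repeatedly apply the exchange step described above: pick two coordinates
    strictly between x1 and xn and push them epsilon apart, until at most one
    interior coordinate is left."""
    x = np.array(x, dtype=float)
    history = [np.sum(x**2)]
    while True:
        interior = [k for k in range(len(x)) if x1 < x[k] < xn]
        if len(interior) <= 1:
            break
        i, j = interior[0], interior[-1]     # any two interior indices work
        if x[i] > x[j]:
            i, j = j, i                      # ensure x[i] <= x[j]
        eps = min(x[i] - x1, xn - x[j])      # eps > 0 since both are interior
        x[i] -= eps                          # push one coordinate down to x1 ...
        x[j] += eps                          # ... or the other up to xn
        history.append(np.sum(x**2))         # the objective strictly increases
    return x, history

# hypothetical feasible example with n = 6, x1 = 0.05, xn = 0.45 (sums to 1)
x1, xn = 0.05, 0.45
xstar, hist = push_to_extremes([x1, 0.10, 0.12, 0.13, 0.15, xn], x1, xn)
print(xstar, hist)    # ends at one interior coordinate; hist is increasing
```

With these numbers the process terminates at $(0.05, 0.05, 0.05, 0.05, 0.35, 0.45)$, whose sum of squares $0.335$ agrees with the closed form in the other answer.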


The key idea hiding in the background here is that (as Michael Rozenberg noted) the function $x^2$ is convex. So if we want to maximize $\sum x_i^2$ given a fixed $\sum x_i$, we want to push the variables as far away from each other as possible. The $x_1$ and $x_n$ constraints place limits on this, so effectively what ends up happening is we push points out to the boundary until we can't push them out any further. The minimum you observed is the reverse of this: To minimize the sum of a convex function for fixed $\sum x_i$ we push all the inputs together as much as possible (this corresponds to Jensen's Inequality).