
Let $a, b \in (0,1)$ with $a<b$, and for $x \in [a,b]$ define $ \omega(x)= \begin{cases} b-x & \text{if } x \in [a,m]\\ x-a & \text{if } x \in [m,b]\end{cases} $ where $m=\frac{a+b}{2}$. Let also $C \ge 1$ and $0 \le c \le 1$. I am looking for the maximal value $R_{\max}$ of $$R_f=\frac{\int_{a}^{b}\omega(x)f(x)\mathrm{d}x}{\int_{a}^{b}f(x)\mathrm{d}x}$$ under the constraints $c \le f(x) \le C$ for all $x \in (0,1)$ and $\int_{0}^{1}f(x)\mathrm{d}x=1$.

Some observations:

  • if $f=1$, then $R_f=\frac{3}{4}(b-a)$.
  • $\omega$ is a V-shaped piecewise affine function, so we should put more weight towards the boundaries $a$ and $b$. Without the constraint $c \le f(x) \le C$, we could take, in the limit, $f$ to be a Dirac mass at $a$ or $b$ and get $R_f=b-a$. Combining this with the previous point gives $\frac{3}{4}(b-a) \le R_{\max} \le b-a.$
  • My intuition would be that the best $f$ is piecewise constant, equal to $C$ on $[a,a+t] \cup [b-t,b]$ and equal to $c$ elsewhere on $(0,1)$, where $t$ is chosen to ensure that $\int_{0}^{1}f(x)\mathrm{d}x=1$ (a quick numerical check of this guess is sketched below). Is that intuition correct? I don't know how to formalize it.
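
For what it's worth, here is a small numerical experiment with this conjectured shape; the values of $a,b,c,C$ and the midpoint-rule integrator are arbitrary choices, just to get a number to compare against $\frac{3}{4}(b-a)$.

```python
import numpy as np

# Example values (chosen arbitrarily for illustration).
a, b, c, C = 0.2, 0.8, 0.3, 3.0
mid = (a + b) / 2

def omega(x):
    # b - x on [a, mid], x - a on [mid, b]
    return np.where(x <= mid, b - x, x - a)

def integrate(fn, lo, hi, n=200000):
    # midpoint rule, accurate enough for piecewise-smooth integrands
    x = np.linspace(lo, hi, n, endpoint=False) + (hi - lo) / (2 * n)
    return np.mean(fn(x)) * (hi - lo)

def R(f):
    return integrate(lambda x: omega(x) * f(x), a, b) / integrate(f, a, b)

# Conjectured shape: f = C on [a, a+t] U [b-t, b], f = c elsewhere on (0,1);
# t is fixed by the normalization 2*t*C + (1 - 2*t)*c = 1.
t = (1 - c) / (2 * (C - c))
assert 0 < t <= (b - a) / 2, "conjectured shape only fits if t <= (b-a)/2"

def f_conj(x):
    near_edge = ((x >= a) & (x <= a + t)) | ((x >= b - t) & (x <= b))
    return np.where(near_edge, C, c)

print("R for f == 1            :", R(lambda x: np.ones_like(x)))  # = 3(b-a)/4
print("R for conjectured shape :", R(f_conj))
```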

1 Answer


It turns out that it is possible to find an analytic expression for the maximizers of this problem, assuming $m\leq f(x)\leq M$ with $m\geq 0$ (I write $m$ and $M$ for the bounds called $c$ and $C$ in the question; this $m$ is unrelated to the midpoint $\frac{a+b}{2}$). However, as becomes clear in the following analysis, the optimum is not always of the form presented in the OP.

To start, we perform the change of variables $u=\frac{x-a}{b-a}$ in both numerator and denominator; we find that

$$R_f=\frac{b-a}{2}+(b-a)\frac{\int_0^1|u-\frac{1}{2}|f(a+(b-a)u)du}{\int_0^1f(a+(b-a)u)du}$$

and therefore we can, equivalently, solve the maximization problem

$$\max_{g}\frac{\int_0^1|x-\frac{1}{2}|g(x)dx}{\int_0^1 g(x)dx},~~ m\leq g(x)\leq M$$

which can also be rewritten in the more convenient saddle-point form, writing $y$ for the total mass $\int_0^1 g(t)dt$ and enforcing this identification with a multiplier $\lambda$:

$$\max_{\lambda, y,g} R_g:=\max_{\lambda, y,g} \frac{1}{y}\int_0^1\left|t-\frac{1}{2}\right|g(t)dt-\lambda\left(\int_{0}^1g(t)dt -y \right), ~~y\leq 1,~~ m\leq g(t)\leq M$$
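
Before restricting to piecewise constant functions, here is a quick numerical sanity check of the change of variables above; the test values of $a,b$, the test function $f$, and the midpoint-rule integrator are ad hoc choices.

```python
import numpy as np

a, b = 0.2, 0.8
mid = (a + b) / 2

def omega(x):
    return np.where(x <= mid, b - x, x - a)

def integrate(fn, lo, hi, n=200000):
    # midpoint rule
    x = np.linspace(lo, hi, n, endpoint=False) + (hi - lo) / (2 * n)
    return np.mean(fn(x)) * (hi - lo)

f = lambda x: 1.0 + np.sin(7 * x) ** 2            # any positive test function

lhs = integrate(lambda x: omega(x) * f(x), a, b) / integrate(f, a, b)

g = lambda u: f(a + (b - a) * u)                  # g(u) = f(a + (b-a)u)
rhs = (b - a) / 2 + (b - a) * integrate(lambda u: np.abs(u - 0.5) * g(u), 0, 1) \
                            / integrate(g, 0, 1)

print(lhs, rhs)   # the two numbers should agree closely
```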

I will consider the following subset of piecewise constant functions

$$g(x)=\sum_{n=1}^N G_{n}\,1_{\{t_{n-1}\leq x< t_n\}},\qquad t_0=0,\quad t_N=1,\quad m\leq G_n\leq M$$

Since any continuous function can be approximated arbitrarily well within this class, it suffices to find the $G_n$ and $t_n$ that maximize the expression above. Assume the ordering $t_0<t_1<\dots<t_{N-1}<t_N$ and let $p$ denote the index of the piece containing the midpoint, so that $t_{p-1}<1/2\leq t_{p}$; then one can write the following expressions for the integrals involved:

$$\int_0^1 |x-1/2|g(x)dx=\sum_{n=1}^{p-1}G_n\frac{(t_{n-1}-1/2)^2-(t_{n}-1/2)^2}{2}+\sum_{n=p+1}^{N}G_n\frac{(t_{n}-1/2)^2-(t_{n-1}-1/2)^2}{2}+G_p\frac{(t_{p-1}-1/2)^2+(t_{p}-1/2)^2}{2}$$

$$\int_0^1 g(x)dx=\sum_{n=1}^N G_n(t_n-t_{n-1})$$
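
As a sanity check of these two closed forms, one can compare them against direct quadrature for an arbitrary piecewise constant $g$; the breakpoints and values below are ad hoc.

```python
import numpy as np

def integrate(fn, lo, hi, n=400000):
    x = np.linspace(lo, hi, n, endpoint=False) + (hi - lo) / (2 * n)
    return np.mean(fn(x)) * (hi - lo)

# A test piecewise-constant g with N = 3 pieces (values chosen arbitrarily).
t = np.array([0.0, 0.3, 0.7, 1.0])        # t_0 < t_1 < t_2 < t_3
G = np.array([2.0, 0.4, 1.5])             # G_1, G_2, G_3
p = int(np.searchsorted(t, 0.5))          # 1-based index of the piece containing 1/2

def g(x):
    # piece index of each x (clip only guards the endpoint x = 1)
    return G[np.clip(np.searchsorted(t, x, side="right") - 1, 0, len(G) - 1)]

# Closed forms from the text (sums written with 1-based n, as in the answer).
q = lambda s: (s - 0.5) ** 2 / 2
num_closed = (sum(G[n - 1] * (q(t[n - 1]) - q(t[n])) for n in range(1, p))
              + sum(G[n - 1] * (q(t[n]) - q(t[n - 1])) for n in range(p + 1, len(G) + 1))
              + G[p - 1] * (q(t[p - 1]) + q(t[p])))
den_closed = sum(G[n - 1] * (t[n] - t[n - 1]) for n in range(1, len(G) + 1))

print(num_closed, integrate(lambda x: np.abs(x - 0.5) * g(x), 0, 1))  # should agree to several decimals
print(den_closed, integrate(g, 0, 1))
```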

The maximization must now be performed over the parameters defining the class, $(G_n, t_n, \lambda, y)$. The functional is linear in each $G_n$, so at an optimum every $G_n$ must sit at one of its boundary values $m$ or $M$; in particular, the only continuous functions that can maximize the functional are the constants $g\equiv m$ and $g\equiv M$. We now turn to the maximization over the $t_n$: note that, as a function of the breakpoints, the first term of $R_g$ is just a sum of quadratic terms,

$$\frac{1}{y}\int_0^1 |x-1/2|g(x)dx=G_1\frac{(t_0-1/2)^2}{2y}+G_N\frac{(t_N-1/2)^2}{2y}+\sum_{k=1}^{p-1}\frac{(t_k-1/2)^2}{2y}(G_{k+1}-G_{k})+\sum_{k=p}^{N-1}\frac{(t_k-1/2)^2}{2y}(G_{k}-G_{k+1})$$

For a maximizer to lie in the interior of the cube $[0,1]^{N-1}$ of breakpoints $(t_1,\dots,t_{N-1})$, all of the quadratic coefficients above must be nonpositive, and none of them may vanish (a vanishing coefficient means $G_k=G_{k+1}$, so two adjacent intervals merge and $N$ effectively drops). This is an extremely restrictive requirement: the $G_n$ must strictly decrease to the left of $1/2$ and strictly increase to its right, so the only admissible configurations are

$$N=1:\ G_1=m \text{ or } M,\qquad N=2:\ (G_1,G_2)=(m,M) \text{ or } (M,m),\qquad N=3:\ (G_1,G_2,G_3)=(M,m,M)$$

Any attempt to take $N>3$ forces some of the coefficients to vanish, which merges intervals and brings us back to $N\leq 3$. The $N=1$ case is trivial, yielding

$$R_{1,\max}=\frac{1}{4}$$
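
The list of admissible configurations can be double-checked by brute-force enumeration: encode the sign conditions above and test every $\{m,M\}$-valued pattern up to, say, $N=5$ (the encoding below is a sketch of that argument, not part of the proof).

```python
from itertools import product

def admissible(G):
    # Sign conditions on the quadratic coefficients: the profile must be strictly
    # decreasing left of the interval containing 1/2 and strictly increasing to
    # its right (an equality would make a coefficient vanish and merge two pieces).
    N = len(G)
    for p in range(N):
        left_ok = all(G[i] > G[i + 1] for i in range(p))
        right_ok = all(G[i] < G[i + 1] for i in range(p, N - 1))
        if left_ok and right_ok:
            return True
    return False

labels = {0: "m", 1: "M"}   # 0 stands for the lower bound m, 1 for the upper bound M
for N in range(1, 6):
    configs = [tuple(labels[v] for v in G)
               for G in product((0, 1), repeat=N) if admissible(G)]
    print(N, configs)
# nothing survives beyond N = 3: only (m), (M), (m,M), (M,m) and (M,m,M) remain
```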

Finding potential maximizers in the case $N=2$ is a bit trickier, but still tractable. Take the subcase $(G_1,G_2)=(M,m)$ with $t_1\leq 1/2$; the functional can then be written

$$R_g(N=2)=\frac{M+m}{8y}-(M-m)\frac{(t_1-1/2)^2}{2y}-\lambda(t_1(M-m)+m-y)$$

and is maximized iff the following values are attained:

$$t_1^*=1/2+\lambda_2^*y_2^*,~~~\lambda_2^*=\frac{2y_2^*-M-m}{2y_2^*(M-m)},~~~ y_2^*=\sqrt{\frac{m(M+m)}{2}}\\ R_{2,\max}=\frac{M+m}{2(M-m)}\left(1-\sqrt{\frac{2m}{M+m}}\right)$$
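
A brute-force check of these values, with ad hoc example bounds satisfying $y_2^*\leq 1$ and an ad hoc grid/quadrature resolution:

```python
import numpy as np

m, M = 0.2, 3.0          # example bounds with y2* = sqrt(m(M+m)/2) <= 1

def ratio(t1, n=20000):
    # g = M on [0, t1), g = m on [t1, 1]; ratio of the two integrals, midpoint rule
    x = np.linspace(0.0, 1.0, n, endpoint=False) + 0.5 / n
    g = np.where(x < t1, M, m)
    return np.mean(np.abs(x - 0.5) * g) / np.mean(g)

best = max(ratio(t1) for t1 in np.linspace(0.0, 1.0, 1001))

y2 = np.sqrt(m * (M + m) / 2)
R2 = (M + m) / (2 * (M - m)) * (1 - np.sqrt(2 * m / (M + m)))
print("brute force over t1 :", best)        # ~0.369 for these m, M
print("closed form R_2,max :", R2, "   y2* =", y2)
```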

The other $N=2$ subcase, $(G_1,G_2)=(m,M)$ with $t_1\geq 1/2$, is the mirror image of this one and yields the same maximal value. We conclude the analysis by studying the $N=3$ case; in this situation the functional reads

$$R_g(N=3)=\frac{M}{4y}-\frac{M-m}{2y}\left((t_1-1/2)^2+(t_2-1/2)^2\right)-\lambda((M-m)(t_1-t_2)+M-y)$$

After some algebra (the optimal breakpoints turn out to be symmetric about $1/2$), we find another set of maximizers:

$$\lambda^*_3=\frac{M-y^*_3}{2y_3^*(M-m)},~~~ y_3^*=\sqrt{mM},~~~ R_{3,\max}=\frac{M}{2(M-m)}\left(1-\sqrt{\frac{m}{M}}\right)$$
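
Again a brute-force check, with ad hoc example bounds satisfying $\sqrt{mM}\leq 1$; this also recovers $y_3^*=\sqrt{mM}$ as the total mass at the optimum.

```python
import numpy as np

m, M = 0.2, 3.0                     # example bounds with sqrt(m*M) <= 1

H = lambda x: (x - 0.5) * np.abs(x - 0.5) / 2        # antiderivative of |x - 1/2|
seg = lambda u, v: H(v) - H(u)                       # exact integral of |x-1/2| on [u, v]

def ratio(t1, t2):
    # g = M on [0, t1) U [t2, 1], g = m on [t1, t2)
    num = M * (seg(0.0, t1) + seg(t2, 1.0)) + m * seg(t1, t2)
    den = M * (t1 + 1.0 - t2) + m * (t2 - t1)
    return num / den

t1g, t2g = np.meshgrid(np.linspace(0.0, 0.5, 501), np.linspace(0.5, 1.0, 501))
r = ratio(t1g, t2g)
i = np.unravel_index(np.argmax(r), r.shape)

R3 = M / (2 * (M - m)) * (1 - np.sqrt(m / M))
print("brute force over (t1, t2):", r[i])            # ~0.397 for these m, M
print("closed form R_3,max      :", R3)
print("mass at the optimum      :", M * (t1g[i] + 1 - t2g[i]) + m * (t2g[i] - t1g[i]),
      "  vs  sqrt(m*M) =", np.sqrt(m * M))
```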

It now becomes clear that the different maximizers become available at different values of $y$, and which one is optimal depends on the relative ordering of $1$, $y_2^*$, $y_3^*$. In the easy case $y_3^*\leq 1$, the global maximum is attained in the $N=3$ case, as conjectured in the OP, since $R_{3,\max}\geq R_{2,\max}$ for all $m,M$ (the inequality reduces to $M+m\geq 2\sqrt{Mm}$, i.e. AM-GM). Translating back to the original variables, the maximal value is

$$R_{f, \max}=\frac{b-a}{2}\frac{\sqrt{m}+2\sqrt{M}}{\sqrt{m}+\sqrt{M}},~~ mM\leq 1$$
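
As a final check, the algebraic simplification from $R_{3,\max}$ to this expression, and the bounds observed in the OP, can be confirmed numerically (example values of $a,b,m,M$ are arbitrary):

```python
import numpy as np

a, b = 0.2, 0.8
m, M = 0.2, 3.0        # this branch needs sqrt(m*M) <= 1, i.e. m*M <= 1

R3 = M / (2 * (M - m)) * (1 - np.sqrt(m / M))       # ratio maximum from the N = 3 case
lhs = (b - a) / 2 + (b - a) * R3                    # R_f via the change of variables
rhs = (b - a) / 2 * (np.sqrt(m) + 2 * np.sqrt(M)) / (np.sqrt(m) + np.sqrt(M))
print(lhs, rhs)                                     # the two expressions coincide
print("bounds from the OP:", 0.75 * (b - a), b - a) # R_f,max sits between them
```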

The analysis of the remaining cases is rather hairy, so with some computational aid I found that when $y_2^*<1<y_3^*$, the $N=3$ configuration evaluated at $y=1$ is optimal:

$$R_3=\frac{M(2-m)-1}{4(M-m)}$$

and when $1<y_2^*<y_3^*$, the $N=2$ configuration evaluated at $y=1$ is optimal instead:

$$R_2=\frac{M+m}{8}-\frac{(2-M-m)^2}{8(M-m)}$$