
I have an optimal control problem with a control $u\in [a,b]$, a state $x(t)$, and the law of motion $x'(t)=f(t,u(t))$ for some smooth $f$. I want to maximize $$ \int_0^1 J[t,x(t),u(t)]\, dt $$ subject to the isoperimetric constraint $$ \int_0^1 h(x(t))\,dt = k, $$ for some smooth function $h$ and constant $k$. I am looking for conditions under which the problem reduces to maximizing $$ \int_0^1 \Big(J[t,x(t),u(t)] -\eta\,\big(h(x(t))-k\big)\Big)\, dt $$ for some multiplier $\eta$. I know that the conditions for introducing this kind of multiplier are quite lax in the calculus of variations (see here), and I am wondering whether the same is true in optimal control. I would suspect so, because the two approaches are very closely related.
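For concreteness, here is a toy instance of the reduction I have in mind (my own example, chosen so everything is explicit, not part of the general setup): take $J=-u^2/2$, $f(t,u)=u$, $x(0)=0$, $h(x)=x$ and $u\in[-5,5]$. The penalized problem maximizes $$ \int_0^1 \Big(-\tfrac{u(t)^2}{2} - \eta\,\big(x(t)-k\big)\Big)\, dt, $$ with Hamiltonian $H=-u^2/2-\eta x+pu$, so $p'(t)=\eta$ with $p(1)=0$ (free endpoint), hence $u^*(t)=p(t)=\eta(t-1)$, $x^*(t)=\eta(t^2/2-t)$ and $\int_0^1 x^*(t)\,dt=-\eta/3$. Choosing $\eta=-3k$ makes the constraint hold (and keeps $u^*$ interior as long as $|k|\le 5/3$), so in this instance the reduction works exactly.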

Solution idea: could we just introduce a state variable $q(t)$ with the law of motion $q'(t)=h(x(t))$ and the fixed endpoints $q(0)=0$, $q(1)=k$, and treat it as a regular state variable? Let $p(t)$ be the costate on $x$ and $\xi(t)$ the costate on $q$, so the Hamiltonian is $$ H = J[t,x,u] + p\, f(t,u) + \xi\, h(x). $$ By the Maximum Principle, $$ \xi'(t) = -\frac{\partial H}{\partial q}\Big|_* = 0 $$ since $H$ does not depend on $q$. Thus the costate $\xi$ has to be constant, and therefore effectively acts (up to sign convention) as the multiplier $\eta$ on the constraint.
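As a sanity check, here is a quick numerical sketch of the toy instance above (my own discretization and solver choices, using direct transcription with SciPy; purely illustrative, not a proof of the general claim): for a fixed $\eta$ it maximizes the penalized objective over a discretized $u$, then adjusts $\eta$ until the integral constraint binds, and compares the result with the analytic multiplier $\eta=-3k$.

```python
# Toy instance: maximize ∫_0^1 -u^2/2 dt  with  x' = u, x(0) = 0, u ∈ [-5, 5],
# subject to ∫_0^1 x dt = k.  For a fixed eta we maximize the penalized
# objective by direct transcription, then adjust eta so the constraint binds.
import numpy as np
from scipy.optimize import minimize, brentq

N = 101                        # grid points on [0, 1]
t = np.linspace(0.0, 1.0, N)
dt = t[1] - t[0]
k = 0.25                       # target value of the integral constraint

def state(u):
    # Forward-Euler integration of x' = u with x(0) = 0.
    return np.concatenate(([0.0], np.cumsum(u[:-1]) * dt))

def inner_max(eta):
    # Maximize ∫ (-u^2/2 - eta*(x - k)) dt over u(t) ∈ [-5, 5] (discretized).
    def neg_objective(u):
        x = state(u)
        return -dt * np.sum(-0.5 * u**2 - eta * (x - k))
    res = minimize(neg_objective, np.zeros(N),
                   bounds=[(-5.0, 5.0)] * N, method="L-BFGS-B")
    return res.x

def constraint_residual(eta):
    # ∫ x dt - k evaluated at the maximizer of the penalized problem.
    return dt * np.sum(state(inner_max(eta))) - k

# Outer step: pick eta so the isoperimetric constraint actually binds.
eta_star = brentq(constraint_residual, -10.0, 10.0, xtol=1e-4)
u_star = inner_max(eta_star)

print("numerical eta:", eta_star)          # should be close to -3k = -0.75
print("analytic  eta:", -3.0 * k)
print("max |u - eta*(t-1)|:", np.max(np.abs(u_star - eta_star * (t - 1.0))))
```

The numerical $\eta$ and control agree with the analytic ones up to discretization error, which is exactly the behaviour the augmented-state argument predicts.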

Does this mean that strong duality always holds in optimal control problems?
