
The calculus of variations seems to deal with problems involving only variables such as the state and time, without a control. Control theory problems, on the other hand, seem to have one extra variable, the control, usually denoted by $u$. The calculus of variations has conditions of one kind according to the lecture slides here, while control theory problems have conditions of another kind here. This leads to the following questions.

Questions about the differences between the calculus of variations and control theory

  1. Is the difference between calculus of variations problems and control theory problems just the control variable $u$, or are there other differences?

  2. I am trying to understand whether I could use the same conditions in both kinds of problems by redefining the control as an extra state. Is there some sort of equivalence between the methods?

  • I suppose you mean the calculus of variations. A problem of calculus of variations without a control is like a problem of optimization without a variable. You are not on the right track. – Pait May 06 '14 at 13:54

1 Answer


Optimal Control Theory (OCT) can be applied to a far wider set of dynamic control problems than the Calculus of Variations (CoV) can, because OCT employs Pontryagin's Maximum Principle.

The difference is analogous to static constrained optimization with equality constraints versus inequality constraints. The former can be solved directly with Lagrange multipliers and their first- and second-order conditions. The latter requires taking account of boundary solutions via the Kuhn-Tucker complementary-slackness conditions.

Of course, Pait's comment is correct: CoV also has a control variable (the function $y(t)$); it is just not usually (ever?) referred to as one.

Equivalent Problems

Certain classes of OCT problems can be converted into an equivalent CoV problem, namely when the integrand $F[\bullet]$ is twice differentiable and concave over the admissible region of the control variable, $u(t) \in \mathcal{U}$. Only then is the difference "only" the use of a control variable $u(t)$, which basically takes the place of $\dot y$ in the problem.

The fundamental CoV problem is: \begin{align} \min_{y} V [ y ] &= \int^T_0 F [ t, y(t), \dot y(t) ] \ dt \\ \text{subject to} \qquad \quad y(0) &= A \\ y(T) &= Z \end{align} Clearly, the optimizer chooses $y(t)$ over $t \in [0,T]$, and it is therefore a control variable. Since this is solved by the Euler-Lagrange equation ($F_y - \frac{ d F_{\dot y} }{dt} = 0$, equation (6) in the slides you linked to), $F$ must be twice-differentiable.
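As a quick, optional sanity check (not part of the linked slides; the quadratic integrand below is just an illustrative choice), a computer-algebra system such as SymPy can generate the Euler-Lagrange equation for a concrete $F$:

```python
# Minimal SymPy sketch: derive the Euler-Lagrange condition
# F_y - d/dt F_{y'} = 0 for the illustrative integrand F = y'^2 + y^2.
import sympy as sp
from sympy.calculus.euler import euler_equations

t = sp.Symbol('t')
y = sp.Function('y')

F = y(t).diff(t)**2 + y(t)**2          # F[t, y, y']
print(euler_equations(F, y(t), t))     # e.g. [Eq(2*y(t) - 2*Derivative(y(t), (t, 2)), 0)]
```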

This can be written as an OCT problem of the form: \begin{align} \min_{u(t)}& \quad V[u(t)] = \int_0^T F\big[ t, y(t), u(t) \big] \ dt \\[0.5em] \text{subject to:}& \quad \dot y = f\big[ t, y(t), u(t) \big] \\ & \quad y(0) = A \\ & \quad y(T) = Z \end{align} the only difference being the explicit or implicit inclusion of $\dot y$ in the integrand.
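To make the conversion concrete (this substitution is implicit in the answer above, not taken from the slides): choose $f[t,y,u] = u$, so the state equation merely defines the control as the derivative of the state, and the OCT functional collapses back to the CoV one: \begin{align} \dot y(t) = u(t) \quad \Longrightarrow \quad \int_0^T F\big[t, y(t), u(t)\big]\, dt = \int_0^T F\big[t, y(t), \dot y(t)\big]\, dt. \end{align}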

Where Optimal Control Differs

The more general fundamental OCT problem is \begin{align} \max_{u(t)} V &= \int_0^T F\big[t, y(t), u(t) \big] \ dt \\ \text{s.t.} \qquad \dot y &= f(t,y,u) \\ y(0) &= A \\ \text{and} \quad u(t) &\in \mathcal{U}, \qquad \forall t \in [0,T] \end{align} where $\mathcal{U}$ is the admissible set of $u$.

After setting up the Hamiltonian $H[t, y, u, \mu]$ you'll find the four conditions listed in the 2nd set of slides you linked to. The key is that Pontryagin's Maximum Principle is more general than the middle equation in (12). It is, in fact, \begin{align} \max_u \ &H(t,y,u, \mu) && \quad \forall t \in [0,T], \end{align} a condition that is often written as $\frac{\partial H}{\partial u} = 0, \ \forall t$, which is fine when $H$ is differentiable in $u$ and the maximum is interior.
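For readers without the slides, here is a sketch of the standard necessary conditions in the notation of this answer, with Hamiltonian $H = F[t,y,u] + \mu \, f[t,y,u]$ and a free terminal state (the exact transversality condition depends on the boundary conditions of the particular problem): \begin{align} \max_{u \in \mathcal{U}} \ & H(t, y, u, \mu) && \forall t \in [0,T] \\ \dot y &= \frac{\partial H}{\partial \mu} = f(t,y,u) && \text{(state equation)} \\ \dot \mu &= -\frac{\partial H}{\partial y} && \text{(costate equation)} \\ \mu(T) &= 0 && \text{(transversality, free } y(T)\text{)} \end{align}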

The more general $\max_u$ condition allows the control variable to be discontinuous (jumping at boundary points of $\mathcal{U}$), which implies the state trajectory $y$ is only piecewise differentiable (i.e., it will have kinks where $u$ jumps). Such solutions are ruled out by the classical Calculus of Variations.

For example, try solving \begin{align} \max_u V &= \int_0^2 (2y - 3u) \ dt \\ \text{s.t.} \qquad \qquad \dot y &= y + u \\ \text{given} \qquad y(0) &= 4 \\ u(t) & \in \mathcal{U} = [0,2] \end{align} with OCT and then CoV. The FOC of the Hamiltonian w.r.t. $u$ does not yield an interior solution: \begin{equation} \frac{\partial H}{\partial u} = - 3 + \mu. \end{equation}
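For reference, a sketch of how the maximum principle resolves that example, assuming (as in Chiang) that $y(2)$ is free so the transversality condition $\mu(2) = 0$ applies: \begin{align} H &= 2y - 3u + \mu(y + u) = (2 + \mu)\,y + (\mu - 3)\,u \\ \dot\mu &= -\frac{\partial H}{\partial y} = -2 - \mu, \quad \mu(2) = 0 \;\Longrightarrow\; \mu(t) = 2e^{2-t} - 2 \\ u^*(t) &= \begin{cases} 2, & \mu(t) > 3 \iff t < 2 - \ln\tfrac{5}{2} \\ 0, & t > 2 - \ln\tfrac{5}{2} \end{cases} \end{align} Because $H$ is linear in $u$, the maximum over $\mathcal{U} = [0,2]$ sits at a boundary, giving a bang-bang control with a single switch at $t = 2 - \ln\tfrac{5}{2} \approx 1.08$; an Euler-Lagrange approach cannot produce this, since it presumes an interior, differentiable optimum.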

(Note: this example comes from Chiang's *Elements of Dynamic Optimization*, which has useful discussions in Chapters 2 and 7.)