
Imagine I have a 2-variable, 2-dimensional vector function $\mathbf{F}(\mathbf{x})=(F_1(\mathbf{x}),F_2(\mathbf{x}))$, where $\mathbf{x}=(x_1,x_2)$. How do I Taylor-expand such a function around $\mathbf{x}=\mathbf{0}$ up to third-order terms?

My approach: Up to second order, we may write $$ \mathbf{F}(\mathbf{x})=\mathbf{F}(\mathbf{0})+\mathbf{J}(\mathbf{0})\mathbf{x}+\frac12 \begin{pmatrix} \mathbf{x}^T \mathbf{H}_1(\mathbf{0})\mathbf{x}\\ \mathbf{x}^T \mathbf{H}_2(\mathbf{0})\mathbf{x} \end{pmatrix}+h.o.t. $$ where $\mathbf{J}$ is the Jacobian matrix of $\mathbf{F}$, and $\mathbf{H}_1$ and $\mathbf{H}_2$ are the Hessian matrices of the components $F_1$ and $F_2$, respectively. How does one write the third-order term in similarly compact notation?
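For what it's worth, the second-order matrix form above is easy to check symbolically. A minimal sympy sketch, with a hypothetical $\mathbf{F}$ chosen only for illustration:

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
X = sp.Matrix([x1, x2])
# Hypothetical example components F1, F2 (chosen for illustration only)
F = sp.Matrix([sp.sin(x1) * sp.exp(x2), x1**2 + sp.cos(x2)])

origin = {x1: 0, x2: 0}
F0 = F.subs(origin)                                   # F(0)
J0 = F.jacobian(X).subs(origin)                       # Jacobian at 0
H0 = [sp.hessian(Fi, (x1, x2)).subs(origin) for Fi in F]  # Hessians at 0

# Second-order Taylor polynomial in the compact matrix form above
T2 = F0 + J0 * X + sp.Rational(1, 2) * sp.Matrix(
    [(X.T * H0[i] * X)[0] for i in range(2)])

# The remainder at a small displacement h should be O(|h|^3)
h = {x1: sp.Rational(1, 100), x2: sp.Rational(1, 100)}
err = float((F.subs(h) - T2.subs(h)).norm())
print(err)
```

At $\mathbf{h}=(10^{-2},10^{-2})$ the residual is of order $|\mathbf{h}|^3$, consistent with truncating after the quadratic term.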

sam wolfe
  • See this answer of mine for the 3rd order term, and the sublink for the fully general case. – peek-a-boo Jun 10 '22 at 14:38
  • @peek-a-boo Thanks! Is there a name for this operator, as we have the Jacobian and Hessian for first and second order? – sam wolfe Jun 10 '22 at 15:44
  • These are called the (first, second, third...) Frechet derivatives (at the point of interest; in your case at the origin). Often you might just hear it called "total derivative". – peek-a-boo Jun 10 '22 at 15:45
  • @peek-a-boo Unfortunately I think that answer is regarding real-valued functions, as opposed to vector functions like mine. Do you know how to represent the more general result for vector functions? – sam wolfe Jun 11 '22 at 12:16
  • @peek-a-boo Based on some of your answers, I have derived a general representation of the general Taylor expansion and posted an answer below. Please check it and let me know what you think! I have slightly altered your notation. – sam wolfe Jun 11 '22 at 13:11
  • The first link only talks about real valued functions because that's what was asked in that question. However, the theorem holds in much greater generality (follow the links in the links). – peek-a-boo Jun 11 '22 at 14:20
  • Yes, I understood, and I followed such links. Could you let me know if the answer I provided to this question based on your answers is correct? – sam wolfe Jun 11 '22 at 14:24
  • For an introductory discussion that could be useful prior to the introduction of all the symbolism in more general cases, see my answer to What is a good way to teach Taylor expansion of multi-variable calculus?. In light of your own answer to this question (already posted when I first got here), you probably don't need this. However, someone else chancing upon this question might find it useful. – Dave L. Renfro Jun 11 '22 at 15:45
  • @DaveL.Renfro Very interesting. I wonder if there is a compact way of representing the third Frechet derivative (and higher-order). Perhaps in the form of a generalisation of the Jacobian and Hessian matrices. – sam wolfe Jun 11 '22 at 15:56

1 Answer


Following the general answer here and the formulation here, we have the following.

Consider a function $\mathbf{F}:\mathbb{R}^n\to \mathbb{R}^m$ given by $\mathbf{F}(\mathbf{x})=(F_1(\mathbf{x}),...,F_m(\mathbf{x}))$, where $\mathbf{x}=(x_1,...,x_n)$.

The general $k$th-order Taylor expansion of $\mathbf{F}(\mathbf{x}+\mathbf{x}_0)$ about $\mathbf{x}_0$ is given by
$$
\mathbf{F}(\mathbf{x}+\mathbf{x}_0)\simeq T_{\mathbf{F},\mathbf{x}_0,k}(\mathbf{x}):=\sum_{j=0}^k\frac{(D^j\mathbf{F})_{\mathbf{x}_0}[(\mathbf{x})^j]}{j!}
$$
where the Fréchet-derivative terms $(D^j\mathbf{F})_{\mathbf{x}_0}[(\mathbf{x})^j]$ may be written in vector form as
$$
(D^j\mathbf{F})_{\mathbf{x}_0}[(\mathbf{x})^j]=
\begin{pmatrix}
\sum_{i_1,...,i_j=1}^n\frac{\partial^jF_1}{\partial x_{i_1}\cdots \partial x_{i_j}}(\mathbf{x}_0)\, x_{i_1}\cdots x_{i_j}\\
\vdots\\
\sum_{i_1,...,i_j=1}^n\frac{\partial^jF_m}{\partial x_{i_1}\cdots \partial x_{i_j}}(\mathbf{x}_0)\, x_{i_1}\cdots x_{i_j}
\end{pmatrix}
$$
using the shorthand
$$
\sum_{i_1,...,i_j=1}^n=\sum_{i_1=1}^n\cdots \sum_{i_j=1}^n.
$$
In my case, $n=m=2$, so the third-order term of the Taylor expansion is
$$
\frac16(D^3\mathbf{F})_{\mathbf{x}_0}[(\mathbf{x})^3]=
\frac16\begin{pmatrix}
\sum_{i_1,i_2,i_3=1}^2\frac{\partial^3F_1}{\partial x_{i_1}\partial x_{i_2}\partial x_{i_3}}(\mathbf{x}_0)\, x_{i_1}x_{i_2}x_{i_3}\\
\sum_{i_1,i_2,i_3=1}^2\frac{\partial^3F_2}{\partial x_{i_1}\partial x_{i_2}\partial x_{i_3}}(\mathbf{x}_0)\, x_{i_1}x_{i_2}x_{i_3}
\end{pmatrix}
$$
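As a sanity check, the triple sum can be evaluated symbolically. Below is a minimal sympy sketch for a hypothetical $\mathbf{F}$ with $n=m=2$ (the components `exp(x1)*sin(x2)` and `x1*x2**2` are my own example, not from the question):

```python
import itertools
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
xs = (x1, x2)
# Hypothetical example components F1, F2 with n = m = 2
F = [sp.exp(x1) * sp.sin(x2), x1 * x2**2]
origin = {x1: 0, x2: 0}

def frechet_term(F, j, point):
    """(D^j F)_{x0}[x^j] as a column vector, via the j-fold index sum."""
    rows = []
    for Fi in F:
        total = 0
        # Sum over all index tuples (i_1, ..., i_j) with each i in {1, ..., n}
        for idx in itertools.product(range(len(xs)), repeat=j):
            coeff = sp.diff(Fi, *(xs[i] for i in idx)).subs(point)
            total += coeff * sp.Mul(*(xs[i] for i in idx))
        rows.append(sp.expand(total))
    return sp.Matrix(rows)

# Third-order term of the expansion about the origin, including the 1/3! factor
third = frechet_term(F, 3, origin) / 6
```

For this example the first component comes out as $x_1^2x_2/2-x_2^3/6$ and the second as $x_1x_2^2$, matching the degree-3 part of the usual scalar expansions of each component.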

sam wolfe
  • Yes this is right (because you defined your notation). But note that it is not common to write ${i}$ and $\sum_{{i}=1}^n$, because that doesn't indicate the size of the multi-index. Rather, it is more common to write $|I|=j$, where $I=(i_1,\dots, i_j)$ and $1\leq i_1,\dots, i_j\leq n$. In this regard, the sums and partials are written $\sum_{|I|=j}$ and $\frac{\partial^jF_1}{\partial x_I}(\mathbf{x_0})$, and the multinomials are written $x_I=x_{i_1}\dots x_{i_j}$, and so on. Also, in the final paragraph, you said $n=2$, but in the formula you didn't plug in $n=2$. – peek-a-boo Jun 11 '22 at 14:35
  • Yes, I was a bit unsure about the notation. I will adjust. And like you said, thinking about matrices and generalizations of Jacobians and Hessians is definitely not the way to go in this general case. Thank you so much! – sam wolfe Jun 11 '22 at 14:37
  • I have changed the notation to something more suitable. Let me know if I should change anything else. – sam wolfe Jun 11 '22 at 14:41
  • What I wrote in my previous comment is just one way of doing things. Another way is to use the multi-index notation $\alpha=(\alpha_1,\dots, \alpha_j)$. Here, $|\alpha|=\alpha_1+\dots+\alpha_j$, and $\frac{\partial^{|\alpha|}f}{\partial x^{\alpha}}=\frac{\partial^{\alpha_1+\dots+\alpha_j}f}{\partial x_1^{\alpha_1}\dots\partial x_{j}^{\alpha_j}}$, and $x^{\alpha}=x_1^{\alpha_1}\cdots x_j^{\alpha_j}$ and so on; see the link above. If you adopt this notation, then Taylor's theorem will look slightly different (due to symmetry of the partials). – peek-a-boo Jun 11 '22 at 14:42
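For reference, in the standard multi-index convention $\alpha=(\alpha_1,\dots,\alpha_n)\in\mathbb{N}_0^n$ (a slightly different convention from the comment above), Taylor's theorem for each scalar component $F_i$ takes the compact form
$$
F_i(\mathbf{x}+\mathbf{x}_0)\simeq\sum_{|\alpha|\le k}\frac{1}{\alpha!}\frac{\partial^{|\alpha|}F_i}{\partial \mathbf{x}^{\alpha}}(\mathbf{x}_0)\,\mathbf{x}^{\alpha},\qquad \alpha!=\alpha_1!\cdots\alpha_n!,
$$
where the factor $1/\alpha!$ replaces $1/j!$ because, by symmetry of the mixed partials, each multi-index $\alpha$ with $|\alpha|=j$ collects the $j!/\alpha!$ equal terms of the index sum into a single term.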