
In my current research, a curious problem arose. I will elaborate.

I have a Volume function, which is given by: $$ V(u,v,w) = \sum_{i=1}^n N_i(u)M_i(v)L_i(w)\,\omega_i P_i \text{,} $$

where $N_i(u)$, $M_i(v)$, $L_i(w)$ are B-spline basis functions (polynomials, really), $\omega_i$ are constant weights, and $P_i$ are control points, also constant, given in $x,y,z$ coordinates. This means that $V(u,v,w)$ is a mapping from parameter space $(u,v,w)$ to physical space $(x,y,z)$.
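For concreteness, here is a minimal sketch of evaluating this mapping at a point. The basis callables `N`, `M`, `L`, the weights `wts`, and the control points `P` are hypothetical placeholders of mine, indexed exactly as in the formula above:

```python
import numpy as np

def volume_point(u, v, w, N, M, L, wts, P):
    """Evaluate V(u,v,w) = sum_i N_i(u) M_i(v) L_i(w) w_i P_i.

    N, M, L : lists of 1-D basis callables (e.g. scipy BSpline objects),
    wts     : (n,) array of weights,
    P       : (n, 3) array of control points.
    """
    # R_i(u, v, w) = N_i(u) M_i(v) L_i(w): tensor-product basis values
    R = np.array([N[i](u) * M[i](v) * L[i](w) for i in range(len(wts))])
    return (R * wts) @ P  # weighted combination of control points -> (x, y, z)
```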

I'm working with differential equations posed on the physical domain described by this mapping. More specifically, I'm working with elastostatics, so I need all second derivatives of the displacement: $\frac{\partial ^2 V}{\partial x^2}$, $\frac{\partial ^2 V}{\partial x \partial y}$, etc. EDIT: Please read these as derivatives of an $f(V(u,v,w)) = (f_x, f_y, f_z)$.

This is where things get complicated, so let's get to it. We differentiate $V(u,v,w)$ with respect to $x$. To avoid excessive chain rules, we will write the tensor product as $N_i(u)M_i(v)L_i(w) = R_i(u,v,w)$: $$\frac{\partial V}{\partial x} = \frac{\partial}{\partial x} \sum_{i=1}^n R_i(u,v,w)\,\omega_i P_i ,$$ Applying the chain rule and moving the constants outside the summation: $$\frac{\partial V}{\partial x} = \sum_{i=1}^n \omega_i P_i \left[\frac{\partial R_i}{\partial u}\frac{\partial u}{\partial x} + \frac{\partial R_i}{\partial v}\frac{\partial v}{\partial x} + \frac{\partial R_i}{\partial w}\frac{\partial w}{\partial x} \right] .$$

To compute this, we need the partials with respect to $u,v,w$, which we have, since the tensor-product splines have well-defined derivatives. We also need the partials $\frac{\partial u}{\partial x}$, $\frac{\partial v}{\partial x}$ and $\frac{\partial w}{\partial x}$, which are obtained by inverting the Jacobian matrix: $$J(V) = \begin{bmatrix} \frac{\partial V_x}{\partial u} & \frac{\partial V_x}{\partial v} & \frac{\partial V_x}{\partial w} \\ \frac{\partial V_y}{\partial u} & \frac{\partial V_y}{\partial v} & \frac{\partial V_y}{\partial w} \\ \frac{\partial V_z}{\partial u} & \frac{\partial V_z}{\partial v} & \frac{\partial V_z}{\partial w} \end{bmatrix} $$

and, by the multivariate inverse function theorem, $J^{-1}(V) = (J(V))^{-1}$: $$J^{-1}(V) = \begin{bmatrix}\frac{\partial u}{\partial x} & \frac{\partial u}{\partial y} & \frac{\partial u}{\partial z} \\ \frac{\partial v}{\partial x} & \frac{\partial v}{\partial y} & \frac{\partial v}{\partial z} \\ \frac{\partial w}{\partial x} & \frac{\partial w}{\partial y} & \frac{\partial w}{\partial z} \end{bmatrix}.$$
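Numerically, the Jacobian and the first physical derivatives are then direct. A minimal sketch, assuming `dR[i, c]` holds $\partial R_i/\partial u_c$ at the evaluation point (with $(u_1,u_2,u_3) = (u,v,w)$) and `wts`, `P` as in the sketch above:

```python
import numpy as np

def forward_jacobian(dR, wts, P):
    """J[k, c] = dV_k/du_c = sum_i (dR_i/du_c) * w_i * P_ik."""
    return np.einsum('ic,i,ik->kc', dR, wts, P)

def first_physical_derivatives(dR, wts, P):
    """dV_k/dx_j via the chain rule and the inverted Jacobian."""
    J = forward_jacobian(dR, wts, P)
    Jinv = np.linalg.inv(J)   # Jinv[c, j] = du_c / dx_j
    dR_dx = dR @ Jinv         # dR_i/dx_j = sum_c (dR_i/du_c)(du_c/dx_j)
    return np.einsum('ij,i,ik->kj', dR_dx, wts, P)
```

As the comments below observe, applying this to $V$ itself just returns the identity matrix, which makes a handy sanity check; the same machinery applies to any field $f$ discretized with the same basis (replace `P` with that field's coefficients).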

So far, so good. Everything is obtainable. The problem starts when we go to second-order derivatives: $$\frac{\partial ^2 V}{\partial x^2} = \sum_{i=1}^n \omega_i P_i \frac{\partial}{\partial x} \left[\frac{\partial R_i}{\partial u}\frac{\partial u}{\partial x} + \frac{\partial R_i}{\partial v}\frac{\partial v}{\partial x} + \frac{\partial R_i}{\partial w}\frac{\partial w}{\partial x} \right] ,$$ which, barring any chain-rule mistakes, becomes: $$ \begin{split} \frac{\partial ^2 V}{\partial x^2} & = \sum_{i=1}^n \omega_i P_i \left\{ \left[ \left( \frac{\partial ^2 R_i}{\partial u^2}\frac{\partial u}{\partial x} + \frac{\partial ^2 R_i}{\partial u \partial v} \frac{\partial v}{\partial x} + \frac{\partial ^2 R_i}{\partial u \partial w}\frac{\partial w}{\partial x}\right) \frac{\partial u}{\partial x} + \frac{\partial R_i}{\partial u}\frac{\partial ^2 u}{\partial x^2} \right] \right. \\ & \quad + \left[ \left( \frac{\partial ^2 R_i}{\partial u \partial v}\frac{\partial u}{\partial x} + \frac{\partial ^2 R_i}{\partial v^2} \frac{\partial v}{\partial x} + \frac{\partial ^2 R_i}{\partial v \partial w}\frac{\partial w}{\partial x}\right)\frac{\partial v}{\partial x} + \frac{\partial R_i}{\partial v}\frac{\partial ^2 v}{\partial x^2} \right] \\ & \quad + \left. \left[ \left( \frac{\partial ^2 R_i}{\partial w \partial u}\frac{\partial u}{\partial x} + \frac{\partial ^2 R_i}{\partial w \partial v} \frac{\partial v}{\partial x} + \frac{\partial ^2 R_i}{\partial w^2}\frac{\partial w}{\partial x} \right)\frac{\partial w}{\partial x} + \frac{\partial R_i}{\partial w}\frac{\partial ^2 w}{\partial x^2} \right] \right\}. \end{split} $$
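The structure is easier to see in index notation. Writing $(u_1,u_2,u_3) = (u,v,w)$, the expression above collapses to $$\frac{\partial^2 V}{\partial x^2} = \sum_{i=1}^n \omega_i P_i \left[ \sum_{a,b=1}^{3} \frac{\partial^2 R_i}{\partial u_a \, \partial u_b}\frac{\partial u_a}{\partial x}\frac{\partial u_b}{\partial x} + \sum_{a=1}^{3} \frac{\partial R_i}{\partial u_a}\frac{\partial^2 u_a}{\partial x^2} \right],$$ i.e. a parametric-Hessian term, which we can already compute, plus a term carrying the unknown second derivatives of the inverse map.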

And that is where the problem lies: how can I obtain $\frac{\partial ^2 u}{\partial x^2}$? When we evaluate $\frac{\partial ^2 V}{\partial x \partial y}$, differentials of the kind $\frac{\partial ^2 u}{\partial x \partial y}$ also appear.

Is inverting the Hessian matrix the way to approach this? It feels wrong to me, as I can't fathom how the cross-derivatives would behave under this inversion. I'm currently considering two approaches:

The first one goes a "scalar" route, coordinate by coordinate, with this procedure: Denoting $P_i = (P_{ix}, P_{iy}, P_{iz})$ and $V(u,v,w) = (x, y, z)$, we start by differentiating the first component with respect to $u$: $$\frac{\partial V_x}{\partial u} = \sum_{i=1}^n P_{ix} \frac{\partial R_i}{\partial u} = \frac{\partial x}{\partial u} .$$ By the inverse function theorem, we can write: $$\frac{\partial u}{\partial x} = \left[\sum_{i=1}^n P_{ix} \frac{\partial R_i}{\partial u}\right]^{-1}.$$ Then we differentiate this expression with respect to $x$ to obtain an expression for $\frac{\partial ^2 u}{\partial x^2}$. The same procedure goes for the cross-derivatives.

This is the procedure I'm working with right now, but if this works, why should I bother inverting the Jacobian in the first place? Is this right? Is there any other way to rigorously obtain these derivatives?
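As a sanity check on this route, here is what it gives in the strictly one-dimensional case, where $x$ depends on $u$ alone (a sketch of mine, not part of the original procedure); in the general volume, $x$ depends on all of $u, v, w$, which is exactly what the Jacobian inversion accounts for:

```python
def inverse_derivs_1d(xp, xpp):
    """One-dimensional inverse-function-theorem route, valid when x = x(u):
    du/dx = 1 / x'(u)  and  d2u/dx2 = -x''(u) / x'(u)**3.
    xp, xpp : values of x'(u) and x''(u) at the point, x'(u) != 0 assumed."""
    return 1.0 / xp, -xpp / xp ** 3
```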

The second procedure I'm looking into is getting an analytical expression for the inverse Jacobian and then differentiating it with respect to $x$ to get an "inverse Hessian"-style matrix. The problem I'm facing with this route: my adjugate (classical adjoint) matrix is null, which is bothering me (either I'm right, and then I can't invert the Jacobian, or I'm wrong and can't do simple algebra).
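For reference, a standard identity (not specific to splines) covers this route: differentiating $J\,J^{-1} = I$ with respect to a physical coordinate $x_j$ gives $$\frac{\partial J^{-1}}{\partial x_j} = -\,J^{-1}\,\frac{\partial J}{\partial x_j}\,J^{-1}, \qquad \frac{\partial J}{\partial x_j} = \sum_{c=1}^{3}\frac{\partial J}{\partial u_c}\,\frac{\partial u_c}{\partial x_j},$$ so the "inverse Hessian" entries require no inversion beyond the one already performed. Also, for a $3\times 3$ matrix the adjugate vanishes only when $\operatorname{rank}(J) \le 1$, which would be a severely degenerate parametrization, so an algebra slip is the more likely culprit.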

Dear Mathexchange: a penny for your thoughts?

  • I'm confused... If $V(u,v,w)$ is a mapping from $(u,v,w)$ space to $(x,y,z)$ space, what does it even mean to take partials with respect to $x$, $y$ and $z$? – Hans Lundmark Nov 19 '19 at 13:54
  • You can assume I'm taking partials of an $f(V(u,v,w))$, otherwise second derivatives are just zero. – Guilherme Henrique da Silva Nov 19 '19 at 14:45
  • Sorry, but that didn't make things clearer for me at all. If you write $\partial V/\partial x$, you must have some function $V(x,y,z)$ in mind, and I don't see what that is here. That inverse Jacobian thing is also very strange. Why are you writing $d$ instead of $\partial$ all of a sudden? And are you thinking of $(u,v,w)$ as functions of $(x,y,z)$ via $(u,v,w)=V^{-1}(x,y,z)$??? In that case, $V(u,v,w)$, with $(u,v,w)$ considered as depending on $(x,y,z)$ in that way, would just be $(x,y,z)$ again, and the Jacobian $J(V)$ would simply be the identity matrix. – Hans Lundmark Nov 19 '19 at 15:51
  • The function is right there. $$ V(u,v,w) = \sum_{i=1}^n R_i(u,v,w) \omega_i P_i .$$ It describes a volume, from a parameter space $[0,1]\times[0,1]\times[0,1]$ to $(x,y,z)$ coordinates. What takes them to $x,y,z$ are the control points $P_i$, which have $(P_x, P_y, P_z)$ coordinates. $R_i$ are B-spline basis functions. You can see it as an interpolation of those control points.

    The Jacobian, as mentioned, is defined exactly as in its Wikipedia definition: $$ \boldsymbol{J} = \left[ \frac{\partial \boldsymbol{f} }{\partial x_1} \dots \frac{\partial \boldsymbol{f} }{\partial x_n} \right]. $$

    – Guilherme Henrique da Silva Nov 19 '19 at 16:42
  • PS: Fixed the $\frac{du}{dx}$ to $\frac{\partial u}{\partial x}$. – Guilherme Henrique da Silva Nov 19 '19 at 16:48
  • Yes, so $V$ is a function of $(u,v,w)$, with $(x,y,z)$ as values. So how on earth would you compute partials with respect to the output variables? What makes sense is taking partials with respect to the input variables: $\partial V/\partial u$, etc. – Hans Lundmark Nov 19 '19 at 16:52
  • That's a good question, actually. I'll try to give a little bit more background on why I'm looking for these obnoxious derivatives. I'm trying to evaluate residual integrals of the form $$ \int_\Omega W \cdot D \boldsymbol{x} \, d\Omega .$$ $\Omega$ is the physical domain, described by $V(u,v,w)$ above. $D$ is a differential operator. Usually, there's no issue at all going to the parameter domain by way of Jacobians, because generally the weight $W$ is $D^T \boldsymbol{t}$. However, my choice of $W$ is the Dirac delta, so the integral vanishes and I'm left with a differential equation. – Guilherme Henrique da Silva Nov 19 '19 at 17:09
  • Actually, a set of linear equations evaluated at a discrete number of points chosen over the domain. Am I overthinking things? – Guilherme Henrique da Silva Nov 19 '19 at 17:10
  • I don't know, maybe if it's a discrete problem you can just write everything as sums instead of integrals and try to work from there... – Hans Lundmark Nov 19 '19 at 17:58

1 Answer


This answer may be helpful. The basic strategy is to repeat the derivation in the proof of the Inverse Function Theorem, but applying second-order derivatives instead of first-order ones.
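Concretely (notation mine, not from that answer): with $A = J^{-1}$ and $H_k$ the parametric Hessian of the $k$-th physical coordinate, carrying that derivation out yields $$\frac{\partial^2 u_a}{\partial x_i \, \partial x_j} = -\sum_{k} A_{ak} \left( A^{T} H_k A \right)_{ij},$$ which uses only quantities already available from the spline. A minimal sketch:

```python
import numpy as np

def inverse_second_derivatives(J, H):
    """Second derivatives of the inverse map (u, v, w) = G(x, y, z).

    J : (3, 3) forward Jacobian, J[k, c] = dx_k/du_c
    H : (3, 3, 3) forward Hessians, H[k, c, d] = d2 x_k / (du_c du_d)
    Returns D2u with D2u[a, i, j] = d2 u_a / (dx_i dx_j).
    """
    A = np.linalg.inv(J)                          # A[a, i] = du_a/dx_i
    inner = np.einsum('ci,kcd,dj->kij', A, H, A)  # (A^T H_k A)[i, j]
    return -np.einsum('ak,kij->aij', A, inner)    # contract with -A[a, k]
```

The off-diagonal entries, e.g. `D2u[a, 0, 1]`, are precisely the $\frac{\partial^2 u}{\partial x \partial y}$-type terms the question asks about.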