
If I have a multivariable function, can I split/decompose it into several single-variable functions?

For instance:

Given $f:\mathbb R^2 \rightarrow \mathbb R$, I introduce the functions $g:\mathbb R\rightarrow \mathbb R$ and $h:\mathbb R\rightarrow \mathbb R$ such that $$ f(x,y)=g(x)h(y) $$ or $$ f(x,y)=g(x)+h(y) $$

Is this mathematically correct?

Is function composition the right name?

Ex. 1:

The function $f:\mathbb R^2\rightarrow \mathbb R$ is given by $f(x,y)=2xy$. Introduce the functions $g,h:\mathbb R\rightarrow \mathbb R$ and write $$ f(x,y)=2xy=g(x)h(y) $$ where $g(x)=2x$ and $h(y)=y$.

Ex. 2:

Or if $f(x,y)=2x+y$, we write $$ f(x,y)=2x+y=g(x)+h(y) $$ where $g(x)=2x$ and $h(y)=y$.
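
As a quick sanity check of the two examples, here is a minimal Python sketch (added for illustration; `f1`, `f2`, `g`, `h` are just the functions defined above):

```python
# Sanity check for Ex. 1 and Ex. 2: both sides agree at sample points.
def f1(x, y): return 2 * x * y  # Ex. 1: f(x,y) = 2xy
def f2(x, y): return 2 * x + y  # Ex. 2: f(x,y) = 2x + y
def g(x): return 2 * x          # g(x) = 2x (same in both examples)
def h(y): return y              # h(y) = y

for x, y in [(0.0, 1.0), (1.5, -2.0), (3.0, 4.0)]:
    assert f1(x, y) == g(x) * h(y)  # product decomposition
    assert f2(x, y) == g(x) + h(y)  # sum decomposition
print("both decompositions agree at the sample points")
```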

JDoeDoe
  • Whether this is possible depends on the function $f$. For example, $f(x,y) = \sin(xy)$ can't be written as a product $g(x)h(y)$ or as a sum $g(x) + h(y)$, but there are plenty of functions for which this works. – Tom Jun 14 '18 at 18:31
  • $f(x,y)=\cos(x+y)$ won't fit into either of your two ideas. Neither is function composition, by the way. One is ordinary multiplication, and the other is ordinary addition. – Adrian Keister Jun 14 '18 at 18:31
  • If the above were true, then for a twice differentiable $f$ we would have either ${\partial^2 f(x,y) \over \partial x^2} = 0$ or ${\partial^2 f(x,y) \over \partial x \partial y} = 0$. So, for a counterexample, pick any function for which both are not true. – copper.hat Jun 14 '18 at 18:35
  • "Decomposition" is a good word for it, in my opinion. – Arthur Jun 14 '18 at 18:44
  • @copper.hat The solutions to $f_{xx} = 0$ are $g(y) + x h(y)$ not $f(x) g(y)$. A possible PDE that has solutions $g(x) h(y)$ but not all functions as solutions would be: $f \cdot f_{xy} = f_x \cdot f_y$. – Daniel Schepler Jun 14 '18 at 19:08
  • @DanielSchepler: Thanks again. I am having a slow week. – copper.hat Jun 14 '18 at 19:28

2 Answers


In general, this is not possible. Of course there exist functions for which it is possible (those defined as such a product are, of course, among them).

Here's one property that such functions have and that most functions lack:

Consider the values $x_1$, $y_1$, $x_2$ and $y_2$ and assume that $f(x_i, y_j)$ is defined for all four possible combinations. Then for functions that can be written as products, you have: $$f(x_1,y_1)f(x_2,y_2)=g(x_1)h(y_1)g(x_2)h(y_2)=f(x_1,y_2)f(x_2,y_1)$$ In general, however, this is not true. For example, consider $$f(x,y)=x^y$$ Then you have $$f(1,2)f(3,4) = 1^2\cdot 3^4 = 81 \ne 9 = 1^4\cdot 3^2 = f(1,4)f(3,2)$$ which proves that $x^y$ cannot be written as $g(x)h(y)$ for any real functions $g$ and $h$.

An analogous argument works for sums, and indeed for any operation that is commutative and associative: if $f(x,y)=g(x)+h(y)$, then $f(x_1,y_1)+f(x_2,y_2)=f(x_1,y_2)+f(x_2,y_1)$.
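
For illustration, here is a small Python sketch of this four-point test (my code, not part of the original answer; the function name is mine):

```python
def product_test(f, x1, y1, x2, y2, tol=1e-9):
    """Necessary condition for f(x,y) = g(x)h(y):
    f(x1,y1)*f(x2,y2) must equal f(x1,y2)*f(x2,y1)."""
    return abs(f(x1, y1) * f(x2, y2) - f(x1, y2) * f(x2, y1)) < tol

# Separable: f(x,y) = 2xy passes for any choice of the four points.
print(product_test(lambda x, y: 2 * x * y, 1, 2, 3, 4))  # True

# Not separable: f(x,y) = x**y fails, since 1**2 * 3**4 = 81
# while 1**4 * 3**2 = 9.
print(product_test(lambda x, y: x ** y, 1, 2, 3, 4))     # False
```

Passing the test does not prove separability, but failing it rules separability out.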

celtschk

Whether this is possible depends on the family of single-variable functions. If we limit this family to single-variable scalar functions then, as @celtschk pointed out in their answer, it is not possible in general.

However, if we allow single-variable vector-valued functions, then we can, at least approximately:

Suppose that $f(x,y)$ is infinitely differentiable on an open region $I$, and assume that there is a positive constant $A$ such that $\left|\frac{\partial^{n} f}{\partial x^i \partial y^j}(x, y)\right|\le A^n$ for all $i+j=n$, $n=1, 2, 3, \ldots$, and every $(x, y)$ in $I$. Then we can approximate $f(x,y)$ by its Taylor series on $I$ as follows:

$$f(x, y) \approx \sum_{i=0}^n\sum_{j=0}^{n-i} c_{ij} (x-x_0)^i (y-y_0)^j = \sum_{i=0}^n (x-x_0)^i \sum_{j=0}^{n-i} c_{ij} (y-y_0)^j = \begin{pmatrix} (x-x_0)^0 \\ (x-x_0)^1 \\ \vdots \\ (x-x_0)^n \end{pmatrix} \cdot \begin{pmatrix} \sum_{j=0}^{n} c_{0j} (y-y_0)^j \\ \sum_{j=0}^{n-1} c_{1j} (y-y_0)^j \\ \vdots \\ \sum_{j=0}^{n-n} c_{nj} (y-y_0)^j \end{pmatrix} = \vec{\mathbf{G}}(x)\cdot\vec{\mathbf{H}}(y)$$

where $\cdot$ denotes the dot product of the two vectors and $c_{ij}=\frac{1}{i!\,j!}\,\frac{\partial^{i+j} f}{\partial x^i \partial y^j}(x_0, y_0)$ are the Taylor coefficients centered at $(x_0, y_0) \in I$.

The conditions on $f(x,y)$ are required to guarantee the convergence of the Taylor series on $I$.
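
Here is a minimal sympy sketch of this construction (my tooling choice, not necessarily the script linked below; `f`, the expansion point, and the order are taken from the example further down):

```python
import sympy as sp

x, y = sp.symbols("x y")
f = x ** y           # example function used below
x0, y0, n = 1, 1, 5  # expansion point and truncation order

# c_ij = (1 / (i! j!)) * d^(i+j) f / (dx^i dy^j) at (x0, y0)
def c(i, j):
    return sp.diff(f, x, i, y, j).subs({x: x0, y: y0}) / (
        sp.factorial(i) * sp.factorial(j))

# One vector entry per index pair (i, j) with i + j <= n
pairs = [(i, j) for i in range(n + 1) for j in range(n - i + 1)]
G = [c(i, j) * (x - x0) ** i for i, j in pairs]  # depends on x only
H = [(y - y0) ** j for i, j in pairs]            # depends on y only

P = sum(gi * hi for gi, hi in zip(G, H))         # G(x) . H(y)
print(float(P.subs({x: 1.5, y: 2})))             # ~2.234375
```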

Note that a product of two dot products does not factor the way a product of scalars does, so the obstruction from the previous answer no longer applies; in general:

$$ \bigl( \vec{\mathbf{G}}(x_1)\cdot\vec{\mathbf{H}}(y_1)\bigr)\bigl(\vec{\mathbf{G}}(x_2)\cdot\vec{\mathbf{H}}(y_2)\bigr) \ne \bigl(\vec{\mathbf{G}}(x_1)\cdot\vec{\mathbf{H}}(y_2)\bigr)\bigl(\vec{\mathbf{G}}(x_2)\cdot\vec{\mathbf{H}}(y_1)\bigr)$$

Example:

For ease of computation, we can instead flatten the double sum, one vector entry per index pair $(i,j)$, and decompose $f(x,y)$ as follows:

$$f(x, y) \approx \begin{pmatrix} c_{00}(x-x_0)^0 \\ c_{01}(x-x_0)^0 \\ \vdots \\ c_{0n}(x-x_0)^0 \\ c_{10}(x-x_0)^1 \\ \vdots \\ c_{n0}(x-x_0)^n \end{pmatrix} \cdot \begin{pmatrix} (y-y_0)^0 \\ (y-y_0)^1 \\ \vdots \\ (y-y_0)^n \\ (y-y_0)^0 \\ \vdots \\ (y-y_0)^0 \end{pmatrix} = \vec{\mathbf{G}}(x)\cdot\vec{\mathbf{H}}(y)$$

Suppose $f(x, y) = x^y$; we can write the 5th-order Taylor series at the point $(1, 1)$, keeping only the nonzero terms:

$f(x, y) \approx P_{a=(1, 1),k=5}(x, y) = \begin{pmatrix} 1.0000\, (x - 1)^0 \\ 1.0000\, (x - 1)^1 \\ 1.0000\, (x - 1)^1 \\ 0.5000\, (x - 1)^2 \\ -0.1667\, (x - 1)^3 \\ 0.0833\, (x - 1)^4 \\ 0.5000\, (x - 1)^2 \end{pmatrix} \cdot \begin{pmatrix} (y - 1)^0 \\ (y - 1)^0 \\ (y - 1)^1 \\ (y - 1)^1 \\ (y - 1)^1 \\ (y - 1)^1 \\ (y - 1)^2 \end{pmatrix}$

For example, we can test this approximation at (1.5, 2.0):

$P_{a=(1, 1),k=5}(1.5, 2.0) = 2.234375$

$f(1.5, 2.0) = 2.25$
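
The same number falls out of a plain dot product over the seven terms above (a quick numerical check I'm adding; the coefficient list is transcribed from the expansion):

```python
# The seven nonzero (coefficient, x-power, y-power) terms listed above
terms = [(1.0, 0, 0), (1.0, 1, 0), (1.0, 1, 1),
         (0.5, 2, 1), (-1 / 6, 3, 1), (1 / 12, 4, 1), (0.5, 2, 2)]

x, y = 1.5, 2.0
G = [c * (x - 1) ** i for c, i, _ in terms]  # x-dependent vector
H = [(y - 1) ** j for _, _, j in terms]      # y-dependent vector

print(sum(g * h for g, h in zip(G, H)))      # ~2.234375
print(x ** y)                                # exact value: 2.25
```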

Note that the error shrinks if we use a higher-order Taylor series.

I wrote a script here with which you can apply the same idea to other functions or points.

MajidL