In a book on continuum mechanics I read that the balance of rotational momentum follows from the principle of virtual work when $\delta \boldsymbol{r} = \boldsymbol{\delta \varphi} \times \boldsymbol{r}, \; \boldsymbol{\delta \varphi} = \boldsymbol{\mathsf{const}}$ (here $\boldsymbol{r}$ is the location vector, $\delta \boldsymbol{r}$ is its variation, and $\boldsymbol{\delta \varphi}$ is not a variation of anything; it is apparently denoted that way just to indicate it is small enough to make $\delta \boldsymbol{r}$ infinitesimal). Then, without any explanation, the book states that $\boldsymbol{\nabla} \delta \boldsymbol{r} = - \boldsymbol{E} \times \boldsymbol{\delta \varphi}$. I know that $\boldsymbol{E}$ is the bivalent “metric unit identity” tensor (the one which is neutral with respect to the dot product), and that $\boldsymbol{\nabla} \boldsymbol{r} = \boldsymbol{E}$. I also know that $\boldsymbol{a} \times \boldsymbol{E} = \boldsymbol{E} \times \boldsymbol{a} \:\: \forall\boldsymbol{a}$, with no minus sign. To get a minus, transposition is needed: $\left( \boldsymbol{E} \times \boldsymbol{\delta \varphi} \right)^{\mathsf{T}} \! = - \boldsymbol{E} \times \boldsymbol{\delta \varphi}$. Thus I can’t see where the minus sign in $\boldsymbol{\nabla} \delta \boldsymbol{r} = - \boldsymbol{E} \times \boldsymbol{\delta \varphi}$ comes from.
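Just to check those identities, here is how I verify them in components, assuming (only for this check) an orthonormal basis $\boldsymbol{e}_i$ with implicit summation, so that $\boldsymbol{E} = \boldsymbol{e}_i \boldsymbol{e}_i$, and the usual rule that the cross product with a vector acts on the nearest vector of a dyad:
$$\boldsymbol{a} \times \boldsymbol{E} = \left( \boldsymbol{a} \times \boldsymbol{e}_i \right) \boldsymbol{e}_i = \epsilon_{jki}\, a_k\, \boldsymbol{e}_j \boldsymbol{e}_i = -\epsilon_{ijk}\, a_k\, \boldsymbol{e}_i \boldsymbol{e}_j, \qquad \boldsymbol{E} \times \boldsymbol{a} = \boldsymbol{e}_i \left( \boldsymbol{e}_i \times \boldsymbol{a} \right) = \epsilon_{jik}\, a_k\, \boldsymbol{e}_i \boldsymbol{e}_j = -\epsilon_{ijk}\, a_k\, \boldsymbol{e}_i \boldsymbol{e}_j,$$
so the two coincide, and the resulting tensor is antisymmetric, which is where the minus sign under transposition comes from.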
For constant $\boldsymbol{\delta \varphi}$, $\boldsymbol{\nabla} \boldsymbol{\delta \varphi} = {^2\boldsymbol{0}}$ (the bivalent zero tensor). Isn’t it then true that $\boldsymbol{\nabla} \! \left( \boldsymbol{\delta \varphi} \times \boldsymbol{r} \right) = \boldsymbol{\delta \varphi} \times \boldsymbol{\nabla} \boldsymbol{r} = \boldsymbol{\delta \varphi} \times \boldsymbol{E} = \boldsymbol{E} \times \boldsymbol{\delta \varphi}$? Searching for how to take the gradient of a cross product of two vectors only turns up the gradient of a dot product, the divergence ($\boldsymbol{\nabla} \cdot$) of a cross product, and many other relations, but no gradient of a cross product $\boldsymbol{\nabla} \! \left( \boldsymbol{a} \times \boldsymbol{b} \right) = \ldots$ Is it impossible or unknown how to find it, at least for the case when the first vector is constant?
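In index form (using the definitions from the update below), the computation I am attempting, for constant $\boldsymbol{a}$, looks like
$$\boldsymbol{\nabla} \! \left( \boldsymbol{a} \times \boldsymbol{b} \right) = (\sum_i)\, \boldsymbol{r}^i \, \partial_i \! \left( \boldsymbol{a} \times \boldsymbol{b} \right) = (\sum_i)\, \boldsymbol{r}^i \left( \boldsymbol{a} \times \partial_i \boldsymbol{b} \right),$$
which for $\boldsymbol{a} = \boldsymbol{\delta \varphi}$, $\boldsymbol{b} = \boldsymbol{r}$ gives $(\sum_i)\, \boldsymbol{r}^i \left( \boldsymbol{\delta \varphi} \times \boldsymbol{r}_i \right)$; I am not sure whether this may be rewritten as $\boldsymbol{\delta \varphi} \times \left( (\sum_i)\, \boldsymbol{r}^i \boldsymbol{r}_i \right) = \boldsymbol{\delta \varphi} \times \boldsymbol{E}$, which is what I did above.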
Update
As “gradient” I mean the tensor product with “nabla” $\boldsymbol{\nabla}$: $\operatorname{^{+1}grad} \boldsymbol{A} \equiv \boldsymbol{\nabla} \! \boldsymbol{A}$, where $\boldsymbol{A}$ may be a tensor of any valence (and I don’t use “$\otimes$” or any other symbol for the tensor product). Nabla (Hamilton’s differential operator) is $\boldsymbol{\nabla} \equiv (\sum_i)\, \boldsymbol{r}^i \partial_i$, where $(\sum_i)\, \boldsymbol{r}^i \boldsymbol{r}_i = \boldsymbol{E} \,\Leftrightarrow\, \boldsymbol{r}^i \cdot \boldsymbol{r}_j = \delta^{i}_{j}$ (Kronecker delta), $\boldsymbol{r}_i \equiv \partial_i \boldsymbol{r}$ are the basis vectors, $\partial_i \equiv \frac{\partial}{\partial q^i}$, $\boldsymbol{r}(q^i)$ is the location vector, and $q^i$ $(i = 1, 2, 3)$ are coordinates.
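With these definitions, the relation $\boldsymbol{\nabla} \boldsymbol{r} = \boldsymbol{E}$ that I used above follows directly:
$$\boldsymbol{\nabla} \boldsymbol{r} = (\sum_i)\, \boldsymbol{r}^i \, \partial_i \boldsymbol{r} = (\sum_i)\, \boldsymbol{r}^i \boldsymbol{r}_i = \boldsymbol{E}.$$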