New episode of the misdeeds of the Matrix-Cookbook.
If $f(x)=x^\top Ax$, the derivative is the linear map
\begin{align*} Df_x: h\in\mathbb{R}^n\rightarrow h^\top Ax+x^\top Ah=(x^\top A^\top+x^\top A)h
\end{align*} and the gradient $\nabla f(x)$ is the vector defined, for every $h$, by the relation
\begin{align*}Df_x(h)=\langle \nabla f(x),h\rangle={\nabla f(x)}^\top h.
\end{align*} Thus $\nabla f(x)=(A+A^\top)x$.
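If you want to sanity-check this formula numerically, a finite-difference comparison does the job. A minimal NumPy sketch (the dimensions, seed and variable names are arbitrary illustrative choices, not part of the derivation):

```python
# Check ∇f(x) = (A + Aᵀ)x for f(x) = xᵀAx against a central finite difference.
import numpy as np

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n))      # deliberately not symmetric
x = rng.standard_normal(n)

f = lambda x: x @ A @ x

grad_formula = (A + A.T) @ x

eps = 1e-6
grad_fd = np.array([
    (f(x + eps * e) - f(x - eps * e)) / (2 * eps)   # central difference along eᵢ
    for e in np.eye(n)
])

print(np.allclose(grad_formula, grad_fd, atol=1e-5))   # True
```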
The Hadamard product $a\odot b$ is bilinear and the derivative satisfies
\begin{align*}\mathrm{d}(a\odot b)=\mathrm{d}a\odot b+a \odot \mathrm{d}b
\end{align*} just as for an ordinary matrix product. For instance, if $f(x)=(x\odot y)^\top A(x\odot y)$, then
\begin{align*}
Df_x:h\rightarrow\ &(h\odot y)^\top A(x\odot y)+(x\odot y)^\top A(h\odot y)\\
={}&[(x\odot y)^\top A+(x\odot y)^\top A^\top](y\odot h)\\
={}&\big([(A+A^\top)(x\odot y)]\odot y\big)^\top h\\
={}&\big\langle [(A+A^\top)(x\odot y)]\odot y,\;h\big\rangle
\end{align*}
because $\langle u, v\odot w\rangle=\langle u\odot v, w\rangle$, and therefore $\nabla f(x)=[(A+A^\top)(x\odot y)]\odot y$.
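The same finite-difference check applies to the Hadamard example. Again a minimal sketch with illustrative data; `*` on NumPy vectors is exactly the Hadamard product:

```python
# Check ∇f(x) = [(A + Aᵀ)(x ⊙ y)] ⊙ y for f(x) = (x ⊙ y)ᵀ A (x ⊙ y).
import numpy as np

rng = np.random.default_rng(1)
n = 5
A = rng.standard_normal((n, n))
x, y = rng.standard_normal(n), rng.standard_normal(n)

f = lambda x: (x * y) @ A @ (x * y)          # x * y is the Hadamard product

grad_formula = ((A + A.T) @ (x * y)) * y

eps = 1e-6
grad_fd = np.array([
    (f(x + eps * e) - f(x - eps * e)) / (2 * eps)
    for e in np.eye(n)
])

print(np.allclose(grad_formula, grad_fd, atol=1e-5))   # True
```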
EDIT. Answer to @hans.
Concerning the derivative or the gradient, the standard notation is as follows. Let \begin{align*}f: X=(x,y)\in \Omega\subset \mathbb{R}^p\times\mathbb{R}^q\rightarrow f(X)\in \mathbb{R}^n. \end{align*}
Note that $\mathrm{d}f_X$, $Df_X$ and ${\mathrm{d}f(X)}/{\mathrm{d}X}$ all refer to the same concept: the total differential, or total derivative, of $f$ at $X$; it is a linear map $(h,k)\in\mathbb{R}^p\times\mathbb{R}^q\rightarrow \mathbb{R}^n$. In particular, in the formula
\begin{align*}\frac{\mathrm{d}f}{\mathrm{d}X}=\frac{\partial f}{\partial x}\mathrm{d}x+\frac{\partial f}{\partial y}\mathrm{d}y,
\end{align*} the linear maps $\mathrm{d}x$ and $\mathrm{d}y$ are defined by $\mathrm{d}x:(h,k)\rightarrow h$ and $\mathrm{d}y:(h,k)\rightarrow k$. The "partial derivative" $\partial f(X)/\partial x:\mathbb{R}^p\rightarrow\mathbb{R}^n$ is also a linear map.
For the case $n=1$, we can define the gradient of $f$ by duality, using the scalar product $\langle H, K\rangle=H^\top K$ for vectors, or $\langle H, K\rangle=\mathrm{trace}(H^\top K)$ for matrices of the same size (cf. the beginning of the post).
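As a side remark, the trace scalar product is just the entrywise (Frobenius) inner product, i.e. the usual dot product of the vectorized matrices. A purely illustrative two-line check:

```python
# ⟨H, K⟩ = trace(Hᵀ K) equals the sum of entrywise products H ⊙ K.
import numpy as np

rng = np.random.default_rng(2)
H = rng.standard_normal((3, 3))
K = rng.standard_normal((3, 3))

lhs = np.trace(H.T @ K)
rhs = np.sum(H * K)               # equivalently H.ravel() @ K.ravel()

print(np.isclose(lhs, rhs))       # True
```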
With our notation,
\begin{align*}Df_X(h,k)=\dfrac{\partial f(X)}{\partial x}h+\dfrac{\partial f(X)}{\partial y}k=\left[\dfrac{\partial f(X)}{\partial x},\dfrac{\partial f(X)}{\partial y}\right][h,k]^\top.
\end{align*} Thus, \begin{align*}\nabla f(X)=\left[\frac{\partial f(X)}{\partial x}, \frac{\partial f(X)}{\partial y}\right]^\top,\end{align*} that is, the transpose of the Jacobian matrix of $f$.
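To make the block notation concrete, here is a hedged numerical sketch for $n=1$ with the hypothetical choice $f(x,y)=x^\top B y$ (the matrix $B$, the dimensions and the seed are mine, not from the question): the Jacobian is the row $[(By)^\top,(B^\top x)^\top]$, so the gradient is the column stacking $By$ and $B^\top x$, and a finite-difference check agrees.

```python
# Gradient as the transpose of the Jacobian for f(x, y) = xᵀ B y, X = (x, y).
import numpy as np

rng = np.random.default_rng(3)
p, q = 4, 3
B = rng.standard_normal((p, q))
x, y = rng.standard_normal(p), rng.standard_normal(q)

f = lambda x, y: x @ B @ y

grad_formula = np.concatenate([B @ y, B.T @ x])   # transpose of the Jacobian row

eps = 1e-6
X = np.concatenate([x, y])
g = lambda X: f(X[:p], X[p:])                     # f seen as a function of the stacked vector
grad_fd = np.array([
    (g(X + eps * e) - g(X - eps * e)) / (2 * eps)
    for e in np.eye(p + q)
])

print(np.allclose(grad_formula, grad_fd, atol=1e-5))   # True
```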
Of course, the calculation in your post is correct, but what you compute there is the gradient, not the differential (the derivative).