You should properly bracket $A\cdot\nabla A\let\del\partial$ as $(A\cdot\nabla)A$, at least until you know what you're doing. The operator $A\cdot \nabla$ is a "scalar operator", i.e. it acts in the same way on each component of the vector field $A$ it is differentiating. In components, we have
$$ \big((A\cdot\nabla)A\big)_i = (A\cdot\nabla) A_i = \sum_j A_j\,\del_j A_i. $$
One usually omits the $\sum_j$ over the repeated index; this is the Einstein summation convention, which I will use below.
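For a quick concrete example (my own, just to illustrate the componentwise action): take the rigid rotation field $A(x,y,z) = (-y,\,x,\,0)$, so that $A\cdot\nabla = -y\,\del_x + x\,\del_y$. Then
$$ (A\cdot\nabla)A = \big(-y\,\del_x + x\,\del_y\big)(-y,\,x,\,0) = (-x,\,-y,\,0), $$
the familiar centripetal term.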
Also, it is not clear what $\nabla \times \nabla A$ is in your guess; note that $\nabla A$ is a matrix. Regardless, your guess cannot be right because (either $\nabla\times\nabla = 0$ or) it doesn't have the right number of derivatives, which can be seen by a scaling argument: suppose it were true for all vector fields $A(x)$. Then it must also be true for $A_\lambda(x) := A(\lambda x)$, for all $\lambda>0$. But applying the chain rule to your guess gives
$$ \text{RHS} = \lambda^3\, \big(\nabla \times A \cdot \nabla \times \nabla A\big)(\lambda x) $$
and
$$ \text{LHS} = \lambda\, \nabla \times \Big(\big((A \cdot\nabla) A\big)(\lambda x)\Big) = \lambda^2\, \Big(\nabla \times \big((A \cdot\nabla) A\big)\Big)(\lambda x), $$
which is a contradiction for $\lambda\neq 1$.
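(To spell out the derivative counting: the chain rule gives
$$ \del_j\big[f(\lambda x)\big] = \lambda\,(\del_j f)(\lambda x) $$
for any differentiable $f$, so each derivative contributes one factor of $\lambda$. Your right-hand side involves three derivatives in total, while the left-hand side involves only two, which is where $\lambda^3$ versus $\lambda^2$ comes from.)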
The correct answer, as far as I know, is not nicely expressed in vector notation. Here it is with Einstein summation, writing the curl with the Levi-Civita symbol:
$$ \big(\nabla\times ((A\cdot \nabla) B)\big)_i = \epsilon_{ijk}\,\del_j (A_l\del_l B_k) = \epsilon_{ijk}(\del_j A_l)(\del_l B_k) + \epsilon_{ijk}\,A_l\,\del_j\del_l B_k $$
The second term is the $i$-th component of $(A\cdot\nabla)(\nabla \times B)$, since $\epsilon_{ijk}A_l\,\del_j\del_l B_k = A_l\,\del_l(\epsilon_{ijk}\del_j B_k)$; the first term isn't so nice. One can write it as
$$\epsilon_{ijk}(\del_jA_l)(\del_l B_k) = \bigg(\sum_{l=1}^3 \nabla A_l \times \del_l B\bigg)_i$$
(I write the sum for emphasis; note that $\nabla A_l$ and $\del_l B$ are vector fields, so the expression makes sense) or
$$\epsilon_{ijk}(\del_jA_l)(\del_l B_k) = \sum_{j=1}^3 \sum_{k=1}^3 \epsilon_{ijk} (\del_j A\cdot\nabla) B_k$$
but I don't think these are better than just writing out the indices.
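If you want to double-check the identity, here is a minimal symbolic sanity check in SymPy (my own sketch; the helpers `grad`, `curl`, and `advect` below are ad-hoc definitions, not SymPy built-ins):

```python
# Sanity check of  curl((A·∇)B) = (A·∇)(curl B) + sum_l (∇A_l) × (∂_l B)
# for two arbitrary smooth test fields A and B.
import sympy as sp

x, y, z = sp.symbols('x y z')
X = (x, y, z)

def grad(f):
    """Gradient of a scalar expression as a 3x1 column vector."""
    return sp.Matrix([sp.diff(f, v) for v in X])

def curl(F):
    """Curl of a 3x1 vector field F."""
    return sp.Matrix([
        sp.diff(F[2], y) - sp.diff(F[1], z),
        sp.diff(F[0], z) - sp.diff(F[2], x),
        sp.diff(F[1], x) - sp.diff(F[0], y),
    ])

def advect(A, B):
    """(A·∇)B, i.e. the scalar operator A·∇ applied to each component of B."""
    return sp.Matrix([sum(A[j] * sp.diff(B[i], X[j]) for j in range(3))
                      for i in range(3)])

# Arbitrary test fields (any smooth choice should work).
A = sp.Matrix([x*y, sp.sin(z), y**2 * z])
B = sp.Matrix([z*x, x + y**2, sp.cos(x*y)])

lhs = curl(advect(A, B))
rhs = advect(A, curl(B)) + sum(
    (grad(A[l]).cross(sp.diff(B, X[l])) for l in range(3)),
    sp.zeros(3, 1),
)

print(sp.simplify(lhs - rhs))  # should print the zero vector
```

Of course this only checks the identity for one pair of fields; the index computation above is the actual proof.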