Purely for theoretical purposes, I will write out the optimality conditions for both cases, $\|x\|_1 = 1$ and $\|x\|_2 = 1$.
The positivity constraint can be handled through the change of variable $x = y^{\odot 2}$, where $^{\odot 2}$ denotes the component-wise square (so $x \ge 0$ holds automatically). The problem becomes
$$\min_{\|y^{\odot 2}\|=1} \|Ay^{\odot 2}\|_2^2 = \min_{\|y^{\odot 2}\|=1} (y^{\odot 2})^\top A^\top A y^{\odot 2}\label{optimization}\tag{1}$$
It is possible to derive the following formulas, slightly more general than those I proved here: for any square matrix $M$, any column vector $c$ of suitable size, and any positive integer $k$, it holds that
\begin{align}
&\frac{\mathrm{d}}{\mathrm{d}y} (y^{\odot 2})^\top M y^{\odot 2} = 2\left((M+M^\top)y^{\odot 2}\right)\odot y\\
&\frac{\mathrm{d}}{\mathrm{d}y} c^\top y^{\odot k} = k\, c \odot y^{\odot (k-1)}
\end{align}
where $\odot$ denotes the element-wise product, aka Hadamard product.
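These two formulas are easy to check numerically with central finite differences. The following sketch is purely illustrative (random $M$, $c$, $y$, and $k = 3$ are my own choices, not part of the derivation):

```python
# Finite-difference check of the two differentiation formulas above.
import numpy as np

rng = np.random.default_rng(0)
n, k = 5, 3
M = rng.standard_normal((n, n))
c = rng.standard_normal(n)
y = rng.standard_normal(n)

def f1(y):  # (y^{⊙2})ᵀ M y^{⊙2}
    return (y**2) @ M @ (y**2)

def f2(y):  # cᵀ y^{⊙k}
    return c @ y**k

grad1 = 2 * ((M + M.T) @ y**2) * y   # closed-form gradient of f1
grad2 = k * c * y**(k - 1)           # closed-form gradient of f2

# central finite differences, one coordinate direction at a time
eps = 1e-6
num1 = np.array([(f1(y + eps*e) - f1(y - eps*e)) / (2*eps) for e in np.eye(n)])
num2 = np.array([(f2(y + eps*e) - f2(y - eps*e)) / (2*eps) for e in np.eye(n)])

assert np.allclose(grad1, num1, atol=1e-5)
assert np.allclose(grad2, num2, atol=1e-5)
```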
We will use these formulas to write the stationarity condition $\frac{\mathrm{d}}{\mathrm{d}y} \mathcal{L}(y,\lambda) = 0$ of the Lagrangian associated with the cost function $\mathcal{F}(y)$ in \eqref{optimization}.
CASE $\|x\|_1 = 1$
The Lagrangian is
$$\mathcal{L}(y,\lambda) = \mathcal{F}(y) - \lambda (\mathbb{1}^\top y^{\odot 2} -1),$$
where $\mathbb{1}$ denotes a column vector of ones. Applying the formulas above with $M = A^\top A$ and $k = 2$, stationarity reads $4(A^\top A y^{\odot 2})\odot y - 2\lambda\,\mathbb{1}\odot y = 0$; dividing by $2$, we obtain the optimality conditions
$$\boxed{\begin{aligned}
&\left[2A^\top Ay^{\odot 2}-\lambda\mathbb{1}\right]\odot y = 0\\
&\mathbb{1}^\top y^{\odot 2} = 1
\end{aligned}}$$
In the $x$ variable, these conditions become
$$\boxed{\begin{aligned}
&\left[2A^\top Ax-\lambda\mathbb{1}\right]\odot x = 0\\
&\mathbb{1}^\top x = 1\\
&x \ge 0
\end{aligned}}$$
CASE $\|x\|_2 = 1$
The Lagrangian is
$$\mathcal{L}(y,\lambda) = \mathcal{F}(y) - \lambda (\mathbb{1}^\top y^{\odot 4} -1).$$
Now $k = 4$, so stationarity reads $4(A^\top A y^{\odot 2})\odot y - 4\lambda\,\mathbb{1}\odot y^{\odot 3} = 0$; dividing by $4$ and factoring out $y^{\odot 2}$, we obtain the (quite different) optimality conditions
$$\boxed{\begin{aligned}
&\left[(A^\top A-\lambda I)y^{\odot 2}\right]\odot y = 0\\
&\mathbb{1}^\top y^{\odot 4} = 1
\end{aligned}}$$
In the $x$ variable, these conditions become
$$\boxed{\begin{aligned}
&\left[(A^\top A-\lambda I)x\right]\odot x = 0\\
&\mathbb{1}^\top x^{\odot 2} = 1\\
&x \ge 0.
\end{aligned}}$$
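The same numerical sanity check works for the $\ell_2$ case: the first condition says that, on its support, $x$ behaves like an eigenvector of $A^\top A$ with eigenvalue $\lambda$. A sketch under the same illustrative setup (random $A$ of my own choosing):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 6))   # illustrative random instance
Q = A.T @ A

# minimize xᵀQx subject to ‖x‖₂ = 1, x ≥ 0
res = minimize(lambda x: x @ Q @ x, np.full(6, 6 ** -0.5),
               jac=lambda x: 2 * Q @ x,
               bounds=[(0, None)] * 6,
               constraints=[{"type": "eq", "fun": lambda x: x @ x - 1}],
               tol=1e-10)
x = res.x

# Multiplying the first boxed condition by xᵀ and using ‖x‖₂ = 1 gives λ = xᵀQx
lam = x @ Q @ x
residual = (Q @ x - lam * x) * x  # [(AᵀA − λI)x] ⊙ x, vanishes at a KKT point
```

Here $\lambda$ is just the Rayleigh quotient $x^\top A^\top A x$, obtained by left-multiplying the first condition by $x^\top$.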