Does there exist an infinite sequence $a_1, a_2, \dots \in \mathbb{C}$, not all zero, such that for all integers $k \ge 1$, we have $$\sum_{i = 1}^{\infty} a_i^k = 0?$$
The answer is no (every $a_i$ must vanish) if $\sum_i a_i$ converges absolutely, but this question asks what happens if we relax absolute convergence and only assume conditional convergence.
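For reference, here is my understanding of the standard argument in the absolutely convergent case. If $\sum_i |a_i| < \infty$, then for $|z|$ small enough one may interchange sums to get $$F(z) := \sum_{k=1}^{\infty}\left(\sum_{i=1}^{\infty} a_i^k\right) z^k = \sum_{i=1}^{\infty} \frac{a_i z}{1 - a_i z},$$ so if every power sum vanishes then $F \equiv 0$ near $0$. But the right-hand side is meromorphic with a genuine pole at $z = 1/a_j$ for any $a_j \ne 0$ (terms with the same value of $a_i$ contribute equal residues $-1/a_i$, which cannot cancel), contradicting $F \equiv 0$ by analytic continuation. The interchange of sums is exactly what breaks down without absolute convergence.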
If we instead take a finite sequence $a_1, \dots, a_n$ and the first $n$ power sums are zero, then indeed all the terms are zero, by Newton's identities; but this method can't work in the infinite case. [If you truncate the series at $a_1, \dots, a_n$ and keep only the first $n$ equations, there is no guarantee the right-hand sides (the truncated power sums) will be small; even if there were, there is no guarantee that the coefficients $e_i$ of the polynomial $\prod_{i=1}^{n}(x - a_i)$ will be small; and even if there were, a polynomial with small coefficients can still have roots that do not approach zero (e.g. $x^n - 1/2^n$ has a coefficient of size $2^{-n}$, yet all of its roots have modulus $1/2$).]
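Both halves of this can be sanity-checked numerically. Here is a throwaway NumPy sketch (the setup is mine, purely for illustration, not from any reference):

```python
import numpy as np

# Finite case: Newton's identities recover the elementary symmetric
# polynomials e_k (the coefficients of prod (x - a_i)) from the power sums
# p_k = sum_i a_i^k. In particular p_1 = ... = p_n = 0 forces
# e_1 = ... = e_n = 0, i.e. the polynomial is x^n and every a_i = 0.
rng = np.random.default_rng(0)
a = rng.normal(size=4) + 1j * rng.normal(size=4)  # arbitrary finite sequence
n = len(a)
p = {k: np.sum(a ** k) for k in range(1, n + 1)}  # power sums
e = {0: 1.0 + 0j}
for k in range(1, n + 1):
    # Newton's identity: k * e_k = sum_{j=1}^{k} (-1)^(j-1) * e_{k-j} * p_j
    e[k] = sum((-1) ** (j - 1) * e[k - j] * p[j] for j in range(1, k + 1)) / k
# np.poly(a) returns the coefficients [1, -e_1, e_2, -e_3, ...] directly
recovered = [(-1) ** k * e[k] for k in range(n + 1)]
print(np.allclose(recovered, np.poly(a)))  # True

# Truncation caveat: x^n - (1/2)^n has one coefficient of size 2^(-n),
# yet every root has modulus exactly 1/2; small coefficients do not
# force small roots.
for n in [4, 16, 30]:
    coeffs = np.zeros(n + 1)
    coeffs[0] = 1.0            # x^n
    coeffs[-1] = -0.5 ** n     # tiny constant term
    m = np.abs(np.roots(coeffs))
    print(n, 0.5 ** n, m.min(), m.max())  # min and max modulus are both ~0.5
```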
In the style of this excellent video, you could ask whether the vectors $v_2 = (a_2^1, a_2^2, \dots), v_3 = (a_3^1, a_3^2, \dots), \dots$, each of which is in $l_2$, can be linearly combined to produce $v_1 = (-a_1^1, -a_1^2, \dots)$. Per the video the answer is indeed yes! But this does not guarantee that the linear combination will have equal weights, $v_1 = 1\cdot v_2 + 1\cdot v_3 + \dots$, which is what our problem requires. [Actually, an equal-weight sum of vectors in $l_2$ need not even be in $l_2$ itself: e.g., if $v_i$ is the vector that is zero except for a 1 in position $i$, then $\sum_i v_i$ is the all-ones vector, which is not in $l_2$.]
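For the record, each $v_i$ really is in $l_2$ once we arrange (WLOG, see the rescaling remark below) that $|a_i| < 1$; summing a geometric series gives $$\|v_i\|_{l_2}^2 = \sum_{k=1}^{\infty} |a_i|^{2k} = \frac{|a_i|^2}{1 - |a_i|^2} < \infty.$$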
To try to use complex analysis: it seems clear that for any function $f(z) = \sum_{j = 1}^{\infty} c_j z^j$ analytic (on the unit disc, say) with $f(0) = 0$, we should have $\sum_{i=1}^{\infty} f(a_i) = 0$. Or, if you want to remove the $f(0) = 0$ condition: for all analytic functions $g$, we have $\sum_{i=1}^{\infty} a_i\, g(a_i) = 0$. This works for ANY $g$! Surely at this point it should be obvious that the $a_i$ must all be zero, but alas I can't see it.
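Spelled out, the computation is $$\sum_{i=1}^{\infty} f(a_i) = \sum_{i=1}^{\infty} \sum_{j=1}^{\infty} c_j a_i^j = \sum_{j=1}^{\infty} c_j \sum_{i=1}^{\infty} a_i^j = 0,$$ and the interchange of the two sums is precisely the step that needs justifying (e.g. by absolute convergence of the double series), which is again where merely conditional convergence bites.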
The same would hold with $a_i$ replaced by $\overline{a_i}$ (just conjugate each power-sum equation), so that $\sum_{i=1}^{\infty} \overline{a_i}\, g(\overline{a_i}) = 0$. But I don't think this gets us anywhere: we already know that $a_i \to 0$ (the $k = 1$ series can only converge if its terms do), and the $a_i$ form a fixed set, so we cannot move them around to some special point $s$ and conclude that $g(s) = 0$ for every $g$, which would give the desired contradiction.
And we do have to use analytic functions: the Stone-Weierstrass theorem on the closed unit disc only says that a continuous $f(z)$ can be approximated by a polynomial $p(z, \overline{z})$ in the two variables $z, \overline{z}$. This makes things hard, because the mixed monomials $z^m \overline{z}^n$ lead to sums $\sum_i a_i^m \overline{a_i}^n$, about which the hypothesis says nothing; we have no information on the magnitudes $|a_i|$ other than that they are (WLOG) less than 1. [The WLOG: since $a_i \to 0$ the sequence is bounded, and rescaling every $a_i$ by a common factor $\lambda$ multiplies the $k$-th power sum by $\lambda^k$, so it preserves the hypothesis.]
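A quick check that the $\overline{z}$ variable really is needed: no sequence of polynomials in $z$ alone can converge uniformly to $\overline{z}$ on the unit circle, since there $\overline{z} = 1/z$ and $$\frac{1}{2\pi i}\oint_{|z|=1} p(z)\, dz = 0 \text{ for every polynomial } p, \qquad \text{while} \qquad \frac{1}{2\pi i}\oint_{|z|=1} \overline{z}\, dz = \frac{1}{2\pi i}\oint_{|z|=1} \frac{dz}{z} = 1.$$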
Actually, if the $a_i$ were real numbers (and we eschew the trivial inequality $\sum_i a_i^2 \ge 0$, with equality only when all $a_i = 0$, which finishes the proof immediately, haha), the usual Stone-Weierstrass theorem would almost give a slick proof, but not quite. Suppose we use a polynomial $p$ to approximate an arbitrary continuous function $f$, so that $|p(x) - f(x)| \le \epsilon$ for all $x$. We would have the same pointwise error bound but an infinite number of terms, so $0 = \sum_{i=1}^{\infty} p(a_i)$ would not force $\sum_{i=1}^{\infty} f(a_i)$ to be close to $0$, as it would in the finite case. If we ignore this issue anyway and assume that $\sum_{i=1}^{\infty} f(a_i)$ is small, the rest works: since $a_i \to 0$, every neighborhood $[-\delta, \delta]$ of zero contains all but finitely many of the $a_i$. Take a continuous $f$ that is zero on $[-\delta, \delta]$, equal to one outside $[-2\delta, 2\delta]$, and linear in between; if some $a_i$ lay outside $[-2\delta, 2\delta]$, the sum $\sum_i f(a_i)$ would be at least $1$, contradicting its being close to $0$. So every $a_i$ lies in $[-2\delta, 2\delta]$ for every $\delta > 0$, and hence $a_i = 0$ for all $i$.
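Quantitatively, uniform approximation only gives $$\left|\sum_{i=1}^{N} f(a_i) - \sum_{i=1}^{N} p(a_i)\right| \le N\epsilon,$$ which is vacuous as $N \to \infty$; we would need $p$ to approximate $f$ increasingly well near $0$, where the $a_i$ accumulate, and plain Stone-Weierstrass does not provide that.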
I wonder if there is a nonzero solution where $\sum_i a_i$ converges conditionally but not absolutely. On the other hand, from the vectors idea above, I suspect a nonzero solution is impossible: we are forcing an equal-weight summation of the power vectors of the $a_i$, and such a sum has no reason to land in $l_2$ the way the target $v_1 = (-a_1^1, -a_1^2, \dots)$ does.