16
  • Galois's theorem effectively says that one cannot express the roots of a general polynomial of degree $\geq 5$ using rational functions of the coefficients and radicals. Can't this be read as saying that, given a polynomial, there is no deterministic algorithm to find its roots?

  • Now consider a decision question of the form: "Given a real-rooted polynomial $p$ and a number $k$, are the third and fourth highest roots of $p$ at least $k$ apart?"

A proof certificate for this decision question would simply be the set of roots of the polynomial, which is a short certificate, so the problem looks like it is in $NP$. BUT isn't Galois's theorem saying that there does not exist any deterministic algorithm to find a certificate for this decision question? (And if that is true, it rules out any algorithm to decide the answer to this question.)

So in what complexity class does this decision question lie?


All NP-complete questions I have seen have a trivial exponential-time algorithm available to solve them. I don't know whether this is expected to hold for all NP-complete questions, but for this decision question it doesn't seem to be true.

user6818
  • 1,165
  • 8
  • 13

3 Answers

11

I assume you are considering polynomials with integer coefficients.

You've taken the wrong starting point for your investigation: your goal is to find good estimates for the real roots. Looking for an algebraic formula that you can then evaluate to enough precision is something you can do, but it's not really the right thing to do here (unless, of course, "the $k$-th largest real root of a polynomial" is one of your allowed algebraic operations).

A much better starting point is to use Sturm's theorem to isolate the roots of the polynomial. You can then produce better estimates by binary search, but if that's too slow, you can use Newton's method to quickly produce estimates of high precision.
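As a concrete illustration, here is a minimal sketch of that approach in Python with sympy; this is my own illustration, not part of the original answer, and the helper names are mine. It counts the real roots of $p$ in an interval via the Sturm sequence and then narrows an isolating interval by bisection.

```python
# Sketch: count real roots of p in an interval with a Sturm sequence,
# then shrink an isolating interval by bisection.  For simplicity, p is
# assumed square-free; repeated roots need the special handling discussed below.
from sympy import symbols, sturm, Poly, Rational, sign

x = symbols('x')

def sign_changes(chain, a):
    # Number of sign changes in the Sturm sequence evaluated at a (zeros dropped).
    vals = [v for v in (q.eval(a) for q in chain) if v != 0]
    return sum(1 for u, v in zip(vals, vals[1:]) if sign(u) != sign(v))

def count_real_roots(p, a, b):
    # Sturm's theorem: number of distinct real roots of p in (a, b].
    chain = sturm(Poly(p, x))
    return sign_changes(chain, a) - sign_changes(chain, b)

def refine(p, a, b, steps=20):
    # Bisect an interval known to contain exactly one root of p.
    for _ in range(steps):
        m = Rational(a + b, 2)
        a, b = (a, m) if count_real_roots(p, a, m) == 1 else (m, b)
    return a, b

p = x**5 - 3*x + 1                   # has three real roots
print(count_real_roots(p, -10, 10))  # 3
print(refine(p, 0, 1))               # tight interval around the root in (0, 1)
```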


But that's just about finding certificates. There's still the question of what certificates can exist.

First off, I will point out that you can directly compute whether or not two of the roots are exactly $k$ units apart, e.g. by computing $\gcd(p(x), p(x-k))$. You will also have to decide what you want to do about repeated roots and deal with them appropriately; I assume you will handle these cases specially.
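To make the gcd check concrete, here is a hedged sketch in the same Python/sympy setting; the helper name is mine.

```python
# Two roots of p are exactly k apart iff p(x) and p(x - k) share a common
# root, i.e. their polynomial gcd is non-constant.
from sympy import symbols, gcd, degree, expand

x = symbols('x')

def some_roots_exactly_k_apart(p, k):
    g = gcd(p, p.subs(x, x - k))
    return degree(g, x) >= 1

p = expand((x - 1)*(x - 4)*(x - 7))        # roots 1, 4, 7
print(some_roots_exactly_k_apart(p, 3))    # True: 4 - 1 = 3 (and 7 - 4 = 3)
print(some_roots_exactly_k_apart(p, 2))    # False
```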

If we know the two roots are not exactly $k$ units apart, then you can produce an estimate of sufficient precision to prove that they are either more than or less than $k$ units apart. That is, there are two kinds of certificates:

The first kind (certifying a "yes" answer: the gap is more than $k$) is

  • $a$ is not a root of $p$
  • $p$ has no roots in $(a-k, a)$
  • $p$ has three roots in $(a, \infty)$

The second kind (certifying a "no" answer: the gap is less than $k$) is

  • $a$ is not a root of $p$
  • $p$ has at least two roots in $(a-k,a)$
  • $p$ has two roots in $(a, \infty)$

Either kind of certificate can be verified using Sturm's theorem; a sketch of such a check follows.
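For instance, a check of the first kind (certifying a "yes" answer) might look like the sketch below. It leans on sympy's Poly.count_roots, which performs a Sturm-style root count over a closed interval, and for simplicity it assumes neither $a$ nor $a-k$ is a root; the function and example are mine, not the answer's.

```python
# Verify a "yes" certificate: a is not a root, p has no roots in (a-k, a),
# and p has exactly three roots in (a, oo).  count_roots(lo, hi) counts roots
# in the closed interval [lo, hi]; since we assume a and a - k are not roots,
# the open/closed distinction does not matter here.
from sympy import symbols, Poly, Rational

x = symbols('x')

def verify_yes_certificate(p, k, a):
    P = Poly(p, x)
    if P.eval(a) == 0 or P.eval(a - k) == 0:
        return False                       # outside the case this sketch handles
    no_roots_in_gap = P.count_roots(a - k, a) == 0
    three_roots_above = P.count_roots(inf=a) == 3
    return no_roots_in_gap and three_roots_above

# Roots 0, 1, 5, 6, 7: the 3rd and 4th highest roots (5 and 1) are 4 >= 3 apart.
p = x*(x - 1)*(x - 5)*(x - 6)*(x - 7)
print(verify_yes_certificate(p, 3, Rational(9, 2)))   # True, with witness a = 9/2
```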

Now, your question about the size of a certificate boils down to how many bits of precision you need to represent $a$; in other words, what are the bounds on the possible values of $a-b-k$, where $a,b$ are roots of $p$?

I'm not sure of a great approach, but one that should give you something is to observe that all of these values are roots of the polynomial:

$$ g(x) = \mathop{\text{Res}_y}\bigl(p(y), p(x + y + k)\bigr) $$

Why? Recall that the resultant of two polynomials is, up to a power of their leading coefficients, the product of all differences of their roots, so

$$ g(x) = c^{2d} \prod_{a,b}\bigl(b - (a - x - k)\bigr) = c^{2d}\prod_{a,b} \bigl(x - (a-b-k)\bigr)$$

where $c$ is the leading coefficient and $d$ is the degree of $p$. (maybe I've written the formula for $-g(x)$ instead of $g(x)$; I'm never sure on the sign)

So the question is to find estimates for how large the coefficients of $g$ can be, and then, once you know that, estimates for how close a root of $g$ can be to zero.

(or, alternatively, find the largest magnitude that a root of the reverse polynomial of $g$ can have; the roots of the reverse polynomial are the inverses of the roots of $g$)
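Here is a small sympy sketch of the resultant construction, for illustration only (the names are mine): the roots of $g$ are exactly the values $a - b - k$ over pairs of roots $a, b$ of $p$.

```python
# Build g(x) = Res_y(p(y), p(x + y + k)) and inspect its roots.
from sympy import symbols, resultant, Poly, roots, expand

x, y = symbols('x y')

def gap_polynomial(p, k):
    # Eliminate y from p(y) and p(x + y + k); the result is a polynomial in x
    # whose roots are a - b - k for all pairs of roots a, b of p.
    return expand(resultant(p.subs(x, y), p.subs(x, x + y + k), y))

p = expand((x - 1)*(x - 4)*(x - 6))       # roots 1, 4, 6
g = gap_polynomial(p, 2)
print(sorted(roots(Poly(g, x)).keys()))
# [-7, -5, -4, -2, 0, 1, 3], i.e. all values a - b - 2 with a, b in {1, 4, 6}
```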

5

Interesting connection; however, Galois theory states that no (consistent) method exists for finding the roots of a quintic using radicals, rather than saying that the problem has a solution (e.g. a longest path) which may require super-polynomial time to find. So I would say it is more related to undecidability than to complexity.

Specifically, in Galois theory one progressively builds field extensions generated by the roots of the equation, in a step-by-step way (adding one root at a time), and the associated groups should all be solvable; in a sense, there should be no ambiguity in constructing these extensions in another order. There is a related question on MO about the complexity of constructing the Galois group of an equation.

Another reference: "Computational Galois Theory: Invariants and Computations over $\mathbb{Q}$", Claus Fieker and Jürgen Klüners.

Furthermore, one can systematically represent the roots of a polynomial equation using radicals (when the equation is solvable by radicals) based on the construction of the Galois group(s) of the equation. Ref: "Radical Representation of Polynomial Roots", Hirokazu Anai and Kazuhiro Yokoyama, 2002.

Determining whether a given monic irreducible polynomial over the integers $\mathbb{Z}$ is solvable by radicals is in $P$. Ref: "Solvability by Radicals Is in Polynomial Time", S. Landau and G. L. Miller, 1984.

A survey of recent techniques: "Techniques for the Computation of Galois Groups", Alexander Hulpke.

Of course, if one is looking for good approximation algorithms and their complexity (e.g. Newton's method or Sturm's theorem), this is a slightly different question, and the already-posted answer provides more information in that direction.

Nikos M.
  • 1,016
  • 7
  • 16
3

I am going to take your questions as mostly open-ended. The Galois proof, now known as the Abel–Ruffini theorem, shows the impossibility of solving the general quintic in radicals (in contrast to, e.g., the quadratic formula). So it's not really a result on the hardness of a problem per se but rather on its impossibility; in this sense it is more analogous to, e.g., a proof of undecidability of the halting problem. Complexity theory is in general concerned with the "cost" of computing solutions. That is the viewpoint of two leading CS researchers in the introductory section of the following paper (Computability and Complexity / Kleinberg & Papadimitriou), sec. 1, "The Quest for the Quintic Formula":

Viewed from the safe distance of a few centuries, the story is clearly one about computation, and it contains many of the key ingredients that arise in later efforts to model computation: We take a computational process that we understand intuitively (solving an equation, in this case), formulate a precise model, and from the model derive some highly unexpected consequences about the computational power of the process. It is precisely this approach that we wish to apply to computation in general.

Elsewhere, a loose/general analogy might be that a P$\neq$NP proof (or other complexity-class separation) is a computational impossibility result somewhat like the Abel–Ruffini theorem. A separation result says roughly that problems of a certain type cannot be solved with "computational resources" of another certain type; a P$\neq$NP theorem would be viewed as a (monumental) computational impossibility result.

vzn
  • 11,162
  • 1
  • 28
  • 52