
Suppose I have a (multivariate) polynomial with coefficients in $\mathbb Z$ or $\mathbb Q$, given in fully expanded form. How can I simplify this to reduce the number of elementary operations (addition, subtraction, multiplication) used in its representation (and therefore evaluation once you substitute values for the indeterminates)? I'm not interested in the case where the polynomial as a whole factorizes, but instead I'm searching for a clever way to factorize parts of it.

Example: how could you turn

$$x^{2} y - x y^{2} - 2 y^{3} + x^{2} - x y - 2 y^{2} - x + 2 y + 1$$

into

$$((x+y)(1+y)-1)(x-2y)+1$$

or something comparably simple? The first form has 20 elementary operations: 12 multiplications, 3 additions, and 5 subtractions. The second form has 3 multiplications, 3 additions, and 2 subtractions, for a total of 8 operations. I'm not sure this is optimal, but it certainly is a lot better than the fully expanded form. Factorizing the whole polynomial won't help, since it is irreducible over $\mathbb Q$.

A perfect solution would find a representation with guaranteed minimal number of operations in polynomial time. But I'm not sure if this is even possible, so I'm also interested in solutions which don't guarantee minimality, or don't guarantee polynomial runtime, as long as they work reasonably well for practical use cases.
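As a sanity check, the equivalence of the two forms above can be verified numerically; since Python integers are exact, agreement on a grid of sample points is strong evidence (and agreement on a large enough grid would prove it):

```python
# Check that the expanded and factored forms of the example agree.
def expanded(x, y):
    return (x**2*y - x*y**2 - 2*y**3 + x**2 - x*y
            - 2*y**2 - x + 2*y + 1)

def factored(x, y):
    return ((x + y)*(1 + y) - 1)*(x - 2*y) + 1

for x in range(-5, 6):
    for y in range(-5, 6):
        assert expanded(x, y) == factored(x, y)
print("forms agree on the sample grid")
```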

daw
MvG
  • If you have some memory to store results along the way, the first example here doesn't really need to use $12$ multiplications. Computing and storing $x^2$, $2y$, and $y^2$, you could end up only using $8$ multiplications. – 2'5 9'2 Oct 25 '14 at 05:44
  • My next comment is that with real computers, multiplication dominates addition and subtraction when measuring how much time is needed for execution. Reducing the number of multiplications by 1 might be comparable to reducing the number of additions by a lot more. So you may be also interested in minimizing multiplications instead of total elementary operations. – 2'5 9'2 Oct 25 '14 at 05:47
  • Last comment: raising to the $n$th power doesn't actually cost as much as $n-1$ multiplications would. In many cases, fast logarithm and exponential functions are built into hardware, and it's very fast to compute $x^n$ as $\exp(n\ln(x))$. Even without that, if you have some memory to store powers of $x$ along the way, you can compute $x^n$ in about $\log_2(n)$ multiplications. – 2'5 9'2 Oct 25 '14 at 05:50
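The $\log_2(n)$ claim in the last comment is just exponentiation by squaring (square-and-multiply). A minimal sketch that also counts the multiplications it performs:

```python
def pow_by_squaring(x, n):
    """Compute x**n for n >= 1 using ~log2(n) multiplications,
    returning (value, number_of_multiplications)."""
    mults = 0
    result = None  # multiplicative identity, handled lazily
    base = x
    while n:
        if n & 1:  # this bit of n is set: fold the current power in
            if result is None:
                result = base
            else:
                result = result * base
                mults += 1
        n >>= 1
        if n:  # square only while more bits remain
            base = base * base
            mults += 1
    return result, mults

val, m = pow_by_squaring(3, 13)
assert val == 3**13
print(m)  # 5 multiplications, versus 12 for the naive repeated product
```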

1 Answer


Given a multivariate polynomial, a good place to start is to factor $x$ out of all the terms containing $x$, then factor $y$ out of all the remaining terms containing $y$, and so on. For a random multivariate polynomial of degree $n$ in $m$ variables, on average $O(n^m)$ terms contain an $x$, so just factoring out a single $x$ already saves $O(n^m)$ multiplications. Similarly, factoring a $y$ out of what is left saves $O(n^{m-1})$ multiplications, and so on.

The result of these steps has the form $$x\cdot P(x,y,z,\dots) + y\cdot Q(x,y,z,\dots) + \dots + k.$$ This costs $m$ multiplications and $m$ additions ($m-1$ if there is no constant term), plus the operations needed for the inner polynomials $P(x,y,z,\dots)$, $Q(x,y,z,\dots)$, and so on. These have degree $n-1$ (since one variable has been factored out of every term), and applying the same step to them saves another $O(m(n-1)^m)$ operations. Continuing recursively yields a representation with $O(n^m)$ multiplications and $O(n^m)$ additions, as opposed to the $O(n^{m+1})$ multiplications and $O(n^m)$ additions of the fully expanded form.
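The recursive scheme can be sketched as follows. This is my own illustration, not code from the answer: a polynomial is represented as a dict mapping exponent tuples to coefficients (e.g. $x^2y - xy + 1$ in variables `x0`, `x1` is `{(2, 1): 1, (1, 1): -1, (0, 0): 1}`), and the function emits a nested expression string of the form `x0*(...) + x1*(...) + k`:

```python
# Sketch of the recursive "factor out each variable" scheme.
# Representation and variable names (x0, x1, ...) are mine.
def nest(poly, nvars):
    """Return a nested expression string x0*(P) + x1*(Q) + ... + k."""
    if not poly:
        return "0"
    parts = []
    remaining = dict(poly)
    for i in range(nvars):
        # Collect the still-unclaimed terms containing variable i,
        # with one factor of that variable divided out.
        sub = {}
        for exps, c in list(remaining.items()):
            if exps[i] > 0:
                reduced = exps[:i] + (exps[i] - 1,) + exps[i+1:]
                sub[reduced] = c
                del remaining[exps]
        if sub:
            parts.append(f"x{i}*({nest(sub, nvars)})")
    # Whatever is left over is the constant term.
    const = remaining.get((0,) * nvars, 0)
    if const or not parts:
        parts.append(str(const))
    return " + ".join(parts)

# x^2*y - x*y + 1 in variables (x0, x1):
print(nest({(2, 1): 1, (1, 1): -1, (0, 0): 1}, 2))
# prints x0*(x0*(x1*(1)) + x1*(-1)) + 1
```

A real implementation would also prune trivial factors like `x1*(1)`; the sketch only shows the nesting structure.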

This idea is adapted from Horner's method for single-variable polynomials.
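For reference, Horner's method evaluates $a_nx^n + \dots + a_1x + a_0$ with $n$ multiplications and $n$ additions instead of the roughly $n^2/2$ multiplications of the naive term-by-term evaluation:

```python
def horner(coeffs, x):
    """Evaluate a polynomial given its coefficients from highest
    degree down to the constant term, via Horner's method."""
    acc = 0
    for c in coeffs:
        acc = acc * x + c  # one multiplication, one addition per step
    return acc

# 2x^3 - 6x^2 + 2x - 1 at x = 3:
print(horner([2, -6, 2, -1], 3))  # prints 5
```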

weux082690