8

Today a friend and I came up with the question of computing partitions of numbers, i.e.: given a number $n$, what is the number $p(n)$ of different ways of writing $n$ as a sum of positive integers (where sums differing only in the order of their terms count as the same way)?

We know of no formulas to do this. I am currently trying to write a brute-force (computational) approach to the problem, but I wanted to ask you about another, more sophisticated way of attacking the problem (still with a computer).

It is well known that $$\sum_{n=0}^\infty p(n)x^n = \prod_{k=1}^\infty\frac{1}{1-x^k}$$ The idea we had is to proceed as follows:

  1. $\log\left(\prod_{k=1}^\infty\frac{1}{1-x^k}\right) = -\sum_{k=1}^\infty\log(1-x^k) = -\sum_{k=1}^N\log(1-x^k) + O(x^{N+1})$, where we used the Taylor expansion of $\log$ around $1$ (each dropped term $-\log(1-x^k)$ with $k>N$ is $x^k+O(x^{2k})$) to get the error term.
  2. $\prod_{k=1}^\infty\frac{1}{1-x^k}\approx \exp\left(-\sum_{k=1}^N\log(1-x^k)\right) =: F(x)$ with some error term.
  3. Evaluate $F$ for some $x$ close to zero (so that the higher order terms are no longer relevant) and use polyfit (least squares) to get the coefficient of $x^n$, i.e. $p(n)$ (see the sketch below).
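
A minimal sketch of this evaluation step (the truncation level $N$ and the sample points are arbitrary, untuned choices; the least-squares fit itself would still have to be done by some polyfit routine on the printed pairs):

    #include <cmath>
    #include <cstdio>

    // Truncated generating function F(x) = exp(-sum_{k=1}^N log(1 - x^k)).
    double F(double x, unsigned int N)
    {
        double s = 0.0, xk = 1.0;   // xk holds x^k
        for (unsigned int k = 1; k <= N; ++k) {
            xk *= x;
            s += std::log(1.0 - xk);
        }
        return std::exp(-s);
    }

    int main()
    {
        const unsigned int N = 30;  // truncation level (arbitrary choice)
        // Sample F near zero; the pairs (x, F(x)) would then be fed to a
        // least-squares polynomial fit to read off the coefficients p(n).
        for (double x = 0.01; x <= 0.105; x += 0.01)
            std::printf("F(%.2f) = %.12f\n", x, F(x, N));
        return 0;
    }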

Now my questions are the following:

  1. Would this method (or something similar) work?
  2. What would the error term in point (2.) look like?
  3. How to choose $N$ and the points $x$ where we evaluate $F$ in step (3.) such that the error is less than, say, $0.1$ in the end? Notice that the preceding question is closely related to this one.
  4. What would be the complexity of the resulting algorithm?
  5. How are the $p(n)$ computed usually?

Thank you in advance for the help.

  • Have you seen this? http://blogs.plos.org/badphysics/2011/01/20/ono/ – Daniel R Jul 16 '13 at 22:11
  • http://en.wikipedia.org/wiki/Pentagonal_number_theorem – Qiaochu Yuan Jul 16 '13 at 22:16
  • This might also be useful, especially if you're interested in the exact values for reasonably small $n$. – Peter Košinár Jul 16 '13 at 22:16
  • @DanielR That's really interesting, thank you. Unfortunately I don't know enough number theory and similar stuff to understand Ono's work. – Daniel Robert-Nicoud Jul 16 '13 at 22:25
  • @PeterKošinár Oh, I'd say this answers the 5th question :) I have the impression of having seen the formula somewhere already (maybe in a course in combinatorics I followed some time ago). – Daniel Robert-Nicoud Jul 16 '13 at 22:26
  • Since everyone jumps to question 5 (only), you might regret having added that one. Anyway, what you describe before is not very clear (one needs more than one $x$ to do any polynomial fitting, which is not what "some $x$" in your third step says). It seems you'll use floating point to evaluate logarithms and exponentials, hope to get your precision high enough, and finally round $p(n)$ to an integer. I'd say that is quite horrible, unreliable, and inefficient, but that is just a personal opinion. – Marc van Leeuwen Dec 12 '15 at 06:46
  • @MarcvanLeeuwen Yes, that was the idea: to kind of cut off higher order terms on both sides to get polynomials, evaluate close enough to zero so that the error doesn't really matter, and then fit the data to get the first coefficients of the lhs with high enough accuracy. I know this is really ugly, and that's the reason why I called it a "brute-force" approach. I was (and still am) curious to know if it could work, and, if so, how one should go about making it work in the least inefficient way. – Daniel Robert-Nicoud Dec 12 '15 at 13:07

4 Answers

9

I don't want to invoke any of the sophisticated methods that have been found (see the article linked in the other answer), not even the pentagonal number theorem. I just want to say that the most straightforward way of computing the values of $p(n)$ for all $n\leq N$ requires only $O(N^2)$ additions of integers, which is of the same order as a single multiplication of power series up to $O(x^{N+1})$ (and indeed somewhat simpler, since no coefficient multiplications are involved). Without having studied your project in detail, I find it unlikely that it would result in something more efficient than this, while it is certainly a lot more complicated.

So here is the straightforward method. I'll present it as C++ code, since that is what I just tested. If your goal is to compute for a relatively small value of $N$, such as $N=416$, which is as far as $64$-bit unsigned integer arithmetic will get you, there is no need for anything more sophisticated; computation time is negligible.

    #include <vector>

    typedef std::vector<unsigned long int> arr;

    arr part(unsigned int n)
    {
      arr c(n+1,0); // coefficients of X^0 ... X^n of the product so far
      c[0]=1;       // start with the constant series 1
      for (unsigned int k=1; k<=n; ++k)   // multiply by 1/(1-X^k)
        for (unsigned int i=0; i+k<=n; ++i)
          c[i+k]+=c[i];                   // division by 1-X^k, done in place
      return c;
    }

The inner loop performs division by $1-X^k$; using an increasing loop over $i$ this is straightforward (though maybe not as widely known as it should be). I explained it in this answer. In the context of this problem a concrete interpretation for the operation can be given: the number of partitions of $i+k$ into parts of size${}\leq k$ is equal to the number of its partitions into parts of size${}<k$ (those in which no part equal to$~k$ occurs at all) plus the number of partitions of $i$ into parts of size${}\leq k$ (those with at least one part equal to$~k$: removing one such part leaves $i$ to partition); the two cases are clearly complementary and exhaustive.
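
As a quick illustration of how the returned vector is used, here is a minimal driver, assuming the `part` function above is in scope (the bound $20$ is an arbitrary choice):

    #include <cstdio>

    int main()
    {
        arr c = part(20);                       // coefficients p(0) ... p(20)
        for (unsigned int n = 0; n < c.size(); ++n)
            std::printf("p(%u) = %lu\n", n, c[n]);
        return 0;
    }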

  • This is a very nice and easy algorithm, but my goal is not to look for easy/fast algorithms, but rather to understand if my line of thought can work, and to fill in the details we weren't able to complete. Thank you anyway for your interesting answer, +1 – Daniel Robert-Nicoud Jul 17 '13 at 17:29
4

A nice survey on the computation of the integer partition function is here: http://www.math.clemson.edu/~kevja/PAPERS/ComputingPartitions-MathComp.pdf. In Chapter $3$ the results on the computations are presented, and the complexity is given for the different algorithms (the first two chapters explain what formulas are useful for computation; all links given in the comments are included, I think). It is shown, for example, that the computation of $p(n)$ for all $n \leq N$ can be done using $O(N^{3/2}\log^2(N))$ machine multiplications.

Dietrich Burde
  • 140,055
3

This may not be quite what you want; I just thought it could serve as an answer after seeing your question.

You may know the recurrence relation $nP(n)=\sum\limits_{k=1}^n\sigma(k)P(n-k)$, where $\sigma(k)$ is the sum-of-divisors function. This can easily be derived using the generating functions of $P(n)$ and $\sigma(n)$. Alternatively, you may use Erdős's method:

$$nP(n)=\sum_{m=1}^n \sum_{k=1}^{\lfloor n/m\rfloor}mP(n-km)=\sum_{r=1}^nP(n-r)\sum_{m\mid r}m=\sum_{r=1}^nP(n-r)\sigma(r).$$

Now note that you have $n$ linear equations in the unknowns $x_k=P(k)$, $1\le k\le n$, with coefficients given by the $\sigma(k)$.

Using Cramer's rule you may find $P(n)=\frac{\det A_n}{\det B_n}$. It can easily be seen that $B_n$ is upper triangular and that $\det B_n=n!$.

Now, $A_n(i,1)=\sigma(i)$, and for $j>1$:
$$A_n(j,j)=n-j+1,\qquad A_n(i,j)=-\sigma(j-i)\ \text{ if } i<j,\qquad A_n(i,j)=0\ \text{ if } i>j.$$

I think computing $\det A_n$ is 'very difficult', but it is nice in the sense that $P(n)$ is then expressed through the $\sigma(k)$'s, that is, through prime factorizations. If you wish to play with the determinant, you may find some good recurrence relations involving the $\sigma(n)$'s.
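
For comparison, a minimal sketch that uses the recurrence $nP(n)=\sum_{k=1}^n\sigma(k)P(n-k)$ directly rather than through the determinant (the cutoff $N$ is an arbitrary choice; the divisor sums are precomputed with a simple sieve, so the whole computation takes $O(N^2)$ integer operations):

    #include <cstdio>
    #include <vector>

    int main()
    {
        const unsigned int N = 50;                  // cutoff (arbitrary choice)

        // sigma(1..N): sum-of-divisors function, filled in by a simple sieve
        std::vector<unsigned long long> sigma(N + 1, 0);
        for (unsigned int d = 1; d <= N; ++d)
            for (unsigned int m = d; m <= N; m += d)
                sigma[m] += d;

        // n*P(n) = sum_{k=1}^{n} sigma(k) * P(n-k), with P(0) = 1
        std::vector<unsigned long long> P(N + 1, 0);
        P[0] = 1;
        for (unsigned int n = 1; n <= N; ++n) {
            unsigned long long s = 0;
            for (unsigned int k = 1; k <= n; ++k)
                s += sigma[k] * P[n - k];
            P[n] = s / n;                           // the sum is an exact multiple of n
        }

        for (unsigned int n = 0; n <= N; ++n)
            std::printf("P(%u) = %llu\n", n, P[n]);
        return 0;
    }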

Kunnysan
  • 2,068
1

As explained in the last section of these notes, if you use MacMahon's recurrence (which comes from the pentagonal number theorem, as mentioned above), you can compute $p(n)$ knowing only about $2\sqrt{2n/3}$ of the previous terms.

http://math.berkeley.edu/~mhaiman/math172-spring10/partitions.pdf
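
A minimal sketch of that recurrence, $p(n)=\sum_{k\ge1}(-1)^{k+1}\left(p\bigl(n-\tfrac{k(3k-1)}{2}\bigr)+p\bigl(n-\tfrac{k(3k+1)}{2}\bigr)\right)$ with terms having negative arguments omitted (the cutoff $N$ below is an arbitrary choice):

    #include <cstdio>
    #include <vector>

    int main()
    {
        const unsigned int N = 100;                 // cutoff (arbitrary choice)
        std::vector<long long> p(N + 1, 0);
        p[0] = 1;
        for (unsigned int n = 1; n <= N; ++n) {
            long long s = 0;
            // generalized pentagonal numbers k(3k-1)/2 and k(3k+1)/2, k = 1, 2, ...
            for (unsigned int k = 1; k * (3 * k - 1) / 2 <= n; ++k) {
                unsigned int g1 = k * (3 * k - 1) / 2;
                unsigned int g2 = k * (3 * k + 1) / 2;
                long long sign = (k % 2 == 1) ? 1 : -1;
                s += sign * p[n - g1];
                if (g2 <= n) s += sign * p[n - g2];
            }
            p[n] = s;
        }
        for (unsigned int n = 0; n <= N; ++n)
            std::printf("p(%u) = %lld\n", n, p[n]);
        return 0;
    }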

Chetak
  • 21