Today a friend and I came up with the question of computing partitions of numbers, i.e.: given a number $n$, what is the number $p(n)$ of ways of writing $n$ as a sum of positive integers (where two sums that differ only in the order of their terms count as the same partition)?
We know of no formula for this. I am currently trying to write a brute-force (computational) approach to the problem, but I wanted to ask you about another, more sophisticated way of attacking it (still with a computer).
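For reference, the brute-force count can be done with a standard recursion (this is my own sketch, not necessarily what we have written so far): let $\mathrm{part}(n,k)$ be the number of partitions of $n$ into parts of size at most $k$; then $\mathrm{part}(n,k) = \mathrm{part}(n-k,k) + \mathrm{part}(n,k-1)$, and $p(n) = \mathrm{part}(n,n)$.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def part(n, k):
    """Number of partitions of n into parts of size at most k."""
    if n == 0:
        return 1   # the empty sum
    if k == 0:
        return 0   # no parts available, n > 0 unreachable
    if k > n:
        k = n      # parts larger than n are useless
    # Either use at least one part of size k, or use no part of size k.
    return part(n - k, k) + part(n, k - 1)

def p(n):
    return part(n, n)

print([p(n) for n in range(10)])  # [1, 1, 2, 3, 5, 7, 11, 15, 22, 30]
```

With memoization this runs in $O(n^2)$ subproblems and gives exact integer answers, so it is a useful baseline to check any approximate method against.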
It is well known that $$\sum_{n=0}^\infty p(n)x^n = \prod_{k=1}^\infty\frac{1}{1-x^k}$$ The idea we had is to proceed as follows:
- $\log\left(\prod_{k=1}^\infty\frac{1}{1-x^k}\right) = -\sum_{k=1}^\infty\log(1-x^k) = -\sum_{k=1}^N\log(1-x^k) + O(x^{N+1})$, where we used the Taylor expansion $\log(1-y) = -y + O(y^2)$ around $y = 0$ to bound the dropped terms.
- $\prod_{k=1}^\infty\frac{1}{1-x^k}\approx \exp\left(-\sum_{k=1}^N\log(1-x^k)\right) =: F(x)$ with some error term.
- Evaluate $F$ at several points $x$ close to zero (so that the higher-order terms are no longer relevant) and use polyfit (least squares) to recover the coefficient of $x^n$, i.e. $p(n)$.
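The three steps above can be sketched in numpy as follows (the truncation order `N`, the fit degree `DEG`, and the sample interval are my own ad hoc choices, not tuned values):

```python
import numpy as np

N = 20     # truncation order of the product
DEG = 10   # degree of the fitted polynomial

def F(x):
    # F(x) = exp(-sum_{k=1}^N log(1 - x^k)) = prod_{k=1}^N 1/(1 - x^k)
    k = np.arange(1, N + 1)
    return np.exp(-np.sum(np.log(1.0 - np.power.outer(x, k)), axis=-1))

# Sample close to zero so the neglected terms of order x^(N+1) are tiny.
xs = np.linspace(0.01, 0.08, 200)
ys = F(xs)

# np.polyfit returns highest-degree coefficients first; reverse so that
# coeffs[n] approximates p(n).
coeffs = np.polyfit(xs, ys, DEG)[::-1]
print(np.round(coeffs[:6]).astype(int))
```

In my quick experiments this recovers the first few $p(n)$, but the recovery of higher coefficients degrades fast: the monomial basis on a small interval near $0$ is badly conditioned, which is exactly why the choice of $N$, the fit degree, and the sample points in question 3 matters.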
Now my questions are the following:
- Would this method (or something similar) work?
- What would the error term in point (2.) look like?
- How should we choose $N$ and the points $x$ where we evaluate $F$ in step (3.) so that the final error is less than, say, $0.1$? Notice that this is closely related to the preceding question.
- What would be the complexity of the resulting algorithm?
- How are the $p(n)$ usually computed?
Thank you in advance for the help.