
To clear up confusion: I'm talking about finding the first $n$ primes in the sequence $p_1, p_2, \dots, p_n, \dots$ without being given this list beforehand. This is by no means a rigorous attempt at a proof, just a shower thought:

  1. It is widely known that the sieve of Eratosthenes has $O\big(n\log\log n\big)$ time complexity, where $n$, in this case, is the largest integer to check for primality.

  2. It is also known that the $n$th prime, $p_n$, satisfies the upper bound $$p_n < n(\ln n+\ln \ln n).$$ See Bounds for $n$-th prime, and let $f(n)=n(\ln n+\ln \ln n)$.

  3. From (1), we conclude that finding the first $n$ primes using that sieve has time complexity $O\big(p_n\log\log p_n\big)$.

  4. From (3) and (2), we conclude that finding the first $n$ primes using the sieve has an upper bound for its time complexity of $O\big(f(n)\log\log f(n)\big)$.
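The argument in (1)–(4) can be sketched directly: sieve up to the bound $f(n)$ and keep the first $n$ primes. This is a minimal sketch, not a tuned implementation; the function name `first_n_primes` and the small-$n$ fallback (the bound only holds for $n \ge 6$) are my own choices.

```python
import math

def first_n_primes(n):
    """Return the first n primes by sieving up to f(n) = n(ln n + ln ln n).

    The bound p_n < f(n) only holds for n >= 6, so fall back to a
    hard-coded list for tiny n.
    """
    if n < 6:
        return [2, 3, 5, 7, 11][:n]
    # Sieve limit from the upper bound on the nth prime.
    limit = int(n * (math.log(n) + math.log(math.log(n)))) + 1
    is_prime = [True] * (limit + 1)
    is_prime[0] = is_prime[1] = False
    for i in range(2, int(limit ** 0.5) + 1):
        if is_prime[i]:
            # Mark every multiple of i starting at i*i as composite.
            for j in range(i * i, limit + 1, i):
                is_prime[j] = False
    primes = [i for i, p in enumerate(is_prime) if p]
    return primes[:n]
```

The work done is the sieve's $O\big(f(n)\log\log f(n)\big)$ steps, which is exactly the quantity the question asks about.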

Expanding out the complexity in (4), ignoring the different bases of the logarithms, and then simplifying gives us: $$\big(n(\log n+\log \log n)\big)\log\Big(\log\big(n(\log n+\log \log n)\big)\Big) = n\log(n\log n)\,\log\big(\log(n\log(n\log n))\big)$$
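The simplification above rests on the identity $\log n + \log\log n = \log(n\log n)$, which follows from $\log a + \log b = \log(ab)$. A quick numeric spot-check (using natural logs, though any base works):

```python
import math

# Verify log(n) + log(log(n)) == log(n * log(n)) for a few values of n.
for n in [10, 100, 10**6]:
    lhs = math.log(n) + math.log(math.log(n))
    rhs = math.log(n * math.log(n))
    assert abs(lhs - rhs) < 1e-9
```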

Which, although weird as heck, still makes the complexity in (4) polynomial in $n$.

So then: does that mean the process of finding the first $n$ primes using the sieve of Eratosthenes is in the complexity class $P$, since its upper-bound time complexity appears (to me, a hobbyist) to be polynomial?

I've tried my best to explain but please ask questions if I haven't been clear enough.

Edit: The upper bound for the $n$th prime is only applicable for $n \geq 6$... but this doesn't change my argument much.

-- Thanks!

1 Answer


In computational complexity theory, whenever you ask whether something runs in polynomial time, you need to clarify what quantity the runtime is a polynomial of.

The most common definition of “polynomial time” refers to “a runtime that is polynomial in the number of bits it takes to write out the input to the function.” In the case of asking for the first $n$ primes, writing out the number $n$ takes roughly $\log_2 n$ bits. Let’s define $b$ to be the number of bits in the number $n$, where $b$ is approximately $\log_2 n$.
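The relationship between $n$ and $b$ can be seen directly; Python exposes the bit count via `int.bit_length()` (the value of `n` here is just an example):

```python
import math

n = 1_000_000
b = n.bit_length()                    # number of bits to write n
print(b)                              # prints 20
print(math.floor(math.log2(n)) + 1)   # same value: floor(log2 n) + 1
```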

Writing down the first $n$ primes requires writing down at least $\Omega(n)$ bits, since each prime is at least one bit long, and it takes one time step to write out a bit. Therefore, writing down the first $n$ primes requires $\Omega(2^b)$ time, which is not a polynomial in the size of the input.

You can “feel” this intuitively by thinking about what happens if you add a single bit to the end of the number $n$. This doubles $n$, and doubles the number of numbers you have to write out. So adding a single bit to the input can trigger a huge rise in the amount of time it takes to write down the answer.
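This doubling can be made concrete with a tiny sketch (the variable names are mine): appending one bit to $n$ is a left shift, optionally plus one, which at least doubles the value.

```python
n = 0b1011                   # 11 in binary
n_extended = (n << 1) | 1    # append a 1 bit: 0b10111 == 23
print(n, n_extended)         # prints 11 23
```

So one extra input bit at least doubles $n$, and with it the $\Omega(n)$-bit answer.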

The disconnect here is why we have a notion of pseudopolynomial time, where the runtime of an algorithm is a polynomial in the numeric value of $n$ but not the number of bits of $n$.