I know that the negative binomial distribution, denoted $NB(r, p)$, describes the number of failures before the $r$th success in a sequence of i.i.d. Bernoulli trials, with p.m.f. $P(X=k)= {k+r-1 \choose k}(1-p)^k p^r$ for $k = 0, 1, 2, \dots$. Suppose we have a sample $X_1, \dots, X_n$ of i.i.d. $NB(r, p)$ random variables. I have derived that, with $r$ known, the MLE for $p$ is $\hat{p}=\frac{r}{\bar{X}+r}$.
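For reference, my derivation is just the standard log-likelihood calculation: dropping the binomial coefficient terms (which do not involve $p$),
$$\ell(p) = nr\log p + \Big(\sum_{i=1}^n X_i\Big)\log(1-p) + \text{const}, \qquad \ell'(p) = \frac{nr}{p} - \frac{\sum_i X_i}{1-p} = 0 \;\Longrightarrow\; \hat{p} = \frac{nr}{\sum_i X_i + nr} = \frac{r}{\bar{X}+r}.$$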
Now I want to show that $\hat{p}$ is biased by finding a different estimator for $p$, say $\tilde{p}$, that is unbiased. I found a similar question here on StackExchange, but I'm wondering whether that answer still holds when we have a sample of $n$ negative binomial random variables rather than just a single observation. Thanks for any help.
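For what it's worth, here is a minimal Monte Carlo sketch (not a proof) that could be used to eyeball the bias of $\hat{p}$; the parameter values are arbitrary, and it relies on numpy's `negative_binomial`, which counts failures before the $r$th success, matching the p.m.f. above.

```python
import numpy as np

# Monte Carlo check: compare the average of p_hat = r / (X_bar + r)
# against the true p over many simulated samples.
rng = np.random.default_rng(0)
r, p, n = 5, 0.3, 10        # known r, true p, sample size (arbitrary choices)
n_reps = 100_000            # number of simulated samples

# numpy's negative_binomial returns the number of failures before
# the r-th success, which matches the p.m.f. in the question.
samples = rng.negative_binomial(r, p, size=(n_reps, n))
x_bar = samples.mean(axis=1)
p_hat = r / (x_bar + r)

print("true p:        ", p)
print("mean of p_hat: ", p_hat.mean())  # sits slightly above p, suggesting upward bias
```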