Recently, I have been studying the Multinomial Probability Distribution.
Suppose you go to a casino where there is a game that involves rolling a six-sided die. However, you are not told the probability that this die lands on any one of its sides - this raises your suspicions and leads you to believe that perhaps the die might not be fair, and therefore that it might not be worth playing this game. While you are still considering whether it's worth playing, you notice that the casino has a large screen television displaying the last $100$ numbers rolled on this die. Since you know that the counts of these outcomes follow a Multinomial Distribution, you can use this fact to estimate the probability of the die showing any given number, as well as the "spread" (i.e. variance) of each of these probability estimates.
Using Maximum Likelihood Estimation, I have been trying to derive the formulae for the parameters of the Multinomial Probability Distribution. In short, given an event $i$ (e.g. rolling the number $2$ on a die), the (very intuitive) estimate for the probability $p_i$ of this event is $$\hat{p}_{i,\text{MLE}} = \frac{n_{i}}{N}$$ where $n_{i}$ is the number of times that event $i$ appears and $N$ is the total number of events that were recorded. As always, probabilities are only defined between $0$ and $1$ - and indeed, since $0 \le n_i \le N$, these individual estimates of $p_{i}$ can never be greater than $1$ or less than $0$.
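As a quick sanity check, here is a small Python sketch of this estimate (the counts below are made up for illustration, standing in for the casino's last $100$ rolls):

```python
import numpy as np

# Hypothetical counts of each face (faces 1..6) from the last 100 rolls.
# These numbers are invented for illustration only.
counts = np.array([14, 31, 12, 15, 12, 16])
N = counts.sum()  # total number of recorded rolls (here 100)

# MLE for each face's probability: p_hat_i = n_i / N
p_hat = counts / N
print(p_hat)        # every entry lies in [0, 1]
print(p_hat.sum())  # and the estimates sum to 1
```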
Next, using the fact that the inverse of the Fisher Information gives the (asymptotic) variance of the MLE, I was able to work out a formula for the "variance of these probability estimates". In short, the variance of $\hat{p}_{i}$ is given by $$\text{var}(\hat{p}_{i,\text{MLE}}) = \frac{p_{i}^{2}}{n_{i}}$$
Finally, using the theory of Asymptotic Normality of the MLE, we can derive Confidence Intervals for each of these parameter estimates (i.e. each individual value of $p_{i}$). That is, you might have estimated the probability of rolling a $2$ on this die as $0.31$, with a $95\%$ Confidence Interval of $(0.28, 0.33)$ for that probability. We can construct a $95\%$ Confidence Interval for any of these probabilities as: $$p_{i} \pm 1.96 \cdot \sqrt{\frac{p_{i}^{2}}{n_{i}}}$$
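Putting the estimate, the variance formula, and the interval together, this is what I am computing (in Python, again with made-up counts; whether this is the right variance formula to use is exactly what I am unsure about):

```python
import numpy as np

# Invented counts from 100 hypothetical rolls (illustration only).
counts = np.array([14, 31, 12, 15, 12, 16])
N = counts.sum()
p_hat = counts / N

# Variance formula derived above from the Fisher Information: p_i^2 / n_i
var_hat = p_hat**2 / counts

# 95% Wald-style confidence interval: p_hat +/- 1.96 * sqrt(variance)
z = 1.96
lower = p_hat - z * np.sqrt(var_hat)
upper = p_hat + z * np.sqrt(var_hat)
for face, (lo, hi) in enumerate(zip(lower, upper), start=1):
    print(f"face {face}: ({lo:.3f}, {hi:.3f})")
```

Note that nothing in this construction forces the endpoints to stay inside $[0, 1]$, which is what prompted my question below.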
Question: I am worried that for certain values of $p_{i}$ and $n_{i}$, this expression $$p_{i} \pm 1.96 \cdot \left( \sqrt{\frac{p_{i}^{2}}{n_{i}}} \right)$$ might be greater than $1$ or less than $0$.
As an example, if $p_{i} = 0.9$ and $n_{i} = 16$, the range estimate for the probability exceeds $1$: $$0.9 + 1.96 \cdot \sqrt{\frac{0.9^2}{16}} = 0.9 + 1.96 \cdot 0.225 = 1.341 > 1$$
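Checking this numerically in Python:

```python
import math

# The example values from above: p_i = 0.9, n_i = 16
p_i, n_i = 0.9, 16
upper = p_i + 1.96 * math.sqrt(p_i**2 / n_i)
print(upper)  # approximately 1.341, which is greater than 1
```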
Have I done this correctly? Is it really possible for a probability value to be outside a range of $(0,1)$?
Thanks!
Note: I obviously think I have done something wrong, because I don't know much math - but out of the few things I do know, probabilities are never outside the range $[0,1]$.