Suppose that you are trying to estimate some quantity $\mu$ by performing a random experiment $X$ with mean $\mu$ and variance $\sigma^2$. To obtain a better estimate, you can repeat the experiment $n$ times and take the average $\bar{X} = \frac{X_1+\cdots+X_n}{n}$, which also has mean $\mu$ but whose variance is only $\sigma^2/n$. This means that
$$ \mathbb{E}[(\bar{X} - \mu)^2] = \frac{\sigma^2}{n}. $$
Moreover, under mild assumptions, the central limit theorem tells us that the distribution of $\bar{X}$ will be "close" to a normal distribution with mean $\mu$ and variance $\sigma^2/n$.
The standard deviation is the square root of the variance, namely $\sigma/\sqrt{n}$; it decreases like $1/\sqrt{n}$. The standard deviation is a standard measure of error. If you approximate the distribution of $\bar{X}$ by a Gaussian, then the width of the Gaussian scales like the standard deviation, and so it has inverse square root behavior as a function of the number of experiments.
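To see this scaling concretely, here is a minimal simulation sketch (the choice of distribution, the sample sizes, and the use of NumPy are illustrative assumptions on my part, not anything from the question):

```python
# Minimal sketch: the empirical standard deviation of the sample mean
# shrinks like sigma / sqrt(n) as the number of experiments n grows.
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 3.0, 2.0      # true mean and standard deviation of one experiment
trials = 10_000           # independent repetitions of the whole averaging procedure

for n in (1, 4, 16, 64, 256):
    # Each row is one run of the experiment repeated n times; average across columns.
    samples = rng.normal(mu, sigma, size=(trials, n))
    means = samples.mean(axis=1)
    print(f"n={n:4d}  empirical std of mean = {means.std():.4f}  "
          f"predicted sigma/sqrt(n) = {sigma / np.sqrt(n):.4f}")
```

For each $n$, the empirical standard deviation of the averages should match $\sigma/\sqrt{n}$ closely.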
In your case, the experiment $X$ has two possible answers, $0$ and $1$, and so its variance is $\mu(1-\mu) \leq 1/4$, and its standard deviation is at most $1/2$. Taking the average of $n$ experiments gives us a standard deviation of at most $1/(2\sqrt{n})$.
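For completeness, the variance bound follows from the fact that a $\{0,1\}$-valued variable satisfies $X^2 = X$:
$$ \mathrm{Var}(X) = \mathbb{E}[X^2] - \mu^2 = \mu - \mu^2 = \mu(1-\mu) \leq \frac{1}{4}, $$
with the maximum $1/4$ attained at $\mu = 1/2$.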
Therefore if you average over 100 iterations, your standard deviation will be at most $1/20$. This doesn't mean that your estimate will be within $1/20$ of the true value. No such guarantee is possible — if you're unlucky then all $X_i$'s will be equal to $0$ or all of them to $1$, one of which will result in an error of at least $1/2$. It means that the distribution of the output will look roughly like a standard Gaussian scaled down by a factor of 20, and centered around the true answer $\mu$.
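Here is a small sketch of that picture for $n = 100$ (the value $\mu = 0.3$ and the use of NumPy are my own illustrative assumptions): the sample means cluster around $\mu$ with standard deviation at most $1/20$, and roughly $95\%$ of runs land within two standard deviations of $\mu$, as the Gaussian approximation predicts.

```python
# Minimal sketch: averaging n = 100 {0,1}-valued experiments with success
# probability mu. The averages concentrate around mu with std <= 1/20.
import numpy as np

rng = np.random.default_rng(1)
mu, n, trials = 0.3, 100, 20_000

# Each row is one run of n = 100 coin flips with success probability mu.
flips = rng.random((trials, n)) < mu
means = flips.mean(axis=1)

print(f"empirical std of the average : {means.std():.4f}")
print(f"theoretical sqrt(mu(1-mu)/n) : {np.sqrt(mu * (1 - mu) / n):.4f}")
print(f"upper bound 1/(2*sqrt(n))    : {1 / (2 * np.sqrt(n)):.4f}")

# Rough check of the Gaussian picture: about 95% of runs should land
# within two standard deviations of mu.
within = np.mean(np.abs(means - mu) <= 2 * np.sqrt(mu * (1 - mu) / n))
print(f"fraction within 2 std of mu  : {within:.3f}")
```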