I have a question regarding no-regret algorithms in online learning. As far as I can see, such algorithms allow the absolute regret up to round $n$, denoted $R_n$, to grow like $\sqrt{n}$. So in the limit the per-round regret $R_n/n$ goes to $0$, which is what makes them no-regret algorithms. The problem is that I cannot see how to interpret the $\sqrt{n}$ (a small simulation illustrating this behaviour is sketched at the end of the question).

To clarify what I mean by "interpret", here is an example: I know that in a binary game the number of mistakes an algorithm makes is quite often bounded by $\log_2 n$. As an explanation one can think of a binary tree:
    n=0              {0,1}
                    /     \
    n=1        {0,1}       {0,1}
               /   \       /   \
    n=2      ...   ...   ...   ...
Since $\log_2(1) = 0$, $\log_2(2) = 1$, $\log_2(4) = 2$, and so on, one can link $n$ (the number of rounds, i.e. the depth of the tree) to the number of possible outcome sequences up to round $n$ (the $2^n$ leaves): $\log_2$ of the number of possibilities is exactly the number of rounds. Is there a similar link between $n$ and $\sqrt{n}$ in the real-valued setting? What does the $\sqrt{n}$ stand for?
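To show more concretely what kind of interpretation I am after: the way I understand the $\log_2$ bound, it is the repeated-halving picture above, as in the halving algorithm. Here is a minimal sketch, assuming one perfect expert hidden among $N = 1024$ otherwise random ones (all parameters are just for illustration). The majority vote can be wrong at most $\log_2 N = 10$ times, because every mistake eliminates at least half of the surviving experts, i.e. one level of the tree:

```python
import numpy as np

rng = np.random.default_rng(0)

n_experts = 1024   # 2**10 experts, so at most 10 mistakes should be possible
n_rounds = 200
truth = rng.integers(0, 2, size=n_rounds)                  # true binary outcomes
preds = rng.integers(0, 2, size=(n_experts, n_rounds))     # random experts ...
preds[0] = truth                                           # ... plus one perfect expert

alive = np.ones(n_experts, dtype=bool)   # experts still consistent with the past
mistakes = 0
for t in range(n_rounds):
    votes = preds[alive, t]
    guess = 1 if votes.mean() >= 0.5 else 0     # majority vote of the surviving experts
    mistakes += int(guess != truth[t])
    alive &= (preds[:, t] == truth[t])          # halving: drop every expert that was wrong

print(f"mistakes = {mistakes}, log2(#experts) = {int(np.log2(n_experts))}")
```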
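And here is the $\sqrt{n}$ behaviour I would like to interpret in the same way, as a minimal sketch assuming the Hedge / exponential-weights algorithm with $K$ experts and fair-coin losses (again, all parameters are just illustrative). The standard analysis gives $R_n \le \sqrt{(n/2)\ln K}$ for this step size, and with this loss sequence the best expert in hindsight is ahead of the field by roughly $\sqrt{n}$ purely by chance, so $R_n$ grows at the $\sqrt{n}$ rate while $R_n/\sqrt{n}$ stays bounded and $R_n/n$ shrinks:

```python
import numpy as np

rng = np.random.default_rng(1)

K = 10          # number of experts (illustrative)
n = 100_000     # number of rounds (illustrative)
# fair-coin losses in {0, 1}: every expert is equally good on average, but the best
# one in hindsight beats the average by about sqrt(n) purely by chance
losses = rng.integers(0, 2, size=(n, K)).astype(float)

eta = np.sqrt(8.0 * np.log(K) / n)   # standard tuning; gives R_n <= sqrt((n/2) ln K)
log_w = np.zeros(K)                  # log-weights, kept in log space for stability
learner_loss = 0.0
expert_loss = np.zeros(K)
checkpoints = {1_000, 10_000, 100_000}

for t in range(n):
    w = np.exp(log_w - log_w.max())
    p = w / w.sum()                    # current probability distribution over experts
    learner_loss += p @ losses[t]      # expected loss of the randomized learner
    expert_loss += losses[t]
    log_w -= eta * losses[t]           # exponential-weights update
    if t + 1 in checkpoints:
        m = t + 1
        R = learner_loss - expert_loss.min()   # regret against the best expert so far
        print(f"n={m:>7}  R_n={R:8.2f}  R_n/sqrt(n)={R/np.sqrt(m):6.3f}  R_n/n={R/m:.6f}")
```

Numerically I can see the $\sqrt{n}$ rate this way, but I am still missing a counting-style interpretation of it, analogous to the binary tree above.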