Given two integers $x$ and $n$ in binary representation, what is the complexity of computing the bit-size of $x^n$?
One way to do so is to compute $1+\lfloor \log_2(x^n)\rfloor=1+\lfloor n\log_2(x)\rfloor$ by computing an approximation of $\log_2(x)$ with sufficient precision. It appears that computing $\log_2(x)$ to $k$ bits of precision can be done in time $O(M(k)\log k)$, where $M(k)$ is the time needed to multiply two $k$-bit integers. This yields a (not especially simple) algorithm of complexity roughly $O(s\log^2 s)$, where $s$ is a bound on the bit-size of both $x$ and $n$ (if I have made no error).
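Just to make the method concrete (this is only a sketch, not the fast algorithm, and the function name `bitsize_of_power` is my own): one can enclose $n\log_2(x)$ in an interval, using e.g. mpmath's interval arithmetic, and double the working precision until the floor is determined unambiguously. The case where $x$ is a power of two is handled separately, since there $n\log_2(x)$ is an integer and no finite precision would settle the floor.

```python
from mpmath import iv, floor

def bitsize_of_power(x: int, n: int) -> int:
    """Bit-size of x**n, i.e. 1 + floor(n*log2(x)), for x >= 1 and n >= 0."""
    if x == 1 or n == 0:
        return 1                                   # x**n == 1 has bit-size 1
    if x & (x - 1) == 0:                           # x is a power of two: exact case
        return n * (x.bit_length() - 1) + 1
    prec = 64
    while True:
        iv.prec = prec                             # working precision in bits
        # Interval guaranteed to contain n*log2(x).
        val = n * iv.log(iv.mpf(x)) / iv.log(iv.mpf(2))
        lo, hi = int(floor(val.a)), int(floor(val.b))
        if lo == hi:                               # floor of n*log2(x) is pinned down
            return 1 + lo
        prec *= 2                                  # x**n too close to a power of two: refine

# Sanity check against the naive, exponential-size computation:
assert bitsize_of_power(12345, 67) == (12345 ** 67).bit_length()
```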
Can we beat $O(s\log^2 s)$, where $s$ is a bound on the sizes of $x$ and $n$ (in the case where they have comparable sizes)? Is there a simple algorithm achieving this complexity or better?
Note: I am interested in the complexity in a theoretical model such as Turing machines.