Calculators don't actually need to define logarithms -- it is enough that they can somehow compute a number that is within a desired tolerance of the true value of the logarithm. If a method achieves that, it doesn't matter whether it's based on a principled definition of the logarithm function or not.
Calculators that are based on binary floating point typically calculate $\log_2 x$ natively and then scale that to get logarithms to other bases.
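That scaling is just the change-of-base identity $\log_b x = \log_2 x / \log_2 b$, with the constant $1/\log_2 b$ stored once. A minimal Python sketch (the function names here are mine, not any calculator's firmware):

```python
import math

# Change of base: log_b(x) = log2(x) / log2(b).
# In a calculator these reciprocal constants would be stored, not recomputed.
INV_LOG2_10 = 1.0 / math.log2(10.0)
INV_LOG2_E = 1.0 / math.log2(math.e)

def log10_from_log2(log2_x: float) -> float:
    return log2_x * INV_LOG2_10

def ln_from_log2(log2_x: float) -> float:
    return log2_x * INV_LOG2_E
```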
First, multiply or divide by an appropriate integer power of $2$ so that the number lands in the interval $[1,2)$. (This is an extremely cheap operation with a binary floating-point representation.) Writing $x = 2^k \cdot m$ with $m \in [1,2)$, we have $\log_2 x = k + \log_2 m$ where $\log_2 m \in [0,1)$, so the integer $k$ supplies the integral part, and one only needs to approximate $\log_2 m$ on the interval $[1,2)$ to find the fractional part.
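Here is a minimal sketch of that range reduction in Python, leaning on `math.frexp` to pull the exponent out of the binary representation (a real calculator would do this directly on the bits of its own number format):

```python
import math

def split_log2(x: float) -> tuple[int, float]:
    """Write x = 2**k * m with m in [1, 2), so that log2(x) = k + log2(m)."""
    assert x > 0.0
    m, e = math.frexp(x)    # x = m * 2**e with m in [0.5, 1)
    return e - 1, 2.0 * m   # shift m into [1, 2) and adjust the exponent

# Example: 40 = 2**5 * 1.25, so log2(40) = 5 + log2(1.25)
k, m = split_log2(40.0)     # k == 5, m == 1.25
```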
Methods for this approximation vary. Sometimes it is done with a single polynomial -- which might be a Taylor approximation of $\log_2(1+t)$ (writing $m = 1+t$ with $t \in [0,1)$), or a hand-chosen (minimax) polynomial that minimizes the worst-case error over the entire interval $[1,2)$ with fewer terms than a Taylor polynomial would need.
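As an illustration of the Taylor route (not any particular calculator's polynomial), one can truncate $\log_2(1+t) = \frac{1}{\ln 2}\left(t - \frac{t^2}{2} + \frac{t^3}{3} - \cdots\right)$; a sketch in Python, with the number of terms purely illustrative:

```python
import math

INV_LN2 = 1.0 / math.log(2.0)

def log2_taylor(m: float, terms: int = 20) -> float:
    """Truncated Taylor series for log2(m), m in [1, 2).

    Accurate near m = 1 but slow to converge near m = 2, which is exactly
    why real implementations prefer minimax or piecewise polynomials."""
    t = m - 1.0
    total = 0.0
    sign = 1.0
    power = t
    for n in range(1, terms + 1):
        total += sign * power / n   # +t, -t^2/2, +t^3/3, ...
        sign = -sign
        power *= t
    return total * INV_LN2
```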
Some implementations use several approximating polynomials, one for each of several subintervals of $[1,2)$, selected by table lookup.
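A hedged sketch of that idea: split $[1,2)$ into, say, $16$ subintervals, store $\log_2 c$ for each subinterval's midpoint $c$, and expand about the midpoint, $\log_2 m \approx \log_2 c + \frac{m-c}{c\ln 2} - \frac{(m-c)^2}{2c^2\ln 2}$. The subinterval count and table here are illustrative; real firmware bakes in its own precomputed tables and coefficients.

```python
import math

N = 16                     # number of subintervals of [1, 2); illustrative
WIDTH = 1.0 / N
INV_LN2 = 1.0 / math.log(2.0)

# Midpoints of the subintervals and their log2 values; a calculator would
# store this table in ROM rather than compute it at startup.
CENTERS = [1.0 + (i + 0.5) * WIDTH for i in range(N)]
LOG2_CENTERS = [math.log2(c) for c in CENTERS]

def log2_piecewise(m: float) -> float:
    """Approximate log2(m) for m in [1, 2) with a per-subinterval expansion."""
    i = min(int((m - 1.0) * N), N - 1)    # which subinterval m falls in
    c = CENTERS[i]
    d = (m - c) / c                       # small: |d| <= 1/(2*N)
    # log2(m) = log2(c) + log2(1 + d) ~= log2(c) + (d - d*d/2)/ln 2
    return LOG2_CENTERS[i] + (d - 0.5 * d * d) * INV_LN2
```

Because $|m-c|$ is small on each subinterval, a degree-two expansion already gives far better accuracy than a single polynomial of the same degree over all of $[1,2)$.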
There are also CORDIC-like methods, where you try to express the reduced argument as a product of factors of the form $1\pm2^{-n}$ (multiplying by such a factor costs only a shift and an add!), and then add together the logarithms of the factors you used -- which are themselves looked up in a table baked into the calculator during manufacture.
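A sketch of that shift-and-add idea in Python, using only factors of the form $1+2^{-n}$ and a small table of their base-2 logarithms (which a calculator would store in ROM; here the table is computed with `math.log2` purely for illustration):

```python
import math

N = 30   # number of factor sizes; the error is roughly 2**-N
# Table of log2(1 + 2**-n); baked into ROM in a real device.
LOG_TABLE = [math.log2(1.0 + 2.0 ** -n) for n in range(1, N + 1)]

def log2_shift_add(m: float) -> float:
    """Approximate log2(m) for m in [1, 2) by peeling off factors (1 + 2**-n).

    Dividing by (1 + 2**-n) is a shift and a subtract in fixed-point
    hardware; ordinary floats stand in for that arithmetic here."""
    result = 0.0
    for n in range(1, N + 1):
        factor = 1.0 + 2.0 ** -n
        while m >= factor:
            m /= factor                 # peel off one factor...
            result += LOG_TABLE[n - 1]  # ...and add its logarithm
    return result                       # m is now within 2**-N of 1
```

Combined with the range reduction above, `split_log2(x)` followed by `log2_shift_add(m)` plus the integer $k$ reconstructs $\log_2 x$ to within about $2^{-N}$.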