Maths isn't my strong point, so bear with me...
Consider the expression 65536 ^ -23 and its value, in two forms:
Scientific Notation
1.66326556250318387496486473290910501884632684934011000036134769212750344872873130323634253270599878982347298639560762710202913347498385519593436371856666117959759724526678511903221928471314026125128684482119644679463422998200172742144786752760410308837890625 × 10^-111
Literal
0.00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000166326556250318387496486473290910501884632684934011000036134769212750344872873130323634253270599878982347298639560762710202913347498385519593436371856666117959759724526678511903221928471314026125128684482119644679463422998200172742144786752760410308837890625
So × 10^-111 in this case means that the literal value starts with 111 zeros: the single 0 to the left of the decimal point, plus 110 zeros immediately to the right of it before the first significant digit. In total it takes 369 digits to write out the whole value (that leading 0 plus 368 digits after the decimal point).
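In case it helps, here's a quick Python sketch I used to double-check the 369 count. It uses exact rational arithmetic (no rounding) and counts the factors of 2 and 5 in the denominator to find where the decimal expansion terminates:

```python
from fractions import Fraction

value = Fraction(1, 65536 ** 23)       # exact value of 65536 ^ -23

# A fraction terminates in decimal only if its denominator is made of
# 2s and 5s; the expansion then needs max(#2s, #5s) fractional digits.
den = value.denominator
twos = fives = 0
while den % 2 == 0:
    den //= 2
    twos += 1
while den % 5 == 0:
    den //= 5
    fives += 1
assert den == 1                        # otherwise it wouldn't terminate
frac_digits = max(twos, fives)         # 368 here, since 65536^23 == 2^368

# Scale to an integer and rebuild the literal string.
digits = str((value * 10 ** frac_digits).numerator).zfill(frac_digits)
literal = "0." + digits

print(frac_digits + 1)                 # 369: the leading 0 plus 368 fractional digits
```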
Is there any computable relationship between the original expression (65536 ^ -23) and the digit length of the literal result (369)? To put it another way: is there an algorithm that can determine, from the expression alone, how many digits are required to write out the result?
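For what it's worth, my rough guess at such a relationship (which I can't justify, hence the question) is sketched below. It rests on the observation that 65536 == 2 ^ 16, so 65536 ^ -23 == 2 ^ -368, and 2 ^ -k == 5^k / 10^k, which seems to always terminate after exactly k fractional digits. The names p, n, and k are just mine:

```python
import math

p, n = 16, 23                                  # base 2**p, exponent -n
k = p * n                                      # 368 fractional digits
total_digits = k + 1                           # plus the leading 0 -> 369
zeros = math.floor(k * math.log10(2)) + 1      # 111 zeros, matching the 10^-111

print(total_digits, zeros)                     # 369 111
```

This reproduces both numbers for this one example, but I have no idea whether it generalises to bases that aren't powers of 2.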