I came across this idea in a lecture on elementary topology. While the algebra makes sense, I'm hoping someone can shed some light on how this is geometrically possible.
You begin with a square of side length $1$ and four disks of diameter $1/2$ lying within the square, one in each corner, each tangent to the two sides meeting there. In the gap at the center there is another disk, of the maximum possible diameter $d$, tangent to all four corner disks. To find $d$, note that the diagonal of the square has length $\sqrt{2}$. If one extends the construction periodically along the diagonal direction, a disk of diameter $d$ fits in the gap at each corner as well, so the diagonal passes through half a center-sized disk, a corner disk, the center disk, a corner disk, and half a center-sized disk again, all tangent in a row. This gives $\sqrt{2} = 1 + 2d$. (The two corner disks along the diagonal contribute their full diameters, which sum to $1$.)
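Equivalently (this alternative check is mine, not from the lecture), one can compare centers directly: the corner disk is centered at $(1/4, 1/4)$ with radius $1/4$, the center disk at $(1/2, 1/2)$ with radius $d/2$, and tangency means the distance between the centers equals the sum of the radii:
$$ \left\| \left(\tfrac12, \tfrac12\right) - \left(\tfrac14, \tfrac14\right) \right\| = \frac{\sqrt{2}}{4} = \frac14 + \frac{d}{2} \;\;\Longrightarrow\;\; d = \frac{\sqrt{2}-1}{2}. $$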

Rearranging gives
$$ d = \frac{\sqrt{2}-1}{2} \approx 0.207. $$
It is not much of a step to see that this generalises to higher dimensions: in an $m$-dimensional unit cube with $2^m$ balls of diameter $1/2$, one in each corner, the only thing that changes is the length of the main diagonal, which becomes $\sqrt{m}$ by Pythagoras' theorem. The relation $\sqrt{m} = 1 + 2d_m$ then gives
$$ d_m = \frac{\sqrt{m} - 1}{2}, $$
where $m$ is the dimension of the ambient space and of all the balls.
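The same center-to-center check goes through in dimension $m$ (again, my own verification): the corner ball is centered at $(1/4, \dots, 1/4)$ and the central ball at $(1/2, \dots, 1/2)$, so
$$ \frac{\sqrt{m}}{4} = \frac14 + \frac{d_m}{2} \;\;\Longrightarrow\;\; d_m = \frac{\sqrt{m}-1}{2}. $$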
However, while the formula is sensible in low dimensions, it gives a strange result for $m > 9$, e.g.
$$ d_{10} = \frac{\sqrt{10} - 1}{2} \approx 1.081 > 1. $$
Here the diameter of the central ball exceeds the side length of the containing cube! How is this possible?
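For what it's worth, a quick numerical check (a minimal Python sketch of my own; `inner_ball_diameter` is just an illustrative name) confirms the values:

```python
from math import sqrt

def inner_ball_diameter(m: int) -> float:
    """Diameter of the central ball in the m-dimensional unit cube,
    tangent to the 2**m corner balls of diameter 1/2."""
    return (sqrt(m) - 1) / 2

for m in range(2, 13):
    d = inner_ball_diameter(m)
    note = "  <-- exceeds the side length!" if d > 1 else ""
    print(f"m = {m:2d}: d_m = {d:.4f}{note}")
```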