So for the last few years I have used Krylov subspace methods (mostly Conjugate Gradient) to solve various kinds of problems in science and engineering, but in all of these applications I have only needed to consider floating point arithmetic.
Double precision works but often seems like overkill; single precision does fine. Maybe even half precision would work (I hear it is often used when training neural networks on some platforms), although I don't have any hardware to try it on.
Well, now to the point: will Krylov subspace methods behave differently when working with fixed point numbers rather than floating point numbers? And if so, will I need to alter the algorithm somehow to ensure or speed up convergence?
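To make the question concrete, here is a rough sketch of the kind of experiment I have in mind: plain CG where every intermediate vector is re-quantized to a hypothetical fixed-point grid. The `fixed` helper and the `frac_bits` parameter are just my own stand-ins for real fixed-point hardware, not an actual format I am targeting:

```python
import numpy as np

def fixed(x, frac_bits):
    """Round x to a hypothetical fixed-point grid with 2**-frac_bits resolution."""
    scale = 2.0 ** frac_bits
    return np.round(x * scale) / scale

def cg(A, b, frac_bits=None, tol=1e-8, maxiter=1000):
    """Conjugate Gradient; if frac_bits is set, quantize after each update."""
    q = (lambda v: fixed(v, frac_bits)) if frac_bits is not None else (lambda v: v)
    x = np.zeros_like(b)
    r = q(b - A @ x)       # initial residual
    p = r.copy()           # initial search direction
    rr = r @ r
    for k in range(maxiter):
        Ap = q(A @ p)
        alpha = rr / (p @ Ap)
        x = q(x + alpha * p)
        r = q(r - alpha * Ap)
        rr_new = r @ r
        if np.sqrt(rr_new) < tol:
            break
        p = q(r + (rr_new / rr) * p)
        rr = rr_new
    return x, k + 1

# Quick comparison on a small, well-conditioned SPD system:
rng = np.random.default_rng(0)
M = rng.standard_normal((50, 50))
A = M @ M.T + 50 * np.eye(50)
b = rng.standard_normal(50)
_, it_float = cg(A, b)
_, it_fixed = cg(A, b, frac_bits=12)
print(f"float64: {it_float} iterations, 12-bit fixed point: {it_fixed} iterations")
```

In a toy run like this the quantized version typically stalls once the residual drops to the level of the quantization grid, which is exactly the kind of behaviour I would like to understand (and, if possible, work around) before committing to fixed point.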