
Over the last few years I have used Krylov subspace methods (mostly Conjugate Gradient) to solve various kinds of problems in science and engineering, but in all of these applications I have only needed to consider floating-point arithmetic.

Double precision works but often seems like overkill. Single precision does fine. Maybe even half precision would work (I hear it is often used when training neural networks on some platforms), although I don't have any hardware to try it on.

Now to the point: will Krylov subspace methods behave differently when working with fixed-point arithmetic rather than floating-point arithmetic? And if so, will I need to alter the algorithm somehow to ensure or speed up convergence?
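
To make the question concrete, here is a minimal sketch of what I mean: ordinary CG on a small SPD system, but with every intermediate result rounded to a fixed-point grid. The quantizer `fx` and the choice of `FRAC_BITS` are just assumptions for the experiment, not a real fixed-point library; a hardware implementation would also have to worry about overflow of the integer part, which this rounding-only model ignores.

```python
import numpy as np

FRAC_BITS = 16                  # assumed fractional bits of the fixed-point format
SCALE = 2.0 ** FRAC_BITS

def fx(x):
    """Round to the nearest representable fixed-point value."""
    return np.round(np.asarray(x) * SCALE) / SCALE

def cg_fixed(A, b, tol=1e-4, max_iter=100):
    """Conjugate Gradient with every arithmetic result quantized via fx()."""
    x = np.zeros_like(b)
    r = fx(b - fx(A @ x))       # residual, quantized after each operation
    p = r.copy()
    rs_old = fx(r @ r)
    for _ in range(max_iter):
        Ap = fx(A @ p)
        alpha = fx(rs_old / fx(p @ Ap))
        x = fx(x + fx(alpha * p))
        r = fx(r - fx(alpha * Ap))
        rs_new = fx(r @ r)
        if np.sqrt(rs_new) < tol:   # expect stagnation near the quantization
            break                   # floor; tol cannot go below it
        p = fx(r + fx(rs_new / rs_old) * p)
        rs_old = rs_new
    return x

# Small SPD test problem: 1-D Laplacian
n = 32
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
print(np.linalg.norm(A @ cg_fixed(A, b) - b))
```

With this kind of model one can vary `FRAC_BITS` and watch where the residual stops decreasing, which is essentially the question: does CG just stall at the quantization floor, or does the recurrence itself break down in a way that needs algorithmic changes?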

mathreadler
  • What were the results of a thorough search of existing literature? A quick perusal shows multiple publications and at least one patent application that discuss CG using fixed-point arithmetic. Issues mentioned are those typical for most non-trivial uses of fixed-point arithmetic: (1) quantization effects requiring careful scaling (2) dynamic range limitations requiring different fixed-point formats at different stages. – njuffa Apr 28 '20 at 18:13
  • @njuffa Those patents are quite uninteresting. Nothing of substance that would impress anyone. – mathreadler Apr 28 '20 at 19:17

0 Answers