I came across this article on deep learning for computational MRI and, in Section III "Physics-driven ML methods in computational MRI", subsection "Generative models", found a sentence that made me think: "However, early stopping has to be performed to not overfit to the noisy measurements." Apart from early stopping, are there other techniques in the deep learning community for avoiding overfitting to noise in the data, such as a "noise penalty term" in the loss function? I haven't been able to find any results that address this issue directly.
I am vaguely imagining something like including an SNR term in the loss function, but I am not sure whether that works, or how far it is from being optimal and elegant. Any updates in the field or pointers to existing literature would be a great help!
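To make the question concrete, here is a minimal sketch of the kind of "noise penalty" I have in mind, in an untrained-network setting similar to the one the quoted sentence refers to. Everything here (the toy forward operator, the network, the noise level `sigma`, the weight `lam`) is made up for illustration and is not taken from the paper:

```python
# Toy PyTorch sketch of a "noise penalty" added to the data-consistency loss.
import torch
import torch.nn as nn

torch.manual_seed(0)

n, m = 64, 32
A = torch.randn(m, n) / n ** 0.5          # toy undersampled forward operator
x_true = torch.randn(n)                    # unknown ground truth
sigma = 0.1                                # measurement noise std, assumed known or estimated
y = A @ x_true + sigma * torch.randn(m)    # noisy measurements

# Small untrained network with a fixed random input, in the spirit of the
# generative / deep-image-prior setting the quoted sentence refers to.
net = nn.Sequential(nn.Linear(n, 128), nn.ReLU(), nn.Linear(128, n))
z = torch.randn(n)

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
lam = 2.0                                  # weight of the noise penalty (hand-picked)

for it in range(2000):
    opt.zero_grad()
    x_hat = net(z)
    data_fit = (A @ x_hat - y).pow(2).mean()
    # Penalize fitting *below* the expected noise floor sigma^2, so the residual
    # is kept near the noise level instead of being driven to zero
    # (a crude, discrepancy-principle-like stand-in for an "SNR term").
    noise_penalty = torch.relu(sigma ** 2 - data_fit)
    loss = data_fit + lam * noise_penalty
    loss.backward()
    opt.step()
```

With `lam > 1` the gradient flips sign once the residual drops below `sigma ** 2`, so optimization settles around the noise level rather than fitting the noise exactly, which is roughly what early stopping is meant to achieve. What I don't know is whether anything along these lines has actually been studied for computational MRI, or whether it works better than early stopping in practice.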