Noise addition is not the only thing that needs to be done to obtain differential privacy: you also need to bound the contribution of each user's gradient update, and calibrate the noise to that bound.
If a user can have an arbitrarily large influence on the ML model, because their gradient update is unbounded, then there is no way to know how much noise must be added afterwards to hide that arbitrarily large contribution. So no, you can't just add noise to the model after training is completed and obtain DP, or at least not in a straightforward way.
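To make the clip-then-noise idea concrete, here is a minimal numpy sketch of one DP-SGD-style aggregation step. The function name and constants are illustrative, not from any DP library; a real implementation (e.g. Opacus or TensorFlow Privacy) also has to do the privacy accounting that turns the noise multiplier into an (ε, δ) guarantee:

```python
import numpy as np

def dp_sgd_step(per_example_grads, clip_norm, noise_multiplier, rng):
    """One illustrative DP-SGD aggregation step.

    Each per-example gradient is clipped to L2 norm <= clip_norm, so no
    single example can contribute more than clip_norm to the sum. Gaussian
    noise is then calibrated to that bound: its standard deviation is
    noise_multiplier * clip_norm. Without the clipping step, there would be
    no finite bound to calibrate the noise against.
    """
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        scale = min(1.0, clip_norm / (norm + 1e-12))  # shrink only if too large
        clipped.append(g * scale)
    total = np.sum(clipped, axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=total.shape)
    return (total + noise) / len(per_example_grads)
```

The key point is the order of operations: the clipping establishes a sensitivity bound first, and only then can the Gaussian noise be scaled to mask any one example's contribution.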
Alternative approaches to DP-SGD exist, like PATE, which kind of corresponds to your idea of "adding noise after the training is completed", but that method involves multiple "teacher" ML models "voting" in a private way to train a "student" model, so it's more complicated than simply adding noise to the model.
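The core of PATE's private voting can be sketched in a few lines. This is a simplified illustration (the noise scale and function name are mine, and real PATE uses careful privacy accounting over many queries): each teacher predicts a label, the per-class vote counts get Laplace noise, and the noisy winner becomes the training label for the student.

```python
import numpy as np

def noisy_teacher_vote(teacher_predictions, num_classes, epsilon, rng):
    """PATE-style noisy aggregation (illustrative sketch).

    Count how many teachers voted for each class, add Laplace noise to the
    counts, and return the noisy argmax. The student model never sees the
    teachers or their training data, only these noisy labels.
    """
    counts = np.bincount(teacher_predictions, minlength=num_classes).astype(float)
    counts += rng.laplace(0.0, 2.0 / epsilon, size=num_classes)  # scale is illustrative
    return int(np.argmax(counts))
```

Note that the noise is added to vote counts during label generation, not to the final model's weights, which is why PATE only "kind of" matches the intuition of adding noise after training.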