I wonder why we usually work with a constant step size in the time discretization of numerical approximations. If we take a discretization that is not necessarily constant, do the numerical schemes (such as Euler, Runge–Kutta 4, Euler–Maruyama, Milstein, ...) still remain valid?
For example, the Euler–Maruyama method:
one takes an equidistant mesh dividing the time interval [0, T] into N subintervals of equal length, which means a constant discretization. What would happen if we worked with a non-constant one?
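To make the question concrete, here is a minimal sketch (my own illustration, not from any textbook) of what an Euler–Maruyama step could look like on an arbitrary, possibly non-uniform time grid for a scalar SDE dX = a(X) dt + b(X) dW. The only change from the equidistant case is that the local step size Δt_n = t_{n+1} − t_n varies, and the Brownian increment is drawn as N(0, Δt_n); the function and variable names are mine.

```python
import math
import random

def euler_maruyama_nonuniform(a, b, x0, t_grid, rng=None):
    """Euler-Maruyama on an arbitrary (possibly non-uniform) time grid.

    a, b   : drift and diffusion coefficients of dX = a(X) dt + b(X) dW
    t_grid : increasing list of time points t_0 < t_1 < ... < t_N
    """
    rng = rng or random.Random(0)
    x = x0
    path = [x0]
    for t0, t1 in zip(t_grid[:-1], t_grid[1:]):
        dt = t1 - t0                         # local step size, need not be constant
        dW = rng.gauss(0.0, math.sqrt(dt))   # Brownian increment ~ N(0, dt)
        x = x + a(x) * dt + b(x) * dW
        path.append(x)
    return path

# Example: geometric Brownian motion dX = mu*X dt + sigma*X dW
# simulated on a deliberately non-uniform grid.
mu, sigma = 0.1, 0.2
t_grid = [0.0, 0.05, 0.15, 0.3, 0.5, 0.75, 1.0]   # unequal step sizes
path = euler_maruyama_nonuniform(lambda x: mu * x,
                                 lambda x: sigma * x,
                                 1.0, t_grid)
print(len(path))
```

So the scheme is at least *definable* on a non-uniform mesh; my question is whether the convergence results (e.g. strong order 1/2 for Euler–Maruyama) still hold, say with the maximal step size max_n Δt_n playing the role of the uniform step size h.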
I would like to see this from the point of view of numerical analysis.
Thank you in advance for a clear, simple and detailed answer.