All the well-known methods are designed to work for scalar equations as well as for vector-valued ODEs, i.e., ODE systems. That is, for the application it makes no difference whether the $x$ in $\dot x=f(x)$ is a scalar or a vector, with $f$ correspondingly a scalar function or the right-hand side of a system of differential equations.
This property is not universal. There are Runge-Kutta methods whose error order differs between the two settings, the scalar variant having a higher order than the vector variant. The reason is that in the scalar case all derivatives are scalar-valued and thus commute. In the system case the derivatives are tensors, vector-valued symmetric multilinear forms, for which commutativity does not make sense in general. For example, $f'(x)f''(x)[f(x),f(x)]$ differs from $f''(x)[f'(x)f(x),f(x)]$. Expressed in partial derivatives with the summation convention,
$$
\frac{\partial f_i}{\partial x_j}
\frac{\partial^2f_j}{\partial x_k\partial x_l} f_kf_l
~~\text{ vs. }~~
\frac{\partial^2f_i}{\partial x_j\partial x_k}
\frac{\partial f_j}{\partial x_l} f_lf_k
$$
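This can be checked numerically. A minimal sketch, using a hypothetical concrete field $f(x,y)=(y^2,\,xy)$ (my choice, not from the text) with hand-computed Jacobian $J$ and Hessian tensor $H$, evaluated at the arbitrary point $(1,2)$:

```python
# Hypothetical 2D example field f(x, y) = (y^2, x*y), chosen to show that
# the two third-order elementary differentials above really differ.

def f(x, y):
    return [y * y, x * y]

def J(x, y):
    # J[i][j] = d f_i / d x_j
    return [[0.0, 2.0 * y],
            [y, x]]

def H(x, y):
    # H[i][j][k] = d^2 f_i / (d x_j d x_k), symmetric in j, k
    return [[[0.0, 0.0], [0.0, 2.0]],   # Hessian of f_0 = y^2
            [[0.0, 1.0], [1.0, 0.0]]]   # Hessian of f_1 = x*y

def matvec(A, v):
    return [sum(A[i][j] * v[j] for j in range(2)) for i in range(2)]

def bilinear(T, u, v):
    # contraction T[i][j][k] u_j v_k over j and k
    return [sum(T[i][j][k] * u[j] * v[k] for j in range(2) for k in range(2))
            for i in range(2)]

x, y = 1.0, 2.0
fv, Jv, Hv = f(x, y), J(x, y), H(x, y)

a = matvec(Jv, bilinear(Hv, fv, fv))   # f'(x) f''(x)[f, f]
b = bilinear(Hv, matvec(Jv, fv), fv)   # f''(x)[f'(x) f, f]
print(a, b)   # two distinct vectors
```

Running this prints two different vectors, confirming that swapping the order of the derivative factors changes the result once $f$ is vector-valued.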
Or one order higher,
$$
f'f''[f'f,f],~~ f''[f'f'f,f]~~\text{and}~~ f''[f'f,f'f]
$$
are all different in the non-scalar case. This means that order conditions on the method coefficients that are distinct in the vector case can collapse into a single condition when derived strictly for the scalar case.
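The fourth-order terms can be checked the same way. A self-contained sketch, reusing the same hypothetical field $f(x,y)=(y^2,\,xy)$ at $(1,2)$ (my choice of example, not from the text), with its Jacobian and Hessian written out as constants:

```python
# The three elementary differentials f'f''[f'f,f], f''[f'f'f,f] and
# f''[f'f,f'f] evaluate to three distinct vectors for the hypothetical
# field f(x, y) = (y^2, x*y), so each needs its own order condition.

def matvec(A, v):
    return [sum(A[i][j] * v[j] for j in range(2)) for i in range(2)]

def bilinear(T, u, v):
    return [sum(T[i][j][k] * u[j] * v[k] for j in range(2) for k in range(2))
            for i in range(2)]

fv = [4.0, 2.0]                        # f at (x, y) = (1, 2)
Jv = [[0.0, 4.0], [2.0, 1.0]]          # Jacobian at (1, 2)
Hv = [[[0.0, 0.0], [0.0, 2.0]],        # Hessian tensor at (1, 2)
      [[0.0, 1.0], [1.0, 0.0]]]

w = matvec(Jv, fv)                     # f'f
t1 = matvec(Jv, bilinear(Hv, w, fv))   # f'f''[f'f, f]
t2 = bilinear(Hv, matvec(Jv, w), fv)   # f''[f'f'f, f]
t3 = bilinear(Hv, w, w)                # f''[f'f, f'f]
print(t1, t2, t3)                      # three distinct vectors
```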
The most famous example is the 5-stage method of order 5 that Martin Wilhelm Kutta gave in his original 1901 paper. Its order 5 holds only for scalar equations; a 5-stage explicit method of order 5 is impossible in the system case.
In summary:
Any change in the coefficients gives a different method. Any claim to the contrary is wrong; it is an error either in the naming of the method or in the coefficients presented.
Any method has to be at least zero-order consistent: integrating $\dot x=f(x)=1$ has to reproduce $x(t)=t+x_0-t_0$ without any error. This implies that the weights combining the stage results into the step result must sum to $1$, and the value $\frac16(1+1+1+1)=\frac23$ implied by the presented source is not one.
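This check can be sketched in a few lines. For $f\equiv 1$ every stage value $k_i$ equals $1$, so a step $x_1 = x_0 + h\sum_i b_i k_i$ is exact exactly when $\sum_i b_i = 1$. The weight vectors below are the classical RK4 weights and, for contrast, the uniform weights $\frac16(1,1,1,1)$ that the flawed source effectively uses:

```python
# One step of a 4-stage Runge-Kutta update x1 = x0 + h * sum(b_i * k_i)
# applied to f(x) = 1, where every stage value k_i is exactly 1.

def step(b, x0, h):
    k = [1.0, 1.0, 1.0, 1.0]           # all stages of f(x) = 1 equal 1
    return x0 + h * sum(bi * ki for bi, ki in zip(b, k))

h = 0.5
good = step([1/6, 2/6, 2/6, 1/6], 0.0, h)  # classical RK4 weights, sum = 1
bad = step([1/6, 1/6, 1/6, 1/6], 0.0, h)   # uniform weights, sum = 2/3
print(good, bad)   # good ~ h (exact step), bad ~ (2/3)*h (wrong already at order 0)
```

The first step reproduces $x(h)=h$ up to rounding, while the second falls short by the factor $\frac23$, confirming the failed consistency condition.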