From here - Batch Opening of KZG PCS
One can prove multiple evaluations $(\phi(e_i) = y_i)_{i\in I}$, for arbitrary points $e_i$, using a constant-sized KZG batch proof, $\pi_I = g^{q_I(\tau)}$, where
\begin{align} \label{eq:batch-proof-rel} q_I(X) &= \frac{\phi(X)-R_I(X)}{A_I(X)}\\ A_I(X) &= \prod_{i\in I} (X - e_i)\\ R_I(e_i) &= y_i, \quad \forall i\in I \end{align}
$R_I(X)$ can be interpolated via Lagrange interpolation in $O(\vert I\vert\log^2{\vert I\vert})$ time as:
\begin{align} R_I(X)=\sum_{i\in I} y_i \prod_{j\in I,j\ne i}\frac{X - e_j}{e_i - e_j} \end{align}
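For concreteness, here is a minimal Python sketch of this interpolation in its naive $O(\vert I\vert^2)$ form, over a small made-up prime field $p = 97$ with made-up points $e_i$ and values $y_i$ (the page's $O(\vert I\vert\log^2{\vert I\vert})$ bound refers to a faster divide-and-conquer variant, not shown here):

```python
# Illustrative only: naive O(|I|^2) Lagrange interpolation over F_p.
# The prime p and the points/values below are made up for this example.
p = 97
E = [2, 5, 11]          # evaluation points e_i
Y = [10, 3, 42]         # claimed values y_i = phi(e_i)

def poly_mul(a, b):
    # multiply two polynomials (coefficients lowest degree first) mod p
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % p
    return out

def lagrange(E, Y):
    # R_I(X) = sum_i y_i * prod_{j != i} (X - e_j) / (e_i - e_j)
    R = [0] * len(E)
    for i, (ei, yi) in enumerate(zip(E, Y)):
        num, denom = [1], 1
        for j, ej in enumerate(E):
            if j != i:
                num = poly_mul(num, [(-ej) % p, 1])   # times (X - e_j)
                denom = denom * (ei - ej) % p
        coeff = yi * pow(denom, -1, p) % p
        for k, c in enumerate(num):
            R[k] = (R[k] + coeff * c) % p
    return R  # coefficients of R_I, lowest degree first

R = lagrange(E, Y)
# sanity check: R_I(e_i) = y_i for every i
assert all(sum(c * pow(e, k, p) for k, c in enumerate(R)) % p == y
           for e, y in zip(E, Y))
```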
My question: why is Lagrange interpolation needed to find $R_I(X)$?
All the $e_i$'s are known, so $A_I(X)$ is a known polynomial. If you use long division to divide $\phi(X)$ by $A_I(X)$, you get $q_I(X)$ as the quotient with $R_I(X)$ as the remainder, as the sketch below illustrates. So why is Lagrange interpolation needed here?
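To make the point concrete, here is a minimal sketch (reusing `p`, `E`, and `poly_mul` from the snippet above, plus a made-up $\phi(X)$) showing that schoolbook long division of $\phi(X)$ by $A_I(X)$ produces a remainder that agrees with $\phi$ at every $e_i$, i.e. it equals $R_I(X)$:

```python
# Illustrative sketch: divide a made-up phi(X) by A_I(X) = prod (X - e_i)
# over F_p and check the remainder interpolates (e_i, phi(e_i)).
def poly_divmod(num, den):
    # schoolbook polynomial division over F_p, coefficients lowest degree first
    num = num[:]
    q = [0] * max(1, len(num) - len(den) + 1)
    inv_lead = pow(den[-1], -1, p)
    for k in range(len(num) - len(den), -1, -1):
        q[k] = num[k + len(den) - 1] * inv_lead % p
        for j, dj in enumerate(den):
            num[k + j] = (num[k + j] - q[k] * dj) % p
    return q, num[:len(den) - 1]   # quotient, remainder

phi = [7, 0, 1, 5, 3]              # made-up phi(X), degree 4
A = [1]
for e in E:
    A = poly_mul(A, [(-e) % p, 1]) # A_I(X) = prod_{i in I} (X - e_i)

q_I, R_I = poly_divmod(phi, A)

def ev(poly, x):
    return sum(c * pow(x, k, p) for k, c in enumerate(poly)) % p

# Since A_I(e_i) = 0, phi(e_i) = q_I(e_i) * A_I(e_i) + R_I(e_i) = R_I(e_i),
# so the remainder agrees with phi at every e_i.
assert all(ev(R_I, e) == ev(phi, e) for e in E)
```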
Further down, the page also says that $A_I(X)$ is interpolated. Again, why?