I have the following system of differential equations: \begin{equation} \frac{d}{dt} \, \begin{pmatrix} x_1 (t) \\ x_2 (t) \\ x_3(t) \\ x_4(t) \\ \vdots \\ x_M (t) \end{pmatrix} = \begin{pmatrix} f_1(t) & \epsilon_R & 0 & 0 & 0 & \ldots & 0 &0 \\ \epsilon_L & f_2(t) & \epsilon_R & 0 & 0 & \ldots & 0 & 0 \\ 0 & \epsilon_L & f_3(t) & \epsilon_R & 0 & \ldots & 0 &0 \\ 0 & 0 & \epsilon_L & f_4(t) & \epsilon_R & \ldots & 0 &0\\ \vdots & \vdots & \vdots & \vdots & & \ldots & \vdots & \vdots \\ 0 & 0 & 0 & 0 & 0 & \ldots & \epsilon_L & f_M(t) \end{pmatrix} \, \begin{pmatrix} x_1 (t) \\ x_2(t) \\ x_3(t) \\ x_4(t) \\ \vdots \\ x_M(t) \end{pmatrix} , \qquad\qquad \big( \text{assuming} \,\,\epsilon_R, \epsilon_L \ll f_i(t) \big), \end{equation} with a given initial condition $\mathbf{x}(0)$. In matrix notation, this can be written as $\frac{d}{dt}\mathbf{x}(t) = \mathbf{A}(t) \, \mathbf{x}(t)$ with $$A_{ij}=f_i(t) \,\delta_{ij} + \epsilon_R \, \delta_{i+1,j} + \epsilon_L \, \delta_{i-1,j} $$ for $(i\neq 1, M)$ and $A_{1j}=f_1(t) \, \delta_{1j}+\epsilon_R \delta_{2j}$ and $A_{Mj}=f_M(t) \, \delta_{Mj}+\epsilon_L \delta_{M-1,j}$. The assumption $\epsilon_L \sim \epsilon_R \ll f_i(t)$ for all $i$ and $t$ means that $\mathbf{A}$ is a diagonally dominant tridiagonal matrix. (Note that $\epsilon$'s are constant over time.)
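For the checks below it is convenient to have a numerical reference solution. Here is a minimal sketch; the specific $M$, $f_i(t)$, $\epsilon$'s, and $\mathbf{x}(0)$ are just illustrative choices.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (all arbitrary choices)
M = 5
eps_R, eps_L = 1e-3, 2e-3
f = [lambda t, k=k: 1.0 + 0.1 * k * np.cos(t) for k in range(M)]  # f_i(t)

def A(t):
    """Tridiagonal A(t): f_i(t) on the diagonal, eps_R above, eps_L below."""
    out = np.diag([fk(t) for fk in f])
    out += eps_R * np.diag(np.ones(M - 1), k=1)
    out += eps_L * np.diag(np.ones(M - 1), k=-1)
    return out

x0 = np.ones(M)
sol = solve_ivp(lambda t, x: A(t) @ x, (0.0, 2.0), x0, rtol=1e-10, atol=1e-12)
print(sol.y[:, -1])  # "ground truth" x(t=2) to compare expansions against
```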
Let's assume we can write the solution to this system as $$x_i(t) = G_i [f_1,f_2,\ldots,f_M; \epsilon_L, \epsilon_R] (t)$$ for each $1 \leq i \leq M$, where the $G_i$'s are functionals of all the diagonal elements, with $\epsilon_L$ and $\epsilon_R$ as parameters.
Main Question: What I want to obtain is a perturbative expansion of the following functional derivative $$ R_{ij}(t,\tau) = \frac{\delta G_i \big[f_1,\ldots,f_M; \, \epsilon_L,\epsilon_R \big](t)}{\delta f_j(\tau)}, $$ in powers of $\epsilon_R$ and $\epsilon_L$. In the physics literature, $R_{ij}$ represents the response matrix of the dynamical system defined by the above linear ODEs. I believe an analytical expression for $R_{ij}$ is not possible in the general case (related to this unanswered question). However, I only want to obtain a perturbative expression in powers of $\epsilon_L$ and $\epsilon_R$. I wonder whether this can be done in a systematic way.
Below are a few special cases that can be worked out. I am not that well-versed in linear algebra and have a feeling that the following results can be obtained more succinctly. I would appreciate any input.
SPECIAL CASE [I]: $\epsilon_R = \epsilon_L = 0$
This is rather trivial since the matrix becomes diagonal. Let's call this diagonal matrix $\mathbf{A}^{(0)}$ with elements $A^{(0)}_{ij}(t) = \delta_{ij} \, f_i(t)$. The solutions read $x_i(t) = x_i(0)\exp\big({\int_0^t du \, f_i(u)}\big)$, or in matrix notation $$\mathbf{x}(t) = e^{\int_0^t du \, \mathbf{A}^{(0)}(u)} \, \mathbf{x}(0). $$ This way of writing the solution is only possible because $\mathbf{A}^{(0)}$ is diagonal and so the commutator between any two times vanishes: $[ \mathbf{A}^{(0)}(t_1),\mathbf{A}^{(0)}(t_2)]= \mathbf{A}^{(0)}(t_1) \mathbf{A}^{(0)}(t_2) - \mathbf{A}^{(0)}(t_2)\mathbf{A}^{(0)}(t_1) = 0$.
Taking a functional derivative then yields $$R_{ij}(t,\tau) = \delta_{ij} \, \theta(t-\tau) \, x_i(t), $$ where $\theta$ is the Heaviside step function.
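As a sanity check, one can verify this numerically by giving $f_i$ a small rectangular bump of height $h$ and width $\Delta$ at time $\tau$ and forming the difference quotient. A minimal sketch (the $f_i$, $t$, $\tau$ below are illustrative):

```python
import numpy as np
from scipy.integrate import quad

# Illustrative choices
f = lambda u: 1.0 + 0.3 * np.sin(u)   # a single diagonal entry f_i(u)
x0, t, tau = 2.0, 3.0, 1.2            # initial value, observation time, kick time
h, width = 1e-6, 1e-3                 # bump height and width

F = lambda a, b: quad(f, a, b)[0]     # \int_a^b f(u) du
x_unpert = x0 * np.exp(F(0.0, t))
x_pert = x0 * np.exp(F(0.0, t) + h * width)  # bump adds h*width to the integral

R_numeric = (x_pert - x_unpert) / (h * width)
R_formula = x_unpert                  # theta(t - tau) * x_i(t), here tau < t
print(R_numeric, R_formula)           # should agree up to O(h*width)
```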
SPECIAL CASE [II]: $f_i(t)=f_i$
Assuming the $f_i$'s are constant in time, so that $\mathbf{A}$ is time-independent, the solution can be written as $$\mathbf{x}(t) = e^{t \mathbf{A}} \, \mathbf{x}(0). $$ I have seen a few posts like this that discuss the exponential of tridiagonal matrices, for which it appears a closed form is not available. I still think a perturbative expansion could be possible. Splitting $\mathbf{A} = \mathbf{D} + \mathbf{E}$ into its diagonal part $\mathbf{D}$ and off-diagonal part $\mathbf{E}$, and keeping at most one insertion of $\mathbf{E}$, i.e. $\mathbf{A}^n = \mathbf{D}^n + \sum_{k=0}^{n-1} \mathbf{D}^{k} \, \mathbf{E} \, \mathbf{D}^{n-1-k} + O(\epsilon^2)$, a brute-force calculation of powers of $\mathbf{A}$ yields: \begin{equation} (\mathbf{A}^n)_{ij} = \delta_{ij} \, f_i^n + \delta_{i+1,j} \, \epsilon_R \, \frac{f_i^n - f_{i+1}^n}{f_i - f_{i+1}} + \delta_{i-1,j} \, \epsilon_L \, \frac{f_i^n - f_{i-1}^n}{f_i - f_{i-1}} + O(\epsilon^2). \end{equation} In case of $f_{i+1}=f_i$ or $f_{i-1}=f_i$, the corresponding factor multiplying $\epsilon$ in this expression needs to be replaced with its limit, $n \, f_i^{n-1}$. Summing the exponential series with this first-order expression, it is straightforward to show \begin{equation} \big(e^{t\mathbf{A}}\big)_{ij} = \delta_{ij} \, e^{tf_i} + \delta_{i+1,j} \, \epsilon_R \bigg( \frac{e^{tf_i}- e^{tf_{i+1}} }{f_i - f_{i+1}} \bigg) + \delta_{i-1,j} \, \epsilon_L \bigg( \frac{e^{tf_i}- e^{tf_{i-1}} }{f_i - f_{i-1}} \bigg) + O\big((t\epsilon)^2\big), \end{equation} and if $f_{i\pm1}=f_i$, the corresponding factor is replaced by $t \, e^{tf_i}$. From this expression, it is straightforward (but somewhat cumbersome) to obtain the $\tau$-integrated response, since for constant $f_j$ the functional derivative reduces to $\int_0^t d\tau \, R_{ij}(t,\tau) = \partial x_i(t) / \partial f_j$.
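A quick numerical check of this first-order expression against `scipy.linalg.expm` (with arbitrary distinct $f_i$):

```python
import numpy as np
from scipy.linalg import expm

# Arbitrary test parameters with distinct f_i
M, t = 5, 0.7
eps_R, eps_L = 1e-4, 2e-4
f = np.array([1.0, 1.5, -0.5, 2.0, 0.3])

A = np.diag(f) + eps_R * np.diag(np.ones(M - 1), 1) \
              + eps_L * np.diag(np.ones(M - 1), -1)
E = expm(t * A)

# First-order prediction for the superdiagonal entries (e^{tA})_{i,i+1}
i = np.arange(M - 1)
pred = eps_R * (np.exp(t * f[i]) - np.exp(t * f[i + 1])) / (f[i] - f[i + 1])
print(np.max(np.abs(np.diag(E, 1) - pred)))  # should be small: higher order in eps
```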
Question: Is it possible to extend this approach to higher-order terms, potentially achieving a recursive formula?
TIME-DEPENDENT CASE:
For the general case, a formal matrix-exponential solution like the ones above is no longer available, since the matrices $\mathbf{A}(t)$ at different times do not commute. Here is what I attempted:
Let's write ${A}_{ij} (t) = {A}_{ij}^{(0)} (t) + \delta A_{ij}$,
where $A_{ij}^{(0)}(t)$ are the time-dependent diagonal elements defined in Special Case [I], and we have defined the time-independent matrix
$$\delta A_{ij} = \epsilon_R \, \delta_{i+1,j} + \epsilon_L \, \delta_{i-1,j}.$$
I am hoping that the diagonal structure of $\mathbf{A}^{(0)}$ and the time-independence of the off-diagonal matrix $\delta \mathbf{A}$ will allow us to obtain the perturbative expansion.
Let's write the perturbative expansion
$\mathbf{x}(t) = \sum_{m=0}^\infty \mathbf{x}^{(m)}(t)$, where $\mathbf{x}^{(m)}$ is the $m$-th-order term ($\propto \epsilon^m$).
For the initial conditions, we can choose $\mathbf{x}^{(0)}(0) = \mathbf{x}(0)$ and $\mathbf{x}^{(m)}(0) = \mathbf{0}$ if $(m\neq 0)$. Clearly, we still have
$$ \mathbf{x}^{(0)}(t) = e^{\int_0^t du \mathbf{A}^{(0)}(u)} \, \mathbf{x}(0). $$
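More generally, substituting the expansion into $\frac{d}{dt}\mathbf{x} = \big(\mathbf{A}^{(0)} + \delta \mathbf{A}\big)\,\mathbf{x}$ and matching powers of $\epsilon$ (each insertion of $\delta \mathbf{A}$ carries one power) gives the hierarchy
$$ \frac{d}{dt} \mathbf{x}^{(m)}(t) = \mathbf{A}^{(0)}(t) \, \mathbf{x}^{(m)}(t) + \delta \mathbf{A} \, \mathbf{x}^{(m-1)}(t), \qquad m \geq 1. $$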
For the leading correction, we have
\begin{equation}
\frac{d}{dt} \mathbf{x}^{(1)}(t) = \mathbf{A}^{(0)}(t) \, \mathbf{x}^{(1)}(t) + \delta \mathbf{A} \, \mathbf{x}^{(0)}(t) \, .
\end{equation}
It seems straightforward to solve this equation through an integrating factor:
\begin{equation}
\mathbf{x}^{(1)}(t) = \int_0^t dt' \,
\bigg[ e^{\int_{t'}^t du \, \mathbf{A}^{(0)}(u)} \bigg] \, \delta \mathbf{A} \,
\bigg[ e^{\int_0^{t'} du \, \mathbf{A}^{(0)}(u)} \bigg] \, \mathbf{x}(0).
\end{equation}
Using the components of these matrices eventually yields:
\begin{equation}
x_i^{(1)}(t) = \int_0^t dt' \, e^{\int_{t'}^t du \, f_i(u)} \,
\bigg[ \epsilon_R \, e^{\int_0^{t'} du \, f_{i+1}(u)} \, x_{i+1}(0) +
\epsilon_L \, e^{\int_0^{t'} du \, f_{i-1}(u)} \, x_{i-1}(0) \bigg].
\end{equation}
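A numerical sanity check of this first-order formula, as a minimal sketch with an illustrative $M=3$ system (the $f_i(t)$, $\epsilon$'s, and $\mathbf{x}(0)$ are arbitrary; nested quadrature is slow but transparent):

```python
import numpy as np
from scipy.integrate import quad, solve_ivp

# Illustrative 3-site system
fs = [lambda u: 1.0 + 0.2 * np.sin(u),
      lambda u: 0.5 + 0.0 * u,
      lambda u: -0.3 + 0.1 * u]
eps_R, eps_L, t_end = 1e-4, 2e-4, 1.5
x0 = np.array([1.0, 0.5, 2.0])

F = lambda k, a, b: quad(fs[k], a, b)[0]   # \int_a^b f_k(u) du

def x1(i):
    """First-order correction x_i^{(1)}(t_end) from the quadrature formula."""
    def integrand(tp):
        val = 0.0
        if i + 1 < len(fs):
            val += eps_R * np.exp(F(i + 1, 0.0, tp)) * x0[i + 1]
        if i - 1 >= 0:
            val += eps_L * np.exp(F(i - 1, 0.0, tp)) * x0[i - 1]
        return np.exp(F(i, tp, t_end)) * val
    return quad(integrand, 0.0, t_end)[0]

def A(t):
    out = np.diag([f(t) for f in fs])
    out += eps_R * np.diag([1.0, 1.0], 1) + eps_L * np.diag([1.0, 1.0], -1)
    return out

sol = solve_ivp(lambda t, x: A(t) @ x, (0.0, t_end), x0, rtol=1e-11, atol=1e-13)
x_full = sol.y[:, -1]
x_approx = np.array([x0[i] * np.exp(F(i, 0.0, t_end)) + x1(i) for i in range(3)])
print(np.max(np.abs(x_full - x_approx)))   # should be O(eps^2)
```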
Now it is straightforward to obtain $R_{ij}$ through functional differentiation.
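For reference, carrying this out for $0 < \tau < t$ (and ignoring the boundary rows $i = 1, M$, where one of the two terms is absent) should give the nonzero entries
$$ R_{ii}(t,\tau) = x_i^{(0)}(t) + \int_0^{\tau} dt' \, e^{\int_{t'}^{t} du \, f_i(u)} \bigg[ \epsilon_R \, e^{\int_0^{t'} du \, f_{i+1}(u)} \, x_{i+1}(0) + \epsilon_L \, e^{\int_0^{t'} du \, f_{i-1}(u)} \, x_{i-1}(0) \bigg] + O(\epsilon^2), $$
$$ R_{i,i\pm 1}(t,\tau) = \epsilon_{R/L} \, x_{i\pm 1}(0) \int_{\tau}^{t} dt' \, e^{\int_{t'}^{t} du \, f_i(u)} \, e^{\int_0^{t'} du \, f_{i\pm 1}(u)} + O(\epsilon^2), $$
where $\epsilon_R$ goes with the upper sign and $\epsilon_L$ with the lower: varying $f_i(\tau)$ only affects the propagation after the hop at $t'$ (hence $t' < \tau$), while varying $f_{i\pm 1}(\tau)$ only affects it before the hop (hence $t' > \tau$).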
Question: can a similar (or better) approach be applied systematically to obtain higher-order terms?
PS: This post might be relevant for Case [II] above. However, it uses an uncontrolled truncation of the Baker-Campbell-Hausdorff formula, which may not be suitable for obtaining a systematic perturbative expansion.