- update Remark: the interpretation of the matrix $T$ in its use as binomial coefficients (as matrix $P$) is somewhat inconvenient because the index-counting begins at $1$, here as well as in the formulae after that; I hope I got it correct anyway. However, the coefficient $c_0$ is set to zero, and this indicates that we are in fact working with the iteration of $\exp(x)-1$ and not of $\exp(x)$. This should be made clearer by the OP. For the time being I'll leave the text as it is, because we only need to change (eq 6c) and (eq 6d), replacing $B_b$ by $fS2F$ on the rhs resp. $\exp(x)$ by $\exp(x)-1$.
Additional remark: if $\exp(x)-1$ is under discussion, then the coefficients $c_k$ are easily determined.
As far as I have analyzed the given equations & conditions, I must state that your system gives a description of the Abel function $\varphi(x)$ for the iteration of $\exp(x)$, but no (new) way to actually determine the coefficients $c_k$ such that $\varphi(x)= \sum_{j=1}^\infty c_j \cdot x^j$.
The following shows how far I got in analyzing your conditions/equations and putting them into my standard framework of Carleman matrices to exhibit their functional properties.
- $T$ = Pascal matrix - on the second eqn
$\phantom a$
First, the matrix $T$ is the factorially scaled and shifted binomial-matrix.
I would write
$$ T(n,k)=k! \frac{(n-1)!}{(n-k)!(k-1)!}=k! \binom{n-1}{k-1} \tag {1a}$$
Note that this is the shifted lower triangular Pascal matrix with columns scaled by factorials:
$$ \small \begin{bmatrix}
1 & . & . & . & . & . \\
-1 & 1 & . & . & . & . \\
0 & -1 & 1 & . & . & . \\
0 & 0 & -1 & 1 & . & . \\
0 & 0 & 0 & -1 & 1 & . \\
0 & 0 & 0 & 0 & -1 & 1
\end{bmatrix} \cdot
\small \begin{bmatrix}
1 & . & . & . & . & . \\
1 & 1 & . & . & . & . \\
1 & 2 & 1 & . & . & . \\
1 & 3 & 3 & 1 & . & . \\
1 & 4 & 6 & 4 & 1 & . \\
1 & 5 & 10 & 10 & 5 & 1
\end{bmatrix} \cdot \small \begin{bmatrix}
1 & . & . & . & . & . \\
. & 1 & . & . & . & . \\
. & . & 2 & . & . & . \\
. & . & . & 6 & . & . \\
. & . & . & . & 24 & . \\
. & . & . & . & . & 120
\end{bmatrix} \\ =
\small \begin{bmatrix}
1 & . & . & . & . & . \\
0 & 1 & . & . & . & . \\
0 & 1 & 2 & . & . & . \\
0 & 1 & 4 & 6 & . & . \\
0 & 1 & 6 & 18 & 24 & . \\
0 & 1 & 8 & 36 & 96 & 120
\end{bmatrix} \tag {1b}
$$
With my standard notation for the algebraic matrix-operations:
$$ T = \Delta \cdot P^\tau \cdot \,^dF \tag{1c}$$
What you use of this in your first equation is just the transposed submatrix from the second row and column on, which is exactly the upper triangular Pascal matrix $P$ with rows scaled by the factorials:
$$ \text{FP} = \small \begin{bmatrix}
1 & 1 &1 &1 &1 & ... \\
. & 2 & 4 & 6 & 8 & ... \\
. & . & 6 & 18 & 36 & ... \\
. & . & . & 24 & 96 & ... \\
. & . & . & . & 120 & ... \\
\vdots & \vdots & \vdots & \vdots & \vdots & \ddots
\end{bmatrix} \tag {1d}$$ which I denote here for shortness by $\text{FP}$.
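As a concrete illustration, here is a small Pari/GP sketch (the names `m`, `D`, `P`, `F`, `T`, `FP` are mine) which builds $T$ via the factorization (1c) and extracts $\text{FP}$ as the transposed submatrix:

```
\\ Pari/GP sketch: build T by the factorization (1c); my own naming
m = 6;
D  = matrix(m, m, i, j, if(i==j, 1, if(i==j+1, -1, 0)));  \\ difference matrix Delta
P  = matpascal(m-1);        \\ lower triangular Pascal matrix, i.e. P^tau in the text
F  = matdiagonal(vector(m, k, (k-1)!));                   \\ diagonal factorial matrix ^dF
T  = D * P * F;             \\ eq (1c); reproduces the product (1b)
FP = matrix(m-1, m-1, i, j, T[j+1, i+1]);                 \\ transposed submatrix, eq (1d)
print(T); print(FP);
```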
- Description of $A$ - on the first eqn.
$\phantom a$
Using binomial notation instead of your $T()$ we have your first equation as:
$$ a_n=\sum_{k=1}^n k! \binom{n-1}{k-1}c_k a_{n-k} \qquad a_0=1 \tag {2a} $$
For the $a_n$ (assumed to be collected in a column vector $A$) I get from this the column vector of polynomials in the $c_k$:
$$ \text{A} = \small \left[ \begin{array}{l}
1 \\
c_1 \\
c_1^2+2c_2 \\
c_1^3+6c_2c_1+6c_3 \\
c_1^4+12c_2c_1^2+24c_3c_1+12c_2^2+24c_4 \\
c_1^5+20c_2c_1^3+60c_3c_1^2+(60c_2^2+120c_4)c_1+120c_3c_2+120c_5
\end{array}\right] \tag {2b}
$$
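A minimal Pari/GP sketch of the recursion (2a), keeping the $c_k$ as formal variables (the naming is mine), reproduces exactly the polynomials in (2b):

```
\\ Pari/GP sketch: evaluate the recursion (2a) with formal coefficients c1, c2, ...
m = 6;
c = vector(m, k, eval(Str("'c", k)));   \\ formal variables c_1 .. c_m
a = vector(m + 1); a[1] = 1;            \\ a[n+1] holds a_n, so a[1] = a_0 = 1
for(n = 1, m,
  a[n + 1] = sum(k = 1, n, k! * binomial(n-1, k-1) * c[k] * a[n-k+1]));
print(a);  \\ 1, c1, c1^2 + 2*c2, c1^3 + 6*c2*c1 + 6*c3, ...
```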
- The Carleman-matrix $\phi$ reflecting the power series of your $\varphi()$ function
$\phantom a$
By inspection (and crosscheck) the entries of $A$ turn out to be the row sums of the factorially rescaled Carleman matrix $\phi$ of a function $\varphi(x)= c_1x+c_2x^2+c_3x^3 + ...$
Let's check this by constructing the Carleman matrix $\phi$:
$$ \phi = \small \begin{bmatrix}
1 & . & . & . & . & . \\
0 & c_1 & . & . & . & . \\
0 & c_2 & c_1^2 & . & . & . \\
0 & c_3 & 2c_2c_1 & c_1^3 & . & . \\
0 & c_4 & 2c_3c_1+c_2^2 & 3c_2c_1^2 & c_1^4 & . \\
0 & c_5 & 2c_4c_1+2c_3c_2 & 3c_3c_1^2+3c_2^2c_1 & 4c_2c_1^3 & c_1^5
\end{bmatrix} \tag {3a}
$$ and apply the factorial scaling $^dF \cdot \phi \cdot \,^df$ (again simplifying its name to $F \phi f$):
$$ F\phi f = \small \begin{bmatrix}
1 & . & . & . & . & . \\
0 & c_1 & . & . & . & . \\
0 & 2c_2 & c_1^2 & . & . & . \\
0 & 6c_3 & 6c_2c_1 & c_1^3 & . & . \\
0 & 24c_4 & 24c_3c_1+12c_2^2 & 12c_2c_1^2 & c_1^4 & . \\
0 & 120c_5 & 120c_4c_1+120c_3c_2 & 60c_3c_1^2+60c_2^2c_1 & 20c_2c_1^3 & c_1^5
\end{bmatrix} \tag {3b}
$$
Visual inspection shows that the row sums of $F \phi f$ indeed equal the entries of $A$.
So in my matrix-lingo we have, with a column vector $\,^cV(x)$ at $x=1$ which performs the row-wise summation:
$$ F \phi f \cdot ^cV(1) = A \tag {3c}$$
and for each row $n$:
$$ F \phi f_{n,} \cdot V(1) = a_n \tag{3d}$$
But removing the leading factorials by premultiplication with the inverse of $F$ (call it simply $f$) we can write
$$ \phi f \cdot V(1) = f \cdot A \tag {3e} $$
and then reorganize the second dot-product on the lhs, writing $fV(x)$ for $ [1,x/1!,x^2/2!,...]$:
$$ \phi \cdot fV(1) = fA \tag {3f} $$
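The whole check of (3a)-(3c) can be reproduced with a few lines of Pari/GP (again my own naming; `polcoeff` extracts the coefficient of the given power of $x$):

```
\\ Pari/GP sketch: verify that the row sums of F*phi*f give the vector A of (2b)
m = 6;
c   = vector(m, k, eval(Str("'c", k)));               \\ formal coefficients
f   = sum(k = 1, m-1, c[k] * 'x^k);                   \\ truncated phi(x), c_0 = 0
phi = matrix(m, m, i, j, polcoeff(f^(j-1), i-1, 'x)); \\ Carleman matrix, eq (3a)
F   = matdiagonal(vector(m, k, (k-1)!));
FphiF = F * phi * F^-1;                               \\ the scaling ^dF * phi * ^df, eq (3b)
print(FphiF * vectorv(m, i, 1));                      \\ row sums via ^cV(1): matches (2b)
```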
- The second determination of $A$ - on the third eqn.
$\phantom a$
Similarly to (eq 1a) I would expand the factorials into binomials and write
$$ a_n= n! \sum_{k=n}^\infty \frac{k!}{(k-n)!n!}c_k = n! \sum_{k=n}^\infty \binom{k}{n} c_k
\tag {4a}$$
This means we are working with the derivatives of the function $\varphi()$: since $\varphi^{(n)}(x)=\sum_{k\ge n} \frac{k!}{(k-n)!}c_k x^{k-n}$, evaluating at $x=1$ gives simply
$$ a_n = \varphi^{(n)}(1) \tag {4b}
$$
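This can again be confirmed symbolically; a short Pari/GP sketch (my naming) checking (4a)/(4b) on a truncated $\varphi$ with formal coefficients:

```
\\ Pari/GP sketch: check a_n = phi^(n)(1) for a truncated series, eqs (4a)/(4b)
m = 6;
c = vector(m, k, eval(Str("'c", k)));
f = sum(k = 1, m, c[k] * 'x^k);                           \\ truncated phi(x)
for(n = 0, m,
  an = n! * sum(k = max(n,1), m, binomial(k, n) * c[k]);  \\ eq (4a), truncated
  g  = f; for(i = 1, n, g = deriv(g, 'x));                \\ n-th derivative of phi
  print(an == subst(g, 'x, 1)));                          \\ prints 1 (= true) for each n
```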
- The Abel functional equation - combining (3) and (4)
$\phantom a$
Using the fact that the matrix $\phi$ (due to its construction as a Carleman matrix for $\varphi()$) has in its second column the coefficients of the function $\varphi(x)$, and that these coefficients are the derivatives at $x=0$ divided by the factorials:
$$\phi_{[,1]}= \text{vector}[ \varphi^{(r)}(0)/r! ]_{r=0}^\infty \tag {5a}$$
and rescaling (eq 4a) by the reciprocal factorials
$$ fA=\text{vector}[ \varphi^{(r)}(1)/r! ]_{r=0}^\infty \tag {5b} $$
which gives the Taylor series for $\varphi()$ at $x=1$, we can as well write
$$ fA = P \cdot \phi_{[,1]} \tag {5c} $$
because the binomial-matrix does exactly that shift:
$$ V(0) \cdot \phi = [1, \varphi(0), \varphi(0)^2,...] \\
V(0) \cdot P \cdot \phi = [1, \varphi(1), \varphi(1)^2,...] \tag{5d} $$
so we have now established the equality of (eq 5c) and (eq 3f):
$$ P \cdot \phi_{[,1]} \underset{\text{by (eq 3f)}}= \phi \cdot fV(1) \tag {5e} $$
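That shift-property of the binomial matrix $P$ used in (5d) can be seen directly; a two-line Pari/GP illustration (my naming):

```
\\ Pari/GP sketch: V(x) * P = V(x+1), the Pascal matrix performs the shift x -> x+1
m = 6;
P  = matpascal(m-1)~;           \\ upper triangular Pascal matrix
Vx = vector(m, n, 'x^(n-1));    \\ the row vector V(x)
print(Vx * P);                  \\ [1, x + 1, x^2 + 2*x + 1, ...] = V(x+1)
```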
- The matrix $B_b$ for the exponentiation
$\phantom a$
Now we may remember that the second column of the matrix $B_b$ is equal to just that vector $fV(1)$ on the rhs, and I think it is not too hazardous to conclude that your equations lead to the formula
$$ P \cdot \phi = \phi \cdot B_b \tag {6a}$$
where the matrix $B_b$ is just the Carleman matrix for $\exp(x)$.
This is the functional form:
$$ V(x) \cdot P \cdot \phi = V(x) \cdot \phi \cdot B_b \tag {6b} $$
which - when the dot-products are evaluated - simply denotes the Abel equation
$$ \varphi(x+1) = \exp(\varphi(x)) \tag {6c} $$
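In detail (a step skipped above; using the shift-property (5d) and the Carleman property of $\phi$ and $B_b$):
$$ V(x) \cdot P \cdot \phi = V(x+1) \cdot \phi = [1, \varphi(x+1), \varphi(x+1)^2, ...] \\
V(x) \cdot \phi \cdot B_b = V(\varphi(x)) \cdot B_b = [1, \exp(\varphi(x)), \exp(\varphi(x))^2, ...] $$
and comparing the second entries gives exactly (6c).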
Well, this confirms the basic sanity of your equations and relates them to the well-known Abel equation for the iteration of the $\exp(x)$ function.
We can do one more step, replacing $x$ by $\varphi^{\circ -1}(x)$, to get a better formulation of the iterative aspect:
$$ \small {\varphi(\varphi^{\circ -1}(x)+1) = \exp(x)} \\
\small {\varphi(\varphi^{\circ -1}(x)+2) = \exp^{\circ 2}(x)}\\ \small {\vdots} \\
\small {\varphi(\varphi^{\circ -1}(x)+h) = \exp^{\circ h}(x)} \tag {6d} $$
Unfortunately we do not get more information about the coefficients $c_k$ from your observations (and/or some hypotheses), so we have to stop here.
An amateurish ansatz in this spirit (solving (eq 6d), calling $\varphi^{\circ -1}(x)$ the "superlog" or $\text{slog}(x)$) was proposed around 2005 by A. Robbins, who was also a co-founder of the tetration-forum. He tried to construct the sequence of the $c_k$ by assuming small dimensions of the involved matrices, beginning at say $n=3$, increasing $n$, and checking whether the occurring matrices converge to some constant values of the $c_k$ at the small indexes $k$. This is an easy approach and gives approximations for the fractional iterates, but it lacks any known relation to solutions like the Schröder-, the Écalle-, the Walker- or the Kneser-method. (His paper is online on github.)
A widely accepted ansatz for the construction of an explicit power series for the Abel function $\varphi()$ has been given by J. Écalle (I don't know whether he was the first one), but it is more involved: a Laurent series (having a leading term ${c_{-1} \cdot x^{-1}}$) with a logarithmic term ${\log(h)/3}$ and a constant $c_0$. This version is, for instance, frequently used in the tetration-forum. Or you may get a start at some question on MO and its related q&a's.
If I recall correctly, it is moreover based on the Schröder mechanism, and not on some attempt to find the $c_k$ directly as in your system.
It is too much to go deeper here; I must give you my compliments that you seem to have found the basic formal properties of an Abel system and its formal power series - whether it is solvable with that type of coefficients ($c_0=0$? $a_0=1$?) or not.
update2
7. Application and further development of the Robbins-ansatz
To show a method of at least arriving at some estimates for the coefficients, and of getting meaningful approximations for values of the $\varphi^{\circ -1}()$ function, I follow here the ansatz of A. Robbins.
First we observe that in (eq 6d), $ \varphi(\varphi^{\circ -1}(x)+1)=\exp(x)$, we can apply $\varphi^{\circ -1}()$ to each side. (Of course we must assume that the power series/dot-products have some finite radius of convergence.) We then get the equation
$$ \varphi^{\circ -1}(x)+1=\varphi^{\circ -1}(\exp(x)) \tag {7a}$$
In matrix-notation (let's write for notational comfort $Q=\phi^{-1}$):
$$ V(x) \cdot Q \cdot P = V(x) \cdot B_b \cdot Q $$
and then, by comparing coefficients, the identity of the matrix compositions alone:
$$ Q \cdot P = B_b \cdot Q \tag {7b} $$
We assume the inverse $\varphi()$-function to be
$$ Q(x)= q_1 x + q_2 x^2 + q_3 x^3 + ...\tag {7c} $$
and we have these coefficients in the second column of the (Carleman) matrix $Q$:
$$ Q_{[,1]} = [0,q_1,q_2,q_3,...] \tag {7d} $$
Then, if we look again at (eq 7b), we can reduce the equality to the essential part
$$ (Q \cdot P)_{[,1]} = (B_b \cdot Q)_{[,1]} \tag {7e} $$
which is
$$ V(0) + Q_{[,1]} = B_b \cdot Q_{[,1]} \tag {7f}$$
Now we can move the $Q_{[,1]}$ on the left to the right:
$$ V(0) = B_b\cdot Q_{[,1]} -Q_{[,1]} \\
V(0) = (B_b - I)\cdot Q_{[,1]} \tag {7g}$$
Now, using a finite size, say $n=8$, we could feed this into a solver, like
matsolve(Bb-I,V(0)) in Pari/GP; but unfortunately the first column of the expression $B_b - I$ is completely zero (because $B_b$ has the form of a Carleman matrix), so the matrix is not invertible.
The idea of Robbins was here,
- to cheat a bit: to remove the first column and the last row so as to have again a square matrix, which now can be inverted. If this works at all, it might later be justified by the goal that we want to work with matrices of infinite size, where such a transition might be possible and still meaningful;
- to try this for increasing sizes of the matrices and to check whether increasing the size makes the leading coefficients converge to something (a small implementation sketch follows below).
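Here is a minimal Pari/GP sketch of that procedure (my own implementation; $n$ denotes the size of the shortened $B_b-I$, as in the table below):

```
\\ Pari/GP sketch of Robbins' ansatz: solve the shortened system (7g)
n  = 16;                                            \\ size of the shortened Bb - I
Bb = matrix(n+1, n+1, i, j, (j-1)^(i-1) / (i-1)!);  \\ Carleman matrix of exp: [x^(i-1)] exp((j-1)x)
M  = matrix(n, n, i, j, Bb[i, j+1] - (i == j+1));   \\ Bb - I without first column and last row
q  = matsolve(M, vectorv(n, i, i == 1));            \\ rhs = V(0) with its last row dropped
print(q * 1.0);                                     \\ q_1 ~ 0.91595862 ..., cf. table 7h, n = 16
```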
I've reproduced that idea, and because the approximation improves slowly, I document only the doubled sizes $n=2^1,2^2,2^3,...,2^7$ (for the shortened $B_b-I$!) and got the following output:
Table $7h$: approximations to the coefficients $q_k$ of the power series $Q(x)$
$$ \small \begin{array} {r|rrrrrrrrrrrrr}
n & q_0 & q_1 & q_2 & q_3 & q_4 & q_5 & q_6 & q_7 & ...\\ \hline
2&0 & 1.0000000 & 0 & . & . & . & . & . \\
4&0 & 0.92307692 & 0.24615385 & -0.18461538 & 0.015384615 & . & . & . \\
8&0 & 0.91644296 & 0.24850496 & -0.11395859 & -0.093197376 & 0.020154115 & 0.040614135 & -0.021681561 \\
16&0 & 0.91595862 & 0.24921856 & -0.11061148 & -0.093582187 & 0.010588156 & 0.035629206 & 0.0054841446 \\
32& 0 & 0.91594399 & 0.24933929 & -0.11046111 & -0.093883524 & 0.010030915 & 0.035820110 & 0.0064801129 \\
64& 0 & 0.91594562 & 0.24935378 & -0.11046270 & -0.093932220 & 0.010001216 & 0.035889177 & 0.0065707651 \\
128& 0 & 0.91594601 & 0.24935461 & -0.11046450 & -0.093936126 & 0.010002714 & 0.035897333 & 0.0065738120\\ \hline
700^* & 0 & 0.91594606 & 0.24935460& -0.11046476& -0.093936255& 0.010003233& 0.035897922& 0.0065734011
\end{array}
$$
(*) Data computed by Jay D. Fox (2007) in the Tetration-forum
These values are also documented in the Robbins article; however, he wrote them as $n$'th derivatives, so his values must be divided by the factorials. (After that I got differences to Robbins' data only in about the $7$'th decimal digit):
Table $7i$: comparison of the coefficients of the computed $Q(x)$ and Robbins' documentation:
$$ \small \begin{array} {r|r}
\text{reproduction} & \text{Robbins}\\
n=128 & n=160 \\ \hline
0 & . \\
0.91594601 & 0.91594603 \\
0.24935461 & 0.24935462 \\
-0.11046450 & -0.11046461 \\
-0.093936126 & -0.093936220 \\
0.010002714 & 0.010002913 \\
0.035897333 & 0.035897641 \\
0.0065738120 & 0.0065737081 \\
-0.012305809 & -0.012306288
\end{array}
$$
Robbins then additionally claims that a meaningful assumption would be $q_0=-1$, because then we have normalized $Q_R(0)=-1$, $Q_R(1)=0$ and $Q_R(e)=1$ (let's write that modified function, indexed, as $Q_R()$ from now on). $Q_R(x)$ is then the function $\text{slog}()$ in Robbins' article.
Calculations in this style then give meaningful numbers with an accuracy of some $6$ to $10$ digits, such that $Q_R(e^x)-Q_R(x)\approx 1$, $Q_R(1)=0$, and for (small!) values $Q_R(\,^he) = h$. For instance, using the value that Robbins gives for $ x =\,^{0.5}e \approx 1.6463546427466649$ in the $Q_R()$ function gives $Q_R( x ) = 0.50000026$, which I find quite a nice approximation (using $n=64$ terms/matrices). Note that this $x$ seems to exceed the range of convergence, but since $Q()$ has coefficients with (roughly) alternating signs, the sum can be evaluated by Euler summation of small order. (With that Euler summation we find by binary search the more precise value $x_{0.5}=1.64635422897119922045=\,^{0.5}e $ and $Q_R(x_{0.5})=0.5$ to $20$ digits.)
Robbins does not give the coefficients $c_k$ (by inversion of the power series $Q_R(x)$).
But taking the $Q(x)$ function with $q_0=0$ we can do the inversion and get the coefficients:
$$ \hat \varphi(x)= 1.0917674 x - 0.32449483 x^2 + 0.34983599 x^3 - 0.23085419 x^4 + 0.20133057 x^5 + O(x^6) \tag {7j} $$
(where I write $\hat \varphi()$ to denote that this is not taken from the original definition of $\varphi()$ above)
Looking at this with $64$ coefficients, it is quite likely that the radius of convergence is small, but it seems that we can subtract the coefficients of the Mercator series for $\log(1+x)$ and get a better series, even with a larger range of convergence, say
$$ \hat \varphi_2(x) = \hat \varphi(x) - \log(1+x) \tag {7k} $$
The first few coefficients are then
$$ \hat \varphi_2(x)= 0.091767404 x + 0.17550517 x^2 + 0.016502661 x^3 + 0.019145810 x^4 + 0.0013305742 x^5 + 0.0023142725 x^6 - 2.1040292\cdot 10^{-5} x^7 + O(x^8) \tag {7L} $$
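The inversion and the subtraction of the Mercator series can be sketched in Pari/GP as well (my naming; `serreverse` does the power series reversion):

```
\\ Pari/GP sketch: invert Q(x) (q_0 = 0) and subtract log(1+x), eqs (7j)-(7L)
n  = 32;
Bb = matrix(n+1, n+1, i, j, (j-1)^(i-1) / (i-1)!);
M  = matrix(n, n, i, j, Bb[i, j+1] - (i == j+1));
q  = matsolve(M * 1.0, vectorv(n, i, i == 1));      \\ approximate q_1 .. q_n as before
Qser   = sum(k = 1, n, q[k] * 'x^k) + O('x^(n+1));  \\ Q(x), eq (7c), with q_0 = 0
phihat = serreverse(Qser);                          \\ eq (7j): 1.0917674*x - 0.32449483*x^2 + ...
phi2   = phihat - log(1 + 'x + O('x^(n+1)));        \\ eq (7k): coefficients of (7L)
print(phi2);
```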
With this I get the following results:
Table $7m$ - tetrations of $e$ to height $x$
$$ \begin{array} {r|rrl}
x & y=\hat \varphi_2(x)+\log(1+x) & \exp(y) &\text{comment}\\ \hline
0& 0& 1\\
0.5 & 0.49856329 & 1.6463542 & 13 \text{ dig correct to } x_{0.5}\\
\pi/4 & 0.77623646 & 2.1732776 & \\
1 & 1.0000000 & 2.7182818 & 22 \text{ dig correct}\\
1.5 & 1.6463542 & 5.1880309 & 16 \text{ dig good} \\
2.0 & 2.7182818 & 15.154262 & \text{$9$ and $8$ dig good} \\
e & & 2075.9680 & \text{by using $e-2$ and } \exp(\exp(y)) \\
& & &\text{ the Robbins-value is } 2075.9681
\end{array}$$
This looks very good, and it remains only to append the exponentiation to the operation to have a tetration function for the base $e$.
We can thus fractionally tetrate to base $e$ and the final form should then be:
$$ \hat {\text{tet}}(x) = \exp(\hat \varphi_2(x) +\log(1+x)) \\
= \,^x e \tag {7n}$$
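And a final usage note, continuing the sketch given after (7L) (evaluating the truncated series directly, which is only reliable for moderate $x$ because of the range of convergence discussed above):

```
\\ continue the sketch after (7L): evaluate tet(x) = exp(phi2(x) + log(1+x)), eq (7n)
tet(t) = exp(subst(truncate(phi2), 'x, t) + log(1 + t));
print(tet(0.5));   \\ ~ 1.6463542... = ^0.5 e, cf. table 7m
print(tet(1.0));   \\ ~ 2.7182818... = e
```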
Well, this seems to have now escaped from your construction of the $c_k$ and your $\varphi(x)$-function. At the moment I don't see how to establish a clearer relation; perhaps I can come back to this later...