I am curious about this. It has been a very long time since I toyed with this topic, but it was an old interest of mine from quite some time ago - maybe 8 years (250 megaseconds) ago at least - and I never really got to a conclusion, nor do I think anyone did, entirely satisfactorily. Some comments on a YouTube video I was watching inspired me to dust it off and give it another thwack.
And the question is, basically: how can one construct a "reasonable" interpolation of the "tetration" operation, also called a "power tower", which, for those who have not heard of it, is defined for natural $n \ge 1$ and real $a > 1$ by
$$^{n} a = a \uparrow \uparrow n := \underbrace{a^{a^{a^{...^a}}}}_{\mbox{$n$ copies of $a$}}$$
where the nested exponentiations are evaluated in right-associative fashion, so the deepest ("highest") layer is done first, e.g.
$$^{3} 3 = 3^{3^3} = 3^{27} = 7\ 625\ 597\ 484\ 987$$
and not left-associative, i.e.
$$^{3} 3 \ne (3^3)^3 = 3^{3 \cdot 3} = 3^9 = 19\ 683$$
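(For concreteness - and since we'll be computing things later anyway - here is a minimal Python sketch of the integer-height operation, just to pin down the associativity convention; the function name is mine, not standard:)

```python
def tetrate(a, n):
    """Integer tetration ^n a: a right-associative tower of n copies of a."""
    result = a
    for _ in range(n - 1):
        result = a ** result   # each new copy goes at the *bottom*, so the
                               # deepest ("highest") layer is evaluated first
    return result

print(tetrate(3, 3))    # 7625597484987, i.e. 3^(3^3) = 3^27
print((3 ** 3) ** 3)    # 19683 = 3^9, the left-associative reading we reject
```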
What the "interpolation" part means is basically, given that this definition clearly only works for values of the second argument, $n$ (called as such by analogy with exponentiation even though it is written first in the "left-superscript" notation just introduced), usually called the "height" of the tetration or power tower for obvious reasons, that are natural numbers at least 1, since we have to have a whole number of "copies of $a$" - half a copy, say, wouldn't make much sense, as while you can literally write a half-written "$a$", that is formally nonsense and has no mathematical meaning, although it may have other forms of meaning from other angles of human understanding and analysis, e.g. perhaps as a form of artsey commentary. Mathematical meaning is, though, of course, what we're interested in. We can, of course, naturally extend this in a way similar to extending exponentiation to the integers by noting that
$$^{n+1} a = a^{^n a}$$
and thus
$$^{n-1} a = \log_a(^n a)$$
and if we do this we can at least extend to $^0 a = 1$, similar to exponentiation, and $^{(-1)} a = 0$ - a rather interesting result when viewed in contrast to exponentiation, given that the first negative power of a number is not a constant but instead its reciprocal. We cannot, however, extend now to $^{(-2)} a$, as then we get $\log_a(0)$, which is undefined (though if you want to stretch the maths a bit and expand the codomain to the extended reals, you can say $^{(-2)}a = -\infty$; in any case, $^{(-3)} a$ and further are definitely, really undefined, since no real exponential can be negative, much less negative infinity!). So this peters out.
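(A quick sketch of this descent, with the breakdown at $^{(-2)} a$ showing up as a domain error - the base $a = 2$ here is an arbitrary choice:)

```python
import math

a = 2.0
x = a                          # start from ^1 a = a
for n in [0, -1, -2]:
    try:
        x = math.log(x, a)     # ^n a = log_a(^(n+1) a)
        print(f"^({n}) a = {x}")
    except ValueError:
        print(f"^({n}) a would be log_a(0): undefined, the descent stops")
```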
Of course, the most interesting bit - as hinted at with the "half a copy" business above - is trying to extend the height $n$ to real values, presumably in $(-2, \infty)$ at least.
And there have been a number of methods fielded in those past epochs which attempt to do this, as well as some interesting observations regarding the conditions required to produce a suitably "natural" extension. After all, it is trivially obvious that one can "interpolate" a given sparse sequence of points in any way one desires; moreover, even the identity $^{n+1} a = a^{^n a}$ only pins the interpolant down insofar as whole-number increments of the tower are concerned - fill any unit interval with anything you like, and the identity will extend it to an interpolant satisfying it everywhere. For exponentiation, this non-uniqueness is much less of a problem, because we also have the additional identity $a^{n+m} = a^n a^m$, which lets us extend to rational values; no such identity, however, exists for tetration.
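(To see just how cheap such an "interpolation" is, here is a minimal Python sketch: pick any filler on $[0, 1]$ - the linear choice below is completely arbitrary - and the identity $^{x+1} a = a^{^x a}$ mechanically extends it to all heights above $-2$, continuously, but with no claim whatsoever to smoothness at the integers:)

```python
import math

def tet_arbitrary(a, x):
    """Extend a freely chosen filler on [0, 1] to heights x > -2 via the
    identity ^(x+1) a = a^(^x a).  The linear filler is an arbitrary choice;
    any continuous f with f(0) = 1 and f(1) = a yields *some* interpolant."""
    if x > 1:
        return a ** tet_arbitrary(a, x - 1)
    if x < 0:
        return math.log(tet_arbitrary(a, x + 1), a)
    return 1 + (a - 1) * x     # the arbitrary filler on [0, 1]

print(tet_arbitrary(2.0, 2.5))    # one of infinitely many "height 2.5" values
print(tet_arbitrary(2.0, -0.5))   # the same filler propagated below zero
```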
In this regard, extension of tetration is similar to the question of extension of the factorial, which is similarly impoverished of identities, with new ones interestingly only coming about after the extension was done by Euler in the form of the gamma function, meeting a challenge originally posed by Bernoulli to do exactly this. The gamma function, however, is still ostensibly more "natural", simply because a) it often crops up and b) it has some very cute integral representations, esp. the darlin'
$$\Gamma(x) = \int_{0}^{1} [-\log(u)]^{x-1}\ du$$
(Though with regard to objection a), one could say this may simply be because we have not yet found such an expression for tetration, and thus places where it might be useful are currently written off as "unsolvable".)
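(The representation itself is easy to sanity-check numerically, e.g. with Python's mpmath; here at $x = 1/2$, where the integral should reproduce $\Gamma(1/2) = \sqrt{\pi}$:)

```python
from mpmath import mp, quad, log, gamma, sqrt, pi

mp.dps = 25                    # working precision in digits
x = mp.mpf(1) / 2
integral = quad(lambda u: (-log(u)) ** (x - 1), [0, 1])
print(integral)                # 1.772453850905516..., i.e. sqrt(pi)
print(gamma(x), sqrt(pi))      # both agree with the integral
```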
Yet clearly, that doesn't seem to have been the case for tetration, either. Moreover, in all these past discussions, many of the extension methods proposed are in fact extremely cumbersome and computationally intensive to approximate, involving elaborate constructs like Riemann mappings, infinite limits of integral equations, and so forth - all things that, while mathematically valid, are both inelegant and also not something you're going to be able to program into a software package like Mathematica and have it spit out 2500 digits of $^{1/2} 2$ in the blink of an eye.
But nonetheless, one particular method out of those proposed seems to be both fairly simple and possibly amenable to more detailed analysis, and that is the "Carleman matrix" operator method.
This method is most succinctly expressed for the specific case $a = e$, to construct the "natural tetrational" $\mathrm{tet}(x) :=\ ^x e$ with real height $x$, so we'll just focus on that for now. But basically it is based on the following two observations. The first is that one can consider the result of the height-$n$ power tower of $e$ as the iterated exponential evaluated at $1$, namely
$$^n e = \exp^n(1)$$
or perhaps more nicely for what we're about to do,
$$^{n-1} e = \exp^n(0)$$
which has an interesting gamma-function-like quality about it, with the offset.
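(In code these two observations are trivial, but worth pinning down, since off-by-one errors in the height will haunt us later:)

```python
import math

def iter_exp(n, x0):
    """Apply exp n times starting from x0."""
    x = x0
    for _ in range(n):
        x = math.exp(x)
    return x

# ^2 e two ways: exp^2(1) and exp^3(0) agree, per the offset identity
print(iter_exp(2, 1.0), iter_exp(3, 0.0))    # both ~ e^e = 15.15426...
```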
And the second one is the following. If we let $\exp$ be given by its power series,
$$\exp(x) = \sum_{n=0}^{\infty} \frac{x^n}{n!}$$
then we can actually represent such iterated exponentials using what is called its Carleman matrix: basically the infinite-order "matrix", with rows and columns indexed from $0$, whose entries (taking $0^0 = 1$) are
$$C[\exp]_{ij} = \frac{i^j}{j!}$$
such that if we have the infinite vector of exponential coefficients $\mathbf{a} = \left[ \frac{1}{0!}\ \frac{1}{1!}\ \frac{1}{2!}\ \cdots \right]^T$ then the vector $\mathbf{b}_n = (\mathbf{C}[\exp])^n \mathbf{a}$ is the coefficients of $\exp^n$. In particular, if we sum the top row, we get exactly what we want: $^{n-1} e$.
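(Here is a minimal numpy sketch of a finite truncation of this matrix, together with a check of the property driving everything: row $i$ of $\mathbf{C}[\exp]$ holds the Taylor coefficients of $(\exp x)^i = e^{ix}$, so $\mathbf{C}[\exp]$ formally maps the powers vector $[\,1\ x\ x^2\ \cdots\,]^T$ to $[\,1\ e^x\ e^{2x}\ \cdots\,]^T$. The truncation size $N$ is an arbitrary choice, and only the early rows of a truncation are accurate:)

```python
import numpy as np
from math import factorial, exp

N = 24   # truncation order: an arbitrary illustrative choice

# C[exp]_{ij} = i^j / j!, rows and columns indexed from 0, with 0^0 = 1
C = np.array([[i ** j / factorial(j) for j in range(N)]
              for i in range(N)])

x = 0.3                                    # keep |x| smallish so the
v = np.array([x ** j for j in range(N)])   # truncated rows converge well
image = C @ v                              # formally [1, e^x, e^{2x}, ...]
target = np.array([exp(i * x) for i in range(N)])
print(np.abs(image - target)[:8])          # tiny for the early rows
```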
Now the question, though, is how we can compute this matrix power for fractional $n$ in some explicit form. One possible way to do this, and the way I saw when this method was suggested (by Gottfried Helms, who was here a long time ago - not sure if they're still around), is to try to diagonalize the matrix, so that we can use the fact that if a matrix $\mathbf{A} = \mathbf{P}\mathbf{D}\mathbf{P}^{-1}$ for some diagonal matrix $\mathbf{D}$ (whose diagonal entries are the eigenvalues), then $\mathbf{A}^t = \mathbf{P}\mathbf{D}^t\mathbf{P}^{-1}$, where the inner power is easy to compute, as you just raise the diagonal terms to the power $t$.
And numerically this seems to work for at least finite truncations of the matrix $\mathbf{C}[\exp]$, but analytically it may be on shaky ground, as I see from this thread that I found when trying to dig into this once more:
Diagonalization of an infinite matrix
and moreover it's still not as efficient as we'd like - we're not computing 2500 digits of $^{1/2} e$, and I want them computed, dammit!
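(For what it's worth, here is a minimal numpy sketch of the truncated version of the recipe: diagonalize a finite truncation, raise the eigenvalues to the fractional power, and sum the second row of the result - per the correction in ADD 2 below, it is the second row, not the first, whose sum gives $^t e$. Caveats: these truncations are badly conditioned, the eigenvalues come out complex, and how - or whether! - the values converge as the truncation grows is exactly the analytic question at issue:)

```python
import numpy as np
from math import factorial

N = 16   # modest truncation; larger N grows increasingly ill-conditioned
C = np.array([[i ** j / factorial(j) for j in range(N)]
              for i in range(N)])

w, P = np.linalg.eig(C)        # eigenvalues w, eigenvector matrix P

def tet_e(t):
    """Candidate ^t e from this truncation: the sum of row 1 of C^t,
    where C^t = P diag(w^t) P^{-1} (principal branch for complex w)."""
    Ct = P @ np.diag(w.astype(complex) ** t) @ np.linalg.inv(P)
    return Ct[1].sum().real

print(tet_e(1.0))    # sanity check: should be ~e = 2.71828...
print(tet_e(2.0))    # sanity check: should be ~e^e = 15.1542...
print(tet_e(0.5))    # an interpolated candidate for ^(1/2) e
```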
However, it seems that some of the objections raised in that thread may not apply here. In particular, it is mentioned there that an infinite matrix diagonalization is ambiguous (which seems to mirror the situation with tetration interpolation generally, where there is great freedom) due to the choice of a suitable normed vector space over which to make sense of it; moreover, it was pointed out that the usual most "natural" space, $\ell^2$, did not work for that questioner's particular matrix, because it would "map" most "vectors" of $\ell^2$ effectively outside the space. This Carleman matrix, however, seems better behaved: since any vector $[\ a_0\ a_1\ a_2\ \cdots\ ]^T \in \ell^2$ must by definition have square-summable (in particular, bounded) entries, the factorials in the matrix ensure that each entry of the product is an absolutely convergent sum, as is illustrated by considering the bounding "vector" $[\ 1\ 1\ 1\ \cdots\ ]^T$ as "formally" acted upon by $\mathbf{C}[\exp]$. So it seems that in that regard, we are in better shape against at least the objections raised in that thread than that poster's scenario was.
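(Concretely, each entry of $\mathbf{C}[\exp]\,[\,1\ 1\ 1\ \cdots\,]^T$ is the absolutely convergent sum $\sum_j i^j/j! = e^i$ - finite in every component, though note those components do grow with $i$. A quick check on a truncation, where as before only the early rows have converged:)

```python
import numpy as np
from math import factorial, exp

N = 24
C = np.array([[i ** j / factorial(j) for j in range(N)]
              for i in range(N)])

row_action = C @ np.ones(N)    # entry i is the partial sum of sum_j i^j/j!
expected = np.array([exp(i) for i in range(N)])
print(row_action[:8])          # matches e^i for small i,
print(expected[:8])            # before the truncation bites
```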
Thus the question I have is: if we take the relevant target space as $\ell^2$, can we find a "natural" infinite diagonalization, and thus matrix power, for this case, and moreover express it somehow in terms of at least some sort of infinite combinatorial sums or otherwise easily-manipulated expressions for its coefficients?
Moreover, another interesting and seemingly natural question provoked by this is: exactly how sensitive is the method to the choice of (at least) the norm used to interpret the matrix power? That is, do we have absolute freedom to interpolate $^n e$ in any way we please, and if not, just how much freedom do we get? I suspect "a lot", but there are some "lots" that are more than others in mathematics, even when infinities are concerned, thanks to Cantor. And can we do the inverse - i.e. if I fill in $^{n} e$ with some freely-chosen interpolant on the interval $[0, 1]$ (for the purpose of making things easy I'll assume it's continuous and moreover equals $1$ at $x = 0$ and $e$ at $x = 1$) - can we find a norm such that the associated Carleman matrix power will produce that interpolant? Does the interpolant have to be analytic (seems right, but keep in mind that we are summing a row, not necessarily creating a power series valid for all or even any inputs, though again it also "seems" right that if it's not analytic, the sum will diverge)? If so, what's the proof?
ADD (epoch time 1545.46 Ms): In the quest for an at least summatory-formula shot at the diagonalization, I note this matrix has the interesting relations among rows and columns given by
$$C[\exp]_{(i+1)j} = \sum_{k=0}^{j} \frac{1}{(j - k)!} C[\exp]_{ik}$$
and
$$C[\exp]_{i(j+1)} = \frac{i}{j+1} C[\exp]_{ij}$$
Not sure if this helps anything, though. But at least it shows there is structure - we're not dealing with an effectively random matrix - so in theory it might somehow be exploited in some fashion to simplify things.
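(Both relations are easy to verify symbolically, but for the record, here is a quick numerical check - they hold exactly on truncations, since only finite sums are involved; the indices picked are arbitrary:)

```python
import numpy as np
from math import factorial

N = 12
C = np.array([[i ** j / factorial(j) for j in range(N)]
              for i in range(N)])

i, j = 3, 5   # arbitrary interior indices
# row relation: C_{(i+1) j} = sum_{k=0}^{j} C_{i k} / (j - k)!
print(C[i + 1, j], sum(C[i, k] / factorial(j - k) for k in range(j + 1)))
# column relation: C_{i (j+1)} = (i / (j + 1)) C_{i j}
print(C[i, j + 1], i / (j + 1) * C[i, j])
```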
ADD 2 (same time): You should actually sum the second row of the matrix power to get the tetration $^n e$, not the first to get $^{n-1} e$. The first row always sums to 1.
ADD 3 (ue+1545.47 Ms): The first (row) relation above allows us to derive the interesting property that if $\mathbf{C}[\exp]$ is right-multiplied by the factorial matrix
$$\mathbf{D} := \begin{bmatrix} 1 & \frac{1}{1!} & \frac{1}{2!} & \frac{1}{3!} & \cdots \\ 0 & 1 & \frac{1}{1!} & \frac{1}{2!} & \cdots \\ 0 & 0 & 1 & \frac{1}{1!} & \cdots \\ 0 & 0 & 0 & 1 & \cdots \\ & & \cdots & & \end{bmatrix}$$
where $D_{kj} = \frac{1}{(j-k)!}$ (with negative-argument factorials "naturally" extended as $\infty$, so that those entries are $0$), the product is $\mathbf{C}[\exp]$ shifted up by one row.
ADD 4 (sim. time): It looks like we can move both up and down: in particular, the matrix $\mathbf{D}$ with entries $D_{kj} = (-1)^{j-k} \frac{1}{(j-k)!}$ will shift down, while the other matrix above, perhaps better called $\mathbf{U}$ instead, will shift up (indeed $\mathbf{D} = \mathbf{U}^{-1}$, as right-multiplying by $\mathbf{U}$ and $\mathbf{D}$ amounts to multiplying the underlying series by $e^x$ and $e^{-x}$ respectively). Shifting left and right is possible as well, but via the Hadamard product and not the ordinary product, as the second set of relations between columns indicates. In particular, the Hadamard product with the matrix $\mathbf{L}$ with entries $L_{ij} = \frac{i}{j+1}$ will shift left, and with the matrix $\mathbf{R}$ with entries $R_{ij} = \frac{j}{i}$ will shift right (the $i = 0$ row of $\mathbf{R}$ needing separate care). Thus we have some interesting system for "moving" the matrix about like some kind of tableau or grid - not sure what these symmetry properties do, though, for easing the analytic solution/explicit formula of the matrix power as a summation.
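(A check of the whole "tableau" of moves on a truncation; since the shift matrices are triangular, right-multiplication involves only finite sums and the shifts hold exactly. I leave out $\mathbf{R}$, whose $i = 0$ row needs the separate care noted above:)

```python
import numpy as np
from math import factorial

N = 10
C = np.array([[i ** j / factorial(j) for j in range(N)]
              for i in range(N)])

# U_{kj} = 1/(j-k)!  (upper triangular): right-multiplying shifts rows up
U = np.array([[1 / factorial(j - k) if j >= k else 0.0
               for j in range(N)] for k in range(N)])
# D_{kj} = (-1)^{j-k}/(j-k)! = (U^{-1})_{kj}: shifts rows down
D = np.array([[(-1) ** (j - k) / factorial(j - k) if j >= k else 0.0
               for j in range(N)] for k in range(N)])
# L_{ij} = i/(j+1): Hadamard product shifts columns left
L = np.array([[i / (j + 1) for j in range(N)] for i in range(N)])

print(np.allclose((C @ U)[:-1], C[1:]))         # rows moved up one
print(np.allclose((C @ D)[1:], C[:-1]))         # rows moved down one
print(np.allclose((C * L)[:, :-1], C[:, 1:]))   # columns moved left one
print(np.allclose(U @ D, np.eye(N)))            # and indeed D = U^{-1}
```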