EDIT:
The simple argument contains an error which is revealed by studying the initial value problem
\begin{align}
x(0) &= 1 \\ x'(t) &= x(t)
\end{align}
In this case, all quantities can be computed explicitly and compared. The exact solution is $$x(t) = e^t.$$ Euler's method takes the form
\begin{align} y_0 &= 1 \\ y_{j+1} &= y_j + y_j h. \end{align}
It follows that $$y_n = (1+h)^n.$$
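Here is a small Python sketch that checks this numerically (the values of $h$ and $n$ are arbitrary illustrative choices):

```python
# Explicit Euler for x'(t) = x(t), x(0) = 1, i.e. y_{j+1} = y_j + h*y_j.
# The iterates reproduce the closed form (1 + h)**n up to rounding.
h, n = 0.1, 20

y = 1.0
for _ in range(n):
    y = y + h * y

print(y, (1 + h) ** n)   # both are approximately 6.7275
```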
The truncation error at $t_j = jh$ is
$$ \epsilon_j = x(t_{j+1}) - \left( x(t_j) + x(t_j)h\right) = e^{(j+1)h} - e^{jh}(1+h).$$
It follows that the sum of the truncation errors is
$$ T_n = \sum_{j=0}^{n-1} \epsilon_j = e^h \sum_{j=0}^{n-1} (e^h)^j - (1+h) \sum_{j=0}^{n-1} (e^h)^j = (e^h - 1 - h) \frac{e^{nh}-1}{e^h-1}.$$
In contrast, the global error at $t=nh$ is
$$ E_n = e^{nh} - (1+h)^n.$$
We have $T_n = E_n$ if and only if
$$ (e^{nh} - (1+h)^n)(e^h - 1) = (e^h - 1 - h)(e^{nh}-1)$$
or equivalently
$$ e^{(n+1)h} - e^{nh} - (1+h)^n e^h + (1+h)^n = e^{(n+1)h} - e^h - e^{nh} + 1-he^{nh} + h$$
or equivalently
$$(1-e^h)(1+h)^n = 1-e^{h} + h(1-e^{nh})$$
or equivalently
$$(1+h)^n = 1 + h \frac{e^{nh} - 1}{e^h - 1}$$
or equivalently
$$\frac{(1+h)^n - 1}{h} = \frac{e^{nh} - 1}{e^h - 1}$$
Now, for fixed $n$, the left hand side is a polynomial in $h$ of degree $n-1$, whereas the right hand side is not a polynomial in $h$ unless $n=1$, so the two sides cannot agree for all $h>0$ when $n \geq 2$. In fact they differ for every individual $h>0$: the left hand side is the geometric sum $\sum_{j=0}^{n-1}(1+h)^j$ and the right hand side is $\sum_{j=0}^{n-1} e^{jh}$, and since $1+h < e^h$ for $h>0$, every term with $j \geq 1$ is strictly larger on the right. For $n=1$ both sides equal $1$ for every $h$.
We conclude that $T_n = E_n$ if and only if $n=1$.
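The conclusion is easy to confirm numerically. Here is a small Python sketch evaluating the two closed forms above (the choice $h = 0.1$ is arbitrary):

```python
import math

def T(n, h):
    # sum of the truncation errors: (e^h - 1 - h) * (e^{nh} - 1) / (e^h - 1)
    return (math.exp(h) - 1 - h) * (math.exp(n * h) - 1) / (math.exp(h) - 1)

def E(n, h):
    # global error at t = nh: e^{nh} - (1 + h)^n
    return math.exp(n * h) - (1 + h) ** n

h = 0.1
for n in (1, 2, 5, 10):
    print(n, T(n, h), E(n, h))
# For n = 1 the two values coincide (both equal e^h - 1 - h);
# for n >= 2 the global error E(n, h) is strictly larger than T(n, h).
```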
The error in the simple argument hinges on the difference between the global error and the local truncation error.
In order to eliminate any doubt, I will give all relevant definitions. Consider the initial value problem
\begin{align}
x'(t) &= f(t,x(t)) \\
x(t_0) &= x_0
\end{align}
where $f$ is such that there exists a unique solution $x$ for any initial value $(t_0,x_0)$. Let $h > 0$ be given and set $t_j = t_0 + jh$. Euler's explicit method is given by
\begin{align}
y_{j+1} &= y_j + h f(t_j,y_j), \quad j = 0,1,2,\dots \\
y_0 &= x_0
\end{align}
and we use $y_j$ as an approximation of $x(t_j)$. The global error $e_j$ is given by $$e_j = x(t_j) - y_j, \quad j = 0,1,2,\dotsc $$
In order to bound the global error we need an error formula. We have an iteration for $y_j$, and so it is natural to seek an iteration for $x_j:=x(t_j)$. By Taylor's formula there is at least one $\xi_j \in (t_j,t_{j+1})$ such that
$$x_{j+1} = x(t_{j+1}) = x(t_j + h) = x(t_j) + x'(t_j)h + \frac{1}{2}x''(\xi_j) h^2 \\= x_j + f(t_j,x_j)h + \frac{1}{2}x''(\xi_j) h^2.$$
We conclude that the $x_j$ satisfy an iteration which is quite similar to Euler's method. The local truncation error $\epsilon_j$ at time $t_{j}$ is the error term given by $$\epsilon_j = \frac{1}{2}x''(\xi_j) h^2.$$
We see that the local truncation error is the difference between $x(t_{j+1})$ and a single step of Euler's method starting from the initial point $(t_{j},x(t_j))$, i.e.,
$$ x_{j+1} - ( x_j + f(t_j,x_j)h) = \frac{1}{2} x''(\xi_j) h^2.$$
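This relation is easy to observe numerically. For the test problem $x'=x$ with $x(t)=e^t$, the error of one Euler step taken from an exact point behaves like $\frac{1}{2}x''(t)h^2$ (a small Python sketch; the point $t = 0.5$ and the step sizes are arbitrary choices):

```python
import math

# One Euler step for x' = x taken from the exact point (t, e^t);
# the local truncation error should behave like x''(t)/2 * h^2.
def local_truncation_error(t, h):
    x_t = math.exp(t)                  # exact value x(t)
    one_step = x_t + h * x_t           # single Euler step from (t, x(t))
    return math.exp(t + h) - one_step  # x(t + h) - (x(t) + h*f(t, x(t)))

t = 0.5
for h in (0.1, 0.05, 0.025):
    print(h, local_truncation_error(t, h) / h**2)
# The ratio tends to x''(t)/2 = e^{0.5}/2 ≈ 0.824 as h decreases.
```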
It is this relationship between $x_{j+1}$ and a single Euler step from $(t_j,x_j)$ which allows us to derive an inequality of the type
$$ |e_{j+1}| \leq (1 + Lh) |e_j| + \frac{1}{2} M h^2,$$
where $L$ is a Lipschitz constant for $f$ with respect to its second argument and $M$ is a bound for $|x''|$ on the interval of interest. This recursion leads to the bound
$$ |e_n| \leq \frac{1}{2} Mh \frac{e^{LT} - 1}{L}, \quad 0 \leq nh \leq T.$$
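For the example from the first part ($x'=x$, $x(0)=1$ on $[0,1]$, so one may take $L = 1$ and $M = \max_{[0,1]}|x''| = e$) the bound can be checked directly with a small Python sketch:

```python
import math

# Check the global error bound |e_n| <= (M*h/2) * (e^{L*T} - 1)/L
# for x' = x, x(0) = 1 on [0, T] with T = 1, L = 1 and M = e.
T_end, L = 1.0, 1.0
M = math.exp(T_end)                        # max of |x''(t)| = e^t on [0, T]
for n in (10, 100, 1000):
    h = T_end / n
    y = 1.0
    for _ in range(n):
        y = y + h * y                      # explicit Euler step
    err = abs(math.exp(T_end) - y)         # global error |e_n| at t = T
    bound = 0.5 * M * h * (math.exp(L * T_end) - 1) / L
    print(n, err, bound, err <= bound)     # the bound holds, and both are O(h)
```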
However, in general it is not true that the global error satisfies
$$e_n = \epsilon_0 + \epsilon_1 + \epsilon_2 + \dotsb + \epsilon_{n-1}.$$
The right hand side incorporates information about single Euler steps taken from $n$ different points on the exact solution curve. Unlike the left hand side, it carries no information about the repeated application of Euler's method starting from a single point.
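To illustrate this on a problem other than $x' = x$, here is a small Python sketch comparing the two quantities for the initial value problem $x' = -2tx$, $x(0) = 1$, whose exact solution is $x(t) = e^{-t^2}$ (my choice of test problem; the step size is arbitrary):

```python
import math

def f(t, x):
    return -2.0 * t * x        # right hand side of x' = -2*t*x

def x_exact(t):
    return math.exp(-t * t)    # exact solution of x' = -2*t*x, x(0) = 1

h, n = 0.1, 10
y = 1.0                        # Euler iterate y_j
sum_eps = 0.0                  # running sum of the local truncation errors
for j in range(n):
    t = j * h
    # eps_j: one Euler step taken from the exact point (t_j, x(t_j))
    sum_eps += x_exact(t + h) - (x_exact(t) + h * f(t, x_exact(t)))
    # the actual Euler iteration, started once from (t_0, x_0)
    y = y + h * f(t, y)

e_n = x_exact(n * h) - y       # global error at t = n*h
print(e_n, sum_eps)            # the two numbers are clearly different
```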