The claim is intuitive, but I have not been able to find a formal proof.
Just don't be afraid to formalize each observation in your intuition exactly as it is.
I prefer to think that the maximal waiting time is $m$, the arrival rate is $1$, and the service rate is $\lambda$ (just to avoid stupid denominators everywhere).
1) With this normalization, the average length of the queue is the same as the average time a customer spends in the queue (Little's law with unit arrival rate). If the customer leaves without being served, her time in the queue is just $m$, so it suffices to check that the expected time of a served customer in the queue is close to $m$.
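To spell this out, write $p$ for the fraction of customers who leave unserved and $W$ for the average time in the queue (notation introduced here just for this remark). Then
$$
W=p\,m+(1-p)\,E[\text{queue time of a served customer}],
$$
so once the second term is shown to be $m-O(1)$, the same holds for $W$ and, hence, for the average queue length.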
2) Let $T$ be the time the current customer at the service station spent in the queue before getting to the server and let $0$ be her arrival time to the queue. Let her service time be $\tau$. Then the next customer that will be served is the first one who arrives after the moment $\max(0,T+\tau-m)$ (a customer arriving between $0$ and $T+\tau-m$ would have waited more than $m$ by the time the server becomes free at $T+\tau$, so she is already gone). This moment depends on $T$ and $\tau$ only, so the time $t$ from that moment until the next arrival is independent of $T,\tau$ and exponentially distributed with mean $1$. Thus, the queue waiting time of the next served customer is $\max(\min(T+\tau,m)-t,0)$. In the equilibrium state this should be equidistributed with $T$.
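(As a quick sanity check, not needed for the argument, one can iterate this recursion numerically and watch the empirical mean of $m-T$; the sketch below does just that, with arbitrary parameter values and a hypothetical helper name `stationary_sample`.)

```python
import random

def stationary_sample(m, lam, n_iter=200_000, seed=0):
    """Iterate T -> max(min(T + tau, m) - t, 0) with tau ~ Exp(lam), t ~ Exp(1),
    and collect the values of m - T after a burn-in period."""
    rng = random.Random(seed)
    T = 0.0
    burn_in = n_iter // 10
    samples = []
    for i in range(n_iter):
        tau = rng.expovariate(lam)   # service time, mean 1/lam
        t = rng.expovariate(1.0)     # inter-arrival time, mean 1
        T = max(min(T + tau, m) - t, 0.0)
        if i >= burn_in:
            samples.append(m - T)
    return samples

if __name__ == "__main__":
    lam = 0.8   # service rate < arrival rate (= 1); arbitrary choice
    for m in (10, 20, 40, 80):
        s = stationary_sample(m, lam)
        print(f"m = {m:3d}:  E[m - T] ~ {sum(s) / len(s):.3f}")
```

One should see $E[m-T]$ settle at a value that does not grow with $m$, in line with the $O(1)$ bound obtained below.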
3) This gives you some equation to solve, but since you do not want too much, we'd rather proceed in some crude way. The intuition is that since $E\tau=\frac 1\lambda>1$ and $Et=1$, this recursion is typically a push to the right with truncation at $m$ and occasional long moves to the left whose probability decays exponentially in the length of the move. We should then end up with a probability distribution for $T$ that is concentrated near the right end of the interval $[0,m]$ and decays exponentially as we move away from it. Thus, we should formally choose some small $\alpha>0$ and try to estimate the exponential moment $Ee^{\alpha(m-T)}$. Plugging in the recurrence relation, we get
$$
\begin{aligned}
Ee^{\alpha(m-T)}&=E\exp\{\alpha(m-\max(\min(T+\tau,m)-t,0))\}
\\
&=E\exp\{\alpha(\min(\max(m-T-\tau,0)+t,m))\}
\\
&\le E\exp\{\alpha(\max(m-T-\tau,0)+t)\}=Ee^{\alpha t}E e^{\alpha\max(m-T-\tau,0)}\,.
\end{aligned}
$$
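The middle equality is just the identity $m-\max(x-t,0)=\min((m-x)+t,\,m)$ applied with $x=\min(T+\tau,m)$, for which $m-x=\max(m-T-\tau,0)$; a two-case check confirms it:
$$
m-\max(x-t,0)=
\begin{cases}
(m-x)+t, & t\le x\quad(\text{and then }(m-x)+t\le m),\\
m, & t>x\quad(\text{and then }(m-x)+t> m),
\end{cases}
$$
which is exactly $\min((m-x)+t,\,m)$.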
4) Now notice that if $m-T$ is large, the conditional expectation $E_\tau e^{\alpha\max(m-T-\tau,0)}$ (the expectation over $\tau$ with $T$ fixed) is close to $e^{\alpha(m-T)}Ee^{-\alpha\tau}$ in the sense that for every $\gamma>1$, there is $M_\gamma>0$ such that if $m-T>M_\gamma$, then
$$
E_\tau e^{\alpha\max(m-T-\tau,0)}\le\gamma e^{\alpha(m-T)}Ee^{-\alpha\tau}\,.
$$
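Indeed, writing $s=m-T$ and integrating against the density $\lambda e^{-\lambda u}$ of $\tau$, we get the explicit formula
$$
E_\tau e^{\alpha\max(s-\tau,0)}=\int_0^s e^{\alpha(s-u)}\lambda e^{-\lambda u}\,du+\int_s^\infty \lambda e^{-\lambda u}\,du
=\frac{\lambda}{\lambda+\alpha}e^{\alpha s}+\frac{\alpha}{\lambda+\alpha}e^{-\lambda s}
=e^{\alpha s}Ee^{-\alpha\tau}\Bigl(1+\tfrac{\alpha}{\lambda}e^{-(\lambda+\alpha)s}\Bigr),
$$
so one can take, say, $M_\gamma=\max\Bigl(0,\frac{1}{\lambda+\alpha}\log\frac{\alpha}{\lambda(\gamma-1)}\Bigr)$.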
Thus, splitting the last expectation in the long display into the part corresponding to $m-T\le M_\gamma$ (this part is obviously bounded by $C(\gamma)=e^{\alpha M_\gamma}$) and the part corresponding to $m-T>M_\gamma$ (which is bounded by $\gamma Ee^{-\alpha\tau}Ee^{\alpha(m-T)}$), we get
$$
Ee^{\alpha(m-T)}\le C'(\gamma)+\gamma Ee^{-\alpha\tau}Ee^{\alpha t} Ee^{\alpha(m-T)}
$$
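Spelled out (writing $E[X;\,A]$ for the expectation of $X$ restricted to the event $A$), the splitting reads
$$
\begin{aligned}
Ee^{\alpha t}\,Ee^{\alpha\max(m-T-\tau,0)}
&=Ee^{\alpha t}\Bigl(E\bigl[e^{\alpha\max(m-T-\tau,0)};\,m-T\le M_\gamma\bigr]
\\
&\qquad\qquad+E\bigl[e^{\alpha\max(m-T-\tau,0)};\,m-T> M_\gamma\bigr]\Bigr)
\\
&\le Ee^{\alpha t}\,e^{\alpha M_\gamma}+\gamma\,Ee^{-\alpha\tau}\,Ee^{\alpha t}\,Ee^{\alpha(m-T)},
\end{aligned}
$$
so one can take $C'(\gamma)=e^{\alpha M_\gamma}Ee^{\alpha t}$.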
However
$$
\gamma Ee^{-\alpha\tau}Ee^{\alpha t}=\gamma\frac 1{1-\alpha}\frac{\lambda}{\lambda+\alpha},
$$
which for every fixed $\lambda<1$ can be made smaller than $1$ if $\alpha>0$ close to $0$ is chosen first and $\gamma>1$ close to $1$ is chosen second (a one-line verification is given at the end). Thus, this inequality implies the bound
$$
Ee^{\alpha(m-T)}\le C''(\lambda)<+\infty
$$
and, thereby, the bound $E(m-T)\le C'''(\lambda)$, which is strong enough to conclude that your conjectured $o(m)$ is really just $O(1)$.
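As for the promised verification: at $\alpha=0$ we have $\frac 1{1-\alpha}\frac{\lambda}{\lambda+\alpha}=1$, and
$$
\frac{d}{d\alpha}\Bigl[\log\frac 1{1-\alpha}+\log\frac{\lambda}{\lambda+\alpha}\Bigr]\Big|_{\alpha=0}=1-\frac 1\lambda<0
\quad\text{for }\lambda<1,
$$
so the product drops below $1$ for all sufficiently small $\alpha>0$, after which any $\gamma>1$ close enough to $1$ keeps $\gamma\frac 1{1-\alpha}\frac{\lambda}{\lambda+\alpha}$ below $1$ as well. Finally, the passage from the exponential moment to $E(m-T)$ can be done, e.g., via Jensen's inequality:
$$
e^{\alpha E(m-T)}\le Ee^{\alpha(m-T)}\le C''(\lambda),
\qquad\text{so}\qquad
E(m-T)\le\frac{\log C''(\lambda)}{\alpha}=:C'''(\lambda)
$$
(recall that $\alpha$ was chosen depending on $\lambda$ only).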