I think that the proof in Casella-Berger, and even the one in Lehmann's classic book on hypothesis testing, is not as clear as it seems at first reading.
Throughout this posting, it is assumed that
- The family of densities $\{f_\theta:\theta\in\mathbb{R}\}$ is identifiable ($f_\theta\not\equiv f_{\theta'}$ whenever $\theta\neq\theta'$).
- For any $\theta<\theta'$, the ratio $f_{\theta'}(\mathbf{x})/f_{\theta}(\mathbf{x})=r_{\theta,\theta'}(T(\mathbf{x}))$ is monotone nondecreasing in $t$ on $\tau:=\{T(\mathbf{x}):f_{\theta}(\mathbf{x})+f_{\theta'}(\mathbf{x})>0\}$.
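For a concrete instance of the MLR assumption (my illustration, not part of the original argument): if $X=(X_1,\dots,X_n)$ with the $X_i$ i.i.d. $N(\theta,1)$ and $T(\mathbf{x})=\sum_{i=1}^n x_i$, then for $\theta<\theta'$
$$\frac{f_{\theta'}(\mathbf{x})}{f_{\theta}(\mathbf{x})}=\exp\Big((\theta'-\theta)\,T(\mathbf{x})-\frac{n}{2}(\theta'^2-\theta^2)\Big)=r_{\theta,\theta'}(T(\mathbf{x})),$$
which is strictly increasing in $T(\mathbf{x})$, so the assumption holds.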
Suppose $0<\alpha<1$.
Let $c_l$ and $c_u$ be the smallest and largest $(1-\alpha)$-quantiles of the statistic $T(X)$ with respect to $P_{\theta_0}$. It could happen that
- $c:=c_l=c_u$, in which case $P_{\theta_0}(T>c)\leq \alpha\leq P_{\theta_0}(T\geq c)$.
- $c_l<c_u$, in which case $P_{\theta_0}(T>c_l)=\alpha=P_{\theta_0}(T \geq c_u)$ and $P_{\theta_0}(T>c_u)\leq\alpha$. Since $P_{\theta_0}(T>c_l)=P_{\theta_0}(c_l<T<c_u)+P_{\theta_0}(T\geq c_u)$, the first equality forces $P_{\theta_0}(c_l<T<c_u)=0$; hence $f_{\theta_0}$ vanishes on $\{\mathbf{x}: c_l<T(\mathbf{x})<c_u\}$.
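A minimal example of the second case (again my illustration): if $P_{\theta_0}(T=0)=P_{\theta_0}(T=1)=\tfrac{1}{2}$ and $\alpha=\tfrac{1}{2}$, then every $c\in[0,1]$ is a $(1-\alpha)$-quantile of $T$, so $c_l=0$ and $c_u=1$; indeed $P_{\theta_0}(T>c_l)=\tfrac{1}{2}=P_{\theta_0}(T\geq c_u)$ and $P_{\theta_0}(0<T<1)=0$.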
Let $\theta_1>\theta_0$ and denote by $\psi(X)$ the Neyman-Pearson statistic
\begin{align}
\psi(X)=\left\{ \begin{array}{lcr}
1 &\text{if} & r_{\theta_0,\theta_1}(T(X))>k\\
\gamma &\text{if} & r_{\theta_0,\theta_1}(T(X))=k\\
0 &\text{if} & r_{\theta_0,\theta_1}(T(X))<k
\end{array}\right.
\end{align}
where $k$ is the $(1-\alpha)$-quantile of $Y(X)=r_{\theta_0,\theta_1}(T(X))$ with respect to $P_{\theta_0}$ and $\gamma$ is taken so that
$$\alpha = E_{\theta_0}[\psi(X)]$$
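For concreteness (a standard step, spelled out here): when $P_{\theta_0}[Y(X)=k]>0$ this amounts to taking
$$\gamma=\frac{\alpha-P_{\theta_0}[Y(X)>k]}{P_{\theta_0}[Y(X)=k]}\in[0,1],$$
while any $\gamma$ works when $P_{\theta_0}[Y(X)=k]=0$, since then $P_{\theta_0}[Y(X)>k]=\alpha$.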
Let
\begin{align}
t_*:=\inf\{t\in \tau: r_{\theta_0,\theta_1}(t)\geq k\},\qquad t^*:=\inf\{t\in \tau: r_{\theta_0,\theta_1}(t)> k\}.
\end{align}
Notice that $r_{\theta_0,\theta_1}(t)=k$ for $t\in\tau\cap(t_*,t^*)$: any such $t$ exceeds $t_*$, so $r_{\theta_0,\theta_1}(t)\geq k$ by monotonicity, while $t<t^*$ rules out $r_{\theta_0,\theta_1}(t)>k$.
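In the normal illustration above, $r_{\theta_0,\theta_1}(t)=\exp\big((\theta_1-\theta_0)t-\tfrac{n}{2}(\theta_1^2-\theta_0^2)\big)$ is continuous and strictly increasing, so
$$t_*=t^*=\frac{\log k}{\theta_1-\theta_0}+\frac{n(\theta_1+\theta_0)}{2}$$
and the interval $(t_*,t^*)$ is empty; $t_*<t^*$ can occur only when $r_{\theta_0,\theta_1}$ is flat at the level $k$.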
Case (2): $c_l<c_u$. It follows that $f_{\theta_0}$ vanishes on $T^{-1}(\tau\cap(c_l,c_u))$. Since $t\mapsto r_{\theta_0,\theta_1}(t)$ is monotone nondecreasing, either $\tau\cap(c_l,c_u)=\emptyset$ or $r_{\theta_0,\theta_1}$ takes the value $\infty$ on $\tau\cap(c_l,c_u)$ (there $f_{\theta_0}=0$ while $f_{\theta_1}$ may be positive). This implies that
$$t_*\leq c_l\leq t^*$$
Case (1): $c:=c_l=c_u$. Claim: $t_*\leq c$. Suppose, on the contrary, that $c<t_*$. Then $\{r_{\theta_0,\theta_1}(T(X))\geq k\}\subseteq\{T(X)\geq t_*\}\subseteq\{T(X)>c\}$, so
\begin{align}
\alpha&\geq P_{\theta_0}[T(X)>c]\geq P_{\theta_0}[r_{\theta_0,\theta_1}(T(X))\geq k]\\
&\geq P_{\theta_0}[r_{\theta_0,\theta_1}(T(X))>k]+ \gamma P_{\theta_0}[r_{\theta_0,\theta_1}(T(X))= k]=\alpha
\end{align}
Since the chain begins and ends with $\alpha$, every inequality is an equality; in particular $\alpha=P_{\theta_0}[T>c]$.
- If $\{r_{\theta_0,\theta_1}(T(X))\geq k\}=\{T(X)>t_*\}$,
\begin{align}
\alpha&=P_{\theta_0}[T(X)>c]=P_{\theta_0}[c<T(X)\leq t_*] +P_{\theta_0}[T(X)>t_*]\\
&\geq P_{\theta_0}[r_{\theta_0,\theta_1}(T(X))>k] + P_{\theta_0}[r_{\theta_0,\theta_1}(T(X))=k]\\
&\geq P_{\theta_0}[r_{\theta_0,\theta_1}(T(X))>k] + \gamma P_{\theta_0}[r_{\theta_0,\theta_1}(T(X))=k]=\alpha
\end{align}
Equality throughout forces $P_{\theta_0}[c<T\leq t_*]=0$ and $P_{\theta_0}[T>t_*]=\alpha$, so $t_*$ is also a $(1-\alpha)$-quantile of $T(X)$. Since $c$ is the unique $(1-\alpha)$-quantile, $t_*=c$, a contradiction!
- Similarly, if $\{r_{\theta_0,\theta_1}(T(X))\geq k\}=\{T(X)\geq t_*\}$, then
\begin{align}
\alpha&=P_{\theta_0}[T(X)>c]=P_{\theta_0}[c<T(X)< t_*] +P_{\theta_0}[T(X)\geq t_*]\\
&\geq P_{\theta_0}[r_{\theta_0,\theta_1}(T(X))>k] + P_{\theta_0}[r_{\theta_0,\theta_1}(T(X))=k]\\
&\geq P_{\theta_0}[r_{\theta_0,\theta_1}(T(X))>k] + \gamma P_{\theta_0}[r_{\theta_0,\theta_1}(T(X))=k]=\alpha
\end{align}
Equality throughout forces $P_{\theta_0}[c<T< t_*]=0$ and $P_{\theta_0}[T(X)\geq t_*]=\alpha$, so again $t_*$ is a $(1-\alpha)$-quantile of $T(X)$. Since $c$ is the unique $(1-\alpha)$-quantile, $t_*=c$, a contradiction!
This proves the claim, so $t_*\leq c$. Now, since $T(\mathbf{x})>t^*$ implies $r_{\theta_0,\theta_1}(T(\mathbf{x}))>k$ by monotonicity,
$$P_{\theta_0}[T(X)>t^*]\leq P_{\theta_0}[r_{\theta_0,\theta_1}(T(X))>k]\leq\alpha,$$
and the fact that $c$ is the unique $(1-\alpha)$-quantile of $T(X)$ under $P_{\theta_0}$ then gives $c\leq t^*$. Therefore,
$$t_*\leq c\leq t^*$$
To conclude the proof of the Karlin-Rubin theorem, notice that since $t_*\leq c_l\leq t^*$, on the event $\{r_{\theta_0,\theta_1}(T(X))\neq k\}$ we have: $T(X)>c_l$ implies $r_{\theta_0,\theta_1}(T(X))>k$, and $T(X)<c_l$ implies $r_{\theta_0,\theta_1}(T(X))<k$.
Then, by the Neyman-Pearson lemma, the test
\begin{align}
\widetilde{\psi}(X)=\left\{ \begin{array}{lcr}
1 &\text{if} & T(X)>c_l\\
\gamma &\text{if} & T(X)=c_l\\
0 &\text{if} & T(X)<c_l
\end{array}\right.
\end{align}
where $\gamma$ is now chosen so that $E_{\theta_0}[\widetilde{\psi}(X)]=\alpha$, is a UMP test of size $\alpha$ for testing $\bar{H}_0: \theta=\theta_0$ against $\bar{H}_1: \theta=\theta_1$.
Observe that $\widetilde{\psi}(X)$ does not depend on the choice of $\theta_1$ (only on the fact that $\theta_1>\theta_0$). Hence, $\widetilde{\psi}(X)$ is a UMP test for $\bar{H}_0:\theta=\theta_0$ against $H_1: \theta>\theta_0$.
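As a quick numerical sanity check (not part of the proof; the binomial model and the values of $n$, $\theta_0$, $\alpha$ below are my illustrative choices), one can build $\widetilde{\psi}$ explicitly for $X\sim\mathrm{Binomial}(n,\theta)$, which has MLR in $T(X)=X$, and verify that its size is exactly $\alpha$ and that its power increases in $\theta$:

```python
# Sketch of the randomized test psi~ for X ~ Binomial(n, theta),
# which has monotone likelihood ratio in T(X) = X.
from scipy.stats import binom

n, theta0, alpha = 20, 0.4, 0.05   # illustrative choices

# c_l: smallest (1 - alpha)-quantile of T under P_{theta0}
c = int(binom.ppf(1 - alpha, n, theta0))

# gamma chosen so that E_{theta0}[psi~] = P(T > c) + gamma * P(T = c) = alpha
tail = binom.sf(c, n, theta0)      # P_{theta0}(T > c)
atom = binom.pmf(c, n, theta0)     # P_{theta0}(T = c)
gamma = (alpha - tail) / atom

def power(theta):
    """E_theta[psi~] = P_theta(T > c) + gamma * P_theta(T = c)."""
    return binom.sf(c, n, theta) + gamma * binom.pmf(c, n, theta)

print(f"size = {power(theta0):.6f}")           # equals alpha = 0.05
for th in (0.45, 0.5, 0.6, 0.7):               # power is increasing in theta
    print(f"power({th}) = {power(th):.4f}")
```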
The identifiability of the family
$\{P_\theta:\theta\in\mathbb{R}\}$ implies that the power function is strictly increasing: $$\beta(\theta_0)=E_{\theta_0}[\widetilde{\psi}(X)]<\beta(\theta_1)=E_{\theta_1}[\widetilde{\psi}(X)]$$
for all $\theta_1>\theta_0$. Furthermore, the same arguments used to prove the UMP property of $\widetilde{\psi}(X)$ for testing $\theta_0$ versus $\theta_1$ ($\theta_0<\theta_1$) show that $\widetilde{\psi}(X)$ is in fact also a UMP test of size $\beta(\theta)=E_\theta[\widetilde{\psi}(X)]$ for testing $\theta$ against $\theta'$, for any $\theta<\theta'$.
In particular, since the power function is increasing, $\sup_{\theta\leq\theta_0}E_\theta[\widetilde{\psi}(X)]=\beta(\theta_0)=\alpha$, and so $\widetilde{\psi}$ is the UMP test of size $\alpha$ for testing $H_0:\theta\leq \theta_0$ against $H_1:\theta>\theta_0$.