The answer is negative; in fact, $(1-x)f_{1/2}(x)$ oscillates as $x\to 1^-$. The (zeta-style) approach below explicitly computes the Fourier expansion of the oscillating part as a function of $\log(-\log x)$.
The series $A(s):=\sum_{n=1}^\infty a_n^{-s}$ converges (absolutely) for $s\in\mathbb{C}$ with $\Re s>1/2$, and defines an analytic function there. Now let $A_\pm(s):=\sum_{n=1}^\infty(-1)^{n-1}a_n^{-s}$; since $a_{2n}=4a_n$ and $a_{2n+1}=4a_n+1$, we get
\begin{gather}
A_\pm(s)=\sum_{n=1}^\infty a_n^{-s}-2\sum_{n=1}^\infty a_{2n}^{-s}=(1-2^{1-2s})A(s);
\\
A_\pm(s)=1-\sum_{n=1}^\infty\big((4a_n)^{-s}-(4a_n+1)^{-s}\big).
\end{gather}
The last series converges for $\Re s>-1/2$, and defines the analytic continuation of $A_\pm(s)$ there.
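As a quick numerical sanity check (a sketch; the helper `a(n)` and the truncation $N$ are mine), one can verify both expressions for $A_\pm(s)$ at a real point of fast convergence, say $s=2$. Note that $a_1=1$ together with the recurrences means that $a_n$ is the binary expansion of $n$ read in base $4$:

```python
# Check both expressions for A_pm(s) at s = 2, where all series converge fast.
# a_1 = 1, a_{2n} = 4 a_n, a_{2n+1} = 4 a_n + 1, hence a_n is the binary
# expansion of n reinterpreted in base 4.

def a(n):
    return int(bin(n)[2:], 4)

# the recurrences hold for the base-4 reinterpretation
assert all(a(2 * n) == 4 * a(n) and a(2 * n + 1) == 4 * a(n) + 1
           for n in range(1, 1000))

s, N = 2.0, 10 ** 4
A      = sum(a(n) ** -s for n in range(1, N + 1))                       # A(s)
A_alt  = sum((-1) ** (n - 1) * a(n) ** -s for n in range(1, N + 1))     # alternating form
A_fast = 1 - sum((4 * a(n)) ** -s - (4 * a(n) + 1) ** -s
                 for n in range(1, N + 1))                              # fast form

assert abs((1 - 2 ** (1 - 2 * s)) * A - A_alt) < 1e-6
assert abs(A_alt - A_fast) < 1e-6
```

Both identities agree to well within the truncation error of the slowest series.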
Thus, we have $A(s)$ analytically continued onto $\Re s>-1/2$, with simple poles at
$$
s=\frac12+s_m,\quad s_m:=\frac{m\pi i}{\log2},\quad m\in\mathbb{Z}
$$
provided that $A_\pm(1/2+s_m)\neq 0$, which can be verified numerically for the first few values of $m$ (see the addendum).
Next, we need an estimate of $|A_\pm(\sigma+i\tau)|$ as $|\tau|\to\infty$, with $\sigma>-1/2$ fixed. Write
$$
a^{-\sigma-i\tau}-(a+1)^{-\sigma-i\tau}=a^{-\sigma}\big(a^{-i\tau}-(a+1)^{-i\tau}\big)+(a+1)^{-i\tau}(a^{-\sigma}-(a+1)^{-\sigma})
$$
(with $a>0$). Since
$$
|a^{-i\tau}-(a+1)^{-i\tau}|=2\left|\sin\left(\frac\tau2\log\frac{a+1}a\right)\right|\leqslant\frac{|\tau|}a
$$
and, as one can deduce from Bernoulli's inequality,
$$
|a^{-\sigma}-(a+1)^{-\sigma}|\leqslant|\sigma|a^{-1-\sigma},
$$
we obtain
$$
\left|a^{-\sigma-i\tau}-(a+1)^{-\sigma-i\tau}\right|\leqslant(|\tau|+|\sigma|)a^{-1-\sigma}.\tag{*}\label{estimate}
$$
Putting $a=4a_n$ in \eqref{estimate} and summing over $n$ (the sum converges since $4a_n\geqslant n^2$), we get the crude estimate $|A_\pm(\sigma+i\tau)|=O(|\tau|)$ for fixed $\sigma>-1/2$.
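The estimate \eqref{estimate} itself is easy to spot-check (a sketch; the sample grid of $a$, $\sigma$, $\tau$ is arbitrary):

```python
# Spot-check (*): |a^{-s} - (a+1)^{-s}| <= (|tau| + |sigma|) a^{-1-sigma}
# for a > 0 and s = sigma + i*tau with sigma > -1/2 (arbitrary grid).
ok = all(
    abs(a ** -complex(sig, tau) - (a + 1) ** -complex(sig, tau))
    <= (abs(tau) + abs(sig)) * a ** (-1 - sig) + 1e-12
    for a in (1.0, 4.0, 16.0, 100.0)
    for sig in (-0.4, 0.0, 0.5, 2.0)
    for tau in (0.1, 1.0, 10.0, 100.0)
)
assert ok
```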
Coming back to $f_{1/2}(x)=\sum_{n=1}^\infty a_n^{1/2}x^{a_n}$, we use the Cahen–Mellin integral
$$
e^{-t}=\frac1{2\pi i}\int_{\sigma-i\infty}^{\sigma+i\infty}\Gamma(s)t^{-s}\,ds\qquad(t,\sigma>0)
$$
with $\color{blue}{\sigma>1}$ to justify the interchange of $\sum$ and $\int$ below:
\begin{align}
f_{1/2}(e^{-t})&=\frac1{2\pi i}\sum_{n=1}^\infty a_n^{1/2}\int_{\sigma-i\infty}^{\sigma+i\infty}\Gamma(s)(a_n t)^{-s}\,ds
\\&=\frac1{2\pi i}\int_{\sigma-i\infty}^{\sigma+i\infty}\Gamma(s)A(s-1/2)t^{-s}\,ds
\\&=\frac1{2\pi i}\int_{\sigma-i\infty}^{\sigma+i\infty}\frac{\Gamma(s)A_\pm(s-1/2)t^{-s}}{1-4^{1-s}}\,ds.
\end{align}
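The Cahen–Mellin formula itself can be tested numerically (a sketch; `cgamma` and the quadrature parameters are mine). The complex $\Gamma$ below uses the standard Lanczos approximation with the usual $g=7$ coefficient set, valid for $\Re z\geqslant 1/2$, which suffices on the line $\Re s=\sigma=2$:

```python
# Numerical check of the Cahen-Mellin integral at sigma = 2, t = 1.3.
import cmath, math

_LANCZOS = [0.99999999999980993, 676.5203681218851, -1259.1392167224028,
            771.32342877765313, -176.61502916214059, 12.507343278686905,
            -0.13857109526572012, 9.9843695780195716e-6, 1.5056327351493116e-7]

def cgamma(z):
    """Lanczos approximation of Gamma(z), assumed Re z >= 0.5 (g = 7)."""
    z -= 1
    x = _LANCZOS[0] + sum(c / (z + i) for i, c in enumerate(_LANCZOS[1:], 1))
    w = z + 7.5
    return math.sqrt(2 * math.pi) * w ** (z + 0.5) * cmath.exp(-w) * x

sigma, t = 2.0, 1.3
h, T = 0.005, 40.0   # step and cutoff; |Gamma(2+iy)| decays like e^{-pi|y|/2}
# (1 / 2 pi i) Int Gamma(s) t^{-s} ds over Re s = sigma, with s = sigma + i y:
integral = h / (2 * math.pi) * sum(
    cgamma(complex(sigma, k * h)) * t ** complex(-sigma, -k * h)
    for k in range(-int(T / h), int(T / h) + 1))
assert abs(integral - math.exp(-t)) < 1e-8   # recovers e^{-t}
```

The trapezoid-type sum converges extremely fast here because the integrand extends analytically to a strip around the contour.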
The crude estimate above shows that, as $|\Im s|\to\infty$, the exponential decay of $|\Gamma(s)|$ "outperforms" the (possible) growth of $|A_\pm(s-1/2)|$. This allows us to shift the line of integration to $\color{blue}{0<\sigma<1}$, using the residue theorem:
\begin{align}
f_{1/2}(e^{-t})&=\frac1{2t\log2}\sum_{m\in\mathbb{Z}}\Gamma(1+s_m)A_\pm(1/2+s_m)t^{-s_m}
\\&+\frac1{2\pi i}\int_{\sigma-i\infty}^{\sigma+i\infty}\frac{\Gamma(s)A_\pm(s-1/2)t^{-s}}{1-4^{1-s}}\,ds.
\end{align}
The integral is now $O(t^{-\sigma})$ as $t\to 0^+$, with $\sigma$ arbitrarily close to $0$, hence negligible compared with the main term of order $t^{-1}$. The sum is the (non-constant) Fourier expansion announced at the beginning: since $t^{-s_m}=e^{-m\pi i\log t/\log 2}$, it is a Fourier series in $\log t=\log(-\log x)$ with period $2\log 2$.
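The oscillation can also be seen directly (a sketch; `f_half` and the cutoffs are mine, and $A_0$, $A_1$ are taken from the table in the addendum below): $g(t):=t\,f_{1/2}(e^{-t})$ should wobble around $A_0/(2\log 2)\approx 0.6709$ with half-amplitude $|\Gamma(1+s_1)A_1|/\log 2\approx 5.1\cdot10^{-3}$, periodically in $\log t$ with period $2\log 2$, i.e. under $t\mapsto 4t$:

```python
# Sample g(t) = t * f_{1/2}(e^{-t}) over one full period t0 .. 4*t0.
import math

def f_half(t):
    """f_{1/2}(e^{-t}) = sum_n sqrt(a_n) e^{-a_n t}."""
    total, n = 0.0, 1
    while True:
        an = int(bin(n)[2:], 4)   # a_n: binary digits of n read in base 4
        if an * t > 50.0:          # a_n is increasing; remaining tail < 1e-12
            return total
        total += math.sqrt(an) * math.exp(-an * t)
        n += 1

K, t0 = 24, 1e-6
gs = [t * f_half(t) for t in (t0 * 4 ** (k / K) for k in range(K))]
mean, spread = sum(gs) / K, max(gs) - min(gs)
assert abs(mean - 0.6709) < 5e-3   # the m = 0 term: A_0 / (2 log 2)
assert 0.005 < spread < 0.02       # dominated by the m = +-1 terms
```

The observed peak-to-peak spread of roughly $10^{-2}$ matches the $m=\pm1$ coefficient $\Gamma(1+s_1)A_1$; higher harmonics are suppressed by the decay of $|\Gamma(1+s_m)|$.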
ADDENDUM on the computation of $A_m:=A_\pm\left(\frac12+\frac{m\pi i}{\log2}\right)$ (and of $A_\pm(s)$ in general). Split the fast series into a partial sum and a tail,
$$
A_\pm(s)=S_N(s)+R_N(s),\qquad R_N(s):=\sum_{n=N+1}^\infty\big((4a_n)^{-s}-(4a_n+1)^{-s}\big),
$$
so that $S_N(s)=1-\sum_{n=1}^N\big((4a_n)^{-s}-(4a_n+1)^{-s}\big)$.
The estimate \eqref{estimate} above, and the inequality $4a_n\geqslant n^2$, imply
$$
\big|R_N(\sigma+i\tau)\big|\leqslant(|\tau|+|\sigma|)\sum_{n=N+1}^\infty n^{-2-2\sigma}\leqslant\frac{|\tau|+|\sigma|}{1+2\sigma}N^{-1-2\sigma}.
$$
This estimate at $N=2$ is (already) sufficient to prove that $\color{blue}{A_1\neq 0}$.
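Explicitly (a sketch of the $N=2$ certificate): at $s=\frac12+\frac{\pi i}{\log2}$ one finds $|S_2(s)|\approx 0.847$, while the tail bound gives $|R_2|\leqslant\frac{(\pi/\log2)+1/2}{2}\cdot2^{-2}\approx 0.629$, hence $|A_1|\geqslant|S_2|-|R_2|>0.2>0$:

```python
# N = 2 certificate for A_1 != 0 (sigma = 1/2, tau = pi / log 2).
import math

tau = math.pi / math.log(2)
s = complex(0.5, tau)

def a(n):  # binary digits of n read in base 4
    return int(bin(n)[2:], 4)

# S_2(s) = 1 - [(4^-s - 5^-s) + (16^-s - 17^-s)]
S2 = 1 - sum((4 * a(n)) ** -s - (4 * a(n) + 1) ** -s for n in (1, 2))
# tail bound (|tau| + |sigma|) / (1 + 2 sigma) * N^(-1 - 2 sigma) at N = 2
tail = (tau + 0.5) / 2 * 2 ** -2
assert abs(S2) - tail > 0.2   # hence |A_1| >= |S_2| - tail > 0.2 > 0
```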
To accelerate the convergence for more efficient computations, consider
$$
\Sigma_k(s):=S_{2^k-1}(s),\qquad\Delta_k(s):=R_{2^k-1}(s).
$$
It appears that $\Delta_k(s)$ has an asymptotic expansion of the form
$$
\Delta_k(s)\asymp\sum_{j=0}^{(\infty)} c_j(s)\,2^{-(2j+2s+1)k}\qquad(k\to\infty).
$$
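This can be probed numerically (a sketch; it uses the tabulated value of $A_0$ below as the reference): if $\Delta_k(1/2)\sim c_0\,2^{-2k}$, then consecutive ratios $\Delta_{k+1}/\Delta_k$ should approach $2^{-(2s+1)}=1/4$:

```python
# Empirical support for the leading term of the expansion at s = 1/2.
def a(n):  # binary digits of n read in base 4
    return int(bin(n)[2:], 4)

def S(N, s):  # partial sum S_N(s) of the fast series for A_pm(s)
    return 1 - sum((4 * a(n)) ** -s - (4 * a(n) + 1) ** -s
                   for n in range(1, N + 1))

A0 = 0.9301188714777068                      # tabulated A_pm(1/2)
deltas = [A0 - S(2 ** k - 1, 0.5) for k in (7, 8, 9)]   # Delta_k(1/2)
ratios = [deltas[i + 1] / deltas[i] for i in range(2)]
assert all(0.2 < r < 0.3 for r in ratios)    # ratios near 2^{-(2s+1)} = 1/4
```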
This eventually leads to the following "recipe".
Let $\sum_{j=0}^r p_{r,j} x^j:=\prod_{l=0}^{r-1}(4^l x-1)$ for a (small) positive integer $r$, and define
$$
\Sigma_k^{(r)}(s)=\frac{\sum_{j=0}^r 2^{(2s+1)j}p_{r,j}\Sigma_{k+j}(s)}{\sum_{j=0}^r 2^{(2s+1)j}p_{r,j}}.
$$
Then $\Sigma_k^{(r)}(s)$ is computed in roughly the same time as $\Sigma_{k+r}(s)$ (if the values are computed in sequence with increasing $k$) but converges to $A_\pm(s)$ much faster. This way, I've got the following values.
| $m$ | $\approx\Re A_m$ | $\approx\Im A_m$ |
| --- | --- | --- |
| $0$ | $\phantom{-}0.9301188714777068256428634$ | $\phantom{-}0$ |
| $1$ | $\phantom{-}0.6530947000949628569271101$ | $-0.4875373269525809327441595$ |
| $2$ | $\phantom{-}0.1502485249871372257970461$ | $-0.4820952084817942842557168$ |
| $3$ | $-0.0092994993442554458938163$ | $-0.0963080518133936172598479$ |
| $4$ | $\phantom{-}0.2783414309554035350627076$ | $\phantom{-}0.1200023391019127895989171$ |
| $5$ | $\phantom{-}0.5592044887794212271016653$ | $-0.1168455562119756445211653$ |
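For reference, a compact implementation of the recipe at $s=1/2$ (a sketch in double precision; the helper names are mine) reproduces $A_0$ well beyond the plain $O(N^{-2})$ truncation accuracy of $S_N$, using partial sums of only a few hundred terms:

```python
# Acceleration recipe at s = 1/2: weights 2^{(2s+1)j} p_{r,j} with
# sum_j p_{r,j} x^j = prod_{l < r} (4^l x - 1).
def a(n):  # binary digits of n read in base 4
    return int(bin(n)[2:], 4)

def Sigma(k, s):  # Sigma_k(s) = S_{2^k - 1}(s)
    return 1 - sum((4 * a(n)) ** -s - (4 * a(n) + 1) ** -s
                   for n in range(1, 2 ** k))

def p_coeffs(r):  # coefficients of prod_{l = 0}^{r - 1} (4^l x - 1)
    p = [1]
    for l in range(r):
        q = [0] + [4 ** l * c for c in p]            # multiply by 4^l x
        p = [qi - pi for qi, pi in zip(q, p + [0])]  # subtract p
    return p

def Sigma_accel(k, s, r):  # Sigma_k^{(r)}(s)
    p = p_coeffs(r)
    w = [2 ** ((2 * s + 1) * j) * p[j] for j in range(r + 1)]
    return sum(wj * Sigma(k + j, s) for j, wj in enumerate(w)) / sum(w)

A0 = 0.9301188714777068          # tabulated A_pm(1/2), truncated
approx = Sigma_accel(6, 0.5, 3)  # uses Sigma_6 .. Sigma_9 (N <= 511)
assert abs(approx - A0) < 1e-7
```

For complex $s=\frac12+\frac{m\pi i}{\log2}$ the same code works verbatim with `s = complex(0.5, m * math.pi / math.log(2))`; for the 25-digit values above one would of course replace floats by arbitrary-precision arithmetic.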