Specifically I am working on a problem of the form: $$ \int_0^{\infty} \varphi(x) \delta( x^2 - \alpha^2) dx. $$
I know that when the delta is composed with a smooth function $g(x)$ whose zeros $x_i$ are all simple, we have $\delta(g(x))= \displaystyle{\sum_i} \frac{\delta(x-x_i)}{|g'(x_i)|}$. Here $g(x) = x^2 - \alpha^2$ has roots $x = \pm\alpha$ (assume $\alpha \neq 0$) and $g'(x) = 2x$, so we may proceed to compute: $$ \int_0^{\infty} \varphi(x) \frac{\delta( x - \alpha)}{2|\alpha|}\, dx + \int_0^{\infty} \varphi(x) \frac{\delta( x + \alpha)}{2|\alpha|}\, dx.$$
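To convince myself that this step is at least numerically sensible, I tried replacing $\delta$ with a narrow Gaussian (a nascent delta) and integrating over $(0,\infty)$. This is only a rough sketch, not a justification: the test function $\varphi(x) = e^{-x}$ and the value of $\alpha$ below are arbitrary illustrative choices, not part of the original problem.

```python
import numpy as np
from scipy.integrate import quad

def delta_eps(u, eps):
    """Nascent delta: a narrow Gaussian that tends to delta(u) as eps -> 0."""
    return np.exp(-u**2 / (2.0 * eps**2)) / (np.sqrt(2.0 * np.pi) * eps)

def half_line_integral(phi, alpha, eps):
    """Approximate int_0^inf phi(x) * delta_eps(x^2 - alpha^2) dx.

    The integrand is sharply peaked near x = |alpha|, so integrate over a
    finite window and tell quad where the peak sits.
    """
    upper = 2.0 * abs(alpha) + 5.0
    val, _ = quad(lambda x: phi(x) * delta_eps(x**2 - alpha**2, eps),
                  0.0, upper, points=[abs(alpha)], limit=200)
    return val

phi = lambda x: np.exp(-x)   # illustrative smooth function, standing in for a test function
alpha = 1.5                  # illustrative value, alpha > 0

for eps in (1e-1, 1e-2, 1e-3):
    print(f"eps = {eps:g}:  integral ~ {half_line_integral(phi, alpha, eps):.6f}")

# Value suggested by the computation above: only the root x = +alpha lies in (0, inf)
print(f"phi(alpha) / (2|alpha|):      {phi(alpha) / (2.0 * abs(alpha)):.6f}")
```

For $\alpha > 0$ only the root $x = \alpha$ lies in $(0,\infty)$, and the numbers do seem to approach $\varphi(\alpha)/(2\alpha)$ as $\varepsilon \to 0$, which is what the two-term expression above predicts.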
It is at this point that I refuse to take $$ \int_a^{b} \varphi(x) \delta(x-x_0)\, dx = \begin{cases} \varphi(x_0) & x_0 \in (a,b) \\ 0 & x_0 \notin (a,b)\\ \end{cases} $$
at face value. In another answer I read, it was noted that writing $ \quad \delta[\varphi]= \displaystyle{\int_{-\infty}^{\infty}} \varphi(x)\delta(x)\,dx \ $ is convenient notation rather than a formal definition of the $\delta$-distribution in terms of an integral, which makes complete sense to me. My question, then, is this: how do we make rigorous sense of the expression I am dealing with? I have never taken a class on functional analysis, but I am familiar with the concepts and have a good idea of the algebraic nature of the space of test functions.
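For what it is worth, here is how I currently picture that remark (Python only for concreteness; the name `delta_at` is made up for illustration): a distribution is a linear functional on test functions, and the integral sign is just notation for applying it.

```python
def delta_at(x0):
    """The distribution delta(x - x0): it acts on a test function by evaluation at x0."""
    return lambda phi: phi(x0)

phi = lambda x: x**2 + 1.0   # illustrative smooth function

delta = delta_at(0.0)        # "delta[phi] = int phi(x) delta(x) dx" just means phi(0)
print(delta(phi))            # 1.0
print(delta_at(2.0)(phi))    # 5.0
```

What I do not see is how this picture is supposed to interact with the restricted domain of integration in my original expression.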