Given any event $A$ and any $[0,1]$-valued random variable $X$, the conditional probability $P(A|X)$ is a random variable uniquely defined (outside a set of measure zero) by the requirement that
$$\displaystyle \int_{S}P(A|X) d P = P(S\cap A)$$
for every $X$-measurable set $S$, i.e. $S = X^{-1}(U)$ for some measurable set $U$.
In particular, taking $S$ to be the whole probability space, we have
$$P(A)=\displaystyle \int P(A|X) d P $$
The discrete version of the above is as follows: suppose $Y$ is a discrete random variable taking values $y_1, y_2, \ldots, y_n$. Then we can write
$P(A) = \sum_i P(A, Y=y_i) = \sum_i P(A | Y=y_i)P(Y=y_i)$.
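To make the discrete identity concrete, here is a small numerical sketch with a hypothetical example (not from the question itself): $Y$ is a fair die roll, and $A$ is an event with $P(A \mid Y = y) = y/6$.

```python
# Discrete law of total probability:
# P(A) = sum_i P(A | Y = y_i) P(Y = y_i),
# illustrated with Y a fair die and P(A | Y = y) = y / 6 (a made-up model).

p_Y = {y: 1 / 6 for y in range(1, 7)}          # P(Y = y_i)
p_A_given_Y = {y: y / 6 for y in range(1, 7)}  # P(A | Y = y_i)

# Sum the conditional probabilities weighted by the mass of each value of Y.
p_A = sum(p_A_given_Y[y] * p_Y[y] for y in p_Y)
print(p_A)  # 21/36 ≈ 0.5833
```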
The notation suggests that if $Y$ is a discretisation of $X$ (say, $X$ rounded to the nearest multiple of $1/n$), and we let the discretisation become very fine, then the right-hand side should tend to an integral with respect to $X$:
$$\sum_i P(A | Y=y_i)P(Y=y_i) \to \int P(A|X) dX$$
Here $dX$ means we integrate with respect to the measure induced by $X$, i.e. $\mu_X(B) = P(X \in B)$ for measurable $B$.
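Here is a numerical sketch of that limit, under an assumed concrete model (my choice for illustration, not part of the question): $X$ uniform on $[0,1]$ and $A = \{X^2 > U\}$ for an independent uniform $U$, so that $P(A \mid X = x) = x^2$ and $P(A) = \int_0^1 x^2\,dx = 1/3$. For simplicity $Y$ rounds $X$ down to a multiple of $1/n$ rather than to the nearest one.

```python
# Assumed model: X ~ Uniform[0, 1], A = {X^2 > U} with U independent uniform,
# so P(A | X = x) = x^2 and P(A) = 1/3.

def g(x):
    # P(A | X = x) under the assumptions above.
    return x * x

def discretised_sum(n):
    # Y = X rounded down to a multiple of 1/n; each value y_i = i/n
    # carries mass P(Y = y_i) = 1/n since X is uniform.
    return sum(g(i / n) * (1 / n) for i in range(n))

for n in (10, 100, 10000):
    print(n, discretised_sum(n))
# As n grows, the sums approach 1/3 = P(A): the limit of the discrete
# sums is the integral of x ↦ P(A | X = x) against the measure μ_X.
```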
But this integral against $\mu_X$ is different from the integral against $P$ in the definition of the conditional probability. Should I find this surprising? Where has my intuition for the meaning of these objects broken down?