Say we have $P:A\to\mathbb{R}$ where $A\subseteq [0,1]$
$$P(x)=\begin{cases} x^2+5 & x\in\left(\left\{\frac{1}{a}:a\in\mathbb{N}\right\}\cup\left\{\frac{1}{b^{\sqrt{2}}+0.1}+\frac{1}{5}:b\in\mathbb{Z}\right\}\right)\cap[0,1] \\ x & x\in\left\{\frac{1}{c+0.1}+0.6:c\in\mathbb{Z}\right\}\cap[0,1] \end{cases}$$
There are three limit points: $x=0,0.2,0.6$. My intuition says that, since the set $\left\{\frac{1}{b^{\sqrt{2}}+0.1}+\frac{1}{5}:b\in\mathbb{Z}\right\}\cap[0,1]$ converges to its limit point $0.2$ (as $b$ grows) faster than the other sets converge to theirs, the point $0.2$ should carry weight $1$ and the other limit points should carry weight zero.
Hence the average of $P$ should be $P(0.2)$.
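The rate comparison behind this intuition can be checked numerically. A small sketch (my assumption: indices restricted to positive integers, since $b^{\sqrt{2}}$ is not real for negative $b$):

```python
# Distance of the n-th element of each set to that set's limit point.
# Assumption: b, c range over positive integers only, because
# b ** (2 ** 0.5) is not real for negative b.
for n in (10, 100, 1000):
    d1 = 1 / n                        # {1/a} -> 0
    d2 = 1 / (n ** 2 ** 0.5 + 0.1)    # {1/(b^sqrt(2)+0.1) + 1/5} -> 0.2
    d3 = 1 / (n + 0.1)                # {1/(c+0.1) + 0.6} -> 0.6
    print(n, d1, d2, d3)              # d2 shrinks fastest
```

Since $\sqrt{2}>1$, the second set's distance $n^{-\sqrt{2}}$ decays faster than the roughly $n^{-1}$ decay of the other two.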
Is my intuitive average reasonable? Could we produce a sum that gives it? Does the measure from this answer work?
If $A$ is finite, it's obvious how to define the average of $P$ (just take $\frac{1}{|A|}\sum_{x \in A} P(x)$). So, assume $A$ is infinite. Let $E_1, E_2, \dots$ enumerate the dyadic subintervals of $[0,1]$ level by level ($[0,1]$; then $[0,\tfrac{1}{2}]$, $[\tfrac{1}{2},1]$; then $[0,\tfrac{1}{4}]$, etc.), and consider the sets $A\cap E_1, A\cap E_2, \dots$. Let $x_1$ be a point in the first nonempty one of these sets, let $x_2$ be a point in the second nonempty one, etc. Look at the measures $\delta_{x_1}, \frac{\delta_{x_1}+\delta_{x_2}}{2}, \dots, \frac{\delta_{x_1}+\dots+\delta_{x_N}}{N},\dots$. Since $[0,1]$ is a compact metric space, there is some probability measure $\mu$ on $[0,1]$ that is a weak* limit of a subsequence of these measures, i.e. there is some sequence $(N_k)_k$ with $\frac{1}{N_k}\sum_{j=1}^{N_k} f(x_j) \to \int_0^1 f \, d\mu$ for each $f \in C([0,1])$.
We then define the average of $P$ over $A$ to be $\int_0^1 Pd\mu$.
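As a sanity check, here is a finite numerical sketch of this construction applied to the $A$ and $P$ from the question. The assumptions are mine: the $E_n$ enumerate the dyadic subintervals of $[0,1]$ level by level, the infinite index sets are truncated, and $b, c$ are restricted to positive integers (since $b^{\sqrt{2}}$ is not real for negative $b$):

```python
from bisect import bisect_left

# Truncated version of the question's set A and function P (assumptions:
# finite truncation, positive indices only).
N = 5000
s1 = {1 / a for a in range(1, N)}                             # limit point 0
s2 = {1 / (b ** 2 ** 0.5 + 0.1) + 0.2 for b in range(1, N)}   # limit point 0.2
s3 = {1 / (c + 0.1) + 0.6 for c in range(1, N)}               # limit point 0.6
branch1 = {x for x in s1 | s2 if 0 <= x <= 1}   # points on the x**2 + 5 branch
A = sorted(branch1 | {x for x in s3 if 0 <= x <= 1})

def P(x):
    return x ** 2 + 5 if x in branch1 else x

# Pick one point of A from each nonempty A ∩ E_n, with the E_n taken to be
# the dyadic intervals level by level (any choice of point is allowed;
# here we take the leftmost), then form the running empirical averages.
samples = []
for k in range(13):                               # dyadic levels
    for i in range(2 ** k):
        lo, hi = i / 2 ** k, (i + 1) / 2 ** k
        j = bisect_left(A, lo)                    # first point of A >= lo
        if j < len(A) and A[j] <= hi:
            samples.append(A[j])

total, running = 0.0, []
for n, x in enumerate(samples, 1):
    total += P(x)
    running.append(total / n)
print(running[-1])                                # one empirical average of P
```

The printed value is only one subsequential empirical average for one choice of points, which is exactly the non-uniqueness discussed below.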
Benefits of this definition: (1) It coincides with the Lebesgue measure when $A = \mathbb{Q}\cap[0,1]$; in fact, $\mu$ is the Lebesgue measure whenever $A$ is dense in $[0,1]$. (2) It is localized to the right places (e.g. $A \subseteq [0,\frac{1}{2}]$ implies $\mu$ lives on $[0,\frac{1}{2}]$). (3) It is intuitively an average: we are sampling "randomly" from the interval $[0,1]$ and taking a limit of the empirical averages of these samples.
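Benefit (1) can be illustrated numerically. A sketch under the same assumptions as before (the $E_n$ are the dyadic intervals level by level, and the dense set is truncated to rationals with bounded denominator); the empirical averages of the test function $f(x)=x^2$ should then be close to its Lebesgue integral $\int_0^1 x^2\,dx = \frac{1}{3}$:

```python
from bisect import bisect_left

# A finite truncation of the rationals in [0,1]: denominators up to 100.
A = sorted({p / q for q in range(1, 101) for p in range(q + 1)})

# One point of A from each nonempty dyadic interval, level by level.
samples = []
for k in range(8):
    for i in range(2 ** k):
        lo, hi = i / 2 ** k, (i + 1) / 2 ** k
        j = bisect_left(A, lo)                    # first point of A >= lo
        if j < len(A) and A[j] <= hi:
            samples.append(A[j])

avg = sum(x ** 2 for x in samples) / len(samples)
print(avg)   # close to 1/3, the Lebesgue integral of x^2 on [0,1]
```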
Cons of this definition: (1) It is not unique, for two reasons: (a) the choice of the $x_i$'s is not unique; (b) there might be multiple weak* limits. However, I don't think this can be avoided; I don't believe one can get a unique, intuitive average over an arbitrary countably infinite set.
My answer to your question from 2 years ago (!) might be useful (good to see you're still studying this stuff).