I am trying to prove the following algorithm to be correct:
Sum(A[1..n], s):
    sum = 0
    for r = 1 to s:
        i = a uniformly random index in the interval [1, n]
        sum = sum + A[i]
    return n * (sum / s)
where A is an array of $n$ integers, each A[i] lying in the interval $[1, M]$, and $s = 1/\epsilon$ (for some constant $\epsilon$) is a fixed integer.
Simply put, the algorithm estimates the sum of all elements of A by sampling $s < n$ random elements, taking the arithmetic average of those $s$ sampled values, and multiplying by the number of array elements $n$ to obtain the estimated total sum.
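For concreteness, here is a direct Python transcription of the pseudocode above (the function name `estimate_sum` and the 0-based indexing are my choices, not part of the original algorithm):

```python
import random

def estimate_sum(A, s):
    """Estimate sum(A) by averaging s uniformly random samples and scaling by n."""
    n = len(A)
    total = 0
    for _ in range(s):
        i = random.randrange(n)  # uniformly random index in [0, n-1]
        total += A[i]
    return n * (total / s)
```

Note that when all elements are equal the estimate is exact regardless of which indices are drawn, which is a handy sanity check; in general the output is a random variable whose expectation is the true sum.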
I was advised by some people I know to look into either the Chernoff bound or Hoeffding's inequality for proving randomized algorithms correct, but I am not sure how to apply them to this particular algorithm (I am not experienced with analyzing randomized algorithms, so I may need a few guidelines to better understand how they are used here).
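For reference, the statement of Hoeffding's inequality that I found, specialized to this setting where each sample $X_r = A[i_r]$ is an independent draw lying in $[1, M]$, is:

$$\Pr\left[\left|\frac{1}{s}\sum_{r=1}^{s} X_r - \mathbb{E}[X_1]\right| \ge t\right] \le 2\exp\left(-\frac{2 s t^2}{(M-1)^2}\right),$$

where $\mathbb{E}[X_1] = \frac{1}{n}\sum_{j=1}^{n} A[j]$ when the index is chosen uniformly. What I am unsure about is how to translate this into a correctness guarantee for the scaled estimate $n \cdot (\text{sum}/s)$.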
If neither of these applies, what alternative theorems could be used to prove this algorithm correct?