
Let $x_1, \ldots, x_n$ be a realization of a random sample $X_1, \ldots , X_n$ from a continuous distribution. Why does $L(\theta)=f_\theta(x_1)\cdots f_\theta(x_n)$ hold? I thought $f_\theta(x_i)$ means $P(X_i<x_i)$, which may overlap with another such probability for some $k<i$. Does the product form hold because the $X_i$ are independent?

StubbornAtom
  • 17,932
Peter
  • 121
  • The case when the sample is from a discrete distribution is more intuitive. – Peter Jun 08 '19 at 10:18
  • 1
    By the way, $f_θ(x_i)$ refers to the PDF of $X_{i}$ at $x_{i}$, not the CDF (whereas $P(X_i < x_i)$ is the CDF). – Minus One-Twelfth Jun 08 '19 at 10:30
  • That makes sense, but wouldn't the integral give $0$ for all $x_i$? – Peter Jun 08 '19 at 10:37
  • Do you mean that $P(X_i = x_i) = 0$? Yes, this is true – but remember, the PDF at $x_i$ is not the probability that $X_i = x_i$ (for continuous random variables). It is the "probability density" at $x_i$. See e.g. Difference between Probability and Probability Density for discussions on the difference between a probability and a probability density (or search for other pages about this). – Minus One-Twelfth Jun 08 '19 at 10:38
  • Yes, you are right. Too many concepts and one gets confused. Then what interpretation does the PDF have at a specific value? – Peter Jun 08 '19 at 10:39
  • The PDF at a particular point $a$ gives us the "probability density" at $a$, which is roughly the "probability per unit length" at that point. That is, if $f(a)$ is the PDF value of $X$ at $a$, then the probability that $X$ takes a value in some small interval near $a$ is approximately equal to $f(a)$ times the length of the interval ($P(a < X < a + \epsilon) \approx f(a)\epsilon$). So intuitively, the bigger the PDF value at a point $a$, the more likely $X$ is to take values near the point $a$ compared to places where the PDF value is small. – Minus One-Twelfth Jun 08 '19 at 10:42
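The two points made in the comments can be checked numerically. The sketch below (a stdlib-only Python illustration, with a standard normal chosen as the assumed model and arbitrary sample values) verifies that $P(a < X < a+\epsilon) \approx f(a)\,\epsilon$ for small $\epsilon$, and computes the likelihood of an i.i.d. sample as the product of marginal PDF values, which is the joint density under independence.

```python
import math

def normal_pdf(x, mu=0.0, sigma=1.0):
    """PDF of N(mu, sigma^2) evaluated at x."""
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2.0 * math.pi))

# 1) "Probability per unit length": f(a) * eps approximates
#    P(a < X < a + eps) for small eps.
a, eps = 0.5, 1e-3
n, h = 1000, 1e-3 / 1000
# numerically integrate the PDF over [a, a + eps] (midpoint rule)
prob = sum(normal_pdf(a + (k + 0.5) * h) for k in range(n)) * h
approx = normal_pdf(a) * eps
rel_err = abs(prob - approx) / prob  # small, since eps is small

# 2) For an i.i.d. sample, the joint density -- and hence the
#    likelihood L(theta) -- factors into a product of marginal PDFs.
sample = [0.2, -1.1, 0.7]   # hypothetical realization x_1, ..., x_n
theta = 0.0                  # hypothetical mean parameter
L = 1.0
for x in sample:
    L *= normal_pdf(x, mu=theta)
```

Note that `L` is a product of density values, not of probabilities: each factor can exceed $1$ in general, and $P(X_i = x_i) = 0$ plays no role here.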

0 Answers