
Given a sample $X_1, X_2, \dots, X_{100}$ from the density $f(x;\theta) = \frac{1}{\pi \left(1+\left(x-\theta \right)^2\right)}$, find an approximate solution for $\hat{\theta}_{MLE}$.

My attempt:

I have found the joint likelihood $L(\theta;x_1,x_2,\dots,x_{100}) = \prod _{i=1}^{100}\frac{1}{\pi \left(1+\left(x_i-\theta \right)^2\right)}$

$\ell = \log(L) = -100\ln(\pi)-\sum^{100}_{i=1}\ln\left(1+(x_i-\theta)^2\right)$.

I'm not sure about this step:

$\frac{\partial }{\partial \theta}\log(L) = \sum_{i=1}^{100}\frac{2(x_i-\theta)}{1+(x_i-\theta)^2}$
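As a sanity check on this differentiation step, the analytic derivative can be compared against a central finite difference of the log-likelihood. This is only a sketch: the original sample is not given, so `x` below is a simulated Cauchy sample standing in for the data.

```python
import numpy as np

rng = np.random.default_rng(0)
theta_true = 5.0
# simulated Cauchy(theta_true) sample standing in for the unknown data
x = theta_true + np.tan(np.pi * (rng.random(100) - 0.5))

def loglik(theta):
    return -100 * np.log(np.pi) - np.sum(np.log(1 + (x - theta) ** 2))

def score(theta):
    # the analytic derivative: sum of 2*(x_i - theta) / (1 + (x_i - theta)^2)
    return np.sum(2 * (x - theta) / (1 + (x - theta) ** 2))

# central finite difference of the log-likelihood at an arbitrary theta
h = 1e-6
numeric = (loglik(4.0 + h) - loglik(4.0 - h)) / (2 * h)
print(abs(score(4.0) - numeric))  # tiny, so the differentiation step checks out
```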

Then I used Newton's method to find the maximum.

This is the script I used to calculate it:

#derivative of log(L).
fun1 <- function(theta){
  y1 <- 0
  for(i in 1:length(x)){
    # note: (theta - x[i]) flips the sign of the derivative above; the root is unchanged
    y1 <- y1 + (2*(theta-x[i]))/(1+(x[i]-theta)^2)
  }
  return(y1)
}

#derivative of fun1.
fun1.tag <- function(theta){
  y <- 0
  for(i in 1:length(x)){
    y <- 2*(theta^2+(x[i]^2)-20*x[i]-1)/((1+(x[i]-theta)^2)^2)
  }
  return(y)
}

Newton's method:

guess <- function(theta_guess){
  theta2 <- theta_guess - fun1(theta_guess)/fun1.tag(theta_guess)
  return(theta2)
}

theta1 <- median(data$x)
epsilon <- 1
theta_before <- 0

while(epsilon > 0.0001){
  theta1 <- guess(theta1)
  epsilon <- (theta_before - theta1)^2
  theta_before <- theta1
}
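For comparison, here is the same Newton iteration as a minimal Python sketch. The data `x` is simulated (the original sample is not available), and the second derivative is written out directly from the score rather than reusing `fun1.tag`.

```python
import numpy as np

rng = np.random.default_rng(1)
theta_true = 5.0
x = theta_true + np.tan(np.pi * (rng.random(100) - 0.5))  # simulated Cauchy sample

def score(theta):
    u = x - theta
    return np.sum(2 * u / (1 + u ** 2))  # first derivative of log(L)

def score_prime(theta):
    u = x - theta
    return np.sum(2 * (u ** 2 - 1) / (1 + u ** 2) ** 2)  # second derivative

theta = np.median(x)  # the median is a robust starting point for Cauchy data
for _ in range(100):
    step = score(theta) / score_prime(theta)
    theta = theta - step
    if step ** 2 < 1e-8:
        break

print(theta)  # close to theta_true for this simulated sample
```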

What I got was $\hat{\theta}_{MLE} = 5.166$

I'm now trying to plot the data (in my case x) and check whether $\hat{\theta}_{MLE} = 5.166$ is actually a maximum.
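One way to run that check, sketched here on simulated data (with the real sample, `x` would just be the observed vector): evaluate the log-likelihood on a grid around the median, take the grid maximizer, and confirm the second derivative there is negative.

```python
import numpy as np

rng = np.random.default_rng(2)
x = 5.0 + np.tan(np.pi * (rng.random(100) - 0.5))  # stand-in for the real data

def loglik(theta):
    return -len(x) * np.log(np.pi) - np.sum(np.log(1 + (x - theta) ** 2))

grid = np.linspace(np.median(x) - 3, np.median(x) + 3, 601)
values = np.array([loglik(t) for t in grid])
theta_hat = grid[np.argmax(values)]

# second derivative at theta_hat: negative confirms a local maximum
u = x - theta_hat
second = np.sum(2 * (u ** 2 - 1) / (1 + u ** 2) ** 2)
print(theta_hat, second < 0)
# to plot: import matplotlib.pyplot as plt; plt.plot(grid, values); plt.show()
```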

Mahajna

2 Answers


You have a typo in your formula:

#derivative of fun1.
fun1.tag <- function(theta){
  y <- 0
  for(i in 1:length(x)){
    y <- y + 2*(theta^2+(x[i]^2)-20*x[i]-1)/((1+(x[i]-theta)^2)^2)
  }
  return(y)
}

The `y +` is missing inside the loop; without it, each iteration overwrites the sum instead of accumulating it.

desertnaut
kate-melnykova

It seems it is even simpler.

The MLE is defined as

$$ \theta_{MLE} = \arg\max -(100 \ln \pi + \sum_{i=1}^{100} \ln(1 + (x_{i} - \theta)^{2})) $$

so you need to minimize the sum of logs. Applying the exponential to each element of the sum does not change the result of the argmin, because the exponential is a monotone increasing function, so at the end of the day you have to solve

$$ \theta_{MLE} = \arg\min \sum_{i=1}^{100} (x_{i} - \theta)^{2} $$

and since this is clearly convex, the argmin is where the derivative vanishes, so

$$ \frac{\partial}{\partial \theta} \sum_{i=1}^{100} (x_{i} - \theta)^{2} = -2\sum_{i=1}^{100} (x_{i} - \theta) = 0 $$

so finally

$$ \theta_{MLE} = \frac{1}{100} \sum_{i=1}^{100} x_{i} $$

which is the center of mass (the sample mean) of the observations.
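The final identity, that the sum of squares is minimized at the sample mean, is easy to confirm numerically. A brute-force sketch with arbitrary simulated data:

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(5.0, 1.0, 100)  # any sample works for this identity

# brute-force argmin of sum((x_i - theta)^2) over a fine grid around the mean
grid = np.linspace(x.mean() - 2, x.mean() + 2, 40001)
sums = ((x[:, None] - grid[None, :]) ** 2).sum(axis=0)
theta_star = grid[np.argmin(sums)]

print(theta_star - x.mean())  # within one grid step of zero
```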

Nicola Bernini