
This question is posted here in order to answer a Stack Overflow question, since SO does not support MathJax.

To prove that a given point is a global maximum of a function, you should calculate the determinant of the Hessian matrix evaluated at that point.

Let $ X_1, …, X_n$ be a random sample from $N(\theta_1, \theta_2)$.

Then the log-likelihood function is $$ \ln L(\theta_1, \theta_2) = -\frac{n}{2}\ln(2\pi\theta_2) - \frac{1}{2\theta_2}\sum_{i=1}^{n}(x_i - \theta_1)^2 $$

The first-order derivatives are $$ \frac{\partial \ln L}{\partial \theta_1} = \frac{1}{\theta_2} \sum_{i=1}^n (x_i - \theta_1), \qquad \frac{\partial \ln L}{\partial \theta_2} = -\frac{n}{2\theta_2} + \frac{\sum_{i=1}^n (x_i - \theta_1)^2}{2\theta_2^2} $$

Setting both derivatives to zero gives the MLEs $$ \hat\theta_1 = \frac{1}{n}\sum_{i=1}^n x_i = \bar x, \qquad \hat\theta_2 = \frac{1}{n}\sum_{i=1}^n (x_i - \hat\theta_1)^2 = \frac{1}{n}\sum_{i=1}^n (x_i - \bar x)^2 $$

The second-order derivatives are $$ \begin{align*} & \frac{\partial^2 \ln L}{\partial \theta_1^2} = -\frac{n}{\theta_2} \\ & \frac{\partial^2 \ln L}{\partial \theta_2^2} = \frac{n}{2\theta_2^2} - \frac{\sum_{i=1}^n(x_i - \theta_1)^2}{\theta_2^3} \\ & \frac{\partial^2 \ln L}{\partial \theta_1 \partial \theta_2} = -\frac{\sum_{i=1}^n(x_i - \theta_1)}{\theta_2^2} \end{align*} $$

Evaluating these three second-order derivatives at $(\hat\theta_1, \hat\theta_2)$, using $\sum_{i=1}^n (x_i - \bar x)^2 = n\hat\theta_2$ and $\sum_{i=1}^n (x_i - \bar x) = 0$, gives $$ \begin{align*} & \left. \frac{\partial^2 \ln L}{\partial \theta_1^2}\right\vert_{\hat\theta_1, \hat\theta_2} = -\frac{n}{\hat\theta_2} = -\frac{n^2}{\sum_{i=1}^n(x_i - \bar x)^2} \\ & \left. \frac{\partial^2 \ln L}{\partial \theta_2^2}\right\vert_{\hat\theta_1, \hat\theta_2} = \frac{n}{2\hat\theta_2^2} - \frac{n\hat\theta_2}{\hat\theta_2^3} = \frac{n}{2\hat\theta_2^2} - \frac{n}{\hat\theta_2^2} = -\frac{n}{2\hat\theta_2^2} \\ & \left. \frac{\partial^2 \ln L}{\partial \theta_1 \partial \theta_2}\right\vert_{\hat\theta_1, \hat\theta_2} = 0 \end{align*} $$
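The algebra above is easy to get wrong, so as a sanity check here is a minimal SymPy sketch that reproduces the partial derivatives symbolically (the names `logL`, `t1`, `t2` are my own, not part of the question):

```python
import sympy as sp

# theta1 is a real mean, theta2 > 0 is the variance; n is the sample size.
t1 = sp.Symbol('theta1', real=True)
t2 = sp.Symbol('theta2', positive=True)
n = sp.Symbol('n', positive=True, integer=True)
i = sp.Symbol('i', positive=True, integer=True)
x = sp.IndexedBase('x')

# Log-likelihood of N(theta1, theta2), written with a symbolic sum.
logL = -n * sp.log(2 * sp.pi * t2) / 2 \
       - sp.Sum((x[i] - t1) ** 2, (i, 1, n)) / (2 * t2)

# First-order derivatives: should match Sum(x[i]-theta1)/theta2 and
# -n/(2*theta2) + Sum((x[i]-theta1)**2)/(2*theta2**2).
print(sp.simplify(sp.diff(logL, t1).doit()))
print(sp.simplify(sp.diff(logL, t2).doit()))

# Second-order derivatives: should match -n/theta2,
# n/(2*theta2**2) - Sum((x[i]-theta1)**2)/theta2**3, and
# -Sum(x[i]-theta1)/theta2**2.  SymPy may split or regroup the sums,
# but the printed expressions are equivalent to these.
print(sp.simplify(sp.diff(logL, t1, 2).doit()))
print(sp.simplify(sp.diff(logL, t2, 2).doit()))
print(sp.simplify(sp.diff(logL, t1, t2).doit()))
```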

The determinant of the Hessian matrix evaluated at $(\hat\theta_1, \hat\theta_2)$ is $$ \vert H \vert = \left. \left[ \frac{\partial^2 \ln L}{\partial \theta_1^2} \frac{\partial^2 \ln L}{\partial \theta_2^2} - \left( \frac{\partial^2 \ln L}{\partial \theta_1 \partial \theta_2} \right)^2 \right] \right\vert_{\hat\theta_1, \hat\theta_2} $$

Since the sample size $n \geq 1$ and $\hat\theta_2 > 0$, the first term on the right-hand side is the product of two negative factors, and the cross term vanishes because the mixed partial is zero at the MLE.
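Plugging in the values obtained above makes this explicit: $$ \vert H \vert = \left(-\frac{n}{\hat\theta_2}\right)\left(-\frac{n}{2\hat\theta_2^2}\right) - 0^2 = \frac{n^2}{2\hat\theta_2^3} > 0 $$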

Finally, $|H| > 0$ and $\left. \frac{\partial^2 \ln L}{\partial \theta_1^2}\right\vert_{\hat\theta_1, \hat\theta_2} < 0$ imply that $\ln L(\theta_1, \theta_2)$ is concave at $(\hat\theta_1, \hat\theta_2)$, so this point is a local maximum.

The first-order derivatives produced only one candidate, so this local maximum is the global maximum.
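This can also be checked numerically (not a proof, just a sanity check): simulate a sample, compute the MLE, and confirm that a finite-difference Hessian there is negative definite. The sample parameters and the step size `h` below are arbitrary choices of mine:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=2.0, scale=1.5, size=200)   # sample from N(2, 1.5^2)
n = x.size

def log_lik(t1, t2):
    """Log-likelihood of N(t1, t2), where t2 is the variance."""
    return -0.5 * n * np.log(2 * np.pi * t2) - np.sum((x - t1) ** 2) / (2 * t2)

# MLEs derived above: sample mean and (biased) sample variance.
t1_hat = x.mean()
t2_hat = np.mean((x - t1_hat) ** 2)

# Central finite-difference Hessian of the log-likelihood at the MLE.
h = 1e-4
def hess(t1, t2):
    f = log_lik
    d11 = (f(t1 + h, t2) - 2 * f(t1, t2) + f(t1 - h, t2)) / h**2
    d22 = (f(t1, t2 + h) - 2 * f(t1, t2) + f(t1, t2 - h)) / h**2
    d12 = (f(t1 + h, t2 + h) - f(t1 + h, t2 - h)
           - f(t1 - h, t2 + h) + f(t1 - h, t2 - h)) / (4 * h**2)
    return np.array([[d11, d12], [d12, d22]])

H = hess(t1_hat, t2_hat)
print("Hessian at MLE:\n", H)
# Closed forms derived above: -n/theta2_hat and -n/(2*theta2_hat^2).
print("closed forms:", -n / t2_hat, -n / (2 * t2_hat**2))
print("det(H) > 0:", np.linalg.det(H) > 0)   # positive determinant
print("d11 < 0:", H[0, 0] < 0)               # negative leading entry
```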

  • The log-likelihood function $\log L(\theta_1, \theta_2)$ is not concave in the vector $(\theta_1, \theta_2)$, whereas it is concave in each of $\theta_1$ and $\theta_2$ separately; see here for more details. – Amir Mar 11 '25 at 16:01
