Problem:
Given $\mathbf{A}_D\in [0,1]^{N\times N}$ ($D,N\in\mathbb{Z}^+$ and $D\ge N$) converging to the identity matrix $\mathbf{I}_N$ in probability, i.e., for any $\epsilon>0$ and any choice of norm $\|\cdot\|$, we have: $$ \mathbb{P}[\|\mathbf{A}_D-\mathbf{I}_N\|\geq\epsilon]\to0~~(D\rightarrow \infty). $$
Can we say that $\mathbb{E}[\ln(\det(\mathbf{A}_D))] \rightarrow 0$? How to prove/disprove this?
Can we directly calculate the value of $\mathbb{E}[\ln(\det(\mathbf{A}_D))]$?
(Please see the Update part for more details about how $\mathbf{A}_D$ is generated in my task.)
Background:
I posted a previous problem here, which was resolved by the answer from @JacobManaker.
Now I am unsure how to show whether the convergence of the expectation holds. I first tried to learn something from here, but that material is still too difficult for me.
Intuitively, I guess that since $\mathbf{A}_D\rightarrow \mathbf{I}_N$, we should have $\det(\mathbf{A}_D)\rightarrow 1$ and hence $\ln(\det(\mathbf{A}_D))\rightarrow 0$.
One key fact is that all elements of $\mathbf{A}_D$ are bounded in $[0,1]$.
But how can I analyse this rigorously?
Update 1 (The Generation Method of $\mathbf{A}_D$):
Here I supplement more details on how the matrix $\mathbf{A}_D$ is generated (from the previous problem):
Given $\alpha\in\mathbb{R}^+$, $N\in \mathbb{Z}^+$ and $D\in \{N, N+1, N+2, \cdots\}$, a random matrix $\mathbf{A}_D$ is generated by the following steps:
$(1)$ Randomly select $N$ distinct numbers (uniformly, without replacement) from $\{1,2,\cdots,D\}$ to form a sequence $p=\{p_i\}_{i=1}^N$.
$(2)$ Then calculate $\mathbf{A}_D=[a_{ij}]_{N\times N}$, where $a_{ij}=e^{-\alpha |p_i - p_j|}$.
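For concreteness, the two steps above can be sketched in Python (a minimal sketch; `generate_A` is a helper name of my own, and `random.sample` draws without replacement, matching the program further below):

```python
import random

import numpy as np

def generate_A(D, N, alpha):
    """Follow the two steps above: sample N distinct indices p_i
    from {1, ..., D}, then set a_ij = exp(-alpha * |p_i - p_j|)."""
    p = np.array(random.sample(range(1, D + 1), N))
    # Broadcasting builds the N x N matrix of pairwise distances |p_i - p_j|.
    return np.exp(-alpha * np.abs(p[:, None] - p[None, :]))

A = generate_A(D=1000, N=5, alpha=1.0)
```

By construction, `A` is symmetric with unit diagonal and entries in $(0,1]$.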
Update 2 (Some of My Efforts):
I am confused by how to start.
I know that the diagonal elements of $\mathbf{A}_D$ are all ones, since $|p_i-p_i|=0$.
I also know that all elements of $\mathbf{A}_D$ lie in $(0,1]$ and that $\mathbf{A}_D$ is symmetric; moreover, since the $p_i$ are distinct, I believe $\mathbf{A}_D$ is positive definite (it is the Gram matrix of the Laplace kernel at distinct points), so $\ln(\det(\mathbf{A}_D))$ is well defined.
Intuitively, I guess that as $D$ increases, the pairwise distances $|p_i-p_j|$ ($i\neq j$) tend to become larger, so the off-diagonal entries $a_{ij}$ are expected to shrink toward zero.
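This guess can be checked numerically (a quick sketch; `mean_min_gap` is a helper name of my own): the smallest gap between consecutive sorted $p_i$'s grows with $D$, which pushes the largest off-diagonal entry of $\mathbf{A}_D$ toward zero.

```python
import random

def mean_min_gap(D, N, trials=2000, seed=0):
    """Average, over many samples, of the smallest gap between
    consecutive sorted draws of N distinct numbers from {1, ..., D}."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        p = sorted(rng.sample(range(1, D + 1), N))
        total += min(b - a for a, b in zip(p, p[1:]))
    return total / trials

for D in (100, 1000, 10000):
    print(D, mean_min_gap(D, N=10))
```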
I also wrote the following Python program for numerical validation:

import numpy as np
import random
from scipy import spatial

alpha = 1
N = 10
I = np.eye(N)
for D in range(N, 10000):
    # Estimate E[||A - I||_F^2] / N^2 over 100 random draws of A_D.
    MSE = 0.0
    for i in range(100):
        # Sample N distinct indices p_i from {1, ..., D}.
        p = np.array(random.sample(range(1, D + 1), N)).reshape(N, 1)
        # a_ij = exp(-alpha * |p_i - p_j|).
        A = np.exp(-alpha * spatial.distance.cdist(p, p))
        MSE += np.sum((A - I) ** 2.0)
    MSE /= (100 * N * N)
    print(MSE)
I can see that as $D$ increases, the mean squared error between $\mathbf{A}_D$ and $\mathbf{I}_N$ converges to zero (sampled output below):
0.027683220252563596
0.02508590350202309
0.02317795057344325
...
0.0001934704436327538
0.00032059290537374806
0.0003270223508894337
...
5.786435956425624e-05
1.1065792791574203e-05
5.786469182583059e-05
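Regarding the question of computing $\mathbb{E}[\ln(\det(\mathbf{A}_D))]$ directly: short of a closed form, it can at least be estimated by Monte Carlo (a sketch; `mc_log_det` is a helper name of my own, and `np.linalg.slogdet` is used for numerical stability on near-singular matrices). In my runs the estimate also moves toward $0$ as $D$ grows, consistent with the MSE experiment above.

```python
import random

import numpy as np

def mc_log_det(D, N, alpha, trials=500, seed=0):
    """Monte Carlo estimate of E[ln det A_D] under the generation
    method described in Update 1."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        p = np.array(rng.sample(range(1, D + 1), N))
        A = np.exp(-alpha * np.abs(p[:, None] - p[None, :]))
        # slogdet returns (sign, ln|det|); A is positive definite here,
        # so the sign is +1 and logdet equals ln(det(A)).
        _, logdet = np.linalg.slogdet(A)
        total += logdet
    return total / trials

for D in (10, 100, 1000, 10000):
    print(D, mc_log_det(D, N=10, alpha=1.0))
```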