
There are 3 players and one dealer in a casino. The dealer chooses a player uniformly at random ($p_1=\frac{1}{3}$), and the chosen player tosses a fair coin ($p_2=\frac{1}{2}$).

If the coin lands heads, the chosen player gets 3 dollars, while the dealer and the other two players each lose 1 dollar.
If the coin lands tails, the chosen player loses 3 dollars, while the dealer and the other two players each get 1 dollar.

This game is repeated $n$ times. Let $x_{1n},x_{2n},x_{3n}$ be the total net profits (or losses) of the players, and let $y_n$ be the total net profit (or loss) of the dealer.

Let $g(n)=P\big(y_n>\max_i(x_{in})\big)$.

For example, $g(1)=0, g(3)=\frac{1}{36}$.
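For small $n$ these values can be checked by direct enumeration over all $6^n$ equally likely rounds. A minimal Python sketch (the function name `brute_force_g` and the outcome encoding are mine):

```python
from fractions import Fraction
from itertools import product

def brute_force_g(n):
    """Enumerate all 6^n equally likely rounds and count those in which
    the dealer's total strictly exceeds every player's total."""
    wins = 0
    for rounds in product(range(6), repeat=n):
        x = [0, 0, 0]   # players' totals
        y = 0           # dealer's total
        for r in rounds:            # encode: r < 3 means heads, chosen = r % 3
            chosen, heads = r % 3, r < 3
            for i in range(3):
                x[i] += (3 if i == chosen else -1) * (1 if heads else -1)
            y += -1 if heads else 1
        if y > max(x):
            wins += 1
    return Fraction(wins, 6 ** n)

print(brute_force_g(1), brute_force_g(3))   # 0 1/36
```

Enumeration is only feasible for small $n$, since the cost grows as $6^n$.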

Prove: $g(n)$ is increasing along the odd numbers, i.e. $g(n+2)>g(n)$ for every odd $n$.

Comment: Intuitively, by the central limit theorem, it seems that the $x_{in}$ should concentrate around $0$ more strongly than $y_n$ does. However, the coupling structure makes this problem far from easy.

2 Answers

Let $\ R_{ni}\ $ be the number of times player $\ i\ $ is chosen and tosses tails, and $\ R_{n\,i+3}\ $ the number of times he or she is chosen and tosses heads. The random variables $\ R_{nj}\ $ are multinomially distributed with parameters $\ n\ $ and $\ \frac{1}{6}, \frac{1}{6},\dots, \frac{1}{6}\ $:
\begin{align}
P\big(R_{n1}=n_1,R_{n2}=n_2,\dots,&R_{n6}=n_6\big)\\
=&\cases{\displaystyle \frac{n!}{\big(n_1!n_2!\dots n_6!\big)6^n}&if $\ \displaystyle n=\sum_{i=1}^6n_i$\\ \hspace{3em}0&otherwise, }
\end{align}
and
\begin{align}
y_n&=\sum_{j=1}^3\big(R_{nj}-R_{n\,j+3}\big)\\
x_{in}&= 4\big(R_{n\,i+3}-R_{ni}\big)+ \sum_{j=1}^3\big(R_{nj}-R_{n\,j+3}\big)\ .
\end{align}
Therefore,
\begin{align}
P\big(y_n>\max_{i\in\{1,2,3\}}x_{in}\big)&=P\big(R_{n1}>R_{n4},\ R_{n2}>R_{n5},\ R_{n3}>R_{n6}\big)\\
&=\frac{1}{8}P\big(R_{n1}\ne R_{n4},\ R_{n2}\ne R_{n5},\ R_{n3}\ne R_{n6}\big)
\end{align}
by symmetry. Now
\begin{align}
P\big(R_{n1}\ne R_{n4},\ R_{n2}&\ne R_{n5},\ R_{n3}\ne R_{n6}\big)\\
=1-&3P\big(R_{n1}=R_{n4},\ R_{n2}\ne R_{n5},\ R_{n3}\ne R_{n6}\big)\\
-&3P\big(R_{n1}=R_{n4},\ R_{n2}=R_{n5},\ R_{n3}\ne R_{n6}\big)\\
-&P\big(R_{n1}=R_{n4},\ R_{n2}=R_{n5},\ R_{n3}=R_{n6}\big)\\
=1-&3\big(P\big(R_{n1}=R_{n4},\ R_{n3}\ne R_{n6}\big)\\
&\hspace{2em}-P\big(R_{n1}=R_{n4},\ R_{n2}=R_{n5},\ R_{n3}\ne R_{n6}\big)\big)\\
-&3P\big(R_{n1}=R_{n4},\ R_{n2}=R_{n5},\ R_{n3}\ne R_{n6}\big)\\
-&P\big(R_{n1}=R_{n4},\ R_{n2}=R_{n5},\ R_{n3}=R_{n6}\big)\\
=1-&3P\big(R_{n1}=R_{n4}\big)+3P\big(R_{n1}=R_{n4},\ R_{n3}=R_{n6}\big)\\
-&P\big(R_{n1}=R_{n4},\ R_{n2}=R_{n5},\ R_{n3}=R_{n6}\big)\ ,
\end{align}
where
\begin{align}
P\big(R_{n1}=R_{n4}\big)=&\sum_{j=0}^{\left\lfloor\frac{n}{2}\right\rfloor}\frac{n!}{(j!)^2(n-2j)!}\left(\frac{1}{6}\right)^{2j}\left(\frac{2}{3}\right)^{n-2j}\ ,\\
P\big(R_{n1}=R_{n4},\ R_{n3}&=R_{n6}\big)\\
=\sum_{j=0}^{\left\lfloor\frac{n}{2}\right\rfloor}\sum_{k=0}^{\left\lfloor\frac{n}{2}\right\rfloor-j}&\frac{n!}{(j!)^2(k!)^2(n-2(j+k))!}\left(\frac{1}{6}\right)^{2(j+k)}\left(\frac{1}{3}\right)^{n-2(j+k)}\ ,\\
P\big(R_{n1}=R_{n4},\ R_{n2}&=R_{n5},\ R_{n3}=R_{n6}\big)\\
=&\cases{0&if $\ n\ $ is odd\\ \displaystyle\frac{1}{6^{2r}}\sum_{j=0}^r\sum_{k=0}^{r-j}\frac{(2r)!}{(j!)^2(k!)^2((r-j-k)!)^2}&if $\ n=2r$}\ .
\end{align}
Putting all this together, we have
\begin{align}
g(n)=\frac{1}{8}&\Bigg(1-3\sum_{j=0}^{\left\lfloor\frac{n}{2}\right\rfloor}\frac{n!}{(j!)^2(n-2j)!}\left(\frac{1}{6}\right)^{2j}\left(\frac{2}{3}\right)^{n-2j}\\
+&3\sum_{j=0}^{\left\lfloor\frac{n}{2}\right\rfloor}\sum_{k=0}^{\left\lfloor\frac{n}{2}\right\rfloor-j}\frac{n!}{(j!)^2(k!)^2(n-2(j+k))!}\left(\frac{1}{6}\right)^{2(j+k)}\left(\frac{1}{3}\right)^{n-2(j+k)}\Bigg)
\end{align}
if $\ n\ $ is odd, or
\begin{align}
g(2r)=\frac{1}{8}&\Bigg(1-3\sum_{j=0}^r\frac{(2r)!}{(j!)^2(2(r-j))!}\left(\frac{1}{6}\right)^{2j}\left(\frac{2}{3}\right)^{2(r-j)}\\
+&3\sum_{j=0}^r\sum_{k=0}^{r-j}\frac{(2r)!}{(j!)^2(k!)^2(2(r-j-k))!}\left(\frac{1}{6}\right)^{2(j+k)}\left(\frac{1}{3}\right)^{2(r-j-k)}\\
-&\frac{1}{6^{2r}}\sum_{j=0}^r\sum_{k=0}^{r-j}\frac{(2r)!}{(j!)^2(k!)^2((r-j-k)!)^2}\Bigg)\ ,
\end{align}
if $\ n=2r\ $ is even.
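These sums translate directly into exact rational arithmetic. A sketch in Python (a direct, unoptimised transcription of the formulas above; the function name is mine):

```python
from fractions import Fraction
from math import factorial

def g(n):
    """Exact g(n) via the closed-form sums above."""
    f = factorial
    # P(R_n1 = R_n4)
    p1 = sum(Fraction(f(n), f(j)**2 * f(n - 2*j))
             * Fraction(1, 6)**(2*j) * Fraction(2, 3)**(n - 2*j)
             for j in range(n // 2 + 1))
    # P(R_n1 = R_n4, R_n3 = R_n6)
    p2 = sum(Fraction(f(n), f(j)**2 * f(k)**2 * f(n - 2*(j + k)))
             * Fraction(1, 6)**(2*(j + k)) * Fraction(1, 3)**(n - 2*(j + k))
             for j in range(n // 2 + 1) for k in range(n // 2 - j + 1))
    # P(all three pairs equal); zero when n is odd
    if n % 2:
        p3 = Fraction(0)
    else:
        r = n // 2
        p3 = Fraction(1, 6**n) * sum(
            Fraction(f(n), (f(j) * f(k) * f(r - j - k))**2)
            for j in range(r + 1) for k in range(r - j + 1))
    return Fraction(1, 8) * (1 - 3*p1 + 3*p2 - p3)

for n in range(1, 11):
    print(n, g(n), float(g(n)))   # should reproduce the table below
```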

The values $\ g(n)\ $ for $\ n=1,2,\dots,10\ $ are given in the following table:
$$
\begin{array}{c|c|c|c|c|c|c|c}
n&1,2&3,4&5,6&7&8&9&10\\
\hline
\text{exact}&0&\frac{1}{36}&\frac{55}{1\,296}&\frac{2\,401}{46\,656}&\frac{7\,217}{139\,968}&\frac{97\,147}{1\,679\,616}&\frac{16\,235}{279\,936}\\
\hline
\text{approx.}&0&0.028&0.042&0.05146&0.05156&0.0578&0.05800\\
\hline
\end{array}
$$

  • Thank you @lonza leggiera. I will read your answer thoroughly and respond to it. – martian03 Oct 19 '20 at 11:19
  • g(5)=0.034722, g(7)=0.037508573388203, g(9)=0.03898807822740435. This is different from yours. – martian03 Oct 20 '20 at 07:00
  • Furthermore, we need to "prove" it for all $n$. – martian03 Oct 20 '20 at 07:02
  • I obtained the numbers in my table by using the formulae given in my answer. As a check, I wrote a script to calculate the dealer's and players' winnings for all $\ 6^n\ $ possible outcomes, for $\ n=5,7\ $ and $9$, and count the number of times the dealer's winnings exceeded those of every player. I got $330$ for $\ n=5\ $, $14,406\ $ for $\ n=7\ $ and $\ 582,882\ $ for $\ n=9\ $. When divided by $\ 6^n\ $ for the appropriate value of $\ n\ $, these give the same numbers I obtained in my table. – lonza leggiera Oct 20 '20 at 10:33
  • As a second check, in the script I used to calculate the number of times the dealer's winnings exceeded those of the players, I added code to calculate the means and variances of the dealer's and players' winnings, and obtained the correct results of $0,0,0,0$ for the means, and $\ n,\frac{11n}{3}, \frac{11n}{3}, \frac{11n}{3}\ $ for the variances. It therefore seems likely to me that my scripts correctly implement my understanding of how the game works, and that the formulae I give in my answer are also correct for that understanding. – lonza leggiera Oct 20 '20 at 11:26
  • Of course it's nevertheless possible that I've misunderstood your description, although it does seem pretty clear to me. – lonza leggiera Oct 20 '20 at 11:28
  • I've written a script which also agrees with your figures @lonzaleggiera. Can you show that the power series is increasing, i.e. confirm that the probability increases? – owen88 Oct 20 '20 at 18:02
  • I don't know whether that's doable or not. I had a brief look at the expression for $\ \frac{8(g(2r+1)-g(2r-1))}{3}\ $. Proving it to be positive looked sufficiently challenging that I put off trying seriously to do so until I have more time to devote to it. – lonza leggiera Oct 20 '20 at 19:18
  • Apologies to @lonzaleggiera. My calculation was wrong. You are right. Now we only need to prove the increasing trend. – martian03 Oct 22 '20 at 02:03

Update: Even in the case where $n$ is even, the association argument doesn't quite work, because I got an inequality flipped and forgot about some signs: the amount by which the negative terms increase may be more or less than the amount by which the associated positive terms decrease. However, it's easy to write down an expression for the difference, which I expect to be small, and perhaps it will be possible to bound it by the decrease of the other terms. I'll get back to this when I have more time.


I can prove this in the case where $n$ is even, and I provide a modification that should prove the case where $n$ is odd, though I feel a bit uneasy about it (I don't see a mistake, but I haven't checked explicit examples to verify it).

This can be viewed as a random walk on a 3D lattice by taking the state to be the triple $(x_1,x_2,x_3)$ (i.e. the coordinates record how many dollars each of the three non-dealer players has), where the legal steps are $\{(3,-1,-1), (-3,1,1), (-1,3,-1), (1,-3,1), (-1,-1,3), (1,1,-3)\}$.

If, after this walk, you are at $(x,y,z)$, then the dealer has $-x-y-z$ dollars, and the condition that the dealer has the most dollars is equivalent to the equations $$\begin{gather} -x-y-z>x\\ -x-y-z>y\\ -x-y-z>z. \end{gather}$$ This is an open region bounded by 3 hyperplanes that each contain the origin.

Applying a linear transformation, we can map the legal steps to the six signed unit vectors; this sends the 3 hyperplanes to 3 other hyperplanes. To be specific, I'll pick the linear transformation so that $(3,-1,-1)\mapsto(1,0,0)$, $(-1,3,-1)\mapsto(0,1,0)$, etc.

We can also specify what the hyperplanes are: if either $x_1$ or $x_2$ loses 3 dollars, the dealer is tied with $x_3$ for the most dollars, so one of the hyperplanes is the $z=0$ hyperplane. Similarly, the other two are the $x=0$ and $y=0$ hyperplanes, so the region in which the dealer has the most dollars is one of the 8 regions bounded by these 3 hyperplanes.

By symmetry, the probability of being in this region is $\frac{1}{8}$ of the probability of not being on any of the 3 hyperplanes.

The problem is now equivalent to the following: consider the usual random walk on the 3D cubic lattice, starting at the origin. Show that the probability that at least one of the coordinates is 0 after $n+2$ steps is strictly less than after $n$ steps. (So far, this is basically equivalent to what lonza did, except without the calculations.)
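This reformulation is easy to test numerically. A dynamic-programming sketch for the simple random walk on $\mathbf{Z}^3$ (the array bounds and the inclusion-exclusion at the end are my own choices); by the reduction above, $\frac{1}{8}(1-P_n)$ should reproduce $g(n)$:

```python
import numpy as np

def prob_some_coord_zero(n):
    """P(at least one coordinate is 0) after n steps of the simple random
    walk on Z^3 started at the origin, via dynamic programming."""
    s = n + 1                       # the walk stays inside [-n, n]^3
    dist = np.zeros((2 * s + 1,) * 3)
    dist[s, s, s] = 1.0
    for _ in range(n):              # one step: average over the 6 neighbours
        dist = sum(np.roll(dist, sh, axis=ax)
                   for ax in range(3) for sh in (1, -1)) / 6.0
    # inclusion-exclusion over the planes x=0, y=0, z=0 (index s)
    return (dist[s].sum() + dist[:, s].sum() + dist[:, :, s].sum()
            - dist[s, s].sum() - dist[s, :, s].sum() - dist[:, s, s].sum()
            + dist[s, s, s])

for n in (1, 3, 5, 7):
    print(n, (1 - prob_some_coord_zero(n)) / 8)   # 0, ~0.02778, ~0.04244, ~0.05146
```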

First I'll show that the probability that at least one coordinate is 0 decreases when going from $n$ steps to $n+2$ steps when $n$ is even, and then I'll modify the proof for odd $n$. I don't see anything wrong with the proof, but I haven't verified it against a concrete example.

It suffices to show this claim on all 3D torus grid graphs (i.e. the cartesian product of 3 cycle graphs), because the first $n$ steps of a random walk on the cubical 3D lattice update the probability distribution in the same way on a sufficiently large torus grid graph.

Consider a $p,q,r$ torus grid graph (i.e. the Cartesian product of $C_p,C_q,C_r$), and label the vertices with elements of $\mathbf{Z}/p\mathbf{Z}\times\mathbf{Z}/q\mathbf{Z}\times\mathbf{Z}/r\mathbf{Z}$. Consider the $pqr$-dimensional $\mathbf{C}$-vector space of formal linear combinations of the vertices. To write an element of this vector space as a column vector, order the vertices as follows: The first vertex is $(0,0,0)$, followed by all vertices labelled $(x,0,0)$ for some nonzero $x$ (in any order), followed by the vertices labelled $(0,y,0)$ for some nonzero $y$, then $(0,0,z)$ for some nonzero $z$, then $(x,y,0)$ for nonzero $x,y$, then $(x,0,z)$ for nonzero $x,z$, then $(0,y,z)$ for nonzero $y,z$, and finally all remaining points in any order.

Let $A$ be the update matrix for the Markov Chain defined by taking one step in a random walk on this graph. Let $v$ denote the vector with 1s at all entries that correspond to vertices with at least one entry in the label that is 0. Let $w$ denote the vector with 1 in the first entry and 0 elsewhere (which corresponds to the point with label $(0,0,0)$). Then the probability that, after $n$ steps of the random walk on this torus grid graph, we end up at a vertex with a 0 somewhere in its label is $v^TA^nw$.
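For small tori the quantity $v^TA^nw$ can be computed directly. A sketch (the helper name `torus_check` is mine; $A$ is built column-stochastically, which coincides with the usual convention here since $A$ is symmetric):

```python
import numpy as np
from itertools import product

def torus_check(p, q, r, n, start=(0, 0, 0)):
    """v^T A^n w: the probability that, after n steps of the random walk
    on C_p x C_q x C_r started at `start`, some label coordinate is 0."""
    verts = list(product(range(p), range(q), range(r)))
    idx = {u: i for i, u in enumerate(verts)}
    A = np.zeros((len(verts), len(verts)))
    for (x, y, z) in verts:
        for dx, dy, dz in [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
                           (0, -1, 0), (0, 0, 1), (0, 0, -1)]:
            A[idx[((x + dx) % p, (y + dy) % q, (z + dz) % r)],
              idx[(x, y, z)]] += 1 / 6
    v = np.array([1.0 if 0 in u else 0.0 for u in verts])
    w = np.zeros(len(verts))
    w[idx[start]] = 1.0            # point mass at the start vertex
    return v @ np.linalg.matrix_power(A, n) @ w

# For p, q, r > n the walk cannot wrap around, so this matches Z^3:
print(torus_check(9, 9, 9, 3))   # 7/9 ~ 0.7778, i.e. 1 - 8*g(3)
```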

We will now express $A=UDU^*$ where $D$ is diagonal, $U$ is unitary, and $U^*$ denotes conjugate transpose (we know by the spectral theorem and the fact that $A$ is symmetric that we can pick $U$ to be real and orthogonal, but we will use a different matrix. However, this gives us that the diagonal entries in $D$ are all real, which we will use later). This is equivalent to picking matrices $U$ and $D$ such that each column of $U$ is an eigenvector of $A$ of unit norm, the columns of $U$ are orthogonal (i.e. have inner product 0, where the inner product of vectors $v_1,v_2$ is $v^*_1v_2$), and $D$ is a diagonal matrix where the $k$th diagonal entry is the eigenvalue of the $k$th column of $U$.

To do this, associate each column of $U$ to a vertex of the graph using the same ordering as before. Now it suffices to associate to each vertex of the graph an eigenvector of unit norm. We will first associate each vertex to an eigenvector, and then we will specify the normalization.

Pick primitive roots of unity $\zeta_p,\zeta_q,\zeta_r$ (where $\zeta_p$ is a primitive $p$th root of unity, and similarly for $\zeta_q,\zeta_r$). We define the association of a vertex to an eigenvector as follows: consider an arbitrary vertex with label $(x_0,y_0,z_0)$, and associate to it the eigenvector in which the coefficient of the vertex $(x,y,z)$ is $\zeta_p^{x_0x}\zeta_q^{y_0y}\zeta_r^{z_0z}$. This is well-defined because $\zeta_p^t$ depends only on $t\pmod{p}$, and similarly for $q,r$. To see that this is an eigenvector, note that one step of the walk multiplies every coefficient by the same constant, namely $\frac{1}{6}\big(\zeta_p^{x_0}+\zeta_p^{-x_0}+\zeta_q^{y_0}+\zeta_q^{-y_0}+\zeta_r^{z_0}+\zeta_r^{-z_0}\big)$, which is the corresponding eigenvalue.

As for the normalization: every entry of this vector is a root of unity and therefore has unit norm, so the norm of the eigenvector is $\sqrt{pqr}$. So divide the vector by $\sqrt{pqr}$.

This defines our matrix $U$, and $D$ comes from taking the corresponding eigenvalues.
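The decomposition can be verified numerically on a small torus (a sketch under the same conventions; the torus sizes $3,4,5$ are arbitrary small values of mine):

```python
import numpy as np
from itertools import product

p, q, r = 3, 4, 5
verts = list(product(range(p), range(q), range(r)))
idx = {u: i for i, u in enumerate(verts)}
N = len(verts)

A = np.zeros((N, N))    # the same 6-regular walk as before
for (x, y, z) in verts:
    for dx, dy, dz in [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
                       (0, -1, 0), (0, 0, 1), (0, 0, -1)]:
        A[idx[((x + dx) % p, (y + dy) % q, (z + dz) % r)], idx[(x, y, z)]] += 1 / 6

zp, zq, zr = (np.exp(2j * np.pi / m) for m in (p, q, r))
# column for label (x0,y0,z0), row for vertex (x,y,z)
U = np.array([[zp ** (x0 * x) * zq ** (y0 * y) * zr ** (z0 * z)
               for (x0, y0, z0) in verts]
              for (x, y, z) in verts]) / np.sqrt(p * q * r)
D = np.diag([(zp ** x0 + zp ** -x0 + zq ** y0 + zq ** -y0
              + zr ** z0 + zr ** -z0) / 6
             for (x0, y0, z0) in verts])

print(np.allclose(U @ U.conj().T, np.eye(N)))   # U is unitary
print(np.allclose(A, U @ D @ U.conj().T))       # A = U D U*
```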

Now $v^TA^nw=v^TUD^nU^*w$, and we want to show that this decreases when we increase $n$ to $n+2$. To avoid dealing with the normalization constant, we will actually show that $pqr\,v^TUD^nU^*w=\big(\sqrt{pqr}\,v^TU\big)D^n\big(\sqrt{pqr}\,U^*w\big)$ decreases.

So far we haven't assumed $n$ is even, but if it is, we can write $n=2m$ for some $m$. Then $v^TA^nw=v^TU(D^2)^mU^*w$, and the diagonal entries of $D^2$ are all nonnegative reals, which will be useful later. Moreover, all diagonal entries are at most 1, because the eigenvalues of any stochastic matrix have absolute value at most 1. Also, by the Perron–Frobenius theorem, if at least one of $p,q,r$ is odd then there is only one eigenvalue of absolute value 1, and otherwise there are only 2.

We now compute $\sqrt{pqr}\,v^TU$ and $\sqrt{pqr}\,U^*w$. Computing $\sqrt{pqr}\,U^*w$ is easy: it is the all-1s vector, because each entry corresponds to the coefficient of $(0,0,0)$ in one of the eigenvectors we defined, and these are all 1 (the $\sqrt{pqr}$ cancels the normalization factor from earlier).

To compute $\sqrt{pqr}\,v^TU$: it is the vector in which the coefficient at a vertex $(x_0,y_0,z_0)$ is obtained by taking the eigenvector associated to that vertex and summing its coefficients over all vertices whose label contains a 0. We can compute this by inclusion–exclusion: letting $l_{x_0,y_0,z_0}(x,y,z)$ denote $\sqrt{pqr}$ times the coefficient of $(x,y,z)$ in the eigenvector associated to $(x_0,y_0,z_0)$, the sum equals $$\sum_{y,z}l_{x_0,y_0,z_0}(0,y,z)+\sum_{x,z}l_{x_0,y_0,z_0}(x,0,z)+\sum_{x,y}l_{x_0,y_0,z_0}(x,y,0)-\sum_{z}l_{x_0,y_0,z_0}(0,0,z)-\sum_{y}l_{x_0,y_0,z_0}(0,y,0)-\sum_{x}l_{x_0,y_0,z_0}(x,0,0)+l_{x_0,y_0,z_0}(0,0,0).$$

If none of $x_0,y_0,z_0$ is 0, then this sum is 1. Indeed, the sum of $l_{x_0,y_0,z_0}(x,y,z)$ along any line parallel to an axis is 0, because the powers of any root of unity other than 1 sum to 0, so the only nonzero term among the 7 above is $l_{x_0,y_0,z_0}(0,0,0)=1$.

If $x_0=0$ but $y_0,z_0\ne0$, then the sum is $-(p-1)$: the sum over a line parallel to the $y$- or $z$-axis is still 0, so the first 5 terms vanish, but the sum over a line parallel to the $x$-axis through $(0,y,z)$ is $p\,\zeta_q^{y_0y}\zeta_r^{z_0z}$; in particular the sixth term is $-p$, and the seventh term is $l_{x_0,y_0,z_0}(0,0,0)=1$. Similarly, if $y_0=0$ but $x_0,z_0\ne0$, the sum is $-(q-1)$, and if $z_0=0$ but $x_0,y_0\ne0$, the sum is $-(r-1)$.

If $x_0=y_0=0$ but $z_0\ne0$, the sum is $(p-1)(q-1)$ by a similar analysis, and symmetric statements hold when another pair of coordinates is 0.

If $x_0=y_0=z_0=0$ then the sum is just the number of points whose label contains a 0, since $l_{0,0,0}(x,y,z)=1$ for all $(x,y,z)$; that is, the sum is $pqr-(p-1)(q-1)(r-1)$.
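These four coefficient values are easy to confirm numerically (a sketch; the helper name `coeff` is mine, and $p,q,r=3,4,5$ are arbitrary):

```python
import numpy as np
from itertools import product

p, q, r = 3, 4, 5
zp, zq, zr = (np.exp(2j * np.pi / m) for m in (p, q, r))

def coeff(x0, y0, z0):
    """sqrt(pqr) * (v^T U) entry for label (x0,y0,z0): the sum of the
    unnormalised eigenvector over all vertices with a 0 in the label."""
    return sum(zp ** (x0 * x) * zq ** (y0 * y) * zr ** (z0 * z)
               for x, y, z in product(range(p), range(q), range(r))
               if 0 in (x, y, z)).real

print(coeff(1, 1, 1))   # ~1
print(coeff(0, 1, 1))   # ~-(p-1) = -2
print(coeff(1, 0, 0))   # ~(q-1)(r-1) = 12
print(coeff(0, 0, 0))   # ~pqr - (p-1)(q-1)(r-1) = 36
```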

To show that $v^TU(D^2)^mU^*w$ decreases as $m$ increases, we look at the sum componentwise. Each vertex of the graph corresponds to one addend (note that $D^2$ is diagonal, and the fact that $U^*w$ has all equal entries means the sum is just the average of the entries of $v^TU(D^2)^m$), and the negative addends are the ones corresponding to vertices where the sum we just computed is negative, i.e. the vertices where exactly one coordinate of the label is 0.

Because each entry of $D^2$ is between 0 and 1 (inclusive), all positive addends decrease in magnitude, decreasing the total sum. The negative addends also decrease in magnitude, but we can account for this as follows. Associate to each vertex with label $(x,y,0)$, $x,y\ne0$, the vertex $(x,0,0)$. The amount by which multiplying the coefficient of $(x,y,0)$ by its eigenvalue increases the sum is less than the amount by which multiplying the coefficient of $(x,0,0)$ by its eigenvalue decreases the sum: the eigenvalue corresponding to $(x,y,0)$ is less than the eigenvalue corresponding to $(x,0,0)$ (up to the factor $\frac{1}{6}$, the former is $\zeta_p^x+\zeta_p^{-x}+\zeta_q^y+\zeta_q^{-y}+2$, and the latter replaces $\zeta_q^y+\zeta_q^{-y}$ with 2, which is greater), and each vertex of the form $(x,0,0)$ is mapped to by $q-1$ vertices, so the sum of the magnitudes of the original coefficients is the same on both sides. Similarly, associate to each vertex with label $(x,0,z)$ the vertex $(0,0,z)$, and to each vertex with label $(0,y,z)$ the vertex $(0,y,0)$. This completes the proof in the case that $n$ is even.
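As an empirical check of the even case, $v^TA^nw$ is indeed decreasing along even $n$ (reusing the hypothetical `torus_check` from the sketch above, with the torus large enough that the walk cannot wrap):

```python
for n in (2, 4, 6, 8):
    print(n, torus_check(11, 11, 11, n))
# 2 1.0000...
# 4 0.7777...
# 6 0.6604...
# 8 0.5875...   (each value equals 1 - 8*g(n) from the table above)
```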

In the case that $n$ is odd, it suffices to show that, if we start the random walk at a point one step away from the origin instead of at the origin, then the probability that at least one coordinate is 0 after $n$ steps is strictly smaller than after $n-2$ steps, where we may now assume $n$ is even. The reason this suffices is that, by symmetry, the probability that at least one coordinate is 0 after $n$ steps starting from the origin is the same as the probability that at least one coordinate is 0 after $n-1$ steps starting from one step away from the origin.
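This reduction can also be checked numerically: starting one step away from the origin, the probability after an even number $m$ of steps matches $1-8g(m+1)$ and decreases in $m$ (again reusing the hypothetical `torus_check`; we need $p,q,r>m+1$ so the torus cannot wrap):

```python
for m in (2, 4, 6):
    print(m, torus_check(9, 9, 9, m, start=(0, 0, 1)))
# 2 0.7777...   (= 1 - 8*g(3))
# 4 0.6604...   (= 1 - 8*g(5))
# 6 0.5882...   (= 1 - 8*g(7))
```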

The probability that, after $n$ steps starting one step away from the origin, at least one of the coordinates is 0 is the same as the probability that, after $n$ steps starting at the origin, either the $x$- or $y$-coordinate is 0 or the $z$-coordinate is 1. This means we can adjust the computation above by changing $v$ to have 1s in the positions corresponding to vertices in this new set. The computation stays the same, except that now the coefficient of $(x,y,z)$ is multiplied by $\zeta_r^z$.

Consider the sum, over the line parallel to the $z$-axis passing through $(x,y,0)$, of the new coefficient times the corresponding eigenvalue. It is $$\sum_{i=0}^{r-1}\zeta_r^i(\zeta_r^i+\zeta_r^{-i})^{2m},$$ which is 0 because, expanding, each term is $\binom{2m}{k}\zeta_r^{i(2m-2k+1)}$ with odd exponent $2m-2k+1$, and summing over $i$ kills every such term provided $2m-2k+1\not\equiv0\pmod r$ (here we assume $r>2m+1$, which is fine: if we want to prove the claim up to $2m+1$ steps, we just pick $r>2m+1$).
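The vanishing of this character sum is easy to confirm, and the check also shows why $r$ must exceed $2m+1$ (a sketch; the function name is mine):

```python
import numpy as np

def char_sum(r, m):
    """sum_{i=0}^{r-1} z^i (z^i + z^{-i})^(2m), z a primitive r-th root of unity."""
    z = np.exp(2j * np.pi / r)
    return sum(z ** i * (z ** i + z ** -i) ** (2 * m) for i in range(r))

print(abs(char_sum(11, 3)))   # ~0, since 11 > 2*3 + 1
print(abs(char_sum(7, 3)))    # 7: with r = 2m + 1 the k = 0 term survives
```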

We can then break up the sum as follows: back in the origin case (where we start at the origin instead of one step away), start with the vector with 1 at every vertex. Add the vector with $-p$ at each vertex with $x$-coordinate 0, and similar vectors with $-q$ for $y$-coordinate 0 and $-r$ for $z$-coordinate 0 (so the origin now has coefficient $1-p-q-r$ at this point). Then add the vector with $pq$ at each vertex with $x$- and $y$-coordinate 0, and similarly $pr$ and $qr$ for the other pairs.
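This decomposition can be checked against the coefficients computed earlier (a sketch reusing the hypothetical `coeff` helper and the values of $p,q,r$ from the block above):

```python
from itertools import product

def decomposition(x0, y0, z0):
    """1 - p[x0=0] - q[y0=0] - r[z0=0] + pq[x0=y0=0] + pr[x0=z0=0] + qr[y0=z0=0]."""
    return (1 - p * (x0 == 0) - q * (y0 == 0) - r * (z0 == 0)
            + p * q * (x0 == 0 and y0 == 0)
            + p * r * (x0 == 0 and z0 == 0)
            + q * r * (y0 == 0 and z0 == 0))

print(all(abs(coeff(*u) - decomposition(*u)) < 1e-9
          for u in product(range(p), range(q), range(r))))   # True
```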

If we scale the coefficient of $(x,y,z)$ by $\zeta_r^z$, multiply the coefficient of each vertex by its corresponding eigenvalue raised to the power $2m$, and take the sum, this is the quantity we want to show decreases as we increase $m$.

Starting with the all-1s vector, we can break this up into sums over lines parallel to the $z$-axis, so that component contributes 0 regardless of $m$ (as long as $r$ is large enough).

A similar argument shows that the vector with $-p$ where $x=0$ contributes 0, and similarly for the vector with $-q$ where $y=0$ and the vector with $pq$ where $x=y=0$.

We are left with the vector with $-r$ where $z=0$, the vector with $pr$ where $x=z=0$, and the vector with $qr$ where $y=z=0$. But we can use the argument from before, where we associate to each vertex $(x,y,0)$ (now allowing $y=0$) the vertex $(x,0,0)$; the association works out the same way as before because we are only dealing with vertices where $z=0$.

alphacapture
  • Thank you @alphacapture. I will read your solution. Even though the bounty will expire in 30 minutes, I will post a new bounty tomorrow. – martian03 Oct 22 '20 at 10:26
  • I deeply apologize for my late response. I want to confirm your answer today and give you the bounty. However, once the bounty period expires, the Stack Exchange system says that I cannot start a second bounty. Do you know how I can give the bounty to you? If there is no way, I will just accept your answer as the correct answer to this question. – martian03 Feb 02 '21 at 04:21
  • @martian03 I don't think you can move the bounty, but you should be able to award a second bounty if you wish; see https://meta.stackexchange.com/questions/16065/how-does-the-bounty-system-work (in particular, the questions "After awarding the bounty, can I remove it or move it to another answer at a later time?" and "Can I offer a second bounty after the first one has expired? / Can I raise my bounty?"). – alphacapture Feb 03 '21 at 05:06
  • I found the reason. I need more reputation points, because a second bounty must be at least double the first one. I need 12 more points; I will get them. – martian03 Feb 04 '21 at 02:24
  • Hi @alphacapture, are you interested in publishing this result as a paper? Or does this idea already appear in a reference paper? – martian03 Feb 26 '21 at 23:17
  • @martian03 I have no experience publishing papers. Do you? – alphacapture Mar 01 '21 at 00:24
  • Thank you for the bounty! – alphacapture Mar 01 '21 at 00:26
  • @alphacapture No, I haven't. I am not in mathematical academia. I just hoped that, if you published this as a paper, it would be easier for us to understand your idea. – martian03 Mar 02 '21 at 22:33