
Maybe my question is wrong or unclear, so I would be grateful for any corrections.

I am learning about the Gibbs–Boltzmann distribution, but it seems strange to me and is really hard to understand!

Generally, in the theory of exponential random graphs, we can define the distribution over graphs as

$$P(G)=\frac{1}{Z} \exp \{-H(G)\},$$ where the normalizing constant is $Z= \sum_{G} \exp \{-H(G) \}$ and $H(G)$ is the graph Hamiltonian.

What does it mean to define the distribution over a certain vertex? How can I define a distribution over vertices, $P(x_i)$? For example, would we replace the graph Hamiltonian $H(G)$ by $H(x)$? But what would this mean? The same question applies to the normalizing constant $Z$.

Thanks in advance for correcting me if there is any misunderstanding!

Noah16
  • What do you mean by "the distribution over vertices" that you want to define? In a random graph, we are picking a graph at random, so $P(G)$ has a meaning. I'm not sure what it means to pick a vertex at random. – Misha Lavrov Oct 17 '19 at 18:00

1 Answer


The Gibbs measure $P$ in the paper is not a distribution over a graph, but rather a distribution over the set $\mathcal{G}$ of graphs. So it simply means that each graph $G\in\mathcal{G}$ is chosen with probability

$$P(G) := \frac{e^{-H(G)}}{Z},$$

where $Z$ is simply a constant so that $P$ is a probability law, i.e., $\sum_{G\in\mathcal{G}}P(G) = 1$ holds.
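
To see the normalization concretely, here is a minimal brute-force sketch (an illustration added for clarity, not taken from the paper): a hypothetical family of three graphs, identified only by labels, with assumed Hamiltonian values.

```python
from math import exp

# Assumed energies H(G) for three hypothetical graphs (labels only, for illustration).
H = {"G1": 0.5, "G2": 1.2, "G3": 2.0}

Z = sum(exp(-h) for h in H.values())          # normalizing constant
P = {g: exp(-h) / Z for g, h in H.items()}    # Gibbs measure on the family

assert abs(sum(P.values()) - 1.0) < 1e-12     # P is indeed a probability law
```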

Up to this point, there is no relation between different elements of $\mathcal{G}$. To make sense of talking about vertices, we fix an ambient graph $(V,E)$ and let $\mathcal{G}$ be the family of its subgraphs. Then, for example,

$$ \mathbb{P}(\text{vertex $v$ is present}) = \mathbb{P}(v \in G) = \sum_{\substack{G \in \mathcal{G} \\ v \in G}} P(G) $$

is simply the sum of all $P(G)$'s over $G\in \mathcal{G}$ having the vertex $v$.
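
As a concrete (and purely illustrative) version of this marginal, the sketch below takes the ambient graph to be a triangle, lets $\mathcal{G}$ consist of its induced subgraphs (one per vertex subset), and assumes a Hamiltonian equal to $\theta$ times the number of edges of the subgraph; none of these choices come from the paper.

```python
from itertools import combinations
from math import exp

# Ambient graph (V, E): a triangle, chosen only for illustration.
V = [0, 1, 2]
E = [(0, 1), (0, 2), (1, 2)]
theta = 0.7  # assumed coupling constant

def hamiltonian(W):
    # H of the subgraph induced by vertex set W: theta * (number of edges inside W)
    return theta * sum(1 for (u, v) in E if u in W and v in W)

# The family G: one induced subgraph per vertex subset W of V.
family = [frozenset(c) for r in range(len(V) + 1) for c in combinations(V, r)]

Z = sum(exp(-hamiltonian(W)) for W in family)
P = {W: exp(-hamiltonian(W)) / Z for W in family}

# P(vertex v is present) = sum of P(G) over the subgraphs containing v.
v = 0
marginal = sum(p for W, p in P.items() if v in W)
print(f"P(vertex {v} is present) = {marginal:.4f}")
```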

Sangchul Lee
  • This makes sense in the abstract but I'm not sure it makes sense in practice. In the paper, $P$ is always a distribution over graphs with a fixed vertex set $v_1, \dots, v_n$, so $P(v_i \in G) = 1$ for all $i$. – Misha Lavrov Oct 17 '19 at 18:48
  • @MishaLavrov, I agree. And on top of that, many Gibbs measures are defined not on a graph itself but rather on the family of configurations of the graph (such as edge configurations, spin configurations, etc.), so that the graph itself is fixed throughout. It would make less sense to talk about probabilistic properties of the ambient graph itself. – Sangchul Lee Oct 17 '19 at 18:56
  • @SangchulLee thanks, but I still don't see how to formulate the distribution in this case. – Noah16 Oct 17 '19 at 19:06
  • I.e., can we say $P(v_i)= \sum_{\substack{G \in \mathcal{G} \\ v_i \in G}} \frac{e^{-H(G)}}{Z}$? And how can we interpret the energy function in this case? – Noah16 Oct 17 '19 at 19:20
  • @Noah16 The formulation will not change from the general case. The Hamiltonian in the paper is formulated using graph observables $x_1,\cdots,x_r:\mathcal{G}\to\mathbb{R}$, i.e., $$H(G)=\sum_{i}\theta_{i}x_{i}(G).$$ So all you need is to encode your 'vertex-dependence' into graph observables. – Sangchul Lee Oct 17 '19 at 19:21
  • @Noah16, Perhaps I am not fully following your question. What kind of Hamiltonian do you want to consider? I am not sure what $H(x)$ means in your question. Or it would be quite helpful if you could explain the model you want to consider. – Sangchul Lee Oct 17 '19 at 19:48
  • It doesn't mean anything specific; I am just wondering how to work with the energy function in the case of vertices. – Noah16 Oct 17 '19 at 19:51
  • @Noah16, You measure the energy of a system, not of a point. For instance, you may measure the energy of a system of $n$ moving particles interacting with each other and subject to some potential. Each such system can be represented by a configuration of $n$ pairs $(x_i,p_i)$ of position and momentum, and then $H$ measures the energy of the system resulting from both the interactions and the potential. Similarly, the Hamiltonian $H(G)$ measures the energy of each $G\in\mathcal{G}$, where $\mathcal{G}$ encodes the set of all possible configurations on, say, some fixed ambient graph. – Sangchul Lee Oct 17 '19 at 20:22
  • So, if $H(x)$ is a function in $x$ ranging over the vertex set of a fixed graph, then $H$ only measures the energy of a configuration representing only a single particle sitting at a vertex. And that sounds too restrictive to be an interesting model. – Sangchul Lee Oct 17 '19 at 20:22
  • A more interesting example arises by allowing multiple particles: Let $G_0=(V,E)$ be a fixed graph, and for each subset $W\subseteq V$, define $H(W)=\beta|W|$ for $\beta > 0$. In other words, $H$ is proportional to the number of particles on the graph. Then $$Z=\sum_{W\subseteq V}e^{-\beta|W|}=(1+e^{-\beta})^{|V|},\qquad P(W)=\frac{e^{-\beta|W|}}{(1+e^{-\beta})^{|V|}}=p^{|W|}(1-p)^{|V|-|W|},$$where $p=1/(e^{\beta}+1) \in (0, 1)$. Notice that the resulting $P$ is simply the Bernoulli measure, i.e., each vertex can host a particle with probability $p$ independent of all the others. – Sangchul Lee Oct 17 '19 at 20:32
  • Yes, I see now: the energy function in your definition depends only on the number of vertices in each subset. One last request: if I assume that each subset contains the same number of vertices and I would like to define the energy function depending, for example, on the distance between sets, would this be possible? – Noah16 Oct 18 '19 at 13:05
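
For completeness, here is a brute-force check of the Bernoulli example from the comments above (a sketch with arbitrary choices of $\beta$ and of the vertex set): with $H(W)=\beta|W|$ over subsets $W\subseteq V$, the partition function equals $(1+e^{-\beta})^{|V|}$ and each vertex is present with probability $p=1/(e^{\beta}+1)$.

```python
from itertools import combinations
from math import exp, isclose

V = list(range(4))  # arbitrary small vertex set
beta = 1.3          # arbitrary parameter beta > 0

subsets = [frozenset(c) for r in range(len(V) + 1) for c in combinations(V, r)]

# H(W) = beta * |W|: energy proportional to the number of particles.
Z = sum(exp(-beta * len(W)) for W in subsets)
assert isclose(Z, (1 + exp(-beta)) ** len(V))   # Z = (1 + e^{-beta})^{|V|}

P = {W: exp(-beta * len(W)) / Z for W in subsets}
p = 1 / (exp(beta) + 1)

# Each vertex hosts a particle with probability p.
for v in V:
    assert isclose(sum(prob for W, prob in P.items() if v in W), p)
```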