
I am given a directed acyclic graph (DAG) with a unique source and sink. Is there an efficient way to test whether the partial order represented by this graph is a lattice?

In other words, I need to test whether any two vertices have a unique least upper bound and greatest lower bound.

From brief browsing, I found an $O(n^3)$ algorithm that explicitly computes the least upper bound of every pair of elements. Is there a better test?

Mangara

2 Answers


The DAG represents the covering relation $\lessdot$ of a partial order $<$, also known as its Hasse diagram. An element $y$ covers an element $x$, in symbols $x \lessdot y$, if $x < y$ and no $z$ satisfies $x < z < y$.

Suppose that $Y$ is a non-empty set of elements in the partial order which is upwards closed (if $z \in Y$ and $u > z$ then $u \in Y$), and in which any two elements have a join. Let $y \notin Y$ be such that $Y \cup \{y\}$ is also upwards closed. Then for all $z \in Y$, $$ z \lor y = \min_{y \lessdot u} z \lor u, $$ if the minimum exists. (This is Lemma 1 in Fast recognition of rings and lattices by Goralcik, Goralcikova, Koubek, and Rodl.)
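To make the lemma concrete, here is a brute-force check on the diamond lattice $0 < 1, 2 < 3$ (the encoding is mine, not from the paper): take $Y = \{1, 2, 3\}$, which is upward closed and in which any two elements have a join, and $y = 0$, whose covers are $1$ and $2$.

```python
# Diamond lattice 0 < 1,2 < 3, encoded as an explicit order relation.
leq = {(a, b) for a in range(4) for b in range(4)
       if a == b or a == 0 or b == 3}

def join(a, b):
    """Least upper bound by brute force, or None if it does not exist."""
    ub = [u for u in range(4) if (a, u) in leq and (b, u) in leq]
    least = [u for u in ub if all((u, v) in leq for v in ub)]
    return least[0] if least else None

# Y = {1, 2, 3} is upward closed, and adding y = 0 keeps it upward
# closed; the covers of 0 are 1 and 2.  The lemma predicts
# z ∨ 0 = min(z ∨ 1, z ∨ 2), with the minimum taken in the order.
for z in (1, 2, 3):
    cands = [join(z, u) for u in (1, 2)]
    m = [c for c in cands if all((c, d) in leq for d in cands)][0]
    assert m == join(z, 0)
```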

This suggests the following algorithm for finding whether any two elements have a join (we can similarly determine whether any two elements have a meet):

  • Arrange the elements in decreasing topological order $x_1,\ldots,x_n$.
  • For $i=1,\ldots,n$, attempt to compute $x_j \lor x_i$ for all $j < i$.
  • Return TRUE if all attempts were successful.

In order to compute $x_j \lor x_i$, we use the following algorithm:

  • Let $u_1,\ldots,u_m$ be the elements covering $x_i$.
  • Set $a \gets x_j \lor u_1$.
  • For $k=2,\ldots,m$, check whether $x_j \lor u_k \leq a$ (i.e., whether $(x_j \lor u_k) \lor a = a$), and if so, set $a \gets x_j \lor u_k$.
  • Verify that $a \leq x_j \lor u_k$ (i.e., that $a \lor (x_j \lor u_k) = x_j \lor u_k$) for all $k=1,\ldots,m$.
  • If verification was successful, return $a$. (Each $u_k$ is above $x_i$ and so precedes it in the ordering, hence the joins $x_j \lor u_k$ have already been computed.)
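A sketch of the whole join test in Python (the function names and the cover-list input format are mine, not from the paper; elements are numbered $0,\ldots,n-1$ and `covers[x]` lists the elements covering $x$):

```python
from collections import deque

def has_all_joins(n, covers):
    """covers[x] lists the elements covering x in the Hasse diagram.
    Returns True iff every pair of elements has a least upper bound."""
    under = [[] for _ in range(n)]        # under[u]: elements that u covers
    for x in range(n):
        for u in covers[x]:
            under[u].append(x)
    # Decreasing topological order: every element is preceded by
    # everything above it (Kahn's algorithm, starting from the maxima).
    pending = [len(covers[x]) for x in range(n)]
    queue = deque(v for v in range(n) if pending[v] == 0)
    order = []
    while queue:
        u = queue.popleft()
        order.append(u)
        for x in under[u]:
            pending[x] -= 1
            if pending[x] == 0:
                queue.append(x)
    if len(order) != n:
        raise ValueError("input graph is not acyclic")
    key = lambda a, b: (a, b) if a < b else (b, a)
    join = {key(v, v): v for v in range(n)}   # x ∨ x = x
    for i, y in enumerate(order):
        us = covers[y]
        if not us:
            if i > 0:                     # a second maximal element: no top
                return False
            continue
        for j in range(i):
            z = order[j]
            # The joins z ∨ u for covers u of y are already in the table.
            cands = [join[key(z, u)] for u in us]
            a = cands[0]
            for c in cands[1:]:
                if join[key(c, a)] == a:  # c <= a
                    a = c
            if any(join[key(a, c)] != c for c in cands):  # a <= all cands?
                return False              # the minimum does not exist
            join[key(z, y)] = a
    return True

def is_lattice(n, covers):
    """Meets are joins in the reversed Hasse diagram."""
    covered = [[] for _ in range(n)]
    for x in range(n):
        for u in covers[x]:
            covered[u].append(x)
    return has_all_joins(n, covers) and has_all_joins(n, covered)
```

For instance, the diamond ($0 < 1, 2 < 3$) and the pentagon $N_5$ pass, while the hexagon with two incomparable "middle" levels fails, since its two bottom atoms have two minimal upper bounds.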

Finding a topological ordering takes time $O(n+|E|)$. If we denote by $C(x)$ the number of elements covering $x$, then the rest of the algorithm runs in time proportional to $\sum_{i=1}^n iC(x_i) \leq n|E|$, since $\sum_i C(x_i) = |E|$. In total, we get a running time of $O(n|E|)$.

It is known that the covering relation of a semilattice contains $O(n^{3/2})$ edges (see Statement 3 in the paper mentioned above). Hence we can abort the algorithm if $|E|$ is larger, and otherwise we can assume that $|E| = O(n^{3/2})$, and so the algorithm above runs in time $O(n^{5/2})$. This is Theorem 2 in the paper mentioned above.

Concluding, we can determine whether a DAG is the covering relation of a lattice in time $O(n^{5/2})$. A recent survey by Freese, "Algorithms for finite, finitely presented and free lattices", mentions obtaining faster algorithms as an open problem (Problem 1 in the survey).

Yuval Filmus

Although my experience is with DAGs that have a unique source/sink in only one direction (called the 'top'), perhaps my experience is still relevant.

For a DAG with a single source, the main concern is ensuring that every pair of vertices has a unique GLB. Our production NLP analysis tools must quickly compute the bounded-complete partial order (BCPO) lattice from the initial, human-authored DAG, which typically does not meet that condition. Fortunately, such a transform is guaranteed to exist. The canonical method for computing it is given in the following well-known paper:

Efficient Implementation of Lattice Operations

I use the method described there to turn the type-inheritance DAG into a lattice, so that its nodes (each a so-called 'type', including the added GLBs) can be used as node values in a further, disjoint set of DAGs called feature structures, which represent the actual linguistic entities subject to graph-unification parsing. Without the step of ensuring that every pairing of types has a unique result (or 'bottom', meaning the types are incompatible), type unification, and thus the follow-on graph unification, becomes non-deterministic.

Perhaps the algorithm described in the paper will give you an insight into how to detect the condition more efficiently?