
Let $d \in \mathbb{N}$ and let $I$ be a set. Let $\omega : I^d \times I^d \to \mathbb{R}$ be a function, denoted by $(a_1,\dotsc,a_d,b_1,\dotsc,b_d) \mapsto a_1 \cdots a_d | b_1 \cdots b_d$, with the following properties:

  • It is antisymmetric in the first $d$ variables, e.g. $a_1 a_2 \cdots a_d | \cdots = - a_2 a_1 \cdots a_d | \cdots$.
  • It is also antisymmetric in the last $d$ variables.
  • For all elements $a_1,\dotsc,a_{d-1}$ and $b_0,\dotsc,b_d$ of $I$ we have $$\sum_{k=0}^{d} (-1)^k a_1 \cdots a_{d-1} b_k | b_0 \cdots \widehat{b_k} \cdots b_d=0.$$

One might call $\omega$ a Plücker function because these relations resemble the Plücker relations.
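For intuition, here is a standard example (not needed for the question): pick vectors $v_1,\dotsc,v_n \in \mathbb{R}^d$ indexed by $I=\{1,\dotsc,n\}$ and set $a_1 \cdots a_d | b_1 \cdots b_d := \det(v_{a_1},\dotsc,v_{a_d}) \cdot \det(v_{b_1},\dotsc,v_{b_d})$. Antisymmetry is clear, the third property is the classical Grassmann–Plücker relation among maximal minors, and the claim below is obvious for this example. A quick numerical sanity check in Mathematica (the names v and w are ad hoc, and such a test is of course no substitute for a formal proof):

d = 3;
v = RandomReal[{-1, 1}, {2 d, d}];  (* 2d random vectors in R^d, as rows *)
w[a_List, b_List] := Det[v[[a]]] Det[v[[b]]];
(* third property with (a1, a2) = (1, 2) and (b0, ..., b3) = (3, 4, 5, 6): *)
Sum[(-1)^k w[{1, 2, 3 + k}, Delete[{3, 4, 5, 6}, k + 1]], {k, 0, 3}]
(* output: 0. up to floating-point noise *)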

Claim. $a_1 \cdots a_d | b_1 \cdots b_d = b_1 \cdots b_d |a_1 \cdots a_d$ for all $(a,b) \in I^d \times I^d$.

For $d=1$ it is clear. Here is a proof for the case $d=2$: using the relation $ab|cd - ac|bd + ad|bc = 0$ four times, we get

$$\begin{aligned} ab|cd &= ac|bd - ad|bc = da|bc - ca|bd \\ &= db|ac - dc|ab - cb|ad + cd|ab \\ &= bc|ad - bd|ac + 2\,cd|ab = ba|cd + 2\,cd|ab, \end{aligned}$$

hence $2\,ab|cd = 2\,cd|ab$. $\square$

In the case $d=3$, a long calculation shows $abc|def=ade|bcf-adf|bce+aef|bcd$. This already moves $bc$ to the right-hand side, but I don't know how to do the same with $a$ without destroying what has been achieved.
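In the determinant example above, this $d=3$ identity can at least be checked numerically (reusing v and w from that snippet, with $a,\dotsc,f$ numbered $1,\dotsc,6$):

w[{1, 2, 3}, {4, 5, 6}] - (w[{1, 4, 5}, {2, 3, 6}] -
   w[{1, 4, 6}, {2, 3, 5}] + w[{1, 5, 6}, {2, 3, 4}])
(* output: 0. up to floating-point noise *)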

There is some background to the claim, coming from categorified Grassmannians, but I won't explain it here because I don't think it is necessary for understanding the question. It may be that the claim is false; in that case I am pretty sure (but have no proof) that a weaker version holds, which needs even more variables and relations. I will add this if someone asks.

Perhaps one can ask a computer algebra system to do the whole work? I have tried it with SAGE, but it didn't work out, because non-commutative quotient rings are only available in conjunction with a system of representatives.

2 Answers


The claim is true.

I find it convenient to state the problem in a "Boolean hypercube" form. The variables will be the $\binom{2d}{d}$ strings of length $2d$ over the alphabet $\{L,R\}$ with exactly $d$ $R$'s. For any string $x\in\{L,R\}^{2d}$ with exactly $d+1$ $R$'s, let $\partial(x)$ denote the sum of the variables labelled by the strings obtained by replacing one $R$ in $x$ by an $L$. For example, $\partial(LRLRRR)= LLLRRR+LRLLRR+LRLRLR+LRLRRL$.
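Here is a minimal Mathematica sketch of $\partial$, representing strings as lists of characters (the name boundary is ad hoc):

boundary[x_List] := Table[ReplacePart[x, i -> "L"], {i, Flatten[Position[x, "R"]]}];
StringJoin /@ boundary[Characters["LRLRRR"]]
(* {"LLLRRR", "LRLLRR", "LRLRLR", "LRLRRL"} *)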

Claim 2: $L^dR^d-(-1)^d R^dL^d$ is a linear combination of expressions of the form $\partial(x)$.

To relate this to your problem, fix $2d$ elements $e_1,\dots,e_{2d}\in I$. Consider a string $x\in\{L,R\}^{2d}$ with exactly $d$ $R$'s. We can label the $i$-th $L$ as $L_i$ and the $i$-th $R$ as $R_i$, so for example we annotate $LRLLRR$ as $L_1R_1L_2L_3R_2R_3$. For each $1\leq i\leq d$ let $l_i$ be the position of $L_i$, and let $r_i$ be the position of $R_i$. (So $l_i$ and $r_i$ are integers from $1$ to $2d$.) We map $x$ to $(-1)^{\sigma(x)}e_{l_1}\dots e_{l_d}|e_{r_1}\dots e_{r_d}$, where $(-1)^{\sigma(x)}$ is the sign of the permutation $\pi_x$ of $\{L_1,\dots,L_d,R_1,\dots,R_d\}$ sending the $i$-th letter of $L_1\dots L_dR_1\dots R_d$ to the $i$-th letter of the annotated version of $x$. (Equivalently, $\sigma(x)$ is the minimum number of transpositions needed to turn the annotated version of $x$ into $L_1\dots L_dR_1\dots R_d$; modulo $2$ this is just the number of pairs $i<j$ such that $x$ has an $R$ in position $i$ and an $L$ in position $j$.)

For example, $LRLLRR$ maps to $e_1e_3e_4|e_2e_5e_6$, while $LRLRLR$ maps to $-e_1e_3e_5|e_2e_4e_6$.
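In code, the sign can be read off from the inversion count described above (sigma is an ad-hoc name; this reproduces the two examples):

sigma[x_List] := Sum[Boole[x[[i]] == "R" && x[[j]] == "L"],
    {i, Length[x]}, {j, i + 1, Length[x]}];
{(-1)^sigma[Characters["LRLLRR"]], (-1)^sigma[Characters["LRLRLR"]]}
(* {1, -1} *)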

$\partial (L^{d-1}R^{d+1})$ gets mapped to $$ \sum_{k=0}^{d} (-1)^k e_{1} \cdots e_{d-1} e_{d+k} | e_{d} \cdots \widehat{e_{d+k}} \cdots e_{2d}. $$ The mapping for other $\partial(x)$ terms is similar but much more cumbersome to write down.
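For concreteness, here is a sketch that mechanizes the mapping, reusing boundary and sigma from the snippets above; f[ls, rs] is ad-hoc notation for $e_{l_1} \cdots e_{l_d} | e_{r_1} \cdots e_{r_d}$:

image[x_List] := (-1)^sigma[x] f[Flatten[Position[x, "L"]], Flatten[Position[x, "R"]]];
Total[image /@ boundary[Characters["LRRR"]]]
(* f[{1,2},{3,4}] - f[{1,3},{2,4}] + f[{1,4},{2,3}], the case d = 2 of the sum above *)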

Proof of Claim 2: we will argue that $$L^dR^d-(-1)^d R^dL^d=\sum_{k=1}^d (-1)^{k+1}\frac{(k-1)!(d-k)!}{d!} \sum_{x,y} \partial (xy)\tag{*}$$ where $x$ ranges over strings of length $d$ with exactly $k$ Rs, and $y$ ranges over strings of length $d$ with exactly $d-k+1$ Rs.

As an example, consider the case $d=2$. Then $$LLRR-RRLL = (\partial(RLRR)+ \partial(LRRR) - \partial(RRLR) - \partial(RRRL))/2.$$ The coefficient of $LLRR$ on the right-hand side is $(1+1)/2$, and the coefficient of $RRLL$ is $(-1-1)/2$. The coefficient of $LRLR$ is $(1-1)/2$; the contributions come from $\partial(LRRR)$ and $\partial(RRLR)$. The other coefficients can also be checked directly, but the following symmetry argument shows that this is unnecessary.

The group $S_d\times S_d$ acts on strings of length $2d$ by $(\pi,\pi')*(x,x')=\pi(x)\pi'(x')$, where $x,x'\in\{L,R\}^d$. The left-hand side and right-hand side of (*) are manifestly invariant under this group (technically, under the induced action of $S_d\times S_d$ on the vector space generated by strings of length $2d$).

So it suffices to check that the coefficients of the terms $L^kR^{d-k}L^{d-k}R^k$ match, for $0\leq k\leq d$. We get contributions from $\partial(x)$ where $x$ is obtained by replacing one $L$ in $L^kR^{d-k}L^{d-k}R^k$ by an $R$. So there are two types of contributions: $k$ contributions from the terms $\partial(xL^{d-k}R^k)$ where $x$ is obtained by replacing one $L$ in $L^kR^{d-k}$ by an $R$ (these arise from the summand of (*) with summation index $d-k+1$), and $d-k$ contributions from the terms $\partial(L^kR^{d-k}y)$ where $y$ is obtained by replacing one $L$ in $L^{d-k}R^k$ by an $R$ (summation index $d-k$). The coefficient of $L^kR^{d-k}L^{d-k}R^k$ is therefore $$(-1)^{d-k}k\frac{(k-1)!(d-k)!}{d!}+(-1)^{d-k+1}(d-k)\frac{k!(d-k-1)!}{d!}$$ with the convention $0\cdot \infty=0$, i.e. ignore the first term if $k=0$, and ignore the second term if $k=d$. For $0<k<d$ the two terms cancel, for $k=d$ the sum equals $1$, and for $k=0$ it equals $(-1)^{d+1}$; these are exactly the coefficients of $L^dR^d$ and $R^dL^d$ on the left-hand side. $\square$
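For small $d$, identity (*) can also be verified directly by expanding both sides over formal basis symbols; here is a self-contained Mathematica sketch (all names are ad hoc):

d = 3;
strings[n_, r_] := Permutations[Join[Table["L", {n - r}], Table["R", {r}]]];
bdry[x_] := Total[e[ReplacePart[x, # -> "L"]] & /@ Flatten[Position[x, "R"]]];
lhs = e[Join[Table["L", {d}], Table["R", {d}]]] -
    (-1)^d e[Join[Table["R", {d}], Table["L", {d}]]];
rhs = Sum[(-1)^(k + 1) ((k - 1)! (d - k)!/d!)*
    Sum[bdry[Join[x, y]], {x, strings[d, k]}, {y, strings[d, d - k + 1]}], {k, 1, d}];
Expand[lhs - rhs]
(* should print 0 *)

Larger $d$ work the same way, just more slowly.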

  • Thank you! "Solving ... gives ...", is this the result of a computation which you have left out, or is this obvious? And does the other direction also work, i.e. is it a solution? – Martin Brandenburg Jan 28 '13 at 23:43
  • I believe it is a solution. The computation is solving $(d-k)c_{k+1}+kc_k=0$ for $1\leq k< d$, and $c_1=1/d$ and $c_d=(-1)^{d+1}/d$. – Colin McQuillan Jan 29 '13 at 00:02
  • For me these equations come out of the blue. Can you explain them? There are many $x$ and $y$, so how can we compute the right hand side? Also, could you explain the sign issue in your notation? Does $RRLL$ mean $x_3 x_4 x_1 x_2$ or $x_4 x_3 x_1 x_2$ etc. – Martin Brandenburg Jan 29 '13 at 00:44
  • Ok, I did the computation more explicitly, and explained the signs. – Colin McQuillan Jan 29 '13 at 11:58
  • Thanks a lot. In the formula for the image of $\partial(L^{d-1} R^{d+1})$ I think it should be $d+k$ instead of $d-1+k$. And we don't use antisymmetry. – Martin Brandenburg Jan 29 '13 at 15:23
  • Thanks. In the general case you'd need antisymmetry to show that $\partial(x)$ is zero; I have no idea how to write this down cleanly though. – Colin McQuillan Jan 29 '13 at 15:46
  • Sorry, it is still not clear to me. By "By symmetry" you mean that both sides get multiplied with $(-1)^d$ when one exchanges $L$ and $R$? But why does this mean that it suffices to consider the terms $L^k R^{d-k} L^k R^{d-k}$? And why is the coefficient as you write it? I don't see it, because I cannot get rid of this huge sum. It would be very helpful when you explain it in detail. – Martin Brandenburg Jan 29 '13 at 16:01
  • Well, it should have been $L^k R^{d-k} L^{d-k} R^k$. I've added more explanation about the symmetry argument I am appealing to. – Colin McQuillan Jan 29 '13 at 16:51
  • Thank you. In order to get the coefficient as stated, shouldn't we define the sum in (*) so that x has d−k+1 Rs and y has k Rs? – Martin Brandenburg Jan 29 '13 at 20:53
  • Do you agree with that correction? – Martin Brandenburg Feb 07 '13 at 16:44
  • I'm afraid I don't see any error, even after sleeping on it. I don't see any mismatch between (*) and the last displayed equation, or any errors in the paragraph explaining the coefficient. – Colin McQuillan Feb 08 '13 at 11:20

You can definitely get a computer algebra system to do this for fixed $d$: It's just linear algebra! Notice that, if you can prove this for $|I|=2d$, then it follows for all larger $I$. Specifically, if you can show that $123 \cdots d|(d+1)(d+2) \cdots (2d) = (d+1)(d+2) \cdots (2d) | 12 \cdots d$, then it follows for any $2d$ elements by replacing $(1,2,3,\ldots, d, d+1, \ldots, 2d)$ by $(a_1, a_2, \ldots, a_d, b_1, b_2, \ldots, b_d)$.

For fixed $d$, you are asking whether a set of linear equations implies another one.

Here's hackish Mathematica code:

(* put f[A,B] into a normal form: sort the index lists and record the sign;
   Signature gives 0 on lists with repeated entries, killing degenerate terms *)
g[A_, B_] := Signature[A]* Signature[B] *f[Sort[A], Sort[B]]
(* the defining relation attached to a (d-1)-set A and a (d+1)-set B *)
rr[A_, B_] := Sum[(-1)^(k - 1)* g[Append[A, B[[k]]], Drop[B, {k}]], 
                    {k, 1, Length[B]}]
(* one relation for each (d-1)-subset of {1,...,2d}, with B its complement *)
relations[d_] := Map[(rr[#, Complement[Range[2 d], #]] == 0) &, 
                         Subsets[Range[2 d], {d - 1}]]
(* the unknowns: f[S, complement of S] for each d-subset S *)
vars[d_] := Map[f[#, Complement[Range[2 d], #]] &, Subsets[Range[2 d], {d}]]
(* solve the linear system and substitute into the difference we want to vanish *)
DoStuff[d_] := f[Range[d], Range[d + 1, 2 d]] - f[Range[d + 1, 2 d], Range[d]] /.
                         Solve[relations[d], vars[d]]

If DoStuff[d] outputs {0}, the claim is true for $d$. I checked the claim for $d$ up to $6$ (at which point we are solving $792$ equations in $924$ variables). I'm sure that intelligent coding could make this much faster, but I don't care to work that hard.

  • Thanks a lot! For my real application it is unfortunately not enough to know that the identity is true for all values of $a_i,b_i$. Instead, I need a formal proof which is valid for all $a_i,b_i$ at once (like the one for $d=2$). That's also the reason why I wanted to use Sage (or rather the methods from Singular): lift(I,x) gives the coefficients of x when represented as a linear combination of the generators of the ideal I. But this is only implemented in the commutative case. – Martin Brandenburg Jan 28 '13 at 22:53