7

I know that $B$ would look something like this: $$B=\begin{bmatrix} c & a_{12}+c & \cdots & a_{1n}+c \\ -a_{12}+c & c & \cdots & a_{2n}+c \\ \vdots & \vdots & \ddots & \vdots \\ -a_{1n}+c & -a_{2n}+c & \cdots & c \end{bmatrix}$$

And that if $A$ were of odd order, the determinant would be $0$. I also know that $|B|=|B^t|$ and $|A|=|A^t|=|-A|=(-1)^n|A|$.
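A quick numerical check (a numpy sketch with a random skew-symmetric matrix; the size $n$ and shift $c$ below are arbitrary examples, not part of the problem) is consistent with these facts and with the claim I want to prove:

```python
import numpy as np

rng = np.random.default_rng(0)

# Random even-order skew-symmetric A and the shifted matrix B = A + c * (all-ones)
n, c = 4, 2.5
R = rng.standard_normal((n, n))
A = R - R.T                                  # A.T == -A, zero diagonal
B = A + c * np.ones((n, n))

print(np.isclose(np.linalg.det(B), np.linalg.det(A)))  # True: det(B) = det(A)

# Odd order: the determinant of a skew-symmetric matrix vanishes
R5 = rng.standard_normal((5, 5))
print(np.isclose(np.linalg.det(R5 - R5.T), 0))         # True up to rounding
```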

Any hint?

asaf92
  • Eliminate all but the first $c$ in the first row by row operations, then eliminate all but the first $c$ in the first column by column operations and see what happens. If it's unclear, do this for a $4\times4$ example first. Next, use Laplace expansion for the first row and use the fact that the determinant of an antisymmetric matrix of odd order vanishes. – Jesko Hüttenhain Apr 20 '16 at 12:43
  • I need a formal proof. I can experiment on $2\times2$ and $4\times4$ to see that it's true but that doesn't prove anything. – asaf92 May 01 '16 at 11:10
  • Sorry, I thought you wanted a hint, because you said "Any hint?" ;). – Jesko Hüttenhain May 01 '16 at 15:03

3 Answers

2

The matrix $B$ can be expressed in terms of $A$ and the all-ones vector $\mathbf{1}$:

$$B=A+c\mathbf{11^{\text{T}}}$$

Taking the determinant of both sides gives:

$$ \det{(B)}=\det{(A+c\mathbf{11^{\text{T}}})} $$

Next, we use the matrix determinant lemma, which states the following for an invertible matrix $M$ and a dyadic product $\mathbf{uv}^{\text{T}}$ (1).

Lemma: $$ \det{(M+\mathbf{uv^{\text{T}}})}=(1+\mathbf{v^{\text{T}}}M^{-1}\mathbf{u})\det{(M)} $$

Applying the lemma with $M=A$ (assumed invertible; the singular case follows by continuity, since both sides are polynomials in the entries of $A$), $\mathbf{u}=c\mathbf{1}$, and $\mathbf{v}=\mathbf{1}$, we obtain:

$$ \det{(B)}=(1+c\mathbf{1^{\text{T}}}A^{-1}\mathbf{1})\det{(A)} $$
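The lemma itself is easy to check numerically (a minimal numpy sketch; the random $M$, $\mathbf{u}$, $\mathbf{v}$ below are illustrative and not tied to the problem):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
M = rng.standard_normal((n, n))   # a random M is invertible almost surely
u = rng.standard_normal((n, 1))
v = rng.standard_normal((n, 1))

lhs = np.linalg.det(M + u @ v.T)
rhs = (1 + v.T @ np.linalg.inv(M) @ u).item() * np.linalg.det(M)
print(np.isclose(lhs, rhs))       # True: matrix determinant lemma
```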

A skew-symmetric matrix has several useful properties. We now state the ones needed for the proof.

Property 1:

Any skew-symmetric matrix $A$ arises from some square matrix $R$ as below, and its diagonal entries are necessarily zero:

$$A=\cfrac{1}{2}(R-R^{\text{T}})$$

Property 2:

The quadratic form of a skew-symmetric matrix $A$ vanishes for every vector $\mathbf{x}$, since the scalar $\mathbf{x}^{\text{T}}R^{\text{T}}\mathbf{x}$ equals its own transpose $\mathbf{x}^{\text{T}}R\mathbf{x}$:

$$ \begin{aligned} \mathbf{x}^{\text{T}}A\mathbf{x}=& \cfrac{1}{2}\mathbf{x}^{\text{T}}(R-R^{\text{T}})\mathbf{x} \\ =&\cfrac{1}{2}(\mathbf{x}^{\text{T}}R\mathbf{x}-\mathbf{x}^{\text{T}}R^{\text{T}}\mathbf{x}) \\ =&0 \end{aligned} $$
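Both properties can be checked numerically (a sketch; $R$ and $\mathbf{x}$ are random examples):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
R = rng.standard_normal((n, n))
A = 0.5 * (R - R.T)                   # Property 1: skew-symmetric by construction
print(np.allclose(np.diag(A), 0))     # True: zero diagonal

x = rng.standard_normal(n)
print(np.isclose(x @ A @ x, 0))       # True: Property 2, quadratic form vanishes
```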

Property 3:

If $A$ is invertible, the inverse matrix $A^{-1}$ is also skew-symmetric, because:

$$ \begin{aligned} A^{\text{T}}=&-A \\ (A^{\text{T}})^{-1}=&(-A)^{-1} \\ (A^{-1})^{\text{T}}=&-(A^{-1}) \end{aligned} $$

Therefore, applying Property 2 to the skew-symmetric matrix $A^{-1}$, we get $\mathbf{1}^{\text{T}}A^{-1}\mathbf{1}=0$, and hence $\det(B)=\det(A)$.
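Putting the pieces together numerically (a sketch; the even order $n$ and the shift $c$ are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
n, c = 6, -1.7                         # even order, arbitrary shift
R = rng.standard_normal((n, n))
A = 0.5 * (R - R.T)

Ainv = np.linalg.inv(A)
print(np.allclose(Ainv.T, -Ainv))      # True: Property 3, inverse is skew-symmetric

one = np.ones(n)
print(np.isclose(one @ Ainv @ one, 0)) # True: 1^T A^{-1} 1 = 0

B = A + c * np.ones((n, n))
print(np.isclose(np.linalg.det(B), np.linalg.det(A)))  # True: det(B) = det(A)
```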

Memo:

  1. The dyadic product is also called the outer product. An inner product produces a scalar, whereas an outer product produces a matrix.
1

If ${\bf A}$ is a skew-symmetric matrix of even order $n$ and ${\bf 1}$ denotes the $1\times n$ all-ones row, then $$\left(\begin{array}{cc} 0 & {\bf 1}\\ -{\bf 1}^T & {\bf A}\end{array}\right)$$ is a skew-symmetric matrix of odd order, therefore its determinant vanishes. The same holds with $-c\cdot{\bf 1}^T$ in place of $-{\bf 1}^T$: for $c\neq 0$, multiplying the first row of that matrix by $c$ produces a skew-symmetric matrix of odd order (and for $c=0$ the first column is zero). Thus, by multilinearity of the determinant in the first row, $$\det({\bf A}) = \det\left(\begin{array}{cc} 1 & {\bf 0}\\ -c\cdot {\bf 1}^T & {\bf A}\end{array}\right)+\det\left(\begin{array}{cc} 0 & {\bf 1}\\ -c\cdot {\bf 1}^T & {\bf A}\end{array}\right)=\det\left(\begin{array}{cc} 1 & {\bf 1}\\ -c\cdot {\bf 1}^T & {\bf A}\end{array}\right),$$ where the first summand equals $\det({\bf A})$ by expansion along its first row. In the last matrix, subtracting the first column from all other columns turns the first row into $(1,0,\dots,0)$ and the lower-right block into ${\bf A}+c\,{\bf 1}^T{\bf 1}={\bf B}$; expanding along the first row gives $\det({\bf A})=\det({\bf B})$, as desired.
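Each equality in the chain can be verified numerically (a sketch; the helper `bordered` and the random data are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(4)
n, c = 4, 3.0
R = rng.standard_normal((n, n))
A = R - R.T                            # skew-symmetric, even order
one = np.ones((1, n))                  # the all-ones row

def bordered(top_left, top_right):
    """Assemble the (n+1)x(n+1) block matrix [[top_left, top_right], [-c*1^T, A]]."""
    top = np.hstack([np.array([[top_left]]), top_right])
    return np.vstack([top, np.hstack([-c * one.T, A])])

d1 = np.linalg.det(bordered(1.0, np.zeros((1, n))))    # equals det(A)
d2 = np.linalg.det(bordered(0.0, one))                 # vanishes
d3 = np.linalg.det(bordered(1.0, one))                 # equals d1 + d2 by multilinearity
B = A + c * np.ones((n, n))
print(np.isclose(d1, np.linalg.det(A)), np.isclose(d2, 0))
print(np.isclose(d3, d1 + d2), np.isclose(d3, np.linalg.det(B)))
```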

0

Let $A_c:=A+c\mathbf1$, where $\mathbf1$ denotes the all-ones matrix. We subtract the first column of $A_c$ from all other columns and then we subtract the first row from all other rows. This corresponds to multiplication from the left by the matrix $S=(s_{ij})$ with $s_{ij}=\delta_{ij} - \delta_{j1} + \delta_{i1}\delta_{j1}$, and from the right by $S^{\text{T}}$. Here, $\delta_{ij}$ is the Kronecker delta; $S$ is the identity matrix with the below-diagonal entries of its first column set to $-1$, hence unit lower triangular.
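Concretely, $S$ is easy to write down and check in code (a numpy sketch; indices are 0-based here):

```python
import numpy as np

n = 4
S = np.eye(n)
S[1:, 0] = -1.0                        # rows 2..n get the first row subtracted

print(np.isclose(np.linalg.det(S), 1)) # True: S is unit lower triangular

rng = np.random.default_rng(5)
M = rng.standard_normal((n, n))
# S @ M subtracts row 1 from the other rows; M @ S.T does the same for columns
print(np.allclose((S @ M)[1], M[1] - M[0]))
print(np.allclose((M @ S.T)[:, 1], M[:, 1] - M[:, 0]))
```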

Let $T_c:=SA_cS^{\text{T}}$, then $\det(A_c)=\det(T_c)$ because $\det(S)=\det(S^{\text{T}})=1$. We calculate the entry $t_{ij}$ at position $(i,j)$ of $T_c$ for $i>1$ and $j>1$; for these indices $s_{ik}=\delta_{ik}-\delta_{k1}$ and $s_{j\ell}=\delta_{j\ell}-\delta_{\ell1}$. Note that $t_{11}=c$, $t_{i1}=a_{i1}$ and $t_{1j}=a_{1j}$. \begin{align*} t_{ij} &= \sum_{k=1}^n \sum_{\ell=1}^n s_{ik} (a_{k\ell}+c) s_{j\ell} = \sum_{k=1}^n \sum_{\ell=1}^n (\delta_{ik}-\delta_{k1})(a_{k\ell}+c) (\delta_{\ell j}-\delta_{\ell1}) \\ &= \sum_{\ell=1}^n (a_{i\ell}+c) (\delta_{\ell j}-\delta_{\ell1}) - \sum_{\ell=1}^n (a_{1\ell}+c) (\delta_{\ell j}-\delta_{\ell1}) \\ &= (a_{ij}+c)-(a_{i1}+c) - (a_{1j}+c)+(a_{11}+c) \\ &= a_{ij} - a_{i1} - a_{1j} \end{align*} using $a_{11}=0$ in the last step. Now, $t_{ij}=-t_{ji}$ for all $i>1$ and $j>1$, since $t_{ji}=a_{ji}-a_{j1}-a_{1i}=-a_{ij}+a_{1j}+a_{i1}=-t_{ij}$. In other words, the submatrix of $T_c$ which arises by removing the first row and the first column is antisymmetric of odd size, therefore its determinant vanishes. No entry other than $t_{11}$ depends on $c$. By Laplace expansion of the first column of $T_c$, this means $\det(T_c)=\det(T_0)$ for all $c$, and thereby $$ \det(A_c)=\det(T_c)=\det(T_0)=\det(A_0)=\det(A). $$
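The whole computation can be traced numerically (a sketch; the random $A$ and the tested values of $c$ are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(6)
n = 4
R = rng.standard_normal((n, n))
A = R - R.T                                    # skew-symmetric, even order

S = np.eye(n)
S[1:, 0] = -1.0                                # subtract first row / first column

for c in (0.0, 1.0, -3.2):
    Ac = A + c * np.ones((n, n))
    Tc = S @ Ac @ S.T
    sub = Tc[1:, 1:]                           # drop first row and column
    assert np.allclose(sub, -sub.T)            # antisymmetric, odd size
    assert np.isclose(Tc[0, 0], c)             # only t_11 depends on c
    assert np.isclose(np.linalg.det(Ac), np.linalg.det(A))
print("det(A_c) = det(A) for every tested c")
```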