Determinant


The determinant is an important notion in linear algebra.

For an $n\times n$ matrix $a = (a_{ij})$, the determinant is defined by the sum \begin{align*} \det a &= \sum_{\sigma \in S_n} \text{sign} (\sigma) a_{1\sigma(1)} a_{2\sigma(2)} \dotsm a_{n\sigma(n)} \\ &= \sum_{\sigma \in S_n} \text{sign}(\sigma) \prod_{i=1}^n a_{i\sigma(i)} , \end{align*} where $S_n$ is the set of all permutations of the set $\{ 1, \dotsc, n\}$, and $\text{sign}(\sigma)$ is the sign of the permutation $\sigma$: $+1$ if $\sigma$ is even and $-1$ if $\sigma$ is odd.

For example, the determinant of a $2\times 2$ matrix $\begin{pmatrix} a&b \\c&d \end{pmatrix}$ is $ad - bc$.
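To see this from the definition, note that $S_2$ consists of the identity permutation, with sign $+1$, and the transposition $(1\,2)$, with sign $-1$, so the sum has exactly two terms: \[\det \begin{pmatrix} a&b \\ c&d \end{pmatrix} = (+1)\, ad + (-1)\, bc = ad - bc .\]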

This quantity may seem unwieldy, but surprisingly, it is multiplicative. That is, for any $n \times n$ matrices $a,b$ (over the same commutative field), \[\det (ab) = \det (a) \cdot \det (b) .\]
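As a quick numerical illustration, take \[a = \begin{pmatrix} 1&2 \\ 3&4 \end{pmatrix}, \qquad b = \begin{pmatrix} 5&6 \\ 7&8 \end{pmatrix} .\] Then $\det a = 1\cdot 4 - 2\cdot 3 = -2$ and $\det b = 5\cdot 8 - 6\cdot 7 = -2$, while \[ab = \begin{pmatrix} 19&22 \\ 43&50 \end{pmatrix} \quad\text{and}\quad \det (ab) = 19\cdot 50 - 22\cdot 43 = 4 = (-2)(-2) = \det(a) \cdot \det(b) .\]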

More generally, if $F$ is a commutative field and $a$ is an element of a (strictly power associative) $n$-dimensional $F$-algebra $A$, then the determinant of $a$ is $(-1)^n$ times the constant term of the characteristic polynomial of $a$.
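In the familiar case of a $2\times 2$ matrix $\begin{pmatrix} a&b \\ c&d \end{pmatrix}$, for instance, the characteristic polynomial is $t^2 - (a+d)t + (ad - bc)$; its constant term is $ad - bc$, and multiplying by $(-1)^2 = 1$ recovers the value $ad - bc$ given by the first definition.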

Our generalized determinants also satisfy the multiplicative property when $A$ is associative.

Matrix Determinants are Multiplicative

In this section we prove that the determinant as defined for $n\times n$ matrices is multiplicative.

We first note that \begin{align*} \det a \det b &= \sum_{\sigma \in S_n} (-1)^{\sigma} \prod_{i=1}^n a_{i\sigma(i)} \sum_{\tau \in S_n} (-1)^{\tau} \prod_{j=1}^n b_{j\tau(j)} \\ &= \sum_{\sigma, \tau \in S_n} (-1)^{\tau \sigma} \prod_{i=1}^n a_{i\sigma(i)} \cdot \prod_{j=1}^n b_{\sigma(j) \tau (\sigma(j))} \\ &= \sum_{\sigma, \tau \in S_n} (-1)^{\tau \sigma} \prod_{i=1}^n a_{i \sigma(i)} b_{\sigma(i) \tau\circ \sigma(i)} , \end{align*} where we write $(-1)^\sigma$ for $\text{sign}(\sigma)$; the second equality follows by reindexing the second product along the bijection $\sigma$ and using $\text{sign}(\tau)\text{sign}(\sigma) = \text{sign}(\tau\sigma)$. Since, for fixed $\sigma$, the map $\tau \mapsto \tau \circ \sigma$ is a bijection of $S_n$, we may substitute $\upsilon = \tau \circ \sigma$ to obtain \begin{align} \nonumber \det a \det b &= \sum_{\upsilon \in S_n} (-1)^\upsilon \sum_{\sigma \in S_n} \prod_{i=1}^n a_{i\sigma(i)} b_{\sigma(i) \upsilon(i)} \\ &= \sum_{\sigma \in S_n} \sum_{\upsilon \in S_n} (-1)^\upsilon \prod_{i=1}^n a_{i\sigma(i)} b_{\sigma(i) \upsilon(i)} . \end{align}
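To see the reindexing in the second equality concretely, take $n = 2$ and $\sigma = (1\,2)$: then \[\prod_{j=1}^2 b_{j\tau(j)} = b_{1\tau(1)} b_{2\tau(2)} = b_{\sigma(2)\tau(\sigma(2))} b_{\sigma(1)\tau(\sigma(1))} = \prod_{j=1}^2 b_{\sigma(j)\tau(\sigma(j))} ,\] the same factors written in a different order.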

On the other hand, \begin{align*} \det ab &= \sum_{\upsilon \in S_n} (-1)^\upsilon \prod_{i=1}^n (ab)_{i 	\upsilon(i)} \\ &= \sum_{\upsilon \in S_n} (-1)^\upsilon \prod_{i=1}^n \sum_{j=1}^n 	a_{ij} b_{j \upsilon(i)} \\ &= \sum_{\upsilon \in S_n} (-1)^\upsilon \sum_{\phi \in [n]^{[n]}} 	\prod_{i=1}^n a_{i \phi(i)} b_{\phi(i) \upsilon(i)} \\ &= \sum_{\phi \in [n]^{[n]}} \sum_{\upsilon \in S_n} (-1)^\upsilon 	\prod_{i=1}^n a_{i \phi(i)} b_{\phi(i) \upsilon(i)} , \end{align*} where $[n]$ is the set $\{1, \dotsc, n\}$, and $[n]^{[n]}$ is the set of functions mapping $[n]$ into itself.
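The third equality above simply distributes the product over the sums; for $n = 2$, for example, \[\prod_{i=1}^2 \sum_{j=1}^2 c_{ij} = (c_{11} + c_{12})(c_{21} + c_{22}) = c_{11}c_{21} + c_{11}c_{22} + c_{12}c_{21} + c_{12}c_{22} ,\] with one term $c_{1\phi(1)} c_{2\phi(2)}$ for each of the four functions $\phi \in [2]^{[2]}$.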

From equation (1), it thus suffices to show that if $\phi \in [n]^{[n]}$ is not a permutation on $[n]$, then \[\sum_{\upsilon \in S_n} (-1)^\upsilon \prod_{i=1}^n 	a_{i\phi(i)} b_{\phi(i) \upsilon(i)} = 0 . \qquad (2)\]

To this end, suppose that $\phi$ is not a permutation. Then there exist distinct integers $k, l \in [n]$ such that $\phi(k) = \phi(l)$. Let $\psi$ be the permutation on $[n]$ that transposes $k$ and $l$ while fixing everything else. Then \[\prod_{i=1}^n a_{i \phi(i)} b_{\phi(i) \upsilon(i)} 	= \prod_{i=1}^n a_{i\phi(i)} b_{\phi(i) \upsilon(\psi(i))} ,\] as the latter product is the same as the former with the factors at $i = k$ and $i = l$ switched (here we use $\phi(k) = \phi(l)$). On the other hand, $\psi$ is an odd permutation, so \[(-1)^\upsilon \prod_{i=1}^n a_{i \phi(i)} b_{\phi(i) \upsilon(i)} + (-1)^{\upsilon \psi} \prod_{i=1}^n a_{i \phi(i)} b_{\phi(i) \upsilon(\psi(i))} =0 .\] Since $\psi = \psi^{-1}$, we can partition the elements of $S_n$ into pairs $\{ \upsilon, \upsilon \circ \psi\}$ for which the equation above holds. Equation (2) then follows, and we are done. $\blacksquare$
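For a concrete instance of this cancellation, take $n = 2$ and the non-permutation $\phi$ with $\phi(1) = \phi(2) = 1$. The two elements of $S_2$ form a single pair, and \[(+1)\, a_{11} b_{11}\, a_{21} b_{12} + (-1)\, a_{11} b_{12}\, a_{21} b_{11} = 0 ,\] so the sum in equation (2) vanishes.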

Equivalence of Definitions

We now prove that our two definitions are equivalent. We first note that the definitions coincide in the case of upper-triangular matrices. In the permutation sum, only the identity permutation contributes a nonzero term (any other permutation $\sigma$ satisfies $\sigma(i) < i$ for some $i$, so the factor $a_{i\sigma(i)}$ vanishes), and the first definition therefore gives the product of the diagonal entries. On the other hand, each diagonal entry of an upper-triangular matrix is a (generalized) eigenvalue of the matrix, so the characteristic polynomial is $\prod_{i=1}^n (t - a_{ii})$, whose constant term is $(-1)^n \prod_{i=1}^n a_{ii}$; multiplying by $(-1)^n$ again gives the product of the diagonal entries.
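For instance, for the upper-triangular matrix $\begin{pmatrix} p&q \\ 0&r \end{pmatrix}$, the permutation sum gives $pr - q\cdot 0 = pr$, while the characteristic polynomial $(t-p)(t-r) = t^2 - (p+r)t + pr$ has constant term $pr$, and $(-1)^2 pr = pr$ as well.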

We now use the fact that every element $a$ of $M_n(F)$ is similar to an upper triangular matrix, at least after extending scalars to an algebraic closure of $F$ (an extension that affects neither definition of the determinant); that is, there exist an upper triangular matrix $b$ and an invertible matrix $c$ such that \[a = c b c^{-1} .\] Writing $\det\nolimits'$ for our specialized determinant for matrices and $\det$ for our generalized definition with the characteristic polynomial, we have \begin{align*} \det\nolimits'(a) = \det\nolimits'(cbc^{-1}) &= \det\nolimits'(c) \cdot \det\nolimits'(bc^{-1}) \\ &= \det\nolimits'(bc^{-1}) \cdot \det\nolimits'(c) \\ &= \det\nolimits'(bc^{-1}c) = \det\nolimits'(b) = \det b = \det a , \end{align*} where the final equality holds because the characteristic polynomial does not change under conjugation by an invertible matrix (an automorphism of $A$ that fixes $F$). Our two definitions are therefore equivalent. $\blacksquare$

References

  • Garibaldi, Skip, "The Characteristic Polynomial and Determinant Are Not Ad Hoc Constructions", American Mathematical Monthly 111 (2004), no. 9, p. 761.

See also