Vin de Silva recently gave a talk here at OSU[^1] about his approach to teaching algebraic topology. He wants to make sure students understand the pre-algebraic roots of the concepts: (co)cycles should come before spaces of (co)chains, etc. How did everyone think about things before Emmy Noether came in and ruined everything? So he starts by talking about winding numbers, using this to define the 1-cocycle \(d\theta\) which acts on 1-cycles, and then you’re well on your way to homology and cohomology.

This is one way of getting your hands dirty in algebraic topology: understand the geometric motivations for the abstract algebraic structures first. Work as concretely as possible with the topology.

A complementary approach is to work as concretely as possible with the algebra. This means using combinatorial models like simplicial complexes for topological spaces and looking at the vector spaces of field-valued chains and cochains. The combinatorial nature of a finite simplicial complex means that these are finite-dimensional vector spaces with a natural basis indexed by simplices. This lets us describe all the algebraic structure using little more than a matrix.
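
To make this concrete, here is a minimal sketch (my own, with a single filled triangle chosen just for illustration) of how these matrices get written down. The entries are 0s and \(\pm 1\)s, to be read in whatever field \(\mathbb{k}\) you like.

```python
import numpy as np

# A filled triangle on vertices 0, 1, 2: list its simplices with increasing
# vertex labels, which fixes an orientation of each simplex.
vertices = [(0,), (1,), (2,)]
edges = [(0, 1), (0, 2), (1, 2)]
triangles = [(0, 1, 2)]

def boundary_matrix(k_simplices, faces):
    """Matrix of the boundary map from k-chains to (k-1)-chains in the indicator basis."""
    D = np.zeros((len(faces), len(k_simplices)))
    for j, s in enumerate(k_simplices):
        for i in range(len(s)):
            face = s[:i] + s[i + 1:]              # delete the i-th vertex
            D[faces.index(face), j] = (-1) ** i   # alternating sign from the orientation
    return D

d1 = boundary_matrix(edges, vertices)    # 3 x 3: one column per edge
d2 = boundary_matrix(triangles, edges)   # 3 x 1: one column for the 2-simplex
print(d1 @ d2)                           # all zeros: a boundary has no boundary
```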

Computing (co)homology is then a matter of finding a new basis for the vector space that gives this matrix a particularly nice structure. I first learned this way of thinking about homology from Greg Henselman, but this perspective or something like it is ubiquitous in the world of computational topology. I think understanding it is too often left as something to be done purely in the privacy of one’s own home (or one’s own Julia code), but the basic idea is quite simple and, I think, rather nice.

Let \(X\) be an oriented simplicial complex with a space of \(\mathbb{k}\)-valued chains \(C_*(X;\mathbb{k})\). If \(X^k\) is the collection of \(k\)-simplices of \(X\), \(C_k(X;\mathbb{k})\) is isomorphic to \(\mathbb{k}^{X^k}\), and we will use the indicator functions \(\{\mathbf{1}_{\sigma}\}_{\sigma \in X^k}\) as a basis. Under this basis, the boundary operator \(\partial: C_*(X;\mathbb{k}) \to C_*(X;\mathbb{k})\) has a representation that looks like this:

[Figure: block boundary operator matrix]
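
In symbols, ordering the basis by simplex dimension and writing \(n = \dim X\), the only nonzero blocks sit just above the diagonal:

\[
\partial \;=\;
\begin{pmatrix}
0 & \partial_1 & & & \\
& 0 & \partial_2 & & \\
& & \ddots & \ddots & \\
& & & 0 & \partial_n \\
& & & & 0
\end{pmatrix},
\]

where the block \(\partial_j\) records the boundaries of the \(j\)-simplices in terms of the \((j-1)\)-simplices, and blank entries are zero. The relation \(\partial^2 = 0\) says precisely that \(\partial_j \partial_{j+1} = 0\) for every \(j\).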

This matrix is nilpotent because \(\partial^2 = 0\). Our goal is to find a basis for \(C_*\) that makes this fact evident and reveals a basis for \(H_k\). This is a matter of performing row and column operations on the matrix.

Let’s assume that \(X\) has dimension \(k+1\). We start by performing column operations on \(\partial_{k+1}\) to find a basis for \(\ker \partial_{k+1}\): Gaussian elimination on the columns zeroes out as many columns as possible, and we move the zero columns to the left. The remaining nonzero columns form a submatrix in column echelon form: the first nonzero entry of each column is a 1, and these leading entries stairstep down as we move to the right. This gives us a matrix that looks like this:

[Figure: Step 2, after column operations on \(C_{k+1}\)]
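
Schematically, the \(\partial_{k+1}\) block now has the form

\[
\partial_{k+1} \;\leadsto\; \begin{pmatrix} 0 & E \end{pmatrix},
\]

where the zero columns record a basis for \(\ker \partial_{k+1}\), and \(E\) is a column echelon matrix with \(\operatorname{rank} \partial_{k+1}\) columns, its leading 1s stepping down as we move right.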

Now we perform row operations on \(\partial_{k+1}\) to clean up the block that is still nonzero, turning it into an identity block with zeros in the remaining rows of those columns. Because we’re treating this as a linear map \(C_*(X) \to C_*(X)\), we also have to perform the corresponding (inverse) column operations on \(\partial_k\). So, for instance, if we add row \(i\) to row \(j\), we also need to subtract column \(j\) from column \(i\). (Think about this in terms of multiplication by elementary matrices.)
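
To spell out that parenthetical: adding row \(i\) to row \(j\) (with \(i \neq j\)) is left multiplication by the elementary matrix \(E = I + e_{ji}\), where \(e_{ji}\) has a single 1 in position \((j, i)\). Keeping the underlying operator fixed means conjugating,

\[
\partial \;\longmapsto\; E\,\partial\,E^{-1}, \qquad E^{-1} = I - e_{ji},
\]

and right multiplication by \(I - e_{ji}\) is exactly the operation of subtracting column \(j\) from column \(i\).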

This is actually a good thing, because it does some extra work for us for free. Our change of basis doesn’t change the fact that \(\partial^2 = 0\): since \(\partial_k \partial_{k+1} = 0\) and \(\partial_{k+1}\) now contains an identity block, the columns of \(\partial_k\) indexed by the rows of that block have to vanish. These required zeros in \(\partial_k\) are the matrix expression of the fact that \(\text{im } \partial_{k+1} \subseteq \ker \partial_k\). So now our matrix looks like this:

[Figure: Step 3, after row operations on \(C_k\)]
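
Schematically, and up to reordering the bases, the two blocks we care about now look like

\[
\partial_{k+1} \;\leadsto\; \begin{pmatrix} 0 & 0 \\ 0 & I \end{pmatrix},
\qquad
\partial_{k} \;\leadsto\; \begin{pmatrix} \ast & 0 \end{pmatrix},
\]

where the forced-zero columns of \(\partial_k\) are indexed by exactly the basis vectors of \(C_k\) that carry the identity block of \(\partial_{k+1}\).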

Now we perform the same sort of column operations on \(\partial_{k}\), getting it into column echelon form. We of course need to do the corresponding inverse row operations on \(\partial_{k+1}\), but the rows involved are ones that are already zero (the columns of \(\partial_k\) being combined sit away from the identity block of \(\partial_{k+1}\)), so this does nothing. Then we perform row operations on \(\partial_k\), a change of basis in \(C_{k-1}\), leaving us with another identity block representing the nonzero part of \(\partial_k\).

[Figure: Step 4, a basis for \(H_k\) is revealed]

If we look at the columns corresponding to \(C_k\), the basis vectors whose columns are zero give a basis for \(\ker \partial_k\), and that basis contains a basis for \(\text{im } \partial_{k+1}\) (the vectors hit by the identity block of \(\partial_{k+1}\)). The classes of the remaining basis vectors give a basis for \(H_k\). One way to look at this is as a pretty way of writing the Jordan canonical form of \(\partial\). Since \(\partial^2 = 0\), all its eigenvalues are 0, and its Jordan form consists of a number of \(2\times 2\) Jordan blocks together with \(1\times 1\) zero blocks. This particular algorithm permutes the basis vectors so that those blocks respect the grading, making the kernels and images of the graded pieces of \(\partial\) plainly visible.
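
Counting blocks makes the dimension of \(H_k\) explicit: each pair of basis vectors matched by an identity block contributes a \(2\times 2\) Jordan block \(\left(\begin{smallmatrix} 0 & 1 \\ 0 & 0 \end{smallmatrix}\right)\), each unmatched basis vector contributes a \(1\times 1\) zero block, and the unmatched vectors in degree \(k\) are exactly our representatives for \(H_k\). So

\[
\dim H_k \;=\; \dim \ker \partial_k - \operatorname{rank} \partial_{k+1} \;=\; \dim C_k - \operatorname{rank} \partial_k - \operatorname{rank} \partial_{k+1}.
\]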

If we keep track of the operations we perform on \(\partial\) (it is enough to accumulate the column operations), we can see what the representatives for \(H_k\) are in terms of the standard basis. What we have done is written \(J = S^{-1}\partial S\), where \(S\) is the accumulated change of basis; the columns of \(S\) are the basis vectors giving us our Jordan form of \(\partial\), and the ones sitting in the homology positions are our representatives.
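
If you want to see representatives fall out of a matrix reduction in code, here is a small sketch of my own (over \(\mathbb{F}_2\), so signs can be ignored) using the standard left-to-right column reduction from computational topology rather than the full row-and-column procedure above. It only changes basis on the source side, writing \(R = \partial V\) with \(V\) invertible, but the columns of \(V\) play the same role as the columns of \(S\): a column of \(V\) whose \(R\)-column is zero is a cycle, and it represents a homology class exactly when its index never appears as the low (lowest nonzero row) of another column.

```python
import numpy as np

# A hollow triangle (the boundary of the filled triangle above), listed so
# that every simplex appears after its faces. It has one 1-dimensional hole.
simplices = [(0,), (1,), (2,), (0, 1), (0, 2), (1, 2)]
n = len(simplices)

# Total boundary matrix over F_2: entry (i, j) is 1 when simplex i is a
# codimension-1 face of simplex j. No signs are needed mod 2.
D = np.zeros((n, n), dtype=np.uint8)
for j, s in enumerate(simplices):
    if len(s) == 1:
        continue
    for i in range(len(s)):
        face = s[:i] + s[i + 1:]
        D[simplices.index(face), j] = 1

def low(col):
    """Row index of the lowest nonzero entry of a column, or -1 if the column is zero."""
    nz = np.flatnonzero(col)
    return nz[-1] if nz.size else -1

# Left-to-right column reduction: maintain R = D V with V invertible, adding
# earlier columns into later ones until all the lows of R are distinct.
R, V = D.copy(), np.eye(n, dtype=np.uint8)
for j in range(n):
    while low(R[:, j]) != -1:
        clashes = [jj for jj in range(j) if low(R[:, jj]) == low(R[:, j])]
        if not clashes:
            break
        R[:, j] ^= R[:, clashes[0]]   # column addition over F_2
        V[:, j] ^= V[:, clashes[0]]

# A column j of V is a cycle when R[:, j] = 0; it represents a nonzero homology
# class when, in addition, j never appears as the low of another column.
lows = {low(R[:, j]) for j in range(n) if low(R[:, j]) != -1}
for j in range(n):
    if low(R[:, j]) == -1 and j not in lows:
        dim = len(simplices[j]) - 1
        chain = [simplices[i] for i in np.flatnonzero(V[:, j])]
        print(f"H_{dim} representative: {chain}")
```

Running this prints a single vertex as a representative for \(H_0\) and the sum of the three edges as a representative cycle for \(H_1\), which is the answer you would hope for.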

What I find neat about this is the way that abstract algebraic conditions like the fact that \(C_*(X)\) is a complex show up as concrete facts about the matrix of the operator \(\partial\) when we write it in the right basis. The gap between the abstract structure and the matrix representation isn’t too wide in this case, but it rests on some fairly deep facts about the combinatorics of bases in linear algebra. Greg’s thesis describes this in terms of matroid theory, and uses this language to understand persistent homology. I’m planning to explore this a bit more in the future, and hopefully understand how to interpret the long exact sequence of a pair and spectral sequences in terms of simple matrix operations.

[^1]: Well, as far as anything happens “here” anymore.