The long exact sequence of a pair of topological spaces $(X,A)$ is one of those consummately abstract algebraic results that end up being very useful in calculating the homology of various spaces. Here I want to explain how we can make this result much more concrete: in fact, it's a statement about how Gaussian elimination behaves in certain types of matrices.
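For reference, here is the sequence in question:

```latex
\cdots \longrightarrow H_k(A) \longrightarrow H_k(X) \longrightarrow H_k(X,A)
\xrightarrow{\;\delta\;} H_{k-1}(A) \longrightarrow H_{k-1}(X) \longrightarrow \cdots
```

The interesting map is $\delta$, the connecting homomorphism, which drops a degree; it is the map that will fall out of the matrix reduction below.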

In the last post, I talked about how to compute the homology of a simplicial complex by performing elementary row and column operations. The long exact sequence arises from doing the same sorts of operations with a little bit more information.

When $A$ is a subcomplex of $X$, the boundary matrix for $X$ can be split into blocks. The boundary of a simplex in $A$ is still in $A$, but the boundary of a simplex not in $A$ may lie partially in $A$ and partially outside $A$. So we can split the standard basis for $C_k(X)$ up into two sets: simplices in $A$ and simplices not in $A$, and with respect to this division, the boundary matrix has a block we know is zero. Since the complementary basis to the basis for $C_k(A)$ serves as a basis for $C_k(X,A)$, this zero block corresponds to the triviality of the boundary $\partial\colon C_{k+1}(A) \to C_k(X,A)$.
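In block form, with the simplices of $A$ listed first, this says the boundary matrix is block upper triangular. Writing $f_k$ for the remaining off-diagonal block (my notation, not the post's):

```latex
\partial_k =
\begin{pmatrix}
  \partial_k^A & f_k \\
  0            & \partial_k^{(X,A)}
\end{pmatrix},
\qquad
f_k \colon C_k(X,A) \longrightarrow C_{k-1}(A).
```

Here $f_k$ records the part of the boundary of a simplex outside $A$ that falls in $A$.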

Block boundary operator matrix

The two “diagonal” blocks each in fact qualify as boundary maps of their own. That is, $\partial_k^A \partial_{k+1}^A = 0$ and $\partial_k^{(X,A)} \partial_{k+1}^{(X,A)} = 0$. This can be seen by evaluating $\partial^2$ and noting that the blocks corresponding to maps $C_{k+2}(A) \to C_k(A)$ and $C_{k+2}(X,A) \to C_k(X,A)$ are the squares of the diagonal blocks.
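Multiplying the block form of $\partial$ by itself makes this visible at a glance (I write $f_k$ for the off-diagonal block $C_k(X,A) \to C_{k-1}(A)$; the name is my own):

```latex
0 = \partial_k \partial_{k+1} =
\begin{pmatrix}
  \partial_k^A \partial_{k+1}^A &
  \partial_k^A f_{k+1} + f_k \partial_{k+1}^{(X,A)} \\
  0 &
  \partial_k^{(X,A)} \partial_{k+1}^{(X,A)}
\end{pmatrix}.
```

The diagonal entries give the two boundary conditions, and the off-diagonal entry is a relation on the remaining block that the reduction steps below exploit.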

As a result, the full boundary map $\partial$ also encodes the boundary map $\partial^A$ for the subcomplex $A$ and the relative boundary map $\partial^{(X,A)}$. We can compute these homologies in context by performing the same operations as before, just restricted to the diagonal blocks of the full boundary $\partial$. This leaves us with bases for $\ker \partial^A$ containing bases for $\operatorname{im}\, \partial^A$, and likewise for $\partial^{(X,A)}$. But there's a bit more left in the matrix: a block corresponding to a map $C_{k+1}(X,A) \to C_k(A)$.
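The homology dimensions of the two diagonal blocks can be computed exactly as described, by Gaussian elimination over a field. Here is a minimal sketch over $\mathbb{Z}/2$; the helper name `rank_mod2` and the toy example (X a hollow triangle, A two of its vertices) are my own, and the post's reduction tracks more information (the change of basis), but the ranks of the diagonal blocks already determine the dimensions.

```python
def rank_mod2(rows):
    """Rank of a 0/1 matrix over Z/2 by Gaussian elimination."""
    rows = [list(r) for r in rows]
    rank = 0
    ncols = len(rows[0]) if rows else 0
    for col in range(ncols):
        pivot = next((i for i in range(rank, len(rows)) if rows[i][col]), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i][col]:
                rows[i] = [(a + b) % 2 for a, b in zip(rows[i], rows[rank])]
        rank += 1
    return rank

# X = hollow triangle: vertices 0,1,2; edges 01,12,02.  A = vertices {0,1}.
# Diagonal block for A: no edges lie in A, so its degree-1 boundary is empty.
dim_H0_A = 2 - rank_mod2([])       # = 2: two isolated points
# Diagonal block for (X,A): one relative vertex {2}, all three edges survive.
# The relative boundary drops the A-part: d(01)=0, d(12)=2, d(02)=2.
d1_rel = [[0, 1, 1]]               # row: vertex 2; cols: edges 01,12,02
dim_H0_rel = 1 - rank_mod2(d1_rel)             # = 0
dim_H1_rel = 3 - rank_mod2(d1_rel)             # = 2 (no 2-simplices)
print(dim_H0_A, dim_H0_rel, dim_H1_rel)        # → 2 0 2
```
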

Step 2: Homology of A and (X,A)

What do we know about this block? Quite a bit can be deduced from the fact that this reduced matrix squares to zero. Let’s look at what happens when we multiply the two relevant blocks.

Most of the blocks of the product are automatically zero by the structure of the factors, but the upper right block, corresponding to a map $C_{k+1}(X,A) \to C_{k-1}(A)$, has two potentially nonzero regions. One comes from the composition of the block $C_{k+1}(X,A) \to C_k(A)$ with the identity submatrix in the block $C_k(A) \to C_{k-1}(A)$, and the other comes from the composition of the identity submatrix in $C_{k+1}(X,A) \to C_k(X,A)$ with the block $C_k(X,A) \to C_{k-1}(A)$.

Here’s what that looks like. There’s a vertical strip in $\partial_k$ and a horizontal strip in $\partial_{k+1}$ that contribute to the product. Where they don’t overlap in the product, these strips must be zero. Where they do overlap, they must sum to zero.

Tracking where the zeros must be

This translates to some known zeros in the full matrix, which we will be able to leverage in our reduction operations. In particular, we know that the red block and the green block sum to zero here.

Step 3: Some known zeros in the reduced matrix

We can use the identity block for $C_{k+1}(A) \to C_k(A)$ to clear out the red block using column operations. Since the red block is the negative of the green block, the corresponding row operations clear out the green block.

What are we doing here? The column operations amount to subtracting chains in $C_{k+1}(A)$ from the basis vectors for $C_{k+1}(X,A)$, which makes sense when we think of the latter space as a quotient.

We continue clearing out columns to get a basis for which $H_{k+1}(X,A)$ maps directly onto the basis for $H_k(A)$. This is our connecting map. The corresponding row operations involve adding multiples of zero rows to other zero rows, so this manipulation has no side effects. We are in effect choosing different representatives in $C_*(X)$ for chains in $C_*(X,A)$.
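In formulas, this recovers the usual description of the connecting homomorphism: a class in $H_{k+1}(X,A)$ is represented by a relative cycle $z$ with $\partial_{k+1}^{(X,A)} z = 0$, and writing $f_{k+1}$ for the block $C_{k+1}(X,A) \to C_k(A)$ (my notation), the reduced matrix sends

```latex
\delta[z] = \bigl[f_{k+1}(z)\bigr] \in H_k(A),
```

that is: take a relative cycle, take its full boundary, and note that the boundary lies entirely in $A$.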

Step 4: A matrix representation of the connecting map is revealed

Can we tell that the sequence this describes is exact? We need to finish reducing the matrix so we know what the rest of the maps in the sequence look like. Once we get the connecting map into Smith normal form, we have in fact calculated $H_k(X)$ as well.

Step 5: The fully reduced boundary matrix reveals exactness

We can now extract all the maps in the long exact sequence from this matrix. We can read off bases for $H_k(A)$, $H_k(X,A)$, and $H_k(X)$ from the matrix and the column operations we performed. The map $H_k(A) \to H_k(X)$ sends each basis vector for $H_k(A)$ either to itself (if it is not in $\operatorname{im}\, \partial$) or to zero (if it is in $\operatorname{im}\, \partial$). Similarly, the map $H_k(X) \to H_k(X,A)$ sends a basis vector to itself if it is in the basis for $H_k(X,A)$, and to zero otherwise (in which case it is in the basis for $H_k(A)$).
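As a numerical sanity check, here is a sketch (again over $\mathbb{Z}/2$, with my own toy example: X a hollow triangle, A two of its vertices) that computes every term of the sequence and verifies the dimension bookkeeping that exactness forces. The shortcut for the connecting map's rank is my own: for a block-triangular matrix, the rank exceeds the sum of the diagonal ranks by exactly the rank of the induced map from the kernel of the lower block to the cokernel of the upper block, which here is the connecting map.

```python
def rank_mod2(rows):
    """Rank of a 0/1 matrix over Z/2 by Gaussian elimination."""
    rows = [list(r) for r in rows]
    rank = 0
    ncols = len(rows[0]) if rows else 0
    for col in range(ncols):
        pivot = next((i for i in range(rank, len(rows)) if rows[i][col]), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i][col]:
                rows[i] = [(a + b) % 2 for a, b in zip(rows[i], rows[rank])]
        rank += 1
    return rank

# X = hollow triangle (vertices 0,1,2; edges 01,12,02), A = vertices {0,1}.
d1_X   = [[1, 0, 1], [1, 1, 0], [0, 1, 1]]  # full boundary; rows = vertices
d1_A   = []                                 # no edges lie in A
d1_rel = [[0, 1, 1]]                        # relative boundary; row = vertex 2

h1_A,   h0_A   = 0, 2 - rank_mod2(d1_A)
h1_X,   h0_X   = 3 - rank_mod2(d1_X), 3 - rank_mod2(d1_X)   # 3 edges, 3 vertices
h1_rel, h0_rel = 3 - rank_mod2(d1_rel), 1 - rank_mod2(d1_rel)

# Rank of delta: H_1(X,A) -> H_0(A) as the "excess" rank of the
# block-triangular matrix over its two diagonal blocks.
rank_delta = rank_mod2(d1_X) - rank_mod2(d1_A) - rank_mod2(d1_rel)

# Exactness forces the alternating sum of dimensions along
# 0 -> H_1(A) -> H_1(X) -> H_1(X,A) -> H_0(A) -> H_0(X) -> H_0(X,A) -> 0
# to vanish.
dims = [h1_A, h1_X, h1_rel, h0_A, h0_X, h0_rel]
assert sum((-1) ** i * d for i, d in enumerate(dims)) == 0
print(dims, rank_delta)   # → [0, 1, 2, 2, 1, 0] 1
```

Note the connecting map is genuinely nonzero here: the two components of $A$ merge in $X$, and $\delta$ records exactly that.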

Exactness of the sequence is almost immediate. The bases for $H_k(A)$, $H_k(X)$, and $H_k(X,A)$ factor each space into two subspaces: a piece which is mapped isomorphically onto the next term in the sequence, and a complement consisting of the image of the previous term. So we have actually gotten a bit more than just the long exact sequence: we've produced a decomposition of each term of the sequence. In fact, this expresses $H_k(X)$ as a quotient of $H_k(A)$ plus a subspace of $H_k(X,A)$. We've performed the calculation involved in the spectral sequence of $X$ associated to the two-step filtration $A \subset X$. Without too much more effort, we should be able to extend this to a computation of the spectral sequence for a longer filtration, which I hope to write about soon.
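One way to summarize that decomposition (valid with field coefficients, where the relevant short exact sequence splits) is

```latex
H_k(X) \;\cong\;
\operatorname{coker}\bigl(\delta \colon H_{k+1}(X,A) \to H_k(A)\bigr)
\;\oplus\;
\ker\bigl(\delta \colon H_k(X,A) \to H_{k-1}(A)\bigr),
```

which is precisely the quotient of $H_k(A)$ and the subspace of $H_k(X,A)$ described above.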