Ultrafilters – Interlude: the Stone topology on general Tychonoff spaces

In my previous post, I stuck to the Stone-Čech compactification of discrete spaces, because that was where the nice ultrafilter construction worked. I asked Prof. Calegari whether this was generalizable, and surprisingly, it’s possible to generalize this to all Tychonoff spaces (precisely the ones for which {\beta} yields an embedding), by taking a slightly different class of filter. I’d like to present this construction briefly.

I. Too many filters

Let’s get motivation from seeing how the ultrafilter construction of {\beta X} from last post fails. For starters, take a simple example of a non-discrete space – in fact, take {X} to be already compact Hausdorff. When we compactify, it’s clear we must get the same space back. Hence every ultrafilter would need to correspond to a point of {X}, in the sense of the canonical embedding {X\hookrightarrow \beta X}.

But it’s obvious that we have far, far too many ultrafilters for that: all the principal ultrafilters clearly correspond to their generators, and then we have the immeasurably more numerous nonprincipal ultrafilters left over.

We can say more: given any nonprincipal ultrafilter {\mathcal{F}}, it of course converges to a single point {x\in X}, by compact Hausdorffness. Hence its image under any continuous {f:X\rightarrow Y} into a compact Hausdorff space is an ultrafilter on {Y} which then must converge to {f(x)} by continuity. Now whenever {X} is infinite, some point of {X} has more than one ultrafilter converging to it – every nonprincipal ultrafilter converges somewhere, alongside the principal ultrafilter at that point – so distinct ultrafilters in our attempted “{\beta X}” have identical behavior on every continuous function. Since the behavior of continuous functions on “{\beta X}” is still entirely specified by their behavior on {X}, we then find that many ultrafilters cannot even be separated by continuous functions, so “{\beta X}” is not even completely Hausdorff, let alone compact Hausdorff.

The problem is that our “take all ultrafilters” construction never uses any data about the topology on {X}, essentially treating it as discrete. Any function on {X} under the discrete topology is continuous, so we are able to separate many more ultrafilters than when we have to restrict {f} via continuity.

So we have too many ultrafilters. How do we know how to cut them down? Well, if two of them converge to the same point, we should probably make them equivalent. To see how to do this, look back at the linked proof by Qiaochu that ultrafilters on compact spaces converge to at least one point. To summarize, it’s really a translation of the finite intersection property definition of compactness: the closures of the sets of an ultrafilter satisfy the finite intersection property, so their intersection is nonempty, and it’s not difficult to show that the ultrafilter converges to any point in that intersection – exactly one, since we’re working in a purely Hausdorff setting. Specifically, any open neighborhood of such a point must be in the ultrafilter, since its complement couldn’t be: the complement is a closed set missing the point, while the point lies in the closure of every set of the ultrafilter.

It thus occurs to us: what if our ultrafilters were somehow defined only on the closed sets to begin with? Let’s suppose we had an ‘ultrafilter’ {\mathcal{F}} on the closed sets of {X}, whatever this means. The closed sets still form a lattice, so the filter definitions still work: intersections are in the filter, the filter is upwards closed. (Consult the definition if unfamiliar; here ‘join’ is union and ‘meet’ is intersection.)

But now the ‘ultra’ condition of either {A} or {X\setminus A} being in it doesn’t make sense, as complements of closed sets are not closed in general. Indeed, this notion of ‘ultra’ for filters only applies to complemented bounded lattices, or Boolean algebras (wiki), which will be important in the next post. For now, however, we just observe that while we cannot have ultrafilters, we can still have maximal filters, in the sense that there is no proper filter {\mathcal{F}'} such that {\mathcal{F}\subset \mathcal{F}'} strictly. That such filters exist is clear from Zorn’s lemma.
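To make the definitions concrete, here is a brute-force sketch – the space, the closure system, and all the names are a toy of my own choosing – that enumerates all proper filters on a small closed-set lattice and picks out the maximal ones:

```python
from itertools import combinations

# Toy closure system on the three-point set {0, 1, 2}: a hand-picked family
# of "closed" sets, closed under finite unions and intersections.
closed = [frozenset(s) for s in [(), (0,), (1,), (0, 1), (0, 1, 2)]]
for a in closed:
    for b in closed:
        assert a | b in closed and a & b in closed   # lattice sanity check

def is_proper_filter(F):
    """Filter axioms on the closed-set lattice: nonempty, omits the empty
    set, closed under intersection (meet), upward closed in the lattice."""
    if not F or frozenset() in F:
        return False
    return (all(a & b in F for a in F for b in F)
            and all(c in F for a in F for c in closed if a <= c))

candidates = [c for c in closed if c]
filters = [frozenset(S) for r in range(1, len(candidates) + 1)
           for S in combinations(candidates, r)
           if is_proper_filter(set(S))]
maximal = [F for F in filters if not any(F < G for G in filters)]

# The two maximal filters are the principal filters at the closed points
# 0 and 1; the point 2 has no closed singleton here (the toy isn't Hausdorff).
assert len(maximal) == 2
assert all(any(len(a) == 1 for a in F) for F in maximal)
```

Note that the point 2, which is not closed in this toy, does not get a maximal filter of its own – a finite hint of why non-Hausdorff behavior collapses the correspondence.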

Proceeding as in the original case, we observe that by the finite intersection argument, there must be at least one point {x} contained in every set of {\mathcal{F}}. Every closed set containing {x} must then be in the filter by maximality, since adjoining it keeps all finite intersections nonempty. In particular the singleton {\{x\}} is in the filter, since points are closed, so {x} must be unique, and we have our desired correspondence:

Proposition. Every maximal filter on the lattice of closed sets of a compact Hausdorff space is principal.

We will just refer to these as “maximal filters” from here on for brevity.

II. From closed to zero

So this is promising. But of course it doesn’t mean anything yet. We haven’t checked whether this construction works on non-compact spaces, what the topology is, or how functions extend.

We’ll start with the last one. Let’s drop the compactness on {X}, and consider {f:X\rightarrow [0,1]}. How do we extend this to the set of maximal filters?

As before, a maximal filter converges to {x} if every neighborhood of {x} contains a filter element. Again, it is clear by the Hausdorff condition that every filter converges to at most one point. In fact, more is true.

Proposition. For every point {x\in X}, precisely one maximal filter converges to it: the principal filter generated by that point.

Proof. It of course suffices to prove that every maximal filter converging to {x} must be a subfilter of the principal filter at {x} (maximality then forces equality), or equivalently that every set in such a filter must contain {x}. Indeed, suppose we had some {A\in \mathcal{F}} which does not contain {x}; then {x} has a neighborhood disjoint from {A}, since {A} is closed. By convergence, this neighborhood contains another set of {\mathcal{F}} disjoint from {A}, contradicting the filter property. {\square}

So we have our one-to-one correspondence between points of {X} and filters converging to them.

The preimage of a closed set under a continuous function is closed, and hence given {f:X\rightarrow Y} we can define the image filter of a maximal filter {\mathcal{C}} on {X} in the normal way – for closed {A\subset Y}, {A\in f(\mathcal{C})} if and only if {f^{-1}(A)\in \mathcal{C}}. One would like the image of a maximal filter to be maximal on the lattice of closed subsets of {Y}, but some care is needed: a set {B} may be addable to {f(\mathcal{C})} without {f^{-1}(B)} being addable to {\mathcal{C}}, so maximality can fail to transfer. What does transfer is primality: {A\cup B\in f(\mathcal{C})} iff {f^{-1}(A)\cup f^{-1}(B)\in\mathcal{C}}, and maximal filters are prime (as we note below), so the image filter is a prime filter on the closed lattice of {Y}. For our purposes this is just as good: a prime closed filter on a compact Hausdorff space still converges to exactly one point, by the finite intersection property together with the separation argument we will use for normal spaces later.
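Here is a finite sketch of the pushforward, with everything hand-picked by me: {X=\{0,1\}} discrete, {Y=\{0,1,2\}} with a small closure system, and the map {f(0)=0}, {f(1)=2} (continuous, since {X} is discrete). We push the principal maximal filter at 1 forward and check the filter axioms and primality by brute force.

```python
# Toy closure systems; every subset of the discrete X is closed.
closed_X = [frozenset(s) for s in [(), (0,), (1,), (0, 1)]]
closed_Y = [frozenset(s) for s in [(), (0,), (1,), (0, 1), (0, 1, 2)]]
f = {0: 0, 1: 2}

def preimage(B):
    return frozenset(x for x in (0, 1) if f[x] in B)

def is_filter(F, lattice):
    """Proper filter: nonempty, no empty set, meet-closed, upward closed."""
    if not F or frozenset() in F:
        return False
    return (all(a & b in F for a in F for b in F)
            and all(c in F for a in F for c in lattice if a <= c))

C = {A for A in closed_X if 1 in A}        # maximal: principal at the point 1
assert is_filter(C, closed_X)

fC = {B for B in closed_Y if preimage(B) in C}   # B ∈ f(C) iff f⁻¹(B) ∈ C
assert is_filter(fC, closed_Y)

# Primality transfers: A ∪ B ∈ f(C) forces A ∈ f(C) or B ∈ f(C).
assert all(a | b not in fC or a in fC or b in fC
           for a in closed_Y for b in closed_Y)
```

In this toy the image filter is the single set {\{0,1,2\}} – a proper, prime filter on the closed lattice of {Y}.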

Let us use the notation {\alpha X} for the set of maximal filters of {X}. (Why are we not simply using {\beta X}? Foreshadowing!) So now for any {f:X\rightarrow Y} with {Y} compact Hausdorff, we can indeed extend to {\alpha f:\alpha X\rightarrow Y}, sending each maximal filter {\mathcal{C}} to the unique limit of its image filter {f(\mathcal{C})}.

To rigorously show that this extension is unique, we need to find the “right” topology on {\alpha X} – it is sufficient to find a topology that makes it a compact Hausdorff space with the image of {X} dense in it such that this extension is continuous, since continuous maps into a Hausdorff space that agree on a dense subspace agree everywhere, so {f} certainly uniquely determines {\alpha f}.

Taking the Stone topology on ultrafilters as a model, let’s focus on that last bit: how do we make sure nonprincipal maximal filters are limit points of principal maximal filters?

Naively, we take as a base for closed sets {\widehat{A}=\{\mathcal{C}\mid A\in \mathcal{C}\}} for closed sets {A\subset X}. We have {\widehat{A\cap B} = \widehat{A}\cap \widehat{B}}. In fact the corresponding identity {\widehat{A\cup B}=\widehat{A}\cup\widehat{B}} holds as well, since maximal filters are prime on general distributive lattices, not just Boolean algebras. However, we clearly don’t have complements anymore, which makes sense, since the compactification of a non-discrete space is not going to be overflowing with clopen sets.
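A quick finite spot check of both identities, using a hand-picked closure system whose only maximal filters are the principal filters at its two closed points (one can verify this by hand; the example and all names are mine):

```python
# Toy closure system on {0, 1, 2}; its maximal filters are the principal
# filters at the closed points 0 and 1.
closed = [frozenset(s) for s in [(), (0,), (1,), (0, 1), (0, 1, 2)]]
maximal = [frozenset(A for A in closed if 0 in A),
           frozenset(A for A in closed if 1 in A)]

def hat(A):
    """The basic closed set  ̂A = {maximal filters containing A}."""
    return frozenset(F for F in maximal if A in F)

for A in closed:
    for B in closed:
        assert hat(A & B) == hat(A) & hat(B)   # holds for any filter
        assert hat(A | B) == hat(A) | hat(B)   # uses primality of maximal filters
```

The intersection identity only uses the filter axioms; the union identity is exactly primality, checked here exhaustively over the toy lattice.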

Proposition. The image of {X} (the principal maximal filters) is dense in {\alpha X} under this topology, and the induced topology is the topology of {X}. (The map {X\hookrightarrow \alpha X} is a homeomorphism onto a dense subspace.)

Proof. Certainly it is dense: a nonempty open set disjoint from the principal maximal filters would give a proper closed set containing all of them, and hence (as closed sets are intersections of basis elements) a basic closed set {\widehat{A}\neq \alpha X} containing all of them. But {\widehat{A}} contains every principal filter precisely when {A} contains every point of {X}, forcing {A=X} and {\widehat{A}=\alpha X}, a contradiction.

That the induced topology is the same is fairly obvious from the construction of the base of closed sets. {\square}

To check that it is a compactification, we just need it to be compact (or if you prefer, quasicompact) and Hausdorff. The former is easy to check using the closed-set formulation of compactness: every collection of basic closed sets with the finite intersection property must have nonempty intersection.

Proposition. Under this topology, {\alpha X} is quasicompact.

Proof. We show that for any collection of closed basis elements {\{\widehat{A_i}\}_{i\in I}} satisfying the finite intersection property, their intersection is nonempty. That is, if every finite subcollection of {\{A_i\}} has a maximal filter containing all of them, the entire collection has such a maximal filter. But this is obvious: every finite intersection of the {\{A_i\}} is nonempty (a proper filter cannot contain the empty set), hence they generate a filter on the closed lattice, which then extends in the usual way (something something axiom of choice; we’ll take all the theory behind lattices and filters for granted, as we have been) to a maximal filter. So just as in the original discrete case, a filter construction “translates” to topological compactness nicely. {\square}
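In finite toy examples the Zorn step can be replaced by a greedy loop; here is a sketch (discrete toy space, helper names of my own invention) of a family with the finite intersection property being extended to a maximal filter:

```python
from itertools import chain, combinations

# Discrete toy space: every subset of {0, 1, 2} is closed.
X = (0, 1, 2)
closed = [frozenset(s) for s in chain.from_iterable(
    combinations(X, r) for r in range(len(X) + 1))]

def generated_filter(seeds):
    """Close the seeds under intersection, then take the upward closure."""
    meets = set(seeds)
    while True:
        new = {a & b for a in meets for b in meets} - meets
        if not new:
            break
        meets |= new
    return {c for c in closed if any(m <= c for m in meets)}

def extend_to_maximal(F):
    """Greedy finite stand-in for Zorn's lemma: adjoin any closed set
    that keeps the generated filter proper (the empty set stays out)."""
    F = set(F)
    for c in (c for c in closed if c):
        G = generated_filter(F | {c})
        if frozenset() not in G:
            F = G
    return F

fip_family = [frozenset({0, 1}), frozenset({0, 2})]   # finite meets nonempty
M = extend_to_maximal(generated_filter(fip_family))
assert all(A in M for A in fip_family)    # the family survives into M
assert frozenset() not in M               # M is still proper
assert min(M, key=len) == frozenset({0})  # and here principal, as expected
```

The greedy loop is exactly where choice enters in the infinite case; finitely, any enumeration order works.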

So it all comes down to the Hausdorff condition now. But the other shoe drops: this isn’t true. (Exercise: explicitly construct a counterexample, since I can’t figure one out.)

It turns out that our construction does work for normal spaces. In sketch: for any two distinct maximal filters {\mathcal{C},\mathcal{D}}, we claim that we can find closed sets {A,B} with {A\cup B=X} and {A\not\in \mathcal{C}}, {B\not\in\mathcal{D}}. Indeed, we can find disjoint closed sets {A'\in \mathcal{C}}, {B'\in\mathcal{D}} by a standard argument. Then by normality, we have disjoint open sets {U,V} separating {A'} and {B'}; let {A=X\setminus U}, {B=X\setminus V}. It’s not hard to show that these satisfy our conditions. Now every maximal filter {\mathcal{F}\in \alpha X} satisfies {\mathcal{F}\in \widehat{X} = \widehat{A\cup B} = \widehat{A} \cup \widehat{B}}, so the complements of {\widehat{A}} and {\widehat{B}} are disjoint open sets containing {\mathcal{C}} and {\mathcal{D}} respectively.

The last part is to check that functions to compact Hausdorff spaces extend uniquely, but this is almost trivial – we just need to check that they extend continuously at all, and we have a canonical way of doing so that we can easily show is continuous, in almost perfect analogy with the discrete case. So when {X} is normal, we have constructed our compactification.
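The separation recipe can be brute-force checked in a finite discrete toy, where normality is automatic and the maximal filters are the principal ones (the example and all names are mine):

```python
from itertools import chain, combinations

# Finite discrete toy: every subset of X is closed, maximal filters are
# the principal filters at the three points.
X = (0, 1, 2)
closed = [frozenset(s) for s in chain.from_iterable(
    combinations(X, r) for r in range(len(X) + 1))]
maximal = [frozenset(A for A in closed if x in A) for x in X]

def hat(A):
    """The basic closed set  ̂A = {maximal filters containing A}."""
    return frozenset(F for F in maximal if A in F)

for C in maximal:
    for D in maximal:
        if C != D:
            # Find closed A, B with A ∪ B = X, A ∉ C, B ∉ D.
            pairs = [(A, B) for A in closed for B in closed
                     if A | B == frozenset(X) and A not in C and B not in D]
            assert pairs                                    # a separating pair exists
            A, B = pairs[0]
            assert hat(A) | hat(B) == frozenset(maximal)    # complements are disjoint
            assert C not in hat(A) and D not in hat(B)      # and they separate C, D
```

Since {\widehat{A}\cup\widehat{B}} covers all of the toy "{\alpha X}", the complements of {\widehat{A}} and {\widehat{B}} are disjoint open sets around the two filters, exactly as in the argument above.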

For general Tychonoff spaces, we need a space with slightly fewer points, generated by the maximal filters on a coarser lattice. (Coarser lattices have fewer maximal filters, intuitively.) This turns out to be the lattice of zero sets, sets of the form {f^{-1}(0)} for some continuous {f:X\rightarrow \mathbb{R}}. Every zero set is clearly a closed set, but not conversely in general. The two concepts coincide for perfectly normal spaces (though seemingly there is enough similarity for normal spaces to work with the closed lattice).

It’s easy to see that zero sets form a distributive lattice, so we can apply all our constructions from above to obtain {\beta X} as the set of maximal filters on the zero lattice, with the same closed base {\widehat{A}}, now for zero sets {A\subset X}.
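The lattice structure comes from the identities {Z(f)\cup Z(g)=Z(fg)} and {Z(f)\cap Z(g)=Z(f^2+g^2)}; here is a numeric spot check on a sample grid, with arbitrarily chosen {f} and {g} whose zeros land exactly on grid points (so the float comparison is safe in this toy):

```python
# Sample continuous functions on the real line, chosen for illustration.
def f(x): return x * (x - 2.0)   # zeros at 0 and 2
def g(x): return x - 2.0         # zero at 2

grid = [i / 4.0 for i in range(-20, 21)]   # sample points in [-5, 5]

def Z(h):
    """Zero set of h restricted to the sample grid."""
    return {x for x in grid if h(x) == 0.0}

assert Z(f) | Z(g) == Z(lambda x: f(x) * g(x))            # union = Z(fg)
assert Z(f) & Z(g) == Z(lambda x: f(x) ** 2 + g(x) ** 2)  # intersection = Z(f²+g²)
```

Distributivity is then inherited from ordinary unions and intersections of sets.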

Proposition. {\widehat{A\cup B}=\widehat{A}\cup\widehat{B}}, {\widehat{A\cap B} = \widehat{A}\cap \widehat{B}}.

Proposition. Every maximal zero filter on a compact Hausdorff space is principal.

Defining convergence in the same way as before, we find:

Proposition. In any Hausdorff space, the principal maximal zero filter at a point is the unique such filter converging to it.

Proposition. The map {X\hookrightarrow \beta X} using the canonical identification with principal maximal zero filters is a homeomorphism onto its image, which is dense in {\beta X}.

Proposition. {\beta X} is compact Hausdorff.

Proposition. {\beta X} satisfies the universal property of the Stone-Čech compactification.

To avoid tedium, these are left as exercises for the interested reader. Most of them follow simply from the fact that the zero sets are a coarsification of the closed sets.

We will only prove Hausdorffness, since this is where we failed last. By analogy with our construction for {\alpha X}, it suffices to show that disjoint zero sets can be separated by cozero sets; since cozero sets form a base for the topology of a Tychonoff space, we can in fact simply separate them by open sets. Indeed, given disjoint zero sets {f^{-1}(0)} and {g^{-1}(0)} for continuous {f,g:X\rightarrow \mathbb{R}}, let

\displaystyle h(x)=\frac{f^2(g^2+1)}{\max\{f^2,g^2\}}.

This is continuous, since the denominator never vanishes when the zero sets are disjoint, and {h^{-1}(0)=f^{-1}(0)} while {g^{-1}(0)\subset h^{-1}(1)}. Then {h^{-1}((-1/3,1/3))} and {h^{-1}((2/3,4/3))} are disjoint open sets which separate the zero sets, as desired.
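To see the algebra at work, here is a numeric sanity check with hypothetical sample functions {f(x)=x} and {g(x)=x-1} on the real line, whose zero sets {\{0\}} and {\{1\}} are disjoint:

```python
# Sample functions with disjoint zero sets {0} and {1}.
def f(x): return x
def g(x): return x - 1.0

def h(x):
    fx2, gx2 = f(x) ** 2, g(x) ** 2
    # Denominator is positive everywhere: the zero sets are disjoint.
    return fx2 * (gx2 + 1.0) / max(fx2, gx2)

assert h(0.0) == 0.0    # h vanishes exactly on Z(f)
assert h(1.0) == 1.0    # h is identically 1 on Z(g)
# h⁻¹((-1/3, 1/3)) and h⁻¹((2/3, 4/3)) are then disjoint open separators.
for x in [i / 100.0 for i in range(-200, 301)]:
    assert (h(x) == 0.0) == (f(x) == 0.0)
```

Note that {h} is not bounded by 1 away from the zero sets (e.g. {h(1/2)=5/4}), which is harmless: only the preimages of the two disjoint intervals matter.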

Exercise. Find a natural way to correspond the maximal zero filter construction with the {\prod [0,1]} construction from last post (because I sure can’t, and it seems like there should be one).

We didn’t present much in the way of motivation or intuition for using zero sets (compared to closed sets); for more on this subject, Gillman and Jerison’s Rings of continuous functions is a good resource which I mean to check out soon. There, the relationship of the Stone-Čech compactification to the ring {C(X)} of continuous functions is emphasized, something we didn’t touch on. Probably the subject of a future post.