Posted by: yanzhang | March 15, 2011

Why Hopf Algebras

I really should just start a series called “Yan finally learns simple math” because it takes me ridiculously long to stumble upon ideas that everyone else already knows. Anyway, I have finally convinced myself that I might want to care about Hopf algebras, so maybe this could help someone else.

It is annoying that this realization came so late. I think I’ve asked this question at least five times, each time getting an answer full of words I didn’t understand. This would be okay, except there’s an answer that I think even an undergraduate can appreciate, so I will stick to it until I become fancy enough to appreciate a “higher” reasoning. Consider some k-algebra A, and let E be an A-module. When we tensor A-modules it seems obvious that we want A to act on them diagonally, so consider E \otimes_k E. An element a \in A “naturally” acts on this by the diagonal action a(e \otimes f) = ae \otimes af.

This is one of those lies that I, being careless, would be willing to buy without thinking much. The source of the lie is that if we were just multiplying things this would be a perfectly good group homomorphism, but once we allow addition we clearly don’t have enough structure: (a+b)(e \otimes f) = (a+b)e \otimes (a+b)f, which gives us 4 terms; we obviously want this to equal a(e \otimes f) + b(e \otimes f), and we are stuck with two cross-terms that don’t cancel.

So, we’ve concluded that we can’t do this in general. What happens is that E \otimes E is not naturally an A-module; what it is is an A \otimes A-module, in which case we can easily check that the obvious action (a \otimes b)(e \otimes f) = ae \otimes bf works perfectly.
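The failure is easy to see numerically. Here is a quick sketch with matrices standing in for A and the Kronecker product standing in for \otimes (all names are my own): the diagonal assignment x \mapsto x \otimes x is not additive, and the discrepancy is exactly the two cross-terms.

```python
import numpy as np

# Two elements of A, modeled as 2x2 matrices acting on E = R^2.
a = np.array([[1.0, 2.0], [0.0, 1.0]])
b = np.array([[0.0, 1.0], [1.0, 0.0]])

# The "diagonal action" of a single element x on E (x) E is kron(x, x).
diag = lambda x: np.kron(x, x)

# Additivity fails: kron(a+b, a+b) expands into four terms, and the two
# cross-terms kron(a, b) + kron(b, a) do not cancel.
lhs = diag(a + b)
rhs = diag(a) + diag(b)
cross = np.kron(a, b) + np.kron(b, a)
```

So `lhs` differs from `rhs` by exactly `cross`, confirming that x \mapsto x \otimes x is multiplicative but not additive.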

Well, that would be the end of the story, except that some algebraic structures that come up often enough do seem to have this extra power. To be precise, let’s look at the group algebra k[G] of some group G. Here, for every group element g \in G, I just let g(e \otimes f) = ge \otimes gf be the diagonal action, which we can then extend linearly to a k[G]-action. Now, this is completely well-defined and quite useful. At first I stared at this for a while wondering why it works, because it seems to be exactly the thing we wanted to do earlier that didn’t work. Of course, a little more staring gives the answer – here we had more structure underneath. In particular, we had a group structure that we used first and then linearized afterwards, a luxury we did not have before. Of course, when we study group rings we’re really studying the representations of these groups, so Hopf algebras naturally come up in representation theory.
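Here is a toy Python sketch of this (Z/2 as permutation matrices; the encoding of k[G] as coefficient dicts, and all names, are my own, not standard): the action is defined on group elements first and only then extended linearly, so additivity holds by construction.

```python
import numpy as np

# Z/2 = {e, s} represented by 2x2 permutation matrices.
rho = {"e": np.eye(2), "s": np.array([[0.0, 1.0], [1.0, 0.0]])}

# An element of k[G] is a dict of coefficients {g: c_g}.
# Its action on E (x) E is defined on GROUP elements g as kron(rho(g), rho(g)),
# and only then extended linearly over the coefficients.
def act(x):
    return sum(c * np.kron(rho[g], rho[g]) for g, c in x.items())

# Addition in k[G], coefficient by coefficient.
def add(x, y):
    return {g: x.get(g, 0.0) + y.get(g, 0.0) for g in set(x) | set(y)}

x = {"e": 2.0, "s": -1.0}
y = {"s": 3.0}
```

Because linearity was imposed after the group-level definition, act(x + y) = act(x) + act(y) automatically; there are no cross-terms to worry about.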

So, thinking a bit more like a modern mathematician – what we really have here is a nontrivial map A \rightarrow A \otimes A, and our natural A-action really came from such a map. This map runs in the opposite direction from the usual multiplication, which is a map A \otimes A \rightarrow A. Put that way, it is retroactively intuitive that this operation, the coproduct, is the extra structure we have in these situations. The Hopf algebra is just the formalism that captures them.
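In the same toy setup (Z/2 as permutation matrices; my own non-standard encoding of k[G] as coefficient dicts), a hypothetical sketch of this factoring: the coproduct \Delta(g) = g \otimes g sends k[G] into k[G] \otimes k[G], and the diagonal action is just the natural A \otimes A-action precomposed with \Delta.

```python
import numpy as np

# Z/2 = {e, s} as permutation matrices; k[G] elements as coefficient dicts.
rho = {"e": np.eye(2), "s": np.array([[0.0, 1.0], [1.0, 0.0]])}

# Coproduct on k[G]: Delta(g) = g (x) g on basis elements, extended linearly.
def delta(x):
    return {(g, g): c for g, c in x.items()}

# The natural A (x) A-action on E (x) E: (a (x) b)(e (x) f) = ae (x) bf.
def act_AA(xx):
    return sum(c * np.kron(rho[g], rho[h]) for (g, h), c in xx.items())

# The diagonal k[G]-action from before, defined directly on group elements.
def act_diag(x):
    return sum(c * np.kron(rho[g], rho[g]) for g, c in x.items())

x = {"e": 2.0, "s": -1.0}
```

On this example act_diag(x) and act_AA(delta(x)) agree, which is exactly the statement that the A-action comes from the A \otimes A-action via the coproduct.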

A similar situation appears in algebraic topology. Why is cohomology nicer than homology? Well, for starters we have a nice product, the cup product, which makes the cohomology ring a… ring. When our space is a Lie group G, the group multiplication G \times G \rightarrow G induces a map H^*(G) \rightarrow H^*(G \times G) \cong H^*(G) \otimes H^*(G) (with field coefficients, by the Künneth theorem), i.e. a coproduct on the cohomology ring, making it a Hopf algebra as well.

This explains why a Hopf algebra may be a nice definition to have for labeling things. Now, I’m still not sure whether they’re immediately useful for elucidating concepts or for doing specific things in combinatorics, but I’ll keep my mind open and see. Right now I’m just confused about why I’ve never managed to understand this before, especially given that the Wikipedia article seems to cover most of what I said; I’ve concluded it may just be because I was too mathematically immature at the time to follow the reasoning of the explainer (and I’m including Wikipedia as a possible explainer), in which case I apologize retroactively to the explainer.

Many thanks to Henry Cohn’s “Quantum Groups” article for making this click in my head, and thanks to Tiankai Liu for figuring this one out with me.




  1. Yes! And you can extend this a bit: the counit gives you a trivial representation (1-dimensional representation over the commutative ground ring) and the antipode gives you a dual representation.

    The structure of a Hopf algebra is what separates group algebras from any old algebras. In other words, it’s why group representations are so special.

  2. Note that the 1-dimensional representations of a Hopf algebra form a group. This is a basic reason Hopf algebras show up in algebraic geometry, since the ring of functions on an algebraic group naturally forms a Hopf algebra.

  3. You don’t need a full-blown Hopf algebra for tensor products to make sense; technically you just need a bialgebra structure. The antipode is needed to make sense of dual representations.

    If you want to see Hopf algebras in combinatorics, take a look at Rota’s work…

  4. @ssam: yes, but it was Rota himself who said (I can’t find the quote) that the Hopf algebra formalism did nothing new in his own work – it just made how he thought about things more rigorous. If I remember right, he explicitly said something like this in a paper late in his career (or it could have been one of his books), so I’ll assume it was a philosophy he didn’t regret having. If this is a bad interpretation, please correct me.

    That said, I think there have been some recent (as in the last 3 years) results where the speaker said the use of Hopf algebras was essential and the shortest way to the truth. Someone at FPSAC mentioned this earlier this year. I really should look more into those.

  5. […] the longer posts, Concrete Nonsense shared some insight into Hopf algebras and Gaussianos had a great interview with Javier Cilleruelo about Sidon sets (translation). On a […]

  6. The really cool thing from my point of view is the (deep?) connection with topology. You can form a category C out of pictures of knots and tangles, and it turns out that this category has many of the same formal structures as the category of modules over a Hopf algebra (for example, the tensor product that you’re talking about, duals, and an extra piece of structure analogous to the swap map t: A X B -> B X A ). But what’s amazing is that the category C is the “free” category with these properties: just as for other “free” things, this means that for any object A in an appropriate category of modules, the smallest subcategory containing it (and all its tensor products, duals, etc) is just a copy of the topological category C! So this is a recipe for getting lots of knot invariants, by sticking C inside categories of modules.

    Maybe I should do a post on this to connect it all up?

  7. […] Yan Zhang: Why Hopf Algebras […]

  8. Did you mean “our natural A-action really came from such a map” rather than “our natural (A tensor A)-action”? As you mentioned before, you naturally get an (A tensor A)-module structure, so if you have a map A -> A tensor A, then you can precompose by this, and obtain an A-module structure?

  9. @mlbaker: wow! Blast from the past! At first glance it looks like you are right, so I will just change it.
