Hey everyone, I’m back from summer vacation along with Concrete Nonsense, and I hope you all had fun summers too.
As a horrible combinatorialist, I’m fortunate to have finally learned what a Wilf-Zeilberger (WZ) pair is after all these years of thinking that I knew something about generating functions. This is quite a crime, as I’ve always been a self-proclaimed fan of the Z in question, Doron Zeilberger, for both his ideas (which are radical and thought-provoking for everyone involved in any sort of scientific theory) and his courage to be a mathematician with an actual personality.
However, my goal in this post is not to just give an exposition; there are many better sources for that, including the W and/or Z themselves (see here or here). What I really wanted to share is an aha-moment I had while learning the foundations of WZ-theory, which will hopefully help newcomers to absorb the big idea faster and may give people already familiar with WZ-pairs another perspective. I do not claim that this observation is groundbreaking (to some experts it may seem trivial, which may explain why none of the sources I’ve read, at least at first glance, mentioned this connection), but I think it definitely helped me grok the basics of WZ much faster than I would have otherwise (while it may be true that all the readers will naturally come up with the analogy on their own when learning what a WZ-pair is, I can speak for myself that it was definitely plausible that I’d have read and understood the tool without having had this insight at all, so I’m very glad to be living in this particular universe!).
In fact, for those who already know what WZ-theory is, here is a better exercise before the jump:
Predict my post (it really isn’t that hard given the title).
So what is WZ-theory, in its most basic form? It is a tool to do what other mathematicians think that combinatorialists do all day, which is to prove sum identities. Most of these look something like $\sum_k F(n,k) = f(n)$, from easier ones (like the binomial theorem) to ridiculous ones; you know, those that look like egg cartons when you squint. They usually have lots of binomial coefficients and powers dancing around, and the old school did them one by one by hand for quite a long time. One wonders why humans would really want to keep doing this by hand, which is probably why WZ-theory was developed to outsource this labor to robots instead: it turns out that when the summand $F(n,k)$ is well-behaved (the buzzword is “hypergeometric series”), we can verify these sums easily via a computer (in fact, we can even find the sum given only the LHS with Gosper’s Algorithm). This frees up many combinatorialist-hours to do more interesting work, like counting trees or drawing graphs.
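For a sense of the shape of these identities (my illustrative picks, not the post’s original examples), compare an easy one with a slightly more egg-carton-flavored one:

$$\sum_k \binom{n}{k} = 2^n, \qquad \sum_k \binom{n}{k}^2 = \binom{2n}{n}.$$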
The premise can be explained very simply: suppose we want to prove $\sum_k F(n,k) = f(n)$. This is obviously equivalent to proving that $\sum_k \tilde{F}(n,k) = 1$ for a different $\tilde{F}$ (namely $\tilde{F}(n,k) = F(n,k)/f(n)$), so it suffices to prove that a sum is constant. This is equivalent to the following pair of conditions:
- a boundary condition, namely that this sum works for some particular $n$ (say $n = 0$);
- an “inductive” condition, namely that $\sum_k F(n+1,k) = \sum_k F(n,k)$ for all $n$.
The hard part is usually the latter. So when do we get something like this? Well, what if we have some magical $G(n,k)$ with the properties:
- $F(n+1,k) - F(n,k) = G(n,k+1) - G(n,k)$,
- $G(n,k) \to 0$ as $k \to \pm\infty$?
Then $\sum_k F(n+1,k) - \sum_k F(n,k)$ just becomes $\sum_k \left[ G(n,k+1) - G(n,k) \right]$, which telescopes to zero by the second condition. The pair $(F, G)$ is then called a “WZ-pair,” and there are very well-developed techniques (and even code) to discover them when they exist, confirm that they don’t for a particular sum, or check that a supplied pair is legitimate. I won’t go into the theory further for this introduction.
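To make this concrete, here is a minimal sanity check, assuming the textbook pair certifying $\sum_k \binom{n}{k} = 2^n$: take $F(n,k) = \binom{n}{k}/2^n$ and $G(n,k) = -\binom{n}{k-1}/2^{n+1}$ (this particular example is my choice, not one from the post):

```python
# Verify a WZ pair with exact rational arithmetic.
# F(n,k) = C(n,k)/2^n certifies sum_k C(n,k) = 2^n; the certificate
# G(n,k) = -C(n,k-1)/2^(n+1) is a standard textbook example.
from fractions import Fraction
from math import comb

def C(n, k):
    # Binomial coefficient, extended by 0 outside 0 <= k <= n.
    return comb(n, k) if 0 <= k <= n else 0

def F(n, k):
    return Fraction(C(n, k), 2**n)

def G(n, k):
    return Fraction(-C(n, k - 1), 2**(n + 1))

# WZ condition: F(n+1,k) - F(n,k) == G(n,k+1) - G(n,k) on a test grid.
for n in range(20):
    for k in range(-5, 30):
        assert F(n + 1, k) - F(n, k) == G(n, k + 1) - G(n, k)

# G vanishes for k outside [1, n+1], so the sum over k telescopes to
# zero and sum_k F(n,k) is constant in n; pin the constant at n = 0.
assert sum(F(0, k) for k in range(-5, 10)) == 1
print("WZ pair verified on the test grid")
```

Exact rationals keep the check honest on the grid; in practice one would let a computer algebra system verify (or discover) the certificate symbolically, for all $n$ and $k$ at once.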
Instead, I’ve heavily loaded this presentation to remind you of calculus. Consider the following problem: suppose you wanted to prove some $\int_{-\infty}^{\infty} f(x,t)\,dx = g(t)$ (or, equally strong, $\int_{-\infty}^{\infty} \tilde{f}(x,t)\,dx = 1$ for a modified $\tilde{f}$). This is obviously equivalent to the following conditions:
- a boundary condition, namely that this integral works for some particular $t$;
- a “perturbation” condition, namely that $\frac{d}{dt} \int_{-\infty}^{\infty} f(x,t)\,dx = 0$.
Well, suppose that we had some mythical $h(x,t)$ with the properties:
- $\frac{\partial}{\partial t} f(x,t) = \frac{\partial}{\partial x} h(x,t)$,
- $h(x,t) \to 0$ as $x \to \pm\infty$?
I’ll leave the rest as an exercise.
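(For one concrete instance, with an example of my own choosing: take $f(x,t) = e^{-(x-t)^2}$ and $h(x,t) = -e^{-(x-t)^2}$. Then $\frac{\partial f}{\partial t} = 2(x-t)e^{-(x-t)^2} = \frac{\partial h}{\partial x}$, and $h \to 0$ as $x \to \pm\infty$, so $\int_{-\infty}^{\infty} e^{-(x-t)^2}\,dx$ is constant in $t$; its value $\sqrt{\pi}$ can be read off at $t = 0$.)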
The cute part of it all is that this is really a shadow of a huge analogy between normal (“differential”) and discrete (“difference”) calculus, where some really beautiful math happens. The key insight is that the “differential” operator $\frac{d}{dx}$ does not behave that differently from the “difference” operator $\Delta$, defined by $(\Delta f)(n) = f(n+1) - f(n)$, and as such you can get a lot of mileage with just a few conditions on these operators (for example, there is a completely analogous “Taylor expansion” for discrete sums). I may have more to say about this in the future once I know a bit more about it, but I am happy just to share this with you for now.
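To spell out that last parenthetical with two standard facts: $\Delta$ acts on the falling factorials $n^{\underline{k}} = n(n-1)\cdots(n-k+1)$ exactly as $\frac{d}{dx}$ acts on $x^k$, and for polynomials the discrete “Taylor expansion” is Newton’s forward-difference formula:

$$\Delta\, n^{\underline{k}} = k\, n^{\underline{k-1}}, \qquad f(n) = \sum_{k \ge 0} \binom{n}{k} (\Delta^k f)(0).$$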
-Yan
P.S. Some questions for experts (although this may really just belong on mathoverflow) before I talk more about this stuff without really knowing what I’m doing:
- how far can this analogy go in the theory of differential equations vs. difference equations? This is a vague question, but any answer is welcome. I think what I’m mostly looking for is when the analogy breaks; i.e. some nice / deep theorem in one theory, say an existence theorem, whose analogue is simply not true or has some big obstacle on the other side.
- how far can this analogy go in mathematics in general? Derivatives, integrals, and sums are everywhere. I first got introduced to these concepts while learning Gian-Carlo Rota’s treatment of the classical “umbral calculus,” for example.
- is there a similar analogy in harmonic analysis / representation theory? E.g. “discrete” Fourier transforms (clarification: I don’t think the right analogy is necessarily the well-known “discrete Fourier transform,” because that’s more about the Fourier transform on a discrete (in fact finite) group; in fact I don’t quite know what the right analogy “should” be) or “discrete” spherical harmonics?
There are, of course, two other ways to relate sequences to functions, namely by considering the sequence as the coefficients for either an ordinary or an exponential generating function. But now derivatives act as the shift operator (for the exponential series, or shift and multiply for the ordinary ones).
By: Theo on September 14, 2010 at 3:17 AM
@Theo: of course. As you pointed out, that’s definitely one of the first things to look at if you want an analogy between the two worlds. I guess I take generating functions for granted 😉 If you think of more crazy relations, let me know.
I was thinking more along the lines of say, umbral calculus or Euler-MacLaurin (which actually explicitly contains both a sum and an integral!).
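The first correction terms there make the parallel explicit (standard statement of the formula, included for concreteness):

$$\sum_{n=a}^{b} f(n) = \int_a^b f(x)\,dx + \frac{f(a)+f(b)}{2} + \sum_{k=1}^{m} \frac{B_{2k}}{(2k)!}\left(f^{(2k-1)}(b) - f^{(2k-1)}(a)\right) + R_m.$$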
By: Yan on September 14, 2010 at 3:22 AM
Is this anything like a discrete analogue of differentiating under the integral sign? If not, is there one?
By: Alison on October 31, 2010 at 3:22 AM
The thing that breaks the analogy is the chain rule. There is no simple change of variables for sums. This is the reason that the limiting theorems of calculus are so much prettier.
By: Ron Maimon on April 25, 2011 at 1:20 AM