After accidentally promising to give a talk at the MIT math graduate student seminar, I had been quiet on the Concrete Nonsense front, saving up for a post on umbral calculus. However, a chat today with Steven resurrected some old mental experiments with fuzzy mathematics that I thought might make for good discussion. I want to think of this post as a casual walking conversation, though I do have some specific discussion questions that I’d be very happy to have more experienced people answer.

I’ll bridge into the topic from my favorite Terry Tao buzz, where our hero sketches the analogy “algebra:analysis::closed:open.” Using notions similar to those in the buzz, I feel that one of the annoying things about analysis and basic point-set topology is that the “open” nature of the fields clashes with the “closed” nature of their fundamental building blocks, namely the idea of points in a set. I’ve always wondered what would happen if membership in a set weren’t so strict, because in real life our adjectives and quantifiers aren’t so much “sets” as “descriptions” whose applicability to particular objects is not as well-defined (for example, consider the applicability of “light” or “heavy” to Yan’s weight as we start him as a skeleton and send the weight to infinity). I would not hear the term “fuzzy set” until years later, and even then I didn’t pursue it, as it didn’t really intersect with any of the mathematics I liked. So if any of what I say in this post is obvious to a specialist, please let me know!

In the language of Terry Tao’s buzz, the concept of fuzzy sets kind of makes the definition of points in a set “open,” which we can do by assigning, say, a probability distribution over each point (Question 1: the last time I saw someone do this was in an AI paper, where they just had nice, non-pathological functions instead of distributions. What is the “right” level of generality for this?). Now, even in simple situations we already run into plenty of questions. I don’t know what the “standard” theory does, but there already seem to be several choices each for even the most fundamental topological notions, like open/closed-ness, continuity, and some sort of “fuzzy metric” (Question 2: so what does the “standard” theory do? Is there even just one?).
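To make the graded-membership idea concrete, here is a minimal sketch in the style of Zadeh’s original fuzzy sets, where membership is a degree in [0, 1] rather than a yes/no answer and union/intersection are pointwise max/min. The “light”/“heavy” functions and their thresholds are invented for the Yan example, not taken from any standard theory.

```python
# A minimal sketch of Zadeh-style fuzzy sets: membership is a degree in [0, 1].
# The 'light'/'heavy' membership functions below are illustrative inventions.

def light(weight_kg):
    """Graded membership in 'light': 1 below 50 kg, 0 above 90 kg, linear between."""
    if weight_kg <= 50:
        return 1.0
    if weight_kg >= 90:
        return 0.0
    return (90 - weight_kg) / 40

def heavy(weight_kg):
    """Graded membership in 'heavy', mirroring 'light'."""
    return 1.0 - light(weight_kg)

# Zadeh's standard connectives: union = pointwise max, intersection = pointwise min.
def fuzzy_union(mu, nu):
    return lambda x: max(mu(x), nu(x))

def fuzzy_intersection(mu, nu):
    return lambda x: min(mu(x), nu(x))

# At 70 kg, Yan is half 'light' and half 'heavy'; note that the law of the
# excluded middle fails: 'light and heavy' holds to degree 0.5, not 0.
print(light(70))                             # 0.5
print(fuzzy_union(light, heavy)(70))         # 0.5
print(fuzzy_intersection(light, heavy)(70))  # 0.5 (not 0!)
```

The choice of max/min is itself one of the “several choices” in question: other t-norms (product, Łukasiewicz) give different fuzzy connectives.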

For the sake of my other questions / conjectures, let’s assume we have already settled these in some satisfactory manner. At first thought, it seems such a thing is completely useless because it enlarges our already horribly complicated mathematics with more stuff, not to mention makes computation much more difficult. Thus, I’ll go ahead and make the completely counter-intuitive guess that the biggest gain we can make with this theory is killing pathology.

This seems like a crazy idea, because we keep all the pathological functions in our mathematics alive as special cases, so how can we have less crap to deal with? Well, I’d say with the right definitions, the very methods (metrics, comparisons, etc.) we use to play with these functions will be more “fuzzy” and thus more flexible, so in that sense we might not even have to think about the “bad” functions.

My hypothesis is this: the reason we have books like “Counterexamples in Topology” or “Counterexamples in Analysis” is that the “closed”-ness of the definition of sets makes the topologies too rigid, and thus we have a lot of boundary cases that form our pathology. However, if we build this theory up correctly, maybe the boundaries in our new fuzzy topology would themselves be fuzzy enough to be free of requirements like “we need spaces of such-and-such type” or whatever. For example, a pseudometric may be more natural for our “fuzzy metric,” and all those continuous but nowhere differentiable functions or Devil’s Staircases may just be elements of some equivalence class, from which we can always pick nice representatives.
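Here is a toy illustration of how a pseudometric identifies functions, so that one can quotient by zero-distance pairs and pick a nice representative from each class. The particular pseudometric (difference of average values) is made up purely for the example; it is deliberately coarse and is not a proposed “fuzzy metric.”

```python
# A toy pseudometric on functions [0, 1] -> R: distance is the difference of
# average values, approximated by a midpoint Riemann sum.  Invented for
# illustration only -- the point is that distinct functions can sit at
# distance 0, after which we quotient and choose nice representatives.

def average(f, n=10000):
    """Midpoint Riemann-sum approximation of the integral of f over [0, 1]."""
    return sum(f((i + 0.5) / n) for i in range(n)) / n

def d(f, g):
    return abs(average(f) - average(g))

f = lambda x: x
g = lambda x: 1 - x  # distinct from f, yet both have average value 1/2

print(round(d(f, g), 6))              # 0.0 -- d identifies f and g
print(round(d(f, lambda x: 0.0), 6))  # 0.5 -- but still separates f from 0
```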

In other words, when our lens is fuzzier, we may end up seeing objects as more “blobby” without worrying about their irregularities.

If all this sounds vague and/or hopeless, I wish to point out that we already do “meta-topological” things like this in mathematics (Question 3: are there more examples?). A representative example of our “smoothing objects out” is defining distributions, where we gain the delta function in the closure of our favorite functions. More generally, sheaves allow us to define objects while being “fuzzy” at particular points. I happily just realized today that the distributions form a sheaf. Anyway, it would be pretty cool if, once we get past the high activation barrier of initial definitions, the “open” nature of the fundamental definition of a set would take care of issues by itself with seemingly no work.
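The delta-function example can even be watched numerically: pairing narrowing Gaussian bumps with a test function converges to the value of the test function at 0, which is exactly the sense in which delta lives in the “closure” of nice functions. The parameters (test function, integration range, grid size) below are arbitrary choices for the sketch.

```python
# Numerical illustration of the delta function as a weak limit: the pairing
# <gaussian_eps, phi> tends to phi(0) as eps -> 0.
import math

def gaussian(x, eps):
    """Gaussian mollifier of width eps with unit integral."""
    return math.exp(-x * x / (2 * eps * eps)) / (eps * math.sqrt(2 * math.pi))

def pair(eps, phi, n=200000, half_width=10.0):
    """Midpoint Riemann sum of gaussian(x, eps) * phi(x) over [-hw, hw]."""
    h = 2 * half_width / n
    total = 0.0
    for i in range(n):
        x = -half_width + (i + 0.5) * h
        total += gaussian(x, eps) * phi(x) * h
    return total

phi = lambda x: math.cos(x)  # a smooth test function with phi(0) = 1

for eps in (1.0, 0.1, 0.01):
    print(eps, pair(eps, phi))
# The pairings (~0.607, ~0.995, ~0.99995) approach phi(0) = 1 as eps -> 0;
# in fact for this phi the exact pairing is exp(-eps^2 / 2).
```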

Take care, everyone,

-YZ

Somewhat related: this reminds me of the levels of abstraction in algebraic geometry.

Historically (and usually pedagogically), one first learns about algebraic varieties, which are fairly concrete, but a lot of things are kind of messed up. One example is Bezout’s theorem, that the intersection of two distinct curves of degrees e and f in the projective plane has ef points. This is true for generic choices of curves, but sometimes the curves can intersect tangentially (or worse things can happen). One could either say “this usually works” or use scheme language instead, and then the theorem becomes that the intersection of two distinct curves is a 0-dimensional scheme of multiplicity ef. This “fixes” the assumptions of the theorem at the cost of a weaker conclusion (we might want the ef points to all be distinct).
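The generic-versus-tangent dichotomy is easy to see in coordinates: the conic y = x² (degree 2) meets a line (degree 1) in 2·1 = 2 points counted with multiplicity. A quick sketch, where the helper function is my own invention for the example:

```python
# Bezout count with multiplicity for the conic y = x^2 against the line
# y = m*x + c: substituting gives x^2 - m*x - c = 0, and the multiplicities
# can be read off the discriminant.  (Illustrative helper, not a library API.)
def intersection_multiplicities(m, c):
    """Real roots (with multiplicity) of x^2 - m*x - c = 0."""
    disc = m * m + 4 * c
    if disc > 0:
        r = disc ** 0.5
        return {(m - r) / 2: 1, (m + r) / 2: 1}  # two simple points
    if disc == 0:
        return {m / 2: 2}  # one tangent point of multiplicity 2
    return {}              # intersection points are complex (still 2 over C)

generic = intersection_multiplicities(1, 0)  # line y = x
tangent = intersection_multiplicities(0, 0)  # line y = 0, tangent at origin

print(generic)  # {0.0: 1, 1.0: 1}
print(tangent)  # {0.0: 2}
# Multiplicities sum to ef = 2 in both cases; scheme language is what keeps
# the tangent case from being an "exception" to the theorem.
```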

And then one can move further. For a group G acting on a scheme X (or even just a variety X), things like X/G don’t usually make sense unless we use things like geometric invariant theory to make alternate definitions. For example, if X is affine with coordinate ring A, one could say that X/G is affine with coordinate ring the ring of invariants A^G. But to not lose information, we usually want something like a smash product of A and G (so that modules on X/G are A-modules with a compatible G-action). But this thing will be some weird noncommutative thing, or a stack, or whatever. The point being that in order to do things like taking quotients, we need even more “limit objects,” as you put it.
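For a concrete baby case of the invariant-ring construction: S₂ acts on polynomials in (x, y) by swapping the variables, and the invariant ring is generated by e₁ = x + y and e₂ = xy, so the quotient A²/S₂ is Spec of that invariant ring. A numerical sanity check (the setup is my own toy example):

```python
# Toy check of a ring of invariants: S_2 swaps (x, y); the invariants are
# generated by e1 = x + y and e2 = x*y.  We verify numerically that the
# invariant polynomial x^2 + y^2 is expressible in the generators as
# e1^2 - 2*e2.
import random

p = lambda x, y: x**2 + y**2                           # swap-invariant
in_generators = lambda x, y: (x + y)**2 - 2 * (x * y)  # e1^2 - 2*e2

for _ in range(100):
    a, b = random.random(), random.random()
    assert p(a, b) == p(b, a)                        # invariance under the swap
    assert abs(p(a, b) - in_generators(a, b)) < 1e-9  # agreement with e1^2 - 2*e2

print("x^2 + y^2 = e1^2 - 2*e2 on all samples")
```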

By:

Steven Sam on April 8, 2010 at 3:39 AM

Another way of losing the awkwardness of points in topology is to study it via frames and locales. For example, see Paul Taylor’s work on “Abstract Stone Duality”.

By:

Roger Witte on April 8, 2010 at 12:41 PM

@Roger: ooooh cool. I just glanced at the stuff a bit but I’ll check it out more at the office. Thanks.

Correct me if I’m wrong, but should I also expect this to be a modern algebraization of Birkhoff/Hall-style lattice theory applied to topology? I randomly stumble upon some of that kind of stuff (lattices come up in modern algebraic combinatorics a ton), but I thought they’d mostly fallen out of favor in modern algebra.

By:

yanzhang on April 8, 2010 at 2:23 PM