After accidentally promising to give a talk at the MIT math graduate student seminar, I was quiet on the Concrete Nonsense front, saving up for a post on umbral calculus. However, a chat today with Steven resurrected some old mental experiments with fuzzy mathematics that I thought might make for good discussion. I want to think of this post as a casual walking conversation, though I do have some specific discussion questions that I’d be very happy to have more experienced people answer.
I’ll bridge into the topic from my favorite Terry Tao buzz, where our hero sketches the analogy “algebra:analysis::closed:open.” Using similar notions to the buzz, I feel that one of the annoying things about analysis and basic point-set topology is that the “open” nature of the fields clashes with the “closed” nature of their fundamental building blocks, namely the idea of points in a set. I’ve always wondered what would happen if membership in a set weren’t so strict, because in real life our adjectives and quantifiers aren’t so much “sets” as “descriptions” whose applicability to particular objects is not as well-defined (for example, consider the applicability of “light” or “heavy” to Yan’s weight as we start him as a skeleton and send his weight to infinity). I would not hear the term “fuzzy set” until years later, and even then I didn’t pursue it, as it didn’t really intersect with any of the mathematics I liked. So if any of what I say in this post is obvious to a specialist, please let me know!
In the language of Terry Tao’s buzz, the concept of fuzzy sets makes the definition of points in a set somewhat “open,” which we can do by assigning, say, a probability distribution over each point (Question 1: the last time I saw someone do this was in an AI paper, where they just had nice, non-pathological functions instead of distributions. What is the “right” level of generality for this?). Now, even for something simple, we already run into plenty of questions. I don’t know what the “standard” theory does, but there already seem to be several choices each for even the most fundamental topological notions, like open/closed-ness, continuity, and some sort of “fuzzy metric” (Question 2: so what does the “standard” theory do? Is there even just one?).
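For concreteness, here is a minimal Python sketch of the most common formalization (Zadeh’s): a fuzzy set is just a membership function μ : X → [0, 1], with intersection, union, and complement taken pointwise as min, max, and 1 − μ. The “heavy” sigmoid below is my own hypothetical example (the 90 kg midpoint and steepness are arbitrary choices, not anything from the literature):

```python
import math

# A fuzzy set over the reals is a membership function mu: R -> [0, 1].
# "heavy" as applied to a weight in kilograms -- a hypothetical sigmoid
# whose midpoint (90 kg) and steepness (10 kg) are arbitrary choices.
def heavy(w):
    return 1.0 / (1.0 + math.exp(-(w - 90.0) / 10.0))

# Zadeh's standard connectives: pointwise min, max, and complement.
def f_and(mu, nu):
    return lambda x: min(mu(x), nu(x))

def f_or(mu, nu):
    return lambda x: max(mu(x), nu(x))

def f_not(mu):
    return lambda x: 1.0 - mu(x)

light = f_not(heavy)

# A skeleton barely belongs to "heavy"; as the weight goes to infinity,
# membership tends to 1 -- no sharp cutoff anywhere.
print(heavy(40))                  # tiny membership
print(heavy(200))                 # essentially 1
print(f_and(heavy, light)(90))    # 0.5: maximally ambiguous weight
```

Note that, unlike with ordinary sets, `f_and(heavy, light)` is not empty: at the midpoint a weight is half “heavy” and half “light” at once, which is exactly the kind of non-strict membership described above.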
For the sake of my other questions / conjectures, let’s assume we have already settled these in some satisfactory manner. At first glance, such a thing seems completely useless, because it enlarges our already horribly complicated mathematics with more stuff, not to mention makes computation much more difficult. Thus, I’ll go ahead and make the completely counterintuitive guess that the biggest gain we can make with this theory is killing pathology.
This seems like a crazy idea, because we keep all the pathological functions in our mathematics alive as special cases, so how can we have less crap to deal with? Well, I’d say with the right definitions, the very methods (metrics, comparisons, etc.) we use to play with these functions will be more “fuzzy” and thus more flexible, so in that sense we might not even have to think about the “bad” functions.
My hypothesis is this: the reason we have books like “Counterexamples in Topology” or “Counterexamples in Analysis” is, I believe, that the “closed”-ness of the definition of sets makes the topologies too rigid, and thus we have a lot of boundary cases that form our pathology. However, if we build this theory up correctly, maybe the boundaries in our new fuzzy topology would themselves be fuzzy enough to be free of requirements like “we need spaces of such-and-such type” or whatever. For example, a pseudometric may be more natural for our “fuzzy metric,” and all those continuous-but-nowhere-differentiable functions or Devil’s Staircases may be just elements of some equivalence class, of which we can always pick nice representatives.
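As a toy illustration of why a pseudometric shows up naturally, here is a sketch (my own construction for this post, not from any standard fuzzy-topology text): compare two membership functions by their largest membership gap over a finite sample grid. On the grid this is only a pseudometric, because two genuinely different fuzzy sets can agree at every sample point and come out at distance zero, which is exactly the failure of “identity of indiscernibles” that a pseudometric permits:

```python
# A toy "fuzzy pseudometric": the largest gap in membership between two
# fuzzy sets, measured over a finite sample grid.
def pseudo_dist(mu, nu, grid):
    return max(abs(mu(x) - nu(x)) for x in grid)

grid = [i / 10.0 for i in range(11)]   # samples of [0, 1]

mu = lambda x: x                        # membership grows linearly
nu = lambda x: x * x                    # grows more slowly at first
rho = lambda x: x if x != 0.05 else 1.0 # differs from mu only off the grid

print(pseudo_dist(mu, nu, grid))   # max of x - x^2 on the grid: 0.25
print(pseudo_dist(mu, rho, grid))  # 0.0, even though mu != rho
```

The second distance being zero identifies `mu` and `rho` as the same point of the space, so “distinct” pathological functions could collapse into one equivalence class, from which we pick a nice representative.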
In other words, when our lens is fuzzier, we may end up seeing objects as more “blobby” without worrying about their irregularities.
If all this sounds vague and/or hopeless, I wish to point out that we already do “meta-topological” things like this in mathematics (Question 3: are there more examples?). A representative example of our “smoothing objects out” is defining distributions, where we gain the delta function in the closure of our favorite functions. More generally, sheaves allow us to define objects while being “fuzzy” at particular points. I was happy to realize just today that the distributions form a sheaf. Anyway, it would be pretty cool if, once we get past the high activation barrier of the initial definitions, the “open” nature of the fundamental definition of a set took care of issues by itself with seemingly no work.
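To make the distributions example concrete: the delta function arises as a weak-* limit of perfectly nice bump functions. If $\eta$ is smooth and compactly supported with $\int \eta = 1$, and $\eta_\varepsilon(x) = \frac{1}{\varepsilon}\,\eta\!\left(\frac{x}{\varepsilon}\right)$, then for every test function $\varphi$,

$$\lim_{\varepsilon \to 0} \int_{\mathbb{R}} \eta_\varepsilon(x)\,\varphi(x)\,dx \;=\; \varphi(0) \;=\; \langle \delta, \varphi \rangle,$$

so $\delta$ sits in the closure of the smooth functions inside the space of distributions, even though no honest function realizes it.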
Take care, everyone,