Why Rationalists Shouldn't be Interested in Topos Theory
I spent a lot of the last two years getting really into categorical logic (as in, using category theory to study logic), because I'm really into logic, and category theory seemed to be able to provide cool alternate foundations of mathematics. Turns out it doesn't really. Don't get me wrong, I still think it's interesting and useful, and it did provide me with a very cosmopolitan view of logical systems (more on that later). But category theory is not suitable for foundations, nor even meant to be foundational. Most category theorists use an extended version of set theory as their foundations! In fact, its purpose is best seen as exactly dual to that of foundations: while set theory allows you to build things from the ground up, category theory allows you to organize things from high above. A category by itself is not so interesting; one often studies a category in terms of how it maps from and into other categories (including itself!), with functors and, most usefully, adjunctions.

Ahem. This wasn't even on topic. I want to talk about a particular subject in categorical logic, perhaps the most well-studied one, which is topos theory, and why I believe it to be useless for rationality, so that others may avoid retreading my path. The thesis of this post is that probabilities aren't (intuitionistic) truth values.

Topoï and toposes

A topos is perhaps best seen not even as a category, but as an alternate mathematical universe. Toposes are, essentially, "weird set theories". Case in point: Set itself is a topos, and other toposes are often constructed as categories of functors F:C→Set, for C an arbitrary category. (Functors assemble into categories if you take natural transformations between them as the morphisms. That basically means that you have maps F(c)→G(c), one for each object c, such that if you compare the images of a path under F and G, all the little squares commute; this is made precise below.)

Consider that the natural numbers, with their usual ordering like 4≤5, can form a category if you instead take an arrow 4→5. So one simple example is the topos of functors ℕ→Set, which you can picture as sets evolving in (discrete) time.
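To make "all the little squares commute" precise, here is the naturality square spelled out, writing η_c:F(c)→G(c) for the comparison maps (the η notation is mine, not the post's). For every arrow f:c→c′ in C, the two paths around the square agree:

\[
\begin{array}{ccc}
F(c) & \xrightarrow{\ \eta_{c}\ } & G(c) \\
{\scriptstyle F(f)}\big\downarrow & & \big\downarrow{\scriptstyle G(f)} \\
F(c') & \xrightarrow{\ \eta_{c'}\ } & G(c')
\end{array}
\qquad\text{i.e.}\qquad
G(f)\circ\eta_{c} \;=\; \eta_{c'}\circ F(f).
\]

In the ℕ→Set case this just says that the comparison maps between two "sets evolving in time" are compatible with each step of the evolution.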
Yes, I am arguing against the ontological realism of anthropic binding. Beyond that, I feel like there ought to be some way of comparing physical systems and having a (subjective) measure of how similar they are, though I don't know how to formalize it.
It is, for example, clear that I can relate to a penguin, even though I am not a penguin. Meaning that the penguin and I probably share some similar subsystems, and therefore if I care about the anthropic measure of my subsystems then I should care about penguins, too.