What makes certain axioms “true” beyond mere consistency?
Axioms are only "true" or "false" relative to a model. In some cases the model is obvious, e.g. the intended model of Peano arithmetic is the natural numbers. The intended model of ZFC is a bit harder to get your head around. Usually it is taken to be defined as the union of the von Neumann hierarchy over all "ordinals", but this definition depends on taking the concept of an ordinal as pretheoretic rather than defined in the usual way as a well-founded totally ordered set.
Is there a meaningful distinction between mathematical existence and consistency?
An axiom system is consistent if and only if it has some model, which may not be the intended model. So there is a meaningful distinction, but the only way you can interact with that distinction is by finding some way of distinguishing the intended model from other models. This is difficult.
Can we maintain mathematical realism while acknowledging the practical utility of the multiverse approach?
The models that appear in the multiverse approach are indeed models of your axiom system, so it makes perfect sense to talk about them. I don't see why this would generate any contradiction with also being able to talk about a canonical model.
How do we reconcile Platonism with independence results?
Independence results are only about what you can prove (or equivalently what is true in non-canonical models), not about what is true in a canonical model. So I don't see any difficulty to be reconciled.
The following is probably just an ELI5 version of Dacyn's answer:
Just because you use a word (such as "set"), it doesn't mean that it has an unambiguous meaning.
Imagine the same discussion about "numbers". Can you subtract 5 from 3? In the universe of natural numbers, the answer is no. In the universe of all integers, the answer is yes. Is there a number such that if you multiply it by itself, the result is 2? In the universe of rational numbers, the answer is no; in the universe of real numbers, the answer is yes.
Here you probably don't see any problem. Some statements can be true about real numbers and false about rational numbers, because those are two different things. A person who talks about "numbers" in general needs to be more specific. As we see here, defining addition, subtraction, multiplication, and division is still not enough to allow us to figure out the answer to "∃a: a × a = 2".
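The "∃a: a × a = 2" example can be poked at computationally. A minimal sketch in Python (my own illustration; `rational_sqrt2_exists` is a hypothetical helper): exact rational arithmetic finds no solution in any bounded search, in line with the classic parity argument that none exists.

```python
from fractions import Fraction
from math import isqrt

# Search for a rational a = p/q with a * a == 2. The classic parity
# argument shows none exists: p*p == 2*q*q would force p and q to both
# be even, contradicting "lowest terms". This search merely illustrates
# the fact; it cannot prove it, since the universe of rationals is infinite.
def rational_sqrt2_exists(max_q: int) -> bool:
    for q in range(1, max_q + 1):
        p = isqrt(2 * q * q)          # candidate numerator (floor of sqrt)
        if Fraction(p, q) ** 2 == 2:  # exact rational arithmetic, no rounding
            return True
    return False

print(rational_sqrt2_exists(10_000))  # False: no solution found
```

In the universe of reals the same question has answer yes; the code only ever inspects the rational universe, which is exactly the point of the analogy.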
It's similar with sets. The ZF(C) axioms are simply not enough to pinpoint what you actually mean by a "set". They reduce the space of possible meanings, sufficiently to let us prove many interesting things, but there are still (infinitely) many possible meanings compatible with all of the axioms. For some of those meanings, CH is true; for other meanings, CH is false.
Is there a "set" whose cardinality is greater than ℵ0 but smaller than 2^ℵ0? It depends.
Is there a "number" that is greater than 2 but smaller than 3? It depends.
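For reference, the continuum hypothesis alluded to above can be stated precisely (a standard formulation):

```latex
% CH: there is no set whose cardinality lies strictly between
% that of the naturals and that of the reals.
\[
  \mathrm{CH}:\qquad
  \neg\,\exists X \;\bigl(\aleph_0 < |X| < 2^{\aleph_0}\bigr),
  \quad\text{equivalently}\quad
  2^{\aleph_0} = \aleph_1 .
\]
```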
What makes certain axioms "true" beyond mere consistency?
Nothing. What do you mean by "true" here? Matching our physical universe? That in general is not what math does. The natural numbers may already include numbers that exceed the number of particles in our universe. The real numbers are inspired by measuring actual things, but do we really need an infinite number of decimal places? On top of all that, sets are merely mental constructs. A set like {2, 8, 33897798} does not imply anything about our world.
Is there a meaningful distinction between mathematical existence and consistency?
No.
Can we maintain mathematical realism while acknowledging the practical utility of the multiverse approach?
Maybe I am missing some important aspect, but the "multiverse" seems to me just an intuitively helpful metaphor; the actual problem is more like this: is the natural number "2" the same object as the integer "2", the real number "2.0", the Gaussian integer "2+0i", the complex number "2.0+0.0i", etc.?
One possible approach is to say: those are different domains of discourse... uhm, let's call them parallel universes to make it intuitive for the sci-fi fans. The object in a parallel universe is a different object, but also in some sense Captain Picard from the parallel universe is a natural counterpart to our Captain Picard. They are generally the same unless specified otherwise for plot-relevant reasons, just like "2.0" from the real-number universe is the natural counterpart to "2" from the integer universe, except that the former can be divided by three and the latter cannot. (Some things do not have a counterpart in the other universe, etc.) This feels like a natural approach for real vs complex numbers, and probably like overkill for natural numbers vs integers.
The assumption of different universes kinda goes against Occam's razor; we could simply move all these objects into the same universe (different planets perhaps) and make a story about a spaceship captain from Earth and a spaceship captain from Mars. Now we don't have the concept of a natural counterpart, and the analogies need to be made explicit: the horses on Earth correspond to the giant six-legged lizards on Mars. There is the set of natural numbers, the set of real numbers, and a function N -> R which maps the object "2" to the object "2.0". More importantly, there is no such thing as "addition"; there are actually two different things, "natural number addition" and "real number addition", and we call the latter the extension of the former if, for each pair of natural numbers, the counterpart of their sum is the same as the sum of their counterparts. The question whether "2" and "2.0" are intrinsically the same object can become kinda meaningless if we always talk about numbers qua members of one or the other set. They could be the same object, or they could be different objects; the important thing is what they do, i.e. how they participate in various functions and relations.
(This kinda reminds me of Korzybski's "Aristotelian" vs "Non-Aristotelian" thinking, where the former is about what things are, while the latter is about how things are related to each other. Is "2" the same as "2.0"? A meaningless question, from the non-A perspective. The important thing is what they do; how are they related to other numbers. The important facts about "2" are that "1+1=2" and "2+2=4" etc. We can show that we can map N to R in a way that preserves all existing addition and multiplication, and whenever we do so, "2.0" is the image of "2". And that's all there is.)
With sets, I guess it is similar. If we have different definitions of what a "set" means, is the empty set according to definition X the same mathematical object as the empty set according to definition Y? The question is meaningless, from the non-A perspective; but to avoid all the complicated philosophy, it is easier to say that one lives in the universe X, and the other lives in the universe Y, so they are "kinda the same, but not the same". But to be precise, there is no such thing as an "empty set", only something that plays the role of an empty set in a certain system. Some systems might not even have such a role, or they could have multiple distinct empty sets -- for example, we could imagine a system where each set has a type, and the "empty set of integers" is different from the "empty set of reals", because it has a different content type.
(Now I suspect I have opened a new can of worms, like how to reconcile Platonism with Korzybski's non-A thinking, and... that would be a long debate that I would prefer to avoid. My quick opinion is that perhaps we should aim for some kind of "Platonism of function" rather than "Platonism of essence", i.e. what the abstract objects do rather than what they are. The question is whether we should still call this approach "Platonism", perhaps some other name would be better.)
[note: I dabble, at best. This is likely wrong in some ways, so I look forward to corrections. ]
I find myself appealing to basic logical principles like the law of non-contradiction. Even if we can't currently prove certain axioms, doesn't this just reflect our epistemological limitations
It's REALLY hard to distinguish between "unprovable" and "unknown truth value". In fact, this is recursively hard - there are lots of things that are not proven, but it's not known if they're provable. And so on.
Mathematical truth is very much about provability from axioms.
rather than implying all axioms are equally "true"?
"true" is hard to apply to axioms. There's the common-sense version of "can't find a counterexample, and have REALLY tried", which is unsatisfying but pretty effective for practical use. The formal version is just not to use "true", but "chosen" for axioms. Some are more USEFUL than others. Some are more easily justified than others. It's not clear how to know which (if any) are true, but that doesn't make them equally true.
Not a correction (because this is all philosophy) but the problem with this "hard formalism" stance:
Mathematical truth is very much about provability from axioms.
is that statements of the form "statement x follows from axiom set S" are themselves arithmetical statements that may or may not even be provable from a given standard axiom system. I would guess that you're implicitly taking for granted that statements in the arithmetical hierarchy have inherent truth in order to at least establish a truth value for such stateme...
I assume you're familiar with the case of the parallel postulate in classical geometry as being independent of the other axioms? That independence corresponds to the existence of spherical/hyperbolic geometries (i.e. actual models in which the axiom is false) versus normal flat Euclidean geometry (i.e. actual models in which it is true).
To me, this is a clear example of there being no such thing as an "objective" truth about the validity of the parallel postulate - you are entirely free to assume either it or incompatible alternatives. You end up with equally valid theories; it's just that those theories are applicable to different models, and those models are each useful in different situations, so the only thing it comes down to is which models you happen to be wanting to use or explore or prove things about on a given day.
Similarly for the huge variety of different algebraic or topological structures (groups, ordered fields, manifolds, etc) - it is extremely common to have statements that are independent of the axioms, e.g. in a ring it is independent of the axioms whether multiplication is commutative or not. And both choices are valid. We have commutative rings, and we have noncommutative rings, and both are self-consistent mathematical structures that one might wish to study.
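The ring example is easy to make concrete. A minimal sketch (plain Python; `matmul` is a hypothetical helper of my own): 2x2 integer matrices satisfy all the ring axioms, yet their multiplication fails to commute, while the integers form a commutative ring. Both are perfectly good rings; the axioms alone just don't settle commutativity.

```python
# 2x2 integer matrices form a ring in which multiplication
# need not commute: A*B and B*A can differ.
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[0, 1], [0, 0]]
B = [[0, 0], [1, 0]]
print(matmul(A, B))  # [[1, 0], [0, 0]]
print(matmul(B, A))  # [[0, 0], [0, 1]]  -- not equal to A*B
```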
Loosely analogous to how one can write a compiler/interpreter for a programming language within other programming languages, some theories can easily simulate other theories. Set theories are particularly good and convenient for simulating other theories, but one can also simulate set theories within other seemingly more "primitive" theories (e.g. simulating them in theories of basic arithmetic via Gödel numbering). This might be analogous to e.g. someone writing a C compiler in Brainfuck. Just like how it's meaningless to talk about whether a programming language or a given sub-version or feature extension of a programming language is more "objectively true" than another, there are many who take the position that the same holds for different set theories.
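The "simulation" in the arithmetic case works by arithmetization. As a toy illustration (my own sketch, not from any linked source), finite sequences - which can stand in for formulas or proofs - can be coded as single natural numbers via prime exponents:

```python
# A toy Gödel numbering: encode a finite sequence of naturals as one
# natural number via prime-power exponents, and decode it back. This is
# the trick that lets a theory of arithmetic "talk about" syntax.

def primes():
    """Yield 2, 3, 5, 7, ... by trial division (fine for a toy example)."""
    n, found = 2, []
    while True:
        if all(n % p for p in found):
            found.append(n)
            yield n
        n += 1

def encode(seq):
    """[a0, a1, ...] -> 2^(a0+1) * 3^(a1+1) * ...  (+1 so zeros survive)."""
    code, gen = 1, primes()
    for a in seq:
        code *= next(gen) ** (a + 1)
    return code

def decode(code):
    """Recover the sequence from a valid code by reading off exponents."""
    seq, gen = [], primes()
    while code > 1:
        p, e = next(gen), 0
        while code % p == 0:
            code //= p
            e += 1
        seq.append(e - 1)
    return seq

print(encode([3, 0, 2]))  # 2^4 * 3^1 * 5^3 = 6000
print(decode(6000))       # [3, 0, 2]
```

Statements like "x codes a proof of y" then become ordinary arithmetic statements about numbers, which is what makes the compiler analogy apt.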
When you say you're "leaning towards a view that maintains objective mathematical truth" with respect to certain axioms, is there some fundamental principle by which you're discriminating the axioms that you want to assign objective truth from axioms like the parallel postulate or the commutativity of rings, which obviously have no objective truth? Or do you think that even in these latter cases there is still an objective truth?
To me, this is a clear example of there being no such thing as an "objective" truth about the validity of the parallel postulate - you are entirely free to assume either it or incompatible alternatives. You end up with equally valid theories; it's just that those theories are applicable to different models
This is true, but there's an important caveat: Mathematicians accepted Euclidean geometry long before they accepted non-Euclidean geometry, because they took it to be intuitively evident that a model of Euclid's axioms existed, whereas the existence of mod...
Even if we can’t currently prove certain axioms, doesn’t this just reflect our epistemological limitations rather than implying all axioms are equally “true”?
It doesn't, and they are fundamentally equal. The only reality is the physical one - there is no reason to complicate your ontology with platonically existing math. Math is just a collection of useful templates that may help you predict reality, and that it works is always just a physical fact. The best case is that we'll come to know the true laws of physics, they will work like some subset of math, and then the axioms of physics would be actually true. You can make a guess about what axioms are compatible with true physics.
Also there is Shoenfield's absoluteness theorem, which I don't understand, but which maybe prevents empirical grounding of CH?
This is an appealingly parsimonious account of mathematical knowledge, but I feel like it leaves an annoying hole in our understanding of the subject, because it doesn't explain why practicing math as if Platonism were correct is so ridiculously reliable and so much easier and more intuitive than other ways of thinking about math.
For example, I have very high credence that no one will ever discover a deduction of 0=1 from the ZFC axioms, and I guess I could just treat that as an empirical hypothesis about what kinds of physical instantiations of ZFC proofs...
I will give an eminently practical answer to the philosophical discussion.
It's too hard to get anywhere close to the mathematical truth, so people have given up on mathematical truth for good reasons, IMO.
I will also say that this is why I wish computability theory had finer distinctions beyond decidable/undecidable, had more of a complexity-theoretic flavor in distinguishing between the hardness of problems, and was more explicit about the fact that many of the impossibility results are relative to a certain set of capabilities, because in a different world, I could see computability theory having much more relevance to this debate.
In essence, I'm arguing for pinning down exactly how hard a problem actually is, closer to the style of complexity theorists, who care not only whether a problem is hard, but also exactly how hard it is.
I'll give an example, just to show everyone my position on the matter.
If you could decide the totality problem, you could also decide whether P equals NP, which requires much weaker oracles.
Some examples of such computational models include Turing machines with infinitely many states, Blum-Shub-Smale machines with arbitrary real constants, and probabilistic Turing machines with non-recursive biases for the coin.
See these links for more (you can ignore all details about physical plausibility, since it's not necessary for the complexity and computability results above):
https://arxiv.org/abs/math/0209332
https://arxiv.org/abs/cs/0401019
http://www.amirrorclear.net/files/the-many-forms-of-hypercomputation.pdf
So I'm pretty firmly on the side of mathematical truth, but with an enormous caveat: what independence proofs show us is that we need new computational capabilities, not new axioms. Thus we can either accept that it's too hard to get at the truth and move on, or, if someone does have a viable plan to get the computational capacity needed, then we can talk and have more productive discussions about whether the discovery is real and how to exploit it, if applicable.
So the most useful practice would be: whenever an independence result is published, we should immediately try to figure out how complicated it is to determine whether the statement is true, according to our standard computability hierarchies, ideally with completeness results along the lines of NP-complete or RE-complete problems, and then give up on proving or disproving it unless someone proposes a plausible method for getting that computational power.
Edit: I've edited out the continuum hypothesis example, since I had misinterpreted what exactly the result was saying: it says that ZFC + CH is a Π²₁-conservative extension of ZFC, not that CH actually has that complexity.
See here for more:
https://en.wikipedia.org/wiki/Conservative_extension#Examples
Intuitively, the computational models you were suggesting seem like they would only decide statements that are at most second-order; can they really answer third-order arithmetical questions?
However, I find myself appealing to basic logical principles like the law of non-contradiction.
The law of non-contradiction isn't true in all "universes", either. It's not true in paraconsistent logic, specifically.
Arguably, "basic logical principles" are those that are true in natural language. Otherwise nothing stops us from considering absurd logical systems where "true and true" is false, or the like. Likewise, "one plus one is two" seems to be a "basic mathematical principle" in natural language. Any axiomatization which produces "one plus one is three" can be dismissed on grounds of contradicting the meanings of terms like "one" or "plus" in natural language.
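As a tiny illustration of how rigid such "basic principles" become once formalized, in Lean 4 the statement that one plus one is two over the natural numbers is provable by pure computation (a minimal sketch; any proof assistant would do):

```lean
-- `rfl` closes the goal because both sides reduce to the same numeral:
-- `1 + 1` and `2` compute to the same natural number.
example : (1 + 1 : Nat) = 2 := rfl

-- By contrast, no computation will ever close `(1 + 1 : Nat) = 3`;
-- an axiomatization producing it would contradict the meaning of
-- `1` and `+` that the natural numbers give those symbols.
```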
The trouble with set theory is that, unlike logic or arithmetic, it often doesn't involve strong intuiti...
I have spent a long time looking in vain for any reason to think ZFC is consistent, other than that it holds in the One True Universe of Sets (OTUS, henceforth). So far I haven't found anything compelling, and I am quite doubtful at this point that any such justification exists.
Just believing in the OTUS seems to provide a perfectly satisfactory account of independence and nonstandard models, though: They are just epiphenomenal shadows of the OTUS, which we have deduced from our axioms about the OTUS. They may be interesting and useful (I rather like nonstandard analysis), but they don't have any foundational significance except as a counterexample showing the limitations of what formal systems can express. I take it that this is more or less what you have in mind when you say
a view that maintains objective mathematical truth while explaining why we need to work with multiple models pragmatically.
It's disappointing that we apparently can't answer some natural questions about the OTUS, like the continuum hypothesis, but Gödel showed that our knowledge of the OTUS was always bound to be incomplete 🤷♂️.
Having said that, I still don't find the Platonist view entirely satisfactory. How do humans come to have knowledge that the OTUS exists and satisfies the ZFC axioms? Supposing that we do have such knowledge, what is it that distinguishes mathematical propositions whose truth we can directly perceive (which we call axioms) from other mathematical propositions (which we call conjectures, theorems, etc.)?
An objection more specific to set theory, as opposed to Platonism more generally, would be: given a supposed "universe" of "all" sets, its proper classes are set-like objects, so why can't we extend the cumulative hierarchy another level higher to include them, and continue that process transfinitely? Or, if we can do that, then we can't claim to ever really be quantifying over all sets. But if that's so, then why should we believe that the power set axiom holds, i.e. that any of these partial universes of sets that we can quantify over is ever large enough to contain all subsets of a given set?
But every alternative to Platonism seems to entail skepticism about the consistency of ZFC (or even much weaker foundational theories), which is pragmatically inconvenient, and philosophically unsatisfactory, inasmuch as the ZFC axioms do seem intuitively pretty compelling. So I'm just left with an uneasy agnosticism about the nature of mathematical knowledge.
Getting back to the question of the multiverse view, my take on it is that it all seems to presuppose the consistency of ZFC, and realism about the OTUS is the only good reason to make that presupposition. In his writings on the multiverse (e.g. here), Joel Hamkins seems to be expressing skepticism that there is even a unique (up to isomorphism) standard model of arithmetic that embeds into all the nonstandard ones. I would say that if he thinks that, he should first of all be skeptical that the Peano axioms are consistent, to say nothing of ZFC, because the induction principle rests on the assumption that "well-founded" means what we think it means and is a property possessed by ℕ. I have never seen an answer to this objection from Hamkins or another multiverse advocate, but if anyone claims to have one I'd be interested to see it.
I've been thinking about the set theory multiverse and its philosophical implications, particularly regarding mathematical truth. While I understand the pragmatic benefits of the multiverse view, I'm struggling with its philosophical implications.
The multiverse view suggests that statements like the Continuum Hypothesis aren't absolutely true or false, but rather true in some set-theoretic universes and false in others.
However, I find myself appealing to basic logical principles like the law of non-contradiction. Even if we can't currently prove certain axioms, doesn't this just reflect our epistemological limitations rather than implying all axioms are equally "true"?
To make an analogy: physical theories being underdetermined by evidence doesn't mean reality itself is underdetermined. Similarly, our inability to prove CH doesn't necessarily mean it lacks a definite truth value.
Questions I'm wrestling with:
I'm leaning towards a view that maintains objective mathematical truth while explaining why we need to work with multiple models pragmatically. But I'm very interested in hearing other perspectives, especially from those who work in set theory or mathematical logic.