A post in a series of things I think would be fun to discuss on LW. Part one is here.


I

As it turns out, I asked my leading questions in precisely the reverse order I'd like to answer them in. I'll start with a simple picture of how we evaluate the truth of mathematical statements, then argue that this picture fits how we understand "truth," and only last mention existence.

Back to the comparison between "There exists a city larger than Paris" and "There exists a number greater than 17." When we evaluate the statement about Paris, we check our map of the world, find that Paris doesn't seem extremely big, and maybe think of some larger cities.

We can use exactly the same thought process on the statement about 17: check our map, quickly recognize that 17 isn't very big, and maybe think of some bigger numbers or the stored principle that there is no largest integer. A large chunk of our issue now collapses into the question "Why does the map containing 17 seem so similar to the map containing Paris?"

<Digression>

We use the metaphor of map and territory a lot, but let's take a moment to delve a little deeper. My "map" is really more like a huge collection of names, images, memories, scents, impressions, etcetera, all associated with each other in a big web. When I see the word "Paris" I can very quickly figure out how strongly that thing is associated with "city size," and by thinking about "city size" I can tell you some city names that seem more closely-associated with that than "Paris."
"17" is a little trickier, because to explain how I can have associations with "17" in my big web of association, I also need to explain why I don't need a planet-sized brain to hold my impressions of all possible numbers you could have shown me.
The answer is that there's not really a separate token in my head for "17," and not for "Paris" either. My brain doesn't keep a discrete label for everything; instead it stores and manipulates mental representations that are the collective pattern of lots of neurons, and that therefore inhabit some high-dimensional space. For example, 17 and 18 might have mental representations that are close together in representation-space. And I can easily represent 87438 despite never having thought about that number before, because I can map the symbols to the right point in representation-space.
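To make this slightly more concrete, here's a toy sketch in Python. The features below (rough magnitude, parity, leading digit) are invented purely for illustration and are not a claim about what the brain actually computes; the point is just that any number you hand me can be mapped to a point in a representation-space on demand, and that similar numbers land at nearby points.

```python
import math

def toy_representation(n: int) -> list[float]:
    """A made-up stand-in for a high-dimensional neural representation."""
    return [
        math.log10(n + 1),   # rough sense of magnitude
        float(n % 2),        # parity
        float(str(n)[0]),    # leading digit
    ]

# 17 and 18 land close together in this space...
print(math.dist(toy_representation(17), toy_representation(18)))

# ...while 87438 gets a representation on the fly, with nothing about it
# stored in advance, and ends up far from 17.
print(math.dist(toy_representation(17), toy_representation(87438)))
```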

</Digression>

If we really do evaluate mathematical statements the same way we evaluate statements about our map of the external world, then that would explain why both evaluations seem to return the same type of "true" or "false." It's also convenient for evaluating the truth of mixed mathematical and empirical statements like "The number of pens on my table is less than 3 factorial." But we still need to reconcile this apparent truth of mathematical statements with our conception of truth as a correspondence between map and territory.
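(As a trivial illustration, here's that mixed statement evaluated in Python; the pen count is made up, but the empirical part and the mathematical part feed into one and the same truth value.)

```python
import math

pens_on_table = 4  # an empirical fact, read off the world (invented here)

# "The number of pens on my table is less than 3 factorial"
print(pens_on_table < math.factorial(3))  # True, since 3! = 6
```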

II

An important fact about our models of the world is that they're capable of modeling things that aren't real. Suppose our world contains a red ball. We might hypothesize many different world-models and variations on models, each with a different past and future trajectory for the red ball. Psychologically, this feels like we are imagining different possible worlds, at most one of which can be real.

To make a statement like "The ball is in the box" is to claim that the world we're in belongs to one specific subset of the possible worlds. The statement is false in some possible worlds and true in others, but we should only endorse it if, in our one true world, the ball is actually in the box.

Each statement about the red ball that we can evaluate as true or false can be thought of as defining the set of possible worlds where that statement is true. "The volume of the ball contains a neutrino" is true in almost every world, while "The ball is in a volcano" is true in almost none. Knowing true statements helps us narrow down which possible world we're actually in.
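One way to make the possible-worlds picture concrete is to treat a world as an assignment of facts and a statement as a predicate over worlds; the set a statement "defines" is just the subset of worlds where the predicate holds. The toy worlds and statements below are invented purely for illustration.

```python
from itertools import product

# Each toy world fixes where the ball is and how many times it has bounced.
worlds = [
    {"ball_location": loc, "bounces": b}
    for loc, b in product(["box", "floor", "volcano"], range(6))
]

# Statements are predicates on worlds.
def ball_in_box(w):
    return w["ball_location"] == "box"

def bounced_three_times(w):
    return w["bounces"] == 3

# Learning true statements narrows down which world we might be in.
candidates = [w for w in worlds if ball_in_box(w)]
candidates = [w for w in candidates if bounced_three_times(w)]
print(len(worlds), "->", len(candidates))  # 18 -> 1
```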

<Digression> More technically, knowing true statements helps us pick models that predict the world well. All this talk of possible worlds is a convenient metaphor. </Digression>

Moving closer to the point: "The ball has bounced a prime number of times" also defines a perfectly valid set of possible worlds. So. Does "3 is a prime number" define a set of possible worlds?

If we were really committed to answering "no" to this, we would have to undergo strange contortions, like being able to evaluate "The ball has bounced three times and the ball has bounced a prime number of times," but not "The ball has bounced three times and three is a prime number." Being able to compare the empirical with the abstract suggests the ability to compare the abstract with the abstract.

If we answer "yes," the set of possible worlds where 3 is a prime number seems like "all of them." (Or perhaps only almost all of them.) Math is then a bunch of tautologies.
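Continuing the toy sketch from above (everything here is again invented for illustration), the contrast looks like this: the empirical predicate carves out some of the worlds, the mathematical one carves out all of them, and the mixed conjunction from two paragraphs back is perfectly well-formed.

```python
def is_prime(n: int) -> bool:
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

# Toy worlds that differ only in how many times the ball has bounced.
worlds = [{"bounces": b} for b in range(6)]

# An empirical statement: true in some worlds, false in others.
def bounced_prime_times(w):
    return is_prime(w["bounces"])

# A mathematical statement: it ignores the world entirely, so the set of
# worlds it defines is all of them (a tautology) or none of them.
def three_is_prime(w):
    return is_prime(3)

# The "strange contortion" case: mixing the two is no problem at all.
def bounced_three_and_three_is_prime(w):
    return w["bounces"] == 3 and is_prime(3)

print([w["bounces"] for w in worlds if bounced_prime_times(w)])  # [2, 3, 5]
print(sum(three_is_prime(w) for w in worlds))                    # 6: all of them
print([w["bounces"] for w in worlds if bounced_three_and_three_is_prime(w)])  # [3]
```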

But this raises an important problem: if mathematical truths are tautologous, then that would seem to render having a mental map of mathematics unnecessary - you can just evaluate statements purely on whether they follow from the axioms. Worse, if a mathematical statement is true in every possible world or false in every possible world, then learning it can't narrow down which world we're in, so it seems useless. To resolve this apparent problem, we'll need a very powerful force: human ignorance.

Even though mathematical statements are theoretically evaluable from a small set of axioms, in practice that is much, much too hard for humans to do at runtime. Instead, we have to build up our knowledge of math slowly, associate important results with each other and with their real-world applications, and be able to place new knowledge in the context of the old.
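As a very loose analogy (and nothing more), caching plays a similar role in code: once a result has been established, you look it up instead of re-deriving it from scratch every time you need it. The slow primality check below is just a stand-in for an expensive derivation.

```python
from functools import lru_cache

# A deliberately slow "derivation from first principles": trial division
# over every candidate divisor stands in for re-deriving a fact each time.
def derive_from_scratch(n: int) -> bool:
    return n > 1 and all(n % d for d in range(2, n))

# Caching established results is a (very rough) analogue of keeping a
# mental map of mathematics rather than reasoning from the axioms at runtime.
@lru_cache(maxsize=None)
def look_up_fact(n: int) -> bool:
    return derive_from_scratch(n)

look_up_fact(999_983)  # slow the first time it's worked out
look_up_fact(999_983)  # afterwards, just read off the stored "map"
```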

So it is precisely human badness at math that makes us keep a mental map of mathematics that's structured like our map of the world. The fact that our map doesn't start completely filled in also means that we can learn new things about math. It also leads directly into my last leading question from part one: why might we think numbers exist?

III

The reasons to feel like numbers exist are pretty similar to the reasons to feel like the physical world exists. For starters, our observations don't always turn out how we'd predict. The stuff that generates the predictions, we call belief, and the stuff that generates the observations, we call reality.

Sometimes, you have beliefs about mathematical statements even if you can't prove them. You might think, say, P!=NP, not by reasoning from the axioms, but by reasoning from the shape of your map. And when this heuristic reasoning fails, as it occasionally does, it feels like you're encountering an external reality, even if there's no causal thing that could be providing the feedback.

We also feel more like things exist when we model them as objective, rather than subjective. When we use our model of the world to imagine changing people's opinions about an objective thing, our model says that the objective thing doesn't change. Mathematical truths satisfy this criterion nicely - details left to the reader.

Lastly, things that we think exist have relationships with other elements in our map of the world. Things are associated with properties, like color and size - numbers definitely have properties. And although numbers are not connected to rocks in a causal model of the world, it seems like we say "2+2=4" because 2+2=4. But the "because" back there is not a causal relationship - rather it's an association our brain makes that's something like logical implication.

So maybe I do understand those mysterious links in LDT (artist's representation above) better than I did before. They're a toy-model representation of a connection that seems very natural in our brains, between different things that we have in the same map of the world.


Epilogue

I played a bit coy in this post - I talk a big game about understanding numbers, but here we are at the end and rather than telling you whether numbers really exist or not, I've just harped on what makes people feel like things exist.

To give away the game completely: I avoided the question because whether numbers "really exist" can end up getting stuck in the center node of the classic blegg/rube classifier. When faced with a red egg, the solution is usually not to figure out if it's "really a blegg or a rube." The solution is to be able to think about it as a red egg. And the even better solution is to understand the function of sorting these objects so that we can use categorizations in contexts where it's useful.

Understanding why we feel the way we do about numbers is really an exercise in looking at the surrounding nodes. The core claim of this article is that two things that normally agree - "should be a basic object in a parsimonious causal model of the world" and "can usefully be thought about using certain expectations and habits developed for physical objects" - diverge here, and so we should strive to replace tension about whether numbers "really exist" with understanding of how we think about numbers.


My aim was for a standard LW-ian view of numbers. I feel like I learned a lot writing this, and hopefully some of that feeling rubs off on the reader. (Thank you for reading, by the way.) I'll be back with something completely different next week.

Comments

I just wanted to nitpick on one point: it's not true that all mathematical statements are theoretically evaluable from a small set of axioms. That's the point of Gödel's theorem. Maybe what you meant to say is that the truth-values of all mathematical statements are determined once you fix the axioms? This is closer to being correct, but still not quite right. The right way to say it is that the truth-value of a mathematical statement is determined once you fix the interpretation of the statement with sufficient precision. The axioms of e.g. Peano arithmetic can be suggestive of a certain interpretation of addition, multiplication, and the class of natural numbers, but in fact the interpretation resides in our minds and not in the axioms.

Of course, your main point still stands: even if the truth-value of a mathematical statement has been determined, that doesn't mean we know what it is.

Good points. I'm not sure that there is a sense in which the Gödel sentence is true that doesn't rely on human reasoning (or an analogue thereof) filling in the gaps, in a very similar way to how we fill in the gaps for P!=NP. The difference is that P!=NP is probably simple ignorance, while for the Gödel sentence we know there are models of the axioms with both truth values. But you're definitely right that saying "you could just evaluate all mathematical sentences" sweeps some important stuff under the rug.

By the way, you can actually make these into a sequence by going to https://www.lesserwrong.com/library and clicking on the "New Sequence" button next to "Community Sequences" (the sequence creation UI is still somewhat janky, but it should work).

Thanks! The creation works great. The only issue was that editing the post dropped the sequence navigation thingie (sequence name and forward and back buttons); I fixed it by removing and re-adding the post to the sequence.

Do unicorns exist? It seems to me that your arguments are fully general. You can, in fact, make true statements about unicorns ("every unicorn has a horn") and perhaps some of them might not even seem trivial. It's just that numbers are more precise, so we can make more claims about them, and more concise, so we can assume that my numbers and your numbers are the same.

You might note that I made no argument that numbers exist :) The arguments in the bit on existence were all about which factors I think are important in people's feeling that numbers exist. If you take the arguments and apply them to unicorns, what I'd hope they explain is not whether or not unicorns exist, but why people might not believe unicorns exist.

Do you see some difference between saying "numbers exist" and "I think/feel that numbers exist"? I sure don't.

Regarding unicorns, how do your arguments support their non-existence? I'm seeing the opposite. I think with your arguments every idea and concept could be said to exist.

The difference I see between the statements is that they suggest different courses of inquiry. Suppose I start from the naive view of thinking that numbers exist. If I think of this as "numbers exist," then I'll start asking questions like "where did numbers come from?" and "what's a good necessary and sufficient definition of numbers?" I think these are bad questions to ask and mostly get you nowhere. In fact, the badness of these questions is a great pragmatic argument for saying that numbers don't exist.

But if you think of your belief as "I feel like numbers exist," you might ask things more like "why do I feel like numbers exist?" which I quite like, because, as I am shamelessly copying from Eliezer, this is the sort of question that gets you sensible information whether or not your naive view was correct. And once you understand where your belief comes from, I think you actually end up caring less about whether numbers "exist" or not, because once you know what properties of numbers are important to you, you can let your thoughts dictate the word you choose to use, rather than letting the label dictate your thoughts.

Anyhow, the key thing from this post that doesn't apply to unicorns is that there's no experience of having separate things cause our hypotheses and our updates about unicorns. This might help explain why we think it's obvious that unicorns don't exist.

If I think of this as "numbers exist," then I'll start asking questions like "where did numbers come from?"

That's not an experience I can relate to, but ok.

And once you understand where your belief comes from, I think you actually end up caring less about whether numbers "exist" or not

I see where you're coming from; however, I'm a big believer in the concept that words should mean things. If you find the word "exist" too vague for your purposes, you should propose a more precise definition, or use a different word.

Anyhow, the key thing from this post that doesn't apply to unicorns is that there's no experience of having separate things cause our hypotheses and our updates about unicorns.

I'm saying that there is. For now, instead of unicorns, consider god. There is an entire field, theology, focused on reasoning about god, creating hypotheses about it and finding them wrong. But hopefully we don't feel that god exists (or if we do feel it, that's not thanks to theology). Or consider the Star Wars universe. Likewise there are many fans who reason about what belongs to this universe and what does not, and where there is reasoning, there is a chance to find our hypotheses wrong. The same is true for every idea; it's only that unicorns are degenerate - the reasoning about them is too trivial to ever find yourself wrong. But if we were morons, perhaps we'd find the hypothesis "unicorns have one horn" to be novel and profound.

Fair points. I think that this sort of game-playing might contribute to people feeling like god exists, but it's definitely a bad reason. But in that case, perhaps we might say that god-the-concept 'exists' (concepts and numbers are in pretty much the same boat re: existence) but god-the-being-with-causal-effects doesn't exist, and people are trying to smuggle properties from one to the other by using the same name for both.

This is sort of a reverse of the ontological argument.