In modern times, many philosophers have dreamt of a perfect language, one which would force us to speak precisely and allow us to resolve philosophical disputes by calculation rather than by argumentation. Whilst this dream appears impossible, I hope to offer a way of talking about these problems that is far more precise than would otherwise be possible, by continuing in the spirit of How An Algorithm Feels From the Inside and explaining why we say the things we say about numbers. It is not my intent to drive all the way to a conclusive solution in this post, but rather to help clarify the terms of the debate.

By the end of this post, I hope to have convinced you that I've made progress on the questions of whether infinity exists and whether numbers have an existence independent of physical reality, and that these two problems are strongly linked.

We want to pick a starting point that introduces as little doubt as possible, so we'll begin by examining our own minds. We quickly notice that we have a model of a thing we call "the world"; a model of a thing we call our "mind" (which can mean many things, but which for our purposes we will simplify to mean our mental models); and a model of some kind of "relationship" existing between the two. Since this way of describing things is rather wordy, we will shorten it to simply saying that we have a model of the relationship between our model and the world.

Two points:

  • "Mind" can be defined many different ways. Regardless of our precise definition, our mind contains a model which consists of all the things we can talk about precisely.
  • Deflationists might object to the use of Correspondence Theory, but we can sidestep the metaphysical issue since we are only claiming that this relationship exists inside our mind, not in any mind-independent sense. Undoubtedly, our mind performs some kind of translation between "the concept three" and "all groups of three objects in the world", and we can label this translation the "relationship".

Continuing on, this is already sufficient to produce a basic concept of number. For example, we might notice that, according to our model, our past experience was as follows: whenever we modelled a collection of standard-weight packages as numerically being three, we also modelled our subsequent attempt to carry these packages as having been successful, but when we modelled the packages as numerically being four, we modelled our attempt as having been unsuccessful. After many different interactions involving many different numbers, objects and activities, we would have a basic conception of what a number is.
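To make this concrete, here is a minimal sketch, entirely of my own invention (the experience data, names and success summary are illustrative assumptions, not anything from the post): an agent that simply tabulates how carrying attempts went for each count it modelled.

```python
# Toy sketch: an agent forms a crude conception of "how many" from experience.
# The experience data and the probability summary are illustrative assumptions.
from collections import defaultdict

# Each experience: (how many packages were modelled, whether the carry succeeded)
experiences = [(1, True), (2, True), (3, True), (4, False), (3, True), (4, False)]

outcomes = defaultdict(list)
for count, succeeded in experiences:
    outcomes[count].append(succeeded)

# The basic conception: a mapping from counts to expected success.
expectation = {count: sum(results) / len(results) for count, results in outcomes.items()}
print(expectation)  # {1: 1.0, 2: 1.0, 3: 1.0, 4: 0.0}
```

Nothing in this sketch requires the counts to "exist" beyond their role in predicting outcomes, which is the only point it is meant to make.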

Five more points:

  • The claim is not that this basic conception exhausts what it is to be a number - in fact, we'll even extend this definition further ourselves - but rather that if we have good reason to believe that numbers exist independently, we should be able to reach that conclusion by starting from this basic conception and examining what can be said.
  • The claim is not that all people have this conception, as many people might not be able to talk you through the details or might insist that "three properly refers to the abstract number and shouldn't be applied to the above model". Humans have all kinds of messy conceptions. Instead, I am suggesting a path for producing a more deeply thought-out conception, much like Descartes wasn't trying to explain why people believe in an external world, but attempting to put that belief on firmer footing. I am just suggesting that if we want to say something about numbers whilst making as few assumptions as possible, we should begin by noticing this pattern, regardless of whether we end up using the label "number".
  • I haven't attempted to elaborate on the details of the learning process. To be honest, this isn't particularly relevant to this argument. I think we have a reasonable broad-strokes understanding, even if we don't know all the details.
  • Our mind has a finite capacity, so this initial conception of numbers only includes finitely many numbers. Note that it's not that we can do whatever we want with lower numbers - an agent whose maximum representable number is 1000 would not be able to load both 1000 and 998 into its memory in order to calculate 1000 - 998. So the "maximum number" depends on how much storage is required for other operations. Further, more complicated agents will undoubtedly allow compressed representations of some numbers, much like I can write a very large number in exponential notation without writing out all of its digits.
  • We say that a mind/computer can directly represent a positive integer if it can reach that number by counting (i.e. representing every number in between). A toy sketch of this, together with the capacity limit from the previous point, follows this list.
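Here is that minimal sketch. The capacity limit of 1000, the shared-storage constraint and the counting loop are all illustrative assumptions of mine, not claims about how real minds store numbers.

```python
# Toy sketch: a mind with a hard limit on the numbers it can hold.
from typing import Optional

MAX_REPRESENTABLE = 1000  # illustrative capacity limit

def can_directly_represent(n: int) -> bool:
    """Directly representing n = reaching it by counting,
    representing every number in between."""
    current = 0
    while current < n:
        current += 1
        if current > MAX_REPRESENTABLE:
            return False  # ran out of capacity before reaching n
    return True

def subtract(a: int, b: int) -> Optional[int]:
    """Subtraction needs both operands in memory at once, so the effective
    maximum is lower than MAX_REPRESENTABLE (a crude shared-storage model)."""
    if a + b > MAX_REPRESENTABLE:
        return None  # can't hold both 1000 and 998 at once
    return a - b

print(can_directly_represent(998))  # True
print(subtract(1000, 998))          # None: the operation exceeds capacity
```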

We will now describe how to produce a few long chains of meta-models, similar to my proposed Extended Picture Theory (see that post for a diagram; feel free to skip the sections after the statement of the lesson).

We will start by noting what is missing. One aspect missing above is that our model isn't embedded; that is, there's no assumption that our model is itself contained in the world. If we had a sufficiently good understanding of neuroscience, we would be able to produce a model of how the world relates to our model.

Another possible criticism is that our initial conception of numbers only exists in our mind. Most people would say that this isn't an accurate description of numbers, since different people seem to be able to come to the same conclusions as us, and that this gives us good reason to think of mathematical truths as objective.

We'll address this by precisely stating the evidence that there is for this conclusion. In addition to what we said before about how we have a model of our model, the world and the relation, our model of the world contains other physical agents. We have a model of these physical agents (or parts of the world-model) having corresponding models. Then we have a model of those agents who model the world in particular ways being more effective in physical tasks where maths is relevant. On top of this, we have a model of these agents typically modelling maths in the same way. This is a large part of why mathematical rules appear to us to be abstract entities, though of course what we've said so far doesn't require this.

It might be worthwhile breaking down the process of concluding that agents handle maths in the same way. For a start, we'll note that different well-functioning agents have different capacities, so they won't handle maths in exactly the same way, but within these capacity limits they generally will.

Let's consider the Peano Axioms:

  • The claim that "0 is a natural number" is basically the claim that there's some number that serves particular general functions such as - you have no more tasks left to do today or you can't see any tigers in your visual field. Note that for a long time 0 wasn't considered a number. This isn't an issue because once we get any form of maths up and running we can use it to demonstrate the similarities of arithmetic with and without the zero.
  • "For every natural number x, x = x" means things along the lines of: if I'm thinking of a number and I count out that many pennies and then I count how many pennies I have in front of me, I'll end up with the same number I started with. Or rather, this is true in my model.
  • "For all natural numbers x and y, if x = y, then y = x" means that if I tell my brain to compare x to y then I tell it to compare y to x, I'll get the same result assuming nothing went wrong. Similarly if I count how many eggs there are and how many coins and conclude they are equal, then subject to nothing changing, I should get the same result. Again, to be precise, this is what my model claims.
  • "For every natural number n, S(n) is a natural number" - this axiom is interesting as we're utilising finite frames. What we can say is that this holds until we start running into capacity limits. Later we'll say that we can abstractly model this axiom as holding above this limit, but we'll leave it for now. Just note that it's not just this axiom that runs into capacity limits, but the other ones as well (just because we can load x into memory doesn't mean we can load it twice to check "x=x").

The other axioms can be broken down similarly. We also require rules about how to combine these axioms using logic. The axioms of logic are very similar to the axioms of maths. Again, we notice that agents who function a particular way perform well on a particular set of tasks that involve logic. The rule "A ⇒ A ∨ B" basically means that these well-functioning agents tend to operate as follows: we have an OR operation in our minds, and whether we run the sequence <check A> or <check A, check B, A ∨ B>, we never seem to get the former being true without the latter also being true.
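A minimal sketch of this reading of a logical rule (the exhaustive check over truth values is my own illustration, not anything from the post):

```python
# Toy sketch: the rule as a regularity over every way the checks can come out.
from itertools import product

for A, B in product([False, True], repeat=2):
    disjunction = A or B                # the sequence <check A, check B, A ∨ B>
    assert not (A and not disjunction)  # never: A true but A ∨ B false

print("A ⇒ A ∨ B held in every case checked")
```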

Having explored our belief that well-functioning agents generally model maths in the same way as us, we now address our belief that they also model maths as shared among most well-functioning agents. We handle this similarly to how we handled beliefs about maths being shared - that is, we note that we model other agents as having similar beliefs, and then climb up to a higher meta-level to show that we don't merely regard this belief as our own personal opinion.

However, we can't directly model ourselves as climbing up infinite meta-levels, as our (current) conception of number is finitely limited. We can understand the notion of an agent having a finite capacity by modelling agents smaller than us. We understand by analogy that there could be agents that stand in a similar position to us. Even though we can't model the complete operation of agents larger than us, we seem to be able to model them on an abstract level.

We'll come back to this in a second. However, first we note that we can abstractly model agents larger than us, and can then abstractly model agents larger than them, and so on. This produces another chain, although again it only goes up finitely many levels, as we can only represent finitely many numbers.

We'll try to climb even higher, but first we'll try to understand how we can abstractly model an agent larger than us. For example, I seem to be able to say that an agent with some astronomically large maximum storage - one I can only write in compressed form - will eventually be full if at each step it fills the next memory location with a one.

In order to explain this, we need to discuss compression. Such a number can be roughly modelled as a reference to the number that would be produced if we ran a program that began with the number one, doubled it 100 times, then took the result and doubled two that many times (obviously there are multiple programs that could compute this number). Even if we can't run such a program due to capacity limits, we can run smaller programs and notice patterns in how they behave.
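Here's a minimal sketch of that idea (the particular program and the small-scale probing are my own illustrative choices): the number is represented by a program we could in principle run, and we learn about it by running scaled-down versions.

```python
# Toy sketch: a compressed representation of a huge number as a program,
# plus scaled-down runs that reveal the pattern without ever running it in full.

def doubled(start, times):
    n = start
    for _ in range(times):
        n *= 2
    return n

def huge_number():
    intermediate = doubled(1, 100)   # one doubled 100 times = 2**100
    return doubled(2, intermediate)  # then double two that many times

# We never call huge_number(); instead we probe small analogues of it.
for k in range(1, 6):
    print(k, doubled(2, doubled(1, k)))  # pattern: 2 * 2**(2**k)
```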

However, it's science that uses induction, whilst maths uses proofs. How do we generate proofs? This requires two elements - axioms and rules for combining mathematical knowledge. We've discussed these elements before; we also noted that we can't immediately assert these axioms above our capacity limits.

Well, we can notice that, as far as we can see, smaller agents would be correct if they assumed that these axioms held above their capacity. So it doesn't seem unreasonable for us to assert the same. If we accept this assumption, then this provides a good reason to trust proofs of claims about numbers that we wouldn't be able to represent at all without this kind of compression.

Note that we can use this ability to abstractly represent numbers as the results of computations that we never actually run to climb higher than we did before. For example, we can abstractly represent a program that climbs some astronomically large number of levels of meta-model, even if we can't directly represent that many levels of meta-model.

Again, this doesn't get us to infinity, as even with abstract representations there's still a highest number we can represent. You might think that we could represent infinity by asking how long "while true: continue" runs for, but we can't model this as running forever, as we don't yet have a concept of infinity and perhaps never will.

One thing we can do, though, is model the existence of a number higher than any we can represent, even in an abstract sense. The way we do this is by imagining a really small agent which, say, might be so memory-constrained that it can't model anything over a million, no matter how many abstraction levels it is allowed to climb. Since we can imagine an agent smaller than us in that position, we can analogously imagine ourselves being in such a position as well, although we can't say anything specific about such a number other than that it is too large for us to model.

At the end of this process, we never ended up in a position where it was necessary to claim the existence of infinity or that numbers have an objective non-mind-dependent, non-physical existence. This is not quite the same as claiming that they don't exist, but it may very well be suggestive. Although the fact that the model pointed to numbers too large for us to model in any way undermines this to an extent.

On the other hand, in Extended Picture Theory we never reached the point of having an explanation of exactly what a model corresponding to reality means, but that doesn't mean that correspondence doesn't have a reality that exists outside the model. Similarly, it might be argued that our failure to reach either infinity or mind-independent, physically-independent numbers is just a limitation of what we can express and nothing more.

Another approach to attempting to define infinity might be to start from the existence of time and say, for example, that the number of one-second intervals is infinite. We could similarly take all the one-meter intervals along a particular direction in space and define infinity that way. The problem with this is that we typically want to say that space is infinite or time is infinite, but if we define infinity in terms of time, say, then we wouldn't be able to say that time is infinite without it being a tautology (although we might still be able to say that space is infinite).



Could an agent be able to manage picking up four items without invoking the concept four? (Also Wittgenstein and the apple store)

When one concludes that arithmetic with or without zero does not run into problems, that can be understood as a statement about an infinite amount of numbers. If there is no need to adhere to a particular ritual of cognition, and one manages to do an "infinity task", then surely other agents would model such an agent as being proficient with infinity?

This sounds a lot like taking the position that reals don't exist but floats do, and then talking about doubles and ever-higher-precision things instead of precision-agnostic things.

TAG:

"At the end of this process, we never ended up in a position where it was necessary to claim the existence of infinity or"

You generally "must" assume the "existence" of numbers behind finite positive integers in order that solve certain kinds of problem. By that's a hypothetical "must", not a categorical one. You can sacrifice the ability to solve those problems as an alternative.

"that numbers have an objective non-mind-dependent, non-physical existence."

None of the above requires existence in a serious ontological sense.