I'm a mathematics undergraduate in the UK, in my final year, and I have been thinking a lot about Platonism. I find myself largely unconvinced by attempts to naturalise the metaphysics behind the ontology of mathematical objects, though I am not unsympathetic to those attempts: of the ones I know, I can see their point.
I will state plainly here that, as of this post, I am a Platonist (by which I mean that I believe mathematical objects are abstract objects, though obviously I am unsure of exactly what this entails), and I want to articulate why. I will mostly be focusing on the view that 'mathematics is just a useful tool/language, and it has no independent abstract existence outside of this role'.
Most of the posts I've seen on LessWrong talk about natural numbers as the object of our attention when we discuss mathematics, though I could have missed some. To be sure, the natural numbers are about as quintessentially 'MATHEMATICS' as you can get, but that's not what I really want to discuss. I want to talk about abstractions. Hilbert spaces, Banach spaces, measure spaces, Galois symmetries, categories, topoi, etc. are just some examples of extremely abstract notions that offer powerful and, from my perspective, necessary insight into applied mathematics.
Something that has almost plagued me over this final year is the question of how the abstract notions of modern-day mathematics have applicability to the real world. I am not saying 'mathematics fits the world so well, we are so lucky to have such a versatile tool'. Rather, I am asking: 'Why should it be the case that an abstract notion has such powerful applicability at all, to the point where you can derive iff statements about applied structures?' I don't buy that we make mathematics fit the world, per se, although we certainly do that insofar as we are trying to model something. It's more that, when we uncover structures built into the general areas of applied mathematics, the mathematics says something like: 'if we assume such and such holds about, say, partial differential equations, then, through a chain of abstract reasoning, we get something like a concrete result within the real world.' It's essentially this process that disturbs me. The process can fail in the real world, especially when we try to model specific phenomena, but those models still have to obey the PDE conditions (from our example), and this holds for both right and wrong models.
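One standard instance of the kind of chain I mean (a sketch; any functional-analysis textbook example would do) is the Lax–Milgram theorem. The abstract statement: if $H$ is a Hilbert space, $a : H \times H \to \mathbb{R}$ is a bounded, coercive bilinear form, and $f$ is a bounded linear functional on $H$, then there is a unique $u \in H$ with

$$a(u, v) = f(v) \quad \text{for all } v \in H.$$

Now take $H = H_0^1(\Omega)$ for a bounded domain $\Omega$, with $a(u,v) = \int_\Omega \nabla u \cdot \nabla v \, dx$ (coercive by the Poincaré inequality) and $f(v) = \int_\Omega g v \, dx$. The purely abstract theorem about Hilbert spaces then says the Poisson problem $-\Delta u = g$ with zero boundary values has exactly one weak solution, and that PDE is what you solve when modelling, say, electrostatics or steady-state heat flow.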
So, here is the crux of my question to you, the thing I am most interested in responses to:
'Why does the role of abstraction hold permanent and seemingly irrefutable sway over the domain of modelling the real world with mathematics?' If mathematics were just a tool, with no independent reality, then shouldn't the probability that we encounter this phenomenon (that is, purely abstract reasoning telling us something objective about the modelling process as a whole, and what's more providing direct utility and insight into how to find solutions to these models) be exceedingly low? If we just made a bunch of stuff up by following semantics and syntax through to their logical end, then the idea that this stuff, which has no ontological independence, should have direct and immediate applicability to the world seems completely the opposite of what you would expect.
I'd appreciate it if you point out anything I've missed in posting this; if it's unclear, I'll re-edit and make it much clearer with examples. Feel free to tear me apart in the comments.
One account is that the particular grouping of features into a definition is "invented", in the same way that the concept of a "tree" is invented; but there is still a pattern in the world corresponding to "tree". But from your original post I think we're in agreement on this point?
I believe Pattern's reasoning above could be summed up by saying that abstraction is a way for us to model the real world, and the process of reasoning abstractly is a way for us to run some sort of efficient simulation with our models. (@Pattern Is that a fair one-line summary?)
In which case, my understanding of your original question is one of two: why is it the case that the world can be *efficiently* simulated? Perhaps your question is even one level deeper: why is it the case that the world can be *simulated* at all? After all, it is possible that the only way to predict the outcome of a physical process is to observe the physical process. (Is this a fair summary of what disturbs you about the PDEs example?)
This could be rephrased slightly more concretely as a question about the Church-Turing thesis: how come there is such a thing as a *universal* Turing machine? Made even more concrete, it turns into a deep physics problem: what kind of laws of physics permit the existence of a universal Turing machine? That's a deep (and technical!) question, which in this particular form was popularized by David Deutsch. This blog post by Michael Nielsen is a good general-audience introduction.
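To make the word *universal* concrete, here is a minimal sketch (my own toy illustration, not anything from Deutsch or Nielsen): a single fixed interpreter that takes the *description* of any Turing machine as data and reproduces that machine's behaviour. The rule format, the `run_tm` name, and the bit-flipping example machine are all invented for the illustration.

```python
def run_tm(rules, accept_states, tape, state="q0", max_steps=10_000):
    """Simulate any Turing machine given purely as data.

    rules: dict mapping (state, symbol) -> (new_state, new_symbol, move),
           where move is -1 (left) or +1 (right).
    """
    cells = dict(enumerate(tape))  # sparse tape; "_" is the blank symbol
    head = 0
    for _ in range(max_steps):
        if state in accept_states:
            break
        symbol = cells.get(head, "_")
        if (state, symbol) not in rules:
            break  # no applicable rule: the machine halts
        state, new_symbol, move = rules[(state, symbol)]
        cells[head] = new_symbol
        head += move
    return state, "".join(cells[i] for i in sorted(cells) if cells[i] != "_")

# A hypothetical example machine: flip every bit of the input, then halt.
flipper = {
    ("q0", "0"): ("q0", "1", +1),
    ("q0", "1"): ("q0", "0", +1),
    ("q0", "_"): ("halt", "_", +1),
}
print(run_tm(flipper, {"halt"}, "1011"))  # -> ('halt', '0100')
```

The point of the sketch is only that `run_tm` itself never changes: the same few lines of interpreter can, in principle, imitate any machine you hand it as data, and the physics question is why our universe permits devices with that property.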