"As far as the laws of mathematics refer to reality, they are not certain; and as far as they are certain, they do not refer to reality."
-- Albert Einstein
You seem to be trying to arrive, somewhat independently of how it is typically done in science and mathematics, at distinct concepts of accurate, precise, and predictive. As I'm using them, each of these terms can describe a theory.
Precise - well defined and repeatable, with little or no error.
Accurate - descriptive of reality, as far as we know.
Predictive - we can extend this theory to describe new results accurately.
A theory can be any of these things, on different domains, to different degrees. A mathematical theory is precise*, but that need not make it accurate or predictive. It is of course a dangerous mistake, when using a theory, to conflate these properties, or to extend them beyond the domains on which they apply. That is why these distinctions form part of the foundation of the scientific method.
*There's a caveat here, but that would be an entirely different discussion.
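To make the precise/accurate distinction concrete, here is a minimal sketch of my own (in Python, with made-up numbers): two simulated instruments measure a quantity whose true value is 10.0; one is precise but inaccurate, the other accurate but imprecise.

```python
import random

random.seed(0)
TRUE_VALUE = 10.0  # the quantity being measured (hypothetical)

def measure(bias, spread, n=1000):
    """Simulate n readings with a systematic bias and random spread;
    return their mean (accuracy) and standard deviation (precision)."""
    readings = [TRUE_VALUE + bias + random.gauss(0, spread) for _ in range(n)]
    mean = sum(readings) / n
    std = (sum((r - mean) ** 2 for r in readings) / n) ** 0.5
    return mean, std

# Precise but inaccurate: tightly repeatable readings around the wrong value.
print(measure(bias=2.0, spread=0.01))  # ~ (12.0, 0.01)

# Accurate but imprecise: scattered readings centered on the true value.
print(measure(bias=0.0, spread=2.0))   # ~ (10.0, 2.0)
```

Neither instrument is predictive in itself; predictiveness only enters once a theory uses such measurements to anticipate new ones.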
I have not read Kuhn's work, but I have read some Ptolemy, and if I recall correctly he is pretty careful not to claim that the circles in his astronomy are present in some mechanical sense. (Copernicus, on the other hand, literally claims that the planets are moved by giant transparent spheres centered around the sun!)
In his discussion of his hypothesis that the planets' motions are simple, Ptolemy emphasizes that what seems simple to us may be complex to the gods, and vice versa. (This seems to me to be very similar to the distinction between concepts ...
Please see this previous comment of mine.
The point here is that "1+1=2" should not be taken as a statement about physical reality, unless and until we have agreed (explicitly!) on a specific model of the world -- that is, a specific physical interpretation of those mathematical terms. If that model later turns out not to correspond to reality, that's what we say; we don't say that the mathematics was incorrect. (A sketch of this distinction follows the examples below.)
Thus, examples of things not to say:
"Relativity disproves Euclidean geometry."
"Quantum mechanics disproves classica
As the other commenters have indicated, I think that your distinction is really just the distinction between physics and mathematics.
I agree that mathematical assertions have different meanings in different contexts, though. Here's my attempt at a definition of mathematics:
Mathematics is the study of very precise concepts, especially of how they behave under very precise operations.
I prefer to say that mathematics is about concepts, not terms. There seems to me to be a gap between, on the one hand, having a precise concept in one's mind and, on the other...
Graham's point is straightforward if expressed as 'pure maths is the study of terms with precise meanings'.
Which raises the question of whether it really matters if the conceptualisation is wrong, as long as the numbers are right. Isn't instrumental correctness all that really matters?
I'm not in the business of telling people what values to have, but if you are a physicalist, you are committed to more than instrumental correctness.
The fact that predictiveness has almost nothing to do with accuracy, in the sense of correspondence, is one of the outstanding problems with physicalism.
Relativity teaches us that "the earth goes around the sun" and "the sun goes around the earth, and the other planets move in complicated curves" are both right. So to say, "Those positions [calculated by epicycles] were right but they had it conceptualised all wrong," makes no sense.
Hence, when you say the epicycles are wrong, all you can mean is that they are more complicated and harder to work with. This is a radical redefinition of the word "wrong".
So, basically, I disagree completely with your conclusion. You can't say that a representation gives the right answers, but lies.
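To make the frame-relativity point concrete, here is a toy sketch of my own (assuming, falsely but harmlessly, circular coplanar orbits): the heliocentric and geocentric descriptions differ only by a change of origin, and the "complicated curves" -- the retrograde loops that epicycles were built to reproduce -- fall out automatically.

```python
import math

def heliocentric(radius_au, period_yr, t):
    """Toy circular orbit around the Sun."""
    angle = 2 * math.pi * t / period_yr
    return (radius_au * math.cos(angle), radius_au * math.sin(angle))

def geocentric(radius_au, period_yr, t):
    """The same motion with Earth at the origin: a pure change of frame."""
    px, py = heliocentric(radius_au, period_yr, t)
    ex, ey = heliocentric(1.0, 1.0, t)  # Earth: 1 AU, 1 year
    return (px - ex, py - ey)

# Mars, with rounded values (1.52 AU, 1.88 yr). Traced over time, the
# geocentric curve loops back on itself; neither frame is "the" right one,
# since each is just a translation of the other.
for t in (0.0, 0.5, 1.0, 1.5):
    print(geocentric(1.52, 1.88, t))
```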
Isn’t instrumental correctness all that really matters? We might think so, but this is not true. How would Pluto’s existence have been predicted under an epicycle conceptualisation? How would we have thought about space travel under such a conceptualisation?
Your counterexamples don't seem apposite to me. Out-of-sample predictive ability strikes me as an instrumental good.
To add to what others have already commented...
It is theoretically possible to accurately describe the motions of celestial bodies using epicycles, though one might need infinitely many of them, with epicycles themselves riding on epicycles. If you think there's something wrong with the math, it won't be its inability to describe the motion of celestial bodies; rather, the objection will come down to feasibility, simplicity, usefulness, and other such concerns.
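This is essentially the observation that epicycles-on-epicycles form a Fourier series: enough circles riding on circles can approximate any reasonably well-behaved closed curve. A minimal sketch in Python (the square "orbit" is an arbitrary test shape of my own):

```python
import cmath

def epicycle_coefficients(points, n_epicycles):
    """Discrete Fourier coefficients of a sampled closed curve; coefficient
    c_k is one epicycle with radius abs(c_k) and angular speed k."""
    n = len(points)
    return {k: sum(p * cmath.exp(-2j * cmath.pi * k * i / n)
                   for i, p in enumerate(points)) / n
            for k in range(-(n_epicycles // 2), n_epicycles // 2 + 1)}

def reconstruct(coeffs, t):
    """Position at time t in [0, 1): the sum of circles riding on circles."""
    return sum(c * cmath.exp(2j * cmath.pi * k * t) for k, c in coeffs.items())

# A decidedly non-circular closed "orbit": a unit square, as complex samples.
side = [i / 25 for i in range(25)]
square = ([complex(s, 0) for s in side] + [complex(1, s) for s in side]
          + [complex(1 - s, 1) for s in side] + [complex(0, 1 - s) for s in side])

coeffs = epicycle_coefficients(square, 21)  # 21 epicycles
for t in (0.0, 0.25, 0.5):
    print(reconstruct(coeffs, t))  # already roughly traces the square
```

More epicycles give a better fit, which is exactly the point: "fits the motions" is cheap, so feasibility and simplicity have to do the real work in choosing between theories.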
While 'accurate' and 'precise' are used as synonyms in ordinary language, please never use ...
Okay. I have several sources of skepticism about infinite sets.
- One has to do with my never having observed a large cardinal.
- One has to do with the inability of first-order logic to discriminate different sizes of infinite set (any countably infinite set of first-order statements that has an infinite model has a countably infinite model -- i.e. a first-order theory of e.g. the real numbers has countable models as well as the canonical uncountable model), together with the fact that higher-order logic proves exactly what a many-sorted first-order logic proves, no more and no less.
- One has to do with the breakdown of many finite operations, such as size comparison, in a way that e.g. prevents me from comparing two "infinite" collections of observers to determine anthropic probabilities.
The chief argument against my skepticism has to do with the apparent physical existence of minimal closures and continuous quantities, two things that cannot be defined in first-order logic but that would, apparently, if you take higher-order logic at face value, suffice respectively to specify the existence of a unique infinite collection of natural numbers and a unique infinite collection of points on a line.
Another point against my skepticism is that first-order set theory proper, and not just first-order Peano Arithmetic, is useful for proving e.g. the totality of the Goodstein function. But while a convenient proof uses infinite ordinals, it's not clear that you couldn't build an AI that got by just as well on computable functions, without ever having to think about infinite sets.
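For what it's worth, the Goodstein function itself is perfectly computable; the infinite ordinals enter only into the classical termination proof. Here is a minimal sketch in Python of the hereditary-base "bump the base, then subtract one" step, which a machine can run without ever representing an ordinal:

```python
def bump_base(n, b):
    """Write n in hereditary base-b notation, then replace every b by b+1."""
    if n == 0:
        return 0
    result, power = 0, 0
    while n > 0:
        digit = n % b
        # Exponents are themselves rewritten hereditarily in the new base.
        result += digit * (b + 1) ** bump_base(power, b)
        n //= b
        power += 1
    return result

def goodstein(n, steps):
    """The first `steps` steps of the Goodstein sequence starting at n."""
    seq, base = [n], 2
    for _ in range(steps):
        if n == 0:
            break
        n = bump_base(n, base) - 1
        base += 1
        seq.append(n)
    return seq

print(goodstein(3, 5))  # [3, 3, 3, 2, 1, 0]      -- hits zero quickly
print(goodstein(4, 5))  # [4, 26, 41, 60, 83, 109] -- also hits zero, but only
                        # after unimaginably many steps, per the ordinal proof
```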
My position can be summed up as follows: I suspect that an AI does not have to reason about large infinities, or possibly any infinities at all, in order to deal with reality.
I reject infinity as anything more than "a number that is big enough for its smallness to be negligible for the purpose at hand."
My reason for rejecting infinity in its usual sense is very simple: it doesn't communicate anything. Here you said (about communication) "When you each understand what is in the other's mind, you are done." In order to communicate, there has to be something in your mind in the first place, but don't we all agree infinity can't ever be in your mind? If so, how can it be communicated?
Edit to clarify: I worded t...
In the latest Rationality Quotes thread, CronoDAS quoted Paul Graham:
It would not be a bad definition of math to call it the study of terms that have precise meanings.