Followup to: Logical Pinpointing, Causal Reference
Take the universe and grind it down to the finest powder and sieve it through the finest sieve and then show me one atom of justice, one molecule of mercy.
- Death, in Hogfather by Terry Pratchett
Meditation: So far we've talked about two kinds of meaningfulness and two ways that sentences can refer; a way of comparing to physical things found by following pinned-down causal links, and logical validity by comparison to models pinned-down by axioms. Is there anything else that can be meaningfully talked about? Where would you find justice, or mercy?
...
...
...
Suppose that I pointed at a couple of piles of apples on a table, a pile of two apples and a pile of three apples.
And lo, I said: "If we took the number of apples in each pile, and multiplied those numbers together, we'd get six."
Nowhere in the physical universe is that 'six' written - there's nowhere in the laws of physics where you'll find a floating six. Even on the table itself there's only five apples, and apples aren't fundamental. Or to put it another way:
Take the apples and grind them down to the finest powder and sieve them through the finest sieve and then show me one atom of sixness, one molecule of multiplication.
Nor can the statement be true as a matter of pure math, comparing to some Platonic six within a mathematical model, because we could physically take one apple off the table and make the statement false, and you can't do that with math.
This question doesn't feel like it should be very hard. And indeed the answer is not very difficult, but it is worth spelling out; because cases like "justice" or "mercy" will turn out to proceed in a similar fashion.
Navigating to the six requires a mixture of physical and logical reference. This case begins with a physical reference, when we navigate to the physical apples on the table by talking about the cause of our apple-seeing experiences:
Next we have to call the stuff on the table 'apples'. But how, oh how can we do this, when grinding the universe and running it through a sieve will reveal not a single particle of appleness?
This part was covered at some length in the Reductionism sequence. Standard physics uses the same fundamental theory to describe the flight of a Boeing 747 airplane, and collisions in the Relativistic Heavy Ion Collider. Nuclei and airplanes alike, according to our understanding, are obeying special relativity, quantum mechanics, and chromodynamics.
We also use entirely different models to understand the aerodynamics of a 747 and a collision between gold nuclei in the RHIC. A computer modeling the aerodynamics of a 747 may not contain a single token, a single bit of RAM, that represents a quark. (Or a quantum field, really; but you get the idea.)
So is the 747 made of something other than quarks? And is the statement "this 747 has wings" meaningless or false? No, we're just modeling the 747 with representational elements that do not have a one-to-one correspondence with individual quarks.
Similarly with apples. To compare a mental image of high-level apple-objects to physical reality, for it to be true under a correspondence theory of truth, doesn't require that apples be fundamental in physical law. A single discrete element of fundamental physics is not the only thing that a statement can ever be compared-to. We just need truth conditions that categorize the low-level states of the universe, so that different low-level physical states are inside or outside the mental image of "some apples on the table" or alternatively "a kitten on the table".
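As a toy sketch in code (the "low-level states" here are entirely made up for illustration, not anything from the post's diagrams), the idea that a high-level sentence gets its truth-condition by categorizing low-level states might look like this:

```python
# Made-up "low-level states" for illustration: a high-level sentence is true
# in a whole class of low-level states, not by matching any single
# fundamental element.
low_level_states = [
    {"table_contents": ["apple", "apple"]},
    {"table_contents": ["apple", "apple", "apple", "apple", "apple"]},
    {"table_contents": ["kitten"]},
    {"table_contents": []},
]

def some_apples_on_table(state):
    """Truth-condition for 'some apples on the table'."""
    return any(thing == "apple" for thing in state["table_contents"])

true_in = [s for s in low_level_states if some_apples_on_table(s)]
print(len(true_in))  # 2 - the sentence is true in exactly these low-level states
```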
Now we can draw a correspondence from our image of discrete high-level apple objects, to reality.
Next we need to count the apple-objects in each pile, using some procedure along the lines of going from apple to apple, marking those already counted and not counting them a second time, and continuing until all the apples in each heap have been counted. And then, having counted two numbers, we'll multiply them together. You can imagine this as taking the physical state of the universe (or a high-level representation of it) and running it through a series of functions leading to a final output:
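Here is a minimal Python sketch of that pipeline, assuming a toy high-level representation of the table (the pile contents and the helper function are illustrative, not taken from the post's diagrams):

```python
# Toy high-level representation of the table: two piles of discrete apple-objects.
# (Illustrative only; the real input would be a model computed from sensory data.)
table = {"pile_a": ["apple", "apple"], "pile_b": ["apple", "apple", "apple"]}

def count_pile(pile):
    """Go from apple to apple, marking each as counted so nothing is counted twice."""
    already_counted = set()
    for apple_id in range(len(pile)):
        if apple_id not in already_counted:
            already_counted.add(apple_id)
    return len(already_counted)

# Run the high-level state through the counting and multiplication functions.
product = count_pile(table["pile_a"]) * count_pile(table["pile_b"])
print(product)  # 6
```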
And of course operations like "counting" and "multiplication" are pinned down by the number-axioms of Peano Arithmetic:
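For reference, the standard recursive definitions by which Peano Arithmetic pins down addition and multiplication are:

```latex
% Successor-based definitions of addition and multiplication in Peano Arithmetic
\begin{align*}
a + 0    &= a         &  a \times 0    &= 0 \\
a + S(b) &= S(a + b)  &  a \times S(b) &= (a \times b) + a
\end{align*}
```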
And we shouldn't forget that the image of the table, is being calculated from eyes which are in causal contact with the real table-made-of-particles out there in physical reality:
And then there's also the point that the Peano axioms themselves are being quoted inside your brain in order to pin down the ideal multiplicative result - after all, you can get multiplications wrong - but I'm not going to draw the image for that one. (We tried, and it came out too crowded.)
So long as the math is pinned down, any table of two apple piles should yield a single output when we run the math over it. Constraining this output constrains the possible states of the original, physical input universe:
And thus "The product of the apple numbers is six" is meaningful, constraining the possible worlds. It has a truth-condition, fulfilled by a mixture of physical reality and logical validity; and the correspondence is nailed down by a mixture of causal reference and axiomatic pinpointing.
I usually simplify this to the idea of "running a logical function over the physical universe", but of course the small picture doesn't work unless the big picture works.
The Great Reductionist Project can be seen as figuring out how to express meaningful sentences in terms of a combination of physical references (statements whose truth-value is determined by a truth-condition directly corresponding to the real universe we're embedded in) and logical references (valid implications of premises, or elements of models pinned down by axioms); where both physical references and logical references are to be described 'effectively' or 'formally', in computable or logical form. (I haven't had time to go into this last part but it's an already-popular idea in philosophy of computation.)
And the Great Reductionist Thesis can be seen as the proposition that everything meaningful can be expressed this way eventually.
But it sometimes takes a whole bunch of work.
And to notice when somebody has subtly violated the Great Reductionist Thesis - to see when a current solution is not decomposable to physical and logical reference - requires a fair amount of self-sensitization before the transgressions become obvious.
Example: Counterfactuals.
Consider the following pair of sentences, widely used to introduce the idea of "counterfactual conditionals":
- (A) If Lee Harvey Oswald didn't shoot John F. Kennedy, someone else did.
- (B) If Lee Harvey Oswald hadn't shot John F. Kennedy, someone else would've.
The first sentence seems agreeable - John F. Kennedy definitely was shot, historically speaking, so if it wasn't Lee Harvey Oswald it was someone. On the other hand, unless you believe the Illuminati planned it all, it doesn't seem particularly likely that if Lee Harvey Oswald had been removed from the equation, somebody else would've shot Kennedy instead.
Which is to say that sentence (A) appears true, and sentence (B) appears false.
One of the historical questions about the meaning of causal models - in fact, of causal assertions in general - is, "How does this so-called 'causal' model of yours, differ from asserting a bunch of statistical relations? Okay, sure, these statistical dependencies have a nice neighborhood-structure, but why not just call them correlations with a nice neighborhood-structure; why use fancy terms like 'cause and effect'?"
And one of the most widely endorsed answers, including nowadays, is that causal models carry an extra meaning because they tell us about counterfactual outcomes, which ordinary statistical models don't. For example, suppose this is our causal model of how John F. Kennedy got shot:
Roughly this is intended to convey the idea that there are no Illuminati: Kennedy causes Oswald to shoot him, does not cause anybody else to shoot him, and causes the Moon landing; but once you know that Kennedy was elected, there's no correlation between his probability of causing Oswald to shoot him and his probability of causing anyone else to shoot him. In particular, there's no Illuminati who monitor Oswald and send another shooter if Oswald fails.
In any case, this diagram also implies that if Oswald hadn't shot Kennedy, nobody else would've - an implication we compute via counterfactual surgery, a.k.a. the do(.) operator, in which a node is severed from its former parents, set to a particular value, and its descendants then recomputed:
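A toy sketch of that surgery in code, with made-up node names and deterministic structural equations (an illustration of the do(.) idea, not the post's actual diagram or Pearl's full formalism):

```python
# Toy structural model of the Oswald example. Node names and equations are
# illustrative stand-ins chosen for this sketch.

def simulate(do=None):
    """Evaluate the model; nodes named in `do` are severed from their parents
    and pinned to the given value (counterfactual surgery), then their
    descendants are recomputed as usual."""
    do = do or {}
    world = {}

    def node(name, compute):
        world[name] = do[name] if name in do else compute()

    node("kennedy_elected",     lambda: True)
    node("oswald_shoots",       lambda: world["kennedy_elected"])  # no Illuminati backup plan
    node("someone_else_shoots", lambda: False)
    node("kennedy_shot",        lambda: world["oswald_shoots"] or world["someone_else_shoots"])
    return world

print(simulate()["kennedy_shot"])                             # True in the actual model
print(simulate(do={"oswald_shoots": False})["kennedy_shot"])  # False: nobody else would've
```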
And so it was claimed that the meaning of the first diagram is embodied in its implicit claim (as made explicit in the second diagram) that "if Oswald hadn't shot Kennedy, nobody else would've". This statement is true, and if all the other implicit counterfactual statements are also true, the first causal model as a whole is a true causal model.
What's wrong with this picture?
Well... if you're strict about that whole combination-of-physics-and-logic business... the problem is that there are no counterfactual universes for a counterfactual statement to correspond-to. "There's apples on the table" can be true when the particles in the universe are arranged into a configuration where there's some clumps of organic molecules on the table. What arrangement of the particles in this universe could directly make true the statement "If Oswald hadn't shot Kennedy, nobody else would've"? In this universe, Oswald did shoot Kennedy and Kennedy did end up shot.
But it's a subtle sort of thing, to notice when you're trying to establish the truth-condition of a sentence by comparison to counterfactual universes that are not measurable, are never observed, and do not in fact actually exist.
Because our own brains carry out the same sort of 'counterfactual surgery' automatically and natively - so natively that it's embedded in the syntax of language. We don't say, "What if we perform counterfactual surgery on our models to set 'Oswald shoots Kennedy' to false?" We say, "What if Oswald hadn't shot Kennedy?" So there's this counterfactual-supposition operation which our brain does very quickly and invisibly to imagine a hypothetical non-existent universe where Oswald doesn't shoot Kennedy, and our brain very rapidly returns the supposition that Kennedy doesn't get shot, and this seems to be a fact like any other fact; and so why couldn't you just compare the causal model to this fact like any other fact?
And in one sense, "If Oswald hadn't shot Kennedy, nobody else would've" is a fact; it's a mixed reference that starts with the causal model of the actual universe where there are actually no Illuminati, and proceeds from there to the logical operation of counterfactual surgery to yield an answer which, like 'six' for the product of apples on the table, is not actually present anywhere in the universe. But you can't say that the causal model is true because the counterfactuals are true. The truth of the counterfactuals has to be calculated from the truth of the causal model, followed by the implications of the counterfactual-surgery axioms. If the causal model couldn't be 'true' or 'false' on its own, by direct comparison to the actual real universe, there'd be no way for the counterfactuals to be true or false either, since no actual counterfactual universes exist.
So that business of counterfactuals may sound like a relatively obscure example (though it's going to play a large role in decision theory later on, and I expect to revisit it then) but it sets up some even larger points.
For example, the Born probabilities in quantum mechanics seem to talk about a 'degree of realness' that different parts of the configuration space have (proportional to the integral over squared modulus of that 'world').
Could the Born probabilities be basic - could there just be a fundamental law of physics which says directly that to find out how likely you are to be in any quantum world, the integral over squared modulus gives you the answer? And the same law could just as easily have said that you're likely to find yourself in a world that goes as the integral of the modulus to the power 1.99999?
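In symbols (my paraphrase of the claim, not a formula quoted from anywhere), such a basic law would say something like:

```latex
% Born rule as a hypothetical basic law, versus an arbitrary alternative exponent
P(\text{finding yourself in world } i) \;\propto\; \int \lvert \psi_i \rvert^{2}
\qquad \text{rather than, say,} \qquad
P(\text{finding yourself in world } i) \;\propto\; \int \lvert \psi_i \rvert^{1.99999}
```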
But then we would have 'mixed references' that mixed together three kinds of stuff - the Schrödinger Equation, a deterministic causal equation relating complex amplitudes inside a configuration space; logical validities and models; and a law which assigned fundamental-degree-of-realness a.k.a. magical-reality-fluid. Meaningful statements would talk about some mixture of physical laws over particle fields in our own universe, logical validities, and degree-of-realness.
This is just the same sort of problem as saying that causal models are meaningful and true relative to a mixture of three kinds of stuff: actual worlds, logical validities, and counterfactuals. You're only supposed to have two kinds of stuff.
People who think qualia are fundamental are also trying to build references out of at least three different kinds of stuff: physical laws, logic, and experiences.
Anthropic problems similarly revolve around a mysterious degree-of-realness, since presumably when you make more copies of people, you make their experiences more anticipate-able somehow. But this doesn't say that anthropic questions are meaningless or incoherent. It says that since we can only talk about anthropic problems using three kinds of stuff, we haven't finished Doing Reductionism to it yet. (I have not yet encountered a claim to have finished Reducing anthropics which (a) ends up with only two kinds of stuff and (b) does not seem to imply that I should expect my experiences to dissolve into Boltzmann-brain chaos in the next instant, given that if all this talk of 'degree of realness' is nonsense, there is no way to say that physically-lawful copies of me are more common than Boltzmann brain copies of me.)
Or to take it down a notch, naive theories of free will can be seen as obviously not-completed Reductions when you consider that they now contain physics, logic, and this third sort of thingy called 'choices'.
And - alas - modern philosophy is full of 'new sorts of stuff'; we have modal realism that makes possibility a real sort of thing, and then other philosophers appeal to the truth of statements about conceivability without any attempt to reduce conceivability into some mixture of the actually-physically-real-in-our-universe and logical axioms; and so on, and so on.
But lest you be tempted to think that the correct course is always to just envision a simpler universe without the extra stuff, consider that we do not live in the 'naive un-free universe' in which all our choices are constrained by the malevolent outside hand of physics, leaving us as slaves - reducing choices to physics is not the same as taking a naive model with three kinds of stuff and deleting all the 'choices' from it. That would be confusing the project of getting the gnomes out of the haunted mine with trying to unmake the rainbow. Counterfactual surgery was eventually given a formal and logical definition, but it was a lot of work to get that far - causal models had to be invented first, and before then, people could only wave their hands frantically in the air when asked what it meant for something to be a 'cause'. The overall moral I'm trying to convey is that the Great Reductionist Project is difficult; it's not a matter of just proclaiming that there are no gnomes in the mine, or that rainbows couldn't possibly be 'supernatural'. There are all sorts of statements that were not originally, or are not presently, obviously decomposable into physical law plus logic; but that doesn't mean you just give up immediately. The Great Reductionist Thesis is that reduction is always possible eventually. It is nowhere written that it is easy, or that your prior efforts were enough to find a solution if one existed.
Continued next time with justice and mercy (or rather, fairness and goodness). Because clearly, if we end up with meaningful moral statements, they're not going to correspond to a combination of physics and logic plus morality.
Part of the sequence Highly Advanced Epistemology 101 for Beginners
Next post: "By Which It May Be Judged"
Previous post: "Causal Universes"
This is a reply to the long conversation below between Esar and RobbBB.
Let me first say that I am grateful to Esar and RobbBB for having this discussion, and double-grateful to RobbBB for steelmanning my arguments in a very proper and reasonable fashion, especially considering that I was in fact careless in talking about "meaningful propositions" when I should've remembered that a proposition, as a term of art in philosophy, is held to be a meaning-bearer by definition.
I'm also sorry about that "is meaningless is false" phrase, which I'm certain was a typo (and a very UNFORTUNATE typo) - I'm not quite sure what I meant by it originally, but I'm guessing it was supposed to be "is meaningless or false", though in the context of the larger debate now that I've read it, I would just say "colorless green ideas sleep furiously" is "meaningless" rather than false. In a strict sense, meaningless utterances aren't propositions so they can't be false. In a looser sense, an utterance like "Maybe we're living in an inconsistent set of axioms!" might be impossible to render coherent under strict standards of meaning, while also being colloquially called 'false' meaning 'not actually true' or 'mistaken'.
I'm coming at this from a rather different angle than a lot of existing philosophy, so let me do my best to clarify. First, I would like to distinguish the questions:
R1) What sort of things can be real?
R2) What thoughts do we want an AI to be able to represent, given that we're not certain about R1?
A (subjectively uncertain probabilistic) answer to R1 may be something like, "I'm guessing that only causal universes can be real, but they can be continuous rather than discrete, and in that sense aren't limited to mathematical models containing a finite number of elements, like finite Life boards."
The answer to R2 may be something like, "However, since I'm not sure about R1, I would also like my AI to be able to represent the possibility of a universe with Time-Turners, even though, in this case, the AI would have to use some generalization of causal reference to refer to the things around it, since it wouldn't live in a universe that runs on Pearl-style causal links."
In the standard sense of philosophy, question R2 is probably the one about 'meaning' or which assertions can be 'meaningful', although actually the amount of philosophy done around this is so voluminous I'm not sure there is a standard sense of 'meaning'. Philosophers sometimes try to get mileage out of claiming things are 'conceivable', e.g., the philosophical catastrophe of the supposed conceivability of P-zombies, and I would emphasize even at this level that we're not trying to get R1-mileage out of things being in R2. For example, there's no rule following from anything we've said so far that an R2-meaningful statement must be R1-possible; and to be particular and specific, wanting to conservatively build an AI that can represent Conway's Game of Life + Time-Turners still allows us to say things like, "But really, a universe like that might be impossible in some basic sense, which is why we don't live there - to speak of our possibly living there may even have some deeply buried incoherence relative to the real rules for how things really have to work - but since I don't know this to be true, as a matter of my own mere mental state, I want my AI to be able to represent the possibility of time-travel." We might also imagine that a non-logically-omniscient AI needs to have an R2 which can contain inconsistent axiom sets the AI doesn't know to be inconsistent.
For things to be in R2, we want to show how a self-modifying AI could carry out its functions while having such a representation, which includes, in particular, being able to build an offspring with similar representations, while being able to keep track of the correspondence between those offspring's quoted representations and reality. For example, in the traditional version of P-zombies, there's a problem with 'if that was true, how could you possibly know it?' or 'How can you believe your offspring's representation is conjugate to that part of reality, when there's no way for it to maintain a correspondence using causal references?' This is the problem of a SNEEZE_VAR in the Matrix where we can't talk about whether its value is 0 or 1 because we have no way to make "0" or "1" refer to one binary state rather than the other.
Since the problems of R2 are the AI-conjugates of problems of reference, designation, maintenance of a coherent correspondence, etcetera, they fall within the realm of problems that I think traditional philosophy considers to be problems of meaning.
I would say that in human philosophy there should be a third issue, R3, which arises from our dual desire to take seriously somebody's honest attempt to talk with us, while not handing them conclusions about reality merely because we let their utterance into the conversation.
In other words, we want to avoid the twin errors of (1) preemptively shooting down somebody who is making an honest effort to talk to us by claiming that all their words are meaningless noises, and (2) trying to extract info about reality just by virtue of having an utterance admitted into a debate, turning a given inch into a taken mile.
This leads me to think that human philosophers should also have a third category R3:
R3) What sort of utterances can we argue about in English?
which would roughly represent what sort of things 'feel meaningful' to a flawed human brain, including things like P-zombies or "I say that God can make a rock so heavy He can't lift it, and then He can lift it!" - admitting something into R3 doesn't mean it's logically possible, coherent, or 'conceivable' in some rigorous sense that you could then extract mileage from, it just means that we can go on having a conversation about it for a while longer.
When somebody comes to us with the P-zombie story, and claims that it's "conceivable" and they know this on account of their brain feeling able to conceive it, we want to reply, "That's what I would call 'arguable' (R3), and if you try to treat your intuitions about arguability as data, they're only directly data about which English sentences human brains can affirm. If you want to establish some stronger property that you could get mileage from, such as coherence or logical possibility or reference-ability, you'll have to argue for that separately from your brain's direct access to the mere affirmability of a mere English utterance."
At the same time, you're not shoving them away from the table like you would "colorless green ideas sleep up without clam any"; you're actually going to have a conversation about P-zombies, even though you think that in stricter senses of meaning like R2, the conversation is not just false but meaningless. After all, you could've been wrong about that nonmembership-in-R2 part, and they might be about to explain that to you.
The Great Reductionist Thesis is about R1 - the question of what is actually real - but it's difficult to have something that lies in a reductionist's concept of a strict R2, turn out to be real, such that the Great Reductionist Thesis is falsified. For example, if we think R1 is about causal universes, and then it turns out we're in Timetravel Life, the Great Reductionist Thesis has been confirmed, because Timetravel Life still has a formal logical description. Just about anything I can imagine making a Turing-computable AI refer to will, if real, confirm the Great Reductionist Thesis.
So is GRT philosophically vacuous by virtue of being philosophically unfalsifiable? No: to take an extreme case, suppose we have an uncomputable and non-logically-axiomatizable sensus divinitatis enabling us to directly know God's existence, and by baptizing an AI we could give it this sensus divinitatis in some way integrated into the rest of its mind, meaning that R2, R1, and our own universe all include things referable-to only by a sensus divinitatis. Then arguable utterances along the lines of "Some things are inherently mysterious" would have turned out not just to be in R2, but to actually be true; and the Great Reductionist Thesis would be false - contrary to my current belief that such utterances are not only colloquially false, but even meaningless for strict senses of meaning. But one is not licensed to conclude anything from my having allowed a sensus divinitatis to be a brief topic of conversation, for by that I am not committing to admitting that it was strictly meaningful under strong criteria such as might be proposed for R2, but only that it stayed in R3 long enough for a human brain to say some informal English sentences about it.
Does this mean that GRT itself is merely arguable - that it is an assertion living only in R3? But tautologies can be meaningful in GRT, since logic is within "physics + logic". It looks to me like a completed theory of R2 should be something like a logical description of a class of universes and a class of representations corresponding to them, which would itself be in R2 as pure math; and the theory-of-R1 "Reality falls within this class of universes" could then be physically true. However, many informal 'negations' of R2 like "What about a sensus divinitatis?" will only be 'arguable' in a human R3, rather than themselves being in R2 (as one would expect!).
R3) "What sort of utterances can we argue about in English?" is (perhaps deliberately) vague. We can argue about colorless green ideas, if nothing else at the linguistic level. Perhaps R3 is not about meaning, but about debate etiquette: What are the minimum standards for an assertion to be taken seriously as an assertion (i.e., not as a question, interjection, imperative, glossolalia, etc.). In that case, we may want to break R3 down into a number of sub-questions, since in different contexts there will be different standards for the admissibili... (read more)