
The Useful Idea of Truth

Post author: Eliezer_Yudkowsky | 02 October 2012 06:16PM | 77 points

(This is the first post of a new Sequence, Highly Advanced Epistemology 101 for Beginners, setting up the Sequence Open Problems in Friendly AI.  For experienced readers, this first post may seem somewhat elementary; but it serves as a basis for what follows.  And though it may be conventional in standard philosophy, the world at large does not know it, and it is useful to know a compact explanation.  Kudos to Alex Altair for helping in the production and editing of this post and Sequence!)


I remember this paper I wrote on existentialism. My teacher gave it back with an F. She’d underlined true and truth wherever it appeared in the essay, probably about twenty times, with a question mark beside each. She wanted to know what I meant by truth.
-- Danielle Egan

I understand what it means for a hypothesis to be elegant, or falsifiable, or compatible with the evidence. It sounds to me like calling a belief ‘true’ or ‘real’ or ‘actual’ is merely the difference between saying you believe something, and saying you really really believe something.
-- Dale Carrico

What then is truth? A movable host of metaphors, metonymies, and anthropomorphisms: in short, a sum of human relations which have been poetically and rhetorically intensified, transferred, and embellished, and which, after long usage, seem to a people to be fixed, canonical, and binding.
-- Friedrich Nietzsche


The Sally-Anne False-Belief task is an experiment used to tell whether a child understands the difference between belief and reality. It goes as follows:

  1. The child sees Sally hide a marble inside a covered basket, as Anne looks on.

  2. Sally leaves the room, and Anne takes the marble out of the basket and hides it inside a lidded box.

  3. Anne leaves the room, and Sally returns.

  4. The experimenter asks the child where Sally will look for her marble.

Children under the age of four say that Sally will look for her marble inside the box. Children over the age of four say that Sally will look for her marble inside the basket.

(Attributed to: Baron-Cohen, S., Leslie, A. M. and Frith, U. (1985) ‘Does the autistic child have a “theory of mind”?’, Cognition, vol. 21, pp. 37–46.)

Human children over the age of (typically) four first begin to understand what it means for Sally to lose her marbles - for Sally's beliefs to stop corresponding to reality. A three-year-old has a model only of where the marble is. A four-year-old is developing a theory of mind; they separately model where the marble is and where Sally believes the marble is, so they can notice when the two conflict - when Sally has a false belief.

Any meaningful belief has a truth-condition, some way reality can be which can make that belief true, or alternatively false. If Sally's brain holds a mental image of a marble inside the basket, then, in reality itself, the marble can actually be inside the basket - in which case Sally's belief is called 'true', since reality falls inside its truth-condition. Or alternatively, Anne may have taken out the marble and hidden it in the box, in which case Sally's belief is termed 'false', since reality falls outside the belief's truth-condition.
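A truth-condition can be made concrete with a toy model. In this sketch (the dictionaries and the helper function are my own illustration, not anything from the post), we keep reality and Sally's belief as separate objects, and call the belief 'true' exactly when reality falls inside its truth-condition:

```python
# Toy model of the Sally-Anne task: a belief is 'true' iff reality
# falls inside its truth-condition (illustrative names, not from the post).

reality = {"marble": "basket"}        # where the marble actually is
sally_belief = {"marble": "basket"}   # Sally's mental model of the marble

def is_true(belief, world):
    """A belief is true iff the world satisfies its truth-condition."""
    return all(world.get(k) == v for k, v in belief.items())

print(is_true(sally_belief, reality))  # True: belief matches reality

reality["marble"] = "box"              # Anne moves the marble
print(is_true(sally_belief, reality))  # False: Sally now has a false belief
```

Note that Sally's belief is a separate object that does not change when the world does - which is exactly what makes a false belief possible.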

The mathematician Alfred Tarski once described the notion of 'truth' via an infinite family of truth-conditions:

  • The sentence 'snow is white' is true if and only if snow is white.

  • The sentence 'the sky is blue' is true if and only if the sky is blue.

When you write it out that way, it looks like the distinction might be trivial - indeed, why bother talking about sentences at all, if the sentence looks so much like reality when both are written out as English?

But when we go back to the Sally-Anne task, the difference looks much clearer: Sally's belief is embodied in a pattern of neurons and neural firings inside Sally's brain, three pounds of wet and extremely complicated tissue inside Sally's skull. The marble itself is a small simple plastic sphere, moving between the basket and the box. When we compare Sally's belief to the marble, we are comparing two quite different things.

(Then why talk about these abstract 'sentences' instead of just neurally embodied beliefs? Maybe Sally and Fred believe "the same thing", i.e., their brains both have internal models of the marble inside the basket - two brain-bound beliefs with the same truth condition - in which case the thing these two beliefs have in common, the shared truth condition, is abstracted into the form of a sentence or proposition that we imagine being true or false apart from any brains that believe it.)
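The sentence-as-shared-truth-condition idea can also be sketched in code (a minimal sketch; the world representation and sentence names are my own assumptions): the quoted sentence is just a string, while its truth-condition is a predicate over world-states that any number of brains could share.

```python
# Tarski-style schema: each quoted sentence is paired with a predicate
# over world-states (an illustrative sketch, not from the post).

world = {"snow_color": "white", "sky_color": "blue"}

truth_conditions = {
    "snow is white": lambda w: w["snow_color"] == "white",
    "the sky is blue": lambda w: w["sky_color"] == "blue",
}

def true_in(sentence, w):
    # "The sentence 'X' is true if and only if X" -- evaluated against w.
    return truth_conditions[sentence](w)

# Sally's and Fred's brains can both hold the same truth-condition,
# even though the sentence (a string) and the sky are different things.
print(true_in("snow is white", world))    # True
print(true_in("the sky is blue", world))  # True
```

The sentence and the world-state live in different data structures here, which mirrors the point above: comparing a belief to the marble is comparing two quite different things.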

Some pundits have panicked over the point that any judgment of truth - any comparison of belief to reality - takes place inside some particular person's mind; and indeed seems to just compare someone else's belief to your belief:

So is all this talk of truth just comparing other people's beliefs to our own beliefs, and trying to assert privilege? Is the word 'truth' just a weapon in a power struggle?

For that matter, you can't even directly compare other people's beliefs to your own beliefs. You can only internally compare your beliefs about someone else's belief to your own belief - compare your map of their map, to your map of the territory.

Similarly, to say of your own beliefs that a belief is 'true' just means you're comparing your map of your map to your map of the territory. People are usually not mistaken about what they themselves believe - though there are certain exceptions to this rule - so the map of the map is usually accurate; people are usually right about the question of what they believe.

And so saying 'I believe the sky is blue, and that's true!' typically conveys the same information as 'I believe the sky is blue' or just saying 'The sky is blue' - namely, that your mental model of the world contains a blue sky.

Meditation:

If the above is true, aren't the postmodernists right? Isn't all this talk of 'truth' just an attempt to assert the privilege of your own beliefs over others, when there's nothing that can actually compare a belief to reality itself, outside of anyone's head?

(A 'meditation' is a puzzle that the reader is meant to attempt to solve before continuing. It's my somewhat awkward attempt to reflect the research which shows that you're much more likely to remember a fact or solution if you try to solve the problem yourself before reading the solution; succeed or fail, the important thing is to have tried first. This also reflects a problem Michael Vassar thinks is occurring, which is that since LW posts often sound obvious in retrospect, it's hard for people to visualize the diff between 'before' and 'after'; and this diff is also useful to have for learning purposes. So please try to say your own answer to the meditation - ideally whispering it to yourself, or moving your lips as you pretend to say it, so as to make sure it's fully explicit and available for memory - before continuing; and try to consciously note the difference between your reply and the post's reply, including any extra details present or missing, without trying to minimize or maximize the difference.)

...
...
...

Reply:

The reply I gave to Dale Carrico - who declaimed to me that he knew what it meant for a belief to be falsifiable, but not what it meant for beliefs to be true - was that my beliefs determine my experimental predictions, but only reality gets to determine my experimental results. If I believe very strongly that I can fly, then this belief may lead me to step off a cliff, expecting to be safe; but only the truth of this belief can possibly save me from plummeting to the ground and ending my experiences with a splat.

Since my expectations sometimes conflict with my subsequent experiences, I need different names for the thingies that determine my experimental predictions and the thingy that determines my experimental results. I call the former thingies 'beliefs', and the latter thingy 'reality'.

You won't get a direct collision between belief and reality - or between someone else's beliefs and reality - by sitting in your living-room with your eyes closed. But the situation is different if you open your eyes!

Consider how your brain ends up knowing that its shoelaces are untied:

  • A photon departs from the Sun, and flies to the Earth and through Earth's atmosphere.
  • Your shoelace absorbs and re-emits the photon.
  • The reflected photon passes through your eye's pupil and toward your retina.
  • The photon strikes a rod cell or cone cell, or to be more precise, it strikes a photoreceptor, a form of vitamin-A known as retinal, which undergoes a change in its molecular shape (rotating around a double bond) powered by absorption of the photon's energy. A bound protein called an opsin undergoes a conformational change in response, and this further propagates to a neural cell body which pumps a proton and increases its polarization.
  • The gradual polarization change is propagated to a bipolar cell and then a ganglion cell. If the ganglion cell's polarization goes over a threshold, it sends out a nerve impulse, a propagating electrochemical phenomenon of polarization-depolarization that travels through the brain at between 1 and 100 meters per second. Now the incoming light from the outside world has been transduced to neural information, commensurate with the substrate of other thoughts.
  • The neural signal is preprocessed by other neurons in the retina, further preprocessed by the lateral geniculate nucleus in the middle of the brain, and then, in the visual cortex located at the back of your head, reconstructed into an actual little tiny picture of the surrounding world - a picture embodied in the firing frequencies of the neurons making up the visual field. (A distorted picture, since the center of the visual field is processed in much greater detail - i.e. spread across more neurons and more cortical area - than the edges.)
  • Information from the visual cortex is then routed to the temporal lobes, which handle object recognition.
  • Your brain recognizes the form of an untied shoelace.

And so your brain updates its map of the world to include the fact that your shoelaces are untied. Even if, previously, it expected them to be tied!  There's no reason for your brain not to update if politics aren't involved. Once photons heading into the eye are turned into neural firings, they're commensurate with other mind-information and can be compared to previous beliefs.

Belief and reality interact all the time. If the environment and the brain never touched in any way, we wouldn't need eyes - or hands - and the brain could afford to be a whole lot simpler. In fact, organisms wouldn't need brains at all.

So, fine, belief and reality are distinct entities which do intersect and interact. But to say that we need separate concepts for 'beliefs' and 'reality' doesn't get us to needing the concept of 'truth', a comparison between them. Maybe we can just separately (a) talk about an agent's belief that the sky is blue and (b) talk about the sky itself. Instead of saying, "Jane believes the sky is blue, and she's right", we could say, "Jane believes 'the sky is blue'; also, the sky is blue" and convey the same information about what (a) we believe about the sky and (b) what we believe Jane believes. We could always apply Tarski's schema - "The sentence 'X' is true iff X" - and replace every instance of alleged truth by talking directly about the truth-condition, the corresponding state of reality (i.e. the sky or whatever). Thus we could eliminate that bothersome word, 'truth', which is so controversial to philosophers, and misused by various annoying people.

Suppose you had a rational agent, or for concreteness, an Artificial Intelligence, which was carrying out its work in isolation and certainly never needed to argue politics with anyone. The AI knows that "My model assigns 90% probability that the sky is blue"; it is quite sure that this probability is the exact statement stored in its RAM. Separately, the AI models that "The probability that my optical sensors will detect blue out the window is 99%, given that the sky is blue"; and it doesn't confuse this proposition with the quite different proposition that the optical sensors will detect blue whenever it believes the sky is blue. So the AI can definitely differentiate the map and the territory; it knows that the possible states of its RAM storage do not have the same consequences and causal powers as the possible states of sky.
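The AI's bookkeeping can be made concrete as a small Bayesian calculation. The 90% and 99% figures come from the paragraph above; the sensor's false-positive rate is an assumption I've added for illustration:

```python
# The AI's map vs. territory, with the numbers from the post plus one
# assumed false-positive rate (0.05 is my illustration, not from the post).

p_sky_blue = 0.90               # stored in the AI's RAM: P(sky is blue)
p_sensor_given_blue = 0.99      # P(sensor detects blue | sky is blue)
p_sensor_given_not_blue = 0.05  # assumed: P(sensor detects blue | sky not blue)

# Predicted probability of the sensor reading, before looking:
p_sensor = (p_sky_blue * p_sensor_given_blue
            + (1 - p_sky_blue) * p_sensor_given_not_blue)

# Bayesian update after the sensor actually reports blue:
p_sky_blue_given_sensor = p_sky_blue * p_sensor_given_blue / p_sensor

print(round(p_sensor, 3))                 # 0.896
print(round(p_sky_blue_given_sensor, 3))  # 0.994
```

The two numbers in RAM play different roles: the prior and likelihood determine the AI's experimental prediction, but only the sensor reading - driven by the actual sky - determines which way the update goes.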

But does this AI ever need a concept for the notion of truth in general - does it ever need to invent the word 'truth'? Why would it work better if it did?

Meditation: If we were dealing with an Artificial Intelligence that never had to argue politics with anyone, would it ever need a word or a concept for 'truth'?

...
...
...

Reply: The abstract concept of 'truth' - the general idea of a map-territory correspondence - is required to express ideas such as:

  • Generalized across possible maps and possible cities, if your map of a city is accurate, navigating according to that map is more likely to get you to the airport on time.

  • To draw a true map of a city, someone has to go out and look at the buildings; there's no way you'd end up with an accurate map by sitting in your living-room with your eyes closed trying to imagine what you wish the city would look like.

  • True beliefs are more likely than false beliefs to make correct experimental predictions, so if we increase our credence in hypotheses that make correct experimental predictions, our model of reality should become incrementally more true over time.
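The third bullet can be run as a calculation (toy numbers and hypothesis names are my own invention): a hypothesis whose predictions match reality gains credence as correct predictions accumulate.

```python
# Two hypotheses about how often the sky looks blue: H_true matches
# reality (80%), H_false does not (50%). Toy numbers, my own invention.

true_rate, false_rate = 0.8, 0.5
p_true = 0.5  # start with even credence in H_true

# A deterministic stand-in for reality: 80 blue observations, 20 not-blue.
observations = [True] * 8 + [False] * 2

for _ in range(10):
    for observed_blue in observations:
        lk_true = true_rate if observed_blue else 1 - true_rate
        lk_false = false_rate if observed_blue else 1 - false_rate
        numerator = p_true * lk_true
        p_true = numerator / (numerator + (1 - p_true) * lk_false)

print(p_true > 0.99)  # True: credence has shifted toward the true hypothesis
```

Increasing credence in whichever hypothesis predicted correctly is exactly the rule that makes the model incrementally more true over time - a rule you can only state by generalizing over map-territory correspondences.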

This is the main benefit of talking and thinking about 'truth' - that we can generalize rules about how to make maps match territories in general; we can learn lessons that transfer beyond particular skies being blue.


Next in main sequence:

Complete philosophical panic has turned out not to be justified (it never is). But there is a key practical problem that results from our internal evaluation of 'truth' being a comparison of a map of a map, to a map of reality: On this schema it is very easy for the brain to end up believing that a completely meaningless statement is 'true'.

Some literature professor lectures that the famous authors Carol, Danny, and Elaine are all 'post-utopians', which you can tell because their writings exhibit signs of 'colonial alienation'. For most college students the typical result will be that their brain's version of an object-attribute list will assign the attribute 'post-utopian' to the authors Carol, Danny, and Elaine. When the subsequent test asks for "an example of a post-utopian author", the student will write down "Elaine". What if the student writes down, "I think Elaine is not a post-utopian"? Then the professor models thusly...

...and marks the answer false.

After all...

  • The sentence "Elaine is a post-utopian" is true if and only if Elaine is a post-utopian.

...right?

Now of course it could be that this term does mean something (even though I made it up).  It might even be that, although the professor can't give a good explicit answer to "What is post-utopianism, anyway?", you can nonetheless take many literary professors and separately show them new pieces of writing by unknown authors and they'll all independently arrive at the same answer, in which case they're clearly detecting some sensory-visible feature of the writing.  We don't always know how our brains work, and we don't always know what we see, and the sky was seen as blue long before the word "blue" was invented; for a part of your brain's world-model to be meaningful doesn't require that you can explain it in words.

On the other hand, it could also be the case that the professor learned about "colonial alienation" by memorizing what to say to his professor.  It could be that the only person whose brain assigned a real meaning to the word is dead.  So that by the time the students are learning that "post-utopian" is the password when hit with the query "colonial alienation?", both phrases are just verbal responses to be rehearsed, nothing but an answer on a test.

The two phrases don't feel "disconnected" individually because they're connected to each other - post-utopianism has the apparent consequence of colonial alienation, and if you ask what colonial alienation implies, it means the author is probably a post-utopian.  But if you draw a circle around both phrases, they don't connect to anything else.  They're floating beliefs not connected with the rest of the model. And yet there's no internal alarm that goes off when this happens. Just as "being wrong feels like being right" - just as having a false belief feels the same internally as having a true belief, at least until you run an experiment - having a meaningless belief can feel just like having a meaningful belief.

(You can even have fights over completely meaningless beliefs.  If someone says "Is Elaine a post-utopian?" and one group shouts "Yes!" and the other group shouts "No!", they can fight over having shouted different things; it's not necessary for the words to mean anything for the battle to get started.  Heck, you could have a battle over one group shouting "Mun!" and the other shouting "Fleem!"  More generally, it's important to distinguish the visible consequences of the professor-brain's quoted belief (students had better write down a certain thing on his test, or they'll be marked wrong) from the proposition that there's an unquoted state of reality (Elaine actually being a post-utopian in the territory) which has visible consequences.)

One classic response to this problem was verificationism, which held that the sentence "Elaine is a post-utopian" is meaningless if it doesn't tell us which sensory experiences we should expect to see if the sentence is true, and how those experiences differ from the case if the sentence is false.

But then suppose that I transmit a photon aimed at the void between galaxies - heading far off into space, away into the night. In an expanding universe, this photon will eventually cross the cosmological horizon where, even if the photon hit a mirror reflecting it squarely back toward Earth, the photon would never get here because the universe would expand too fast in the meanwhile. Thus, after the photon goes past a certain point, there are no experimental consequences whatsoever, ever, to the statement "The photon continues to exist, rather than blinking out of existence."

And yet it seems to me - and I hope to you as well - that the statement "The photon suddenly blinks out of existence as soon as we can't see it, violating Conservation of Energy and behaving unlike all photons we can actually see" is false, while the statement "The photon continues to exist, heading off to nowhere" is true. And this sort of question can have important policy consequences: suppose we were thinking of sending off a near-light-speed colonization vessel as far away as possible, so that it would be over the cosmological horizon before it slowed down to colonize some distant supercluster. If we thought the colonization ship would just blink out of existence before it arrived, we wouldn't bother sending it.

It is both useful and wise to ask after the sensory consequences of our beliefs. But it's not quite the fundamental definition of meaningful statements. It's an excellent hint that something might be a disconnected 'floating belief', but it's not a hard-and-fast rule.

You might next try the answer that for a statement to be meaningful, there must be some way reality can be which makes the statement true or false; and that since the universe is made of atoms, there must be some way to arrange the atoms in the universe that would make a statement true or false. E.g. to make the statement "I am in Paris" true, we would have to move the atoms comprising myself to Paris. A literateur claims that Elaine has an attribute called post-utopianism, but there's no way to translate this claim into a way to arrange the atoms in the universe so as to make the claim true, or alternatively false; so it has no truth-condition, and must be meaningless.

Indeed there are claims where, if you pause and ask, "How could a universe be arranged so as to make this claim true, or alternatively false?", you'll suddenly realize that you didn't have as strong a grasp on the claim's truth-condition as you believed. "Suffering builds character", say, or "All depressions result from bad monetary policy." These claims aren't necessarily meaningless, but they're a lot easier to say than to visualize the universe that makes them true or false. Just as asking after sensory consequences is an important hint to meaning or meaninglessness, so is asking how to configure the universe.

But if you say there has to be some arrangement of atoms that makes a meaningful claim true or false...

Then the theory of quantum mechanics would be meaningless a priori, because there's no way to arrange atoms to make the theory of quantum mechanics true.

And when we discovered that the universe was not made of atoms, but rather quantum fields, all meaningful statements everywhere would have been revealed as false - since there'd be no atoms arranged to fulfill their truth-conditions.

Meditation: What rule could restrict our beliefs to just propositions that can be meaningful, without excluding a priori anything that could in principle be true?


  • Meditation Answers - (A central comment for readers who want to try answering the above meditation (before reading whatever post in the Sequence answers it) or read contributed answers.)
  • Mainstream Status - (A central comment where I say what I think the status of the post is relative to mainstream modern epistemology or other fields, and people can post summaries or excerpts of any papers they think are relevant.)

 

Part of the sequence Highly Advanced Epistemology 101 for Beginners

Next post: "Skill: The Map is Not the Territory"

Comments (515)

Comment author: Eliezer_Yudkowsky 02 October 2012 05:26:28AM 5 points [-]

Koan answers here for:

What rule could restrict our beliefs to just propositions that can be meaningful, without excluding a priori anything that could in principle be true?

Comment author: Alex_Altair 02 October 2012 05:47:01AM *  0 points [-]

Solomonoff induction! Just kidding.

Comment author: katydee 02 October 2012 05:50:10AM *  1 point [-]
Comment author: Nisan 02 October 2012 06:31:35AM 0 points [-]

Only propositions that constrain our sensory experience are meaningful.

If it turns out that the cosmologists are wrong and the universe begins to contract, we will have the opportunity to make contact with the civilization that the colonization starship spawns. The proposition "The starship exists" entails that the probability of the universe contracting and us making contact with the descendants of the passengers of the starship is substantial compared to the probability of the universe contracting.

Comment author: Yvain 02 October 2012 06:55:57AM 4 points [-]

Least convenient possible world - we discover the universe will definitely expand forever. Now what?

Or what about the past? If I tell you an alien living three million years ago threw either a red or a blue ball into the black hole at the center of the galaxy but destroyed all evidence as to which, is there a fact of the matter as to which color ball it was?

Comment author: Nisan 02 October 2012 03:42:11PM 0 points [-]

we discover the universe will definitely expand forever. Now what?

You're right, my principle doesn't work if there's something we believe with absolute certainty.

If I tell you an alien living three million years ago threw either a red or a blue ball into the black hole at the center of the galaxy but destroyed all evidence as to which, is there a fact of the matter as to which color ball it was?

If we later find out that the alien did in fact leave some evidence, and recover that evidence, we'll have an opinion about the color of the ball.

Comment author: thomblake 02 October 2012 03:43:42PM 0 points [-]

Least convenient possible world - we discover the universe will definitely expand forever. Now what?

"Possible" is an important qualifier there. Since 0 and 1 are not probabilities, you are not describing a possible world.

Comment author: wedrifid 02 October 2012 03:52:30PM 0 points [-]

"Possible" is an important qualifier there. Since 0 and 1 are not probabilities, you are not describing a possible world.

The comment doesn't lose too much if we take 'definite' to mean 0.99999 instead of 1. (I would tend to write 'almost certainly' in such contexts to avoid this kind of problem.)

Comment author: thomblake 02 October 2012 04:04:29PM 1 point [-]

I think it loses its force entirely in that case. Nisan's proposal was a counterfactual, and Yvain's counter was a possible world where that counterfactual cannot obtain. Since there is no such possible world, the objection falls flat.

Comment author: [deleted] 02 October 2012 04:09:36PM 2 points [-]

Since there is no such possible world

If this claim is meaningful, isn't Nisan's proposal false?

Comment author: thomblake 02 October 2012 06:01:43PM 0 points [-]

No. Why would that be?

Comment author: Nisan 02 October 2012 04:09:58PM 1 point [-]

Yvain's objection fails if "definitely" means "with probability 0.99999". In that case the conditional probability P( encounter civilization | universe contracts) is well-defined.

Comment author: wedrifid 02 October 2012 04:49:10PM 0 points [-]

Yvain's objection fails if "definitely" means "with probability 0.99999". In that case the conditional probability P( encounter civilization | universe contracts) is well-defined.

Oh, I thought I retracted the grandparent. Nevermind---it does need more caveats in the expression for it to return to being meaningful.

Comment author: dankane 02 October 2012 07:00:40AM 4 points [-]

What about the proposition "the universe will cease to exist when I die" (using some definition of "die" that precludes any future experiences, for example, "die for the last time")? Then the truth of this proposition does not constrain sensory input (because it only makes claims about times after which you have no sensory input), but does have behavioral ramifications if you are, for example, deciding whether or not to write a will.

Comment author: Furslid 02 October 2012 07:53:36AM 9 points [-]

Counter-example. "There exists at least one entity capable of sensory experience." What constraints on sensory experience does this statement impose? If not, do you reject it as meaningless?

Comment author: Nisan 02 October 2012 03:52:41PM 1 point [-]

Heh. Okay, this and dankane's similar proposition are good counterexamples.

Comment author: Yvain 02 October 2012 06:46:20AM *  8 points [-]

If a person with access to the computer simulating whichever universe (or set of universes) a belief is about could in principle write a program that takes as input the current state of the universe (as represented in the computer) and outputs whether the belief is true, then the belief is meaningful.

(if the universe in question does not run on a computer, begin by digitizing your universe, then proceed as above)

Comment author: Nisan 02 October 2012 06:51:20AM 1 point [-]

That doesn't help us decide whether there are stars outside the cosmological horizon.

Comment author: Yvain 02 October 2012 06:52:44AM *  4 points [-]

I feel like writing a more intelligent reply than "Yes it does", so could you explain this further?

Comment author: Nisan 02 October 2012 03:39:19PM 1 point [-]

Suppose we are not living in a simulation. We are to digitize our universe. Do we make our digitization include stars outside the cosmological horizon? By what principle do we decide?

(I suppose you could be asking us to actually digitize the universe, but we want a principle we can use today.)

Comment author: Yvain 02 October 2012 10:17:49PM *  0 points [-]

Well, if the universe actually runs on a computer, then presumably that computer includes data for all stars, not just the ones that are visible to us.

If the universe doesn't run on a computer, then you have to actually digitize the universe so that your model is identical to the real universe as if it were on a computer, not stop halfway when it gets too hard or physically impossible.

I don't think any of these principles will actually be practical. Even the sense-experience principle isn't useful. It would classify "a particle accelerator the size of the Milky Way would generate evidence of photinos" as meaningful, but no one is going to build a particle accelerator the size of the Milky Way any more than they are going to digitize the universe. The goal is to have a philosophical tool, not a practical plan of action.

Comment author: Kawoomba 02 October 2012 06:58:03AM *  -2 points [-]

A variant of Löb's theorem, isn't it?

Edit: Downvoted because the parallels are too obvious, or because the comparison seems too contrived? "E"nquiring minds want to know ...

Comment author: MixedNuts 02 October 2012 09:34:25AM 15 points [-]

That has the same problem as atomic-level specifications that become false when you discover QM. If the Church-Turing thesis is false, all statements you have specified thus become meaningless or false. Even using a hierarchy of oracles until you hit a sufficient one might not be enough if the universe is even more magical than that.

Comment author: Salutator 02 October 2012 12:27:41PM 4 points [-]

But that's only useful if you make it circular.

Taking you more strictly at your word than you mean it the program could just return true for the majority belief on empirically non-falsifiable questions. Or it could just return false on all beliefs including your belief that that is illogical. So with the right programs pretty much arbitrary beliefs pass as meaningful.

You actually want it to depend on the state of the universe in the right way, but that's just another way to say it should depend on whether the belief is true.

Comment author: Yvain 02 October 2012 10:14:03PM 2 points [-]

That's a problem with all theories of truth, though. "Elaine is a post-utopian author" is trivially true if you interpret "post-utopian" to mean "whatever professors say is post-utopian", or "a thing that is always true of all authors" or "is made out of mass".

To do this with programs rather than philosophy doesn't make it any worse.

What I'm suggesting is that there is a correspondence between meaningful statements and universal computer programs. Obviously this theory doesn't tell you how to match the right statement to the right computer program. If you match the statement "snow is white" to the computer program that is a bunch of random characters, the program will return no result and you'll conclude that "snow is white" is meaningless. But that's just the same problem as the philosopher who refuses to accept any definition of "snow", or who claims that snow is obviously black because "snow" means that liquid fossil fuel you drill for and then turn into gasoline.

If your closest match to "post-utopian" is a program that determines whether professors think someone is post-utopian, then you can either conclude that post-utopian literally means "something people call post-utopian" - which would probably be a weird and nonstandard word use the same way using "snow" to mean "oil" would be nonstandard - or that post-utopianism isn't meaningful.

Comment author: siodine 02 October 2012 05:03:40PM 0 points [-]

Input->Black box->Desired output. "Black box" could be replaced with"magic." How would your black box work in practice?

Comment author: Dolores1984 02 October 2012 06:47:35AM 1 point [-]

When we try to build a model of the underlying universe, what we're really talking about is trying to derive properties of a program which we are observing (and a component of), and which produces our sense experiences. Probably quite a short program in its initial state, in fact (though possibly not one limited by the finite precision of traditional Turing Machines).

So, that gives us a few rules that seem likely to be general: the underlying model must be internally consistent and mathematically describable, and must have a total K-complexity less than the amount of information in the observable universe (or else we couldn't reason about it).

So the question to ask is really "can I imagine a program state that would make this proposition true, given my current beliefs about my organization of the program?"

This is resilient to the atoms / QM thing, at least, as you can always change the underlying program description to better fit the evidence.

Although, in practice, most of what intelligent entities do can more precisely be described as 'grammar fitting' than 'program induction.' We reason probabilistically, essentially by throwing heuristics at a wall to see what offers marginal returns on predicting future sense impressions, since trying to guess the next word in a sentence by reverse-deriving the original state of the universe-program and iterating it forwards is not practical for most people. That massive mess of semi-rational, anticipatorially-justified rules of thumb is what allows us to reason in the day to day.

So a more pragmatic question is 'how does this change my anticipation of future events?' or 'What sense experiences do I expect to have differently as a result of this belief?'

It is only when we seek to understand more deeply and generally, or when dealing with problems of things not directly observable, that it is practical to try to reason about the actual program underlying the universe.

Comment author: Furslid 02 October 2012 07:50:27AM 1 point [-]

Internal consistency. Propositions must be non self-contradictory. If a proposition is a conjunction of multiple propositions, then those propositions must not contradict each other.

Comment author: RobinZ 02 October 2012 03:00:57PM 1 point [-]

I think the condition is necessary but not sufficient. How would it deal with the post-utopian example in the article text?

Comment author: dankane 02 October 2012 07:52:37AM 0 points [-]

OK. Here's my best shot at it.

Firstly, I don't really like the wording of the Koan. I feel like a more accurate statement of the fundamental problem here is "What rule could restrict our beliefs to propositions that we can usefully discuss whether or not they are true, without excluding any statements on which we would like to base our behavior on whether or not they are true?" Unfortunately, on some level I do not believe that there is a satisfactory answer here. Though it is quite possible that the problem is with my wanting to base my behavior on the truth of statements whose truth cannot be meaningfully discussed.

To start with, let's talk about restricting to statements for which we can meaningfully discuss whether or not they are true. Given the context of the post this is relatively straightforward. If truth is an agreement between our beliefs and reality, and if reality is the thing that determines our experiences, then it is only meaningful to talk about beliefs being true if there are some sequences of possible experiences that could show the belief to be either true or false. This is perhaps too restrictive a use of "reality", but certainly such beliefs can be meaningfully discussed.

Unfortunately, I would like to base my actions upon beliefs that do not fall into this category. A belief like "the universe will continue to exist after I die" does not have any direct implications for my lifetime experiences, and thus would be considered meaningless. Fortunately, I have found a general transformation that turns such beliefs into beliefs that often have meaning. The basic idea is, instead of asking directly about my experiences, to use Solomonoff induction to ask the question indirectly. For example, the question above becomes (roughly) "will the simplest model of my lifetime experiences have things corresponding to objects existing at times later than anything corresponding to me?" This new statement could be true (as it is with my current set of experiences), or false (if, for example, I expected to die in a big crunch). Now on every statement I can think of, the above rule transforms the statement A to a statement T(A) so that my naive beliefs about A are the same as my beliefs about T(A) (if they exist). Furthermore, it seems that T(A) is still meaningless in the above sense only in cases where I naively believe A to actually be meaningless and thus not useful for determining my behavior. So in some sense, this transformation seems to work really well.

Unfortunately, things are still not quite adding up to normality for me. The thing that I actually care about is whether or not people will exist after my death, not whether certain models contain people after my death. Thus even though this hack seems to be consistently giving me the right answers to questions about whether statements are true or meaningful, it does not seem to be doing so for the right reasons.

Comment author: RichardKennaway 02 October 2012 08:06:48AM 7 points [-]

A set of beliefs is not like a bag of sand, individual beliefs unconnected with each other, about individual things. They are connected to each other by logical reasoning, like a lump of sandstone. Not all beliefs need to have a direct connection with experience, but as long as pulling on the belief pulls, perhaps indirectly, on anticipated experience, the belief is meaningful.

When a pebble of beliefs is completely disconnected from experience, or when the connection is so loose that it can be pulled around arbitrarily without feeling the tug of experience, then we can pronounce it meaningless. The pebble may make an attractive paperweight, with an intricate structure made of elements that also occur in meaningful beliefs, but that's all it can be. Music of the mind, conveying a subjective impression of deep meaning, without having any.

For the hypothetical photon disappearing in the far-far-away, no observation can be made on that photon, but we have other observations leading to beliefs about photons in general, according to which they cannot decay. That makes it meaningful to say that the far away photon acts in the same way. If we discovered processes of photon decay, it would still be meaningful, but then we would believe it could be false.

Comment author: dankane 02 October 2012 08:38:58AM 2 points [-]

Interesting idea. But how did you know how to phrase your original beliefs about photons? You could just as easily have decided to describe photons as "photons obey Maxwell's equations up to an event horizon and cease to exist outside of it". You could then add other beliefs like "nothing exists outside of the event horizon" which are incompatible with the photon continuing to exist.

In other words, your beliefs cannot afford to be independent of one another, but you could build two different belief systems, one in which the photon continues to exist and one in which it does not, that make identical predictions about experiences. Is it meaningful to ask which of these belief systems is true?

Comment author: RichardKennaway 02 October 2012 09:19:43AM 3 points [-]

But how did you know how to phrase your original beliefs about photons? You could just as easily have decided to describe photons as "photons obey Maxwell's equations up to an event horizon and cease to exist outside of it".

Systems of belief are more like a lump of sandstone than a pile of sand, but they are also more like a lump of sandstone, a rather friable lump, than a lump of marble. They are not indissoluble structures that can be made in arbitrary shapes, the whole edifice supported by an attachment at one point to experience.

Experience never brought hypotheses such as you suggest to physicists' attention. The edifice as built has no need of it, and it cannot be bolted on: it will just fall off again.

Comment author: dankane 02 October 2012 03:28:50PM 1 point [-]

But these hypotheses have just been brought to our attention - just now. In fact, the claim that these hypotheses produce indistinguishable physics might even be useful. If I want to simulate my experiences, I can save on computational power by knowing that I no longer have to keep track of things that have gone behind an event horizon. The real question is why the standard set of beliefs should be more true or meaningful than this new one. A simple appeal to what physicists have so far conjectured is not in general sufficient.

Comment author: RichardKennaway 02 October 2012 04:42:38PM *  0 points [-]

Which meaningful beliefs to consider seriously is an issue separate from the original koan, which asks which possible beliefs are meaningful. I think we are all agreeing that a belief about the remote photon's extinction or not is a meaningful one.

Comment author: dankane 02 October 2012 06:38:13PM 0 points [-]

I don't see how you can claim that the belief that the photon continues to exist is a meaningful belief without also allowing the belief that the photon does not continue to exist to be a meaningful belief. Unless you do something along the lines of taking Kolmogorov complexity into account, these beliefs seem to be completely analogous to each other. Perhaps to phrase things more neutrally, we should be asking if the question "does the photon continue to exist?" is meaningful. On the one hand, you might want to say "no" because the outcome of the question is epiphenomenal. On the other hand, you would like this question to be meaningful since it may have behavioral implications.

Comment author: RichardKennaway 02 October 2012 07:21:30PM 0 points [-]

I don't see how you can claim that the belief that the photon continues to exist is a meaningful belief without also allowing the belief that the photon does not continue to exist to be a meaningful belief.

They're both meaningful. There are reasons to reject one of them as false, but that's a separate issue.

Comment author: dankane 02 October 2012 09:16:24PM 0 points [-]

OK. I think that I had been misreading some of your previous posts. Allow me to rephrase my objection.

Suppose that our beliefs about photons were rewritten as "photons not beyond an event horizon obey Maxwell's Equations". Making this change to my belief structure now leaves beliefs about whether or not photons still exist beyond an event horizon unconnected from my experiences. Does the meaningfulness of this belief depend on how I phrase my other beliefs?

Also if one can equally easily produce belief systems which predict the same sets of experiences but disagree on whether or not the photon exists beyond the event horizon, how does this belief differ from the belief that Carol is a post-utopian?

Comment author: [deleted] 02 October 2012 05:48:30PM 1 point [-]

In other words, your beliefs cannot afford to be independent of one another, but you could build two different belief systems, one in which the photon continues to exist and one in which it does not, that make identical predictions about experiences. Is it meaningful to ask which of these belief systems is true?

Dunno about “meaningful”, but the model with lower Kolmogorov complexity will give you more bang for the buck.
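The "lower Kolmogorov complexity" tiebreak can be illustrated with a crude sketch. True Kolmogorov complexity is uncomputable, so compressed length is used below as a rough, imperfect proxy, and the two "models" are just the strings from this subthread, standing in for full belief systems:

```python
# Crude illustration: when two models predict identical experiences,
# prefer the one with the shorter description. zlib-compressed length
# is only a stand-in for (uncomputable) Kolmogorov complexity.
import zlib

model_a = b"photons obey Maxwell's equations"
model_b = (b"photons obey Maxwell's equations up to an event horizon "
           b"and cease to exist outside of it; nothing exists outside it")

def description_cost(model: bytes) -> int:
    return len(zlib.compress(model))

# Both models yield the same predictions; pick the cheaper description.
preferred = min((model_a, model_b), key=description_cost)
```

The extra clause about the event horizon buys no new predictions but lengthens the description, so the simpler model wins the tiebreak.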

Comment author: [deleted] 02 October 2012 09:36:27AM 43 points [-]

I dislike the "post utopian" example, and here's why:

Language is pretty much a set of labels. When we call something "white", we are saying it has some property of "whiteness." NOW we can discuss wavelengths and how light works, or whatnot, but 200 years ago, they had no clue. They could still know that snow is white, though. At the same time, even with our knowledge of how colors work, we can still have difficulties knowing exactly where the label "white" ends, and grey or yellow begins.

Say I'm carving up music-space. I can pretty easily classify the differences between Classical and Rap, in ways that are easy to follow. I could say that classical features a lot of instrumentation, and rap features rhythmic language, or something. But if I had lots of people spending all their lives studying music, they're going to end up breaking music space into much smaller pieces. For example, dubstep and house.

Now, I can RECOGNIZE dubstep when I hear it, but if you asked me to teach you what it was, I would have difficulties. I couldn't necessarily say "It's the one that goes, like, WOPWOPWOPWOP iiinnnnnggg" if I'm a learned professor, so I'll use jargon like "synthetic rhythm," or something.

But not having a complete explainable System 2 algorithm for "How to Tell if it's Dubstep" doesn't mean that my System 1 can't readily identify it. In fact, it's probably easier to just listen to a bunch of music until your System 1 can identify the various genres, even if your System 2 can't codify it. The example treats the fact that your professor can't really codify "post utopianism" as meaning that it's not "true". (this example has been used in other sequence posts, and I disagreed with it then too)

Have someone write a bunch of short stories. Give them to English Literature professors. If they tend to agree which ones are post utopian, and which ones aren't, then they ARE in fact carving up literature-space in a meaningful way. The fact that they can't quite articulate the distinction doesn't make it any less true than knowing that snow was white before you knew about wavelengths. They're both labels, we just understand one better.

Anyways, I know it's just an example, but without a better example, I can't really understand the question well enough to think of a relevant answer.

Comment author: Manfred 02 October 2012 11:00:03AM 4 points [-]

Example: an Irishman arguing with a Mongolian over what dragons look like.

Comment author: Vaniver 02 October 2012 07:39:35PM 6 points [-]

When the Irishman is a painter and the Mongolian a dissatisfied customer, does their disagreement have meaning?

Comment author: RichardKennaway 02 October 2012 11:19:51AM 13 points [-]

I think Eliezer is taking it as a given that English college professors who talk like that are indeed talking without connection to anticipated experience. This may not play effectively to those he is trying to teach, and as you say, may not even be true.

Comment author: Eliezer_Yudkowsky 02 October 2012 06:21:18PM 0 points [-]

In particular, "post-utopian" is not a real term so far as I know, and I'm using it as a stand-in for literary terms that do in fact have no meaning. If you think there are none of those, Alan Sokal would like to have a word with you.

Comment author: [deleted] 02 October 2012 06:25:20PM 9 points [-]

What would he have to say? The Sokal Hoax was about social engineering, not semantics.

Comment author: Kaj_Sotala 02 October 2012 06:55:54PM *  13 points [-]

If that's your criterion, you could use some stand-in for computer science terms that have no meaning.

WMSCI, the World Multiconference on Systemics, Cybernetics and Informatics, is a computer science and engineering conference that has occurred annually since 1995. [...] WMSCI attracted publicity of a less favorable sort in 2005 when three graduate students at MIT succeeded in getting a paper accepted as a "non-reviewed paper" to the conference that had been randomly generated by a computer program called SCIgen

Comment author: RichardKennaway 02 October 2012 07:35:57PM *  10 points [-]

I'm sure there's a lot of nonsense, but "post-utopian" appears to have a quite ordinary sense, despite the lowness of the signal to noise ratio of some of those hits. A post-utopian X (X = writer, architect, hairdresser, etc.) is one who is working after, and in reaction against, a period of utopianism, i.e. belief in the perfectibility of the world by man. Post-utopians today are the people who believe that the promises of science have been found hollow, and ruin and destruction are all we have to look forward to.

We're all utopians here.

Comment author: TheOtherDave 02 October 2012 08:05:36PM 1 point [-]

Post-utopians today are the people who believe that the promises of science have been found hollow, and ruin and destruction are all we have to look forward to.

By this definition, wouldn't the belief that science will not lead to perfection but we can still look forward to more of what we already have (rather than ruin and destruction) be equally post-utopian?

Comment author: RichardKennaway 02 October 2012 08:33:44PM 2 points [-]

Not as I see the word used, which appears to involve the sense of not merely less enthusiastic than, but turning away from. You can't make a movement on the basis of "yes, but not as sparkly".

Comment author: TheOtherDave 02 October 2012 10:12:43PM 5 points [-]

Pity. "It will be kind of like it is now" is an under-utilized prediction.

Comment author: JulianMorrison 02 October 2012 08:09:39PM 13 points [-]

I think you are playing to what you assume are our prejudices.

Suppose X is a meaningless predicate from a humanities subject. Suppose you used it, not a simulacrum. If it's actually meaningless by the definition I give elsewhere in the thread, nobody will be able to name any Y such that p(X|Y) differs from p(X|¬Y) after a Bayesian update. Do you actually expect that, for any significant number of terms in humanities subjects, you would find no Y, even after grumpy defenders of X popped up in the thread? Or did you choose a made-up term so as to avoid flooding the thread with Y-proponents? If you expect people to propose candidates for Y, you aren't really expecting X to be meaningless.

The Sokal hoax only proves one journal can be tricked by fake jargon. Not that bona fide jargon is meaningless.

Comment author: loup-vaillant 02 October 2012 01:40:54PM *  9 points [-]

There is the literature professor's belief, the student's belief, and the sentence "Carol is 'post-utopian'". While the sentence can be applied to both beliefs, the beliefs themselves are quite different beasts. The professor's belief is something that carves up literature-space the way most other literature professors do. Totally meaningful. The student's belief, on the other hand, is just a label over a set of authors the student has scarcely read. Going a level deeper, we can find an explanation for this label, which turns out to be just another label ("colonial alienation"), and then it stops. From Eliezer's main post (emphasis mine):

Some literature professor lectures that the famous authors Carol, Danny, and Elaine are all 'post-utopians', which you can tell because their writings exhibit signs of 'colonial alienation'. For most college students the typical result will be that their brain's version of an object-attribute list will assign the attribute 'post-utopian' to the authors Carol, Danny, and Elaine.

  1. The professor has a meaningful belief.
  2. Unable to express it properly (it may not be his fault), he gives a mysterious explanation.
  3. That mysterious explanation generates a floating belief in the student's mind.

Well, not that floating. The student definitely expects a sensory experience: grades. The problem isn't the lack of expectations, but that they're based on an overly simplified model of the professor's beliefs, with no direct ties to the writings themselves - only to the authors' names. Remove professors and authors' names, and the students' beliefs are really floating: they will have no way to tie them to reality - the writings. And if they try anyway, I bet their carvings won't agree.

Now when the professor grades an answer, only a label will be available ("post-utopian", or whatever). This label probably reflects the student's belief directly. That answer will indeed be quickly pattern-matched against a label inside the professor's brain, generating a quick "right" or "wrong" response (and the corresponding motion in the hand that wields the red pen). Just as drawn in the picture, actually.

However, the label in the professor's head is not a floating belief like the student's. It's a cached thought, based on a much more meaningful belief (or so I hope).

Okay, now that I recognize your name, I see you're not exactly a newcomer here. Sorry if I didn't tell you anything you don't know. But it did seem like you conflated mysterious answers (like "phlogiston") and floating beliefs (actual neural constructs). Hope this helped.

Comment author: Alejandro1 02 October 2012 02:37:59PM 4 points [-]

If that is what Eliezer meant, then it was confusing to use an example for which many people suspect that the concept itself is not meaningful. It just generates distraction, like the "Is Nixon a pacifist?" example in the original Politics is the mind-killer post (and actually, the meaningfulness of post-colonialism as a category might be a political example in the wide sense of the word). He could have used something from physics like "Heat is transmitted by convection", or really any other topic that a student can learn by rote without real understanding.

Comment author: loup-vaillant 02 October 2012 03:21:06PM *  3 points [-]

I don't think Eliezer meant all what I have written (edit: yep, he didn't). I was mainly analysing (and defending) the example to death, under Daenerys' proposed assumption that the belief in the professor's head is not floating. More likely, he picked something familiar that would make us think something like "yeah, if those are just labels, that's no use".¹

By the way, is there any good example? Something that (i) clearly is meaningful, and (ii) lets us empathise with those who nevertheless extract a floating belief out of it? I'm not sure. I for one don't empathise with the students who merely learn by rote, for I myself don't like loosely connected belief networks: I always wanted to understand.

Also, Eliezer wasn't very explicit about the distinction between a statement, embodied in text, images, or whatever our senses can process, and belief, embodied in a heap of neurons. But this post is introductory. It is probably not very useful to make the distinction so soon. More important is to realize that ideas are not floating in the void, but are embodied in a medium: paper, computers… and of course brains.

[1] We're not familiar to "post-utopianism" and "colonial alienation" specifically, but we do know the feeling generated by such literary mumbo jumbo.

Comment author: evand 02 October 2012 03:18:47PM 3 points [-]

If the teacher does not have a precise codification of what makes a writer "post-utopian", then how should he teach it to students?

I would say the best way is a mix of demonstrating examples ("Alice is not a post-utopian; Carol is a post-utopian"), and offering generalizations that are correlated with whether the author is a post-utopian ("colonial alienation"). This is a fairly slow method of instruction, at least in some cases where the things being studied are complicated, but it can be effective. While the student's belief may not yet be as well-formed as the professor's, I would hesitate to call it meaningless. (More specifically, I would agree denotatively but object connotatively to such a classification.) I would definitely not call the belief useless, since it forms the basis for a later belief that will be meaningful. If a route to meaningful, useful belief B goes through "meaningless" belief A, then I would say that A is useful, and that calling A meaningless produces all the wrong sorts of connotations.

Comment author: loup-vaillant 02 October 2012 03:56:48PM *  1 point [-]

The example assumed bad teaching based on rote learning. Your idea might actually work.

(Edit: oops, you're probably aware of that. Sorry for the noise)

Comment author: Patrick 02 October 2012 10:55:33AM 3 points [-]

I don't think there can be any such rule.

Comment author: somervta 02 October 2012 12:11:23PM 1 point [-]

Propositions must be able in principle to be connected to a state of how the world could be, and this connection must be durable over alternate states of basic world identity. That is to say, it should be possible to simulate both states in which the proposition is true, and states in which it is not.

Comment author: JulianMorrison 02 October 2012 12:24:12PM 5 points [-]

For a belief to be meaningful you have to be able to describe evidence that would move your posterior probability of it being true after a Bayesian update.

This is a generalization of falsifiability that allows, for example, indirect evidence pertaining to universal laws.
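This criterion can be sketched numerically. The probabilities below are entirely made up; the point is only the shape of the test: a belief is meaningful if some describable evidence E would shift its posterior, i.e. P(H|E) differs from P(H):

```python
# Sketch of "meaningful = some evidence moves the posterior".
# All numbers are invented for illustration.

def posterior(prior, p_e_given_h, p_e_given_not_h):
    """Bayes' theorem: P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]."""
    numerator = p_e_given_h * prior
    denominator = numerator + p_e_given_not_h * (1 - prior)
    return numerator / denominator

prior = 0.5

# Meaningful belief: the evidence is likelier if H is true, so observing it
# moves us away from the prior.
meaningful = posterior(prior, p_e_given_h=0.9, p_e_given_not_h=0.3)

# "Meaningless" belief: every describable observation is equally likely
# whether H is true or false, so no evidence can ever budge the posterior.
meaningless = posterior(prior, p_e_given_h=0.5, p_e_given_not_h=0.5)
```

If, for every candidate E, the two likelihoods coincide as in the second case, the posterior always equals the prior, and the generalized-falsifiability test declares the belief meaningless.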

Comment author: Benquo 02 October 2012 12:30:16PM *  1 point [-]

It seems to me that we at least have to admit two different classes of proposition:

1) Propositions that reflect or imply an expectation of some experiences over others. Examples include the belief that the sky is blue, and the belief that we experience the blueness of the sky mediated by photons, eyes, nerves, and the brain itself.

2) Propositions that do not imply a prediction, but that we must believe in order to keep our model of the world simple and comprehensible. An example of this would be the belief that the photon continues to exist after it passes outside of our light cone.

Comment author: ArisKatsaris 02 October 2012 01:06:36PM 3 points [-]

For every meaningful proposition P, an author should (in theory) be able to write coherently about a fictional universe U where P is true and a fictional universe U' where P is false.

Comment author: Eugine_Nier 02 October 2012 05:24:52PM 7 points [-]

So my belief that 2+2=4 isn't meaningful?

Comment author: khafra 02 October 2012 06:32:57PM *  2 points [-]

I thought Eliezer's story about waking up in a universe where 2+2 seems to equal 3 felt pretty coherent.

edit: It seems like the story would be less coherent if it involved detailed descriptions of re-deriving mathematics from first principles. So perhaps ArisKatsaris' definition leaves too much to the author's judgement in what to leave out of the story.

Comment author: dankane 02 October 2012 06:56:12PM 4 points [-]

I think that it's a good deal more subtle than this. Eliezer described a universe in which he had evidence that 2+2=3, not a universe in which 2 plus 2 was actually equal to 3. If we talk about the mathematical statement that 2+2=4, there is actually no universe in which this can be false. On the other hand in order to know this fact we need to acquire evidence of it, which, because it is a mathematical truth, we can do without any interaction with the outside world. On the other hand if someone messed with your head, you could acquire evidence that 2 plus 2 was 3 instead, but seeing this evidence would not cause 2 plus 2 to actually equal 3.

Comment author: Bobertron 02 October 2012 06:43:26PM 0 points [-]

I suppose it depends on how strict you are about what "coherently" means. A fictional universe is not the same as a possible universe, and you probably could write about a universe where you put two apples next to two other apples and then count five apples.

Comment author: RobinZ 02 October 2012 02:47:52PM 9 points [-]

Before reading other answers, I would guess that a statement is meaningful if it is either implied or refuted by a useful model of the universe - the more useful the model, the more meaningful the statement.

Comment author: RobinZ 02 October 2012 02:59:31PM 0 points [-]

Looking at Furslid's answer, I discovered that my definition is somewhat ambiguous - a statement may be implied or refuted by quite a lot of different kinds of models, some of which are nearly useless and some of which are anything but, and my definition offers no guidance on the question of which model's usefulness reflects the statement's meaningfulness.

Plus, I'm not entirely sure how it works with regards to logical contradictions.

Comment author: [deleted] 02 October 2012 08:45:10PM *  1 point [-]

Where Recursive Justification Hits Bottom and its comment thread should be interesting to you.

In the end, we have to rely on the logical theory of probability (as well as standard logical laws, such as the law of noncontradiction). There is no better choice.

Using Bayes' theorem (beginning with priors set by Occam's Razor) tells you how useful your model is.

Comment author: [deleted] 02 October 2012 08:33:16PM *  0 points [-]

This is incontrovertibly the best answer given so far. My answer was that a proposition is meaningful iff an oracle machine exists that takes as input the proposition and the universe, outputs 1 if the proposition is true and outputs 0 if the proposition is false. However, this begs the question, because an oracle machine is defined in terms of a "black box".

Comment author: selylindi 02 October 2012 03:15:16PM *  1 point [-]

"God's-eye-view" verificationism

A proposition P is meaningful if and only if P and not-P would imply different perceptions for a hypothetical entity which perceives all existing things.

(This is not any kind of argument for the actual existence of a god. Downvote if you wish, but please not due to that potential misunderstanding.)

Comment author: MixedNuts 02 October 2012 04:22:46PM 0 points [-]

Doesn't that require such an entity to be logically possible?

Comment author: selylindi 02 October 2012 05:10:57PM *  0 points [-]

No, in fact it works better on the assumption that there is no such entity.

If it could be an existing entity, then we could construct a paradoxical proposition, such as P="There exists an object unperceived by anything.", which could not be consistently evaluated as meaningful or meaningless. Treating a "perceiver of all existing things" as a purely hypothetical entity--a cognitive tool, not a reality--avoids such paradoxes.

Comment author: MixedNuts 02 October 2012 05:35:59PM 1 point [-]

Huh? We're talking past each other here.

If there's an all-seeing deity, P is well-formed, meaningful, and false. Every object is perceived by the deity, including the deity itself. If there's no all-seeing deity, the deity pops into hypothetical existence outside the real world, and evaluates P for possible perceiving anythings inside the real world; P is meaningful and likely true.

But that's not what I was talking about. I'm talking about logical possibility, not existence. It's okay to have a theory that talks about squares even though you haven't built any perfect squares, and even if the laws of physics forbid it, because you have formal systems where squares exist. So you can ask "What is the smallest square that encompasses this shape?", with a hypothetical square. But you can't ask "What is the smallest square circle that encompasses this shape?", because square circles are logically impossible.

I'm having a hard time finding an example of an impossible deity, not just a Turing-underpowered one, or one that doesn't look at enough branches of a forking system. Maybe a universe where libertarian free will is true, and the deity must predict at 6AM what any agent will do at 7AM - but of course I snuck in the logical impossibility by assuming libertarian free will.

Comment author: Vaniver 02 October 2012 07:41:26PM *  3 points [-]

Meaningful seems like an odd word to choose, as it contains the answer itself. What rule restricts our beliefs to just propositions that can be meaningful? Why, we could ask ourselves if the proposition has meaning.

The "atoms" rule seems fine, if one takes out the word "atoms" and replaces it with "state of the universe," with the understanding that "state" includes both statics and dynamics. Thus, we could imagine a world where QM was not true, and other physics held sway- and the state of that world, including its dynamics, would be noticeably different than ours.

And, like daenerys, I think the statement that "Elaine is a post-utopian" can be meaningful, and the implied expanded version of it can be concordant with reality.

[edit] I also wrote my koan answers as I was going through the post, so here's 1:

Supposing that knowledge only exists in minds, then truth judgments- that is, knowledge that a belief corresponds to reality- will only exist in heads, because it is knowledge.

The postmodernists are wrong if they seek to have material implications from this definitional argument. What makes truth judgments special compared to other judgments is that we have access to the same reality. If Sally believes that the marble is in the basket and Anne believes the marble is in the box, the straw postmodernist might claim that both have their own truth- but two beliefs do not generate two marbles. Sally and Anne will both see the marble in the same container when they go looking for it.

Again, the bare facts agree with the postmodernists- Sally and Anne would need to look to see where the marble is, which they can hardly do without their heads! But the lack of an unthinking truth oracle does not keep "the concordance of beliefs with reality"- what I would submit as a short definition of truth- from being a useful and testable concept.

And 2:

Quite probably, as it would want to have beliefs about the potential pasts and futures, or counterfactuals, or beliefs in the minds of others.

Comment author: mbrubeck 02 October 2012 08:31:21PM 2 points [-]

I think this one gets more complicated when you include beliefs about things like theorems of logic, e.g., "Any consistent formal system powerful enough to describe Peano arithmetic is incomplete." It seems to me that this belief is meaningful, yet independent of any sensory experience or physical law. That is, it's not really a belief about "the universe" of atoms or quantum fields or whatnot. Perhaps it would be better to talk about these "beliefs" as a separate category.

Comment author: DuncanS 02 October 2012 09:09:38PM 0 points [-]

They are truisms - in principle they are statements that are entirely redundant as one could in principle work out the truth of them without being told anything. However, principle and practice are rather different here - just because we could in principle reinvent mathematics from scratch doesn't mean that in practice we could. Consequently these beliefs are presented to us as external information rather than as the inevitable truisms they actually are.

Comment author: DuncanS 02 October 2012 08:49:02PM 0 points [-]

Maps are models of the territory. And the usefulness of them is often that they make predictions about parts of the territory I haven't actually seen yet, and may have trouble getting to at all. The Sun will come up in the morning. There isn't a leprechaun colony living a mile beneath my house. There aren't any parts of the moon that are made of cheese.

I have no problem saying that these things are true, but they are in fact extrapolations of my current map into areas which I haven't seen and may never see. These statements don't meaningfully stand alone, they arise out of extrapolating a map that checks out in all sorts of other locations which I can check. One can then have meaningful certainty about the zones that haven't yet been seen.

How does one extrapolate a map? In principle I'd say that you should find the most compressible form - the form that describes the territory without adding extra 'information' that I've assumed from someplace else. The compressed form then leads to predictions over and above the bald facts that go into it.
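One crude way to cash out "the most compressible form" is description length. As an illustration only (with zlib standing in for an idealized compressor, and an invented toy "territory"), a territory with a regularity admits a much shorter map than a bare listing of its facts:

```python
import zlib

# Toy territory: 50 repeated observations of the same regularity.
# zlib's compressed length serves as a crude proxy for description length.
territory = "sun rises;" * 50
raw_len = len(territory.encode())                      # bare listing of facts
compressed_len = len(zlib.compress(territory.encode()))  # "most compressible form"

# The regularity compresses well, so the compressed map is far shorter --
# and it's the short description that licenses extrapolation to unseen cases.
print(compressed_len < raw_len)  # True
```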

The map should match the territory in the places you can check. When I then make statements that something is "true", I'm making assertions about what the world is like, based on my map. As far as English is concerned, I don't need absolute certainty to say something is true, merely reasonable likelihood.

Hence the photon. The most compressible form of our description of the universe is that the parts of space that are just beyond visibility aren't inherently different from the parts we can see. So the photon doesn't blink out over there, because we don't see any such blinking out over here.

Comment author: TheOtherDave 02 October 2012 09:30:46PM 0 points [-]

My $0.02:

A proposition P is meaningful to an observer O to the extent that O can alter its expectations about the world based on P.

This doesn't a priori exclude anything that could be true, although for any given observer it might do so. As it should. Not every true proposition is meaningful to me, for example, and some true propositions that are meaningful to me aren't meaningful to my mom.

Of course, it doesn't necessarily exclude things that are false, either. (Nor should it. Propositions can be meaningful and false.)

For clarity, it's also perhaps worth distinguishing between propositions and utterances, although the above is also true of meaningful utterances.

Comment author: Eliezer_Yudkowsky 02 October 2012 05:28:00AM 24 points [-]

(The 'Mainstream Status' comment is intended to provide a quick overview of what the status of the post's ideas are within contemporary academia, at least so far as the poster knows. Anyone claiming that a particular paper anticipates the post should try to describe the exact relevant idea as presented in the paper, ideally with a quote or excerpt, especially if the paper is locked behind a paywall. Do not represent large complicated ideas as standard if only a part is accepted; do not represent a complicated idea as precedented if only a part is described. With those caveats, all relevant papers and citations are much solicited! Hopefully comment-collections like these can serve as a standard link between LW presentations and academic ones.)

The correspondence theory of truth is the first position listed in the Stanford Encyclopedia of Philosophy, which is my usual criterion for saying that something is a solved problem in philosophy. Clear-cut simple visual illustration inspired by the Sally-Anne experimental paradigm is not something I have previously seen associated with it, so the explanation in this post is - I hope - an improvement over what's standard.

Alfred Tarski is a famous mathematician whose theory of truth is widely known.

The notion of possible worlds is very standard and popular in philosophy; some philosophers even ascribe much more realism to them than I would (since I regard them as imaginary constructs, not thingies that can potentially explain real events as opposed to epistemic puzzles).

I haven't particularly run across any philosophy explicitly making the connection from the correspondence theory of truth to "There are causal processes producing map-territory correspondences" to "You have to look at things in order to draw accurate maps of them, and this is a general rule with no exception for special interest groups who want more forgiving treatment for their assertions". I would not be surprised to find out it existed, especially on the second clause.

Added: The term "post-utopian" was intended to be a made-up word that had no existing standardized meaning in literature, though it's simple enough that somebody has probably used it somewhere. It operates as a stand-in for more complicated postmodern literary terms that sound significant but mean nothing. If you think there are none of those, Alan Sokal would like to have a word with you. (Beating up on postmodernism is also pretty mainstream among Traditional Rationalists.)

You might also be interested in checking out what Mohandas Gandhi had to say about "the meaning of truth", just in case you were wondering what things are like in the rest of the world outside the halls of philosophy departments.

Comment author: lukeprog 02 October 2012 06:24:00AM *  11 points [-]

Speaking as the author of Eliezer's Sequences and Mainstream Academia...

Off the top of my head, I also can't think of a philosopher who has made an explicit connection from the correspondence theory of truth to "there are causal processes producing map-territory correspondences" to "you have to look at things to draw accurate maps of them..."

But if this connection has been made explicitly, I would expect it to be made by someone who accepts both the correspondence theory and "naturalized epistemology", often summed up in a quote from Quine:

The stimulation of his sensory receptors is all the evidence anybody has had to go on, ultimately, in arriving at his picture of the world. Why not just see how this construction really proceeds? Why not settle for psychology? ...Epistemology, or something like it, simply falls into place as a chapter of psychology and hence of natural science.

(Originally, Quine's naturalized epistemology accounted only for this descriptive part of epistemology, and neglected the normative part, e.g. truth conditions. In the 80s Quine started saying that the normative part entered into naturalized epistemology through "the technology of truth-seeking," but he was pretty vague about this.)

Edit: Another relevant discussion of embodiment and theories of truth can be found in chapter 7 of Philosophy in the Flesh.

Comment author: ciphergoth 02 October 2012 07:50:54AM 11 points [-]

Off the top of my head, I also can't think of a philosopher who has made an explicit connection from the correspondence theory of truth to "there are causal processes producing map-territory correspondences" to "you have to look at things to draw accurate maps of them..."

OK, I defended the tweet that got this response from Eliezer as the sort of rhetorical flourish that gets people to actually click on the link. However, it looks like I also underestimated how original the sequences are - I had really expected this sort of thing to mirror work in mainstream philosophy.

Comment author: MichaelVassar 02 October 2012 07:16:36PM 13 points [-]

It's not that clear to me in what sense mainstream academia is a unified thing which holds positions, even regarding questions such as "what fields are legitimate". Saying that something is known in mainstream academia seems suspiciously like saying that something is "encoded in the matter in my shoelace, given the right decryption schema". OTOH, it's highly meaningful to say that something is discoverable by someone with competent "google-fu".

Comment author: pragmatist 02 October 2012 09:36:29PM *  31 points [-]

This is a great post. I think the presentation of the ideas is clearer and more engaging than the sequences, and the cartoons are really nice. Wild applause for the artist.

I have a few things to say about the status of these ideas in mainstream philosophy, since I'm somewhat familiar with the mainstream literature (although admittedly it's not the area of my expertise). I'll split up my individual points into separate comments.

Alfred Tarski is a famous mathematician whose theory of truth is widely known.

Summary of my point: Tarski's biconditionals are not supposed to be a definition of truth. They are supposed to be a test of the adequacy of a proposed definition of truth. Proponents of many different theories claim that their theory passes this test of adequacy, so to identify Tarski's criterion with the correspondence theory is incorrect, or at the very least, a highly controversial claim that requires defense. What follows is a detailed account of why the biconditionals can't be an adequate definition of truth, and of what Tarski's actual theory of truth is.

Describing Tarski's biconditionals as a definition of truth or a theory of truth is misleading. The relevant paper is The Semantic Conception of Truth. Let's call sentences of the form 'p' is true iff p T-sentences. Tarski's claim in the paper is that the T-sentences constitute a criterion of adequacy for any proposed theory of truth. Specifically, a theory of truth is only adequate if all the T-sentences follow from it. This basically amounts to the claim that any adequate theory of truth must get the extension of the truth-predicate right -- it must assign the truth-predicate to all and only those sentences that are in fact true.

I admit that the conjunction of all the T-sentences does in fact satisfy this criterion of adequacy. All the individual T-sentences do follow from this conjunction (assuming we've solved the subtle problem of dealing with infinitely long sentences). So if we are measuring by this criterion alone, I guess this conjunction would qualify as an adequate theory of truth. But there are other plausible criteria according to which it is inadequate. First, it's a frickin' infinite conjunction. We usually prefer our definitions to be shorter. More significantly, we usually demand more than mere extensional adequacy from our definitions. We also demand intensional adequacy.

If you ask someone for a definition of "Emperor of Rome" and she responds "X is an Emperor of Rome iff X is one of these..." and then proceeds to list every actual Emperor of Rome, I suspect you would find this definition inadequate. There are possible worlds in which Julius Caesar was an Emperor of Rome, even though he wasn't in the actual world. If your friend is right, then those worlds are ruled out by definition. Surely that's not satisfactory. The definition is extensionally adequate but not intensionally adequate. The T-sentence criterion only tests for extensional adequacy of a definition. It is satisfied by any theory that assigns the correct truth predicates in our world, whether or not that theory limns the account of truth in a way that is adequate for other possible worlds. Remember, the biconditionals here are material, not subjunctive. The T-sentences don't tell us that an adequate theory would assign "Snow is green" as true if snow were green. But surely we want an adequate theory to do just that. If you regard the T-sentences themselves as the definition of truth, all that the definition gives us is a scheme for determining which truth ascriptions are true and false in our world. It tells us nothing about how to make these determinations in other possible worlds.

To make the problem more explicit, suppose I speak a language in which the sentence "Snow is white" means that grass is green. It will still be true that, for my language, "Snow is white" is true iff snow is white. Yet we don't want to say this biconditional captures what it means for "Snow is white" to be true in my language. After all, in a possible world where snow remained white but grass was red, the sentence would be false.
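That deviant-language example can be made concrete with a toy model (the world-states here are invented for illustration): the material biconditional holds in the actual world, where snow is white and grass is green, yet fails in a possible world where grass is red, showing that T-sentences only test extensional adequacy.

```python
# Two possible worlds; in the counterfactual one, snow stays white but grass is red.
worlds = {
    "actual":         {"snow_white": True, "grass_green": True},
    "counterfactual": {"snow_white": True, "grass_green": False},
}

def sentence_true(world):
    # In the deviant language, "Snow is white" MEANS that grass is green.
    return worlds[world]["grass_green"]

# T-sentence check in the actual world: '"Snow is white" is true iff snow is white'.
print(sentence_true("actual") == worlds["actual"]["snow_white"])  # True

# The same material biconditional fails in the counterfactual world.
print(sentence_true("counterfactual") == worlds["counterfactual"]["snow_white"])  # False
```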

Tarski was a smart guy, and I'm pretty sure he realized all this (or at least some of it). He constantly refers to the T-sentences as material criteria of adequacy for a definition of truth. He says (speaking about the T-sentences), "... we shall call a definition of truth 'adequate' if all these equivalences follow from it." (although this seems to ignore the fact that there are other important criteria of adequacy) When discussing a particular objection to his view late in the paper, he says, "The author of this objection mistakenly regards scheme (T)... as a definition of truth." Unfortunately, he also says stuff that might lead one to think he does think of the conjunction of all T-sentences as a definition: "We can only say that every equivalence of the form (T)... may be considered a partial definition of truth, which explains wherein the truth of this one individual sentence consists. The general definition has to be, in a certain sense, a logical conjunction of all these partial definitions."

I read the "in a certain sense" there as a subtle concession that we will need more than just a conjunction of the T-sentences for an adequate definition of truth. As support for my reading, I appeal to the fact that Tarski explicitly offers a definition of truth in his paper (in section 11), one that is more than just a conjunction of T-sentences. He defines truth in terms of satisfaction, which in turn is defined recursively using rules like: The objects a and b satisfy the sentential function "P(x, y) or Q(x, y)" iff they satisfy at least one of the functions "P(x, y)" or "Q(x, y)". His definition of truth is basically that a sentence is true iff it is satisfied by all objects and false otherwise. This works because a sentence, unlike a general sentential function, has no free variables to which objects can be bound.

This definition is clearly distinct from the logical conjunction of all T-sentences. Tarski claims it entails all the T-sentences, and therefore satisfies his criterion of adequacy. Now, I think Tarski's actual definition of truth isn't all that helpful. He defines truth in terms of satisfaction, but satisfaction is hardly a more perspicuous concept. True, he provides a recursive procedure for determining satisfaction, but this only tells us when compound sentential functions are satisfied once we know when simple ones are satisfied. His account doesn't explain what it means for a simple sentential function to be satisfied by an object. This is just left as a primitive in the theory. So, yeah, Tarski's actual theory of truth kind of sucks.
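The recursive satisfaction clauses above can be sketched as a toy evaluator. Everything here (the nested-tuple representation, the atomic lookup table) is invented for illustration; the lookup table makes explicit the point that Tarski leaves atomic satisfaction as a primitive supplied from outside the theory.

```python
# Primitive satisfaction facts for atomic sentential functions -- this is the
# part Tarski's recursion does not explain, so we just stipulate it.
ATOMIC = {
    ("P", (1, 2)): True,
    ("Q", (1, 2)): False,
}

def satisfies(objs, formula):
    """Do the objects `objs` satisfy the sentential function `formula`?

    A formula is a nested tuple, e.g. ("or", ("atom", "P"), ("atom", "Q")).
    """
    op = formula[0]
    if op == "atom":
        return ATOMIC[(formula[1], objs)]
    if op == "not":
        return not satisfies(objs, formula[1])
    if op == "or":
        # Tarski's clause: objs satisfy "A or B" iff they satisfy A or satisfy B.
        return satisfies(objs, formula[1]) or satisfies(objs, formula[2])
    if op == "and":
        return satisfies(objs, formula[1]) and satisfies(objs, formula[2])
    raise ValueError(f"unknown operator: {op}")

# "P(x, y) or Q(x, y)" is satisfied by (1, 2) because "P(x, y)" is.
print(satisfies((1, 2), ("or", ("atom", "P"), ("atom", "Q"))))  # True
```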

His criterion of adequacy, though, has been very influential. But it is not a theory of truth, and that is not the way it is treated by philosophers. It is used as a test of adequacy, and proponents of most theories of truth (not just the correspondence theory) claim that their theory satisfies this test. So to identify Tarski's definition/criterion/whatever with the correspondence theory misrepresents the state of play. There are, incidentally, a group of philosophers who do take the T-sentences to be a full definition of truth, or at least to be all that we can say about truth. But these are not correspondence theorists. They are deflationists.

Comment author: shminux 02 October 2012 06:20:54AM -2 points [-]

Thus, after the photon goes past a certain point, there are no experimental consequences whatsoever, ever, to the statement "The photon continues to exist, rather than blinking out of existence."

Probably because your definition of existence is no good. Try a better one.

Comment author: ArisKatsaris 02 October 2012 07:08:35AM 5 points [-]

That's an attempt to dismiss epistemic rationality by arguing that only instrumental rationality matters.

I suppose that's true by certain definitions of "matters", but it ignores those of us who do assign some utility to understanding the universe itself, and therefore at least partially incorporate the epistemic in the instrumental....

Also, if I die tomorrow of a heart attack, I think it's still meaningful to say that the rest of the planet will still exist afterwards, even though there won't exist any experimental prediction I can make and personally verify to that effect. I find solipsism rather uninteresting.

Comment author: shminux 02 October 2012 05:24:37PM -2 points [-]

That's an attempt to dismiss epistemic rationality by arguing that only instrumental rationality matters.

No, that's the statement that epistemic rationality is based on instrumental rationality.

Also, if I die tomorrow of a heart attack, I think it's still meaningful to say that the rest of the planet will still exist afterwards, even though there won't exist any experimental prediction I can make and personally verify to that effect.

Indeed, no good model predicts that the death of one individual results in the cessation of all experiences for everyone else. Not sure what strawman you are fighting here.

I find solipsism rather uninteresting.

Except as a psychological phenomenon, maybe.

Comment author: khafra 02 October 2012 06:18:19PM 0 points [-]

Epistemic rationality is a subset of instrumental rationality, to the extent that you value the truth.

Compartmentalization protects us from seeing reality for what it really is: defined only up to the instrumental theories which trouble themselves with certain otherwise insignificant portions of it.

-- Sark

(this allows the universe to keep existing after I die).

Comment author: shminux 02 October 2012 06:31:53AM 1 point [-]

Since my expectations sometimes conflict with my subsequent experiences, I need different names for the thingies that determine my experimental predictions and the thingy that determines my experimental results. I call the former thingies 'beliefs', and the latter thingy 'reality'.

You ought to admit that the statement 'there is "the thingy that determines my experimental results"' is a belief. A useful belief, but still a belief. And forgetting that sometimes leads to meaningless questions like "Which interpretation of QM is true?" or "Is wave function a real thing?"

Comment author: Peterdjones 02 October 2012 08:31:55PM *  1 point [-]

You ought to admit that the statement 'there is "the thingy that determines my experimental results"' is a belief.

Why? Didn't anyone ever see results that conflict with their beliefs?

Comment author: shminux 02 October 2012 09:15:56PM 0 points [-]

Yes... and...? Feel free to explicate the missing steps between what I wrote and what you did.

Comment author: fubarobfusco 02 October 2012 08:38:44AM 14 points [-]

If the above is true, aren't the postmodernists right?

I do wish that you would say "relativists" or the like here. Many of your readers will know the word "postmodernist" solely as a slur against a rival tribe.

Comment author: TimS 02 October 2012 01:12:38PM 3 points [-]

Particularly since many LWers believe things like:

The progress of science is measured as much by deaths among the Old Guard as by discoveries from the Young Idealists.

or

Psychological diagnoses (like those listed in the DSM) function to separate the socially acceptable from the unacceptable and do not even try to cut the world at its joints.

Comment author: MixedNuts 02 October 2012 04:21:39PM 0 points [-]

Why is the former false?

Comment author: [deleted] 02 October 2012 04:34:42PM 1 point [-]

Systematic execution of the old guard doesn't count as scientific progress? Hmm, or does it?

Comment author: TimS 02 October 2012 04:39:38PM 2 points [-]

Someone is trying to set up a strawman. Kuhn didn't advocate violent overthrow of the scientific establishment - he simply noted that generational change was an under-appreciated part of the change of scientific orthodoxy.

Comment author: [deleted] 02 October 2012 06:59:07PM 0 points [-]

Someone is just trying to make a joke.

Comment author: DaFranker 02 October 2012 08:50:09PM *  0 points [-]

The prose wasn't quite as good as the joke's intent, so part of the effect was lost. Still, it made me smile, FWIW :P

Comment author: TimS 02 October 2012 04:38:10PM 3 points [-]

Hrm?

Who said those were false? My point was that these are ideas that are popular in LW and basically true, but that most LWers don't acknowledge are post-modern in origin.

The first statement is a basic takeaway from Kuhn and Feyerabend. The second is basic History of Sexuality from Foucault.

Comment author: MixedNuts 02 October 2012 05:11:33PM 1 point [-]

Oh, sorry, didn't get your point. I think the first statement has been reinvented often, by people who read enough Kelvin quotes.

The second statement is just bizarre. Clearly many people are helped by their meds. Does feeding random psych meds to random freaks produce an increase in quality of life, or at least a wide enough spread that there's a large group that gets a stable improvement? Or are you just claiming the weaker version: symptoms make sense and are treated, but all statements of the form "patients with this set of symptoms form a cluster, and shall be labeled Noun Phrase Disorder" are false? I would claim some diagnoses are reasonable, e.g. Borderline Personality with clearly forms a cluster among bloggers who talk about their mental health. And those that aren't (a whole lotta paraphilias, and ways to cut up umbrella terms) tend to change fast anyway.

Comment author: TimS 02 October 2012 05:52:22PM 12 points [-]

Psychology has made significant strides in response to criticism from the post-modernists. The post-modern criticism of mental health treatment is much less biting than it once was.

Still, for halo effect reasons, we should be careful.


The larger point is that Eliezer's reference to post-modernism is simply a Boo Light and deserves to be called out as such.

Comment author: MichaelVassar 02 October 2012 07:19:42PM 5 points [-]

Many people can effectively be kept out of trouble and made easier for caretakers or relatives to care for via mild sedation. This is fairly clearly the function of at least a significant portion of psychiatric medication.

Comment author: Eugine_Nier 02 October 2012 05:31:04PM *  0 points [-]

Psychological diagnoses (like those listed in the DSM) function to separate the socially acceptable from the unacceptable and do not even try to cut the world at its joints.

The difference is that post-modernists believe that something like this is true for all science and use this to justify this state of affairs in psychology, whereas LWers believe that this is not an acceptable state of affairs and should be fixed.

Edit: Also, as MixedNuts pointed out, the diagnoses do try to cut reality at the joints; they just frequently fail due to social signaling interfering with seeking truth.

Comment author: [deleted] 02 October 2012 05:35:59PM 3 points [-]

Citation appreciated. Foucault was specifically trying to improve the standards of psychiatric care.

Comment author: TimS 02 October 2012 05:44:49PM 3 points [-]

First, if physical anti-realism is true to some extent, then it is true to that extent. By contrast, if Kuhn and Feyerabend messed up the history, then physical anti-realists have no leg to stand on. People can stand what is true, for they are already enduring it.

Second, folks like Foucault were at the forefront of the argument that unstated social norm enforcement via psychological diagnosis was far worse than explicit social norm enforcement. They certainly don't argue that the current state of affairs in psychology was (or is) justifiable.

Comment author: jbash 02 October 2012 08:42:35PM 6 points [-]

Actually, "relativist" isn't a lot better, because it's still pretty clear who's meant, and it's a very charged term in some political discussions.

I think it's a bad rhetorical strategy to mock the cognitive style of a particular academic discipline, or of a particular school within a discipline, even if you know all about that discipline. That's not because you'll convert people who are steeped in the way of thinking you're trying to counter, but because you can end up pushing the "undecided" to their side.

Let's say we have a bright young student who is, to oversimplify, on the cusp of going down either the path of Good ("parsimony counts", "there's an objective way to determine what hypothesis is simpler", "it looks like there's an exterior, shared reality", "we can improve our maps"...) or the path of Evil ("all concepts start out equal", "we can make arbitrary maps", "truth is determined by politics" ...). Well, that bright young student isn't a perfectly rational being. If the advocates for Good look like they're being jerks and mocking the advocates for Evil, that may be enough to push that person down the path of Evil.

Wulky Wilkinson is the mind killer. Or so it seems to me.

Comment author: Sniffnoy 02 October 2012 08:59:24AM 10 points [-]

The quantum-field-theory-and-atoms thing seems to be not very relevant, or at least not well-stated. I mean, why the focus on atoms in the first place? To someone who doesn't already know, it sounds like you're just saying "Yes, elementary particles are smaller than atoms!" or more generally "Yes, atoms are not fundamental!"; it's tempting to instead say "OK, so instead of taking a possible state of configurations of atoms, take a possible state of whatever is fundamental."

I'm guessing the problem you're getting at is that when you actually try to do this, you quickly find that you're talking about not the state of the universe but the state of a whole notional multiverse, and not about one present state of it but its entire evolution over time as one big block, which makes our original this-universe-focused, present-focused notion a little harder to make sense of -- or if not this particular problem, then something similar -- but as written it sounds like you're just making a stupid verbal trick.

Comment author: DuncanS 02 October 2012 10:08:32PM *  4 points [-]

I agree - atoms and so forth are what our universe happens to consist of. But I can't see why that's relevant to the question of what truth is at all - I'd say that the definition of truth and how to determine it are not a function of the physics of the universe one happens to inhabit. Adding physics into the mix tends therefore to distract from the main thrust of the argument - making me think about two complex things instead of just one.

Comment author: fubarobfusco 02 October 2012 09:09:04AM *  0 points [-]

Koan: If we were dealing with an Artificial Intelligence that never had to argue politics with anyone, would it ever need a word or a concept for 'truth'?

I'm not sure what this has to do with politics? The lead-up discusses "an Artificial Intelligence, which was carrying out its work in isolation" — the relevant part seems to be that it doesn't interact with other agents at all, not that it doesn't do politics specifically. Even without politics, other agents can still be mistaken, biased, misinformed, or deceitful; and one use of the concept of "truth" has to do with predicting the accuracy of others' statements and those people's intentions in making them.

Comment author: pleeppleep 02 October 2012 12:50:28PM 1 point [-]

I think politics is used to refer to social manipulation, status, and signaling here. The example is used to designate an agent that has no concern for asserting social privilege over others.

Comment author: JackV 02 October 2012 09:49:16AM *  0 points [-]

And yet it seems to me - and I hope to you as well - that the statement "The photon suddenly blinks out of existence as soon as we can't see it, violating Conservation of Energy and behaving unlike all photons we can actually see" is false, while the statement "The photon continues to exist, heading off to nowhere" is true.

I remember when you drew this analogy to different interpretations of QM and was thinking it over.

The way I put it to myself was that the difference between "laws of physics apply" and "everything acts AS IF the laws of physics apply, but the photon blinks out of existence" is not falsifiable, so for our current physics, the two theories are actually just different reformulations of the same theory.

However, Occam's razor says that, of the two theories, the right one to use is "laws of physics apply" for two reasons: firstly, that it's a lot simpler to calculate, and secondly, if we ever DO find any way of testing it, we're 99.9% sure that we'll discover that the theory consistent with conservation of energy will apply.

And this sort of belief can have behavioral consequences! ... If we thought the colonization ship would just blink out of existence before it arrived, we wouldn't bother sending it.

Excellent point!

Comment author: [deleted] 02 October 2012 06:55:00PM *  0 points [-]

And this sort of belief can have behavioral consequences!

If I understand it correctly (and I am not sure; feel free to correct me), it occurs to me that this belief may have a very unusual consequence indeed: believing that "The photon continues to exist, heading off to nowhere." is true seems to imply that you should also believe that the probability of world P1 is greater than the probability of world P2 below.

P1: "You are being simulated on a supercomputer which does not delete anything past your cosmological horizon."

P2: "You are being simulated on a supercomputer which deletes anything past your cosmological horizon."

Which sounds like a very odd consequence of believing "The photon continues to exist, heading off to nowhere." is true, but as far as I can tell, it appears to be the case.

Comment author: evand 02 October 2012 07:11:30PM 1 point [-]

Non-conditional probabilities are not the sole determinants of conditional probabilities. You're conflating P(photon exists) with P(photon exists|simulated universe).

Your conclusion does not logically follow from your premise. You need to separate out your conditional probabilities.

I'm also not sure the belief is particularly odd: why should you be at the center of the simulation? What makes your horizon more special than someone else's, or the union of all observer's horizons?

Comment author: [deleted] 02 October 2012 08:51:39PM 0 points [-]

Thanks, I suspected that idea needed more processing.

Non-conditional probabilities are not the sole determinants of conditional probabilities. You're conflating P(photon exists) with P(photon exists|simulated universe).

Your conclusion does not logically follow from your premise. You need to separate out your conditional probabilities.

I'm going to be honest and admit that I do not actually know how to write in a P(photon exists|simulated universe) style manner, and when I tried to find out how, I failed at that as well, because I didn't know the name and it didn't appear under any of the names I guessed. Otherwise, I would try to rewrite my idea in that format and doublecheck the notation.

I'm also not sure the belief is particularly odd: why should you be at the center of the simulation? What makes your horizon more special than someone else's, or the union of all observer's horizons?

To unpack what I meant when I said the belief was odd/very unusual, it might have been more clear to say "This isn't necessarily wrong, but it doesn't seem to be an answer I would expect, and this thing I thought of just now appears to be my only justification for it, even though I haven't yet seen anything wrong."

And as for why I picked that particular horizon, I think I was thinking of it primarily as "Eliezer said this was true. If that is the case, what would make it false? Well, if I was living in a simulated world and things were getting deleted when I could never interact with them again, then it would be false." But as you pointed out, I need to fix the thought anyway.

Comment author: evand 02 October 2012 08:58:53PM 1 point [-]

P(A|B) should be read as "the probability of A, given that B is true" or, more concisely, "P of A given B". Search terms like conditional probability (http://en.wikipedia.org/wiki/Conditional_probability) should get you started. You'll probably also want to read about Bayes' Theorem.
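To make the notation concrete, here is a minimal Python sketch of conditional probability over a finite set of equally likely worlds. The worlds and their proportions below are invented purely for illustration (nothing in the thread asserts these numbers); the point is only that P(A) and P(A|B) can come apart.

```python
# Toy model: worlds are (photon_exists, simulated) pairs, assumed
# equally likely purely for illustration.

def probability(event, worlds):
    """Fraction of equally likely worlds in which `event` holds."""
    return sum(1 for w in worlds if event(w)) / len(worlds)

def conditional(event, given, worlds):
    """P(event | given) = P(event and given) / P(given),
    computed by restricting attention to the worlds where `given` holds."""
    restricted = [w for w in worlds if given(w)]
    return probability(event, restricted)

worlds = [
    (True, False), (True, False),  # photon exists, not simulated
    (True, True),                  # photon exists, simulated
    (False, True),                 # photon deleted, simulated
]

photon_exists = lambda w: w[0]
simulated = lambda w: w[1]

print(probability(photon_exists, worlds))             # P(photon exists) = 0.75
print(conditional(photon_exists, simulated, worlds))  # P(photon exists | simulated) = 0.5
```

The two printed numbers differ, which is the conflation being pointed out: an unconditional probability is not the same quantity as the probability conditioned on being in a simulated universe.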

Comment author: kilobug 02 October 2012 10:44:13AM 7 points [-]

There's no reason for your brain not to update if politics aren't involved.

I don't agree with, nor like, this singling-out of politics as the only thing on which people don't update. People fail to update in many fields: they'll fail to update in love, in religion, in drug risks, in ... there is almost no domain of life in which people don't fail to update at times, rationalizing instead of updating.

Comment author: pleeppleep 02 October 2012 12:47:34PM 1 point [-]

He didn't say "politics" was special. He seemed to be pointing out that updating is called for in circumstances other than the example. "Politics" is used to represent all other issues, and it was relevant because a common criticism of truth is that it is an illusion used to gain a political advantage.

Comment author: TimS 02 October 2012 01:18:24PM 16 points [-]

In addition to what pleeppleep said, I think there is a bit of illusion of transparency here.

As I've said elsewhere, what Eliezer clearly intends with the label "political" is not partisan electioneering to decide whether the community organizer or the business executive is the next President of the United States. Instead, he means something closer to what Paul Graham means when he talks about keeping one's identity small.

Among humans at least, "Personal identity is the mindkiller."

Comment author: [deleted] 02 October 2012 09:07:30PM 0 points [-]

Still, there may be a better word than “politics” for him to use.

Comment author: CronoDAS 02 October 2012 11:05:07AM 3 points [-]

Is there a difference between "truth" and "accuracy"?

Comment author: Benquo 02 October 2012 12:39:49PM 4 points [-]

I could figure some cases where I would find it natural to say that one proposition is more accurate than another, but not to say that it is more true. For example, saying that my home has 1000 ft.², as opposed to saying that it has 978.25 ft.². Or saying that it is the morning, as opposed to saying that it is 8:30 AM.

Comment author: pleeppleep 02 October 2012 01:03:00PM 0 points [-]

In that context it would refer to narrowness, but it would refer to proximity to truth in a different context, so I think it's one of those cases where one word is used twice in place of a second word. I don't think narrowness would be confused with truth, so I think my first definition is the more relevant.

Comment author: incariol 02 October 2012 03:01:56PM 1 point [-]

Perhaps this: "accuracy" is a quantitative measure, whereas "truth" is only qualitative/categorical.

Comment author: Benquo 02 October 2012 12:37:00PM 3 points [-]

This post starts out by saying that we know there is such a thing as truth, because there is something that determines our experimental outcomes, aside from our experimental predictions. But by the end of the post, you're talking about truth as correspondence to an arrangement of atoms in the universe. I'm not sure how you got from there to here.

Comment author: incariol 02 October 2012 02:59:07PM 4 points [-]

We know there's such a thing as reality due to the reasons you mention, not truth - that's just a relation between reality and our beliefs.

"Arrangements of atoms" play a role in the idea that not all "syntactically correct" beliefs actually are meaningful and the last koan asks us to provide some rule to achieve this meaningfulness for all constructible beliefs (in an AI).

At least that's my understanding...

Comment author: thomblake 02 October 2012 01:56:35PM *  14 points [-]

This post is better than The Simple Truth and I will be linking to it more often, even though this isn't as funny.

Nice illustrations.

EDIT: Reworded in praise-first style.

Comment author: lukeprog 02 October 2012 03:42:10PM 3 points [-]

Ditto.

Comment author: Eliezer_Yudkowsky 02 October 2012 06:35:56PM 4 points [-]

Thanks!

Comment author: MBlume 02 October 2012 09:18:08PM *  16 points [-]

The other day Yvain was reading aloud from Feser and I said I wished Feser would read The Simple Truth. I don't think this would help quite as much.

The Simple Truth sought to convey the intuition that truth is not just a property of propositions in brains, but of any system successfully entangled with another system. Once the shepherd's leveled up a bit in his craftsmanship, the sheep can pull aside the curtain, drop a pebble into the bucket, and the level in the bucket will remain true without human intervention.

Comment author: [deleted] 02 October 2012 09:31:23PM 12 points [-]

I also really enjoyed this post, and specifically thought that the illustrations were much nicer than what's been done before.

However, I did notice that out of all the illustrations that were made for this post, there were about 8 male characters drawn, and 0 females. (The first picture of the Sally-Anne test did portray females, but it was taken from another source, not drawn for this post like the others.) In the future, it might be a good idea to portray both men AND women in your illustrations. I know that you personally use the "flip a coin" method for gender assignment when you can, but it doesn't seem like the illustrator does. (There IS a 0.4% chance that the coin flips just all came up "male" for the drawings.)

Comment author: yli 02 October 2012 02:22:42PM *  25 points [-]

I don't like the "post-utopian" example. I can totally expect differing sensory experiences depending on whether a writer is post-utopian or not. For example, if they're post-utopian, when reading their biography I would more strongly expect reading about them having been into utopian ideas when they were young, but having then changed their mind. And when reading their works, I would more strongly expect seeing themes of the imperfectability of the world and weltschmerz.

Comment author: Eliezer_Yudkowsky 02 October 2012 06:20:21PM 1 point [-]

Is this actually a standard term? I was trying to make up a new one, without having to actually delve into the pits of darkness and find a real postmodern literary term that doesn't mean anything.

Comment author: thomblake 02 October 2012 06:43:37PM 4 points [-]

Well, there are a lot of hits for "post-utopian" on Google, and they don't seem to be references to you.

Comment author: Eliezer_Yudkowsky 02 October 2012 06:46:29PM 3 points [-]

I think there were fewer Google references back when I first made up the word... I will happily accept nominations for either an equally portentous-sounding but unused term, or a portentous-sounding real literary term that is known not to mean anything.

Comment author: Jonathan_Elmer 02 October 2012 07:32:24PM 6 points [-]

Coming up with a made up word will not solve this problem. If the word describes the content of the author's stories then there will be sensory experiences that a reader can expect when reading those stories.

Comment author: thomblake 02 October 2012 07:42:31PM 2 points [-]

I don't think literature has any equivalent to metasyntactic variables. Still, placeholder names might help - perhaps they are examples of "post-kadigan" literature?

Comment author: [deleted] 02 October 2012 07:43:02PM 13 points [-]

Has anyone ever told you your writing style is Alucentian to the core? Especially in the way your municardist influences constrain the transactional nuances of your structural ephamthism.

Comment author: Eliezer_Yudkowsky 02 October 2012 09:03:31PM 4 points [-]

This looks promising. Is it real, or did you verify that the words don't mean anything standard?

Comment author: [deleted] 02 October 2012 08:16:21PM *  0 points [-]
Comment author: Kaj_Sotala 02 October 2012 06:48:53PM *  15 points [-]

I don't think you can avoid the criticism of "literary terms actually do tend to make one expect differing sensory experiences, and your characterization of the field is unfair" simply by inventing a term which isn't actually in use. I don't know whether "post-utopian" is actually a standard term, but yli's comment doesn't depend on it being one.

Comment author: novalis 02 October 2012 08:16:41PM *  26 points [-]

Maybe you should reconsider picking on an entire field you know nothing about?

I'm not saying this to defend postmodernism, which I know almost nothing about, but to point out that the Sokal hoax is not really enough reason to reject an entire field (any more than the Bogdanov affair is for physics).

I'm pointing out that you're neglecting the virtues of curiosity and humility, at least.

And this is leaving aside that there is no particular reason for "post-utopian" to be a postmodern as opposed to modern term; categorizing writers into movements has been a standard tool of literary analysis for ages (unsurprisingly, since people love putting things into categories).

Comment author: selylindi 02 October 2012 02:26:01PM 10 points [-]

nit to pick: Rod and cone cells don't send action potentials.

Comment author: Eliezer_Yudkowsky 02 October 2012 06:33:25PM 2 points [-]

Can you amplify? I'd thought I'd looked this up.

Comment author: shminux 02 October 2012 06:51:01PM 19 points [-]

Photoreceptor cells produce graded potentials, not action potentials. The signal goes through a bipolar cell and a ganglion cell before finally spiking, in a rather processed form.

Comment author: Eliezer_Yudkowsky 02 October 2012 06:51:19PM 2 points [-]

Ah, thanks!

Comment author: Larks 02 October 2012 02:39:26PM *  2 points [-]

Reply: The abstract concept of 'truth' - the general idea of a map-territory correspondence - is required to express ideas such as: ...

Is this true? Maybe there's a formal reason why, but it seems we can informally represent such ideas without the abstract idea of truth. For example, if we grant quantification over propositions,

Generalized across possible maps and possible cities, if your map of a city is accurate, navigating according to that map is more likely to get you to the airport on time.

becomes

  • Generalized across possible maps and possible cities, if your map of a city says "p" if and only if p, navigating according to that map is more likely to get you to the airport on time.

To draw a true map of a city, someone has to go out and look at the buildings; there's no way you'd end up with an accurate map by sitting in your living-room with your eyes closed trying to imagine what you wish the city would look like.

becomes

  • To draw a map of a city such that the map says "p" if and only if p, someone has to go out and look at the buildings; there's no way you'd end up with a map that says "p" if and only if p by sitting in your living-room with your eyes closed trying to imagine what you wish the city would look like.

True beliefs are more likely than false beliefs to make correct experimental predictions, so if we increase our credence in hypotheses that make correct experimental predictions, our model of reality should become incrementally more true over time.

becomes

  • Beliefs of the form "p", where p, are more likely than beliefs of the form "p", where it is not the case that p, to make correct experimental predictions, so if we increase our credence in hypotheses that make correct experimental predictions, our model of reality should incrementally contain more assertions "p" where p, and fewer assertions "p" where not p, over time.
Comment author: endoself 02 October 2012 05:17:53PM *  3 points [-]

Well, yeah, we can taboo 'truth'. You are still using the titular "useful idea" though by quantifying over propositions and making this correspondence. The idea that there are these things that are propositions and that they can appear both in quotation marks and also appear unquoted, directly in our map, is a useful piece of understanding to have.

Comment author: Eliezer_Yudkowsky 02 October 2012 06:31:37PM 4 points [-]

Generalized across possible maps and possible cities, if your map of a city says "p" if and only if p

If you can generalize over the correspondence between p and the quoted version of p, you have generalized over a correspondence schema between territory and map, ergo, invoked the idea of truth, that is, something mathematically isomorphic to in-general Tarskian truth, whether or not you named it.
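The quotation/disquotation schema being discussed can be sketched in a few lines of code. This is a toy model only, with an invented two-fact "territory": quoted sentences are strings, the territory is a lookup table, and a disquotational truth predicate holds of a quoted sentence exactly when the corresponding fact holds in the territory.

```python
# Toy Tarskian schema: "'p' is true if and only if p", with the
# territory represented (purely for illustration) as a dict of facts.

territory = {"snow_is_white": True, "sky_is_green": False}

def true(quoted_sentence):
    """A disquotational truth predicate: look up the quoted sentence
    in the territory and return whether the corresponding fact holds."""
    return territory[quoted_sentence]

# Generalizing across sentences: the schema holds for every p we can quote.
for p in territory:
    assert true(p) == territory[p]

print(true("snow_is_white"))  # True
print(true("sky_is_green"))   # False
```

The `for` loop is the point of contention above: once you quantify over the correspondence between each quoted `p` and its unquoted evaluation, you have invoked a general truth schema, whether or not you call it "truth".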

Comment author: incariol 02 October 2012 03:08:01PM *  1 point [-]

So... could this style of writing, with koans and pictures, be applied to transforming the majority of sequences into an even greater didactic tool?

Besides the obvious problems, I'm not sure how this would stand with Eliezer - they are, after all, his masterpiece.

Comment author: thomblake 02 October 2012 03:10:53PM 3 points [-]

his masterpiece

Really, more like his student work. It was "Blog every day so I will have actually written something" not "Blog because that is the ultimate expression of my ideas".

Comment author: Eliezer_Yudkowsky 02 October 2012 06:34:21PM 4 points [-]

Yep. The main problem would be that I'd been writing for years and years before then, and, alas for our unfair universe, also have a certain amount of unearned talent; finding somebody who can pick up the Sequences and improve them without making them worse, despite their obvious flaws as they stand, is an extremely nontrivial hiring problem.

Comment author: [deleted] 02 October 2012 04:19:31PM 2 points [-]

Suppose I have two different non-meaningful statements, A and B. Is it possible to tell them apart? On what basis? On what basis could we recognize non-meaningful statements as tokens of language at all?

Comment author: shminux 02 October 2012 05:15:01PM 1 point [-]

Is it possible to tell them apart?

Why would you want to?

Comment author: Eugine_Nier 02 October 2012 05:33:49PM 1 point [-]

See this.

Comment author: shminux 02 October 2012 06:09:33PM 0 points [-]

Not sure how this is relevant, feel free to elaborate.

Comment author: Peterdjones 02 October 2012 08:10:39PM 0 points [-]

What an odd thing to say. I can tell the difference between untestable sentences, and that's all I need to refute the LP verification principle. Stipulating a definition of "meaning" that goes beyond linguistic tractability doesn't solve anything, and stipulating that people shouldn't want to understand sentences about invisible gorillas doesn't either.

Comment author: shminux 02 October 2012 08:32:57PM 2 points [-]

invisible gorillas

Seems like we are not on the same page re the definition of meaningful. I expect "invisible gorillas" to be a perfectly meaningful term in some contexts.

Comment author: Peterdjones 02 October 2012 08:34:41PM 1 point [-]

I don't follow that, because it is not clear whether you are using the vanilla, linguistic notion of "meaning" or the stipulated LPish version.

Comment author: shminux 02 October 2012 09:24:53PM *  0 points [-]

I am not a philosopher and not a linguist; to me, the meaning of a word or a sentence is the information that can be extracted from it by the recipient, which can be a person, a group of people, or a computer, maybe even an AI. Thus it is not something absolute. I suppose it is closest to an internal interpretation. What is your definition?

Comment author: MixedNuts 02 October 2012 05:39:32PM 6 points [-]

Connotation. The statement has no well-defined denotation, but people say it to imply other, meaningful things. Islam is a religion of peace!

Comment author: [deleted] 02 October 2012 07:10:45PM 1 point [-]

Good answer. So, if I've understood you, you're saying that we can recognize meaningless statements as items of language (and as distinct from one another even) because they consist of words that are elsewhere and in different contexts meaningful.

So for example I may have a function "...is green." where we can fill this in with true objects ("the tree"), false objects ("the sky"), and objects which render the resulting sentence meaningless, like "three". The function can be meaningfully filled out, and 'three' can be the object of a meaningful sentence ('three is greater than two'), but in this connection the resulting sentence is meaningless.

Does that sound right to you?

Comment author: Peterdjones 02 October 2012 08:06:08PM 0 points [-]

OTOH, there is no reason to go along with the idea that denotation (or empirical consequence) is essential to meaning. You could instead use your realisation that you actually can tell the difference between untestable statements to conclude that they are in fact meaningful, whatever warmed-over Logical Positivism may say.

Comment author: [deleted] 02 October 2012 05:38:11PM 1 point [-]

Highly Advanced Epistemology 101 for Beginners

What does it tell about me that I mentally weighed "Highly Advanced" on one scale pan and "101" and "for Beginners" on the other pan?

I would have inverted the colours in the “All possible worlds” diagram (but with a black border around it) -- light-on-black reminds me of stars, and thence of the spatially-infinite-universe-including-pretty-much-anything idea, which is not terribly relevant here, whereas a white ellipse with a black border reminds me of a classical textbook Euler-Venn diagram.

an infinite family of truth-conditions: • The sentence 'snow is white' is true if and only if snow is white. • The sentence 'the sky is blue' is true if and only if the sky is blue.

What does it tell about me that I immediately thought ‘what about sentences whose meaning depends on the context’? :-)

What does it tell about me that on seeing the right-side part of the picture just above the koan, my System 1 expected to see infinite regress and was disappointed when the innermost frame didn't include a picture of the guy, and that my System 2 then thought ‘what kind of issue EY is neglecting does this correspond to’?

my beliefs determine my experimental predictions, but only reality gets to determine my experimental results.

What does it tell about me that I immediately thought ‘what about placebo and stuff’ (well, technically its aliefs that matter there, not beliefs, but not all of the readers will know the distinction)?

Comment author: MixedNuts 02 October 2012 05:46:37PM 0 points [-]

what about sentences whose meaning depends on the context

Ehn, the truth value depends on context too. "That girl over there heard what this guy just said" is true if that girl over there heard what this guy just said, false if she didn't, and meaningless if there's no girl or no guy or he didn't say anything.

what kind of issue EY is neglecting does this correspond to

Common knowledge, in general?

what about placebo and stuff

Beliefs are a strict subset of reality.

Comment author: [deleted] 02 October 2012 07:07:45PM 0 points [-]

what kind of issue EY is neglecting does this correspond to

Common knowledge, in general?

I was thinking more about stuff like, “but reality does also include my map, so a map of reality ought to include a map of itself” (which, as you mentioned, is related to my point about placebo-like effects).

Comment author: [deleted] 02 October 2012 05:48:06PM *  6 points [-]

Highly Advanced Epistemology 101 for Beginners

The joke flew right over my head and I found myself typing "Redundant wording. Advanced Epistemology for Beginners sounds better."

Comment author: anotherblackhat 02 October 2012 06:31:22PM 1 point [-]

The "All possible worlds" picture doesn't include the case of a marble in both the basket and the box.

Comment author: thomblake 02 October 2012 06:39:48PM 8 points [-]

I think there was only one marble in the universe.

Comment author: Armok_GoB 02 October 2012 08:18:37PM 7 points [-]

This sentence is hilarious out of context.

Comment author: earthwormchuck163 02 October 2012 08:22:34PM 2 points [-]

I would like to thank you for bringing my attention to that sentence without any context.

Comment author: wedrifid 02 October 2012 08:36:00PM 5 points [-]

This sentence is hilarious out of context.

Also presumably a true one, assuming he aims the 'was' correctly.

Comment author: Wei_Dai 02 October 2012 07:23:34PM 19 points [-]

There are some kinds of truths that don't seem to be covered by truth-as-correspondence-between-map-and-territory. (Note: This general objection is well know and is given as Objection 1 in SEP's entry on Correspondence Theory.) Consider:

  1. modal truths if one isn't a modal realist
  2. mathematical truths if one isn't a mathematical Platonist
  3. normative truths

Maybe the first two just argues for Platonism and modal realism (although I note that Eliezer explicitly disclaimed being a modal realist). The last one is most problematic to me, because some kinds of normative statements seem to be talking about what one should do given some assumed-to-be-accurate map, and not about the map itself. For example, "You should two-box in Newcomb's problem." If I say "Alice has a false belief that she should two-box in Newcomb's problem" it doesn't seem like I'm saying that her map doesn't correspond to the territory.

So, a couple of questions that seem open to me: Do we need other notions of truth, besides correspondence between map and territory? If so, is there a more general notion of truth that covers all of these as special cases?

Comment author: Benquo 02 October 2012 07:46:33PM *  1 point [-]

I don't think 2 is answered even if you say that the mathematical objects are themselves real. Consider a geometry that labels "true" everything that follows from its axioms. If this geometry is consistent, then we want to say that it is true, which implies that everything it labels as "true", is. And the axioms themselves follow from the axioms, so the mathematical system says that they're true. But you can also have another valid mathematical system, where one of those axioms is negated. This is a problem because it implies that something can be both true and not true.

Because of this, the sense in which mathematical propositions can be true can't be the same sense in which "snow is white" can be true, even if the objects themselves are real. We have to be equivocating somewhere on "truth".

Comment author: Peterdjones 02 October 2012 08:49:38PM *  0 points [-]

You are tacitly assuming that Platonists have to hold that what is formally true (provable, derivable from axioms) is actually true. But a significant part of the content of Platonism is that mathematical statements are only really true if they correspond to the organisation of Plato's heaven. Platonists can say, "I know you proved that, but it isn't actually true". So there are indeed different notions of truth at play here.

Which is not to defend Platonism. The notion of a "real truth" which can't be publically assessed or agreed upon in the way that formal proof can be is quite problematical.

Comment author: DuncanS 02 October 2012 10:23:29PM 4 points [-]

It's easy to overcome that simply by being a bit more precise - you are saying that such and such a proposition is true in geometry X. Meaning that the axioms of geometry X genuinely do imply the proposition. That this proposition may not be true in geometry Y has nothing to do with it.

It is a different sense of true in that it isn't necessarily related to sensory experience - only to the interrelationships of ideas.

Comment author: amcknight 02 October 2012 10:31:26PM 5 points [-]

I think a more general notion of truth could be defined as correspondence between a map and any structure. If you define a structure using axioms and are referencing that structure, then you can talk about the correspondence properties of that reference. This at least covers both mathematical structures and physical reality.

Comment author: Kaj_Sotala 02 October 2012 07:28:43PM 33 points [-]

I just realized that since I posted two comments that were critical over a minor detail, I should balance it out by mentioning that I liked the post - it was indeed pretty elementary, but it was also clear, and I agree about it being considerably better than The Simple Truth. And I liked the koans - they should be a useful device to the readers who actually bother to answer them.

Also:

Human children over the age of (typically) four, first begin to understand what it means for Sally to lose her marbles - for Sally's beliefs to stop corresponding to reality.

was a cute touch.

Comment author: Jonathan_Elmer 02 October 2012 07:29:43PM 0 points [-]

A belief is true if it is consistent with reality.

Comment author: EricHerboso 02 October 2012 07:33:36PM 3 points [-]

Two minor grammatical corrections:

A space is missing between "itself" and "is " in "The marble itselfis a small simple", and between "experimental" and "results" in "only reality gets to determine my experimentalresults".

Comment author: earthwormchuck163 02 October 2012 08:50:18PM 6 points [-]

The pictures are a nice touch.

Though I found it sort of unnerving to read a paragraph and then scroll down to see a cartoon version of the exact same image I had painted inside my head, several times in a row.

Comment author: [deleted] 02 October 2012 09:21:10PM 0 points [-]

The river side illustration is inaccurate and should be much more like the illustration right above (with the black shirt replaced with a white shirt).

Comment author: DuncanS 02 October 2012 09:28:32PM *  2 points [-]

People usually are not mistaken about what they themselves believe - though there are certain exceptions to this rule - yet nonetheless, the map of the map is usually accurate, i.e., people are usually right about the question of what they believe:

I'm not at all sure about this part - although I don't think it matters much to your overall case. I think one of our senses is a very much simplified representation of our own internal thought state. It's only just about good enough for us to make a chain of thought - taking the substance of a finished thought and using it as input to the next thought. In animals, I suspect this sense isn't good enough to allow thought chains to be made - and so they can't make arguments. In humans it is good enough, but probably not by very much - it seems rather likely that the ability to make thought chains evolved quite recently.

I think we probably make mistakes about what we think we think all the time - but there is usually nobody who can correct us.