
The Useful Idea of Truth

Post author: Eliezer_Yudkowsky 02 October 2012 06:16PM

(This is the first post of a new Sequence, Highly Advanced Epistemology 101 for Beginners, setting up the Sequence Open Problems in Friendly AI.  For experienced readers, this first post may seem somewhat elementary; but it serves as a basis for what follows.  And though it may be conventional in standard philosophy, the world at large does not know it, and it is useful to know a compact explanation.  Kudos to Alex Altair for helping in the production and editing of this post and Sequence!)


I remember this paper I wrote on existentialism. My teacher gave it back with an F. She’d underlined true and truth wherever it appeared in the essay, probably about twenty times, with a question mark beside each. She wanted to know what I meant by truth.
-- Danielle Egan

I understand what it means for a hypothesis to be elegant, or falsifiable, or compatible with the evidence. It sounds to me like calling a belief ‘true’ or ‘real’ or ‘actual’ is merely the difference between saying you believe something, and saying you really really believe something.
-- Dale Carrico

What then is truth? A movable host of metaphors, metonymies, and anthropomorphisms: in short, a sum of human relations which have been poetically and rhetorically intensified, transferred, and embellished, and which, after long usage, seem to a people to be fixed, canonical, and binding.
-- Friedrich Nietzsche


The Sally-Anne False-Belief task is an experiment used to tell whether a child understands the difference between belief and reality. It goes as follows:

  1. The child sees Sally hide a marble inside a covered basket, as Anne looks on.

  2. Sally leaves the room, and Anne takes the marble out of the basket and hides it inside a lidded box.

  3. Anne leaves the room, and Sally returns.

  4. The experimenter asks the child where Sally will look for her marble.

Children under the age of four say that Sally will look for her marble inside the box. Children over the age of four say that Sally will look for her marble inside the basket.

(Attributed to:  Baron-Cohen, S., Leslie, A. M. and Frith, U. (1985) ‘Does the autistic child have a “theory of mind”?’, Cognition, vol. 21, pp. 37–46.)

Human children over the age of (typically) four first begin to understand what it means for Sally to lose her marbles - for Sally's beliefs to stop corresponding to reality. A three-year-old has a model only of where the marble is. A four-year-old is developing a theory of mind; they separately model where the marble is and where Sally believes the marble is, so they can notice when the two conflict - when Sally has a false belief.
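As a toy illustration (hypothetical, not from the post), the four-year-old's extra bookkeeping amounts to tracking two separate records - the actual world, and Sally's modeled belief - which can come apart:

```python
# Toy sketch: a three-year-old tracks only where the marble is;
# a four-year-old also tracks where Sally *believes* it is,
# so the two records can disagree.

world = {"marble": "basket"}          # where the marble actually is
sally_belief = {"marble": "basket"}   # Sally's model, formed before she left

world["marble"] = "box"               # Anne moves the marble while Sally is away

def has_false_belief(belief, world):
    """A belief is false when reality falls outside its truth-condition."""
    return belief["marble"] != world["marble"]

print(has_false_belief(sally_belief, world))  # → True: Sally will look in the basket
```

The child who passes the task is, in effect, running both dictionaries at once and noticing the mismatch.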

Any meaningful belief has a truth-condition, some way reality can be which can make that belief true, or alternatively false. If Sally's brain holds a mental image of a marble inside the basket, then, in reality itself, the marble can actually be inside the basket - in which case Sally's belief is called 'true', since reality falls inside its truth-condition. Or alternatively, Anne may have taken out the marble and hidden it in the box, in which case Sally's belief is termed 'false', since reality falls outside the belief's truth-condition.

The mathematician Alfred Tarski once described the notion of 'truth' via an infinite family of truth-conditions:

  • The sentence 'snow is white' is true if and only if snow is white.

  • The sentence 'the sky is blue' is true if and only if the sky is blue.

When you write it out that way, it looks like the distinction might be trivial - indeed, why bother talking about sentences at all, if the sentence looks so much like reality when both are written out as English?
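One way to see that the schema is not trivial is to make the sentence and the world genuinely different kinds of objects: a quoted string on one side, a state of affairs on the other. A minimal sketch (the world-state keys and condition table here are invented for illustration):

```python
# Hypothetical sketch of Tarski's schema: the sentence is a mere string;
# its truth-condition is a predicate over world-states. A sentence is
# 'true' exactly when the world falls inside that condition.

world = {"snow_color": "white", "sky_color": "blue"}

truth_conditions = {
    "snow is white": lambda w: w["snow_color"] == "white",
    "the sky is blue": lambda w: w["sky_color"] == "blue",
}

def is_true(sentence, w):
    # The sentence 'snow is white' is true if and only if snow is white.
    return truth_conditions[sentence](w)

print(is_true("snow is white", world))  # → True
```

The string and the dictionary are different data types entirely; only the truth-condition connects them.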

But when we go back to the Sally-Anne task, the difference looks much clearer: Sally's belief is embodied in a pattern of neurons and neural firings inside Sally's brain, three pounds of wet and extremely complicated tissue inside Sally's skull. The marble itself is a small simple plastic sphere, moving between the basket and the box. When we compare Sally's belief to the marble, we are comparing two quite different things.

(Then why talk about these abstract 'sentences' instead of just neurally embodied beliefs? Maybe Sally and Fred believe "the same thing", i.e., their brains both have internal models of the marble inside the basket - two brain-bound beliefs with the same truth condition - in which case the thing these two beliefs have in common, the shared truth condition, is abstracted into the form of a sentence or proposition that we imagine being true or false apart from any brains that believe it.)

Some pundits have panicked over the point that any judgment of truth - any comparison of belief to reality - takes place inside some particular person's mind; and indeed seems to just compare someone else's belief to your belief:

So is all this talk of truth just comparing other people's beliefs to our own beliefs, and trying to assert privilege? Is the word 'truth' just a weapon in a power struggle?

For that matter, you can't even directly compare other people's beliefs to your own beliefs. You can only internally compare your beliefs about someone else's belief to your own belief - compare your map of their map, to your map of the territory.

Similarly, to say of one of your own beliefs that it is 'true' just means you're comparing your map of your map to your map of the territory. People are usually not mistaken about what they themselves believe - though there are certain exceptions to this rule - so the map of the map is usually accurate, i.e., people are usually right about the question of what they believe.

And so saying 'I believe the sky is blue, and that's true!' typically conveys the same information as 'I believe the sky is blue' or just saying 'The sky is blue' - namely, that your mental model of the world contains a blue sky.

Meditation:

If the above is true, aren't the postmodernists right? Isn't all this talk of 'truth' just an attempt to assert the privilege of your own beliefs over others, when there's nothing that can actually compare a belief to reality itself, outside of anyone's head?

(A 'meditation' is a puzzle that the reader is meant to attempt to solve before continuing. It's my somewhat awkward attempt to reflect the research which shows that you're much more likely to remember a fact or solution if you try to solve the problem yourself before reading the solution; succeed or fail, the important thing is to have tried first. This also reflects a problem Michael Vassar thinks is occurring, which is that since LW posts often sound obvious in retrospect, it's hard for people to visualize the diff between 'before' and 'after'; and this diff is also useful to have for learning purposes. So please try to say your own answer to the meditation - ideally whispering it to yourself, or moving your lips as you pretend to say it, so as to make sure it's fully explicit and available for memory - before continuing; and try to consciously note the difference between your reply and the post's reply, including any extra details present or missing, without trying to minimize or maximize the difference.)

...
...
...

Reply:

The reply I gave to Dale Carrico - who declaimed to me that he knew what it meant for a belief to be falsifiable, but not what it meant for beliefs to be true - was that my beliefs determine my experimental predictions, but only reality gets to determine my experimental results. If I believe very strongly that I can fly, then this belief may lead me to step off a cliff, expecting to be safe; but only the truth of this belief can possibly save me from plummeting to the ground and ending my experiences with a splat.

Since my expectations sometimes conflict with my subsequent experiences, I need different names for the thingies that determine my experimental predictions and the thingy that determines my experimental results. I call the former thingies 'beliefs', and the latter thingy 'reality'.

You won't get a direct collision between belief and reality - or between someone else's beliefs and reality - by sitting in your living-room with your eyes closed. But the situation is different if you open your eyes!

Consider how your brain ends up knowing that its shoelaces are untied:

  • A photon departs from the Sun, and flies to the Earth and through Earth's atmosphere.
  • Your shoelace absorbs and re-emits the photon.
  • The reflected photon passes through your eye's pupil and toward your retina.
  • The photon strikes a rod cell or cone cell, or to be more precise, it strikes a photoreceptor, a form of vitamin A known as retinal, which undergoes a change in its molecular shape (rotating around a double bond) powered by absorption of the photon's energy. A bound protein called an opsin undergoes a conformational change in response, and this further propagates to a neural cell body which pumps a proton and increases its polarization.
  • The gradual polarization change is propagated to a bipolar cell and then a ganglion cell. If the ganglion cell's polarization goes over a threshold, it sends out a nerve impulse, a propagating electrochemical phenomenon of polarization-depolarization that travels through the brain at between 1 and 100 meters per second. Now the incoming light from the outside world has been transduced to neural information, commensurate with the substrate of other thoughts.
  • The neural signal is preprocessed by other neurons in the retina, further preprocessed by the lateral geniculate nucleus in the middle of the brain, and then, in the visual cortex located at the back of your head, reconstructed into an actual little tiny picture of the surrounding world - a picture embodied in the firing frequencies of the neurons making up the visual field. (A distorted picture, since the center of the visual field is processed in much greater detail - i.e. spread across more neurons and more cortical area - than the edges.)
  • Information from the visual cortex is then routed to the temporal lobes, which handle object recognition.
  • Your brain recognizes the form of an untied shoelace.

And so your brain updates its map of the world to include the fact that your shoelaces are untied. Even if, previously, it expected them to be tied!  There's no reason for your brain not to update if politics aren't involved. Once photons heading into the eye are turned into neural firings, they're commensurate with other mind-information and can be compared to previous beliefs.

Belief and reality interact all the time. If the environment and the brain never touched in any way, we wouldn't need eyes - or hands - and the brain could afford to be a whole lot simpler. In fact, organisms wouldn't need brains at all.

So, fine, belief and reality are distinct entities which do intersect and interact. But to say that we need separate concepts for 'beliefs' and 'reality' doesn't get us to needing the concept of 'truth', a comparison between them. Maybe we can just separately (a) talk about an agent's belief that the sky is blue and (b) talk about the sky itself. Instead of saying, "Jane believes the sky is blue, and she's right", we could say, "Jane believes 'the sky is blue'; also, the sky is blue" and convey the same information about what (a) we believe about the sky and (b) what we believe Jane believes. We could always apply Tarski's schema - "The sentence 'X' is true iff X" - and replace every instance of alleged truth by talking directly about the truth-condition, the corresponding state of reality (i.e. the sky or whatever). Thus we could eliminate that bothersome word, 'truth', which is so controversial to philosophers, and misused by various annoying people.

Suppose you had a rational agent, or for concreteness, an Artificial Intelligence, which was carrying out its work in isolation and certainly never needed to argue politics with anyone. The AI knows that "My model assigns 90% probability that the sky is blue"; it is quite sure that this probability is the exact statement stored in its RAM. Separately, the AI models that "The probability that my optical sensors will detect blue out the window is 99%, given that the sky is blue"; and it doesn't confuse this proposition with the quite different proposition that the optical sensors will detect blue whenever it believes the sky is blue. So the AI can definitely differentiate the map and the territory; it knows that the possible states of its RAM storage do not have the same consequences and causal powers as the possible states of sky.
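The AI's bookkeeping can be sketched as follows; the 90% and 99% figures are from the text, while the sensor's false-positive rate is an invented assumption needed to complete the calculation:

```python
# Sketch (structure hypothetical): the AI's map is a probability stored
# in memory; the territory is the actual sky. To predict its sensor
# reading, the AI marginalizes over possible states of the *territory* -
# it does not confuse its belief with the sky itself.

p_sky_blue = 0.90                    # the AI's belief, stored in RAM
p_sensor_blue_given_blue = 0.99      # P(sensor detects blue | sky is blue)
p_sensor_blue_given_not_blue = 0.05  # invented assumption, not in the text

p_detect_blue = (p_sky_blue * p_sensor_blue_given_blue
                 + (1 - p_sky_blue) * p_sensor_blue_given_not_blue)
print(round(p_detect_blue, 3))  # → 0.896
```

Note that the sensor prediction depends on the possible states of the sky, weighted by belief - not on the belief alone - which is precisely the map/territory separation the paragraph describes.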

But does this AI ever need a concept for the notion of truth in general - does it ever need to invent the word 'truth'? Why would it work better if it did?

Meditation: If we were dealing with an Artificial Intelligence that never had to argue politics with anyone, would it ever need a word or a concept for 'truth'?

...
...
...

Reply: The abstract concept of 'truth' - the general idea of a map-territory correspondence - is required to express ideas such as:

  • Generalized across possible maps and possible cities, if your map of a city is accurate, navigating according to that map is more likely to get you to the airport on time.

  • To draw a true map of a city, someone has to go out and look at the buildings; there's no way you'd end up with an accurate map by sitting in your living-room with your eyes closed trying to imagine what you wish the city would look like.

  • True beliefs are more likely than false beliefs to make correct experimental predictions, so if we increase our credence in hypotheses that make correct experimental predictions, our model of reality should become incrementally more true over time.

This is the main benefit of talking and thinking about 'truth' - that we can generalize rules about how to make maps match territories in general; we can learn lessons that transfer beyond particular skies being blue.
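The third bullet above can be made concrete with a small Bayesian update - a hypothetical two-hypothesis sketch, not from the original post, in which the map that better predicts the experimental result gains credence:

```python
# Sketch: two rival maps of the same territory. Updating on a correct
# prediction shifts credence toward the hypothesis that made it, so
# repeating this process drives the model incrementally toward the truth.

prior = {"H_true": 0.5, "H_false": 0.5}
# H_true predicts the observed result with p=0.9, H_false with p=0.3
likelihood = {"H_true": 0.9, "H_false": 0.3}

evidence = sum(prior[h] * likelihood[h] for h in prior)
posterior = {h: prior[h] * likelihood[h] / evidence for h in prior}

print(round(posterior["H_true"], 3))  # → 0.75
```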


Next in main sequence:

Complete philosophical panic has turned out not to be justified (it never is). But there is a key practical problem that results from our internal evaluation of 'truth' being a comparison of a map of a map, to a map of reality: On this schema it is very easy for the brain to end up believing that a completely meaningless statement is 'true'.

Some literature professor lectures that the famous authors Carol, Danny, and Elaine are all 'post-utopians', which you can tell because their writings exhibit signs of 'colonial alienation'. For most college students the typical result will be that their brain's version of an object-attribute list will assign the attribute 'post-utopian' to the authors Carol, Danny, and Elaine. When the subsequent test asks for "an example of a post-utopian author", the student will write down "Elaine". What if the student writes down, "I think Elaine is not a post-utopian"? Then the professor models thusly...

...and marks the answer false.

After all...

  • The sentence "Elaine is a post-utopian" is true if and only if Elaine is a post-utopian.

...right?

Now of course it could be that this term does mean something (even though I made it up).  It might even be that, although the professor can't give a good explicit answer to "What is post-utopianism, anyway?", you can nonetheless take many literary professors and separately show them new pieces of writing by unknown authors and they'll all independently arrive at the same answer, in which case they're clearly detecting some sensory-visible feature of the writing.  We don't always know how our brains work, and we don't always know what we see, and the sky was seen as blue long before the word "blue" was invented; for a part of your brain's world-model to be meaningful doesn't require that you can explain it in words.

On the other hand, it could also be the case that the professor learned about "colonial alienation" by memorizing what to say to his professor.  It could be that the only person whose brain assigned a real meaning to the word is dead.  So that by the time the students are learning that "post-utopian" is the password when hit with the query "colonial alienation?", both phrases are just verbal responses to be rehearsed, nothing but an answer on a test.

The two phrases don't feel "disconnected" individually because they're connected to each other - post-utopianism has the apparent consequence of colonial alienation, and if you ask what colonial alienation implies, it means the author is probably a post-utopian.  But if you draw a circle around both phrases, they don't connect to anything else.  They're floating beliefs not connected with the rest of the model. And yet there's no internal alarm that goes off when this happens. Just as "being wrong feels like being right" - just as having a false belief feels the same internally as having a true belief, at least until you run an experiment - having a meaningless belief can feel just like having a meaningful belief.

(You can even have fights over completely meaningless beliefs.  If someone says "Is Elaine a post-utopian?" and one group shouts "Yes!" and the other group shouts "No!", they can fight over having shouted different things; it's not necessary for the words to mean anything for the battle to get started.  Heck, you could have a battle over one group shouting "Mun!" and the other shouting "Fleem!"  More generally, it's important to distinguish the visible consequences of the professor-brain's quoted belief (students had better write down a certain thing on his test, or they'll be marked wrong) from the proposition that there's an unquoted state of reality (Elaine actually being a post-utopian in the territory) which has visible consequences.)

One classic response to this problem was verificationism, which held that the sentence "Elaine is a post-utopian" is meaningless if it doesn't tell us which sensory experiences we should expect to see if the sentence is true, and how those experiences differ from the case if the sentence is false.

But then suppose that I transmit a photon aimed at the void between galaxies - heading far off into space, away into the night. In an expanding universe, this photon will eventually cross the cosmological horizon where, even if the photon hit a mirror reflecting it squarely back toward Earth, the photon would never get here because the universe would expand too fast in the meanwhile. Thus, after the photon goes past a certain point, there are no experimental consequences whatsoever, ever, to the statement "The photon continues to exist, rather than blinking out of existence."

And yet it seems to me - and I hope to you as well - that the statement "The photon suddenly blinks out of existence as soon as we can't see it, violating Conservation of Energy and behaving unlike all photons we can actually see" is false, while the statement "The photon continues to exist, heading off to nowhere" is true. And this sort of question can have important policy consequences: suppose we were thinking of sending off a near-light-speed colonization vessel as far away as possible, so that it would be over the cosmological horizon before it slowed down to colonize some distant supercluster. If we thought the colonization ship would just blink out of existence before it arrived, we wouldn't bother sending it.

It is both useful and wise to ask after the sensory consequences of our beliefs. But it's not quite the fundamental definition of meaningful statements. It's an excellent hint that something might be a disconnected 'floating belief', but it's not a hard-and-fast rule.

You might next try the answer that for a statement to be meaningful, there must be some way reality can be which makes the statement true or false; and that since the universe is made of atoms, there must be some way to arrange the atoms in the universe that would make a statement true or false. E.g. to make the statement "I am in Paris" true, we would have to move the atoms comprising myself to Paris. A literateur claims that Elaine has an attribute called post-utopianism, but there's no way to translate this claim into a way to arrange the atoms in the universe so as to make the claim true, or alternatively false; so it has no truth-condition, and must be meaningless.

Indeed there are claims where, if you pause and ask, "How could a universe be arranged so as to make this claim true, or alternatively false?", you'll suddenly realize that you didn't have as strong a grasp on the claim's truth-condition as you believed. "Suffering builds character", say, or "All depressions result from bad monetary policy." These claims aren't necessarily meaningless, but they're a lot easier to say than to visualize the universe that makes them true or false. Just like asking after sensory consequences is an important hint to meaning or meaninglessness, so is asking how to configure the universe.

But if you say there has to be some arrangement of atoms that makes a meaningful claim true or false...

Then the theory of quantum mechanics would be meaningless a priori, because there's no way to arrange atoms to make the theory of quantum mechanics true.

And when we discovered that the universe was not made of atoms, but rather quantum fields, all meaningful statements everywhere would have been revealed as false - since there'd be no atoms arranged to fulfill their truth-conditions.

Meditation: What rule could restrict our beliefs to just propositions that can be meaningful, without excluding a priori anything that could in principle be true?


  • Meditation Answers - (A central comment for readers who want to try answering the above meditation (before reading whatever post in the Sequence answers it) or read contributed answers.)
  • Mainstream Status - (A central comment where I say what I think the status of the post is relative to mainstream modern epistemology or other fields, and people can post summaries or excerpts of any papers they think are relevant.)

 

Part of the sequence Highly Advanced Epistemology 101 for Beginners

Next post: "Skill: The Map is Not the Territory"

Comments (508)

Comment author: Kaj_Sotala 02 October 2012 07:28:43PM 30 points [-]

I just realized that since I posted two comments that were critical over a minor detail, I should balance it out by mentioning that I liked the post - it was indeed pretty elementary, but it was also clear, and I agree about it being considerably better than The Simple Truth. And I liked the koans - they should be a useful device to the readers who actually bother to answer them.

Also:

Human children over the age of (typically) four, first begin to understand what it means for Sally to lose her marbles - for Sally's beliefs to stop corresponding to reality.

was a cute touch.

Comment author: [deleted] 03 October 2012 09:26:59PM 10 points [-]

Thank you for being positive.

I've been recently thinking about this, and noticed that despite things like "why our kind can't cooperate", we still focus on criticisms of minor points, even when there are major wins to be celebrated.

Comment author: Alex_Altair 03 October 2012 04:03:26AM 18 points [-]

She’d underlined true and truth wherever it appeared in the essay, probably about twenty times, with a question mark beside each. She wanted to know what I meant by truth.

She should hand back the paper with the note, "What do you mean by 'mean'?"

Comment author: Wei_Dai 02 October 2012 07:23:34PM 18 points [-]

There are some kinds of truths that don't seem to be covered by truth-as-correspondence-between-map-and-territory. (Note: This general objection is well known and is given as Objection 1 in SEP's entry on Correspondence Theory.) Consider:

  1. modal truths if one isn't a modal realist
  2. mathematical truths if one isn't a mathematical Platonist
  3. normative truths

Maybe the first two just argue for Platonism and modal realism (although I note that Eliezer explicitly disclaimed being a modal realist). The last one is most problematic to me, because some kinds of normative statements seem to be talking about what one should do given some assumed-to-be-accurate map, and not about the map itself. For example, "You should two-box in Newcomb's problem." If I say "Alice has a false belief that she should two-box in Newcomb's problem" it doesn't seem like I'm saying that her map doesn't correspond to the territory.

So, a couple of questions that seem open to me: Do we need other notions of truth, besides correspondence between map and territory? If so, is there a more general notion of truth that covers all of these as special cases?

Comment author: Viliam_Bur 05 October 2012 10:33:20AM *  5 points [-]

If I say "Alice has a false belief that she should two-box in Newcomb's problem" it doesn't seem like I'm saying that her map doesn't correspond to the territory.

The problem with Alice's belief is that it is incomplete. It's like saying "I believe that 3 is greater than" (end of sentence).

Even incomplete sentences can work in some contexts where people know how to interpret them. For example if we had a convention that all sentences ending with "greater than" have to be interpreted as "greater than zero", then in given context the sentence "3 is greater than" makes sense, and is true. It just does not make sense outside of this context. Without context, it's not a logical proposition, but rather a proposition template.

Similarly, the sentence "you should X" is meaningful in contexts which provide additional explanation of what "should" means. For a consequentialist, the meaning of "you should" is "maximizes your utility". For a theist, it could mean "makes Deity happy". For both of them, the meaning of "should" is obvious, and within their contexts, they are right. The sentence becomes confusing only when we take it out of context; when we pretend that the context is not necessary for completing it.

So perhaps the problem is not "some truths are not about map-territory correspondence", but rather "some sentences require context to be transformed into true/false expressions (about map-territory correspondence)".

Seems to me that this is somehow related to making ideas pay rent, in the sense that when you describe how you expect the idea to pay rent, in the process you explain the context.

Comment author: Bluehawk 26 November 2012 11:52:11AM 1 point [-]

At the risk of nitpicking:

"Makes Deity happy" sounds to me like a very specific interpretation of "utility", rather than something separate from it. I can't picture any context for the phrase "P should X" that doesn't simply render "X maximizes utility" for different values of the word "utility". If "make Deity happy" is the end goal, wouldn't "utility" be whatever gives you the most efficient route to that goal?

Comment author: amcknight 02 October 2012 10:31:26PM 4 points [-]

I think a more general notion of truth could be defined as correspondence between a map and any structure. If you define a structure using axioms and are referencing that structure, then you can talk about the correspondence properties of that reference. This at least cover both mathematical structures and physical reality.

Comment author: faul_sname 02 October 2012 11:22:07PM 2 points [-]

If I say "Alice has a false belief that she should two-box in Newcomb's problem" it doesn't seem like I'm saying that her map doesn't correspond to the territory.

It seems that way to me. Specifically, in that case I think you're saying that Alice (wrongly) expects that her decision is causally independent from the money Omega put in the boxes, and as such thinks that her expected utility is higher from grabbing both boxes.

Comment author: pragmatist 03 October 2012 12:18:38AM 2 points [-]

Michael Lynch has a functionalist theory of truth (described in this book) that responds to concerns like yours. His claim is that there is a "truth role" that is constant across all domains of discourse where we talk about truth and falsity of propositions. The truth role is characterized by three properties:

  1. Objectivity: The belief that p is true if and only if with respect to the belief that p, things are as they are believed to be.

  2. Norm of belief: It is prima facie correct to believe that p if and only if the proposition that p is true.

  3. End of inquiry: Other things being equal, true beliefs are a worthy goal of inquiry.

Lynch claims that, in different domains of discourse, there are different properties that play this truth role. For instance, when we're doing science it's plausible that the appropriate realizer of the truth role is some kind of correspondence notion. On the other hand, when we're doing mathematics, one might think that the truth role is played by some sort of theoretical coherence property. Mathematical truths, according to Lynch, satisfy the truth role, but not by virtue of correspondence to some state of affairs in our external environment. He has a similar analysis of moral truths.

I'm not sure whether Lynch's particular description of the truth role is right, but the functionalist approach (truth is a functional property, and the function can be performed by many different realizers) is very attractive to me.

Comment author: V_V 03 October 2012 10:49:02AM *  1 point [-]

Maybe the first two just argue for Platonism and modal realism (although I note that Eliezer explicitly disclaimed being a modal realist).

I think Yudkowsky is a Platonist, and I'm not sure he has a consistent position on modal realism, since when arguing on morality he seemed to espouse it: see his comment here.

For example, "You should two-box in Newcomb's problem." If I say "Alice has a false belief that she should two-box in Newcomb's problem" it doesn't seem like I'm saying that her map doesn't correspond to the territory.

I don't think that "You should two-box in Newcomb's problem." is actually a normative statement, even if it contains a "should": you can rephrase it epistemically as "If you two-box in Newcomb's problem then you will maximize your expected utility".

Therefore, if you say "Alice has a false belief that if she two-boxes in Newcomb's problem then she will maximize her expected utility" you are saying that her belief doesn't correspond to the mathematical constructs underlying Newcomb's problem. If you take the Platonist position that mathematical constructs exist as external entities ("the territory"), then yes, you are saying that her map doesn't correspond to the territory.

Comment author: TheOtherDave 03 October 2012 02:10:21PM 2 points [-]

I don't think that "You should two-box in Newcomb's problem." is actually a normative statement, even if it contains a "should": you can rephrase it epistemically as "If you two-box in Newcomb's problem then you will maximize your expected utility".

Well, sure, a utilitarian can always "rephrase" should-statements that way; to a utilitarian what "X should Y" means is "Y maximizes X's expected utility." That doesn't make "X should Y" not a normative statement, it just means that utilitarian normative statements are also objective statements about reality.

Conversely, I'm not sure a deontologist would agree that you can rephrase one as the other... that is, a deontologist might coherently (and incorrectly) say "Yes, two-boxing maximizes expected utility, but you still shouldn't do it."

Comment author: Wei_Dai 03 October 2012 01:03:44PM 1 point [-]

I think Yudkowsky is a Platonist, and I'm not sure he has a consistent position on modal realism, since when arguing on morality he seemed to espouse it: see his comment here.

Thanks for the link. That does seem inconsistent.

I don't think that "You should two-box in Newcomb's problem." is actually a normative statement, even if it contains a "should": you can rephrase it epistemically as "If you two-box in Newcomb's problem then you will maximize your expected utility".

This comment should help you understand why I disagree. Does it make sense?

Comment author: V_V 03 October 2012 03:01:02PM 2 points [-]

This comment should help you understand why I disagree. Does it make sense?

I don't claim that all injunctions can be rephrased as epistemic statements. I claim that decision-theoretic injunctions can be rephrased as epistemic statements. Moral injunctions can't.

Comment author: [deleted] 03 October 2012 10:11:00AM 1 point [-]

He says that counterfactuals do have a truth value, though IMO he's a bit vague about what that is (or maybe it's me who can't fully understand what he says).

Comment author: Benquo 02 October 2012 07:46:33PM *  1 point [-]

I don't think 2 is answered even if you say that the mathematical objects are themselves real. Consider a geometry that labels "true" everything that follows from its axioms. If this geometry is consistent, then we want to say that it is true, which implies that everything it labels as "true", is. And the axioms themselves follow from the axioms, so the mathematical system says that they're true. But you can also have another valid mathematical system, where one of those axioms is negated. This is a problem because it implies that something can be both true and not true.

Because of this, the sense in which mathematical propositions can be true can't be the same sense in which "snow is white" can be true, even if the objects themselves are real. We have to be equivocating somewhere on "truth".

Comment author: thomblake 02 October 2012 01:56:35PM *  12 points [-]

This post is better than The Simple Truth and I will be linking to it more often, even though this isn't as funny.

Nice illustrations.

EDIT: Reworded in praise-first style.

Comment author: MBlume 02 October 2012 09:18:08PM *  16 points [-]

The other day Yvain was reading aloud from Feser and I said I wished Feser would read The Simple Truth. I don't think this would help quite as much.

The Simple Truth sought to convey the intuition that truth is not just a property of propositions in brains, but of any system successfully entangled with another system. Once the shepherd's leveled up a bit in his craftsmanship, the sheep can pull aside the curtain, drop a pebble into the bucket, and the level in the bucket will remain true without human intervention.

Comment author: Eliezer_Yudkowsky 02 October 2012 06:35:56PM 4 points [-]

Thanks!

Comment author: [deleted] 02 October 2012 09:31:23PM 12 points [-]

I also really enjoyed this post, and specifically thought that the illustrations were much nicer than what's been done before.

However, I did notice that out of all the illustrations that were made for this post, there were about 8 male characters drawn, and 0 females. (The first picture of the Sally-Anne test did portray females, but it was taken from another source, not drawn for this post like the others.) In the future, it might be a good idea to portray both men AND women in your illustrations. I know that you personally use the "flip a coin" method for gender assignment when you can, but it doesn't seem like the illustrator does. (There IS only about a 0.4% chance that eight coin flips all come up "male" for the drawings.)

Comment author: Eliezer_Yudkowsky 02 October 2012 11:33:18PM 15 points [-]

The specs given to the illustrator were stick figures. I noticed the male prevalence and requested some female versions or replacement with actual stick figures.

Comment author: arundelo 02 October 2012 11:52:45PM 4 points [-]

In the light of the illustrations' lack of gender variety it's strange that they do have a variety of skin and hair colors.

Comment author: [deleted] 03 October 2012 10:14:06AM *  2 points [-]

I hadn't noticed about their sex, but I did notice that they all seem to be children and no adults (EDIT: except the professor in the last picture). (BTW, the character with dark hair, pale skin, red T-shirt and blue trousers doesn't obviously look masculine to me; it might as well be a female child (too young to have boobs).)

Comment author: Alex_Altair 04 October 2012 06:37:23PM 1 point [-]

Fixed.

Comment author: lukeprog 02 October 2012 03:42:10PM 3 points [-]

Ditto.

Comment author: buybuydandavis 03 October 2012 01:54:51AM 10 points [-]

I don't think EY has chosen the most useful way to proceed on a discussion of truth. He has started from an anecdote where the correspondence theory of truth is the most applicable, and charges ahead developing the correspondence theory.

We call some beliefs true, and some false. True and false are judgments we apply to beliefs - sorting them into two piles. I think the limited bandwidth of a binary split should already be a tip off that we're heading down the wrong path.

In practice, ideas will be more or less useful, with that usefulness varying with the specifics of the context in which those beliefs are applied. Even taking "belief as predictive model" as given, a belief isn't simply accurate or inaccurate; it will be more or less accurate, and so more or less useful, which I've claimed is the general case of interest.

Going back to the instrumental versus epistemic distinction, I want to win, and having a model that accurately predicts events is only one tool for winning among many. It's a wonderful simulation tool, but not the only thing I can do with beliefs.

If I'm going to sort beliefs into more and less useful, the first thing to do is identify the ways that a belief can be used. What can I do with a belief?

I can ruminate on it. Sometimes that will be enjoyable, sometimes not.

I can compare it to my other beliefs. That allows for some correction of inconsistent beliefs.

I can use it to take action. This is where the correspondence theory gets its main application. I can use a model in my head to make a prediction, and take action based on that prediction.

However, the prediction itself is mainly an intermediate good for selecting the best action. Well, one can skip the middleman and have a direct algorithmic rule, "If A, do(x)", to get the job done. That rule can be useful without making any predictions. One can believe in such a rule, and rely on it, to take action as well. Beliefs directing action can be algorithmic instead of predictive, so that correspondence theory isn't the only option even in its main domain of application.
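The contrast between a belief-as-model and a direct "If A, do(x)" rule can be sketched with two toy agents. Everything here (the observations, the rules, the umbrella domain) is a hypothetical illustration, not anything from the comment itself:

```python
# Two ways a "belief" can direct action (toy agents).

def predictive_agent(observation: str) -> str:
    # Belief as model: form an explicit prediction about the world,
    # then choose the action whose predicted outcome is best.
    predicted_rain = observation == "dark clouds"
    return "take umbrella" if predicted_rain else "leave umbrella"

def reflex_agent(observation: str) -> str:
    # Belief as rule: "If A, do(x)" -- no intermediate prediction at all.
    rules = {"dark clouds": "take umbrella"}
    return rules.get(observation, "leave umbrella")

# Both can produce identical behavior; only the first routes
# through a prediction that could correspond (or fail to
# correspond) to the territory.
assert predictive_agent("dark clouds") == reflex_agent("dark clouds")
```

The point of the sketch is that only the predictive agent has an internal state that the correspondence theory straightforwardly applies to; the reflex rule can still be judged useful or useless without ever being judged "true."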

Back to what I can do with a belief, I can tell it to my neighbor. That becomes a very complicated use because it now involves the interaction with another mind with other knowledge. I can inform my neighbor of something. I can lie to my neighbor. I can signal to my neighbor. There are quite a number of uses to communicating a belief to my neighbor. One interesting thing is that I can communicate things to my neighbor that I don't even understand.

What I would expect, in a population of evolved beings, is that there'd be some impulse to judge beliefs for all these uses, and to varying degrees for each usage across the population.

So charging off on the correspondence theory strikes me as going very deep into only one usage of beliefs that people are likely to find compelling, and probably the one that's already best analyzed, as that is the perspective that best allows for systematic analysis.

What I think is potentially much more useful is an analysis of all the other truth modalities from the correspondence theory perspective.

Just as Haidt finds multiple moral modalities, and subpopulations defined in their moral attitudes by their weighting of those different modalities, I suspect that a similar kind of thing is happening with respect to truth modalities. Further, I'd guess that political clustering occurs not just in moral modality space, but in the joint moral-truth modality space as well.

Comment author: selylindi 02 October 2012 02:26:01PM 10 points [-]

nit to pick: Rod and cone cells don't send action potentials.

Comment author: Eliezer_Yudkowsky 02 October 2012 06:33:25PM 2 points [-]

Can you amplify? I'd thought I'd looked this up.

Comment author: shminux 02 October 2012 06:51:01PM 18 points [-]

Photoreceptor cells produce graded potentials, not action potentials. The signal passes through a bipolar cell and a ganglion cell before finally spiking, in a rather processed form.

Comment author: Eliezer_Yudkowsky 02 October 2012 06:51:19PM 2 points [-]

Ah, thanks!

Comment author: Sniffnoy 02 October 2012 08:59:24AM 10 points [-]

The quantum-field-theory-and-atoms thing seems to be not very relevant, or at least not well-stated. I mean, why the focus on atoms in the first place? To someone who doesn't already know, it sounds like you're just saying "Yes, elementary particles are smaller than atoms!" or more generally "Yes, atoms are not fundamental!"; it's tempting to instead say "OK, so instead of taking a possible state of configurations of atoms, take a possible state of whatever is fundamental."

I'm guessing the problem you're getting at is that when you actually try to do this, you quickly find that you're talking about not the state of the universe but the state of a whole notional multiverse, and that you're not talking about one present state of it but its entire evolution over time as one big block, which makes our original this-universe-focused, present-focused notion a little harder to make sense of -- or if not this particular problem then something similar -- but as written it sounds like you're just making a stupid verbal trick.

Comment author: DuncanS 02 October 2012 10:08:32PM *  4 points [-]

I agree - atoms and so forth are what our universe happens to consist of. But I can't see why that's relevant to the question of what truth is at all - I'd say that the definition of truth and how to determine it are not a function of the physics of the universe one happens to inhabit. Adding physics into the mix tends therefore to distract from the main thrust of the argument - making me think about two complex things instead of just one.

Comment author: earthwormchuck163 02 October 2012 08:50:18PM 6 points [-]

The pictures are a nice touch.

Though I found it sort of unnerving to read a paragraph and then scroll down to see a cartoon version of the exact same image I had painted inside my head, several times in a row.

Comment author: Konkvistador 02 October 2012 05:48:06PM *  6 points [-]

Highly Advanced Epistemology 101 for Beginners

The joke flew right over my head and I found myself typing "Redundant wording. Advanced Epistemology for Beginners sounds better."

Comment author: yli 02 October 2012 02:22:42PM *  25 points [-]

I don't like the "post-utopian" example. I can totally expect differing sensory experiences depending on whether a writer is post-utopian or not. For example, if they're post-utopian, when reading their biography I would more strongly expect reading about them having been into utopian ideas when they were young, but having then changed their mind. And when reading their works, I would more strongly expect seeing themes of the imperfectability of the world and weltschmerz.

Comment author: Eliezer_Yudkowsky 03 October 2012 09:32:29PM 6 points [-]

I've edited the OP to try and compartmentalize off the example a bit more.

Comment author: TimS 03 October 2012 10:12:34PM 1 point [-]

Do you also think the label "Impressionist painter" is meaningless?

Comment author: Eliezer_Yudkowsky 03 October 2012 10:29:07PM 5 points [-]

I have no idea what Impressionism is (I am not necessarily proud of this ignorance, since for all I know it does mean something important). Do you think that a panel of artists would be able to tell who was and wasn't "Impressionist" and mostly agree with each other? That does seem like a good criterion for whether there's sensory data that they're reacting to.

Comment author: Kaj_Sotala 04 October 2012 05:21:42AM *  13 points [-]

Apparently even computers agree with those judgments (or at least cluster "impressionists" in their own group - I didn't read the paper, but I expect that the cluster labels were added manually).

ETA: Got the paper. Excerpts:

The dataset includes 994 paintings representing 34 painters, such that each painter has at least 19 images in the dataset. The painters represent several different schools of art such as Early, High, and Northern Renaissance, Mannerism, Baroque, Rococo, Romanticism, Impressionism, Post and Neo Impressionism, Abstract Expressionism, Surrealism, and Fauvism, as commonly defined by art historians. The images were downloaded from various online sources, and normalized to a size of 640,000 pixels while preserving the original aspect ratio. The paintings that were selected for the experiment are assumed to be all in their original condition.

[...] To make the analysis more meaningful for comparing similarities between artistic styles of painters, we selected for each painter paintings that reflect the signature artistic style of that painter. For instance, in Wassily Kandinsky collection we included only paintings representing his abstract expressionism signature artistic style, and did not include his earlier work such as “The-Blue-Rider”, which embodies a different artistic style.

The dataset is used such that in each run 17 different paintings per artist are randomly selected to determine the Fisher discriminant scores of the features, and two images from each painter are used to determine the distances between the images using the WND method [Shamir 2008; Shamir et al. 2008, 2009, 2010]. The experiment is repeated automatically 100 times, and the arithmetic means of the distances across all runs are computed. [...]

The image analysis method is based on the WND-CHARM scheme [Shamir 2008; Shamir et al. 2008], which was originally developed for biomedical image analysis [Shamir et al. 2008, 2009]. The CHARM [Shamir, 2008; Shamir et al. 2010] set of numerical image content descriptors is a comprehensive set of 4027 features that reflect very many aspects of the visual content such as shapes (Euler number, Otsu binary object statistics), textures (Haralick, Tamura), edges (Prewitt gradient statistics), colors [Shamir 2006], statistical distribution of the pixel intensities (multiscale histograms, first four moments), fractal features [Wu et al. 1992], and polynomial decomposition of the image (Chebyshev statistics). These content descriptors are described more thoroughly in Shamir [2008] and Shamir et al. [2008, 2009, 2010]. This scheme of numerical image content descriptors was originally developed for complex morphological analysis of biomedical imaging, but was also found useful for the analysis of visual art [Shamir et al. 2010; Shamir 2012].

An important feature of the set of numerical image content descriptors is that the color descriptors are based on a first step of classifying each pixel into one of 10 color classes based on a fuzzy logic model that mimics the human intuition of colors [Shamir 2006]. This transformation to basic color classes ensures that further analysis of the color information is not sensitive to specific pigments that were not available to some of the classical painters in the dataset, or to the condition and restoration of some of the older paintings used in this study.

[...] As the figure shows, the classical artists are placed in the lower part of the phylogeny, while the modern artists are clustered in the upper part. A clear distinction between those groups at the center reflects the difference between classical realism and modern artistic styles that evolved during and after the 19th century.

Inside those two broad groups, it is noticeable that the computer was able to correctly cluster artists that belong in the same artistic movements, and placed these clusters on the graph in a fashion that is largely in agreement with the analysis of art historians. For instance, the bottom center cluster includes the High Renaissance artists Raphael, Da Vinci, and Michelangelo, indicating that the computer analysis could identify that these artists belong in the same school of art and have similar artistic styles [O’Mahony 2006].

The Early Renaissance artists Ghirlandaio, Francesca, and Botticelli are clustered together left to the High Renaissance painters, and the Northern Renaissance artists Bruegel, Van Eyck, and Durer are placed above the High Renaissance. Further to the right, close to the High Renaissance, the algorithm placed three painters associated with the Mannerism movement, Veronese, Tintoretto, and El Greco, who were inspired by Renaissance artists such as Michelangelo [O’Mahony 2006]. Below the Mannerism painters the algorithm automatically grouped three Baroque artists, Vermeer, Rubens, and Rembrandt. Interestingly, Goya, a Rococo and Romanticism artist, is placed between the Mannerism and the Baroque schools. The Romanticism artists, Gericault and Delacroix, who were inspired by Baroque painters such as Rubens [Gariff 2008], are clustered next to the Baroque group.

The upper part of the phylogeny features the modern artists. The Abstract Expressionists Kandinsky, Rothko, and Pollock are grouped together, as it has been shown that abstract paintings can be automatically differentiated from figural paintings with high accuracy [Shamir et al. 2010]. Surrealists Dali, Ernst, and de Chirico are also clustered by the computer analysis. An interesting observation is that the Fauvists Matisse and Derain are placed close to each other, between the Neo Impressionists and Abstract Expressionists clusters.

The neighboring clusters of Neo Impressionists Seurat and Signac and Post Impressionists Cezanne and Gauguin are also in agreement with the perception of art historians, as well as the cluster of Impressionists Renoir and Monet. These two artists are placed close to Vincent Van Gogh, who is associated with the Post Impressionism artistic movement. The separation of Van Gogh from the other Post Impressionist painters can be explained by the influence of Monet and Renoir on his artistic style [Walther and Metzger 2006], or by his unique painting style reflected by low-level image features that are similar to the style of Jackson Pollock [Shamir 2012], and could affect the automatic placement of Van Gogh on the phylogeny.
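For readers curious how such a pipeline works in miniature, here is a toy sketch in the spirit of the method the excerpt describes. This is not the WND-CHARM code; the three features and every number below are invented for illustration, standing in for the 4,027 CHARM descriptors and the computed phylogeny:

```python
# Toy version of the pipeline: represent each painter by a small
# feature vector and find which painters' vectors sit closest,
# the analogue of adjacent placement on the phylogeny.
import math

# Hypothetical 3-feature vectors: (edge density, color variety, realism).
painters = {
    "Monet":   (0.30, 0.80, 0.40),
    "Renoir":  (0.35, 0.75, 0.45),
    "Raphael": (0.60, 0.50, 0.95),
    "DaVinci": (0.65, 0.45, 0.90),
    "Pollock": (0.90, 0.90, 0.05),
}

def nearest_neighbor(name: str) -> str:
    # Euclidean distance in feature space; the real method uses a
    # weighted distance over Fisher-scored features.
    me = painters[name]
    others = [(math.dist(me, v), k) for k, v in painters.items() if k != name]
    return min(others)[1]

# The invented Impressionist vectors land next to each other,
# as do the High Renaissance pair.
assert nearest_neighbor("Monet") == "Renoir"
assert nearest_neighbor("Raphael") == "DaVinci"
```

With made-up features the clustering is of course guaranteed by construction; the interesting empirical result in the paper is that features extracted automatically from the pixels reproduce the art historians' groupings.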

Comment author: TimS 03 October 2012 10:39:44PM 8 points [-]

I'm no art geek, but Impressionism is an art "movement" from the late 1800s. A variety of artists (Monet, Renoir, etc) began using similar visual styles that influenced what they decided to paint and how they depicted images.

Art critics think that artistic "movements" are a meaningful way of analyzing paintings, approximately at the level of usefulness that a biologist might apply to "species" or "genus." Or a historian of philosophy might talk about the school of thought known today as "Logical Positivism."

Do you think movements are a reasonable unit of analysis (in art, in literature, in philosophy)? If not, why not? If yes, why are you so hostile to the usage of labels like "post-utopian" or "post-colonialist"?

Comment author: Viliam_Bur 05 October 2012 10:50:24AM *  4 points [-]

Art critics think that artistic "movements" are a meaningful way of analyzing paintings, approximately at the level of usefulness that a biologist might apply to "species" or "genus."

The pictures made within an artistic movement have something similar. We should classify them by that something, not only by the movement. Although the name of the movement can be used as a convenient label for the given cluster of picture-space.

If I give you a picture made by an unknown author, you can't classify it by the author's participation in given movements. But you can classify it by the contents of the picture itself. So even if we use the movement as a label for the cluster, it is better if we can also describe the typical properties of pictures within that cluster.

Just like when you find a random dog on a street, you can classify it as the "dog" species without taking a time machine and finding out whether the ancestors of this specific dog really were domesticated wolves. You can teach "dogs are domesticated wolves" at school, but this is not how you recognize dogs in real life.

So how exactly would you recognize "impressionist" paintings, or "post-utopian" books in real life, when the author is unknown? Without teaching this, you are not truly teaching impressionism or post-utopianism.

(In case of "impressionism", my rule of thumb is that the picture looks nice and realistic from distance, but when you stand close to it, the details become somehow ugly. My interpretation of "impressionism" is: work of authors who obviously realized that millimeter precision for a wall painting is an overkill, and you can make pictures faster and cheaper if you just optimize it for looking correct from a typical viewing distance.)

Comment author: TheOtherDave 05 October 2012 02:12:58PM 2 points [-]

I agree with you that there are immediately obvious properties that I use to classify an object into a category, without reference to various other historical and systemic facts about the object. For example, as you say, I might classify a work of art as impressionist based on the precision with which it is rendered, or classify an animal as a dog based on various aspects of its appearance and behavior, or classify food as nutritious based on color, smell, and so forth.

It doesn't follow that it's somehow better to do so than to classify the object based on the less obvious historical or systemic facts.

If I categorize an object as nutritious based on those superficial properties, and later perform a lab analysis and discover that the object will kill me if I eat it, I will likely consider my initial categorization a mistake.

If I share your rule of thumb about "impressionism", and then later realize that some works of art that share the property of being best viewed from a distance are consistently classed by art students as "pointilist" rather than "impressionist", and I further realize that when I look at a bunch of classed-as-pointilist and classed-as-impressionist paintings it's clear to me that paintings in each class share a family resemblance that they don't share with paintings in the other class, I will likely consider my initial rule of thumb a mistake.

Sometimes, the categorization I perform based on properties that aren't immediately apparent is more reliable than the one I perform "in real life."

Comment author: Eliezer_Yudkowsky 02 October 2012 06:20:21PM 1 point [-]

Is this actually a standard term? I was trying to make up a new one, without having to actually delve into the pits of darkness and find a real postmodern literary term that doesn't mean anything.

Comment author: Kaj_Sotala 02 October 2012 06:48:53PM *  15 points [-]

I don't think you can avoid the criticism of "literary terms actually do tend to make one expect differing sensory experiences, and your characterization of the field is unfair" simply by inventing a term which isn't actually in use. I don't know whether "post-utopian" is actually a standard term, but yli's comment doesn't depend on it being one.

Comment author: thomblake 02 October 2012 06:43:37PM 4 points [-]

Well, there are a lot of hits for "post-utopian" on Google, and they don't seem to be references to you.

Comment author: Eliezer_Yudkowsky 02 October 2012 06:46:29PM 3 points [-]

I think there were fewer Google references back when I first made up the word... I will happily accept nominations for either an equally portentous-sounding but unused term, or a portentous-sounding real literary term that is known not to mean anything.

Comment author: [deleted] 02 October 2012 07:43:02PM 13 points [-]

Has anyone ever told you your writing style is Alucentian to the core? Especially in the way your municardist influences constrain the transactional nuances of your structural ephamthism.

Comment author: Eliezer_Yudkowsky 02 October 2012 09:03:31PM 4 points [-]

This looks promising. Is it real, or did you verify that the words don't mean anything standard?

Comment author: Jonathan_Elmer 02 October 2012 07:32:24PM 6 points [-]

Coming up with a made up word will not solve this problem. If the word describes the content of the author's stories then there will be sensory experiences that a reader can expect when reading those stories.

Comment author: Scottbert 03 October 2012 04:35:09PM 4 points [-]

I think the idea is that the hypothetical teacher is making students memorize passwords instead of teaching the meaning of the concept.

Comment author: lukeprog 11 January 2013 07:15:29AM *  2 points [-]

post-catalytic
psycho-elemental
anti-ludic
anarcho-hegemonic
desublimational

Comment author: fubarobfusco 11 January 2013 12:36:50PM 1 point [-]

"Cogno-intellectual" was the catchphrase for this when I was in school. See Abrahams et al.:

We invite you to take part in a large-scale language experiment. It concerns the word "cogno-intellectual." This noble word can be used as an adjective or as a noun. We just invented it. The fact that "cogno-intellectual" has no meaning makes it a useful word. Meaning nothing, it can be used for anything.

Here is the experiment. Use the word "cogno-intellectual" in written and oral communications with colleagues, especially with colleagues whom you do not know well. If you are a student, use it with your most impressionable teachers. If you are a teacher, use it with your most impressionable administrators. Use it at meetings. Use it with significant strangers. Use it with abandon. Use it with panache. The main thing is: use it.

Comment author: BerryPick6 11 January 2013 12:44:33PM *  3 points [-]

To see the word used spectacularly, check out this paper: www.es.ele.tue.nl/~tbasten/fun/rhetoric_logic.pdf

Comment author: lukeprog 11 January 2013 05:39:10PM 1 point [-]

LW comments use the Markdown syntax.

Comment author: thomblake 02 October 2012 07:42:31PM 2 points [-]

I don't think literature has any equivalent to metasyntactic variables. Still, placeholder names might help - perhaps they are examples of "post-kadigan" literature?

Comment author: novalis 02 October 2012 08:16:41PM *  26 points [-]

Maybe you should reconsider picking on an entire field you know nothing about?

I'm not saying this to defend postmodernism, which I know almost nothing about, but to point out that the Sokal hoax is not really enough reason to reject an entire field (any more than the Bogdanov affair is for physics).

I'm pointing out that you're neglecting the virtues of curiosity and humility, at least.

And this is leaving aside that there is no particular reason for "post-utopian" to be a postmodern as opposed to modern term; categorizing writers into movements has been a standard tool of literary analysis for ages (unsurprisingly, since people love putting things into categories).

Comment author: paper-machine 03 October 2012 04:36:14PM 13 points [-]

At this point, getting in cheap jabs at post-modernism and philosophy wherever possible is a well-honored LessWrong tradition. Can't let the Greens win!

Comment author: yli 03 October 2012 02:09:57AM *  2 points [-]

Is this actually a standard term?

I have no idea, I just interpreted it in an obvious way.

Comment author: Scottbert 03 October 2012 04:32:12PM *  2 points [-]

I share this interpretation, but I always figured in Eliezer's examples the hypothetical professor was so obsessed with passwords or sounding knowledgeable that they didn't bother to teach the meaning of 'post-utopian', and might even have forgotten it. Or they were teaching to the test, but if this is a college class there is no standard test, so they're following some kind of doubly-lost purpose.

Or it could be that the professor is passing down passwords they were taught as a student themselves. A word must have had some meaning when it was created, but if most people treat it as a password it won't constrain their expectations.

Also, I like that the comment system correctly interpreted my use of underbars to mean italics. I've been using that convention in plaintext for 15 years or so, glad to see someone agrees with it!

Comment author: Eliezer_Yudkowsky 02 October 2012 05:28:00AM 23 points [-]

(The 'Mainstream Status' comment is intended to provide a quick overview of what the status of the post's ideas are within contemporary academia, at least so far as the poster knows. Anyone claiming a particular paper precedents the post should try to describe the exact relevant idea as presented in the paper, ideally with a quote or excerpt, especially if the paper is locked behind a paywall. Do not represent large complicated ideas as standard if only a part is accepted; do not represent a complicated idea as precedented if only a part is described. With those caveats, all relevant papers and citations are much solicited! Hopefully comment-collections like these can serve as a standard link between LW presentations and academic ones.)

The correspondence theory of truth is the first position listed in the Stanford Encyclopedia of Philosophy, which is my usual criterion for saying that something is a solved problem in philosophy. Clear-cut simple visual illustration inspired by the Sally-Anne experimental paradigm is not something I have previously seen associated with it, so the explanation in this post is - I hope - an improvement over what's standard.

Alfred Tarski is a famous mathematician whose theory of truth is widely known.

The notion of possible worlds is very standard and popular in philosophy; some philosophers even ascribe much more realism to them than I would (since I regard them as imaginary constructs, not thingies that can potentially explain real events as opposed to epistemic puzzles).

I haven't particularly run across any philosophy explicitly making the connection from the correspondence theory of truth to "There are causal processes producing map-territory correspondences" to "You have to look at things in order to draw accurate maps of them, and this is a general rule with no exception for special interest groups who want more forgiving treatment for their assertions". I would not be surprised to find out it existed, especially on the second clause.

Added: The term "post-utopian" was intended to be a made-up word that had no existing standardized meaning in literature, though it's simple enough that somebody has probably used it somewhere. It operates as a stand-in for more complicated postmodern literary terms that sound significant but mean nothing. If you think there are none of those, Alan Sokal would like to have a word with you. (Beating up on postmodernism is also pretty mainstream among Traditional Rationalists.)

You might also be interested in checking out what Mohandas Gandhi had to say about "the meaning of truth", just in case you were wondering what things are like in the rest of the world outside the halls of philosophy departments.

Comment author: pragmatist 02 October 2012 09:36:29PM *  31 points [-]

This is a great post. I think the presentation of the ideas is clearer and more engaging than the sequences, and the cartoons are really nice. Wild applause for the artist.

I have a few things to say about the status of these ideas in mainstream philosophy, since I'm somewhat familiar with the mainstream literature (although admittedly it's not the area of my expertise). I'll split up my individual points into separate comments.

Alfred Tarski is a famous mathematician whose theory of truth is widely known.

Summary of my point: Tarski's biconditionals are not supposed to be a definition of truth. They are supposed to be a test of the adequacy of a proposed definition of truth. Proponents of many different theories claim that their theory passes this test of adequacy, so to identify Tarski's criterion with the correspondence theory is incorrect, or at the very least, a highly controversial claim that requires defense. What follows is a detailed account of why the biconditionals can't be an adequate definition of truth, and of what Tarski's actual theory of truth is.

Describing Tarski's biconditionals as a definition of truth or a theory of truth is misleading. The relevant paper is The Semantic Conception of Truth. Let's call sentences of the form 'p' is true iff p T-sentences. Tarski's claim in the paper is that the T-sentences constitute a criterion of adequacy for any proposed theory of truth. Specifically, a theory of truth is only adequate if all the T-sentences follow from it. This basically amounts to the claim that any adequate theory of truth must get the extension of the truth-predicate right -- it must assign the truth-predicate to all and only those sentences that are in fact true.

I admit that the conjunction of all the T-sentences does in fact satisfy this criterion of adequacy. All the individual T-sentences do follow from this conjunction (assuming we've solved the subtle problem of dealing with infinitely long sentences). So if we are measuring by this criterion alone, I guess this conjunction would qualify as an adequate theory of truth. But there are other plausible criteria according to which it is inadequate. First, it's a frickin' infinite conjunction. We usually prefer our definitions to be shorter. More significantly, we usually demand more than mere extensional adequacy from our definitions. We also demand intensional adequacy.

If you ask someone for a definition of "Emperor of Rome" and she responds "X is an Emperor of Rome iff X is one of these..." and then proceeds to list every actual Emperor of Rome, I suspect you would find this definition inadequate. There are possible worlds in which Julius Caesar was an Emperor of Rome, even though he wasn't in the actual world. If your friend is right, then those worlds are ruled out by definition. Surely that's not satisfactory. The definition is extensionally adequate but not intensionally adequate. The T-sentence criterion only tests for extensional adequacy of a definition. It is satisfied by any theory that assigns the correct truth predicates in our world, whether or not that theory limns the account of truth in a way that is adequate for other possible worlds. Remember, the biconditionals here are material, not subjunctive. The T-sentences don't tell us that an adequate theory would assign "Snow is green" as true if snow were green. But surely we want an adequate theory to do just that. If you regard the T-sentences themselves as the definition of truth, all that the definition gives us is a scheme for determining which truth ascriptions are true and false in our world. It tells us nothing about how to make these determinations in other possible worlds.

To make the problem more explicit, suppose I speak a language in which the sentence "Snow is white" means that grass is green. It will still be true that, for my language, "Snow is white" is true iff snow is white. Yet we don't want to say this biconditional captures what it means for "Snow is white" to be true in my language. After all, in a possible world where snow remained white but grass was red, the sentence would be false.

Tarski was a smart guy, and I'm pretty sure he realized all this (or at least some of it). He constantly refers to the T-sentences as material criteria of adequacy for a definition of truth. He says (speaking about the T-sentences), "... we shall call a definition of truth 'adequate' if all these equivalences follow from it." (although this seems to ignore the fact that there are other important criteria of adequacy) When discussing a particular objection to his view late in the paper, he says, "The author of this objection mistakenly regards scheme (T)... as a definition of truth." Unfortunately, he also says stuff that might lead one to think he does think of the conjunction of all T-sentences as a definition: "We can only say that every equivalence of the form (T)... may be considered a partial definition of truth, which explains wherein the truth of this one individual sentence consists. The general definition has to be, in a certain sense, a logical conjunction of all these partial definitions."

I read the "in a certain sense" there as a subtle concession that we will need more than just a conjunction of the T-sentences for an adequate definition of truth. As support for my reading, I appeal to the fact that Tarski explicitly offers a definition of truth in his paper (in section 11), one that is more than just a conjunction of T-sentences. He defines truth in terms of satisfaction, which in turn is defined recursively using rules like: The objects a and b satisfy the sentential function "P(x, y) or Q(x, y)" iff they satisfy at least one of the functions "P(x, y)" or "Q(x, y)". His definition of truth is basically that a sentence is true iff it is satisfied by all objects and false otherwise. This works because a sentence, unlike a general sentential function, has no free variables to which objects can be bound.
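The recursive structure just described can be sketched in a few lines of code. This is only an illustrative toy model: the formula encoding, the predicate names, and the three-element domain below are all invented for the sketch, and the extensions of the atomic predicates are simply stipulated, just as satisfaction of simple sentential functions is left primitive in Tarski's account.

```python
# Toy model of Tarski-style recursive satisfaction (illustrative only).

DOMAIN = [1, 2, 3]

# Atomic predicates are left "primitive": we stipulate their extensions
# rather than explain them.
PREDICATES = {
    "Even": lambda x: x % 2 == 0,
}

def satisfies(assignment, formula):
    """Does this variable assignment satisfy the sentential function?"""
    op = formula[0]
    if op == "atom":                  # ("atom", "Even", "x")
        _, name, *vars_ = formula
        return PREDICATES[name](*(assignment[v] for v in vars_))
    if op == "not":                   # ("not", f)
        return not satisfies(assignment, formula[1])
    if op == "or":                    # ("or", f, g)
        return (satisfies(assignment, formula[1])
                or satisfies(assignment, formula[2]))
    if op == "forall":                # ("forall", "x", f)
        _, var, body = formula
        return all(satisfies({**assignment, var: d}, body) for d in DOMAIN)
    raise ValueError(op)

def true_sentence(sentence):
    """A sentence (no free variables) is true iff all assignments satisfy it."""
    return all(satisfies({"x": d}, sentence) for d in DOMAIN)

# "For all x, x is even or x is not even" -- true in the model.
tautology = ("forall", "x", ("or", ("atom", "Even", "x"),
                             ("not", ("atom", "Even", "x"))))
print(true_sentence(tautology))  # True
```

Because a sentence has no free variables, the assignment makes no difference, which is why "satisfied by all objects" collapses into a sensible truth condition.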

This definition is clearly distinct from the logical conjunction of all T-sentences. Tarski claims it entails all the T-sentences, and therefore satisfies his criterion of adequacy. Now, I think Tarski's actual definition of truth isn't all that helpful. He defines truth in terms of satisfaction, but satisfaction is hardly a more perspicuous concept. True, he provides a recursive procedure for determining satisfaction, but this only tells us when compound sentential functions are satisfied once we know when simple ones are satisfied. His account doesn't explain what it means for a simple sentential function to be satisfied by an object. This is just left as a primitive in the theory. So, yeah, Tarski's actual theory of truth kind of sucks.

His criterion of adequacy, though, has been very influential. But it is not a theory of truth, and that is not the way it is treated by philosophers. It is used as a test of adequacy, and proponents of most theories of truth (not just the correspondence theory) claim that their theory satisfies this test. So to identify Tarski's definition/criterion/whatever with the correspondence theory misrepresents the state of play. There are, incidentally, a group of philosophers who do take the T-sentences to be a full definition of truth, or at least to be all that we can say about truth. But these are not correspondence theorists. They are deflationists.

Comment author: Eliezer_Yudkowsky 03 October 2012 09:38:04PM 7 points [-]

I've slightly edited the OP to say that Tarski "described" rather than "defined" truth - I wish I could include more to reflect this valid point (indeed Tarski's theorems on truth are a lot more complicated and so are surrounding issues, no language can contain its own truth-predicate, etc.), but I think it might be a distraction from the main text. Thank you for this comment though!

Comment author: PaulWright 08 October 2012 01:37:55PM 1 point [-]

The latest Rationally Speaking post looks relevant: Ian Pollock describes aspects of Eliezer's view as "minimalism" with a link to that same SEP article. He also mentions Simon Blackburn's book, where Blackburn describes minimalists or quietists as making the same point Eliezer makes about collapsing "X is true" to "X", and a similar point about the usefulness of the term "truth" as a generalisation (though it seems that minimalists would say that this is only a linguistic convenience, whereas Eliezer seems to have a slightly different conception of it, in that he wants to talk in general about how we get accurate beliefs).

Comment author: pragmatist 02 October 2012 10:53:43PM *  12 points [-]

I haven't particularly run across any philosophy explicitly making the connection from the correspondence theory of truth to "There are causal processes producing map-territory correspondences" to "You have to look at things in order to draw accurate maps of them, and this is a general rule with no exception for special interest groups who want more forgiving treatment for their assertions". I would not be surprised to find out it existed, especially on the second clause.

Depends on what you mean by "explicitly". Many correspondence theorists believe that an adequate understanding of "correspondence" requires an understanding of reference -- how parts of our language are associated with parts of the world. I think this sort of idea stems from trying to fill out Tarski's (actual) definition of truth, which I discussed in another comment. The hope is that a good theory of reference will fill out Tarski's obscure notion of satisfaction, and thereby give some substance to his definition of truth in terms of satisfaction.

Anyway, there was a period when a lot of philosophers believed, following Saul Kripke and Hilary Putnam, that we can understand reference in terms of causal relations between objects in the world and our brains (it appears to me that this view is falling out of vogue now, though). What makes it the case that our use of the term "electron" refers to electrons? That there are the appropriate sorts of causal relations, both social -- the causal chain from physicists who originated the use of the word to contemporary uses of it -- and evidential -- the causal connections with the world that govern the ways in which contemporary physicists come to assert new claims involving the word "electron". The causal theory of reference is used as the basis for a (purportedly) non-mysterious account of satisfaction, which in turn is used as the basis for a theory of truth.

So the idea is that the meanings of the elements in our map are determined by causal processes, and these meanings link the satisfaction conditions of sentential functions to states of affairs in the world. I'm not sure this is exactly the sort of thing you're saying, but it seems close. For an explicit statement of this kind of view, see Hartry Field's Tarski's Theory of Truth. Most of the paper is a (fairly devastating, in my opinion) critique of Tarski's account of truth, but towards the end of section IV he brings up the causal theory.

ETA: More broadly, reliabilism in epistemology has a lot in common with your view. Reliabilism is a refinement of early causal theories of knowledge. The idea is that our beliefs are warranted in so far as they are produced by reliable mechanisms. Most reliabilists I'm aware of are naturalists, and read "reliable mechanism" as "mechanism which establishes appropriate causal connections between belief states and world states". Our senses are presumed to be reliable (and therefore sources of warrant) just because the sorts of causal chains you describe in your post are regularly instantiated. Reliabilism is, however, compatible with anti-naturalism. Alvin Plantinga, for instance, believes that the sensus divinitatis should be regarded as a reliable cognitive faculty, one that atheists lack (or ignore).

One example of a naturalist reliabilism (paired with a naturalist theory of mental representation) is Fred Dretske's Knowledge and the Flow of Information. A summary of the book's arguments is available here (DOC file). Dretske tries to understand perception, knowledge, the truth and falsity of belief, mental content, etc. using the framework of Shannon's communication theory. The basis of his analysis is that information transfer from a sender system to a receiver system must be understood in terms of relations of law-like dependence of the receiver system's state on the sender system's state. He then analyzes various epistemological problems in terms of information transfer from systems in the external world to our perceptual faculties, and information transfer from our perceptual faculties to our cognitive centers. He's written a whole book about this, so there's a lot of detail, and some of the specific details are suspect. In broad strokes, though, Dretske's book expresses pretty much the same point of view you describe in this post.

Comment author: pragmatist 03 October 2012 12:54:19PM 7 points [-]

You might also be interested in checking out what Mohandas Gandhi had to say about "the meaning of truth", just in case you were wondering what things are like in the rest of the world outside the halls of philosophy departments.

Here's a quote from Perry Anderson's recent (highly critical) essay on Gandhi:

There can be no doubt that he was, so far as he himself went, sincere enough in his commitment to non-violence. But as a political leader, his conception of himself as a vessel of divine intention allowed him to escape the trammels of human logic or coherence. Truth was not an objective value – correspondence to reality, or even (in a weaker version) common agreement – but simply what he subjectively felt at any given time. ‘It has been my experience,’ he wrote, ‘that I am always true from my point of view.’ His autobiography was subtitled The Story of My Experiments with Truth, as if truth were material for alteration in a laboratory, or the plaything of a séance. In his ‘readiness to obey the call of Truth, my God, from moment to moment’, he was freed from any requirement of consistency. ‘My aim is not to be consistent with my previous statements,’ he declared, but ‘with truth as it may present itself to me at a given moment’: ‘since I am called “Great Soul” I might as well endorse Emerson’s saying that “foolish consistency is the hobgoblin of little minds.”’ The result was a licence to say whatever he wanted, regardless of what he had said before, whenever he saw fit.

Comment author: lukeprog 02 October 2012 06:24:00AM *  11 points [-]

Speaking as the author of Eliezer's Sequences and Mainstream Academia...

Off the top of my head, I also can't think of a philosopher who has made an explicit connection from the correspondence theory of truth to "there are causal processes producing map-territory correspondences" to "you have to look at things to draw accurate maps of them..."

But if this connection has been made explicitly, I would expect it to be made by someone who accepts both the correspondence theory and "naturalized epistemology", often summed up in a quote from Quine:

The stimulation of his sensory receptors is all the evidence anybody has had to go on, ultimately, in arriving at his picture of the world. Why not just see how this construction really proceeds? Why not settle for psychology? ...Epistemology, or something like it, simply falls into place as a chapter of psychology and hence of natural science.

(Originally, Quine's naturalized epistemology accounted only for this descriptive part of epistemology, and neglected the normative part, e.g. truth conditions. In the 80s Quine started saying that the normative part entered into naturalized epistemology through "the technology of truth-seeking," but he was pretty vague about this.)

Edit: Another relevant discussion of embodiment and theories of truth can be found in chapter 7 of Philosophy in the Flesh.

Comment author: ciphergoth 02 October 2012 07:50:54AM 11 points [-]

Off the top of my head, I also can't think of a philosopher who has made an explicit connection from the correspondence theory of truth to "there are causal processes producing map-territory correspondences" to "you have to look at things to draw accurate maps of them..."

OK, I defended the tweet that got this response from Eliezer as the sort of rhetorical flourish that gets people to actually click on the link. However, it looks like I also underestimated how original the sequences are - I had really expected this sort of thing to mirror work in mainstream philosophy.

Comment author: DuncanS 04 October 2012 11:02:46PM *  2 points [-]

Although I wouldn't think of this particular thing as being an invention on his part - I'm not sure I've read that particular chain of thought before, but all the elements of the chain are things I've known for years.

However I think it illustrates the strength of Eliezer's writing well. It's a perfectly legitimate sequence of thought steps that leads in a clear and obvious way to the right answer. It's not new thought, but a particularly clear way of expressing something that many people have thought and written about in a less organised way.

To clarify - there are times when Eliezer is inventive - for example his work on CEV - but this isn't one of those places. I know I'm partly arguing about the meaning of "inventive", but I don't think we're doing him a favor here by claiming this is an example of his inventiveness when there are much better candidates.

Comment author: MichaelVassar 02 October 2012 07:16:36PM 13 points [-]

It's not that clear to me in what sense mainstream academia is a unified thing which holds positions, even regarding questions such as "what fields are legitimate". Saying that something is known in mainstream academia seems suspiciously like saying that "something is encoded in the matter in my shoelace, given the right decryption schema". OTOH, it's highly meaningful to say that something is discoverable by someone with competent "google-fu".

Comment author: Eliezer_Yudkowsky 05 October 2012 02:33:57AM 1 point [-]

I haven't particularly run across any philosophy explicitly making the connection from the correspondence theory of truth to "There are causal processes producing map-territory correspondences" to "You have to look at things in order to draw accurate maps of them, and this is a general rule with no exception for special interest groups who want more forgiving treatment for their assertions". I would not be surprised to find out it existed, especially on the second clause.

DevilWorm and pragmatist point to the "reliabilism" school of philosophy (http://en.wikipedia.org/wiki/Reliabilism & http://plato.stanford.edu/entries/reliabilism). Clicking on either link reveals arguments concerned mainly with that old dispute over whether the word "knowledge" should be used to refer to "justified true belief". Going on the wording I'm not even sure whether they're considering how photons from the Sun are involved in correlating your visual cortex to your shoelaces. But it does increase the probability of a precedent - does anyone have something more specific? (A lot of the terminology I've seen so far is tremendously vague, and open to many interpretations...)

Incidentally, there might be an even higher probability of finding some explicit precedent in a good modern AI book somewhere?

Comment author: RichardKennaway 05 October 2012 10:53:03AM 3 points [-]

Incidentally, there might be an even higher probability of finding some explicit precedent in a good modern AI book somewhere?

It might be too obvious to be worth mentioning. If you're actually building (narrow) AI devices like self-driving cars, then of course your car has to have a way of sensing things round about it if it's going to build a map of its surroundings.

This fact should be turned into an SMBC cartoon.

Comment author: TraderJoe 03 October 2012 07:09:49AM 4 points [-]

"Reality is that which, when you stop believing in it, doesn't go away." -- Philip K. Dick

Comment author: learnmethis 12 October 2012 09:28:35PM 1 point [-]

Good quote, but what about the reality that I believe something? ;) The fact that beliefs themselves are real things complicates this slightly.

Comment author: common_law 04 October 2012 12:19:21AM *  10 points [-]

Two quibbles that could turn out to be more than quibbles.

  1. The concept of truth you intend to defend isn't a correspondence theory--rather it's a deflationary theory, one in which truth has a purely metalinguistic role. It doesn't provide any account of the nature of any correspondence relationship that might exist between beliefs and reality. A correspondence theory, properly termed, uses a strong notion of reference to provide a philosophical account of how language ties to reality.

  2. You write:

Some pundits have panicked over the point that any judgment of truth - any comparison of belief to reality - takes place inside some particular person's mind; and indeed seems to just compare someone else's belief to your belief.

I'm inclined to think this is a straw man. (And if they're mere "pundits" and not philosophers why the concern with their silly opinion?) I think you should cite to the most respectable of these pundits or reconsider whether any pundits worth speaking of said this. The notion that reality--not just belief--determines experiments, might be useful to mention, but it doesn't answer any known argument, whether by philosopher or pundit.

Comment author: fubarobfusco 02 October 2012 08:38:44AM 14 points [-]

If the above is true, aren't the postmodernists right?

I do wish that you would say "relativists" or the like here. Many of your readers will know the word "postmodernist" solely as a slur against a rival tribe.

Comment author: jbash 02 October 2012 08:42:35PM 6 points [-]

Actually, "relativist" isn't a lot better, because it's still pretty clear who's meant, and it's a very charged term in some political discussions.

I think it's a bad rhetorical strategy to mock the cognitive style of a particular academic discipline, or of a particular school within a discipline, even if you know all about that discipline. That's not because you'll convert people who are steeped in the way of thinking you're trying to counter, but because you can end up pushing the "undecided" to their side.

Let's say we have a bright young student who is, to oversimplify, on the cusp of going down either the path of Good ("parsimony counts", "there's an objective way to determine what hypothesis is simpler", "it looks like there's an exterior, shared reality", "we can improve our maps"...) or the path of Evil ("all concepts start out equal", "we can make arbitrary maps", "truth is determined by politics" ...). Well, that bright young student isn't a perfectly rational being. If the advocates for Good look like they're being jerks and mocking the advocates for Evil, that may be enough to push that person down the path of Evil.

Wulky Wilkinson is the mind killer. Or so it seems to me.

Comment author: TimS 03 October 2012 02:36:24AM *  3 points [-]

I agree with your point about rhetoric, but I think you give post-modern thought too little credit. First of all, Sturgeon's law says 90% of everything is crap.

all concepts start out equal

I can't understand why you think this statement is post-modern - or why you think it is wrong. Luminiferous Aether was possibly correct - until we tested the proposition, what basis did we have to say that ~P was better than P?

we can make arbitrary maps

This has clear flavors of post-modernism - and is false as stated. But I think someone like Foucault would want the adjective social thrown in there a bit. Given that, the diversity of cultures throughout history is some evidence that the proposition could be true - depending on what caveats we place on / how we define "arbitrary."

Kuhn and Feyerabend have not always been clear on how anti-scientific realist they intended to be, but I think a proposition like "Scientific models are socially mediated" is plausible - unless Kuhn and Feyerabend totally screwed up their history.

truth is determined by politics

Again, post-modern flavored. And again, if we add the word "social" to the front, the statement is likely true. For example, people once thought social class (nobility, peasant, merchant) was very morally relevant. Now, not so much.

Comment author: TimS 02 October 2012 01:12:38PM 3 points [-]

Particularly since many LWers believe things like:

The progress of science is measured as much by deaths among the Old Guard as by discoveries from the Young Idealists.

or

Psychological diagnoses (like those listed in the DSM) function to separate the socially acceptable from the unacceptable and do not even try to cut the world at its joints.

Comment author: Eugine_Nier 02 October 2012 05:31:04PM *  0 points [-]

Psychological diagnoses (like those listed in the DSM) function to separate the socially acceptable from the unacceptable and do not even try to cut the world at its joints.

The difference is that post-modernists believe that something like this is true for all science and use this to justify this state of affairs in psychology, whereas LWers believe that this is not an acceptable state of affairs and should be fixed.

Edit: Also, as MixedNuts pointed out, the diagnoses do try to cut reality at the joints; they just frequently fail due to social signaling interfering with seeking truth.

Comment author: TimS 02 October 2012 05:44:49PM 3 points [-]

First, if physical anti-realism is true to some extent, then it is true to that extent. By contrast, if Kuhn and Feyerabend messed up the history, then physical anti-realists have no leg to stand on. People can stand what is true, for they are already enduring it.

Second, folks like Foucault were at the forefront of the argument that unstated social norm enforcement via psychological diagnosis was far worse than explicit social norm enforcement. They certainly don't argue that the current state of affairs in psychology was (or is) justifiable.

Comment author: paper-machine 02 October 2012 05:35:59PM 3 points [-]

Citation appreciated. Foucault was specifically trying to improve the standards of psychiatric care.

Comment author: CarlShulman 03 October 2012 12:08:17AM *  8 points [-]

And yet it seems to me - and I hope to you as well - that the statement "The photon suddenly blinks out of existence as soon as we can't see it, violating Conservation of Energy and behaving unlike all photons we can actually see" is false, while the statement "The photon continues to exist, heading off to nowhere" is true. And this sort of belief can have behavioral consequences!

The belief that someone is epiphenomenally a p-zombie, or belief in consubstantiality can also have behavioral consequences. Classifying some author as an "X" can, too.

Comment author: Eliezer_Yudkowsky 03 October 2012 07:41:24PM 5 points [-]

If an author actually being X has no consequences apart from the professor believing that the author is "X", all consequences accrue to quoted beliefs and we have no reason to believe the unquoted form is meaningful or important. As for p-zombieness, it's not clear at this point in the sequence that this belief is meaningless rather than being false; and the negation of the statement, "people are not p-zombies", has phrasings that make no mention of zombiehood (i.e., "there is a physical explanation of consciousness") and can hence have behavioral consequences by virtue of being meaningful even if its intuitive "counterargument" has a meaningless term in it.

Comment author: wedrifid 03 October 2012 08:36:40PM *  1 point [-]

Can someone please explain to me what is bad or undesirable about the parent? I thought it made sense, even if on a topic I don't much care about. Others evidently didn't. While we are at it, what is so insightful about the grandparent? I just thought it kind of missed the point of the quoted paragraph.

Comment author: TimS 03 October 2012 08:50:28PM *  1 point [-]

My guess? "Behavorial consequences" is not really the touchstone of truth under the Correspondence Theory, so EY's use of the phrase when trying to persuade us of the Correspondence Theory of Truth leaves him open to criticism. EY's response is to deny any mistake.

Comment author: Eliezer_Yudkowsky 03 October 2012 09:05:34PM 0 points [-]

My guess? People are more or less randomly downvoting me these days, for standard fear and hatred of the admin. I suppose somebody's going to say that this is an excuse not to update, but it could also be, y'know, true. It takes a pretty baroque viewpoint to think that I was talking deliberate nonsense in that paragraph, and if anyone hadn't understood what I meant, they could've just asked.

To clarify in response to your particular reply:

Generally speaking but not always, for our belief about something to have behavioral consequences, we have to believe it has consequences which our utility function can run over, meaning it's probably linked into our beliefs about the rest of the universe, which is a good sign. There's all kinds of exceptions to this for meaningless beliefs that have behavioral consequences anyway, and a very large class of exceptions is the class where somebody else is judging what you believe, like the example someone not-Carl-who-Carl-probably-talked-to recently gave me for "Consubstantiality has the consequence that if it's true and you don't believe in it, God will send you to hell", which involves just "consubstantiality" and not consubstantiality, similarly with the tests being graded (my attempt to find a non-religious conjugate of something for which the religious examples are much more obvious).

Comment author: wedrifid 03 October 2012 09:27:24PM 7 points [-]

My guess? People are more or less randomly downvoting me these days, for standard fear and hatred of the admin. I suppose somebody's going to say that this is an excuse not to update, but it could also be, y'know, true.

A review of your recent comments page puts most of the comments upvoted, and some of them to stellar levels---not least of which this post. This would suggest that aversion to your admin-related commenting hasn't generalized to your on-topic commenting just yet. Either that or all your upvoted comments are so amazingly badass that they overcome the hatred, while the few that get net downvotes were merely outstanding and couldn't compensate.

Comment author: Eliezer_Yudkowsky 03 October 2012 09:31:17PM 0 points [-]

Or the downvoters are fast and early, the upvoters arrive later, which is what I've observed. I'm actually a bit worried about random downvoting of other users as well.

Comment author: Eugine_Nier 04 October 2012 02:37:01AM 9 points [-]

Or the downvoters are fast and early, the upvoters arrive later, which is what I've observed.

Or it's just more memorable when this happens.

Comment author: wedrifid 03 October 2012 09:44:19PM 2 points [-]

Or the downvoters are fast and early, the upvoters arrive later, which is what I've observed. I'm actually a bit worried about random downvoting of other users as well.

Ahh, those kind of downvotes. I get those patterns from time to time---not as many or fast as you are able to I'm sure since I'm a mere commenter. I remind myself to review my comments a day or two later so that some of the contempt for voter judgement can bleed away after I see the correction.

Comment author: Normal_Anomaly 11 December 2012 08:15:01PM 1 point [-]

I've noticed the same thing once or twice--less often than you, and far less often than EY, but my (human, therefore lousy) memory says it's more likely for a comment of mine to go to -1 and then +1 than the reverse.

Comment author: beoShaffer 03 October 2012 03:34:54PM 3 points [-]

For some reason the first picture won't load, even though the rest are fine. I'm using safari.

Comment author: EricHerboso 02 October 2012 07:33:36PM 3 points [-]

Two minor grammatical corrections:

A space is missing between "itself" and "is " in "The marble itselfis a small simple", and between "experimental" and "results" in "only reality gets to determine my experimentalresults".

Comment author: kilobug 02 October 2012 10:44:13AM 7 points [-]

There's no reason for your brain not to update if politics aren't involved.

I neither agree with nor like this singling-out of politics as the only thing on which people fail to update. People fail to update in many fields: in love, in religion, in drug risks... there is almost no domain of life in which people don't fail to update at times, rationalizing instead of updating.

Comment author: TimS 02 October 2012 01:18:24PM 16 points [-]

In addition to what pleeppleep said, I think there is a bit of illusion of transparency here.

As I've said elsewhere, what Eliezer clearly intends with the label "political" is not partisan electioneering to decide whether the community organizer or the business executive is the next President of the United States. Instead, he means something closer to what Paul Graham means when he talks about keeping one's identity small.

Among humans at least, "Personal identity is the mindkiller."

Comment author: fubarobfusco 02 October 2012 11:13:28PM 2 points [-]

what Eliezer clearly intends with the label "political" is [...] something closer to what Paul Graham means when he talks about keeping one's identity small.

This is evidently confusing readers, since over here someone thought it was about "social manipulation, status, and signaling".

Comment author: learnmethis 12 October 2012 09:50:57PM 2 points [-]

Great post! If this is the beginning of trend to make Less Wrong posts more accessible to a general audience, then I'm definitely a fan. There's a lot of people I'd love to share posts with who give up when they see a wall of text.

There are two key things here I think can be improved. I think they were probably skipped over for mostly narrative purposes and can be fixed with brief mentions or slight rephrasings:

You won't get a direct collision between belief and reality - or between someone else's beliefs and reality - by sitting in your living-room with your eyes closed.

In addition to comparison to external data such as experimental results, there are also critical insights on reality to be gained by armchair examination. For example, armchair examination of our own or others’ beliefs may lead us to realise that they are self-contradictory, and therefore that it is impossible for them to be true. No experimental results needed! This is extraordinarily common in mathematics, and also of great personal value in everyday thinking, since many cognitive mistakes lead directly to some form of internal contradiction.

And yet it seems to me - and I hope to you as well - that the statement "The photon suddenly blinks out of existence as soon as we can't see it, violating Conservation of Energy and behaving unlike all photons we can actually see" is false, while the statement "The photon continues to exist, heading off to nowhere" is true.

It's better to say that the first statement is unsupported by the evidence and purely speculative. Here's one way that it could in fact be true: if our world is a simulation which destroys data points that won't in any way impact the future observations of intelligent beings/systems. In fact, that's an excellent optimisation over an entire class of possible simulations of universes. There would be no way for us to know this, of course (the question is inherently undecidable), but it could still happen to be true. In fact, we can construct extremely simple toy universes for which this is true. Undecidability in general is a key consideration that seems missing from many Less Wrong articles, especially considering how frequently it pops up within any complex system.

Comment author: Jonathan_Graehl 06 October 2012 09:58:43PM *  2 points [-]

Similarly, to say of your own beliefs, that the belief is 'true', just means you're comparing your map of your map, to your map of the territory

I assume this is meant in the spirit of "it's as if you are", not "your brain is computing in these terms". When I anticipate being surprised, I'm not consciously constructing any "my map of my map of ..." concepts. Whether my brain is constructing them under the covers remains to be demonstrated.

Comment author: Sewing-Machine 03 October 2012 04:12:48AM *  2 points [-]

Since my expectations sometimes conflict with my subsequent experiences, I need different names for the thingies that determine my experimental predictions and the thingy that determines my experimental results. I call the former thingies 'beliefs', and the latter thingy 'reality'.

I think it's apt but ironic that you find a definition of "truth" by comparing beliefs and reality. Beliefs are something that human beings, and maybe some animals have. Reality is vast in comparison, and generally not very animal-centric. Yet every one of these diagrams has a human being or brain in it.

With one interesting exception, the space of all possible worlds. Is truth more animal-centric than reality? Wouldn't "snow is white" be a true statement if people weren't around? Maybe not--who would be around to state it? But I find it easy to imagine a possible world with white snow but no people.

Edit: What would a hypothetical post titled "The Useful Idea of Reality" contain? Would it logically come before or after this post?

Comment author: DuncanS 02 October 2012 09:28:32PM *  2 points [-]

People usually are not mistaken about what they themselves believe - though there are certain exceptions to this rule - yet nonetheless, the map of the map is usually accurate, i.e., people are usually right about the question of what they believe:

I'm not at all sure about this part - although I don't think it matters much to your overall case. I think one of our senses is a very much simplified representation of our own internal thought state. It's only just about good enough for us to make a chain of thought - taking the substance of a finished thought and using it as input to the next thought. In animals, I suspect this sense isn't good enough to allow thought chains to be made - and so they can't make arguments. In humans it is good enough, but probably not by very much - it seems rather likely that the ability to make thought chains evolved quite recently.

I think we probably make mistakes about what we think we think all the time - but there is usually nobody who can correct us.

Comment author: [deleted] 02 October 2012 04:19:31PM 2 points [-]

Suppose I have two different non-meaningful statements, A and B. Is it possible to tell them apart? On what basis? On what basis could we recognize non-meaningful statements as tokens of language at all?

Comment author: MixedNuts 02 October 2012 05:39:32PM 6 points [-]

Connotation. The statement has no well-defined denotation, but people say it to imply other, meaningful things. Islam is a religion of peace!

Comment author: shminux 02 October 2012 05:15:01PM 1 point [-]

Is it possible to tell them apart?

Why would you want to?

Comment author: Eugine_Nier 02 October 2012 05:33:49PM 1 point [-]

See this.

Comment author: Larks 02 October 2012 02:39:26PM *  2 points [-]

Reply: The abstract concept of 'truth' - the general idea of a map-territory correspondence - is required to express ideas such as: ...

Is this true? Maybe there's a formal reason why, but it seems we can informally represent such ideas without the abstract idea of truth. For example, if we grant quantification over propositions,

Generalized across possible maps and possible cities, if your map of a city is accurate, navigating according to that map is more likely to get you to the airport on time.

becomes

  • Generalized across possible maps and possible cities, if your map of a city says "p" if and only if p, navigating according to that map is more likely to get you to the airport on time.

To draw a true map of a city, someone has to go out and look at the buildings; there's no way you'd end up with an accurate map by sitting in your living-room with your eyes closed trying to imagine what you wish the city would look like.

becomes

  • To draw a map of a city such that the map says "p" if and only if p, someone has to go out and look at the buildings; there's no way you'd end up with a map that says "p" if and only if p by sitting in your living-room with your eyes closed trying to imagine what you wish the city would look like.

True beliefs are more likely than false beliefs to make correct experimental predictions, so if we increase our credence in hypotheses that make correct experimental predictions, our model of reality should become incrementally more true over time.

becomes

  • Beliefs of the form "p", where p, are more likely than beliefs of the form "p", where it is not the case that p, to make correct experimental predictions, so if we increase our credence in hypotheses that make correct experimental predictions, our model of reality should incrementally contain more assertions "p" where p, and fewer assertions "p" where not p, over time.
Comment author: Eliezer_Yudkowsky 02 October 2012 06:31:37PM 4 points [-]

Generalized across possible maps and possible cities, if your map of a city says "p" if and only if p

If you can generalize over the correspondence between p and the quoted version of p, you have generalized over a correspondence schema between territory and map, ergo, invoked the idea of truth, that is, something mathematically isomorphic to in-general Tarskian truth, whether or not you named it.
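The schema being invoked here is, in essence, Tarski's; as a rough sketch (my notation, not from the comment itself), it can be written as:

```latex
% Tarski's truth schema: for every sentence p of the object language,
% the quoted (map-side) sentence is true iff the unquoted (territory-side)
% condition holds, e.g. True("snow is white") iff snow is white.
\forall p.\;\; \mathrm{True}(\ulcorner p \urcorner) \leftrightarrow p
```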

Comment author: endoself 02 October 2012 05:17:53PM *  3 points [-]

Well, yeah, we can taboo 'truth'. You are still using the titular "useful idea" though by quantifying over propositions and making this correspondence. The idea that there are these things that are propositions and that they can appear both in quotation marks and also appear unquoted, directly in our map, is a useful piece of understanding to have.

Comment author: ArisKatsaris 05 October 2012 01:57:49PM 5 points [-]

Oh come on, yeah the gender-imbalance of the original images was bad, but ugliness is also bad and the new stick figures are ugly...

Comment author: thomblake 05 October 2012 02:07:39PM 3 points [-]

Agreed. The previous illustrations were pretty awesome, and this post has lost a lot for it.

Comment author: Maelin 06 October 2012 04:18:20PM 2 points [-]

Agreed. The stick figures do not mesh well with the colourful cartoony backgrounds that make the images visually appealing. They feel out of place, and I found it harder to tell when I was supposed to consider one stick figure distinct from another one without actively looking for it (I also have this problem with xkcd).

Strong vote for return to the original style diagrams, with the gender imbalance fixed.

Comment author: [deleted] 05 October 2012 07:40:00PM 1 point [-]

[looks back at the top-level post] Yes, they are. Especially the professor in the last picture -- it reminds me of Jack Skellington from A Nightmare Before Christmas. Using thinner lines à la xkcd would be better, IMO.

Comment author: Eliezer_Yudkowsky 02 October 2012 05:26:28AM 5 points [-]

Koan answers here for:

What rule could restrict our beliefs to just propositions that can be meaningful, without excluding a priori anything that could in principle be true?

Comment author: [deleted] 02 October 2012 09:36:27AM 41 points [-]

I dislike the "post utopian" example, and here's why:

Language is pretty much a set of labels. When we call something "white", we are saying it has some property of "whiteness." NOW we can discuss wavelengths and how light works, or whatnot, but 200 years ago, they had no clue. They could still know that snow is white, though. At the same time, even with our knowledge of how colors work, we can still have difficulties knowing exactly where the label "white" ends, and grey or yellow begins.

Say I'm carving up music-space. I can pretty easily classify the differences between Classical and Rap, in ways that are easy to follow. I could say that classical features a lot of instrumentation, and rap features rhythmic language, or something. But if I had lots of people spending all their lives studying music, they're going to end up breaking music space into much smaller pieces. For example, dub step and house.

Now, I can RECOGNIZE dubstep when I hear it, but if you asked me to teach you what it was, I would have difficulties. I couldn't just say "It's the one that goes, like, WOPWOPWOPWOP iiinnnnnggg" if I'm a learned professor, so I'll use jargon like "synthetic rhythm," or something.

But not having a complete explainable System 2 algorithm for "How to Tell if it's Dubstep" doesn't mean that my System 1 can't readily identify it. In fact, it's probably easier to just listen to a bunch of music until your System 1 can identify the various genres, even if your System 2 can't codify it. The example is treating the fact that your professor can't really codify "post utopianism" to mean that it's not "true". (this example has been used in other sequence posts, and I disagreed with it then too)

Have someone write a bunch of short stories. Give them to English Literature professors. If they tend to agree which ones are post utopian, and which ones aren't, then they ARE in fact carving up literature-space in a meaningful way. The fact that they can't quite articulate the distinction doesn't make it any less true than knowing that snow was white before you knew about wavelengths. They're both labels, we just understand one better.

Anyways, I know it's just an example, but without a better example, I can't really understand the question well enough to think of a relevant answer.

Comment author: RichardKennaway 02 October 2012 11:19:51AM 13 points [-]

I think Eliezer is taking it as a given that English college professors who talk like that are indeed talking without connection to anticipated experience. This may not play effectively to those he is trying to teach, and as you say, may not even be true.

Comment author: Eliezer_Yudkowsky 02 October 2012 06:21:18PM 1 point [-]

In particular, "post-utopian" is not a real term so far as I know, and I'm using it as a stand-in for literary terms that do in fact have no meaning. If you think there are none of those, Alan Sokal would like to have a word with you.

Comment author: Yvain 03 October 2012 08:36:06PM 31 points [-]

There's a sense in which a lot of fuzzy claims are meaningless: for example, it would be hard for a computer to evaluate "Socrates is kind" even if the computer could easily evaluate more direct claims like "Socrates is taller than five feet". But "kind" isn't really meaningless; it would just be a lot of work to establish exactly what goes into saying "kind" and exactly where the cutoff point between "kind" and "not so kind" is.

I agree that literary critical terms are fuzzy in the same sense as "kind", but I don't think they're necessarily any more fuzzy. For example, replacing "post-utopian" with its likely inspiration "post-colonial", I don't know much about literature, but I feel pretty okay designating Salman Rushdie as "post-colonial" (since his books very often take place against the backdrop of the issues surrounding British decolonization of India) and J. K. Rowling as "not post-colonial" (since her books don't deal with issues surrounding decolonization at all.)

Likewise, even though "post-utopian" was chosen specifically to be meaningless, I can say with confidence that Sir Thomas More's Utopia was not post-utopian, and I bet most other people will agree with me.

The Sokal Hoax to me was less about totally disproving all literary critical terms, and more about showing that it's really easy to get a paper published that no one understands. People elsewhere in the thread have already given examples of Sokalesque papers in physics, computer science, etc that got published, even though those fields seem pretty meaningful.

Literary criticism does have a bad habit of making strange assertions, but I don't think they hinge on meaningless terms. A good example would be deconstruction of various works to point out the racist or sexist elements within. For example, "It sure is suspicious that Moby Dick is about a white whale, as if Melville believed that only white animals could possibly be individuals with stories of their own."

The claim that Melville was racist when writing Moby Dick seems potentially meaningful - for example, we could go back in time, put him under truth serum, and ask him whether that was intentional. Even if it was wholly unconscious, it still implies that (for example) if we simulate a society without racism, it will be less likely to produce books like Moby Dick, or that if we pick apart Melville's brain we can draw some causal connection between the racism to which he was exposed and the choice to have Moby Dick be white.

However, if I understand correctly literary critics believe these assertions do not hinge on authorial intent; that is, Melville might not have been trying to make Moby Dick a commentary on race relations, but that doesn't mean a paper claiming that Moby Dick is a commentary on race relations should be taken less seriously.

Even this might not be totally meaningless. If an infinite monkey at an infinite typewriter happened to produce Animal Farm, it would still be the case that, by coincidence, it was a great metaphor for Communism. A literary critic (or primatologist) who wrote a paper saying "Hey, Animal Farm can increase our understanding and appreciation of the perils of Communism" wouldn't really be talking nonsense. In fact, I'd go so far as to say that they're (kind of) objectively correct, whereas even someone making the relatively stupid claim about Moby Dick above might still be right that the book can help us think about our assumptions about white people.

If I had to criticize literary criticism, I would have a few vague objections. First, that they inflate terms - instead of saying "Moby Dick vaguely reminds me of racism", they say "Moby Dick is about racism." Second, that even if their terms are not meaningless, their disputes very often are: if one critic says "Moby Dick is about racism" and another critic says "No it isn't", then if what the first one means is "Moby Dick vaguely reminds me of racism", arguing this is a waste of time. My third and most obvious complaint is opportunity costs: to me at least the whole field of talking about how certain things vaguely remind you of other things seems like a waste of resources that could be turned into perfectly good paper clips.

But these seem like very different criticisms than arguing that their terms are literally meaningless. I agree that to students they may be meaningless and they might compensate by guessing the teacher's password, but this happens in every field.

Comment author: [deleted] 03 October 2012 11:17:49PM 21 points [-]

I liked your comment and have a half-formed metaphor for you to either pick apart or develop:

LW/ rationalist types tend towards hard sciences. This requires more System 2 reasoning. Their fields are like computer programs. Every step makes sense, and is understood.

Humanities tends toward more System 1 pattern recognition. This is more akin to a neural network. Even if you are getting the "right" answer, it is coming out of a black box.

Because the rationalist types can't see the algorithm, they assume it can't be "right".

Thoughts?

Comment author: Yvain 04 October 2012 12:02:00AM 6 points [-]

I like your idea and upvoted the comment, but I don't know enough about neural networks to have a meaningful opinion on it.

Comment author: Peterdjones 12 October 2012 06:30:48PM *  3 points [-]

I agree that literary critical terms are fuzzy in the same sense as "kind", but I don't think they're necessarily any more fuzzy.

That is an important point. It is not so easy to come up with a criterion of "meaningfulness" that excludes the stuff rationalists don't like, but doesn't exclude a lot of everyday terminology at the same time.

I could add that others have their own criteria of "meaningfulness". Humanities types aren't very bothered about questions like how many moons Saturn has, because it doesn't affect them or their society. The common factor between both kinds of "meaningfulness" seems to be that they amount to "the stuff I personally consider to be worth bothering about". A concern with objective meaningfulness is still a subjective concern.

Comment author: JulianMorrison 04 October 2012 12:15:34AM 0 points [-]

FWIW, the Moby Dick example is less stupid than you paint it, given the recurrence of whiteness as an attribute of things special or good in western culture - an idea that pre-dates the invention of race. I think a case could be made out that (1) the causality runs from whiteness as a special or magical attribute, to its selection as a pertinent physical feature when racism was being invented (considering that there were a number of parallel candidates, like phrenology, that didn't do so well memetically), and (2) in a world that now has racism, the ongoing presence of valuing white things as special has been both consciously used to reinforce it (cf. the KKK's name and its connotations) and unconsciously reinforces it by association.

Comment author: Sewing-Machine 04 October 2012 03:31:54AM 4 points [-]

FWIW, the Moby Dick example is less stupid than you paint it, given the recurrence of whiteness as an attribute of things special or good in western culture - an idea that pre-dates the invention of race.

I can't resist. I think you should read Moby Dick. Whiteness in that novel is not used as any kind of symbol for good:

This elusive quality it is, which causes the thought of whiteness, when divorced from more kindly associations, and coupled with any object terrible in itself, to heighten that terror to the furthest bounds. Witness the white bear of the poles, and the white shark of the tropics; what but their smooth, flaky whiteness makes them the transcendent horrors they are? That ghastly whiteness it is which imparts such an abhorrent mildness, even more loathsome than terrific, to the dumb gloating of their aspect. So that not the fierce-fanged tiger in his heraldic coat can so stagger courage as the white-shrouded bear or shark.

If you want to talk about racism and Moby Dick, talk about Queequeg!

Not that white animals aren't often associated with good things, but this is not unique in western culture:

So in spring, when appears the constellation Visakha, the Bodhisatwa, under the appearance of a young white elephant of six defenses, with a head the color of cochineal, with tusks shining like gold, perfect in his organs and limbs, entered the right side of his mother, and she, by means of a dream, was conscious of the fact.

Comment author: Kaj_Sotala 02 October 2012 06:55:54PM *  12 points [-]

If that's your criterion, you could use some stand-in for computer science terms that have no meaning.

WMSCI, the World Multiconference on Systemics, Cybernetics and Informatics, is a computer science and engineering conference that has occurred annually since 1995. [...] WMSCI attracted publicity of a less favorable sort in 2005 when three graduate students at MIT succeeded in getting a paper accepted as a "non-reviewed paper" to the conference that had been randomly generated by a computer program called SCIgen

Comment author: RichardKennaway 02 October 2012 07:35:57PM *  9 points [-]

I'm sure there's a lot of nonsense, but "post-utopian" appears to have a quite ordinary sense, despite the lowness of the signal to noise ratio of some of those hits. A post-utopian X (X = writer, architect, hairdresser, etc.) is one who is working after, and in reaction against, a period of utopianism, i.e. belief in the perfectibility of the world by man. Post-utopians today are the people who believe that the promises of science have been found hollow, and ruin and destruction are all we have to look forward to.

We're all utopians here.

Comment author: TheOtherDave 02 October 2012 08:05:36PM 1 point [-]

Post-utopians today are the people who believe that the promises of science have been found hollow, and ruin and destruction are all we have to look forward to.

By this definition, wouldn't the belief that science will not lead to perfection but we can still look forward to more of what we already have (rather than ruin and destruction) be equally post-utopian?

Comment author: RichardKennaway 02 October 2012 08:33:44PM 2 points [-]

Not as I see the word used, which appears to involve the sense of not merely less enthusiastic than, but turning away from. You can't make a movement on the basis of "yes, but not as sparkly".

Comment author: TheOtherDave 02 October 2012 10:12:43PM 5 points [-]

Pity. "It will be kind of like it is now" is an under-utilized prediction.

Comment author: [deleted] 04 October 2012 03:08:29PM 5 points [-]

Dunno, Futurama is pretty much entirely based on that.

Comment author: JulianMorrison 02 October 2012 08:09:39PM 13 points [-]

I think you are playing to what you assume are our prejudices.

Suppose X is a meaningless predicate from a humanities subject. Suppose you used it, not a simulacrum. If it's actually meaningless by the definition I give elsewhere in the thread, nobody will be able to name any Y such that p(X|Y) differs from p(X|¬Y) after a Bayesian update. Do you actually expect that, for any significant number of terms in humanities subjects, you would find no Y, even after grumpy defenders of X popped up in the thread? Or did you choose a made-up term so as to avoid flooding the thread with Y-proponents? If you expect people to propose candidates for Y, you aren't really expecting X to be meaningless.

The Sokal hoax only proves one journal can be tricked by fake jargon. Not that bona fide jargon is meaningless.
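The meaningfulness criterion described above can be made concrete with a small sketch. This is my own toy construction, not from the thread: the joint distributions and the predicate names are invented for illustration, with X standing for a classification like "post-colonial" and Y for some observable feature of a text.

```python
# Toy illustration of the criterion: a predicate X is "meaningful" if
# there is some observable Y that shifts its probability, i.e.
# P(X|Y) != P(X|not-Y). Exact rational arithmetic avoids float noise.
from fractions import Fraction

def conditional(joint, x_val, y_val):
    """P(X = x_val | Y = y_val), computed from unnormalized joint weights."""
    num = sum(w for (x, y), w in joint.items() if x == x_val and y == y_val)
    den = sum(w for (x, y), w in joint.items() if y == y_val)
    return Fraction(num, den)

def is_meaningful(joint):
    """X carries information iff conditioning on Y moves the probability of X."""
    return conditional(joint, True, True) != conditional(joint, True, False)

# Correlated case: X = "text is post-colonial", Y = "text discusses
# decolonization". Observing Y raises P(X) from 1/5 to 4/5.
correlated = {(True, True): 4, (True, False): 1,
              (False, True): 1, (False, False): 4}

# Floating case: Y tells us nothing about X; P(X|Y) = P(X|not-Y) = 1/2.
floating = {(True, True): 1, (True, False): 1,
            (False, True): 1, (False, False): 1}

print(is_meaningful(correlated))  # True
print(is_meaningful(floating))    # False
```

Of course, a single Y failing to move P(X) doesn't prove X meaningless; the criterion quantifies over every observable Y, which is why the burden-of-proof question in the comment matters.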

Comment author: garethrees 19 February 2013 05:00:05PM *  4 points [-]

"Post-utopian" is a real term, and even in the absence of examples of its use, it is straightforward to deduce its (likely) meaning, since "post-" means "subsequent to, in reaction to" and "utopian" means "believing in or aiming at the perfecting of polity or social conditions". So post-utopian texts are those which react against utopianism, express skepticism at the perfectibility of society, and so on. This doesn't seem like a particularly difficult idea and it is not difficult to identify particular texts as post-utopian (for example, Koestler's Darkness at Noon, Huxley's Brave New World, or Nabokov's Bend Sinister).

So I think you need to pick a better example: "post-utopian" doesn't cut it. The fact that you have chosen a weak example increases my skepticism as to the merits of your general argument. If meaningless terms are rife in the field of English literature, as you seem to be suggesting, then it should be easy for you to pick a real one.

(I made a similar point in response to your original post on this subject.)

Comment author: paper-machine 02 October 2012 06:25:20PM 8 points [-]

What would he have to say? The Sokal Hoax was about social engineering, not semantics.

Comment author: Manfred 02 October 2012 11:00:03AM 4 points [-]

Example: an Irishman arguing with a Mongolian over what dragons look like.

Comment author: Vaniver 02 October 2012 07:39:35PM 6 points [-]

When the Irishman is a painter and the Mongolian a dissatisfied customer, does their disagreement have meaning?

Comment author: Ender 03 October 2012 02:35:09AM 3 points [-]

In that case, they're arguing about the wrong thing. Their real dispute is that the painting isn't what the Mongolian wanted as a result of a miscommunication which neither of them noticed until one of them had spent money (or promised to) and the other had spent days painting.

So, no, even in that situation, there's no such thing as a dragon, so they might as well be arguing about the migratory patterns of unicorns.

Comment author: loup-vaillant 02 October 2012 01:40:54PM *  9 points [-]

There is the literature professor's belief, the student's belief, and the sentence "Carol is 'post-utopian'". While the sentence can be applied to both beliefs, the beliefs themselves are quite different beasts. The professor's belief carves literature-space in a way most other literature professors' do. Totally meaningful. The student's belief, on the other hand, is just a label over a set of authors the student has scarcely read. Going a level deeper, we can find an explanation for this label, which turns out to be just another label ("colonial alienation"), and then it stops. From Eliezer's main post (emphasis mine):

Some literature professor lectures that the famous authors Carol, Danny, and Elaine are all 'post-utopians', which you can tell because their writings exhibit signs of 'colonial alienation'. For most college students the typical result will be that their brain's version of an object-attribute list will assign the attribute 'post-utopian' to the authors Carol, Danny, and Elaine.

  1. The professor has a meaningful belief.
  2. Unable to express it properly (it may not be his fault), he gives a mysterious explanation.
  3. That mysterious explanation generates a floating belief in the student's mind.

Well, not that floating. The student definitely expects a sensory experience: grades. The problem isn't the lack of expectations, but that they're based on an overly simplified model of the professor's beliefs, with no direct ties to the writings themselves, only to the authors' names. Remove professors and authors' names, and the students' beliefs are really floating: they will have no way to tie them to reality (the writings). And if they try anyway, I bet their carvings won't agree.

Now when the professor grades an answer, only a label will be available ("post-utopian", or whatever). This label probably reflects the student's belief directly. That answer will indeed be quickly pattern-matched against a label inside the professor's brain, generating a quick "right" or "wrong" response (and the corresponding motion in the hand that wields the red pen). Just as drawn in the picture, actually.

However, the label in the professor's head is not a floating belief like the student's. It's a cached thought, based on a much more meaningful belief (or so I hope).

Okay, now that I recognize your name, I see you're not exactly a newcomer here. Sorry if I didn't tell you anything you don't already know. But it did seem like you conflated mysterious answers (like "phlogiston") and floating beliefs (actual neural constructs). Hope this helped.

Comment author: evand 02 October 2012 03:18:47PM 3 points [-]

If the teacher does not have a precise codification of what makes a writer "post-utopian", then how should he teach it to students?

I would say the best way is a mix of demonstrating examples ("Alice is not a post-utopian; Carol is a post-utopian"), and offering generalizations that are correlated with whether the author is a post-utopian ("colonial alienation"). This is a fairly slow method of instruction, at least in some cases where the things being studied are complicated, but it can be effective. While the student's belief may not yet be as well-formed as the professor's, I would hesitate to call it meaningless. (More specifically, I would agree denotatively but object connotatively to such a classification.) I would definitely not call the belief useless, since it forms the basis for a later belief that will be meaningful. If a route to meaningful, useful belief B goes through "meaningless" belief A, then I would say that A is useful, and that calling A meaningless produces all the wrong sorts of connotations.

Comment author: Alejandro1 02 October 2012 02:37:59PM 4 points [-]

If that is what Eliezer meant, then it was confusing to use an example for which many people suspect that the concept itself is not meaningful. It just generates distraction, like the "Is Nixon a pacifist?" example in the original Politics is the Mind-Killer post (and actually, the meaningfulness of post-colonialism as a category might be a political example in the wide sense of the word). He could have used something from physics like "Heat is transmitted by convection", or really any other topic that a student can learn by rote without real understanding.

Comment author: loup-vaillant 02 October 2012 03:21:06PM *  3 points [-]

I don't think Eliezer meant all of what I have written (edit: yep, he didn't). I was mainly analysing (and defending) the example to death, under Daenerys' proposed assumption that the belief in the professor's head is not floating. More likely, he picked something familiar that would make us think something like "yeah, if those are just labels, that's no use".¹

By the way, is there any good example? Something that (i) clearly is meaningful, and (ii) lets us empathise with those who nevertheless extract a floating belief out of it? I'm not sure. I for one don't empathise with the students who merely learn by rote, for I myself don't like loosely connected belief networks: I always wanted to understand.

Also, Eliezer wasn't very explicit about the distinction between a statement, embodied in text, images, or whatever our senses can process, and belief, embodied in a heap of neurons. But this post is introductory. It is probably not very useful to make the distinction so soon. More important is to realize that ideas are not floating in the void, but are embodied in a medium: paper, computers… and of course brains.

[1] We're not familiar with "post-utopianism" and "colonial alienation" specifically, but we do know the feeling generated by such literary mumbo jumbo.

Comment author: Rixie 05 April 2013 01:10:54PM 1 point [-]

Thank you! Your post helped me finally to understand what it was that I found so dissatisfying with the way I'm being taught chemistry. I'm not sure right now what I can do to remedy this, but thank you for helping me come to the realization.

Comment author: eridu 04 October 2012 12:47:17AM *  5 points [-]

To over-extend your metaphor, dubstep is electronic music with a breakbeat and a certain BPM. Bassnectar described it in an interview once as hip-hop beats at half time in breakbeat BPMs.

It's really easy to tell the difference between dubstep and house, because dubstep has a broken kick..kickSNARE beat, while house has a 4/4 kick.kick.kick.kick beat.

(Interestingly, the dubstep you seem to describe is what people who listened to earlier dubstep commonly call "brostep," and was inspired by one Rusko song ("Cockney Thug," if I remember correctly).)

The point I mean to make by this is that most concepts do have system 2 algorithms that identify them, even if most people on LW would disagree with the social groups that advance those concepts.

I have many friends and comrades that are liberal arts students, and most of the time, if they said something like "post-utopian" or "colonial alienation" they'd have a coherent system-2 algorithm for identifying which authors or texts are more or less post-utopian.

Really, I agree that this is a bad example, because there are two things going on: the students have to guess the teacher's password (which is the same as if you had Skrillex teaching MUSC 202: Dubstep Identification, and only accepted "songs with that heavy wobble bass shit" as "real dubstep, bro"), and there's an alleged unspoken conspiracy of academics to have a meaningless classifier (which is maybe the same as subgenres of hard noise music, where there truly is no difference between typical songs in each subgenre, and only artist self-identification or consensus among raters can be used as a grouping strategy).

As others have said better than me, the Sokal affair seems to be better evidence of how easy it is to publish a bad paper than it is evidence that postmodernism is a flawed field.

Comment author: buybuydandavis 07 January 2013 06:17:46AM 2 points [-]

While the English profs may consistently classify writing samples as post-utopian or not, the use of the label "post-utopian" should be justified by the English meanings of "post" and "utopian" in some way. "Post" and "utopian" are concepts with meaning; they're not just nonsense sounds available for use as labels.

If you have no conceptual System 1 algorithm for "post-utopian", and just have some consistent System 2 algorithm, it's a conceptual confusion to use a compound of labels for concepts that may have nothing at all to do with your underlying System 2 defined concept.

Likely the confusion serves an intellectually dishonest purpose, as in euphemism. When you see this kind of nonsense, there is some politically motivated obfuscation nearby.

Comment author: RobinZ 02 October 2012 02:47:52PM 9 points [-]

Before reading other answers, I would guess that a statement is meaningful if it is either implied or refuted by a useful model of the universe - the more useful the model, the more meaningful the statement.

Comment author: RichardKennaway 02 October 2012 08:06:48AM 7 points [-]

A set of beliefs is not like a bag of sand, individual beliefs unconnected with each other, about individual things. They are connected to each other by logical reasoning, like a lump of sandstone. Not all beliefs need to have a direct connection with experience, but as long as pulling on the belief pulls, perhaps indirectly, on anticipated experience, the belief is meaningful.

When a pebble of beliefs is completely disconnected from experience, or when the connection is so loose that it can be pulled around arbitrarily without feeling the tug of experience, then we can pronounce it meaningless. The pebble may make an attractive paperweight, with an intricate structure made of elements that also occur in meaningful beliefs, but that's all it can be. Music of the mind, conveying a subjective impression of deep meaning, without having any.

For the hypothetical photon disappearing in the far-far-away, no observation can be made on that photon, but we have other observations leading to beliefs about photons in general, according to which they cannot decay. That makes it meaningful to say that the far away photon acts in the same way. If we discovered processes of photon decay, it would still be meaningful, but then we would believe it could be false.

Comment author: Bundle_Gerbe 04 October 2012 12:47:38PM *  2 points [-]

Your view reminds me of Quine's "web of belief" view as expressed in "Two Dogmas of Empiricism" section 6:

The totality of our so-called knowledge or beliefs, from the most casual matters of geography and history to the profoundest laws of atomic physics or even of pure mathematics and logic, is a man-made fabric which impinges on experience only along the edges. Or, to change the figure, total science is like a field of force whose boundary conditions are experience. A conflict with experience at the periphery occasions readjustments in the interior of the field. Truth values have to be redistributed over some of our statements. Reevaluation of some statements entails reevaluation of others, because of their logical interconnections--the logical laws being in turn simply certain further statements of the system, certain further elements of the field.

Quine doesn't use Bayesian epistemology, unfortunately because I think it would have helped him clarify and refine his view.

One way to try to flesh this intuition out is to say that some beliefs are meaningful by virtue of being subject to revision by experience (i.e. they directly pay rent), while others are meaningful by virtue of being epistemically entangled with beliefs that pay rent (in the sense of not being independent beliefs in the probabilistic sense). But that seems to fail because any belief connected to a belief that directly pays rent must itself be subject to revision by experience, at least to some extent, since if A is entangled with B, an observation which revises P(A) typically revises P(B), however slightly.
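The entanglement point can be made concrete with a toy numerical sketch (hypothetical probabilities, chosen only for illustration): if beliefs A and B are probabilistically dependent, then conditioning on evidence about B necessarily moves the probability of A as well.

```python
# Toy joint distribution over two dependent beliefs A and B
# (hypothetical numbers, chosen so A and B are correlated).
joint = {
    (True, True): 0.4,
    (True, False): 0.1,
    (False, True): 0.1,
    (False, False): 0.4,
}

def marginal_A(dist):
    # P(A) = sum over all worlds where A holds.
    return sum(p for (a, _), p in dist.items() if a)

def condition_on_B(dist, b_value):
    # Bayesian conditioning: keep only worlds where B == b_value, renormalize.
    restricted = {k: p for k, p in dist.items() if k[1] == b_value}
    total = sum(restricted.values())
    return {k: p / total for k, p in restricted.items()}

prior_A = marginal_A(joint)                            # 0.5
posterior_A = marginal_A(condition_on_B(joint, True))  # 0.8
```

So an observation that only bears directly on B still revises P(A) from 0.5 to 0.8, which is exactly why "entangled with a rent-paying belief" collapses into "subject to revision by experience".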

Comment author: dankane 02 October 2012 08:38:58AM 2 points [-]

Interesting idea. But how did you know how to phrase your original beliefs about photons? You could just as easily have decided to describe photons as "photons obey Maxwell's equations up to an event horizon and cease to exist outside of it". You could then add other beliefs like "nothing exists outside of the event horizon" which are incompatible with the photon continuing to exist.

In other words, your beliefs cannot afford to be independent of one another, but you could build two different belief systems, one in which the photon continues to exist and one in which it does not, that make identical predictions about experiences. Is it meaningful to ask which of these belief systems is true?

Comment author: RichardKennaway 02 October 2012 09:19:43AM 3 points [-]

But how did you know how to phrase your original beliefs about photons? You could just as easily have decided to describe photons as "photons obey Maxwell's equations up to an event horizon and cease to exist outside of it".

Systems of belief are more like a lump of sandstone than a pile of sand, but they are also more like a lump of sandstone, a rather friable lump, than a lump of marble. They are not indissoluble structures that can be made in arbitrary shapes, the whole edifice supported by an attachment at one point to experience.

Experience never brought hypotheses such as you suggest to physicists' attention. The edifice as built has no need of it, and it cannot be bolted on: it will just fall off again.

Comment author: Bundle_Gerbe 04 October 2012 01:45:26PM *  4 points [-]

Consider "Elaine is a post-utopian and the Earth is round". This statement is meaningless, at least in the case where the Earth is round, where it is equivalent to "Elaine is a post-utopian." Yet it does constrain my experience, because observing that the Earth is flat falsifies it. If something like this came to seem like a natural proposition to consider, I think it would be hard to notice it was (partly) meaningless, since I could still notice it being updated.

This seems to defeat many suggestions people have made so far. I guess we could say it's not a real counterexample, because the statement is still "partly meaningful". But in that case it would be still be nice if we could say what "partly meaningful" means. I think that the situation often arises that a concept or belief people throw around has a lot of useless conceptual baggage that doesn't track anything in the real world, yet doesn't completely fail to constrain reality (I'd put phlogiston and possibly some literary criticism concepts in this category).

My first attempt is to say that a belief A of X is meaningful to the extent that it (is contained in / has an analog in / is resolved by) the most parsimonious model of the universe which makes all predictions about direct observations that X would make.

Comment author: thomblake 04 October 2012 02:34:56PM 3 points [-]

A solution to that particular example is already in logic - the statements "Elaine is a post-utopian" and "the Earth is round" can be evaluated separately, and then you just need a separate rule for dealing with conjunctions.
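This separate rule for conjunctions can be sketched with a hypothetical three-valued scheme (a toy convention, not standard textbook logic), using `None` to stand for "meaningless": a False conjunct dominates, and otherwise any meaningless conjunct infects the whole conjunction.

```python
# Three-valued evaluation: True, False, or None ("meaningless").
# Rule: any False conjunct makes the conjunction False;
# failing that, any meaningless conjunct makes it meaningless.
def conj(*values):
    if any(v is False for v in values):
        return False
    if any(v is None for v in values):
        return None
    return True

elaine_is_post_utopian = None  # assume this conjunct is meaningless
earth_is_round = True

whole = conj(elaine_is_post_utopian, earth_is_round)  # None: still meaningless
refuted = conj(elaine_is_post_utopian, False)         # False: a flat Earth falsifies it
```

This matches Bundle_Gerbe's observation above: when the Earth turns out flat, the compound statement is outright False, but when the Earth is round, all that remains is the meaningless part.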

Comment author: Sewing-Machine 03 October 2012 04:25:10AM 3 points [-]

What rule could restrict our beliefs to just propositions that can be meaningful, without excluding a priori anything that could in principle be true?

A variation on this question, "what rule could restrict our beliefs to just propositions that can be decided, without excluding a priori anything true?", is known to be hopeless in a strong sense.

Incidentally I think the phrase "in principle" isn't doing any work in your koan.

Comment author: Vaniver 02 October 2012 07:41:26PM *  3 points [-]

Meaningful seems like an odd word to choose, as it contains the answer itself. What rule restricts our beliefs to just propositions that can be meaningful? Why, we could ask ourselves if the proposition has meaning.

The "atoms" rule seems fine, if one takes out the word "atoms" and replaces it with "state of the universe," with the understanding that "state" includes both statics and dynamics. Thus, we could imagine a world where QM was not true, and other physics held sway- and the state of that world, including its dynamics, would be noticeably different than ours.

And, like daenerys, I think the statement that "Elaine is a post-utopian" can be meaningful, and the implied expanded version of it can be concordant with reality.

[edit] I also wrote my koan answers as I was going through the post, so here's 1:

Supposing that knowledge only exists in minds, then truth judgments- that is, knowledge that a belief corresponds to reality- will only exist in heads, because it is knowledge.

The postmodernists are wrong if they seek to have material implications from this definitional argument. What makes truth judgments special compared to other judgments is that we have access to the same reality. If Sally believes that the marble is in the basket and Anne believes the marble is in the box, the straw postmodernist might claim that both have their own truth- but two beliefs do not generate two marbles. Sally and Anne will both see the marble in the same container when they go looking for it.

Again, the bare facts agree with the postmodernists - Sally and Anne would need to look to see where the marble is, which they can hardly do without their heads! But the lack of an unthinking truth oracle does not stop "the concordance of beliefs with reality" - what I would submit as a short definition of truth - from being a useful and testable concept.

And 2:

Quite probably, as it would want to have beliefs about the potential pasts and futures, or counterfactuals, or beliefs in the minds of others.

Comment author: RobinZ 03 October 2012 02:23:14AM 1 point [-]

I very much like your response to (1) - I think the point about having access to a common universe makes it very clear.

Comment author: Patrick 02 October 2012 10:55:33AM 3 points [-]

I don't think there can be any such rule.

Comment author: Yvain 02 October 2012 06:46:20AM *  8 points [-]

If a person with access to the computer simulating whichever universe (or set of universes) a belief is about could in principle write a program that takes as input the current state of the universe (as represented in the computer) and outputs whether the belief is true, then the belief is meaningful.

(if the universe in question does not run on a computer, begin by digitizing your universe, then proceed as above)

Comment author: MixedNuts 02 October 2012 09:34:25AM 15 points [-]

That has the same problem as atomic-level specifications that become false when you discover QM. If the Church-Turing thesis is false, all statements you have specified thus become meaningless or false. Even using a hierarchy of oracles until you hit a sufficient one might not be enough if the universe is even more magical than that.

Comment author: Salutator 02 October 2012 12:27:41PM 4 points [-]

But that's only useful if you make it circular.

Taking you more strictly at your word than you mean it, the program could just return true for the majority belief on empirically non-falsifiable questions. Or it could just return false on all beliefs, including your belief that that is illogical. So with the right programs, pretty much arbitrary beliefs pass as meaningful.

You actually want it to depend on the state of the universe in the right way, but that's just another way to say it should depend on whether the belief is true.

Comment author: Yvain 02 October 2012 10:14:03PM 2 points [-]

That's a problem with all theories of truth, though. "Elaine is a post-utopian author" is trivially true if you interpret "post-utopian" to mean "whatever professors say is post-utopian", or "a thing that is always true of all authors" or "is made out of mass".

To do this with programs rather than philosophy doesn't make it any worse.

What I'm suggesting is that there is a correspondence between meaningful statements and universal computer programs. Obviously this theory doesn't tell you how to match the right statement to the right computer program. If you match the statement "snow is white" to the computer program that is a bunch of random characters, the program will return no result and you'll conclude that "snow is white" is meaningless. But that's just the same problem as the philosopher who refuses to accept any definition of "snow", or who claims that snow is obviously black because "snow" means that liquid fossil fuel you drill for and then turn into gasoline.

If your closest match to "post-utopian" is a program that determines whether professors think someone is post-utopian, then you can either conclude that post-utopian literally means "something people call post-utopian" - which would probably be a weird and nonstandard word use the same way using "snow" to mean "oil" would be nonstandard - or that post-utopianism isn't meaningful.

Comment author: Salutator 03 October 2012 09:42:10AM 1 point [-]

Yeah, probably all theories of truth are circular and the concept is simply non-tabooable. I agree your explanation doesn't make it worse, but it doesn't make it better either.

Comment author: pragmatist 03 October 2012 05:54:22AM *  2 points [-]

Doesn't this commit you to the claim that at least some beliefs about whether or not a particular Turing machine halts must be meaningless? If they are all meaningful and your criterion of meaningfulness is correct, then your simulating computer solves the halting problem. But it seems implausible that beliefs about whether Turing machines halt are meaningless.

Comment author: JulianMorrison 02 October 2012 12:24:12PM 5 points [-]

For a belief to be meaningful you have to be able to describe evidence that would move your posterior probability of it being true after a Bayesian update.

This is a generalization of falsifiability that allows, for example, indirect evidence pertaining to universal laws.
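As an illustrative sketch (toy numbers, hypothetical likelihoods): under this rule, a belief H is meaningful if there exists evidence E with P(H|E) ≠ P(H), and "floating" if every possible observation is equally likely under H and not-H, so that no update can ever move the posterior.

```python
def posterior(prior_h, p_e_given_h, p_e_given_not_h):
    # Bayes' rule: P(H|E) = P(E|H) P(H) / P(E)
    p_e = p_e_given_h * prior_h + p_e_given_not_h * (1 - prior_h)
    return p_e_given_h * prior_h / p_e

# A meaningful belief: the evidence is likelier under H, so the posterior moves.
p_meaningful = posterior(0.5, 0.9, 0.2)  # rises above the 0.5 prior

# A floating belief: every observation is equally likely under H and not-H,
# so the posterior always equals the prior, whatever we observe.
p_floating = posterior(0.5, 0.7, 0.7)    # stays at the 0.5 prior
```

The indirect-evidence point survives in this framing: E need not bear on H directly, as long as the likelihoods under H and not-H differ somewhere.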

Comment author: Matt_Simpson 02 October 2012 11:54:11PM 1 point [-]

Possible counterexample: "All possible mathematical structures exist."

Comment author: mbrubeck 02 October 2012 08:31:21PM 2 points [-]

I think this one gets more complicated when you include beliefs about things like theorems of logic, e.g., "Any consistent formal system powerful enough to describe Peano arithmetic is incomplete." It seems to me that this belief is meaningful, yet independent of any sensory experience or physical law. That is, it's not really a belief about "the universe" of atoms or quantum fields or whatnot. Perhaps it would be better to talk about these "beliefs" as a separate category.

Comment author: ArisKatsaris 02 October 2012 01:06:36PM 3 points [-]

For every meaningful proposition P, an author should (in theory) be able to write coherently about a fictional universe U where P is true and a fictional universe U' where P is false.

Comment author: Eugine_Nier 02 October 2012 05:24:52PM 7 points [-]

So my belief that 2+2=4 isn't meaningful?

Comment author: khafra 02 October 2012 06:32:57PM *  2 points [-]

I thought Eliezer's story about waking up in a universe where 2+2 seems to equal 3 felt pretty coherent.

edit: It seems like the story would be less coherent if it involved detailed descriptions of re-deriving mathematics from first principles. So perhaps ArisKatsaris' definition leaves too much to the author's judgement in what to leave out of the story.

Comment author: dankane 02 October 2012 06:56:12PM 4 points [-]

I think that it's a good deal more subtle than this. Eliezer described a universe in which he had evidence that 2+2=3, not a universe in which 2 plus 2 was actually equal to 3. If we talk about the mathematical statement that 2+2=4, there is actually no universe in which this can be false. In order to know this fact, though, we need to acquire evidence of it, which, because it is a mathematical truth, we can do without any interaction with the outside world. And if someone messed with your head, you could acquire evidence that 2 plus 2 was 3 instead, but seeing this evidence would not cause 2 plus 2 to actually equal 3.

Comment author: CCC 03 October 2012 12:31:28PM 2 points [-]

If we talk about the mathematical statement that 2+2=4, there is actually no universe in which this can be false.

On the contrary. Imagine a being that cannot (due to some neurological quirk) directly perceive objects - it can only perceive the spaces between objects, and thus indirectly deduce the presence of the objects themselves. To this being, the important thing - the thing that needs to be counted and to which a number is assigned - is the space, not the object.

Thus, "two" looks like this, with two spaces: 0 0 0

Placing "two" next to "two" gives this: 0 0 0 0 0 0

Counting the spaces gives five. Thus, 2+2=5.

Comment author: dankane 03 October 2012 06:47:29PM 2 points [-]

I think you misunderstand what I mean by "2+2=4". Your argument would be reasonable if I had meant "when you put two things next to another two things I end up with four things". On the other hand, this is not what I mean. In order to get that statement you need the additional, and definitely falsifiable statement "when I put a things next to b things, I have a+b things".

When I say "2+2=4", I mean that in the totally abstract object known as the natural numbers, the identity 2+2=4 holds. On the other hand the Platonist view of mathematics is perhaps a little shaky, especially among this crowd of empiricists, so if you don't want to accept the above meaning, I at least mean that "SS0+SS0=SSSS0" is a theorem in Peano Arithmetic. Neither of these claims can be false in any universe.
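The claim that "SS0+SS0=SSSS0" follows mechanically from the Peano axioms can be sketched with a toy encoding (a string representation for illustration, not a real proof checker): numerals are strings of successors applied to zero, and addition is defined by the usual Peano recursion.

```python
# Peano numerals as strings: "0", "S0", "SS0", ...
ZERO = "0"

def S(n):
    # Successor: prepend one "S".
    return "S" + n

def add(a, b):
    # Peano recursion:  a + 0 = a  ;  a + S(b') = S(a + b')
    if b == ZERO:
        return a
    return S(add(a, b[1:]))  # b = S(b'), and b[1:] strips the leading S

two = S(S(ZERO))       # "SS0"
four = add(two, two)   # "SSSS0"
```

The computation unwinds exactly the derivation dankane describes: SS0 + SS0 = S(SS0 + S0) = SS(SS0 + 0) = SSSS0, with no appeal to any facts about the physical universe.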

Comment author: RobinZ 03 October 2012 07:21:26PM 1 point [-]

I think I understand what CCC means by the being that perceives spaces instead of objects - Peano Arithmetic only exists because it is useful for us, human beings, to manipulate numbers that way. Given a different set of conditions, a different set of mathematical axioms would be employed.

Comment author: dankane 03 October 2012 07:52:41PM 1 point [-]

Peano Arithmetic is merely a collection of axioms (and axiom schema), and inference laws. Its existence is not predicated upon its usefulness, and neither are its theorems.

I agree that the fact that we actually talk about Peano Arithmetic is a consequence of the fact that it (a) is useful to us (b) appeals to our aesthetic sense. On the other hand, although the being described in CCC's post may not have developed Peano's axioms on their own, once they are informed of these axioms (and modus ponens, and what it means for something to be a theorem), they would still agree that "SS0+SS0=SSSS0" in Peano Arithmetic.

In summary, although there may be universes in which the belief "2+2=4" is no longer useful, there are no universes in which it is not true.

Comment author: RobinZ 03 October 2012 08:06:04PM 1 point [-]

I freely concede that a tree falling in the woods with no-one around makes acoustic vibrations, but I think it is relevant that it does not make any auditory experiences.

In retrospect, however, backtracking to the original comment, if "2+2=4" were replaced by "not(A and B) = (not A) or (not B)", I think my argument would be nearly untenable. I think that probably suffices to demonstrate that ArisKatsaris's theory of meaningfulness is flawed.

Comment author: katydee 02 October 2012 05:50:10AM *  1 point [-]
Comment author: Pavitra 05 October 2012 03:19:25AM 2 points [-]

Insufficient: the colony ship leaves no evidence.

Comment author: selylindi 02 October 2012 03:15:16PM *  1 point [-]

"God's-eye-view" verificationism

A proposition P is meaningful if and only if P and not-P would imply different perceptions for a hypothetical entity which perceives all existing things.

(This is not any kind of argument for the actual existence of a god. Downvote if you wish, but please not due to that potential misunderstanding.)

Comment author: Alex_Altair 02 October 2012 05:47:01AM *  0 points [-]

Solomonoff induction! Just kidding.

Comment author: seanwelsh77 14 June 2013 04:20:09AM 1 point [-]

Restrict propositions to observable references? (Or have a rule about falsifiablility?)

The problem with the observable reference rule is that sense can be divorced from reference and things can be true (in principle) even if un-sensed or un-sensable. However, when we learn language we start by associating sense with concrete reference. Abstractions are trickier.

It is the case that my sensorimotor apparatus will determine my beliefs and my ability to cross-reference my beliefs with other similar agents with similar sensorimotor apparatus will forge consensus on propositions that are meaningful and true.

Falsifiability is better. I can ask another human: is Orwell post-Utopian? They can say 'hell no, he is dystopian'... But if some say yes and some say no, it seems I have an issue with vagueness, which I would have to clarify with some definition of criteria for post-Utopian and dystopian.

Then once we had clarity of definition we could seek evidence in his texts. A lot of humanities texts however just leave observable reference at the door and run amok with new combinations of sense. Thus you get unicorns and other forms of fantasy...

Comment author: alex_zag_al 05 October 2012 02:44:34PM *  1 point [-]

All the propositions must be logical consequences of a theory that predicts observation, once you've removed everything you can from the theory without changing its predictions, and without adding anything.

Comment author: CCC 03 October 2012 12:21:21PM 1 point [-]

"A statement can be meaningful if a test can be constructed that will return only one result, in all circumstances, if the statement is true."

Consider the statement: If I throw an object off this cliff, then the object will fall. The test is obvious; I can take a wide variety of objects (a bowling ball, a rock, a toy car, and a set of music CDs by <unpopular musician>) and throw them off the cliff. I can then note that all of them fall, and thereby improve the probability that the statement is true. I can then take one final object, a helium balloon, and throw it off the cliff; as the balloon rises, I have thereby shown that the statement is false. (A more correct version would be "if I throw a heavier-than-air object off this cliff, then the object will fall." It's still not completely true yet - a live pigeon is heavier than air - but it's closer).

By this test, however, the statement "Carol is a post-utopian author" is meaningful, as long as there exist some features which are the features of post-utopian authors (the features do not need to be described, or even known, as long as their existence can be proven - repeatable, correct classification by a series of artificial neural networks would prove that such features exist).

Comment author: Karl 03 October 2012 01:41:17AM 1 point [-]

Proposition p is meaningful relative to the collection of possible worlds W if and only if there exist w, w' in W such that p is true in the possible world w and false in the possible world w'.

Then the question becomes: to be able to reason in all generality, what collection of possible worlds should one use?

That's a very hard question.

Comment author: Benquo 02 October 2012 12:37:00PM 3 points [-]

This post starts out by saying that we know there is such a thing as truth, because there is something that determines our experimental outcomes, aside from our experimental predictions. But by the end of the post, you're talking about truth as correspondence to an arrangement of atoms in the universe. I'm not sure how you got from there to here.

Comment author: incariol 02 October 2012 02:59:07PM 4 points [-]

We know there's such a thing as reality due to the reasons you mention, not truth - that's just a relation between reality and our beliefs.

"Arrangements of atoms" play a role in the idea that not all "syntactically correct" beliefs actually are meaningful and the last koan asks us to provide some rule to achieve this meaningfulness for all constructible beliefs (in an AI).

At least that's my understanding...

Comment author: CronoDAS 02 October 2012 11:05:07AM 3 points [-]

Is there a difference between "truth" and "accuracy"?

Comment author: Benquo 02 October 2012 12:39:49PM 4 points [-]

I can imagine some cases where one would find it natural to say that one proposition is more accurate than another, but not to say that it is more true. For example, saying that my home has 1000 ft.², as opposed to saying that it has 978.25 ft.². Or saying that it is the morning, as opposed to saying that it is 8:30 AM.

Comment author: pleeppleep 02 October 2012 12:43:12PM -3 points [-]

"Truth" and "accuracy" are just words, and there is no inherent difference between them.

That said, if you wanted to assign useful meaning to the two, you could use truth as a noun to describe the condition of belief matching reality, and accuracy as an adjective to refer to the place of a condition on a scale of proximity between belief and reality.

Or, you could use them the other way around.

Or, you could use both words as nouns in one context, and adjectives in another. This is usually the case, with accuracy more likely to be used as an adjective as it implies lack of confidence to some degree.

Comment author: AlexMennen 03 October 2012 02:30:23AM 2 points [-]

Didn't you say you were working on a sequence on open problems in friendly AI? And how could this possibly be higher priority than that sequence?

Comment author: Manfred 03 October 2012 02:49:04AM 9 points [-]

A guess: prerequisites. Also, we have lots of new people, so to be safe: prerequisites to prerequisites.

Comment author: Eliezer_Yudkowsky 03 October 2012 07:35:31PM 5 points [-]

Prereqs.

Comment author: shminux 02 October 2012 06:31:53AM 1 point [-]

Since my expectations sometimes conflict with my subsequent experiences, I need different names for the thingies that determine my experimental predictions and the thingy that determines my experimental results. I call the former thingies 'beliefs', and the latter thingy 'reality'.

You ought to admit that the statement 'there is "the thingy that determines my experimental results"' is a belief. A useful belief, but still a belief. And forgetting that sometimes leads to meaningless questions like "Which interpretation of QM is true?" or "Is wave function a real thing?"

Comment author: Peterdjones 02 October 2012 08:31:55PM *  1 point [-]

You ought to admit that the statement 'there is "the thingy that determines my experimental results"' is a belief.

Why? Didn't anyone ever see results that conflict with their beliefs?

Comment author: Daemon 16 September 2013 09:50:04PM *  1 point [-]

Fine, Eliezer, as someone who would really like to think/believe that there's Ultimate Truth (not based in perception) to be found, I'll bite.

I don't think you are steelmanning post-modernists in your post. Suppose I am a member of a cult X -- we believe that we can leap off of Everest and fly/not die. You and I watch my fellow cult-member jump off a cliff. You see him smash himself dead. I am so deluded ("deluded") that all I see is my friend soaring in the sky. You, within your system, evaluate me as crazy. I might think the same of you.

You might think that the example is overblown and this doesn't actually happen, but I've had discussions (mostly religious) in which other people and I would look at the same set of facts and see radically, radically different things. I'm sure you've been in such situations too. It's just that I don't find it comforting to dismiss such people as 'crazy/flawed/etc.' when they can easily do the same to me in their minds/groups, putting us in equivalent positions -- the other person is wrong within our own system of reference (which each side declares to be 'true' in describing reality) and doesn't understand it.

I think this ties in with http://lesswrong.com/lw/rn/no_universally_compelling_arguments/ .

Now, I'm not trying to be ridiculous or troll. I really, really want to think that there's one truth and that rationality -- and not some other method -- is the way to get to it. But at the very fundamental level (see http://lesswrong.com/lw/s0/where_recursive_justification_hits_bottom/ ), it seems like a choice between picking from various axioms.

I wish the arguments you presented here convinced me, I really do. But they haven't, and I have no way of knowing that I'm not in some matrix-simulated world where everything is, really, based on how my perception was programmed. How does this work for you -- do you just start off with assumption that there is truth, and go from there? At some fundamental level, don't you believe that your perception just.. works and describes reality 'correctly,' after adjusting for all the biases? Please convince me to pick this route, I'd rather take it, instead of waiting for a philosopher of perfect emptiness to present a way to view the world without any assumptions.

(I understand that 'everything is relative to my perception' gets you pretty much nowhere in reality. It's just that I don't have a way to perfectly counter that, and it bothers me. And if I did find all of your arguments persuasive, I would be concerned if that's just an artifact of how my brain is wired [crudely speaking] -- while some other person can read a religious text and, similarly, find it compelling/non-contradictory/'makes-sense-ey' so that the axioms this person would use wouldn't require explanation [because of that other person's nature/nurture]).

If I slipped somewhere myself, please steelman my argument in responding!

Comment deleted 05 October 2012 01:29:32AM *  [-]
Comment author: [deleted] 05 October 2012 08:22:55PM 1 point [-]

Why did you reply directly to the top-level post rather than to where the quotation was taken from?

Comment author: living_philosophy 19 November 2012 07:57:36PM 1 point [-]

As a graduate philosophy student, who went to liberal arts schools, and studied mostly continental philosophy with lots of influence from post-modernism, we can infer from the comments and articles on this site that I must be a complete idiot that spouts meaningless jargon and calls it rational discussion. Thanks for the warm welcome ;) Let us hope I can be another example for which we can dismiss entire fields and intellectuals as being unfit for "true" rationality. /friendly-jest.

Now my understanding may be limited, having actually studied post-modern thought, but the majority of the critiques of post-modernism I have read in these comments seem to completely miss key tenets and techniques in the field. The primary one is deconstruction, which in literary interpretation actually challenges ALL genres of classification for works, and single-minded interpretations of meaning or intent. An example actually happened in this comment section when people were discussing Moby Dick and the possibility of pulling out racial influences and undertones. One commenter mentioned using "white" examples from the book that might show white privilege, and the other used "white" examples to show that white-ness was posed as an extremely negative trait. That was a very primitive and unintentional use of deconstruction: showing that a work has the evidence and rationale for having one meaning/interpretation, but at the same time its opposite (or further pluralities). So any claim of a work/author being "post-utopian" would only partially be supported by deconstruction (by building a frame of mind and presenting textual/historical evidence of such a classification), but then be completely undermined by reverse interpretation(s) (work/author is "~post-utopian", or "utopian", or otherwise). Post-modernism and deconstruction actually fully agree, to my understanding, that such a classification is silly and possibly untenable, but they also go on to show why other interpretations face similar issues, and to show the merit available in the text for such a classification. As a deconstructionist (i.e. a
specific stream of post-modernism), one would object to any single-minded interpretation or classification of a text/author, and so most of the criticisms of post-modernism that develop from a critique of terms like "post-utopian" or "post-colonial" are actually stretching the criticism way beyond its bounds, and targeting a field whose critique of such terms actually runs parallel to the criticism itself. It's also important to remember that post-modernism/deconstruction was not just a literary movement but one that spans across several fields of thought. In philosophy, deconstruction is used to show how universal claims defeat themselves, and to bring forth opposing elements within any particular definition. It is actually an extremely useful tool of critical thought, and I have been continually surprised by how easily and consistently the majority of the community on this site dismisses it and the rest of philosophy/post-modernism as being useless or just silly language games. I hope to write an article in the future on the uses of tools like deconstruction in the rationality and bias reduction enterprises of this site.

Comment author: TimS 19 November 2012 08:06:13PM 4 points [-]

I hope to write an article in the future on the uses of tools like deconstruction in the rationality and bias reduction enterprises of this site.

Please do. (But . . . with paragraphs?)

Comment author: shminux 02 October 2012 06:20:54AM -2 points [-]

Thus, after the photon goes past a certain point, there are no experimental consequences whatsoever, ever, to the statement "The photon continues to exist, rather than blinking out of existence."

Probably because your definition of existence is no good. Try a better one.

Comment author: ArisKatsaris 02 October 2012 07:08:35AM 5 points [-]

That's an attempt to dismiss epistemic rationality by arguing that only instrumental rationality matters.

I suppose that's true by certain definitions of "matters", but it ignores those of us who do assign some utility to understanding the universe itself, and therefore at least partially incorporate the epistemic in the instrumental....

Also, if I die tomorrow of a heart attack, I think it's still meaningful to say that the rest of the planet will still exist afterwards, even though there won't exist any experimental prediction I can make and personally verify to that effect. I find solipsism rather uninteresting.

Comment author: V_V 03 October 2012 10:29:39AM *  2 points [-]

That's an attempt to dismiss epistemic rationality by arguing that only instrumental rationality matters.

No. Please note that the terminology here is overloaded, hence it can cause confusion.

Instrumentalism, in the context of epistemology, does not refer to instrumental rationality. It is the position that concepts are meaningful only to the extent that they are useful to explain and predict experiences.

In the instrumentalist framework, you start with an input of sensorial experiences and possibly an output of actions (you may even consider your long-term memories as a type of sensorial experience). You notice that your experiences show some regularities: they are correlated with each other and with your actions. Thus, you put forward, test, and falsify hypotheses in order to build a model that explains these regularities and helps you to predict your next experience.

In this framework, the notion that there are entities external to yourself is just a scientific hypothesis, not an assumption.

Epistemological realism, on the other hand, assumes a priori that there are external entities which cause your experiences; these are called "Reality" or "the Truth" or "Nature" or "the Territory".

Believing that abstract concepts, such as mathematical axioms and theorems, are also external entities is called Platonism. That is, for instance, the position of Roger Penrose and, IIUC, Eliezer Yudkowsky.

The distinction between assuming a priori that there is an external world and merely hypothesizing it may appear of little importance, and indeed for the most part it is possible to do science in both frameworks. However, the difference shows up in intricate issues which are far removed from intuition, such as the interpretation of quantum mechanics:

Does the wavefunction exist? For an instrumentalist, the wavefunction exists in the same sense that the ground beneath their feet exists: they are both hypotheses useful for predicting sensorial experiences. For a realist, instead, it makes sense to ponder whether the wavefunction is just in the map or also in the territory.

Comment author: purge 13 January 2013 08:36:58AM 1 point [-]

Beliefs should pay rent, check. Arguments about truth are not just a matter of asserting privilege, check. And yet... when we do have floating beliefs, then our arguments about truth are largely a matter of asserting privilege. I missed that connection at first.

Comment author: folkTheory 16 October 2012 04:12:36PM 1 point [-]

I don't understand the part about post-utopianism being meaningless. If people agree on what the term means, and they can read a book and detect (or not) colonial alienation, and thus have a test for post-utopianism, and different people will reach the same conclusions about any given book, then how exactly is the term meaningless?

Comment author: fortyeridania 18 October 2012 03:09:45PM 3 points [-]

I think "postmodernism," "colonial alienation," and "post-utopianism" are all meant to be blanks, which we're supposed to fill in with whatever meaningless term seems appropriate.

But I share your uneasiness about using these terms. First, I don't know enough about postmodernism to judge whether it's a field filled with empty phrases. (Yudkowsky seems to take the Sokal affair as a case-closed demonstration of the vacuousness of postmodernism. However, it is less impressive than it may seem at first. The way the scandal is presented by some "science-types" -- as an "emperor's new clothes" story, with pretentious, obfuscationist academics in the role of the court sycophants -- does not hold up well after reading the Wikipedia article. The editors of Social Text failed to adhere to appropriate standards of rigor, but it's not like they took one look at Sokal's manuscript and were floored by its pseudo-brilliance.)

Second, I suspect there aren't any clear-cut examples of meaningless claims out there that actually have any currency. (I only suspect this; I'm not certain. Some things seem meaningless to me; however, that could be just because I'm an outsider.)

Counterexamples?

Comment author: thomblake 16 October 2012 04:32:59PM *  1 point [-]

If people agree on what the term means, and they can read a book and detect (or not) colonial alienation, and thus have a test for post-utopianism, and different people will reach the same conclusions about any given book

By hypothesis, none of those things are true. If those things happen to be true for "post-utopianism" in the real world, substitute a different word that people use inconsistently and doesn't refer to anything useful.

Comment author: Sewing-Machine 04 October 2012 03:11:19AM 1 point [-]

Thus, after the photon goes past a certain point, there are no experimental consequences whatsoever, ever, to the statement "The photon continues to exist, rather than blinking out of existence."

One shouldn't form theories about a particular photon. The statements "photons in general continue to exist after crossing the cosmological horizon" and "photons in general blink out of existence when they cross the cosmological horizon" have distinct testable consequences, if you have a little freedom of motion.

Comment author: chaosmosis 03 October 2012 01:13:47AM *  1 point [-]

Here's my map of my map with respect to the concept of truth.

Level Zero: I don't know. I wouldn't even be investigating these concepts about truth unless on some level I had some form of doubt about them. The only reason I think I know anything is because I assume it's possible for me to know anything. Maybe all of my priors are horribly messed up with respect to whatever else they potentially should be. Maybe my entire brain is horribly broken and all of my intuitive notions about reality and probability and logic and consistency are meaningless. There's no way for me to tell.

Level One: I know nothing. The problem of induction is insurmountable.

Level Two: I want to know something, or at least to believe. Abstract truths outside the content of my experience are meaningless. I don't care about whether or not induction is necessarily a valid form of logic; I only care whether or not it will work in the context of my future experiences. I don't care whether or not my priors are valid; they're my priors all the same. On this level I refuse to reject the validity of any viewpoint if that viewpoint is authentic, although I still abide only by my own internalized views. My fundamental values are just a fact, and they reject the idea that there is no truth, despite whatever my brain might say. Ironically, irrational processes are at the root of my beliefs about rationality and reality.

Level Three: My level three seems to be Eliezer's level zero. The world consistently works by certain fundamental laws which can be used to make predictions. The laws of this universe can be investigated through the use of my intuitions about logic and the way reality should work. I spend most of my time on this level, but I think that the existence of the other levels is significant because those levels shape the way I understand epistemology and my ability to understand other perspectives.

Level Four: There are certain things which it is good to proclaim to be true, or to fool oneself into believing are true. Some of these things actually are true, and some are actually false. But in the case of self deception, the recognition that some of these things are actually false must be avoided. The self deception aspect of this level of truth does not come into play very often for me, except in some specific hypothetical circumstances.