
The Useful Idea of Truth

77 points | Post author: Eliezer_Yudkowsky | 02 October 2012 06:16PM

(This is the first post of a new Sequence, Highly Advanced Epistemology 101 for Beginners, setting up the Sequence Open Problems in Friendly AI.  For experienced readers, this first post may seem somewhat elementary; but it serves as a basis for what follows.  And though it may be conventional in standard philosophy, the world at large does not know it, and it is useful to know a compact explanation.  Kudos to Alex Altair for helping in the production and editing of this post and Sequence!)


I remember this paper I wrote on existentialism. My teacher gave it back with an F. She’d underlined true and truth wherever it appeared in the essay, probably about twenty times, with a question mark beside each. She wanted to know what I meant by truth.
-- Danielle Egan

I understand what it means for a hypothesis to be elegant, or falsifiable, or compatible with the evidence. It sounds to me like calling a belief ‘true’ or ‘real’ or ‘actual’ is merely the difference between saying you believe something, and saying you really really believe something.
-- Dale Carrico

What then is truth? A movable host of metaphors, metonymies, and anthropomorphisms: in short, a sum of human relations which have been poetically and rhetorically intensified, transferred, and embellished, and which, after long usage, seem to a people to be fixed, canonical, and binding.
-- Friedrich Nietzsche


The Sally-Anne False-Belief task is an experiment used to tell whether a child understands the difference between belief and reality. It goes as follows:

  1. The child sees Sally hide a marble inside a covered basket, as Anne looks on.

  2. Sally leaves the room, and Anne takes the marble out of the basket and hides it inside a lidded box.

  3. Anne leaves the room, and Sally returns.

  4. The experimenter asks the child where Sally will look for her marble.

Children under the age of four say that Sally will look for her marble inside the box. Children over the age of four say that Sally will look for her marble inside the basket.

(Attributed to: Baron-Cohen, S., Leslie, A. M. and Frith, U. (1985) ‘Does the autistic child have a “theory of mind”?’, Cognition, vol. 21, pp. 37–46.)

Human children over the age of (typically) four first begin to understand what it means for Sally to lose her marbles - for Sally's beliefs to stop corresponding to reality. A three-year-old has a model only of where the marble is. A four-year-old is developing a theory of mind; they separately model where the marble is and where Sally believes the marble is, so they can notice when the two conflict - when Sally has a false belief.

Any meaningful belief has a truth-condition, some way reality can be which can make that belief true, or alternatively false. If Sally's brain holds a mental image of a marble inside the basket, then, in reality itself, the marble can actually be inside the basket - in which case Sally's belief is called 'true', since reality falls inside its truth-condition. Or alternatively, Anne may have taken out the marble and hidden it in the box, in which case Sally's belief is termed 'false', since reality falls outside the belief's truth-condition.

The mathematician Alfred Tarski once described the notion of 'truth' via an infinite family of truth-conditions:

  • The sentence 'snow is white' is true if and only if snow is white.

  • The sentence 'the sky is blue' is true if and only if the sky is blue.

When you write it out that way, it looks like the distinction might be trivial - indeed, why bother talking about sentences at all, if the sentence looks so much like reality when both are written out as English?

But when we go back to the Sally-Anne task, the difference looks much clearer: Sally's belief is embodied in a pattern of neurons and neural firings inside Sally's brain, three pounds of wet and extremely complicated tissue inside Sally's skull. The marble itself is a small simple plastic sphere, moving between the basket and the box. When we compare Sally's belief to the marble, we are comparing two quite different things.

(Then why talk about these abstract 'sentences' instead of just neurally embodied beliefs? Maybe Sally and Fred believe "the same thing", i.e., their brains both have internal models of the marble inside the basket - two brain-bound beliefs with the same truth condition - in which case the thing these two beliefs have in common, the shared truth condition, is abstracted into the form of a sentence or proposition that we imagine being true or false apart from any brains that believe it.)
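To make the notion of a truth-condition concrete, here is a minimal sketch in Python - an invented illustration, not anything from Tarski's formal work - that treats a proposition as a predicate over possible world-states, so that two brains can "believe the same thing" by sharing one predicate. All the names in it are made up for the example:

```python
# Invented illustration: a proposition as a truth-condition, i.e. a
# predicate over possible world-states.

def marble_in_basket(world):
    """Truth-condition shared by Sally's and Fred's beliefs."""
    return world["marble_location"] == "basket"

sallys_belief = marble_in_basket  # two brain-bound beliefs with the
freds_belief = marble_in_basket   # same truth-condition

# Reality is just another world-state; here Anne has moved the marble:
reality = {"marble_location": "box"}

# A belief is called 'true' iff reality falls inside its truth-condition:
print(sallys_belief(reality))  # False - Sally's belief is false
```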

Some pundits have panicked over the point that any judgment of truth - any comparison of belief to reality - takes place inside some particular person's mind; and indeed seems to just compare someone else's belief to your belief:

So is all this talk of truth just comparing other people's beliefs to our own beliefs, and trying to assert privilege? Is the word 'truth' just a weapon in a power struggle?

For that matter, you can't even directly compare other people's beliefs to your own beliefs. You can only internally compare your beliefs about someone else's belief to your own belief - compare your map of their map to your map of the territory.

Similarly, to say of one of your own beliefs that it is 'true' just means you're comparing your map of your map to your map of the territory. People are usually not mistaken about what they themselves believe - though there are certain exceptions to this rule - so the map of the map is usually accurate, i.e., people are usually right about what they believe:

And so saying 'I believe the sky is blue, and that's true!' typically conveys the same information as 'I believe the sky is blue' or just saying 'The sky is blue' - namely, that your mental model of the world contains a blue sky.

Meditation:

If the above is true, aren't the postmodernists right? Isn't all this talk of 'truth' just an attempt to assert the privilege of your own beliefs over others, when there's nothing that can actually compare a belief to reality itself, outside of anyone's head?

(A 'meditation' is a puzzle that the reader is meant to attempt to solve before continuing. It's my somewhat awkward attempt to reflect the research which shows that you're much more likely to remember a fact or solution if you try to solve the problem yourself before reading the solution; succeed or fail, the important thing is to have tried first. This also reflects a problem Michael Vassar thinks is occurring, which is that since LW posts often sound obvious in retrospect, it's hard for people to visualize the diff between 'before' and 'after'; and this diff is also useful to have for learning purposes. So please try to say your own answer to the meditation - ideally whispering it to yourself, or moving your lips as you pretend to say it, so as to make sure it's fully explicit and available for memory - before continuing; and try to consciously note the difference between your reply and the post's reply, including any extra details present or missing, without trying to minimize or maximize the difference.)

...
...
...

Reply:

The reply I gave to Dale Carrico - who declaimed to me that he knew what it meant for a belief to be falsifiable, but not what it meant for beliefs to be true - was that my beliefs determine my experimental predictions, but only reality gets to determine my experimental results. If I believe very strongly that I can fly, then this belief may lead me to step off a cliff, expecting to be safe; but only the truth of this belief can possibly save me from plummeting to the ground and ending my experiences with a splat.

Since my expectations sometimes conflict with my subsequent experiences, I need different names for the thingies that determine my experimental predictions and the thingy that determines my experimental results. I call the former thingies 'beliefs', and the latter thingy 'reality'.

You won't get a direct collision between belief and reality - or between someone else's beliefs and reality - by sitting in your living-room with your eyes closed. But the situation is different if you open your eyes!

Consider how your brain ends up knowing that its shoelaces are untied:

  • A photon departs from the Sun, and flies to the Earth and through Earth's atmosphere.
  • Your shoelace absorbs and re-emits the photon.
  • The reflected photon passes through your eye's pupil and toward your retina.
  • The photon strikes a rod cell or cone cell, or to be more precise, it strikes a photoreceptor, a form of vitamin-A known as retinal, which undergoes a change in its molecular shape (rotating around a double bond) powered by absorption of the photon's energy. A bound protein called an opsin undergoes a conformational change in response, and this further propagates to a neural cell body which pumps a proton and increases its polarization.
  • The gradual polarization change is propagated to a bipolar cell and then a ganglion cell. If the ganglion cell's polarization goes over a threshold, it sends out a nerve impulse, a propagating electrochemical phenomenon of polarization-depolarization that travels through the brain at between 1 and 100 meters per second. Now the incoming light from the outside world has been transduced to neural information, commensurate with the substrate of other thoughts.
  • The neural signal is preprocessed by other neurons in the retina, further preprocessed by the lateral geniculate nucleus in the middle of the brain, and then, in the visual cortex located at the back of your head, reconstructed into an actual little tiny picture of the surrounding world - a picture embodied in the firing frequencies of the neurons making up the visual field. (A distorted picture, since the center of the visual field is processed in much greater detail - i.e. spread across more neurons and more cortical area - than the edges.)
  • Information from the visual cortex is then routed to the temporal lobes, which handle object recognition.
  • Your brain recognizes the form of an untied shoelace.

And so your brain updates its map of the world to include the fact that your shoelaces are untied. Even if, previously, it expected them to be tied!  There's no reason for your brain not to update if politics aren't involved. Once photons heading into the eye are turned into neural firings, they're commensurate with other mind-information and can be compared to previous beliefs.

Belief and reality interact all the time. If the environment and the brain never touched in any way, we wouldn't need eyes - or hands - and the brain could afford to be a whole lot simpler. In fact, organisms wouldn't need brains at all.

So, fine, belief and reality are distinct entities which do intersect and interact. But to say that we need separate concepts for 'beliefs' and 'reality' doesn't get us to needing the concept of 'truth', a comparison between them. Maybe we can just separately (a) talk about an agent's belief that the sky is blue and (b) talk about the sky itself. Instead of saying, "Jane believes the sky is blue, and she's right", we could say, "Jane believes 'the sky is blue'; also, the sky is blue" and convey the same information about what (a) we believe about the sky and (b) what we believe Jane believes. We could always apply Tarski's schema - "The sentence 'X' is true iff X" - and replace every instance of alleged truth by talking directly about the truth-condition, the corresponding state of reality (i.e. the sky or whatever). Thus we could eliminate that bothersome word, 'truth', which is so controversial to philosophers, and misused by various annoying people.

Suppose you had a rational agent, or for concreteness, an Artificial Intelligence, which was carrying out its work in isolation and certainly never needed to argue politics with anyone. The AI knows that "My model assigns 90% probability that the sky is blue"; it is quite sure that this probability is the exact statement stored in its RAM. Separately, the AI models that "The probability that my optical sensors will detect blue out the window is 99%, given that the sky is blue"; and it doesn't confuse this proposition with the quite different proposition that the optical sensors will detect blue whenever it believes the sky is blue. So the AI can definitely differentiate the map and the territory; it knows that the possible states of its RAM storage do not have the same consequences and causal powers as the possible states of sky.
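A toy sketch of the distinction this AI is drawing may help. The probabilities come from the paragraph above; the variable names and data layout are invented for illustration:

```python
# Sketch: the agent's map is a state of its memory, which has different
# consequences and causal powers than the state of the sky itself.

agent_ram = {
    "P(sky is blue)": 0.90,                      # a statement stored in RAM
    "P(sensor reads blue | sky is blue)": 0.99,  # a belief about evidence
}

sky_color = "blue"  # the territory

# Editing the map does not repaint the territory:
agent_ram["P(sky is blue)"] = 0.10
assert sky_color == "blue"  # the sky is unmoved by the change in RAM
```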

But does this AI ever need a concept for the notion of truth in general - does it ever need to invent the word 'truth'? Why would it work better if it did?

Meditation: If we were dealing with an Artificial Intelligence that never had to argue politics with anyone, would it ever need a word or a concept for 'truth'?

...
...
...

Reply: The abstract concept of 'truth' - the general idea of a map-territory correspondence - is required to express ideas such as:

  • Generalized across possible maps and possible cities, if your map of a city is accurate, navigating according to that map is more likely to get you to the airport on time.

  • To draw a true map of a city, someone has to go out and look at the buildings; there's no way you'd end up with an accurate map by sitting in your living-room with your eyes closed trying to imagine what you wish the city would look like.

  • True beliefs are more likely than false beliefs to make correct experimental predictions, so if we increase our credence in hypotheses that make correct experimental predictions, our model of reality should become incrementally more true over time.

This is the main benefit of talking and thinking about 'truth' - that we can generalize rules about how to make maps match territories in general; we can learn lessons that transfer beyond particular skies being blue.
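The third bullet above is ordinary Bayesian updating. Here is a minimal sketch under invented toy assumptions - a biased coin stands in for reality, and two hypotheses stand in for the agent's model - showing credence concentrating on whichever hypothesis keeps correctly predicting the experimental results:

```python
# Toy Bayesian agent: reality determines the results; the agent raises
# its credence in hypotheses that predicted those results correctly.
import random

random.seed(0)
true_p = 0.8                       # the territory: the coin's actual bias
hypotheses = {0.8: 0.5, 0.2: 0.5}  # the map: prior credence per hypothesis

for _ in range(100):
    heads = random.random() < true_p          # reality determines the result
    for p in hypotheses:
        likelihood = p if heads else 1 - p    # each hypothesis's prediction
        hypotheses[p] *= likelihood
    total = sum(hypotheses.values())
    for p in hypotheses:                      # renormalize (Bayes' theorem)
        hypotheses[p] /= total

print(hypotheses)  # nearly all credence ends up on the true bias, 0.8
```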


Next in main sequence:

Complete philosophical panic has turned out not to be justified (it never is). But there is a key practical problem that results from our internal evaluation of 'truth' being a comparison of a map of a map, to a map of reality: On this schema it is very easy for the brain to end up believing that a completely meaningless statement is 'true'.

Some literature professor lectures that the famous authors Carol, Danny, and Elaine are all 'post-utopians', which you can tell because their writings exhibit signs of 'colonial alienation'. For most college students, the result will be that their brain's version of an object-attribute list assigns the attribute 'post-utopian' to the authors Carol, Danny, and Elaine. When the subsequent test asks for "an example of a post-utopian author", the student will write down "Elaine". What if the student writes down, "I think Elaine is not a post-utopian"? Then the professor models thusly...

...and marks the answer false.

After all...

  • The sentence "Elaine is a post-utopian" is true if and only if Elaine is a post-utopian.

...right?

Now of course it could be that this term does mean something (even though I made it up).  It might even be that, although the professor can't give a good explicit answer to "What is post-utopianism, anyway?", you can nonetheless take many literary professors and separately show them new pieces of writing by unknown authors and they'll all independently arrive at the same answer, in which case they're clearly detecting some sensory-visible feature of the writing.  We don't always know how our brains work, and we don't always know what we see, and the sky was seen as blue long before the word "blue" was invented; for a part of your brain's world-model to be meaningful doesn't require that you can explain it in words.

On the other hand, it could also be the case that the professor learned about "colonial alienation" by memorizing what to say to his professor.  It could be that the only person whose brain assigned a real meaning to the word is dead.  So that by the time the students are learning that "post-utopian" is the password when hit with the query "colonial alienation?", both phrases are just verbal responses to be rehearsed, nothing but an answer on a test.

The two phrases don't feel "disconnected" individually because they're connected to each other - post-utopianism has the apparent consequence of colonial alienation, and if you ask what colonial alienation implies, it means the author is probably a post-utopian.  But if you draw a circle around both phrases, they don't connect to anything else.  They're floating beliefs not connected with the rest of the model. And yet there's no internal alarm that goes off when this happens. Just as "being wrong feels like being right" - just as having a false belief feels the same internally as having a true belief, at least until you run an experiment - having a meaningless belief can feel just like having a meaningful belief.
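One way to picture this - a toy illustration, not a claim about how brains actually store beliefs - is to treat beliefs as nodes in a graph and test whether any chain of connections from a belief reaches an anticipated sensory experience. The two quoted phrases reach only each other:

```python
# Invented sketch: a belief is 'anchored' iff some chain of connections
# reaches an anticipated sensory experience; otherwise it floats.

beliefs = {
    "Elaine is a post-utopian": ["Elaine's writing shows colonial alienation"],
    "Elaine's writing shows colonial alienation": ["Elaine is a post-utopian"],
    "the sky is blue": ["expect to see blue when looking up"],
    "expect to see blue when looking up": [],
}
sensory_anticipations = {"expect to see blue when looking up"}

def is_anchored(belief, seen=None):
    """Depth-first search from a belief toward sensory anticipations."""
    seen = set() if seen is None else seen
    if belief in sensory_anticipations:
        return True
    seen.add(belief)
    return any(is_anchored(b, seen) for b in beliefs[belief] if b not in seen)

print(is_anchored("Elaine is a post-utopian"))  # False: connected only to its twin
print(is_anchored("the sky is blue"))           # True: reaches an anticipation
```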

(You can even have fights over completely meaningless beliefs.  If someone says "Is Elaine a post-utopian?" and one group shouts "Yes!" and the other group shouts "No!", they can fight over having shouted different things; it's not necessary for the words to mean anything for the battle to get started.  Heck, you could have a battle over one group shouting "Mun!" and the other shouting "Fleem!"  More generally, it's important to distinguish the visible consequences of the professor-brain's quoted belief (students had better write down a certain thing on his test, or they'll be marked wrong) from the proposition that there's an unquoted state of reality (Elaine actually being a post-utopian in the territory) which has visible consequences.)

One classic response to this problem was verificationism, which held that the sentence "Elaine is a post-utopian" is meaningless if it doesn't tell us which sensory experiences we should expect to see if the sentence is true, and how those experiences differ from the case if the sentence is false.

But then suppose that I transmit a photon aimed at the void between galaxies - heading far off into space, away into the night. In an expanding universe, this photon will eventually cross the cosmological horizon where, even if the photon hit a mirror reflecting it squarely back toward Earth, the photon would never get here because the universe would expand too fast in the meanwhile. Thus, after the photon goes past a certain point, there are no experimental consequences whatsoever, ever, to the statement "The photon continues to exist, rather than blinking out of existence."

And yet it seems to me - and I hope to you as well - that the statement "The photon suddenly blinks out of existence as soon as we can't see it, violating Conservation of Energy and behaving unlike all photons we can actually see" is false, while the statement "The photon continues to exist, heading off to nowhere" is true. And this sort of question can have important policy consequences: suppose we were thinking of sending off a near-light-speed colonization vessel as far away as possible, so that it would be over the cosmological horizon before it slowed down to colonize some distant supercluster. If we thought the colonization ship would just blink out of existence before it arrived, we wouldn't bother sending it.

It is both useful and wise to ask after the sensory consequences of our beliefs. But it's not quite the fundamental definition of meaningful statements. It's an excellent hint that something might be a disconnected 'floating belief', but it's not a hard-and-fast rule.

You might next try the answer that for a statement to be meaningful, there must be some way reality can be which makes the statement true or false; and that since the universe is made of atoms, there must be some way to arrange the atoms in the universe that would make a statement true or false. E.g. to make the statement "I am in Paris" true, we would have to move the atoms composing me to Paris. A littérateur claims that Elaine has an attribute called post-utopianism, but there's no way to translate this claim into a way to arrange the atoms in the universe so as to make the claim true, or alternatively false; so it has no truth-condition, and must be meaningless.

Indeed there are claims where, if you pause and ask, "How could a universe be arranged so as to make this claim true, or alternatively false?", you'll suddenly realize that you didn't have as strong a grasp on the claim's truth-condition as you believed. "Suffering builds character", say, or "All depressions result from bad monetary policy." These claims aren't necessarily meaningless, but they're a lot easier to say than to visualize the universe that makes them true or false. Just as asking after sensory consequences is an important hint to meaning or meaninglessness, so is asking how to configure the universe.

But if you say there has to be some arrangement of atoms that makes a meaningful claim true or false...

Then the theory of quantum mechanics would be meaningless a priori, because there's no way to arrange atoms to make the theory of quantum mechanics true.

And when we discovered that the universe was not made of atoms, but rather quantum fields, all meaningful statements everywhere would have been revealed as false - since there'd be no atoms arranged to fulfill their truth-conditions.

Meditation: What rule could restrict our beliefs to just propositions that can be meaningful, without excluding a priori anything that could in principle be true?


  • Meditation Answers - (A central comment for readers who want to try answering the above meditation (before reading whatever post in the Sequence answers it) or read contributed answers.)
  • Mainstream Status - (A central comment where I say what I think the status of the post is relative to mainstream modern epistemology or other fields, and people can post summaries or excerpts of any papers they think are relevant.)

 

Part of the sequence Highly Advanced Epistemology 101 for Beginners

Next post: "Skill: The Map is Not the Territory"

Comments (515)

Comment author: ArthurRainbow 11 July 2016 08:48:41AM 0 points [-]

The first image of this post is broken

Comment author: topynate 08 January 2016 05:28:15AM 1 point [-]

The first image is a dead hotlink. It's in the internet archive and I've uploaded it to imgur.

Comment author: Gram_Stone 02 July 2015 11:10:36AM 0 points [-]

Maybe it's just me, but the first image is broken.

Comment author: Daemon 16 September 2013 09:50:04PM *  2 points [-]

Fine, Eliezer, as someone who would really like to think/believe that there's Ultimate Truth (not based in perception) to be found, I'll bite.

I don't think you are steelmanning post-modernists in your post. Suppose I am a member of a cult X -- we believe that we can leap off of Everest and fly/not die. You and I watch my fellow cult-member jump off a cliff. You see him smash himself dead. I am so deluded ("deluded") that all I see is my friend soaring in the sky. You, within your system, evaluate me as crazy. I might think the same of you.

You might think that the example is overblown and this doesn't actually happen, but I've had discussions (mostly religious) in which other people and I would look at the same set of facts and see radically, radically different things. I'm sure you've been in such situations too. It's just that I don't find it comforting to dismiss such people as 'crazy/flawed/etc.' when they can easily do the same to me in their minds/groups, putting us in equivalent positions -- the other person is wrong within our own system of reference (which each side declares to be 'true' in describing reality) and doesn't understand it.

I think this ties in with http://lesswrong.com/lw/rn/no_universally_compelling_arguments/ .

Now, I'm not trying to be ridiculous or troll. I really, really want to think that there's one truth and that rationality -- and not some other method -- is the way to get to it. But at the very fundamental level (see http://lesswrong.com/lw/s0/where_recursive_justification_hits_bottom/ ), it seems like a choice between picking from various axioms.

I wish the arguments you presented here convinced me, I really do. But they haven't, and I have no way of knowing that I'm not in some matrix-simulated world where everything is, really, based on how my perception was programmed. How does this work for you -- do you just start off with the assumption that there is truth, and go from there? At some fundamental level, don't you believe that your perception just... works and describes reality 'correctly,' after adjusting for all the biases? Please convince me to pick this route, I'd rather take it, instead of waiting for a philosopher of perfect emptiness to present a way to view the world without any assumptions.

(I understand that 'everything is relative to my perception' gets you pretty much nowhere in reality. It's just that I don't have a way to perfectly counter that, and it bothers me. And if I did find all of your arguments persuasive, I would be concerned if that's just an artifact of how my brain is wired [crudely speaking] -- while some other person can read a religious text and, similarly, find it compelling/non-contradictory/'makes-sense-ey' so that the axioms this person would use wouldn't require explanation [because of that other person's nature/nurture]).

If I slipped somewhere myself, please steelman my argument in responding!

Comment author: duckduckMOO 21 July 2014 01:38:11AM *  0 points [-]

The downvotes and no reply are a pretty good example of what's wrong with Less Wrong. Someone who is genuinely confused should not be shooed away and then insulted when they ask again.

First of all remember to do and be what's best. If this doubt is engendering good attitudes in you, why not keep it? The rest of this is premised on it not helping or being unhelpful.

External reality is much more likely than being part of a simulation which adjusts itself to your beliefs because a simulation which adjusts itself to your beliefs is way, way more complicated. It requires more assumptions than a single level reality. If there's a programmer of your reality, that programmer has a reality too, which needs to be explained in the same way a single level one should as does their ability to program such a lifelike entity and all sorts of other things.

More fundamentally though, this is just the reality you live in, whatever its position in a potential reality chain.

If we are being simulated, trying to metagame potential matrix lords' dispositions/ask for favours/look for loopholes/care less about its contents is only a bug of human cognition. If this is a simulation, it is inhabited by at least me, and almost certainly many other people, and there are real consequences for all of us. If you don't earn your simulation rent you'll get kicked out of your simulation place. Qualify everything with "potentially simulated-" and it changes nothing. "Real" just isn't a useful (and so, important) distinction to make in the first person regarding simulations.

And/or you could short-circuit any debilitating doubt using fighting games or sports (or engaging in other similar activities) which illustrate the potential importance of leaning all in towards the evidence without worrying about the nature of things, and are a good way to train that habit.

Also, in this potentially simulated world, social pressure is a real thing. The more infallible and sensitive you make your thinking (or allow it to be) the more prone it is to interference from people who want to disrupt you, unless you're willing to cut yourself off from people to some extent. When someone gives you an idiotic objection (and there are a lot of those here), the more nuanced your own view actually is the harder it will be to explain and the less likely people will listen fairly. You could just say whatever you think is going to influence them best but that adds a layer of complexity and is another tradeoff. If you're not going to try to be a "philosopher of perfect emptiness", taking external reality as an assumption is the most reliable way to work with your human mind, and not confuse it: how are you supposed to act if there are matrix lords? There's nothing to go on, so any leaning such beliefs (beliefs which shouldn't change your approaches or attitudes) prompt is bound to be a bias.

Comment author: Daemon 17 September 2013 12:16:44AM 0 points [-]

If this wasn't clear: responses would be much more helpful than up/down votes.

Comment author: notsonewuser 21 September 2013 04:14:30AM 1 point [-]

I downvoted your comment because it was unclear to me what your point was. It seems to me that it lacks a single, precise focus.

Comment author: notsonewuser 27 June 2013 05:16:04PM *  0 points [-]

The first image in this post does not show up anymore. The URL in the source code, http://labspace.open.ac.uk/file.php/4771/DSE232_1_004i.jpg , needs to be replaced by http://labspace.open.ac.uk/file.php/8398/DSE232_1_004i.jpg . However, perhaps it would be best to host somewhere other than labspace.open.ac.uk, if they will continue to frequently reorganize their files.

(Feel free to delete this comment when the issue is fixed.)

Comment author: purge 13 January 2013 08:36:58AM 2 points [-]

Beliefs should pay rent, check. Arguments about truth are not just a matter of asserting privilege, check. And yet... when we do have floating beliefs, then our arguments about truth are largely a matter of asserting privilege. I missed that connection at first.

Comment author: living_philosophy 19 November 2012 07:57:36PM 2 points [-]

As a graduate philosophy student, who went to liberal arts schools, and studied mostly continental philosophy with lots of influence from post-modernism, we can infer from the comments and articles on this site that I must be a complete idiot that spouts meaningless jargon and calls it rational discussion. Thanks for the warm welcome ;) Let us hope I can be another example for which we can dismiss entire fields and intellectuals as being unfit for "true" rationality. /friendly-jest.

Now my understanding may be limited, having actually studied post-modern thought, but the majority of the critiques of post-modernism I have read in these comments seem to completely miss key tenets and techniques in the field. The primary one being deconstruction, which in literature interpretation actually challenges ALL genres of classification for works, and single-minded interpretations of meaning or intent. An example actually happened in this comment section when people were discussing Moby Dick and the possibility of pulling out racial influences and undertones. One commenter mentioned using "white" examples from the book that might show white privilege, and the other used "white" examples to show that white-ness was posed as an extremely negative trait. That was a very primitive and unintentional use of deconstruction; showing that a work has the evidence and rationale for having one meaning/interpretation, but at the same time its opposite (or further pluralities). So any claim of a work/author being "post-utopian" would only partially be supported by deconstruction (by building a frame of mind and presenting textual/historical evidence of such a classification), but then be completely undermined by reverse interpretation(s) (work/author is "~post-utopian", or "utopian", or otherwise). Post-modernism and deconstruction actually fully agree, to my understanding, that such a classification is silly and possibly untenable, but also go on to show why other interpretations face similar issues, and to show the merit available in the text for such a classification. As a deconstructionist (i.e. specific stream of post-modernism), one would object to any single-minded interpretation or classification of a text/author, and so most of the criticisms of post-modernism that develop from a critique of terms like "post-utopian" or "post-colonial" are actually stretching the criticism way beyond its bounds, and targeting a field whose critique of such terms actually runs parallel to the criticism itself. It's also important to remember that post-modernism/deconstruction was not just a literary movement but one that spans across several fields of thought. In philosophy deconstruction is used to self-defeat universal claims, and bring forth opposing elements within any particular definition. It is actually an extremely useful tool of critical thought, and I have been continually surprised by how easily and consistently the majority of the community on this site dismiss it and the rest of philosophy/post-modernism as being useless or just silly language games. I hope to write an article in the future on the uses of tools like deconstruction in the rationality and bias reduction enterprises of this site.

Comment author: almkglor 15 December 2012 01:02:39AM *  0 points [-]

I proffer the following quotes rather than an entire article (I think the major problem with post-modernism isn't irrationality, but verbosity. JUST LOOK AT YOURSELF):

"For the sake of sanity, use ET CETERA: When you say 'Mary is a good girl!' be aware that Mary is much more than 'good'. Mary is 'good', nice, kind, et cetera, meaning she also has other characteristics." - A.E. Van Vogt, World of Null-A

"For the sake of sanity, use QUOTATIONS: For instance 'conscious' and 'unconscious' mind are useful descriptive terms, but it has yet to be proved that the terms themselves accurately reflect the 'process' level of events. They are maps of a territory about which we can possibly never have exact information. Since Null-A training is for the individuals, the important thing is to be conscious of the 'multiordinal' -that is the many valued- meaning of the words one hears or speaks." - A.E. Van Vogt, World of Null-A

Comment author: living_philosophy 24 February 2013 06:56:14AM *  -2 points [-]

Ya, I can see that criticism. Here's a shorter version for you: arguing against post-modernism by arguing against the use of a different term (post-colonial, or even worse the made-up post-utopian) is a complete straw-man and fallacious argumentation. It also makes the OP and commenters look exceptionally naive when the thing they argue against (post-modernism) would actually agree with their point (critiquing literary genres), and preempted them in making it (thus the discussion of deconstruction above).

Also, thanks for the quotes :) And remember, being overly verbose is a critique of communication, not of the rationality of a position or method. SELF-EXAMINATION & MODIFICATION COMPLETE

Comment author: TimS 19 November 2012 08:06:13PM 4 points [-]

I hope to write an article in the future on the uses of tools like deconstruction in the rationality and bias reduction enterprises of this site.

Please do. (But . . . with paragraphs?)

Comment author: BobTheBob 13 November 2012 04:30:51AM 1 point [-]

A criticism - somewhat harsh but hopefully constructive.

As you know, lots of people have written on the subjects of truth and meaning (aside from Tarski). It seems, however, that you don't accord them much importance (no references, failure to consider alternate points of view, apparent lack of awareness of the significance of the matter of what the bearer of truth (sentence, proposition, 'neurally embodied belief') properly is, etc.). I put it to you this is a manifestation of irrationality: you have a known means at your disposal to learn reliably about a subject which is plainly important to you, but you apparently reject it in favour of the more personally satisfying but much less reliable alternative of blogging your own ideas - you willingly choose an inferior path to belief formation. If you want to get a good understanding of such things as truth, reference and mathematical proof, I submit that the rational starting point is to read at least a survey of what experts in the fields have written, and to develop your own thoughts, at least initially, in the context they provide.

Comment author: Eliezer_Yudkowsky 13 November 2012 05:30:22AM 4 points [-]

Give me an example of a specific thing relevant to constructing an AI which I should have referenced, plus the role it plays in a (self-modifying) AI. Keep in mind that I only care about constructing self-modifying AIs and not about "what is the bearer of truth".

I've read works-not-referenced on "meaning", they just don't seem relevant to anything I care about. Though obviously there's quite a lot of standard work on mathematical proof that I care about (some small amount of which I've referenced).

Comment author: Peterdjones 16 November 2012 10:20:18AM 0 points [-]

You mention that an AI might need a cross-domain notion of truth, or might realise that truth applies across domains. Michael Lynch's functionalist theory of truth, mentioned elsewhere on this page, is such a theory.

Comment author: BobTheBob 14 November 2012 03:46:37AM 1 point [-]

1) I don't see that this really engages the criticism. I take it you reject that the subjects of truth and reference are important to you. On this, two thoughts:

a) This doesn't affect the point about the reliability of blogging versus research. The significance of the irrationality maybe, but the point remains. You may hold that the value to you of the creative process of explicating your own thoughts is sufficiently high that it trumps the value of coming to optimally informed beliefs - that the cost-benefit analysis favours blogging. I am sceptical of this, but would be interested to hear the case.

b) It seems just false that you don't care about these subjects. You've written repeatedly on them, and seem to be aiming for an internally coherent epistemology and semantics.

2) My claim was that your lack of references is evidence that you don't accord importance to experts on truth and meaning, not that there are specific things you should be referencing. That said, if your claim is ultimately just the observation that truth is useful as a device for so-called semantic ascent, you might mention Quine (see the relevant section of Word and Object or the discussion in Pursuit of Truth) or the opening pages of Paul Horwich's book Truth, to give just two examples.

3) My own view is that AI should have nothing to do with truth, meaning, belief or rationality - that AI theory should be elaborated entirely in terms of pattern matching and generation, and that philosophy (and likewise decision theory) should be close to irrelevant to it. You seem to think you need to do some philosophy (else why these posts?), but not too much (you don't have to decide whether the sorts of things properly called 'true' are sentences, abstract propositions or neural states, or all or none of the above). Where the line lies and why is not clear to me.

Comment author: chaosmosis 14 November 2012 05:27:22AM 0 points [-]

Your comment carries the assumption that studying the work of experts makes you better at understanding epistemology, and I'm not sure why you think that. Much of philosophy has a poor understanding of epistemology, in my mind. Can you explain why you think reading the work of experts is important for having worthwhile thoughts on epistemology?

Comment author: BobTheBob 15 November 2012 02:01:04PM 5 points [-]

This seems to me a reasonable question (at least partly - see below). To be clear, I said that reading the work of experts is more likely to produce a good understanding than merely writing-up one's own thoughts. My answer:

For any given field, reading the thoughts of experts - i.e., smart people who have devoted substantial time and effort to thinking and collaborating in the field - is more likely to result in a good understanding of the field's issues than furrowing one's brow and typing away in relative isolation. I take this to be common sense, but please say if you need some substantiation. The conclusion about philosophy follows by universal instantiation.

"Ah", I hear you say, "but philosophy does not fit this pattern, because the people who do it aren't smart. They're all at best of mediocre intelligence." (Is there another explanation of the poor understanding you refer to?) From what I've seen on LW, this position will be inferred from a bad experience or two with philosophy profs, or perhaps on the grounds that no smart person would elect to study such a diseased subject.

Two rejoinders:

i) Suppose it were true that only second-rate thinkers do philosophy. It would still be the case that with a large number of people discussing the issues over many years, there'd be a good chance something worth knowing - if there's anything to know - would emerge. It wouldn't be obvious that the rational course is to ignore it, if interested in the issues.

ii) It's obviously false (hence the 'partly' above). Just try reading the work of Timothy Williamson or David Lewis or Crispin Wright or W.V.O. Quine or Hilary Putnam or Donald Davidson or George Boolos or any of a huge number of other writers, and then making a rational case that the leading thinkers of philosophy are second-rate intellects. I think this is sufficiently obvious that the failure to see it suggests not merely oversight but bias.

Philosophical progress may tend to take the form just of increasingly nuanced understandings of its problems' parameters rather than clear resolutions of them, and so may not seem worth doing, to some. I don't know whether I'd argue with someone who thinks this, but I would suggest if one thinks it, one shouldn't be claiming it even while expounding a philosophical theory.

Comment author: RichardKennaway 16 November 2012 09:10:50AM 2 points [-]

"Ah", I hear you say, "but philosophy does not fit this pattern, because the people who do it aren't smart. They're all at best of mediocre intelligence." (is there another explanation of the poor understanding you refer to?)

There's no fool like an intelligent fool. You have to be really smart to be as stupid as a philosopher.

Even in antiquity it was remarked that "no statement is too absurd for some philosophers to make" (Cicero).

If ever one needed a demonstration that intelligence is not usefully thought of as a one-dimensional attribute, this is it.

Philosophical progress may tend to take the form just of increasingly nuanced understandings of its problems' parameters

When I hear the word "nuanced", I reach for my sledgehammer.

Comment author: thomblake 16 November 2012 04:21:52PM 1 point [-]

When I hear the word "nuanced", I reach for my sledgehammer.

Quoting this.

Comment author: Peterdjones 16 November 2012 08:25:12AM 3 points [-]

Reading the work of experts also puts you in a position to communicate complex ideas in a way others can understand.

Comment author: hairyfigment 16 November 2012 08:04:50AM 0 points [-]

I think philosophers include some smart people, and they produced some excellent work (some of which might still help us today). I also think philosophy is not a natural class. You would never lump the members of this category together without specific social factors pushing them together. Studying "philosophy" seems unlikely to produce any good results unless you know what to look for.

I have little confidence in your recommendations, because your sole concrete example to date of a philosophical question seems ludicrous. What would change if a neurally embodied belief rather than a sentence (or vice versa) were the "bearer of meaning"? And as a separate question, why should we care?

Comment author: BobTheBob 17 November 2012 03:34:20PM 1 point [-]

The issue is whether a sentence's meaning is just its truth conditions, or whether it expresses some kind of independent thought or proposition, and this abstract object has truth conditions. These are two quite different approaches to doing semantics.

Why should you care? Personally, I don't see this problem has anything to do with the problem of figuring out how a brain acquires the patterns of connections needed to create the movements and sounds it does given the stimuli it receives. To me it's an interesting but independent problem, and the idea of 'neurally embodied beliefs' is worthless. Some people (with whom I disagree but whom I nevertheless respect) think the problems are related, in which case there's an extra reason to care, and what exactly a neurally embodied belief is, will vary. If you don't care, that's your business.

Comment author: thomblake 19 November 2012 03:59:00PM 1 point [-]

These are two quite different approaches to doing semantics.

Thanks for pointing this out. I tend to conflate the two, and it's worth keeping the distinction in mind.

Comment author: Emile 19 November 2012 02:40:49PM 1 point [-]

This has done very little to convince me that I should care (and I probably care more about academic Philosophy than most here).

Comment author: Eliezer_Yudkowsky 14 November 2012 05:19:39AM 8 points [-]

I'm saying, "Show me something in particular that I should've looked at, and explain why it matters; I do not respond to non-specific claims that I should've paid more homage to whatever."

Comment author: BobTheBob 15 November 2012 01:54:55PM 0 points [-]

As far as I can see, your point is something like:

"Your reasoning implies I should read some specific thing; there is no such thing; therefore your reasoning is mistaken." (or, "unless you can produce such a thing...")

Is this right? In any case, I don't see that the conditional is correct. I can only give examples of works which would help. Here are three more. Your second part seeks (as I understand it) a theory of meaning which would imply that your 'Elaine is a post-utopian' is meaningless, but that 'The photon continues to exist...' is both meaningful and true. I get the impression you think that an adequate answer could be articulated in a few paragraphs. To get a sense of some of the challenges you might face - i.e., of what the project of contriving a theory of meaning entails - consider looking at Stephen Schiffer's excellent Remnants of Meaning and The Things we Mean or Scott Soames's What is Meaning?.

Comment author: Emile 19 November 2012 01:42:49PM 3 points [-]

As far as I can see, your point is something like:

"Your reasoning implies I should read some specific thing; there is no such thing; therefore your reasoning is mistaken." (or, "unless you can produce such a thing...")

I think it's more like

"Your reasoning implies I should have read some specific idea, but so far you haven't given me any such idea and why it should matter, only general references to books and authors without pointing to any specific idea in them"

Part of the talking-past-each-other may come from the fact that by "thing", Eliezer seems to mean "specific concept", and you seem to mean "book".

There also seems to be some disagreement as to what warrants references - for Eliezer it seems to be "I got idea X from Y", for you it's closer to "Y also has idea X".

Comment author: TruePath 25 October 2012 07:51:27AM 0 points [-]

Also, on the issue of insisting that all facts be somehow reducible to facts about atoms or whatever physical features of the world you insist on, consider the claim that you have experiences.

As Chalmers and others have long argued, it's logically coherent to believe in a world that is identical to ours in every 'physical' respect (position of atoms, chairs, neuron firings, etc.) yet whose inhabitants simply lack any experiences. Thus, the belief that one does in fact have experiences is a claim that can't be reduced to facts about atoms or whatever.

Worse, insisting on any such reduction causes huge epistemic problems. Presumably, you learned that the universe was made of atoms, quarks, waves rather than magical forces, spirit stuff or whatever by interacting with the world. Yet, ruling out any claims that can't be spelled out in completely physical terms forces you to assert that you didn't learn anything when you found out that the world wasn't made of spirit stuff because such talk, by it's very nature, can't be reduced to a claim about the properties of quantum fields (or whatever).

Comment author: DaFranker 30 October 2012 02:57:54PM *  1 point [-]

You're basically attacking (one of?) the strongest tenet of LessWrong culture with practically no basis other than "presumably", "Chalmers and others" as an authority (Chalmers' words are not taken as Authority here, and argument trumps authority anyway), and some vague phrasings about "physical terms", "by it's very nature [sic]", "can't be reduced" and "properties of quantum fields".

My own best interpretation is that you're making a question-begging ontological argument that information, learning, knowledge, consciousness or whatever other things are implied by your vague wording are somehow located in separate magisteria.

Also, please note that, as discussed in more detail in the other articles following this one, Eliezer clearly states that these epistemic techniques don't rule out a priori any concepts just because they don't fit with some materialistic physical laws one assumes to be true.

Comment author: TruePath 25 October 2012 07:43:40AM *  0 points [-]

First a little clarification.

The contribution of Tarski was to define the idea of truth in a model of a theory and to show that one could finitely define truth in a model. Separately, he also showed no consistent theory can include a truth predicate for itself.

As for the issue of truth-conditions this is really a matter of philosophy of language. The mere insistence that there is some objective fact out there that my words hook on to doesn't seem enough. If I insist that "There are blahblahblah in my room." but that "There are no blahblahblah in your room." and when asked to clarify I only explain that blahblahblah are something that can't ever be experimentally measured or defined but I know when they are present and no one else does then my insistence that my words reflect some external reality really shouldn't be enough to convince you that they indeed do. Less extreme examples are the many philosophies of life people adopt that seem to have no observable implications.

One might react by insisting that only testable statements are coherent, but this leads one down the rabbit hole of positivism. Testable by whom, and when? Do they actually have to be tested? If not, then in what sense are they testable, especially in a deterministic universe in which untested claims are automatically physically impossible to have tested (the initial conditions plus the laws determine they will not be tested)? Taken to any kind of coherent end, you find yourself denying everyday statements like "There wasn't a leprechaun in my fridge yesterday" as nonsense, since no one actually performed any measurement that would determine the truth of the statement.

Ultimately, I take a somewhat deflationary view of truth and philosophy of language. IMO all one can do is simply choose (like your priors) what assertions you take to be meaningful and which you don't. There is no logical flaw in the person who insists on the existence of extra facts but agrees with all your conclusions about shared facts. All you can do is simply tell them you don't understand these extra facts they claim to believe in.

This gunk about postmodernism is nothing but fanciful angst. You do in fact use language and make choices. If they are going to say there are extra facts about whether 'truth' is meaningful that amount to more than the fact that I might be a brain in a vat and that the disquotational biconditional holds, then they are just another person insisting on extra facts I have to say I simply fail to understand. (To the extent they are simply attacking the existence of shared interpersonal experience/history, this is simply a disagreement over priors and no argument will settle it. However, since that concern exhausts the sense in which I understand the notion of truth, any further worry is talking about something I'm not.)

Comment author: ChristianKl 19 October 2012 05:14:14PM 0 points [-]

Can the many-worlds hypothesis be true or false according to this theory of truth?

Comment author: DaFranker 19 October 2012 05:20:18PM 0 points [-]

Yes.

Can we verify or falsify it? Yes, iff it somehow constricts possible realities in a manner that is exclusively different from other hypotheses and in-principle observable from our reference frame(s) assuming we eventually obtain the means to make relevant observations.

Comment author: TheAncientGeek 24 February 2017 12:25:27PM *  2 points [-]

It's actually called the Many Worlds INTERPRETATION, and what "interpretation" means in this case is specifically that there is no experimental test to distinguish it from the other interpretations. Theory = thing you can test; interpretation = thing you can't test. Indeed, EY's arguments for MWI are not empirical and are therefore his own version of Post Utopianism.

Comment author: folkTheory 16 October 2012 04:12:36PM 1 point [-]

I don't understand the part about post-utopianism being meaningless. If people agree on what the term means, and they can read a book and detect (or not) colonial alienation, and thus have a test for post-utopianism, and different people will reach the same conclusions about any given book, then how exactly is the term meaningless?

Comment author: fortyeridania 18 October 2012 03:09:45PM 3 points [-]

I think "postmodernism," "colonial alienation," and "post-utopianism" are all meant to be blanks, which we're supposed to fill in with whatever meaningless term seems appropriate.

But I share your uneasiness about using these terms. First, I don't know enough about postmodernism to judge whether it's a field filled with empty phrases. (Yudkowsky seems to take the Sokal affair as a case-closed demonstration of the vacuousness of postmodernism. However, it is less impressive than it may seem at first. The way the scandal is presented by some "science-types"--as an "emperor's new clothes" story, with pretentious, obfuscationist academics in the role of the court sycophants--does not hold up well after reading the Wikipedia article. The editors of Social Text failed to adhere to appropriate standards of rigor, but it's not like they took one look at Sokal's manuscript and were floored by its pseudo-brilliance.)

Second, I suspect there aren't any clear-cut examples of meaningless claims out there that actually have any currency. (I only suspect this; I'm not certain. Some things seem meaningless to me; however, that could be just because I'm an outsider.)

Counterexamples?

Comment author: thomblake 16 October 2012 04:32:59PM *  1 point [-]

If people agree on what the term means, and they can read a book and detect (or not) colonial alienation, and thus have a test for post-utopianism, and different people will reach the same conclusions about any given book

By hypothesis, none of those things are true. If those things happen to be true for "post-utopianism" in the real world, substitute a different word that people use inconsistently and doesn't refer to anything useful.

Comment author: folkTheory 17 October 2012 02:59:51AM 0 points [-]

But, from the article:

you can nonetheless take many literary professors and separately show them new pieces of writing by unknown authors and they'll all independently arrive at the same answer, in which case they're clearly detecting some sensory-visible feature of the writing.

Seems like what I was saying...

Comment author: learnmethis 12 October 2012 09:50:57PM 2 points [-]

Great post! If this is the beginning of trend to make Less Wrong posts more accessible to a general audience, then I'm definitely a fan. There's a lot of people I'd love to share posts with who give up when they see a wall of text.

There are two key things here I think can be improved. I think they were probably skipped over for mostly narrative purposes and can be fixed with brief mentions or slight rephrasings:

You won't get a direct collision between belief and reality - or between someone else's beliefs and reality - by sitting in your living-room with your eyes closed.

In addition to comparison to external data such as experimental results, there are also critical insights on reality to be gained by armchair examination. For example, armchair examination of our own or others’ beliefs may lead us to realise that they are self-contradictory, and therefore that it is impossible for them to be true. No experimental results needed! This is extraordinarily common in mathematics, and also of great personal value in everyday thinking, since many cognitive mistakes lead directly to some form of internal contradiction.

And yet it seems to me - and I hope to you as well - that the statement "The photon suddenly blinks out of existence as soon as we can't see it, violating Conservation of Energy and behaving unlike all photons we can actually see" is false, while the statement "The photon continues to exist, heading off to nowhere" is true.

It's better to say that the first statement is unsupported by the evidence and purely speculative. Here's one way that it could in fact be true: if our world is a simulation which destroys data points that won't in any way impact the future observations of intelligent beings/systems. In fact, that's an excellent optimisation over an entire class of possible simulations of universes. There would be no way for us to know this, of course (the question is inherently undecidable), but it could still happen to be true. In fact, we can construct extremely simple toy universes for which this is true. Undecidability in general is a key consideration that seems missing from many Less Wrong articles, especially considering how frequently it pops up within any complex system.
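A minimal sketch of such a toy universe, assuming the pruning rule described above (all names here are invented for the example):

```python
# Toy universe that deletes data no observer can ever see again.
HORIZON = 10  # observers can only ever query positions <= HORIZON

class ToyUniverse:
    def __init__(self, prune):
        self.prune = prune
        self.photons = [0, 4, 9]  # positions of outward-bound photons

    def step(self):
        self.photons = [p + 1 for p in self.photons]
        if self.prune:
            # The optimisation: anything past the horizon is deleted.
            self.photons = [p for p in self.photons if p <= HORIZON]

    def observe(self):
        # All any observer inside the universe can ever measure.
        return sorted(p for p in self.photons if p <= HORIZON)

a, b = ToyUniverse(prune=True), ToyUniverse(prune=False)
for _ in range(5):
    a.step()
    b.step()
print(a.observe() == b.observe())  # True: the deletion is undetectable from inside
```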

Comment author: Jonathan_Graehl 06 October 2012 09:58:43PM *  2 points [-]

Similarly, to say of your own beliefs, that the belief is 'true', just means you're comparing your map of your map, to your map of the territory

I assume this is meant in the spirit of "it's as if you are", not "your brain is computing in these terms". When I anticipate being surprised, I'm not consciously constructing any "my map of my map of ..." concepts. Whether my brain is constructing them under the covers remains to be demonstrated.

Comment author: ArisKatsaris 05 October 2012 01:57:49PM 5 points [-]

Oh come on, yeah, the gender imbalance of the original images was bad, but ugliness is also bad, and the new stick figures are ugly...

Comment author: Normal_Anomaly 11 December 2012 08:01:11PM 0 points [-]

I didn't see the old stick figures, but I think the ones that are there now are fine.

Comment author: Maelin 06 October 2012 04:18:20PM 2 points [-]

Agreed. The stick figures do not mesh well with the colourful cartoony backgrounds that make the images visually appealing. They feel out of place, and I found it harder to tell when I was supposed to consider one stick figure distinct from another one without actively looking for it (I also have this problem with xkcd).

Strong vote for return to the original style diagrams, with the gender imbalance fixed.

Comment author: [deleted] 05 October 2012 07:40:00PM 1 point [-]

[looks back at the top-level post] Yes, they are. Especially the professor in the last picture -- it reminds me of Jack Skellington from A Nightmare Before Christmas. Using thinner lines à la xkcd would be better, IMO.

Comment author: thomblake 05 October 2012 02:07:39PM 3 points [-]

Agreed. The previous illustrations were pretty awesome, and this post has lost a lot without them.

Comment author: darrenreynolds 05 October 2012 08:32:01AM -1 points [-]

Why is it accepted that experiments with reality prove or disprove beliefs?

It seems to me that they merely confirm or alter beliefs. The answer given to the first koan and the explanation of the shoelaces seem to me to lead to that conclusion.

"...only reality gets to determine my experimental results."

Does it? How does it do that? Isn't it the case that all reality can "do" is passively be believed? Surely one has to observe results, and thus, one has belief about the results. When I jump off the cliff I might go splat, but if the cliff is high enough and involves passing through a large empty space during the fall, there are various historical physical theories that might be 'proved' at first, but later disproved as my speed increases.

I'm very confused. Please forgive my naivety.

Similarly:

"If we thought the colonization ship would just blink out of existence before it arrived, we wouldn't bother sending it."

What if it blinks out of our existence, but not out of the existence of the people on the ship?

Comment author: TheOtherDave 05 October 2012 02:34:20PM *  4 points [-]

Why is it accepted that experiments with reality prove or disprove beliefs?

Well, in one sense it isn't accepted... not if you want "prove" to mean something monolithic and indisputable. If a proposition starts out with a probability between 0 and 1, no experiment can reduce that probability to 0 or raise it to 1... there's always a nonzero probability that the experiment itself was flawed or illusory in some way.
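To put a number on that, here is a minimal sketch of a Bayesian update (standard Bayes' rule, nothing specific to this thread), showing that a prior strictly between 0 and 1 stays strictly between 0 and 1 no matter how much confirming evidence arrives:

```python
from fractions import Fraction

def update(prior, like_if_true, like_if_false):
    """One step of Bayes' rule: P(H|E) from P(H), P(E|H), P(E|~H)."""
    num = prior * like_if_true
    return num / (num + (1 - prior) * like_if_false)

p = Fraction(1, 2)
for _ in range(20):  # twenty strongly confirming experiments in a row
    p = update(p, Fraction(99, 100), Fraction(1, 100))

print(p < 1)         # True: no finite run of evidence reaches exactly 1
print(float(1 - p))  # ~1.2e-40: vanishingly small, but never zero
```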

But we do accept that experiments with reality give us evidence on the basis of which we can legitimately increase or decrease our confidence in beliefs. In most real-world contexts, that's what we mean by "prove": provide a large amount of evidence that supports confidence in a belief.

So, OK, why do we accept that experiments do that?

Because when we predict future experiences based on the results of those experiments, we find that our later experiences conform to our earlier predictions.

Or, more precisely: the set of techniques that we classify as "reliable experiments" are just those techniques that have that predictive property (sometimes through intermediate stages, such as model building and solving mathematical equations). Other, superficially similar, techniques which lack that property we don't classify that way. And if we found some superficially different technique that turned out to have that property as well, we would classify that technique similarly. (We might not call it an "experiment," but we would use it the same way we use experiments.)

Of course, once we've come to trust our experimental techniques (and associated models and equations), because we've seen them work over and over again on verifiable predictions, we also develop a certain level of confidence in the unverifiable predictions made by the same techniques. That is, once I have enough experience of the sun rising in the morning that I am confident it will do so tomorrow (including related experiences, like those supporting theories about the earth orbiting the sun etc., which also serve to predict that event), I can be confident that it will rise on October 22 2143 even though I haven't yet observed that event (and possibly never will).

So, yes. If I jump off a cliff I might start out with theories that seem to predict future behavior, and then later have unpredicted experiences as my speed increases that cause me to change those theories. Throughout this process, what I'm doing is using my observations as evidence for various propositions. "Reality" is my label for the framework that allows for those observations to occur, so what we call this process is "observing reality."

What's confusing?

Comment author: darrenreynolds 08 October 2012 10:44:37PM 0 points [-]

"Throughout this process, what I'm doing is using my observations as evidence for various propositions. "Reality" is my label for the framework that allows for those observations to occur, so what we call this process is "observing reality."

"What's confusing?"

It seems to me that given this explanation, we can never know reality. We can only ever have a transient belief in what it is, and that belief might turn out to be wrong. However many 9's one adds onto 99.999% confident, it's never 100%.

From the article: "Isn't all this talk of 'truth' just an attempt to assert the privilege of your own beliefs over others, when there's nothing that can actually compare a belief to reality itself, outside of anyone's head?"

I think the article was, in part, setting out to debunk the above idea, but surely the explanation you have provided proves it to be the case? That's why I'm confused.

Comment author: nshepperd 09 October 2012 06:39:46AM 0 points [-]

It seems to me that given this explanation, we can never know reality. We can only ever have a transient belief in what it is, and that belief might turn out to be wrong. However many 9's one adds onto 99.999% confident, it's never 100%.

That's progress.

Comment author: TheOtherDave 09 October 2012 12:22:40AM 0 points [-]

However many 9's one adds onto 99.999% confident, it's never 100%.

Yes, that's true.

I think the article was, in part, setting out to debunk the above idea, but surely the explanation you have provided proves it to be the case? That's why I'm confused.

Mm.
It sounds to me like we're not using the word "reality" at all consistently in this conversation.
I would recommend trying to restate your concern without using that word. (Around here this is known as "Tabooing" the word.)

Comment author: darrenreynolds 09 October 2012 09:22:16AM 0 points [-]

Thanks for engaging on this - I'm finding it educational. I'll try your suggestion but admit to finding it hard.

So, there's a Chinese rocket-maker in town and Sir Isaac Newton has been offered the ride of his life atop the rocket. This is no ordinary rocket, and it's going to go really, really fast. A little boy from down the road excitedly asks to join him, and being a jolly fellow, Newton agrees.

Now, Newton's wife is pulling that funny face that only a married man will recognise, because she's got dinner in the oven and she knows Newton is going to be late home again. But Newton is confident that THIS time, he's going to be home at precisely 6pm. Newton has recently become the proud owner of the world's most reliable and accurate watch.

As the rocket ignites, the little boy says to Newton, "The vicar told me that when we get back, dinner is going to be cold and your wife is going to insist that your watch is wrong."

Now, we all know how that story plays out. Newton had been pretty confident about his timepiece. 99.9999%, in fact. And when they land, lo and behold, his watch and the church clock agree precisely and dinner is very nice.

Er, huh?

Because in fact, the child is a brain in a vat, and the entire experience was a computer simulation, an advanced virtual reality indistinguishable from the real thing until someone disconnects him.

That's the best I can do without breaking the taboo.

Comment author: TheOtherDave 09 October 2012 01:30:23PM 1 point [-]

You've mostly lost me, here.

Reading between the lines a little, you seem to be suggesting that if Newton says "It's true that we returned in time for dinner!" that's just an attempt to assert the privilege of his beliefs over the boy's, and we know that because Newton is unaware of the simulators.

Yes? No? Something else?

If I understood that right, then I reject it. Sure, Newton is unaware of the simulators, and may have beliefs that the existence of the simulators contradicts. Perhaps it's also true that the little boy is missing two toes on his left foot, and Newton believes the boy's left foot is whole. There's undoubtedly vast numbers of things that Newton has false beliefs about, in addition to the simulators and the boy's foot.

None of that changes the fact that Newton and the boy had beliefs about the rocket and the clock, and observed events supported one of those beliefs over the other. This is not just Newton privileging his beliefs over the boy's; there really is something (in this case, the programming of the simulation) that Newton understands better and is therefore better able to predict.

If "reality" means anything at all, the thing it refers to has to include whatever made it predictably the case that Newton was arriving for dinner on time. That it also includes things of which Newton is unaware, which would contradict his predictions about other things were he to ever make the right observations, doesn't change that.

Comment author: BerryPick6 08 October 2012 11:53:07PM *  0 points [-]

However many 9's one adds onto 99.999% confident, it's never 100%.

I thought that 99.999999.... actually does equal 100, no?

Comment author: wedrifid 09 October 2012 01:04:22AM *  3 points [-]

However many 9's one adds onto 99.999% confident, it's never 100%.

I thought that 99.999999.... actually does equal 100, no?

There is no instantiation of "however many" with an integer, n, that results in the "equals 100%" result (because then n+1 would result in more than 100%, which is just way off). There are some more precise things we can say along the lines of "limit as n approaches infinity where..." that express what is going on fairly clearly.

Writing the "99.9 repeating" syntax with the dot does mean "100" according to how the "writing the dot on the numbers" syntax tends to be defined, which is I think what you are getting at but seems different to what Berry seems to be saying.

Comment author: BerryPick6 09 October 2012 11:34:46AM 0 points [-]

Ah, I get it now, thanks.

Comment author: Alejandro1 09 October 2012 12:30:40AM 2 points [-]

Yes, but since we are finite creatures, we can never add more than a finite number of 9's.

Comment author: ArisKatsaris 05 October 2012 08:56:47AM 1 point [-]

It seems to me that they merely confirm or alter beliefs.

And one of the beliefs they've confirmed is "reality is really real, it isn't just a belief." :-)

Isn't it the case that all reality can "do" is passively be believed?

No. If that's all it could do then it would be indistinguishable from fiction. It's not, we know it's not, and I bet that you yourself treat reality differently than you treat fiction, thus disproving your claim.

Comment author: shminux 05 October 2012 08:02:08PM *  0 points [-]

reality is really real

Using the same word in triplicate to make your point does not make the point more convincing.

Comment author: wedrifid 05 October 2012 08:25:45PM 1 point [-]

Using the same word in triplicate to make your point does not make the point more convincing.

It doesn't seem to be intended to be more convincing, or the point for that matter. That relies on the rest of the sentence.

Comment author: darrenreynolds 05 October 2012 11:41:45AM -1 points [-]

"It's not, we know it's not, and I bet that you yourself treat reality differently than you treat fiction, thus disproving your claim."

How do we know it's not? You might say that I know that the table in front of me is solid. I can see it, I can feel it, I can rest things on it and I can try but fail to walk through it. But nowadays, I think a physicist with the right tools would be able to show us that, in fact, it is almost completely empty space.

So, do I treat reality different from how I treat fiction? I think the post we are commenting on has finally convinced me that there is no reality, only belief, and therefore the question is untestable. I think that is the opposite of what the post author intended?

History does tend to suggest that anyone who thinks they know anything is probably wrong. Perhaps those here are less wrong, but they - we - are still wrong.

"And one of the beliefs they've confirmed is "reality is really real, it isn't just a belief." :-)"

Hah! Exactly! The experiments confirm a belief. A confirmed belief is, of course, still a belief. If your belief that reality is really real is confirmed, you now have a confirmed belief that reality is really real. That's not the same thing as reality being really real, though, is it?

;-)

Comment author: [deleted] 05 October 2012 07:46:22PM 5 points [-]

How do we know it's not? You might say that I know that the table in front of me is solid. I can see it, I can feel it, I can rest things on it and I can try but fail to walk through it. But nowadays, I think a physicist with the right tools would be able to show us that, in fact, it is almost completely empty space.

So f***ing what? What does solidity have to do with amount of empty space? If according to your definition of solid, ice is less solid than water because it contains more empty space, your definition of solid is broken.

Comment author: ArisKatsaris 05 October 2012 12:08:17PM 4 points [-]

So, do I treat reality different from how I treat fiction?

Yes. I bet that if a fire happens you'll call the fire-brigade, not shout for Superman. That if you want to get something for Christmas, you'll not be writing to Santa Claus.

No matter how much one plays with words, most people, even philosophers, recognize reality as fundamentally different to fiction.

You might say that I know that the table in front of me is solid.

This is playing with words. "Solidity" has a macroscale meaning which isn't valid for nanoscales. That's how reality works in the macroscale and the nanoscale, and it's fiction in neither. If it was fiction then your ability to enjoy the table's solidity would be dependent on your suspension of disbelief.

History does tend to suggest that anyone who thinks they know anything is probably wrong. Perhaps those here are less wrong, but they - we - are still wrong.

The operative word here is "less". Here's a relevant Isaac Asimov quote: "When people thought the earth was flat, they were wrong. When people thought the earth was spherical, they were wrong. But if you think that thinking the earth is spherical is just as wrong as thinking the earth is flat, then your view is wronger than both of them put together."

You are effectively being "wronger than both of them put together".

That's not the same thing as reality being really real, though, is it?

1 and 0 aren't probabilities, but you're effectively treating a statement of 99.999999999% certainty as if it's equivalent to 0.000000000000001% certainty; just because neither of them is 0 or 1.

That's pretty much an example of "wronger than both of them put together" that Isaac Asimov described...
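One way to see how far apart those two certainties actually are is the standard log-odds transform (an illustrative sketch; the two probabilities below are just the ones from this exchange):

```python
import math

def log10_odds(p):
    """Probability -> log-base-10 odds. 0 and 1 have no finite image."""
    return math.log10(p / (1 - p))

print(log10_odds(0.99999999999))  # ~ +11: eleven orders of magnitude for
print(log10_odds(1e-17))          # ~ -17: seventeen against -- nowhere near equal
# log10_odds(1.0) raises ZeroDivisionError: certainty sits at infinity,
# which is the sense in which 1 and 0 aren't ordinary probabilities.
```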

Comment deleted 05 October 2012 01:29:32AM *  [-]
Comment author: [deleted] 05 October 2012 08:22:55PM 1 point [-]

Why did you reply directly to the top-level post rather than to where the quotation was taken from?

Comment author: [deleted] 04 October 2012 03:11:19AM 1 point [-]

Thus, after the photon goes past a certain point, there are no experimental consequences whatsoever, ever, to the statement "The photon continues to exist, rather than blinking out of existence."

One shouldn't form theories about a particular photon. The statements "photons in general continue to exist after crossing the cosmological horizon" and "photons in general blink out of existence when they cross the cosmological horizon" have distinct testable consequences, if you have a little freedom of motion.

Comment author: common_law 04 October 2012 12:19:21AM *  10 points [-]

Two quibbles that could turn out to be more than quibbles.

  1. The concept of truth you intend to defend isn't a correspondence theory--rather it's a deflationary theory, one in which truth has a purely metalinguistic role. It doesn't provide any account of the nature of any correspondence relationship that might exist between beliefs and reality. A correspondence theory, properly termed, uses a strong notion of reference to provide a philosophical account of how language ties to reality.

  2. You write:

Some pundits have panicked over the point that any judgment of truth - any comparison of belief to reality - takes place inside some particular person's mind; and indeed seems to just compare someone else's belief to your belief.

I'm inclined to think this is a straw man. (And if they're mere "pundits" and not philosophers why the concern with their silly opinion?) I think you should cite to the most respectable of these pundits or reconsider whether any pundits worth speaking of said this. The notion that reality--not just belief--determines experiments, might be useful to mention, but it doesn't answer any known argument, whether by philosopher or pundit.

Comment author: beoShaffer 03 October 2012 03:34:54PM 3 points [-]

For some reason the first picture won't load, even though the rest are fine. I'm using safari.

Comment author: IainM 03 October 2012 12:38:22PM *  0 points [-]

Retracted

Comment author: IainM 03 October 2012 12:43:34PM *  0 points [-]

Retracted

Comment author: TraderJoe 03 October 2012 07:09:49AM 4 points [-]

"Reality is that which, when you stop believing in it, doesn't go away. " - Philip K Dick.

Comment author: learnmethis 12 October 2012 09:28:35PM 1 point [-]

Good quote, but what about the reality that I believe something? ;) The fact that beliefs themselves are real things complicates this slightly.

Comment author: Normal_Anomaly 11 December 2012 08:09:08PM 0 points [-]

It's possible to stop believing that you believe something while continuing to believe it. It's rare, and you won't notice you did so, but it can happen.

Comment author: [deleted] 03 October 2012 04:12:48AM *  2 points [-]

Since my expectations sometimes conflict with my subsequent experiences, I need different names for the thingies that determine my experimental predictions and the thingy that determines my experimental results. I call the former thingies 'beliefs', and the latter thingy 'reality'.

I think it's apt but ironic that you find a definition of "truth" by comparing beliefs and reality. Beliefs are something that human beings, and maybe some animals have. Reality is vast in comparison, and generally not very animal-centric. Yet every one of these diagrams has a human being or brain in it.

With one interesting exception: the space of all possible worlds. Is truth more animal-centric than reality? Wouldn't "snow is white" be a true statement if people weren't around? Maybe not--who would be around to state it? But I find it easy to imagine a possible world with white snow but no people.

Edit: What would a hypothetical post titled "The Useful Idea of Reality" contain? Would it logically come before or after this post?

Comment author: purge 13 January 2013 08:42:29AM 0 points [-]

If people weren't around, then "snow is white" would still be a true sentence, but it wouldn't be physically embodied anywhere (in quoted form). If we want to depict the quoted sentence, the easiest way to do that is to depict its physical embodiment.

Comment author: beriukay 04 October 2012 12:00:10PM 0 points [-]

Truth is more about how you get to know reality than it is about reality. For instance, it is easy to conceive of a possibility where everything a person knows about something points to it being true, even if it later turns out to be false. Even if you do everything right, there's no cosmic guarantee that you have found truth, and therefore cut straight through to reality.

But it is still a very important concept. Consider: someone you love is in the room with you, and all the evidence available to you points to a bear trying to get into the room. You would be ill-advised to second-guess your belief when there's impending danger.

Wouldn't "snow is white" be a true statement if people weren't around?

Not exactly. White isn't a fundamental concept like mass is. Brain perception of color is an extremely relative and sticky issue. When I go outside at night and look at snow, I'd swear up and down that the stuff is blue.

Comment author: Alex_Altair 03 October 2012 04:03:26AM 21 points [-]

She’d underlined true and truth wherever it appeared in the essay, probably about twenty times, with a question mark beside each. She wanted to know what I meant by truth.

She should hand back the paper with the note, "What do you mean by 'mean'?"

Comment author: danceapocalypse 17 November 2012 07:30:16AM 0 points [-]

If someday the vast majority of people decided that what is known as "blue" should be renamed "snarffle", then eventually it would cease to be called "blue"; it would be "snarffle", because that is the belief. But that doesn't change the reality that it is light with a wavelength of 475 nm. Human beliefs determine how we interpret information; they do not determine reality.

Comment author: AlexMennen 03 October 2012 02:30:23AM 2 points [-]

Didn't you say you were working on a sequence on open problems in friendly AI? And how could this possibly be higher priority than that sequence?

Comment author: Eliezer_Yudkowsky 03 October 2012 07:35:31PM 5 points [-]

Prereqs.

Comment author: Manfred 03 October 2012 02:49:04AM 9 points [-]

A guess: prerequisites. Also, we have lots of new people, so to be safe: prerequisites to prerequisites.

Comment author: buybuydandavis 03 October 2012 01:54:51AM 10 points [-]

I don't think EY has chosen the most useful way to proceed on a discussion of truth. He has started from an anecdote where the correspondence theory of truth is the most applicable, and charges ahead developing the correspondence theory.

We call some beliefs true, and some false. True and false are judgments we apply to beliefs - sorting them into two piles. I think the limited bandwidth of a binary split should already be a tip-off that we're heading down the wrong path.

In practice, ideas will be more or less useful, with that usefulness varying depending on the specifics of the context of the application of those beliefs. Even taking "belief as predictive model" as given, it's not that a belief is either accurate or inaccurate, but it will be more or less accurate, and so more or less useful, as I've claimed is the general case of interest.

Going back to the instrumental versus epistemic distinction, I want to win, and having a model that accurately predicts events is only one tool for winning among many. It's a wonderful simulation tool, but not the only thing I can do with beliefs.

If I'm going to sort beliefs into more and less useful, the first thing to do is identify the ways that a belief can be used. What can I do with a belief?

I can ruminate on it. Sometimes that will be enjoyable, sometimes not.

I can compare it to my other beliefs. That allows for some correction of inconsistent beliefs.

I can use it to take action. This is where the correspondence theory gets its main application. I can use a model in my head to make a prediction, and take action based on that prediction.

However, the prediction itself is mainly an intermediate good for selecting the best action. Well, one can skip the middleman and have a direct algorithmic rule "If A, do(X)" to get the job done. That rule can be useful without making any predictions. One can believe in such a rule, and rely on it, to take action as well. Beliefs directing action can be algorithmic instead of predictive, so the correspondence theory isn't the only option even in its main domain of application.
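Sketched in code, the contrast might look like this (my own toy illustration; the world model, agents, and payoffs are all invented for the example):

```python
# Two ways a "belief" can direct action, per the paragraph above.

# 1. Predictive model: simulate outcomes, then pick the best action.
def model_based_action(state, actions, predict, utility):
    return max(actions, key=lambda a: utility(predict(state, a)))

# 2. Direct algorithmic rule: "If A, do(X)" -- no prediction involved.
def rule_based_action(state):
    if state == "raining":
        return "take umbrella"
    return "leave umbrella"

# Toy world model for the predictive agent:
predict = lambda state, action: (state, action)
utility = lambda outcome: 1 if outcome == ("raining", "take umbrella") else 0

print(model_based_action("raining", ["take umbrella", "leave umbrella"],
                         predict, utility))   # 'take umbrella', via prediction
print(rule_based_action("raining"))           # 'take umbrella', via rule alone
```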

Back to what I can do with a belief, I can tell it to my neighbor. That becomes a very complicated use because it now involves the interaction with another mind with other knowledge. I can inform my neighbor of something. I can lie to my neighbor. I can signal to my neighbor. There are quite a number of uses to communicating a belief to my neighbor. One interesting thing is that I can communicate things to my neighbor that I don't even understand.

What I would expect, in a population of evolved beings, is that there'd be some impulse to judge beliefs for all these uses, and to varying degrees for each usage across the population.

So charging off on the correspondence theory strikes me as going very deep into only one usage of beliefs that people are likely to find compelling, and probably the one that's already best analyzed, as that is the perspective that best allows for systematic analysis.

What I think is potentially much more useful is an analysis of all the other truth modalities from the correspondence theory perspective.

Just as Haidt finds multiple moral modalities, and subpopulations defined in their moral attitudes by their weighting of those different modalities, I suspect that a similar kind of thing is happening with respect to truth modalities. Further, I'd guess that political clustering occurs not just in moral modality space, but in the joint moral-truth modality space as well.

Comment author: chaosmosis 03 October 2012 01:13:47AM *  1 point [-]

Here's my map of my map with respect to the concept of truth.

Level Zero: I don't know. I wouldn't even be investigating these concepts about truth unless on some level I had some form of doubt about them. The only reason I think I know anything is because I assume it's possible for me to know anything. Maybe all of my priors are horribly messed up with respect to whatever else they potentially should be. Maybe my entire brain is horribly broken and all of my intuitive notions about reality and probability and logic and consistency are meaningless. There's no way for me to tell.

Level One: I know nothing. The problem of induction is insurmountable.

Level Two: I want to know something, or at least to believe. Abstract truths outside the content of my experience are meaningless. I don't care about whether or not induction is necessarily a valid form of logic; I only care whether or not it will work in the context of my future experiences. I don't care whether or not my priors are valid, they're my priors all the same. On this level I refuse to reject the validity of any viewpoint if that viewpoint is authentic, although I still only abide myself by my own internalized views. My fundamental values are just a fact, and they reject the idea that there is no truth despite whatever my brain might say. Ironically, irrational processes are at the root of my beliefs about rationality and reality.

Level Three: My level three seems to be Eliezer's level zero. The world consistently works by certain fundamental laws which can be used to make predictions. The laws of this universe can be investigated through the use of my intuitions about logic and the way reality should work. I spend most of my time on this level, but I think that the existence of the other levels is significant because those levels shape the way I understand epistemology and my ability to understand other perspectives.

Level Four: There are certain things which it is good to proclaim to be true, or to fool oneself into believing are true. Some of these things actually are true, and some are actually false. But in the case of self deception, the recognition that some of these things are actually false must be avoided. The self deception aspect of this level of truth does not come into play very often for me, except in some specific hypothetical circumstances.

Comment author: CarlShulman 03 October 2012 12:08:17AM *  8 points [-]

And yet it seems to me - and I hope to you as well - that the statement "The photon suddenly blinks out of existence as soon as we can't see it, violating Conservation of Energy and behaving unlike all photons we can actually see" is false, while the statement "The photon continues to exist, heading off to nowhere" is true. And this sort of belief can have behavioral consequences!

The belief that someone is epiphenomenally a p-zombie, or belief in consubstantiality can also have behavioral consequences. Classifying some author as an "X" can, too.

Comment author: Eliezer_Yudkowsky 03 October 2012 07:41:24PM 5 points [-]

If an author actually being X has no consequences apart from the professor believing that the author is "X", all consequences accrue to quoted beliefs and we have no reason to believe the unquoted form is meaningful or important. As for p-zombieness, it's not clear at this point in the sequence that this belief is meaningless rather than being false; and the negation of the statement, "people are not p-zombies", has phrasings that make no mention of zombiehood (i.e., "there is a physical explanation of consciousness") and can hence have behavioral consequences by virtue of being meaningful even if its intuitive "counterargument" has a meaningless term in it.

Comment author: TheAncientGeek 24 February 2017 12:35:05PM *  1 point [-]

If an author actually being X has no consequences apart from the professor believing that the author is "X", all consequences accrue to quoted beliefs and we have no reason to believe the unquoted form is meaningful or important.

No consequences meaning no consequences, or no consequences meaning no empirical testability? Consider replacing the vague and subjective predicate "Post-Utopian" with the even more subjective "good". If a book is (believed to be) good or bad, that clearly has consequences, such as one's willingness to read it.

There are two consistent courses here: you can expand the notion of truth to include judgements of value and quality backed by handwavy non-empirical arguments; or you can keep a narrow, positivist notion of truth and abandon the use of handwaviness yourself. And you are not doing the latter, because your arguments for MWI (to take just one example) are non-empirical handwaviness.

Comment author: onelasttime 18 October 2012 03:17:05PM 0 points [-]

How do you infer "there is a physical explanation of consciousness" from "people are not p-zombies"?

Comment author: wedrifid 03 October 2012 08:36:40PM *  1 point [-]

Can someone please explain to me what is bad or undesirable about the parent? I thought it made sense, even if on a topic I don't much care about. Others evidently didn't. While we are at it, what is so insightful about the grandparent? I just thought it kind of missed the point of the quoted paragraph.

Comment author: TimS 03 October 2012 08:50:28PM *  1 point [-]

My guess? "Behavorial consequences" is not really the touchstone of truth under the Correspondence Theory, so EY's use of the phrase when trying to persuade us of the Correspondence Theory of Truth leaves him open to criticism. EY's response is to deny any mistake.

Comment author: Eliezer_Yudkowsky 03 October 2012 09:05:34PM 0 points [-]

My guess? People are more or less randomly downvoting me these days, for standard fear and hatred of the admin. I suppose somebody's going to say that this is an excuse not to update, but it could also be, y'know, true. It takes a pretty baroque viewpoint to think that I was talking deliberate nonsense in that paragraph, and if anyone hadn't understood what I meant, they could've just asked.

To clarify in response to your particular reply:

Generally speaking but not always, for our belief about something to have behavioral consequences, we have to believe it has consequences which our utility function can run over, meaning it's probably linked into our beliefs about the rest of the universe, which is a good sign. There's all kinds of exceptions to this for meaningless beliefs that have behavioral consequences anyway, and a very large class of exceptions is the class where somebody else is judging what you believe, like the example someone not-Carl-who-Carl-probably-talked-to recently gave me for "Consubstantiality has the consequence that if it's true and you don't believe in it, God will send you to hell", which involves just "consubstantiality" and not consubstantiality, similarly with the tests being graded (my attempt to find a non-religious conjugate of something for which the religious examples are much more obvious).

Comment author: Daniel_Burfoot 04 October 2012 04:05:40AM 0 points [-]

People are more or less randomly downvoting me these days, for standard fear and hatred of the admin

I think smart statistical analysis of the voting records should reveal hate-voting if it occurs, which I agree with you that it probably does.

Comment author: wedrifid 03 October 2012 09:27:24PM 7 points [-]

My guess? People are more or less randomly downvoting me these days, for standard fear and hatred of the admin. I suppose somebody's going to say that this is an excuse not to update, but it could also be, y'know, true.

A review of your recent comments page puts most of the comments upvoted and some of them to stellar levels---not least of which this post. This would suggest that aversion to your admin-related commenting hasn't generalized to your on-topic commenting just yet. Either that or all your upvoted comments are so amazingly badass that they overcome the hatred, while the few that get net downvotes were merely outstanding and couldn't compensate.

Comment author: Eliezer_Yudkowsky 03 October 2012 09:31:17PM 0 points [-]

Or the downvoters are fast and early, the upvoters arrive later, which is what I've observed. I'm actually a bit worried about random downvoting of other users as well.

Comment author: Eugine_Nier 04 October 2012 02:37:01AM 9 points [-]

Or the downvoters are fast and early, the upvoters arrive later, which is what I've observed.

Or it's just more memorable when this happens.

Comment author: wedrifid 03 October 2012 09:44:19PM 2 points [-]

Or the downvoters are fast and early, the upvoters arrive later, which is what I've observed. I'm actually a bit worried about random downvoting of other users as well.

Ahh, those kinds of downvotes. I get those patterns from time to time---not as many or as fast as yours, I'm sure, since I'm a mere commenter. I remind myself to review my comments a day or two later so that some of the contempt for voter judgement can bleed away after I see the correction.

Comment author: Normal_Anomaly 11 December 2012 08:15:01PM 1 point [-]

I've noticed the same thing once or twice--less often than you, and far less often than EY, but my (human, therefore lousy) memory says it's more likely for a comment of mine to go to -1 and then +1 than the reverse.

Comment author: wedrifid 03 October 2012 08:58:03PM 0 points [-]

My guess? "Behavorial consequences" is not really the touchstone of truth under the Correspondence Theory, so EY's use of the phrase when trying to persuade us of the Correspondence Theory of Truth leaves him open to criticism. EY's response is to deny any mistake.

Ok, I think both you and Carl read more of an implied argument into Eliezer's mention of that particular fact than I did.

Comment author: DuncanS 02 October 2012 09:28:32PM *  2 points [-]

People usually are not mistaken about what they themselves believe - though there are certain exceptions to this rule - yet nonetheless, the map of the map is usually accurate, i.e., people are usually right about the question of what they believe:

I'm not at all sure about this part - although I don't think it matters much to your overall case. I think one of our senses is a very much simplified representation of our own internal thought state. It's only just about good enough for us to make a chain of thought - taking the substance of a finished thought and using it as input to the next thought. In animals, I suspect this sense isn't good enough to allow thought chains to be made - and so they can't make arguments. In humans it is good enough, but probably not by very much - it seems rather likely that the ability to make thought chains evolved quite recently.

I think we probably make mistakes about what we think we think all the time - but there is usually nobody who can correct us.

Comment author: [deleted] 02 October 2012 09:21:10PM 0 points [-]

The river side illustration is inaccurate and should be much more like the illustration right above (with the black shirt replaced with a white shirt).

Comment author: earthwormchuck163 02 October 2012 08:50:18PM 6 points [-]

The pictures are a nice touch.

Though I found it sort of unnerving to read a paragraph and then scroll down to see a cartoon version of the exact same image I had painted inside my head, several times in a row.

Comment author: EricHerboso 02 October 2012 07:33:36PM 3 points [-]

Two minor grammatical corrections:

A space is missing between "itself" and "is " in "The marble itselfis a small simple", and between "experimental" and "results" in "only reality gets to determine my experimentalresults".

Comment author: Jonathan_Elmer 02 October 2012 07:29:43PM 0 points [-]

A belief is true if it is consistent with reality.

Comment author: amcknight 02 October 2012 10:49:01PM 3 points [-]

I think this includes too much. It would include meaningless beliefs. "Zork is Pork." True or false? Consistency seems to me to be, at best, a necessary condition, but not a sufficient one.

Comment author: Jonathan_Elmer 04 October 2012 02:04:52AM 1 point [-]

Could you give me an example of a belief that is consistent with reality but false?

Comment author: Matt_Simpson 04 October 2012 06:47:52PM *  0 points [-]

Counterfactuals? If there's a unicorn on Mars, then I'm the president. Though it depends on what gets included in the term "reality."

Comment author: wedrifid 04 October 2012 07:26:03PM *  1 point [-]

Could you give me an example of a belief that is consistent with reality but false?

Counterfactuals? If there's a unicorn on Mars, then I'm the president. Though it depends on what gets included in the term "reality."

Neither of those things is an example of a belief that is consistent with reality but false. The belief "If there's a unicorn on Mars, then I'm the president" is true, consistent with reality, but also utterly worthless.

Counterfactuals are also not false. (Well, except for false counterfactual claims.) A (well formed) counterfactual claim is of the type "Apply this specified modifier to reality. If that is done then this conclusion will follow." Such claims can be true, albeit somewhat difficult to formally specify.

Comment author: Matt_Simpson 04 October 2012 10:04:54PM 2 points [-]

Counterfactuals are also not false. (Well, except for false counterfactual claims.) A (well formed) counterfactual claim is of the type "Apply this specified modifier to reality. If that is done then this conclusion will follow." Such claims can be true, albeit somewhat difficult to formally specify.

I didn't mean that all counterfactuals are false, I meant a specific example of a counterfactual claim that is false - e.g. If you put a unicorn on Mars, then I'll become president (which expresses the example I meant to give in the grandparent, not a logical if-then).

(Apologies for not clearly saying that)

Comment author: wedrifid 05 October 2012 12:04:47AM *  0 points [-]

Thank you, I understand what you are saying now.

For what it is worth, I would describe that counterfactual claim as inconsistent with reality and false. That is, when instantiating the counterfactual using the counterfactual operation as reasonably as possible, it would seem that reality as I know it is not such that the modified version would result in the consequences predicted.

(Note that with my understanding of the terms in question I think it is impossible to have something consistent with reality and false so it is unsurprising that given examples would not appear to me to meet those criteria simultaneously.)

Comment author: Matt_Simpson 05 October 2012 04:42:40AM 1 point [-]

Yeah, I think I agree after thinking about it a bit - I mean, why wouldn't we define the terms that way?

Comment author: shokwave 04 October 2012 06:23:53PM 1 point [-]

I take "consistent" to mean roughly "does not contain a contradiction", so "a belief that is consistent with reality" would mean something like "if you take all of reality as a collection of facts, and then add this belief, as a fact, to that collection, the collection won't contain a contradiction." It seems to me, if this is a fair representation of the concept, that some beliefs about the future are consistent with reality, but false. For example:

Humanity will be mining asteroids in 2024.

This is consistent with reality: there is at least one company talking about it, there are no obvious impossibilities (there are barriers, but we recognise they can be overcome with engineering)... but it's very probably false.

Comment author: amcknight 04 October 2012 04:59:51AM 1 point [-]

I'm definitely having more trouble than I expected. Unicorns have 5 legs... does that count? You're making me doubt myself.

Comment author: Jonathan_Elmer 04 October 2012 06:10:17AM 0 points [-]

Cool. : )

Is "Unicorns have 5 legs" consistent with reality? I would be quite surprised to find out that it was.

Comment author: amcknight 04 October 2012 05:44:41PM 0 points [-]

Well it doesn't seem to be inconsistent with reality.

Comment author: Jonathan_Elmer 05 October 2012 02:14:22AM -1 points [-]

The non-existence of unicorns makes the claim that they have legs, in whatever number, inconsistent with reality.

Comment author: DaFranker 04 October 2012 06:02:15PM 0 points [-]

It doesn't even have any referents in reality. It's not even a statement about whatever "reality" we live in, to the best of my knowledge. If it does assert the five-leggedness of unicorn creatures, with the implication that such creatures exist or could exist in reality, then it is false; it's inconsistent with what we know of reality, since there's no way such a creature could exist.

...I think, anyway. Not quite sure about that second part.

Comment author: Peterdjones 03 October 2012 10:03:56AM *  0 points [-]

Mutually inconsistent statements can be consistent with known facts, e.g. "Lady Macbeth had 2 children" and "Lady Macbeth had 3 children"... but that just exposes the problem with correspondence. If it isn't consistency... what is it?

Comment author: Larks 03 October 2012 09:49:50AM 1 point [-]

Better example, maybe: the continuum hypothesis

Comment author: Jonathan_Elmer 03 October 2012 02:09:37AM 0 points [-]

Tell me what Zork is and I'll let you know. : )

Comment author: Pavitra 05 October 2012 03:26:32AM 1 point [-]

Zork is a classic computer game (or game series, or game franchise; usage varies with context) from c.1980.

Comment author: Kaj_Sotala 02 October 2012 07:28:43PM 33 points [-]

I just realized that since I posted two comments that were critical over a minor detail, I should balance it out by mentioning that I liked the post - it was indeed pretty elementary, but it was also clear, and I agree about it being considerably better than The Simple Truth. And I liked the koans - they should be a useful device to the readers who actually bother to answer them.

Also:

Human children over the age of (typically) four, first begin to understand what it means for Sally to lose her marbles - for Sally's beliefs to stop corresponding to reality.

was a cute touch.

Comment author: [deleted] 03 October 2012 09:26:59PM 11 points [-]

Thank you for being positive.

I've been recently thinking about this, and noticed that despite things like "why our kind can't cooperate", we still focus on criticisms of minor points, even when there are major wins to be celebrated.

Comment author: Wei_Dai 02 October 2012 07:23:34PM 19 points [-]

There are some kinds of truths that don't seem to be covered by truth-as-correspondence-between-map-and-territory. (Note: This general objection is well know and is given as Objection 1 in SEP's entry on Correspondence Theory.) Consider:

  1. modal truths if one isn't a modal realist
  2. mathematical truths if one isn't a mathematical Platonist
  3. normative truths

Maybe the first two just argues for Platonism and modal realism (although I note that Eliezer explicitly disclaimed being a modal realist). The last one is most problematic to me, because some kinds of normative statements seem to be talking about what one should do given some assumed-to-be-accurate map, and not about the map itself. For example, "You should two-box in Newcomb's problem." If I say "Alice has a false belief that she should two-box in Newcomb's problem" it doesn't seem like I'm saying that her map doesn't correspond to the territory.

So, a couple of questions that seem open to me: Do we need other notions of truth, besides correspondence between map and territory? If so, is there a more general notion of truth that covers all of these as special cases?

Comment author: Viliam_Bur 05 October 2012 10:33:20AM *  6 points [-]

If I say "Alice has a false belief that she should two-box in Newcomb's problem" it doesn't seem like I'm saying that her map doesn't correspond to the territory.

The problem with Alice's belief is that it is incomplete. It's like saying "I believe that 3 is greater than" (end of sentence).

Even incomplete sentences can work in some contexts where people know how to interpret them. For example if we had a convention that all sentences ending with "greater than" have to be interpreted as "greater than zero", then in given context the sentence "3 is greater than" makes sense, and is true. It just does not make sense outside of this context. Without context, it's not a logical proposition, but rather a proposition template.

Similarly, the sentence "you should X" is meaningful in contexts which provide additional explanation of what "should" means. For a consequentialist, the meaning of "you should" is "maximizes your utility". For a theist, it could mean "makes Deity happy". For both of them, the meaning of "should" is obvious, and within their contexts, they are right. The sentence becomes confusing only when we take it out of context; when we pretend that the context is not necessary for completing it.

So perhaps the problem is not "some truths are not about map-territory correspondence", but rather "some sentences require context to be transformed into true/false expressions (about map-territory correspondence)".

Seems to me that this is somehow related to making ideas pay rent, in the sense that when you describe how you expect the idea to pay rent, you explain the context in the process.

Comment author: Bluehawk 26 November 2012 11:52:11AM 1 point [-]

At the risk of nitpicking:

"Makes Deity happy" sounds to me like a very specific interpretation of "utility", rather than something separate from it. I can't picture any context for the phrase "P should X" that doesn't simply render "X maximizes utility" for different values of the word "utility". If "make Deity happy" is the end goal, wouldn't "utility" be whatever gives you the most efficient route to that goal?

Comment author: Chrysophylax 15 January 2013 08:01:00PM -1 points [-]

Utility has a single, absolute, inexpressible meaning. To say "X gives me Y utility" is pointless, because I am making a statement about qualia, which are inherently incommunicable - I cannot describe the quale "red" to a person without a visual cortex, because that person is incapable of experiencing red (or any other colour-quale). "X maximises my utility" is implied by the statements "X maximises my deity's utility" and "maximising my deity's utility maximises my utility", but this is not the same thing as saying that X should occur (which requires also that maximising your own utility is your objective). Stripped of the word "utility", your statement reduces to "The statement 'If X is the end goal, and option A is the best way to achieve X, A should be chosen' is tautologous", which is true because this is the definition of the word "should".

Comment author: V_V 03 October 2012 10:49:02AM *  1 point [-]

Maybe the first two just argues for Platonism and modal realism (although I note that Eliezer explicitly disclaimed being a modal realist).

I think Yudkowsky is a Platonist, and I'm not sure he has a consistent position on modal realism, since when arguing on morality he seemed to espouse it: see his comment here.

For example, "You should two-box in Newcomb's problem." If I say "Alice has a false belief that she should two-box in Newcomb's problem" it doesn't seem like I'm saying that her map doesn't correspond to the territory.

I don't think that "You should two-box in Newcomb's problem." is actually a normative statement, even if it contains a "should": you can rephrase it epistemically as "If you two-box in Newcomb's problem then you will maximize your expected utility".

Therefore, if you say "Alice has a false belief that if she two-boxes in Newcomb's problem then she will maximize her expected utility" you are saying that her belief doesn't correspond to the mathematical constructs underlying Newcomb's problem. If you take the Platonist position that mathematical constructs exist as external entities ("the territory"), then yes, you are saying that her map doesn't correspond to the territory.

Comment author: TheOtherDave 03 October 2012 02:10:21PM 2 points [-]

I don't think that "You should two-box in Newcomb's problem." is actually a normative statement, even if it contains a "should": you can rephrase it epistemically as "If you two-box in Newcomb's problem then you will maximize your expected utility".

Well, sure, a utilitarian can always "rephrase" should-statements that way; to a utilitarian what "X should Y" means is "Y maximizes X's expected utility." That doesn't make "X should Y" not a normative statement, it just means that utilitarian normative statements are also objective statements about reality.

Conversely, I'm not sure a deontologist would agree that you can rephrase one as the other... that is, a deontologist might coherently (and incorrectly) say "Yes, two-boxing maximizes expected utility, but you still shouldn't do it."

Comment author: V_V 03 October 2012 02:57:41PM *  0 points [-]

I think you are conflating two different types of "should" statements: moral injunctions and decision-theoretical injunctions.

The statement "You should two-box in Newcomb's problem" is normally interpreted as a decision-theoretical injunction. As such, it can be rephrased epistemically as "If you two-box in Newcomb's problem then you will maximize your expected utility".

But you could also interpret the statement "You should two-box in Newcomb's problem" as the moral injunction "It is morally right for you to two-box in Newcomb's problem". Moral injunctions can't be rephrased epistemically, at least unless you assume a priori that there exist some external moral truths that can't be further rephrased.

The utilitarian in your comment is doing that. His actual rephrasing is "If you two-box in Newcomb's problem then you will maximize the expected universe cumulative utility". This assumes that:

  • This universe cumulative utility exists as an external entity

  • The statement "It is morally right for you to maximize the expected universe cumulative utility" exists as an external moral truth.

Comment author: Wei_Dai 03 October 2012 01:03:44PM 1 point [-]

I think Yudkowsky is a Platonist, and I'm not sure he has a consistent position on modal realism, since when arguing on morality he seemed to espouse it: see his comment here.

Thanks for the link. That does seem inconsistent.

I don't think that "You should two-box in Newcomb's problem." is actually a normative statement, even if it contains a "should": you can rephrase it epistemically as "If you two-box in Newcomb's problem then you will maximize your expected utility".

This comment should help you understand why I disagree. Does it make sense?

Comment author: V_V 03 October 2012 03:01:02PM 2 points [-]

This comment should help you understand why I disagree. Does it make sense?

I don't claim that all injunctions can be rephrased as epistemic statements. I claim that decision-theoretic injunctions can be rephrased as epistemic statements. Moral injunctions can't.

Comment author: Wei_Dai 03 October 2012 09:30:00PM *  0 points [-]

I don't claim that all injunctions can be rephrased as epistemic statements. I claim that decision-theoretic injunctions can be rephrased as epistemic statements. Moral injunctions can't.

I'm confused by your reply because the comment I linked to tried to explain why I don't think "You should two-box in Newcomb's problem" can be rephrased as an epistemic statement (as you claimed earlier). Did you read it, and if so, can you explain why you disagree with its reasoning?

ETA: Sorry, I didn't notice your comment in the other subthread where you gave your definitions of "decision-theoretic" vs "moral" injunctions. Your reply makes more sense with those definitions in mind, but I think it shows that the comment I linked to didn't get my point across. So I'll try it again here. You said earlier:

I don't think that "You should two-box in Newcomb's problem." is actually a normative statement, even if it contains a "should": you can rephrase it epistemically as "If you two-box in Newcomb's problem then you will maximize your expected utility".

A causal decision theorist (C) and an evidential decision theorist (E) have different definitions of "maximize your expected utility", and so when C says to E "you should two-box in Newcomb's problem" he is not just saying "If you two-box in Newcomb's problem then you will maximize your expected utility according to the CDT formula" since E wouldn't care about that. So my point is that "you should two-box in Newcomb's problem" is usually not a "decision-theoretical injunction" in your sense of the phrase, but rather a normative statement as I claimed.

Comment author: V_V 04 October 2012 12:07:59PM *  0 points [-]

A causal decision theorist (C) and an evidential decision theorist (E) have different definitions of "maximize your expected utility", and so when C says to E "you should two-box in Newcomb's problem" he is not just saying "If you two-box in Newcomb's problem then you will maximize your expected utility according to the CDT formula" since E wouldn't care about that. So my point is that "you should two-box in Newcomb's problem" is usually not a "decision-theoretical injunction" in your sense of the phrase, but rather a normative statement as I claimed.

I was implicitly assuming that we were talking in the context of EDT.

In general, you can say "Two-boxing in Newcomb's problem is the optimal action for you", where the definition of "optimal action" depends on the decision theory you use.

If you use EDT, then "optimal action" means "maximizes expected utility", hence the statement above is false (that is, it is inconsistent with the axioms of EDT and Newcomb's problem).

If you use CDT, then "optimal action" means "maximizes expected utility under a causality assumption". Hence the statement above is technically true, although not very useful, since the axioms that define Newcomb's problem specifically violate the causality assumption.
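
To make the disagreement concrete, here is a minimal sketch of the two calculations (an illustration only; the predictor accuracy and payoffs are invented numbers):

    # Toy Newcomb's problem: box A holds $1,000; box B holds $1,000,000
    # iff the predictor foresaw one-boxing. Assume 99% predictor accuracy.
    ACCURACY, SMALL, BIG = 0.99, 1_000, 1_000_000

    def edt_utility(action):
        # EDT conditions on the action as evidence about the prediction.
        p_full = ACCURACY if action == "one-box" else 1 - ACCURACY
        return p_full * BIG + (SMALL if action == "two-box" else 0)

    def cdt_utility(action, p_full):
        # CDT treats box B's content as causally fixed before the choice,
        # so p_full does not depend on the action taken.
        return p_full * BIG + (SMALL if action == "two-box" else 0)

    for action in ("one-box", "two-box"):
        print(action, edt_utility(action))       # EDT: 990,000 vs 11,000
        print(action, cdt_utility(action, 0.5))  # CDT: two-boxing adds $1,000
                                                 # for any fixed p_full

Under EDT the action changes the conditional probability of the million, so one-boxing wins; under CDT that probability is the same whichever action you take, so two-boxing always collects the extra thousand.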

So, which decision theory should you use? An answer like "you should use the decision theory that determines the optimal action without any assumption that violates the problem constraints" seems irreducible to an epistemic statement. But is that actually correct?

If you are studying actual agents, then the point is moot, since these agents already have a decision theory (in practice it will be an approximation of either EDT or CDT, or something else), but what if you want to improve yourself, or build an artificial agent?

In that case you evaluate the new decision theory according to the decision theory that you already have. Then, assuming that your current decision theory can in principle be described epistemically, you can say, for instance: "A decision theory that determines the optimal action without any assumption that violates the problem constraints is optimal for me".

If you want to suggest a decision theory to somebody who is not you, you can say: "A decision theory that determines the optimal action without any assumption that violates the problem constraints is optimal for you", or, more properly but less politely: "You, using a decision theory that determines the optimal action without any assumption that violates the problem constraints, are optimal for me".

Comment author: Wei_Dai 04 October 2012 11:12:26PM *  2 points [-]

Then you evaluate the new decision theory according to the decision theory that you already have.

I had similar thoughts before, but eventually changed my mind. Unfortunately it's hard to convince people that their solution to some problem isn't entirely satisfactory without having a better solution at hand. (For example, this post of mine pointing out a problem with using probability theory to deal with indexical uncertainty sat at 0 points for months before I made my UDT post which suggested a different solution.) So instead of trying harder to convince people now, I think I will instead try harder to figure out a better answer by myself (and others who already share my views).

Comment author: [deleted] 03 October 2012 10:11:00AM 1 point [-]

He says that counterfactuals do have a truth value, though IMO he's a bit vague about what that is (or maybe it's me who can't fully understand what he says).

Comment author: pragmatist 03 October 2012 12:18:38AM 2 points [-]

Michael Lynch has a functionalist theory of truth (described in this book) that responds to concerns like yours. His claim is that there is a "truth role" that is constant across all domains of discourse where we talk about truth and falsity of propositions. The truth role is characterized by three properties:

  1. Objectivity: The belief that p is true if and only if with respect to the belief that p, things are as they are believed to be.

  2. Norm of belief: It is prima facie correct to believe that p if and only if the proposition that p is true.

  3. End of inquiry: Other things being equal, true beliefs are a worthy goal of inquiry.

Lynch claims that, in different domains of discourse, there are different properties that play this truth role. For instance, when we're doing science it's plausible that the appropriate realizer of the truth role is some kind of correspondence notion. On the other hand, when we're doing mathematics, one might think that the truth role is played by some sort of theoretical coherence property. Mathematical truths, according to Lynch, satisfy the truth role, but not by virtue of correspondence to some state of affairs in our external environment. He has a similar analysis of moral truths.

I'm not sure whether Lynch's particular description of the truth role is right, but the functionalist approach (truth is a functional property, and the function can be performed by many different realizers) is very attractive to me.
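
(A programming analogy, mine rather than Lynch's, for truth as a functional property: the truth role is an interface, and different domains of discourse supply different realizers. All names below are invented for illustration.)

    # Functionalism about truth as an interface with many realizers.
    from abc import ABC, abstractmethod

    class TruthRole(ABC):
        @abstractmethod
        def is_true(self, proposition: str) -> bool:
            """Whatever plays the truth role must answer this."""

    class CorrespondenceTruth(TruthRole):
        # Realizer for empirical discourse: check against the world.
        def __init__(self, facts: set):
            self.facts = facts
        def is_true(self, proposition: str) -> bool:
            return proposition in self.facts

    class CoherenceTruth(TruthRole):
        # Realizer for, say, mathematics: check derivability from axioms.
        def __init__(self, axioms: set, derives):
            self.axioms, self.derives = axioms, derives
        def is_true(self, proposition: str) -> bool:
            return self.derives(self.axioms, proposition)

    empirical = CorrespondenceTruth({"snow is white"})
    print(empirical.is_true("snow is white"))  # True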

Comment author: Peterdjones 16 November 2012 08:27:41AM 0 points [-]

Me too, thanks for this.

Comment author: faul_sname 02 October 2012 11:22:07PM 2 points [-]

If I say "Alice has a false belief that she should two-box in Newcomb's problem" it doesn't seem like I'm saying that her map doesn't correspond to the territory.

It seems that way to me. Specifically, in that case I think you're saying that Alice (wrongly) expects her decision to be causally independent of the money Omega put in the boxes, and as such thinks that her expected utility is higher if she grabs both boxes.

Comment author: amcknight 02 October 2012 10:31:26PM 5 points [-]

I think a more general notion of truth could be defined as correspondence between a map and any structure. If you define a structure using axioms and are referencing that structure, then you can talk about the correspondence properties of that reference. This at least covers both mathematical structures and physical reality.

Comment author: Benquo 02 October 2012 07:46:33PM *  1 point [-]

I don't think 2 is answered even if you say that the mathematical objects are themselves real. Consider a geometry that labels "true" everything that follows from its axioms. If this geometry is consistent, then we want to say that it is true, which implies that everything it labels as "true" is true. And the axioms themselves follow from the axioms, so the mathematical system says that they're true. But you can also have another valid mathematical system in which one of those axioms is negated. This is a problem because it implies that something can be both true and not true.

Because of this, the sense in which mathematical propositions can be true can't be the same sense in which "snow is white" can be true, even if the objects themselves are real. We have to be equivocating somewhere on "truth".

Comment author: DuncanS 02 October 2012 10:23:29PM 4 points [-]

It's easy to overcome that simply by being a bit more precise - you are saying that such and such a proposition is true in geometry X. Meaning that the axioms of geometry X genuinely do imply the proposition. That this proposition may not be true in geometry Y has nothing to do with it.

It is a different sense of true in that it isn't necessarily related to sensory experience - only to the interrelationships of ideas.
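
A toy illustration of that relativization (my sketch; real derivability is faked here by axiom membership):

    # "True in system X" = derivable from X's axioms. No contradiction arises
    # when two systems disagree, because the truth predicate is relativized.
    EUCLIDEAN = frozenset({"incidence axioms", "parallel postulate"})
    HYPERBOLIC = frozenset({"incidence axioms", "negation of parallel postulate"})

    def true_in(system, proposition):
        return proposition in system  # stand-in for a real derivability check

    print(true_in(EUCLIDEAN, "parallel postulate"))   # True
    print(true_in(HYPERBOLIC, "parallel postulate"))  # False - and that's fine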

Comment author: Peterdjones 02 October 2012 08:49:38PM *  0 points [-]

You are tacitly assuming that Platonists have to hold that what is formally true (provable, derivable from axioms) is actually true. But a significant part of the content of Platonism is that mathematical statements are only really true if they correspond to the organisation of Plato's heaven. Platonists can say, "I know you proved that, but it isn't actually true". So there are indeed different notions of truth at play here.

Which is not to defend Platonism. The notion of a "real truth" which can't be publicly assessed or agreed upon in the way that formal proof can be is quite problematic.

Comment author: anotherblackhat 02 October 2012 06:31:22PM 1 point [-]

The "All possible worlds" picture doesn't include the case of a marble in both the basket and the box.

Comment author: thomblake 02 October 2012 06:39:48PM 8 points [-]

I think there was only one marble in the universe.

Comment author: faul_sname 02 October 2012 11:24:10PM *  4 points [-]

Technically, if you put the basket in the box (or vice versa), you could still have a marble in both the basket and the box with only one marble in the universe.

Comment author: DaFranker 03 October 2012 02:13:32PM 0 points [-]

This is definitely not the only method to achieve this, if you take "all possible worlds" literally and start playing with laws of physics.

Comment author: thomblake 03 October 2012 01:54:29PM 5 points [-]

You're technically correct. THE BEST KIND OF CORRECT.

Comment author: Armok_GoB 02 October 2012 08:18:37PM 7 points [-]

This sentence is hilarious out of context.

Comment author: wedrifid 02 October 2012 08:36:00PM 5 points [-]

This sentence is hilarious out of context.

Also presumably a true one, assuming he aims the 'was' correctly.

Comment author: thomblake 03 October 2012 01:57:49PM 0 points [-]

assuming he aims the 'was' correctly.

And the 'marble'. I would assume the word came about long after we started making things that could be described by it - tracking down the 'first' one might be really tricky. It could be as bad as trying to find the time when there was only one human.

Comment author: wedrifid 03 October 2012 02:36:50PM *  1 point [-]

And the 'marble'. I would assume the word came about long after we started making things that could be described by it - tracking down the 'first' one might be really tricky. It could be as bad as trying to find the time when there was only one human.

Possibly harder, given that objects resembling an archetypal marble more closely than the first deliberately created marbles probably existed elsewhere by chance. In fact, given the simplicity of the item and the material, marble-like objects probably existed a long time ago in a galaxy far far away. Humans, on the other hand, are sufficiently complex, arbitrary and anthropically selected that we can with reasonable confidence narrow 'first human' down to one of the direct ancestors of the surviving humans (or maybe the cousin of one of those ancestors if we are being cautious).

i.e. In addition to the 'where do you draw the line' question you also have 'if a marble-equivalent object falls in a forest and there is nobody there to hear it or ascribe it a purpose, is it really a marble?'. Then, unless you decide that spheres made out of marble aren't 'marbles' unless proximate intelligent agents intend them to be, you are left with an extremely complex and abstract application of theoretical physics, cosmology, geology and statistics.

I would probably start making an estimate by looking at when second generation planets first formed.

Comment author: DaFranker 03 October 2012 02:45:42PM 0 points [-]

I think this is gracefully resolved by adding the conditional that the object must have come into shape by causal intervention of a human mind which predicted creation of this physical form.

That just might be too many conditions and too complex a proposition, though.

Comment author: wedrifid 03 October 2012 02:53:37PM *  0 points [-]

I think this is gracefully resolved by adding the conditional that the object must have come into shape by causal intervention of a human mind which predicted creation of this physical form.

It has to be resolved one way or the other. They are both coherent questions, they just shouldn't be confused.

Comment author: DaFranker 03 October 2012 03:03:35PM 1 point [-]

True. I hadn't interpreted that as the point you were making, but in retrospect your comment makes sense if you had already thought of this.

Comment author: philh 03 October 2012 12:43:02AM 0 points [-]

To be precise, it is a presumably-true sentence about a presumably-true belief.

Comment author: earthwormchuck163 02 October 2012 08:22:34PM 2 points [-]

I would like to thank you for bringing my attention to that sentence without any context.

Comment author: [deleted] 02 October 2012 05:48:06PM *  6 points [-]

Highly Advanced Epistemology 101 for Beginners

The joke flew right over my head and I found myself typing "Redundant wording. Advanced Epistemology for Beginners sounds better."

Comment author: [deleted] 02 October 2012 05:38:11PM 1 point [-]

Highly Advanced Epistemology 101 for Beginners

What does it tell about me that I mentally weighed "Highly Advanced" on one scale pan and "101" and "for Beginners" on the other?

I would have inverted the colours in the “All possible worlds” diagram (but with a black border around it) -- light-on-black reminds me of stars, and thence of the spatially-infinite-universe-including-pretty-much-anything idea, which is not terribly relevant here, whereas a white ellipse with a black border reminds me of a classical textbook Euler-Venn diagram.

an infinite family of truth-conditions:

  • The sentence 'snow is white' is true if and only if snow is white.

  • The sentence 'the sky is blue' is true if and only if the sky is blue.

What does it tell about me that I immediately thought ‘what about sentences whose meaning depends on the context’? :-)

What does it tell about me that on seeing the right-side part of the picture just above the koan, my System 1 expected to see infinite regress and was disappointed when the innermost frame didn't include a picture of the guy, and that my System 2 then thought ‘what kind of issue EY is neglecting does this correspond to’?

my beliefs determine my experimental predictions, but only reality gets to determine my experimental results.

What does it tell about me that I immediately thought ‘what about placebo and stuff’ (well, technically it's aliefs that matter there, not beliefs, but not all of the readers will know the distinction)?

Comment author: Normal_Anomaly 11 December 2012 10:10:42PM 0 points [-]

What does it tell about me that I immediately thought ‘what about placebo and stuff’

Your beliefs about the functionality of a "medicine," and the parts of your physiology that make the placebo effect work, are both part of reality. Your beliefs can, in a few (really annoying!) cases, affect their own truth or falsity, but whenever this happens there's a causal chain leading from the neural structure in your head to the part of reality in question that's every bit as valid as the causal chain in the shoelace example.

Comment author: [deleted] 12 December 2012 12:08:25PM 0 points [-]

in a few (really annoying!) cases

I think that if you're human, these cases are way more common than ISTM certain people realize. So in such discussions I'd always make clear if I'm talking about actual humans, about future AIs, or about idealized Cartesian agents whose cognitive algorithms cannot affect the world in any way, shape or form until they act on them.

Comment author: Normal_Anomaly 14 December 2012 12:49:15AM 0 points [-]

Can I have a couple of examples other than the placebo effect? Preferably only one of which is in the class "confidence that something will work makes you better at it"? Partly because it's useful to ask for examples, partly because it sounds useful to know about situations like this.

Comment author: [deleted] 15 December 2012 12:17:17AM *  0 points [-]

Actually, pretty much all I had in mind was in the class "confidence that something will work makes you better at it" -- but looking up “Self-fulfilling prophecy” on Wikipedia reminded me of the Observer-expectancy effect (incl. the Clever Hans effect and similar). Some of Bostrom's information hazards also are relevant.

Comment author: MixedNuts 02 October 2012 05:46:37PM 0 points [-]

what about sentences whose meaning depends on the context

Ehn, the truth value depends on context too. "That girl over there heard what this guy just said" is true if that girl over there heard what this guy just said, false if she didn't, and meaningless if there's no girl or no guy or he didn't say anything.

what kind of issue EY is neglecting does this correspond to

Common knowledge, in general?

what about placebo and stuff

Beliefs are a strict subset of reality.

Comment author: [deleted] 02 October 2012 07:07:45PM 0 points [-]

what kind of issue EY is neglecting does this correspond to

Common knowledge, in general?

I was thinking more about stuff like, “but reality does also include my map, so a map of reality ought to include a map of itself” (which, as you mentioned, is related to my point about placebo-like effects).

Comment author: [deleted] 02 October 2012 04:19:31PM 2 points [-]

Suppose I have two different non-meaningful statements, A and B. Is it possible to tell them apart? On what basis? On what basis could we recognize non-meaningful statements as tokens of language at all?

Comment author: faul_sname 02 October 2012 11:27:26PM 0 points [-]

How are you encoding the non-meaningful statements? If they're encoded as characters in a string, then yes we can tell them apart (e.g. "fiurgrel" !== "dkaldjas").

Why do you want to tell them apart?

Comment author: MixedNuts 02 October 2012 05:39:32PM 6 points [-]

Connotation. The statement has no well-defined denotation, but people say it to imply other, meaningful things. Islam is a religion of peace!

Comment author: [deleted] 02 October 2012 07:10:45PM 1 point [-]

Good answer. So, if I've understood you, you're saying that we can recognize meaningless statements as items of language (and as distinct from one another even) because they consist of words that are elsewhere and in different contexts meaningful.

So for example I may have a function "...is green." where we can fill this in with true objects ("the tree"), false objects ("the sky"), and objects which render the resulting sentence meaningless, like "three". The function can be meaningfully filled out, and 'three' can be the object of a meaningful sentence ('three is greater than two'), but in this connection the resulting sentence is meaningless.

Does that sound right to you?

Comment author: Peterdjones 02 October 2012 08:06:08PM 0 points [-]

OTOH, there is no reason to go along with the idea that denotation (or empirical consequence) is essential to meaning. You could instead use your realisation that you actually can tell the difference between untestable statements to conclude that they are in fact meaningful, whatever warmed-over Logical Positivism may say.

Comment author: MixedNuts 06 October 2012 08:53:38AM 0 points [-]

It's not useful to know they are meaningful if you don't know the meaning.

Comment author: Peterdjones 08 October 2012 12:11:10PM 0 points [-]

You do know the meaning. Knowing the meaning is what tells you there is no denotation. You know there is no King of France because you know what "King" and "France" mean.

Comment author: wedrifid 06 October 2012 09:00:46AM 1 point [-]

It's not useful to know they are meaningful if you don't know the meaning.

I wouldn't agree with this. Knowing whether or not something is meaningful is potentially quite a lot of information.

Comment author: shminux 02 October 2012 05:15:01PM 1 point [-]

Is it possible to tell them apart?

Why would you want to?

Comment author: Peterdjones 02 October 2012 08:10:39PM 0 points [-]

What an odd thing to say. I can tell the difference between untestable sentences, and that's all I need to refute the LP verification principle. Stipulating a definition of "meaning" that goes beyond linguistic tractability doesn't solve anything, and stipulating that people shouldn't want to understand sentences about invisible gorillas doesn't either.

Comment author: shminux 02 October 2012 08:32:57PM 2 points [-]

invisible gorillas

Seems like we are not on the same page re the definition of meaningful. I expect "invisible gorillas" to be a perfectly meaningful term in some contexts.

Comment author: Peterdjones 02 October 2012 08:34:41PM 1 point [-]

I don't follow that, because it is not clear whether you are using the vanilla, linguistic notion of "meaning" or the stipulated LPish version.

Comment author: shminux 02 October 2012 09:24:53PM *  0 points [-]

I am not a philosopher or a linguist; to me, the meaning of a word or a sentence is the information that can be extracted from it by the recipient, which can be a person or a group of people, or a computer, maybe even an AI. Thus it is not something absolute. I suppose it is closest to an internal interpretation. What is your definition?

Comment author: Peterdjones 03 October 2012 09:18:16AM 1 point [-]

I am specifically trying not to put forward an idiosyncratic definition.

Comment author: Eugine_Nier 02 October 2012 05:33:49PM 1 point [-]

See this.

Comment author: shminux 02 October 2012 06:09:33PM 0 points [-]

Not sure how this is relevant, feel free to elaborate.

Comment author: incariol 02 October 2012 03:08:01PM *  1 point [-]

So... could this style of writing, with koans and pictures, be applied to transforming the majority of sequences into an even greater didactic tool?

Besides the obvious problems, I'm not sure how this would stand with Eliezer - they are, after all, his masterpiece.

Comment author: thomblake 02 October 2012 03:10:53PM 3 points [-]

his masterpiece

Really, more like his student work. It was "Blog every day so I will have actually written something" not "Blog because that is the ultimate expression of my ideas".

Comment author: Eliezer_Yudkowsky 02 October 2012 06:34:21PM 4 points [-]

Yep. The main problem would be that I'd been writing for years and years before then, and, alas for our unfair universe, also have a certain amount of unearned talent; finding somebody who can pick up the Sequences and improve them without making them worse, despite their obvious flaws as they stand, is an extremely nontrivial hiring problem.

Comment author: Bo102010 03 October 2012 01:18:07AM -2 points [-]

Not to mention that any candidate up to the task likely has more lucrative alternatives...

Comment author: Larks 02 October 2012 02:39:26PM *  2 points [-]

Reply: The abstract concept of 'truth' - the general idea of a map-territory correspondence - is required to express ideas such as: ...

Is this true? Maybe there's a formal reason why, but it seems we can informally represent such ideas without the abstract idea of truth. For example, if we grant quantification over propositions,

Generalized across possible maps and possible cities, if your map of a city is accurate, navigating according to that map is more likely to get you to the airport on time.

becomes

  • Generalized across possible maps and possible cities, if your map of a city says "p" if and only if p, navigating according to that map is more likely to get you to the airport on time.

To draw a true map of a city, someone has to go out and look at the buildings; there's no way you'd end up with an accurate map by sitting in your living-room with your eyes closed trying to imagine what you wish the city would look like.

becomes

  • To draw a map of a city such that the map says "p" if and only if p, someone has to go out and look at the buildings; there's no way you'd end up with a map that says "p" if and only if p by sitting in your living-room with your eyes closed trying to imagine what you wish the city would look like.

True beliefs are more likely than false beliefs to make correct experimental predictions, so if we increase our credence in hypotheses that make correct experimental predictions, our model of reality should become incrementally more true over time.

becomes

  • Beliefs of the form "p", where p, are more likely than beliefs of the form "p", where it is not the case that p, to make correct experimental predictions, so if we increase our credence in hypotheses that make correct experimental predictions, our model of reality should incrementally contain more assertions "p" where p, and fewer assertions "p" where not p, over time.
Comment author: Eliezer_Yudkowsky 02 October 2012 06:31:37PM 4 points [-]

Generalized across possible maps and possible cities, if your map of a city says "p" if and only if p

If you can generalize over the correspondence between p and the quoted version of p, you have generalized over a correspondence schema between territory and map, ergo, invoked the idea of truth, that is, something mathematically isomorphic to in-general Tarskian truth, whether or not you named it.
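
(A minimal sketch of what quantifying over that correspondence schema looks like; the sentences and facts below are toy stand-ins:)

    # Accuracy as a quantified correspondence schema: the map is a set of
    # quoted sentences, the territory a set of facts, and the map is accurate
    # when map-says-"p" iff p, for every proposition p under consideration.
    def accurate(map_sentences, territory, propositions):
        return all((p in map_sentences) == (p in territory)
                   for p in propositions)

    territory = {"airport north of downtown", "bank on 5th Ave"}
    good_map = {"airport north of downtown", "bank on 5th Ave"}
    bad_map = {"airport south of downtown", "bank on 5th Ave"}
    propositions = territory | bad_map

    print(accurate(good_map, territory, propositions))  # True
    print(accurate(bad_map, territory, propositions))   # False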

Comment author: endoself 02 October 2012 05:17:53PM *  3 points [-]

Well, yeah, we can taboo 'truth'. You are still using the titular "useful idea" though by quantifying over propositions and making this correspondence. The idea that there are these things that are propositions and that they can appear both in quotation marks and also appear unquoted, directly in our map, is a useful piece of understanding to have.

Comment author: selylindi 02 October 2012 02:26:01PM 10 points [-]

nit to pick: Rod and cone cells don't send action potentials.

Comment author: Eliezer_Yudkowsky 02 October 2012 06:33:25PM 2 points [-]

Can you amplify? I'd thought I'd looked this up.

Comment author: shminux 02 October 2012 06:51:01PM 19 points [-]

Photoreceptor cells produce graded potentials, not action potentials. The signal goes through a bipolar cell and a ganglion cell before finally spiking, in a rather processed form.

Comment author: Eliezer_Yudkowsky 02 October 2012 06:51:19PM 2 points [-]

Ah, thanks!

Comment author: yli 02 October 2012 02:22:42PM *  25 points [-]

I don't like the "post-utopian" example. I can totally expect differing sensory experiences depending on whether a writer is post-utopian or not. For example, if they're post-utopian, when reading their biography I would more strongly expect to read about them having been into utopian ideas when they were young, but having then changed their mind. And when reading their works, I would more strongly expect to see themes of the imperfectability of the world and Weltschmerz.

Comment author: Eliezer_Yudkowsky 03 October 2012 09:32:29PM 6 points [-]

I've edited the OP to try and compartmentalize off the example a bit more.

Comment author: TimS 03 October 2012 10:12:34PM 1 point [-]

Do you also think the label "Impressionist painter" is meaningless?

Comment author: Eliezer_Yudkowsky 03 October 2012 10:29:07PM 5 points [-]

I have no idea what Impressionism is (I am not necessarily proud of this ignorance, since for all I know it does mean something important). Do you think that a panel of artists would be able to tell who was and wasn't "Impressionist" and mostly agree with each other? That does seem like a good criterion for whether there's sensory data that they're reacting to.

Comment author: Kaj_Sotala 04 October 2012 05:21:42AM *  13 points [-]

Apparently even computers agree with those judgments (or at least cluster "impressionists" in their own group - I didn't read the paper, but I expect that the cluster labels were added manually).

ETA: Got the paper. Excerpts:

The dataset includes 994 paintings representing 34 painters, such that each painter has at least 19 images in the dataset. The painters represent several different schools of art such as Early, High, and Northern Renaissance, Mannerism, Baroque, Rococo, Romanticism, Impressionism, Post and Neo Impressionism, Abstract Expressionism, Surrealism, and Fauvism, as commonly defined by art historians. The images were downloaded from various online sources, and normalized to a size of 640,000 pixels while preserving the original aspect ratio. The paintings that were selected for the experiment are assumed to be all in their original condition.

[...] To make the analysis more meaningful for comparing similarities between artistic styles of painters, we selected for each painter paintings that reflect the signature artistic style of that painter. For instance, in Wassily Kandinsky collection we included only paintings representing his abstract expressionism signature artistic style, and did not include his earlier work such as “The-Blue-Rider”, which embodies a different artistic style.

The dataset is used such that in each run 17 different paintings per artist are randomly selected to determine the Fisher discriminant scores of the features, and two images from each painter are used to determine the distances between the images using the WND method [Shamir 2008; Shamir et al. 2008, 2009, 2010]. The experiment is repeated automatically 100 times, and the arithmetic means of the distances across all runs are computed. [...]

The image analysis method is based on the WND-CHARM scheme [Shamir 2008; Shamir et al. 2008], which was originally developed for biomedical image analysis [Shamir et al. 2008, 2009]. The CHARM [Shamir, 2008; Shamir et al. 2010] set of numerical image content descriptors is a comprehensive set of 4027 features that reflect very many aspects of the visual content such as shapes (Euler number, Otsu binary object statistics), textures (Haralick, Tamura), edges (Prewitt gradient statistics), colors [Shamir 2006], statistical distribution of the pixel intensities (multiscale histograms, first four moments), fractal features [Wu et al. 1992], and polynomial decomposition of the image (Chebyshev statistics). These content descriptors are described more thoroughly in Shamir [2008] and Shamir et al. [2008, 2009, 2010]. This scheme of numerical image content descriptors was originally developed for complex morphological analysis of biomedical imaging, but was also found useful for the analysis of visual art [Shamir et al. 2010; Shamir 2012].

An important feature of the set of numerical image content descriptors is that the color descriptors are based on a first step of classifying each pixel into one of 10 color classes based on a fuzzy logic model that mimics the human intuition of colors [Shamir 2006]. This transformation to basic color classes ensures that further analysis of the color information is not sensitive to specific pigments that were not available to some of the classical painters in the dataset, or to the condition and restoration of some of the older paintings used in this study.

[...] As the figure shows, the classical artists are placed in the lower part of the phylogeny, while the modern artists are clustered in the upper part. A clear distinction between those groups at the center reflects the difference between classical realism and modern artistic styles that evolved during and after the 19th century.

Inside those two broad groups, it is noticeable that the computer was able to correctly cluster artists that belong in the same artistic movements, and placed these clusters on the graph in a fashion that is largely in agreement with the analysis of art historians. For instance, the bottom center cluster includes the High Renaissance artists Raphael, Da Vinci, and Michelangelo, indicating that the computer analysis could identify that these artists belong in the same school of art and have similar artistic styles [O’Mahony 2006].

The Early Renaissance artists Ghirlandaio, Francesca, and Botticelli are clustered together to the left of the High Renaissance painters, and the Northern Renaissance artists Bruegel, Van Eyck, and Durer are placed above the High Renaissance. Further to the right, close to the High Renaissance, the algorithm placed three painters associated with the Mannerism movement, Veronese, Tintoretto, and El Greco, who were inspired by Renaissance artists such as Michelangelo [O’Mahony 2006]. Below the Mannerism painters the algorithm automatically grouped three Baroque artists, Vermeer, Rubens, and Rembrandt. Interestingly, Goya, a Rococo and Romanticism artist, is placed between the Mannerism and the Baroque schools. The Romanticism artists, Gericault and Delacroix, who were inspired by Baroque painters such as Rubens [Gariff 2008], are clustered next to the Baroque group.

The upper part of the phylogeny features the modern artists. The Abstract Expressionists Kandinsky, Rothko, and Pollock are grouped together, as it has been shown that abstract paintings can be automatically differentiated from figural paintings with high accuracy [Shamir et al. 2010]. Surrealists Dali, Ernst, and de Chirico are also clustered by the computer analysis. An interesting observation is that Fauvists Matisse and Derain are placed close to each other, between the Neo Impressionists and Abstract Expressionists clusters.

The neighboring clusters of Neo Impressionists Seurat and Signac and Post Impressionists Cezanne and Gauguin are also in agreement with the perception of art historians, as well as the cluster of Impressionists Renoir and Monet. These two artists are placed close to Vincent Van Gogh, who is associated with the Post Impressionism artistic movement. The separation of Van Gogh from the other Post Impressionist painters can be explained by the influence of Monet and Renoir on his artistic style [Walther and Metzger 2006], or by his unique painting style reflected by low-level image features that are similar to the style of Jackson Pollock [Shamir 2012], and could affect the automatic placement of Van Gogh on the phylogeny.
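
(Not the paper's WND-CHARM pipeline - just a toy sketch of the general recipe it describes: summarize each painter as a feature vector, then cluster hierarchically. The feature values below are invented.)

    # Toy version of the study's recipe: one (invented) feature vector per
    # painter, then hierarchical clustering on pairwise distances.
    import numpy as np
    from scipy.cluster.hierarchy import fcluster, linkage
    from scipy.spatial.distance import pdist

    painters = ["Monet", "Renoir", "Dali", "Ernst"]
    features = np.array([   # hypothetical color/texture/edge summaries
        [0.8, 0.2, 0.1],    # Monet
        [0.7, 0.3, 0.2],    # Renoir
        [0.2, 0.9, 0.6],    # Dali
        [0.3, 0.8, 0.7],    # Ernst
    ])

    tree = linkage(pdist(features), method="average")
    labels = fcluster(tree, t=2, criterion="maxclust")
    print(dict(zip(painters, labels)))
    # e.g. {'Monet': 1, 'Renoir': 1, 'Dali': 2, 'Ernst': 2} - the
    # Impressionists pair up, as do the Surrealists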

Comment author: TimS 03 October 2012 10:39:44PM 8 points [-]

I'm no art geek, but Impressionism is an art "movement" from the late 1800s. A variety of artists (Monet, Renoir, etc) began using similar visual styles that influenced what they decided to paint and how they depicted images.

Art critics think that artistic "movements" are a meaningful way of analyzing paintings, approximately at the level of usefulness that a biologist might apply to "species" or "genus." Or a historian of philosophy might talk about the school of thought known today as "Logical Positivism."

Do you think movements are a reasonable unit of analysis (in art, in literature, in philosophy)? If no, why not? If yes, why are you so hostile to the usage of labels like "post-utopian" or "post-colonialist"?

Comment author: Viliam_Bur 05 October 2012 10:50:24AM *  4 points [-]

Art critics think that artistic "movements" are a meaningful way of analyzing paintings, approximately at the level of usefulness that a biologist might apply to "species" or "genus."

The pictures made within an artistic movement have something in common. We should classify them by that something, not only by the movement, although the name of the movement can be used as a convenient label for the given cluster of picture-space.

If I give you a picture made by an unknown author, you can't classify it by the author's participation in given movements. But you can classify it by the contents of the picture itself. So even if we use the movement as a label for the cluster, it is better if we can also describe the typical properties of pictures within that cluster.

Just like when you find a random dog on a street, you can classify it as a member of the "dog" species, without taking a time machine and finding out whether the ancestors of this specific dog really were domesticated wolves. You can teach "dogs are domesticated wolves" at school, but this is not how you recognize dogs in real life.

So how exactly would you recognize "impressionist" paintings, or "post-utopian" books in real life, when the author is unknown? Without teaching this, you are not truly teaching impressionism or post-utopianism.

(In case of "impressionism", my rule of thumb is that the picture looks nice and realistic from a distance, but when you stand close to it, the details become somehow ugly. My interpretation of "impressionism" is: the work of authors who obviously realized that millimeter precision for a wall painting is overkill, and that you can make pictures faster and cheaper if you just optimize them for looking correct from a typical viewing distance.)

Comment author: TheOtherDave 05 October 2012 02:12:58PM 2 points [-]

I agree with you that there are immediately obvious properties that I use to classify an object into a category, without reference to various other historical and systemic facts about the object. For example, as you say, I might classify a work of art as impressionist based on the precision with which it is rendered, or classify an animal as a dog based on various aspects of its appearance and behavior, or classify food as nutritious based on color, smell, and so forth.

It doesn't follow that it's somehow better to do so than to classify the object based on the less obvious historical or systemic facts.

If I categorize an object as nutritious based on those superficial properties, and later perform a lab analysis and discover that the object will kill me if I eat it, I will likely consider my initial categorization a mistake.

If I share your rule of thumb about "impressionism", and then later realize that some works of art that share the property of being best viewed from a distance are consistently classed by art students as "pointillist" rather than "impressionist", and I further realize that when I look at a bunch of classed-as-pointillist and classed-as-impressionist paintings it's clear to me that paintings in each class share a family resemblance that they don't share with paintings in the other class, I will likely consider my initial rule of thumb a mistake.

Sometimes, the categorization I perform based on properties that aren't immediately apparent is more reliable than the one I perform "in real life."