
Skill: The Map is Not the Territory

Post author: Eliezer_Yudkowsky, 06 October 2012 09:59AM

Followup to: The Useful Idea of Truth (minor post)

So far as I know, the first piece of rationalist fiction - one of only two explicitly rationalist fictions I know of that didn't descend from HPMOR, the other being "David's Sling" by Marc Stiegler - is the Null-A series by A. E. van Vogt. In van Vogt's story, the protagonist, Gilbert Gosseyn, has mostly non-duplicable abilities that you can't pick up and use even if they're supposedly mental - e.g. the ability to use all of his muscular strength in emergencies, thanks to his alleged training. The main explicit-rationalist skill someone could actually pick up from Gosseyn's adventure is embodied in his slogan:

"The map is not the territory."

Sometimes it still amazes me to contemplate that this proverb was invented at some point, that some fellow named Korzybski invented it, and that this happened as late as the 20th century. I read van Vogt's story and absorbed that lesson when I was rather young, so to me this phrase sounds like a sheer background axiom of existence.

But as the Bayesian Conspiracy enters into its second stage of development, we must all accustom ourselves to translating mere insights into applied techniques. So:

Meditation: Under what circumstances is it helpful to consciously think of the distinction between the map and the territory - to visualize your thought bubble containing a belief, and a reality outside it, rather than just using your map to think about reality directly?  How exactly does it help, on what sort of problem?

...

...

...

Skill 1: The conceivability of being wrong.

In the story, Gilbert Gosseyn is most liable to be reminded of this proverb when some belief is uncertain; "Your belief in that does not make it so." It might sound basic, but this is where some of the earliest rationalist training starts - making the jump from living in a world where the sky just is blue, the grass just is green, and people from the Other Political Party just are possessed by demonic spirits of pure evil, to a world where it's possible that reality is going to be different from these beliefs and come back and surprise you. You might assign low probability to that in the grass-is-green case, but in a world where there's a territory separate from the map it is at least conceivable that reality turns out to disagree with you. There are people who could stand to rehearse this, maybe by visualizing themselves with a thought bubble, first in a world like X, then in a world like not-X, in cases where they are tempted to entirely neglect the possibility that they might be wrong. "He hates me!" and other beliefs about other people's motives seem to be a domain in which "I believe that he hates me" or "I hypothesize that he hates me" might work a lot better.

Probabilistic reasoning is also a remedy for similar reasons: Implicit in a 75% probability of X is a 25% probability of not-X, so you're hopefully automatically considering more than one world. Assigning a probability also inherently reminds you that you're occupying an epistemic state, since only beliefs can be probabilistic, while reality itself is either one way or another.
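The complementarity here can be made concrete in a few lines of code. This is a trivial sketch (the function name and the 75% figure are just illustrations, not anything from the text): a probability assignment is a description of an epistemic state that always allocates mass to more than one world.

```python
# Assigning P(X) = 0.75 implicitly assigns P(not-X) = 0.25: an epistemic
# state spreads probability mass across more than one possible world.

def credence(p_x):
    """Split your probability mass between the two possible worlds."""
    assert 0.0 <= p_x <= 1.0
    return {"X": p_x, "not-X": 1.0 - p_x}

worlds = credence(0.75)
print(worlds)  # both worlds are on the table, and the masses sum to 1
```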

Skill 2: Perspective-taking on beliefs.

What we really believe feels like the way the world is; from the inside, other people feel like they are inhabiting different worlds from you. They aren't disagreeing with you because they're obstinate, they're disagreeing because the world feels different to them - even if the two of you are in fact embedded in the same reality.

This is one of the secret writing rules behind Harry Potter and the Methods of Rationality. When I write a character, e.g. Draco Malfoy, I don't just extrapolate their mind, I extrapolate the surrounding subjective world they live in, which has that character at the center; all other things seem important, or are considered at all, in relation to how important they are to that character. Most books are told from only one character's viewpoint; when a book does shift viewpoints, it's strange how often the other characters seem to be living inside the protagonist's universe and to think mostly about things that are important to the main protagonist. In HPMOR, when you enter Draco Malfoy's viewpoint, you are plunged into Draco Malfoy's subjective universe, in which Death Eaters have reasons for everything they do and Dumbledore is an exogenous reasonless evil. Since I'm not trying to show off postmodernism, everyone is still recognizably living in the same underlying reality, and the justifications of the Death Eaters only sound reasonable to Draco, rather than having been optimized to persuade the reader. It's not like the characters literally have their own universes, nor is morality handed out in equal portions to all parties regardless of what they do. But different elements of reality have different meanings and different importances to different characters.

Joshua Greene has observed - I think this is in his Terrible, Horrible, No Good, Very Bad paper - that political discourse rarely gets beyond the point of lecturing naughty children who are just refusing to acknowledge the evident truth. As a special case, one may also appreciate internally that being wrong feels just like being right, unless you can actually perform some sort of experimental check.

Skill 3: You are less bamboozleable by anti-epistemology or motivated neutrality which explicitly claims that there's no truth.

This is a negative skill - avoiding one more wrong way to do it - and mostly about quoted arguments rather than positive reasoning you'd want to conduct yourself. Hence the sort of thing we want to put less emphasis on in training. Nonetheless, it's easier not to fall for somebody's line about the absence of objective truth, if you've previously spent a bit of time visualizing Sally and Anne with different beliefs, and separately, a marble for those beliefs to be compared-to. Sally and Anne have different beliefs, but there's only one way-things-are, the actual state of the marble, to which the beliefs can be compared; so no, they don't have 'different truths'.  A real belief (as opposed to a belief-in-belief) will feel true, yes, so the two have different feelings-of-truth, but the feeling-of-truth is not the territory.

To rehearse this, I suppose, you'd try to notice this kind of anti-epistemology when you ran across it, and maybe respond internally by actually visualizing two figures with thought bubbles and their single environment. Though I don't think most people who understood the core insight would require any further persuasion or rehearsal to avoid contamination by the fallacy.

Skill 4: World-first reasoning about decisions, a.k.a. the Tarski Method, a.k.a. the Litany of Tarski.

Suppose you're considering whether to wash your white athletic socks with a dark load of laundry, and you're worried the colors might bleed into the socks, but on the other hand you really don't want to have to do another load just for the white socks. You might find your brain selectively rationalizing reasons why it's not all that likely for the colors to bleed - there are no really new dark clothes in there, say - trying to persuade itself that the socks won't be ruined. At which point it may help to say:

"If my socks will stain, I want to believe my socks will stain;
If my socks won't stain, I don't want to believe my socks will stain;
Let me not become attached to beliefs I may not want."

To stop your brain trying to persuade itself, visualize that you are either already in the world where your socks will end up discolored, or already in the world where your socks will be fine, and in either case it is better for you to believe you're in the world you're actually in. Related mantras include "That which can be destroyed by the truth should be" and "Reality is that which, when we stop believing in it, doesn't go away". Appreciating that belief is not reality can help us to appreciate the primacy of reality, and either stop arguing with it and accept it, or actually become curious about it.
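One way to see what "world-first" buys you is to write the laundry decision down so that your honest probability, rather than the belief you'd prefer, drives the choice. This is a sketch with made-up numbers - the 30% stain probability and the cost figures are illustrative, not from the post:

```python
# World-first reasoning about the socks: fix your honest P(stain) first,
# then compare expected costs - no motivated belief required.
p_stain = 0.3            # your honest probability that the dark load bleeds
cost_ruined_socks = 10.0 # disutility of stained socks (arbitrary units)
cost_extra_load = 4.0    # disutility of running a separate white load

expected_cost_mixed = p_stain * cost_ruined_socks  # cost of risking it
expected_cost_separate = cost_extra_load           # cost of playing it safe

choice = ("wash together" if expected_cost_mixed < expected_cost_separate
          else "separate load")
print(choice)
```

The decision flips if your honest p_stain rises above 0.4, which is exactly the quantity your brain was trying to rationalize downward.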

Anna Salamon and I usually apply the Tarski Method by visualizing a world that is not-how-we'd-like or not-how-we-previously-believed, and ourselves as believing the contrary, and the disaster that would then follow.  For example, let's say that you've been driving for a while, haven't reached your hotel, and are starting to wonder if you took a wrong turn... in which case you'd have to go back and drive another 40 miles in the opposite direction, which is an unpleasant thing to think about, so your brain tries to persuade itself that it's not lost.  Anna and I use the form of the skill where we visualize the world where we are lost and keep driving.

Note that in principle, this is only one quadrant of a 2 x 2 matrix:

If you believe you're heading in the right direction:
- In reality, you're heading in the right direction: No need to change anything - just keep doing what you're doing, and you'll get to the conference hotel.
- In reality, you're totally lost: Just keep doing what you're doing, and you'll eventually drive your rental car directly into the sea.

If you believe you're lost:
- In reality, you're heading in the right direction: Alas! You spend 5 whole minutes of your life pulling over and asking for directions you didn't need.
- In reality, you're totally lost: After spending 5 minutes getting directions, you've got to turn around and drive 40 minutes the other way.
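The quadrants can also be folded into an ordinary expected-cost comparison. This is a sketch with invented numbers - the probability and the minute costs are illustrative, not taken from the post:

```python
# Expected cost (in minutes) of each action, given your honest P(lost).
p_lost = 0.4  # illustrative probability that you took a wrong turn

cost = {
    ("keep driving", "right direction"): 0,    # you arrive as planned
    ("keep driving", "lost"):            120,  # eventually, the sea
    ("ask directions", "right direction"): 5,  # 5 minutes you didn't need
    ("ask directions", "lost"):           45,  # 5 min asking + 40 min back
}

def expected_cost(action):
    return ((1 - p_lost) * cost[(action, "right direction")]
            + p_lost * cost[(action, "lost")])

for action in ("keep driving", "ask directions"):
    print(action, expected_cost(action))
```

The point of the exercise isn't the arithmetic; it's that filling in the "lost" column forces you to visualize the worlds your brain was trying not to consider.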

 

Michael "Valentine" Smith says that he practiced this skill by actually visualizing all four quadrants in turn, and that with a bit of practice he could do it very quickly, and that he thinks visualizing all four quadrants helped.

(Mainstream status here.)

Part of the sequence Highly Advanced Epistemology 101 for Beginners

Next post: "Rationality: Appreciating Cognitive Algorithms"

Previous post: "The Useful Idea of Truth"

Comments (171)

Comment author: thomblake 04 October 2012 01:59:09PM 28 points [-]

"grass is green" and "sky is blue" are always funny examples to me, since whenever I hear them I go check, and they're usually not true. Right now from my window, I can see brown grass and a white/gray sky.

So they're especially good examples, as people will actually use them as paradigms of indisputably true empirical propositions, and even those seem almost always to be a mismatch between the map and the territory.

Comment author: Error 22 March 2013 11:34:23AM 3 points [-]

I wish I could upvote this twice, just for pointing out an obvious error that I've never previously twigged on. I shall try to keep it close to the front of memory the next time I feel really certain about something.

Comment author: [deleted] 04 October 2012 02:27:52AM 21 points [-]

I've been enjoying the new set of Sequences. I wasn't around when the earlier Sequences were being written; it's like the difference between reading a series of books all in one go, versus being part of the culture, reading them one at a time, and engaging in discussion in between. So thanks to Eliezer for posting them!

I really liked how there was an ending koan in the last post. It prompted discussion. I tried to think of a good prompt to post for this one, but couldn't. Anyone have some good ideas?

Also, Skill #2 made me think of this optical illusion

Comment author: johnlawrenceaspden 08 October 2012 01:41:25PM 3 points [-]

I was planning to paint my boat today. There's already a coat of paint on it, drying. If I overpaint today, that's optimal. If I wait till tomorrow, then I'll have to sand it down first.

It looks like it might rain, but the forecast is good. I don't know what effect rain will have on newly applied paint, or indeed on the current partly dried surface.

Do I spend the afternoon painting the boat or carry on sitting in a coffee shop reading Less Wrong?

Comment author: RichardKennaway 08 October 2012 04:32:39PM 7 points [-]

LessWrong will still be there tomorrow. The optimal opportunity to paint the boat won't be.

Comment author: CCC 08 October 2012 01:44:38PM 3 points [-]

Is it possible to protect the boat from rain in some manner, such as leaving it under a roof?

Comment author: johnlawrenceaspden 08 October 2012 04:48:13PM 1 point [-]

Impractical, as it happens. I eventually solved the problem by going home, changing into painting clothes, cleaning brushes, arranging tools and stirring paint. At that point it started raining heavily. So I undid all that in the rain, changed back into dry clothes, went back to the coffee shop and am now reading Less Wrong again. I think I just failed rationality for ever.

Comment author: CCC 09 October 2012 12:27:43PM 3 points [-]

I don't think it's possible to fail rationality "for ever", as long as you are in a state where you can make observations, record memories, formulate goals, plan and take actions. Though you do seem to have been a bit unfortunate in the timing of the precipitation.

Comment author: wedrifid 09 October 2012 12:43:29PM 3 points [-]

I don't think it's possible to fail rationality "for ever"

Merely humanly impossible. If you are a more pure agent just assign probability "1" to enough things and you'll be set.

Comment author: CCC 10 October 2012 01:31:34PM 0 points [-]

Hmmm. It seems that I should add "as long as you are able to reassign all priors of 1 to priors of 0.999999999, and all priors of 0 to priors of 0.000000001" to my list of exceptions. (It won't fix the agent immediately, but it will place the agent in a situation of being able to fix itself, given sufficient observations and updates).
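The point under discussion can be checked with a toy Bayesian update (a sketch; the likelihood numbers are arbitrary): a prior of exactly 1 is immune to any evidence under Bayes' rule, while 0.999999999 can still be driven down given enough disconfirming observations.

```python
# Bayes' rule: a prior of exactly 1 (or 0) ignores all evidence forever,
# while a prior of 0.999999999 recovers after enough disconfirmation.
def bayes_update(prior, lik_if_true, lik_if_false):
    joint_true = prior * lik_if_true
    joint_false = (1 - prior) * lik_if_false
    return joint_true / (joint_true + joint_false)

for prior in (1.0, 0.999999999):
    p = prior
    for _ in range(10):  # ten observations, each 100x likelier if H is false
        p = bayes_update(p, 0.01, 1.0)
    print(prior, "->", p)  # 1.0 never moves; 0.999999999 collapses toward 0
```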

Comment author: Eugine_Nier 10 October 2012 10:57:45PM 2 points [-]

That's not the only problem. An agent that assigns equal probability to all possible experiences will never update.

Comment author: CCC 11 October 2012 07:07:27AM 1 point [-]

Oh, that's sneaky.

Perhaps a perfect agent should occasionally - very occasionally - perturb a random selection of its own priors by some very small factor (10^-10 or smaller) in order to avoid such a potential mathematical dead end?

Comment author: Eugine_Nier 12 October 2012 12:56:37AM 0 points [-]

Nice try, but random perturbations won't help here.

Comment author: arundelo 09 October 2012 02:00:23PM 1 point [-]

You may already know this, but the phrase "fail x forever" is a thing.

Comment author: [deleted] 04 October 2012 07:54:55PM 3 points [-]

I couldn't think of a koan-y question, but here is a discussion prompt.

Let's make a Worksheet!

Let's come up with some practice examples of the 2x2 matrix (such as the "Being Lost or Not" example in the OP), that people can fill out. The examples should be short (single paragraph) everyday type problems that people can relate to. Submit examples in the comments. I'll take the best and put them in a worksheet in Google docs, and link to it here.

That way, when people in the future come and read this post, they have an activity to help them practice it. Also, people can use them at meetups if they want. Worksheets, of course, aren't the BEST way to learn, but they're better than nothing.

Comment author: DaFranker 04 October 2012 08:23:15PM 13 points [-]

You're at work, and you find yourself wanting very badly to make a certain, particularly funny-but-possibly-taken-as-offensive remark to your boss. The comment feels particularly witty, quick-minded and insightful.

(trying to think of stuff that's fairly common and happens relatively often in everyday life)

Comment author: Alejandro1 04 October 2012 08:34:42PM 9 points [-]

You are leaving your home in the morning, to return in the evening; your day will involve quite a bit of walking and public transport. It is now warm and sunny, but you know that a temperature drop with heavy rains is forecasted for the afternoon. Looking out at the window and thinking of the walk in the sun and the crowded bus, you don't feel like carrying around a coat and umbrella. You start thinking maybe the forecast is wrong...

Comment author: army1987 05 October 2012 07:35:14PM *  3 points [-]

I put a pocket umbrella and/or a foldable raincoat into my handbag. Duh.

Comment author: DaFranker 05 October 2012 07:41:18PM 3 points [-]

Carrying around a handbag in the first place happens to be something that I find annoying and risky. I'm prone to leaving it in easy-to-notice, easy-to-steal places or outright forgetting it in some public location.

Comment author: army1987 05 October 2012 08:07:29PM 0 points [-]

Now that I think about that, that happened to me exactly once (as far as I can remember) with a handbag, though it happens to me very often¹ with other items such as keys, jackets, sweatshirts and sometimes my iPod. (I usually² eventually manage to recover them, but not always.) I guess that's because I'm more likely to immediately notice that I'm missing my bag than that I'm missing my keys.


  1. Around once per month on average.

  2. Around 90% of the times.

Comment author: Alejandro1 05 October 2012 07:41:35PM 2 points [-]

Yes, that is clearly the optimal solution. I was assuming you don't own those two items, or that you don't have a handbag the right size or don't want to use it - more plausible for a man than for a woman, I guess.

Comment author: [deleted] 06 October 2012 06:53:12PM 5 points [-]

What immediately comes to mind for me:

You are knitting a fitted garment. Let's say it's a sweater. You've been knitting for a while, and you're starting to get concerned it won't fit the intended recipient. You can't tell for sure, because your needle is too short to fully stretch it out, but you just have this feeling. This feeling you hope is wrong, because you don't want to rip out and re-do all the ribbing you've just knit...

Comment author: EvelynM 08 October 2012 01:36:23AM 1 point [-]

That's time for a new set of knitting needles, and empiricism. I have 60-inch cables.

Comment author: shminux 05 October 2012 08:27:46PM *  4 points [-]

You are an ex-smoker overcome with a sudden craving after a particularly bad day, and your helpful friend offers you a cigarette - "have just this one smoke!" - to relieve tension. You know that anything less than complete abstinence has a chance of kickstarting the habit.

Comment author: apotheon 07 October 2012 02:18:55AM *  -3 points [-]

If a stressful day is enough to give you a craving difficult to resist, I think that saying "anything less than complete abstinence has a chance of kickstarting the habit" is a misleading statement of how it works. It might be more accurate to say that every cigarette you have is one cigarette closer to having a habit you need to kick. It seems, in fact, that there's sort of a gradient of average craving from abstinence all the way up to two packs a day, with variances around those averages. It seems a bit obfuscatory to suggest that "complete abstinence" is the deciding factor, especially when considering the question "When does complete abstinence start? Why doesn't it start after the next cigarette?" After all, the "real" complete abstinence has already failed, if you had to quit smoking in the first place.

. . . but that's kind of off the topic of the worksheet example.

Comment author: Maelin 04 October 2012 08:52:51AM 3 points [-]

Sharing this sentiment. I'm particularly impressed with the cartoon diagrams. They're visually very appealing, and they encapsulate an idea in a way that takes just enough thought to untangle that I feel like it makes me engage with the conceptual message.

Comment author: DaFranker 04 October 2012 02:10:58PM 1 point [-]

Same here, I'm certainly happy that this new sequence is starting. I devoured the old sequences, but being forced to stop and digest these makes them feel more impactful.

I'd be curious to see how much more powerful the sequences could be if they all had Koans, too, especially if they were wrapped up in an interactive shell and you had to answer them before the rest of the article (and/or the next one(s)) would show up. Not as good as a Bayesian Dojo, but there doesn't seem to be enough Beisusenseitachi around to really be effective on that front.

Comment author: Eliezer_Yudkowsky 03 October 2012 10:05:58PM 11 points [-]

Mainstream status:

"The conceivability of being wrong" and "perspective-taking on beliefs" are old indeed; I wouldn't be the least bit surprised to find explicit precedent in Ancient Greece.

Skill 3 in the form "Trust not those who claim there is no truth" is widely advocated by modern skeptics fighting anti-epistemology.

Payoff matrices as used in the grid-visualization method are ancient; using the grid-visualization method in response to a temptation to rationalize was invented on LW as far as I currently know, as was the Litany of Tarski. (Not to be confused with Alfred Tarski's original truth-schemas.)

Comment author: lukeprog 04 October 2012 07:27:42AM 11 points [-]

"The conceivability of being wrong" aka "Consider the opposite" is the standard recommended debiasing technique in psychology. See e.g. Larrick (2004).

Comment author: Vaniver 03 October 2012 11:16:10PM *  7 points [-]

"The conceivability of being wrong" and "perspective-taking on beliefs" are old indeed; I wouldn't be the least bit surprised to find explicit precedent in Ancient Greece.

The most famous expression of this that I'm aware of originates with Oliver Cromwell:

I beseech you, in the bowels of Christ, think it possible you may be mistaken.

Arguably, Socrates's claims of ignorance are a precursor, but they may stray dangerously close to anti-epistemology. I'm not a good enough classical scholar to identify anything closer.

The grid-visualization method / Litany of Tarski was invented on LW as far as I currently know.

The grid-visualization method seems like a relatively straightforward application of the normal-form game, with your beliefs as your play and the state of the world as your opponent's play. The advocacy to visualize it might come from LW, but actually applying game theory to life has a (somewhat) long and storied tradition.

[edit] I agree that doing it in response to a temptation to rationalize is probably new to LW; doing it in response to uncertainty in general isn't.

Comment author: [deleted] 03 October 2012 10:48:07PM 4 points [-]

The grid-visualization method / Litany of Tarski was invented on LW as far as I currently know.

I've seen it before, used in treatments of Pascal's wager: believe in God x God exists = heaven, believe in God x God doesn't exist = wasted life... etc.

Can't cite specific texts, but it was definitely pre-LW for me, from people who had not heard of LW.

Comment author: Eliezer_Yudkowsky 03 October 2012 10:54:51PM 4 points [-]

Ah yes, sorry. Payoff matrices are ancient; the Tarski Method is visualizing one in response to a temptation to rationalize. Edited.

Comment author: MaoShan 04 October 2012 02:13:46AM 0 points [-]

That sounds like a good idea in two ways: It gives you practice at visualizing the alternatives (which is always good if it can be honed to greater availability/reflex by practice), and by choosing those specific situations, you are automatically providing real-world examples in which to apply it; that way, it is a practical skill.

Comment author: Manfred 03 October 2012 11:02:37PM *  0 points [-]

The intent seems different there, and that shapes the details. Pascal's wager isn't about how you act because of your beliefs - the belief is considered to be the action, and the outcomes are declared by fiat (or perhaps, fide) at the start of the problem, rather than modeled in your head as part of the purpose of the exercise.

Comment author: pragmatist 04 October 2012 06:24:00AM *  3 points [-]

The Litany of Tarski has connections to certain versions of the direction-of-fit model of beliefs and desires. The model is usually considered a descriptive attempt at cashing out the difference between the functional role played by beliefs and desires. Both beliefs and desires are intentional states, they have propositional content (we believe that p, we desire that p). According to the direction-of-fit model, the crucial difference between beliefs and desires is the relation between the content of these states and the world -- specifically, the direction of fit between the content and the world differs. In the case of beliefs, subjects try to fit the content to the world, whereas in the case of desires, subjects try to fit the world to the content.

However, some philosophers treat the direction-of-fit model not as descriptive but as normative. The model tells us that the representational contents of our beliefs and desires should be kept rigorously separate (don't let your conception of how the world is be contaminated by your conception of how you would like it to be) and that we should have different attitudes to the contents of these mental states. Here's Mark Platts, from his book Ways of Meaning:

Beliefs aim at being true, and their being true is their fitting the world; falsity is a decisive failing in a belief, and false beliefs should be discarded; beliefs should be changed to fit with the world, not vice versa. Desires aim at realization, and their realization is the world fitting with them; the fact that the indicative content of a desire is not realized is not yet a failing in the desire, and not yet any reason to discard the desire; the world, crudely, should be changed to fit with our desires, and not vice versa.

Also related (but not referring to the map/territory distinction as explicitly) is what Ken Binmore calls "Aesop's principle" (in reference to the fable in which a fox who cannot reach some grapes decides that the grapes must be sour). From his book Rational Decisions:

[An agent's] preferences, her beliefs, and her assessments of what is feasible should all be independent of each other.

For example, the kind of pessimism that might make [the agent] predict that it is bound to rain now that she has lost her umbrella is irrational. Equally irrational is the kind of optimism that Voltaire was mocking when he said that if God didn't exist, it would be necessary to invent Him.

I should note that Binmore is talking about terminal preferences here. Of course, instrumental preferences need not (indeed, should not) be independent of our beliefs about the world and our assessments of what is feasible.

Comment author: bryjnar 04 October 2012 11:13:27AM 0 points [-]

As someone else engaged with mainstream philosophy, I'd like to mention that I personally think that direction of fit is one of the biggest red herrings in modern philosophy. It's pretty much just an unhelpful metaphor. Just sayin'.

Comment author: Decius 06 October 2012 12:13:50AM 2 points [-]

I never saw it as a real 'model', just a way of clarifying definitions, and making statements such as "I believe that {anything not a matter of fact}" null. It provides a way to distinguish between "I don't believe in invisible dragons in my basement." and "I don't believe in {immoral action}". I suspect the original intention was to validate a philosopher who got fed up with someone who hid behind 'I don't believe in that' in a discussion, after which the philosopher responded with evidence that the subject under discussion was factual.

Comment author: pragmatist 04 October 2012 12:44:01PM 1 point [-]

It's really not my area at all, so I don't really have any well-developed opinions on this. My comment wasn't meant to be an endorsement of the model, I was just pointing out a similarity with a view in the mainstream literature. From a pretty uninformed perspective, it does seem to me that the direction-to-fit thing doesn't really get at what's important about the distinct functional roles of belief and desire, so I'm inclined to agree with your assessment.

Comment author: bryjnar 04 October 2012 06:46:31PM 0 points [-]

Yeah, I did realise that you weren't necessarily supporting it, I just wanted to make it clear that it's not orthodoxy in mainstream philosophy! Sorry if it came off as a bit critical.

Comment author: Unnamed 04 October 2012 05:18:50AM *  1 point [-]

What we really believe feels like the way the world is; from the inside, other people feel like they are inhabiting different worlds from you.

In psychology, this is called construal. A person's beliefs, emotions, behaviors, etc. depend on their construal (understanding/interpretation) of the world.

Comment author: MarkL 07 October 2012 10:49:43PM 1 point [-]

Some versions of cognitive behavioral therapy ask you to write down the pros and cons of holding a particular belief.

Comment author: Morendil 04 October 2012 09:04:19AM *  21 points [-]

Implicit in a 75% probability of X is a 25% probability of not-X

This may strike everyone as obvious...

My experience with the GJP suggests that it's not. Some people there, for instance, are on record as assigning a 75% probability to the proposition "The number of registered Syrian conflict refugees reported by the UNHCR will exceed 250,000 at any point before 1 April 2013".

Currently this number is 242,000, the trend in the past few months has been an increase of 1000 to 2000 a day, and the UNHCR have recently provided estimates that this number will eventually reach 700,000. This was clear as early as August. The kicker is that the 242K number is only the count of people who are fully processed by the UNHCR administration and officially in their database; there are tens of thousands more in the camp who only have "appointments to be registered".

It's hard for me to understand why people are not updating to at least 99% - maybe not 100%, but these are the only answers worth considering. To state your probability as 85% or 91% (as some have quite recently) is to say, "There is a one in ten chance that the Syrian conflict will suddenly stop and all these people will go home, all in the next few days before the count goes over."

This is kind of like saying "There is a one in ten chance Santa Claus will be the one distributing the presents this year."

It's really, really weird that in a contest aimed at people who understand the notion of probability and calibration, people presumed to be would-be rationalists, you'd get this kind of "Clack".

I can only speculate as to what's going on there, but I think it must be along the following lines: queried for a probability, people are translating something like "Sure, it's gonna happen" into a biggish number, and reporting that. They are totally failing to flip the question around and visualize what would have to happen to make it true. (Perhaps, too, people have been so strongly cautioned by Tetlock's writing against being overconfident that they reflexively shy away from the extreme numbers.)

My experience there casts some doubt on the statement "Probabilistic thinking is a remedy (...) so you're hopefully automatically considering more than one world."

At the very least, we must make a distinction between "express your beliefs in numerical terms and label these numbers 'probabilities'" on the one hand, and "actually organize your thinking so as to respect the axioms of probability" on the other. Just because you use "75%" as a shorthand for "I'm pretty sure" doesn't mean you are thinking probabilistically; you must train the skill of seeing that for some events "25%" also counts as "I'm pretty sure".
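Morendil's point about the cost of shying away from extreme numbers can be made quantitative with the Brier score, the squared-error measure commonly used in forecasting tournaments. A sketch, using the forecast values quoted above:

```python
# Brier score: squared distance between forecast and outcome (0 = perfect).
def brier(forecast, outcome):
    """outcome is 1 if the event happened, 0 if it didn't."""
    return (forecast - outcome) ** 2

# If the refugee count does pass 250,000, as the trend makes near-certain:
for forecast in (0.75, 0.85, 0.91, 0.99):
    print(forecast, brier(forecast, 1))
```

When the near-certain event happens, 0.85 costs 0.0225 while 0.99 costs 0.0001 - reflexive "anti-overconfidence" at the extremes is over two hundred times more expensive than the calibrated answer.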

Comment author: MagnetoHydroDynamics 05 October 2012 04:25:56PM 1 point [-]

I think you are entirely right, that people don't visualize.

Comment author: Omegaile 07 October 2012 06:48:08AM 2 points [-]

I think you are 75% right.

Comment author: MagnetoHydroDynamics 08 October 2012 11:45:35AM 3 points [-]

Let's do 1000 trials and see if it converges, verify that p<0.05, write a paper and publish.

Comment author: bentarm 11 October 2012 10:58:09PM 0 points [-]

My experience with the GJP suggests that it's not. Some people there, for instance, are on record as assigning a 75% probability to the proposition "The number of registered Syrian conflict refugees reported by the UNHCR will exceed 250,000 at any point before 1 April 2013".

I am a registered participant in one of the Good Judgement Project teams. I have literally no idea what my estimates of the probabilities are for quite a few of the events for which I have 'current' predictions. Depending on what you mean by 'some people', you might just be picking up on the fact that some people just don't care as much about the accuracy of their predictions on GJP as you do.

Comment author: Morendil 11 October 2012 11:27:18PM *  0 points [-]

some people just don't care as much about the accuracy of their predictions on GJP

Agreed. Insofar as GJP is a contest, and the objective is to win, my remarks should be read with the implied proviso "assuming you care about winning". In the prelude to the post where I discuss my GJP participation in more detail I used an analogy with playing Poker. I acknowledge that some people play Poker for the thrill of the game, and don't actually mind losing their money - and there are variable levels of motivation all the way up to dedicated players.

Comment author: [deleted] 09 October 2012 03:09:19AM 7 points [-]

I enjoy having posts which show how to apply rational thought processes to everyday situations, so thank you.

However, there is a failure mode of the 2x2 matrix method that I think should be mentioned-- it ignores the probabilities of the various options, and focuses solely on their payoffs (example given below). I think when making the 2x2 matrix, there should be an explicit step where you assign probabilities to the beliefs in question, and keep those probabilities in mind when making your decision.

I think this is obvious to most long-time LWers, but worry about someone new coming across this decision method, and utilizing it without thinking it through.

Here is an example of how this can backfire, otherwise:

Your new babysitter seems perfect in every way: Clean background check, and her organization skills help offset your absent-mindedness. One day, you notice your priceless family heirloom diamond earrings aren't where you normally keep them. The probability is much higher that you accidentally misplaced them (you have a habit of doing that), but there is a small suspicion on your part that the babysitter might have taken them.

You BELIEVE she took them, in REALITY she took them- You fire the babysitter and have to find another.

You BELIEVE she took them, in REALITY you misplaced them- You fire the babysitter who was innocent after all.

You BELIEVE you misplaced them, in REALITY she took them- Your babysitter isn't as good or honest as you think she is! Not only might she continue stealing from you, but more importantly, you continue to leave your child under the care of a dishonest person. BAD THINGS MIGHT HAPPEN TO YOUR BABY!

You BELIEVE you misplaced them, in REALITY you misplaced them- You keep your nice babysitter. Perhaps you come across your earrings later.
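The commenter's fix can be sketched by folding probabilities into the matrix as an expected-utility calculation. This is my illustration, with made-up payoff numbers: even though the keep-a-thief cell has by far the worst payoff, the low probability that she took them can still make keeping the babysitter the better bet.

```python
# All probabilities and payoffs below are invented for illustration.
P_TOOK = 0.05        # small suspicion she took them
P_MISPLACED = 0.95   # you habitually misplace things

# utility[action][world]
utility = {
    "fire": {"took": -10,  "misplaced": -60},   # lose a good sitter, perhaps unjustly
    "keep": {"took": -500, "misplaced": 0},     # child left with a thief vs. status quo
}

def expected_utility(action):
    return (P_TOOK * utility[action]["took"]
            + P_MISPLACED * utility[action]["misplaced"])

# Despite the -500 worst case, the expected-utility comparison favors keeping her.
best = max(utility, key=expected_utility)
assert best == "keep"
```

A pure worst-case reading of the matrix would say "fire"; weighting by probability reverses that, which is the commenter's point.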

Comment author: lukeprog 05 October 2012 08:21:48AM 7 points [-]

It's too bad that these how-to posts tend to be not as popular as the philosophical posts. Good philosophy is important but I doubt it can produce rationalists of the quality that can be produced by consistent rationalist skills-training over months and years.

Comment author: aaronsw 05 October 2012 08:27:40PM 7 points [-]

Philosophy posts are useful if they're interesting whereas how-to's are only useful if they work. While I greatly enjoy these posts, their effectiveness is admittedly speculative.

Comment author: Eliezer_Yudkowsky 06 October 2012 01:18:47AM 9 points [-]

Philosophy posts are enjoyable if they're interesting. They're useful if they're right.

Comment author: chaosmosis 06 October 2012 03:11:25AM 2 points [-]

They can also inspire tangentially related thoughts which are enjoyable or useful. This is why Calculus is helpful even to people who don't do math for a living or for fun.

Comment author: Eliezer_Yudkowsky 06 October 2012 07:52:54AM 3 points [-]

...I honestly can't remember anymore what it's like to look at the world without knowing calculus. How do you figure out how any rate of change relates to anything else?

Comment author: wedrifid 06 October 2012 09:06:23AM *  4 points [-]

...I honestly can't remember anymore what it's like to look at the world without knowing calculus. How do you figure out how any rate of change relates to anything else?

By, basically, intuitively grasping the most rudimentary aspects of and implications of calculus. (Or by learning the relationship explicitly or by learning one such relationship and intuitively extrapolating principles from one domain to another.)

Comment author: Pentashagon 08 October 2012 06:05:37PM 1 point [-]

It might be good practice to imagine maps without calculus since so many people use them. I wouldn't be surprised if beliefs in things like global warming were divided by the knows-calculus line. How could you even explain climate change to someone who didn't understand that Temperature = dEnergy_in/dt - dEnergy_out/dt + C?
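The comment's equation reads more naturally as an energy-balance differential equation. Here is a toy forward-Euler sketch (my construction, with arbitrary constants; not a climate model): temperature rises until outgoing power balances incoming power.

```python
def simulate(power_in, heat_capacity=10.0, loss_coeff=0.5, t_env=0.0,
             steps=10000, dt=0.01):
    """Integrate C * dT/dt = P_in - k * (T - T_env) with forward Euler."""
    temp = t_env
    for _ in range(steps):
        power_out = loss_coeff * (temp - t_env)
        temp += dt * (power_in - power_out) / heat_capacity
    return temp

# Steady state is T_env + P_in / k; raising the input power raises the equilibrium.
assert abs(simulate(5.0) - 10.0) < 0.1
assert simulate(6.0) > simulate(5.0)
```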

Comment author: TheOtherDave 08 October 2012 06:20:48PM 0 points [-]

How could you even explain climate change to someone who didn't understand that Temperature = dEnergy_in/dt - dEnergy_out/dt + C?

I would probably start by talking about electric heaters and how they convert energy to heat, and generalize a little to talk about the atmosphere being kind of like that. The harder part is explaining that the same energy input can cause not only temperature increases, but changes to wind and precipitation patterns.

Comment author: wedrifid 06 October 2012 09:12:03AM 2 points [-]

Philosophy posts are enjoyable if they're interesting. They're useful if they're right.

Philosophy being right isn't enough to make it necessarily useful. There is a potentially unbounded space of philosophical concepts to explore and most of them are not of instrumental use at this particular time. We can't say much more than "They are useful if they are right and they are, well, in some way useful".

(I hesitate before pointing out the other side of the equation where a philosophy can be useful while actually being wrong because in such cases, and when unbounded processing capability is assumed, there is always going to be a 'right' philosophical principle that is at least as useful even if it is more complex, along the lines of randomized algorithms being not-better-than more thought out deterministic ones.)

Comment author: RobinZ 05 October 2012 03:56:29PM 6 points [-]

Thinking about the map-territory distinction reminds me of Knoll's Law of Media Accuracy:

Everything you read in the newspapers is absolutely true except for the rare story of which you happen to have firsthand knowledge.

Comment author: Randy_M 08 October 2012 08:09:09PM 5 points [-]

"Just keep doing what you're doing, and you'll eventually drive your rental car directly into the sea"

This works as a rhetorical device, but if one were to try to accurately weigh two options against each other, it might pay not to use reductio ad absurdum and have something like "Continue on in the wrong direction until the ETA were passed or events made the incorrect direction obvious, then try a new route, having lost up to ETA." Which is still bad, but if no safe/available places to stop for directions presented themselves, might not be the worst option. But of course, by using the skill in the article, it would be a considered risk, and not an unexpected occurrence.

Anyway, useful and easy to follow piece and I look forward to the next.

Comment author: Kaj_Sotala 04 October 2012 10:06:03AM 5 points [-]

When I write a character, e.g. Draco Malfoy, I don't just extrapolate their mind, I extrapolate the surrounding subjective world they live in, which has that character at the center; all other things seem important, or are considered at all, in relation to how important they are to that character. Most other books are never told from more than one character's viewpoint, but if they are, it's strange how often the other characters seem to be living inside the protagonist's universe and to think mostly about things that are important to the main protagonist. In HPMOR, when you enter Draco Malfoy's viewpoint, you are plunged into Draco Malfoy's subjective universe, in which Death Eaters have reasons for everything they do and Dumbledore is an exogenous reasonless evil.

This is an awesome trick, and I'll have to use it more explicitly when writing various characters. (I already did somewhat, but I'm not sure if I've explicitly thought of it in these terms.)

Comment author: gwern 08 October 2012 06:31:45PM *  4 points [-]

David Weber places a lot of emphasis on this too; I wrote down what I could remember of his discussion of the topic at ICON 2012:

Then Weber went off on a tangent I really appreciated: while working 4 assistantships at a university, he would tell his class that Hitler's actions were all highly rational & understandable if one understood his world view. An important writing rule: have no simplistic villains. The villains must have good reasons for everything they do.

Weber gave an example: the Mesan genetic slavers in his Honor novels. They are breeding a master race, and during the centuries, they have blighted the lives of billions - but they are all still human. So he described a scene from a book:

The leader and his wife are preparing for dinner in their rooms. The wife - "Oh honey, don't wear that red shirt." The husband: "but that's my favorite shirt!" Wife: "I know, and hopefully the geneticists can do something about your taste. And you're not wearing the red shirt."

(Everyone laughed).

A good writer makes bad guys comprehensible; hence, some fans come to opposite conclusions about Weber's politics, based sometimes, he said, on the same exact passages from his novels.

Comment author: ArisKatsaris 04 October 2012 10:37:43AM *  7 points [-]

I think that part of this advice can be restated as "every character must think themselves the protagonist of their own lives" which I think I remember Orson Scott Card giving; though Eliezer's advice more explicitly focuses on how this affects their models of the universe.

A decade back, I was consciously attempting to use OSC's (if that's who I got it from) advice in a piece of Gargoyles fanfiction "Names and Forms" set in mythological-era Crete. In that story I had a character who saw everything through the prism of ethnic relations (Eteocretans vs Achaeans vs Lycians), and there's another who because of his partly-divine heritage couldn't help thinking about how gods and human and gargoyles interact with each other, and Daedalus in his cameo appearance treated everything as just puzzles to be solved, whether it's a case of murder or a case of how-to-build-a-folding-chair... (Note: It's not a piece of rationalist fanfiction, nor does it involve anything particularly relevant to LessWrong-related topics.)

Comment author: Morendil 04 October 2012 10:49:55AM 3 points [-]

I think that part of this advice can be restated as "every character must think themselves the protagonist of their own lives" which I think I remember Orson Scott Card giving

That's a very nice way of stating it, and in application to real life is one of my personal mantras. It helps me a lot, for instance in avoiding fundamental attribution error.

Comment author: Pentashagon 08 October 2012 06:14:15PM 1 point [-]

It's also an awesome trick for interacting with real people who have an actual subjective world-view different from mine.

Unfortunately my mind can only effectively hold one human-size worldview at a time and so I am often confused by other people's actions or at best I second-guess my imagined cause of their behavior.

Comment author: Jonathan_Graehl 06 October 2012 10:11:55PM *  4 points [-]

The "koan" prompts are nice.

But please be responsible in employing them. Whatever the prompted reader generates as their own idea, and finds also in the following text, will be believed without the usual skepticism (at least, I noticed this "of course!" feeling). So be sure to write only true responses :)

Comment author: AlexMennen 05 October 2012 08:42:29PM 4 points [-]

My koan answer: a map-territory distinction can help you update in response to information about cognitive biases that could be affecting you. For instance, if I learn that people tend to be biased towards thinking that people from the Other Political Party are possessed by demonic spirits of pure evil, with a map-territory distinction, I can adjust my confidence that Republicans are possessed by demonic spirits of pure evil downwards, since I know that the cognitive bias means that my map is likely to be skewed from reality in a predictable direction.

Comment author: shminux 05 October 2012 08:56:57PM 0 points [-]

I can adjust my confidence that Republicans are possessed by demonic spirits of pure evil

If you assign a non-infinitesimal probability to this literal case, odds are that your map is so bad, you don't have much to update to begin with.

Comment author: AlexMennen 06 October 2012 12:53:49AM 1 point [-]

Yes, I was not being literal.

Comment author: RichardKennaway 04 October 2012 07:04:11AM *  4 points [-]

There are people who could stand to rehearse this, maybe by visualizing themselves with a thought bubble

Or with this teaching aid designed by Korzybski. He called the skill "consciousness of abstraction" and distinguishes more levels than "map" and "reality".

Comment author: buybuydandavis 06 October 2012 10:03:47PM *  1 point [-]

I've found myself pointing people to Korzybski a lot lately.

It has been troubling me for a while that EY starts with a couple of the most basic statements of Korzybski, and then busies himself reinventing the wheel, instead of at least starting from what Korzybski and the General Semantics crowd have already worked out.

EY is clearing brush through the wilderness, while there's a paved road 10 feet away, and you're the first person on the list who has seemed to notice.

There have been other smart people in the world. You can stand on the shoulders of giants, stand on the shoulders of stacks of midgets, or you can just keep on jumping in the air and flapping your arms.

Comment author: RichardKennaway 07 October 2012 08:51:01AM *  7 points [-]

Korzybski, for all his merits, is turgid, repetitive, and full of out of date science. The last is not his fault: he was as up to date for his time as Eliezer is now, but, for example, he was writing before the Bayesian revolution in statistics and mostly before the invention of the computer. Neither topic makes any appearance in his magnum opus, "Science and Sanity". I wouldn't recommend him except for historical interest. People should know about him, which is why I referenced him, and his work did start a community that continues to this day. However, having been a member of one of the two main general semantics organisations years back, I cannot say that he and they produced anything to compare with Eliezer's work here. If Eliezer is reinventing the wheel, compared with Korzybski he's making it round instead of square, and has thought of adding axle bearings and pneumatic tyres.

Some things should be reinvented.

Comment author: buybuydandavis 07 October 2012 09:33:53AM *  2 points [-]

EY talks about things they don't, but on the Map is Not the Territory, I don't see that EY or the usual discussions here have met Korzybski's level for consciousness of abstraction, let alone surpassed it. General Semantics provides a tidy metamodel of abstracting, identifies and names important concepts within the model, and adds some basic tools and practices for semantic hygiene. I find them generally useful, and I generally recommend them.

For consciousness of abstraction, where and how has EY exceeded Korzybski? What are new and improved bits? Where was K wrong, and EY right?

Comment author: RichardKennaway 07 October 2012 11:36:50AM *  8 points [-]

On second thoughts, when I said "[not] anything to compare with" that was wildly exaggerated. Of course they're comparable -- we are comparing them, and they are not so far apart that the result is a slam-dunk. But I don't want to get into a blue vs. green dingdong (despite having already veered in that direction in the grandparent).

Here are some brief remarks towards a comparison on the issues that occur to me. I'm sure there's a lot more to be said on this, but that would be a top-level post that would take (at least for me) weeks to write, with many hours of re-studying the source materials.

  1. Clarity of exposition. There really is no contest here: E wins hands down, and I have "Science and Sanity" in front of me.

  2. Informed by current science. Inevitably, E wins this one as well, just by being informed of another half-century of science. That doesn't just mean better examples to illustrate the same ideas, but new ideas to build on. I already mentioned Bayesian reasoning and computers, both unavailable to K.

  3. Consciousness of abstraction. Grokking, to use Heinlein's word, the map-territory distinction. Both E and K have hammered on this one. K refined it more, treating not merely of map/territory, but our capability for unlimited levels of abstraction, maps-of-maps-of-maps-of-etc to any depth. The more levels, the further removed from contact with reality, and the more scope for losing touch with it. Nested thought-bubbles have appeared in Eliezer's writings, but as far as I recall the spotlight has never been turned on the phenomenon.

  4. The "cortico-thalamic pause". The name is based on what I suspect is outdated neuroscience, but the idea is still around, with the currently fashionable name of "System 1 vs. System 2". The idea is current on LessWrong, but I don't recall if Eliezer himself has written anything on it. The technique consists of giving yourself time to respond rationally to whatever has just happened, time to perceive it clearly and consider (the "cortical" part) without emotional distraction (the "thalamic" part) what the situation is or might be and what to do about it, deploying consciousness of abstraction in order to be mindful of one's own flaws and see the emotional responses for what they are. This is in the Null-A books as well, so map ≠ territory isn't the only real-world actionable idea there.

  5. The unity of "body" and "mind", of "emotion" and "intellect", of "senses" and "thought", of "heredity" and "environment", etc. Our usual language artificially splits these apart (K uses the word "elementalistic"), when in reality they are indissoluble, and we require "non-elementalistic" language to speak accurately of them, hence his coining of the term "semantic reaction" to refer to the response of the organism-as-a-whole to an event. Not a topic that E has devoted attention to as a topic, but on the elementalistic splitting of "choice" from "physical law" there is this.

  6. Something to protect. K was motivated by the state of the world around him, seeing "the human dangers of the abuse of neuro-semantic and neuro-linguistic mechanisms", the neglect of those dangers in the democratic West, and their exploitation by totalitarian governments ("Science and Sanity", introduction to 2nd edition, 1941). "We humans after these millions of years should have learned how to utilize the 'intelligence' which we supposedly have, with some predictability, etc., and use it constructively, not destructively, as, for example, the Nazis are doing under the guidance of specialists." E was originally motivated by the Friendly AGI problem. I do not know to what extent he is motivated by the ordinary, pre-Singularity benefits that "raising the sanity waterline" would bring.

Etc., as Korzybski would say. Additions to the list welcome.

Comment author: buybuydandavis 08 October 2012 05:16:58AM *  3 points [-]

Thanks for the elaboration. I agree with the comparative aspects.

For 1), I'd say that although Korzybski was a painfully tedious windbag in Science and Sanity, I've seen lots of summaries that were concise and well written, though I don't remember a comprehensive summary of Science and Sanity that fits the bill.

I was mainly getting at 3), with order of abstraction, multi ordinal terms, and the concrete practices of semantic hygiene such as indexing, etc., and hyphenated non-elementalism.

I'd add to your list that Korzybski's aversion to the izzes of identity and predication, along with his intensional vs. extensional distinction, really complement Tabooing a Word and Replacing the Symbol with the Substance. AK elaborates the full evaluative response - the intensional response - of a flesh and blood creature, identifies particularly problematic semantic practices which maladaptively evoke that response, and EY gives the practical method for semantic hygiene in terms of what you should be doing instead.

AK always keeps in view the abstracting nervous system in a way that EY doesn't, and I think that added reductionism helps. A reductionist model which includes the salient points of human abstraction provides a generative method to make sense of the series of narratives that EY provides on different points on rationality.

Also, AK's insistence on a physical structural differential, and knowledge based in the structure of various sensory modalities is really a gusher of good ideas.

AK stays closer to the wetware, and whatever the relative limits of science available to him, I think that reductionist focus works to provide a deep model for thinking about abstraction. Focus on a reductionist physical reality, and all sorts of supposed conundrums for speciation, life, and mind evaporate.

I've been going off on this because there's just a ton of material from AK on semantic hygiene, which I take as a core method of getting Less Wrong, and all I usually see mentioned on this list is "The Map is not the Territory". That's maybe a country in the world of AK, and I think people should do some travelling and see the rest of his world. There's a lot more to see.

Comment author: Eliezer_Yudkowsky 07 October 2012 09:10:22AM 1 point [-]

S. I. Hayakawa was a way better writer - that's where I got all my reprocessed Korzybski as a kid, and that's where I point people: Language in Thought and Action instead of Science and Sanity. I tried once to read the latter book as a kid, after being referred to it by Null-A. I was probably about... eleven years old? Thirteen? I gave up very, very rapidly, which I did not do for physics texts with math in them.

Comment author: buybuydandavis 07 October 2012 11:17:50AM 3 points [-]

I won't argue with the literary analysis; K was stupendously tedious. I can't think of anyone more tiresome, although I have a feeling that his style was in vogue with various systematizers in the first half of the 20th century. I remember similar pain in reading Buckminster Fuller and Ludwig von Mises, though I couldn't finish Fuller (tried him in my teens), and von Mises wasn't quite as awful. Someone in the body awareness field as well - Joseph Pilates or Alexander. Less sure on the last one.

I trudged through Science and Sanity, often gritting my teeth, and think it was worth it.

My impression of Hayakawa is that he takes the conclusions but leaves out the metamodel which generates the conclusions and ties them together. I felt that K gave me a way of thinking, while Hayakawa packaged a lot of results, but left out the way of thinking. I read K first, so Hayakawa tasted like relatively weak tea and didn't leave a big impression.

K was more meaty particularly on the Science/Mathematics side. Mathematics as an abstraction of functional relations of actions in the world - I don't know if it was literally tossing pebbles in a bucket, but it was close. It was the physical action of counting. Science as a semantic enterprise - finding new semantic structures to model the world. Space-Time as providing a static view of dynamic change. There was something good on differential equations too, something like reductionist locality turning nonlinear relations into linear relations. It's been almost 20 years now, so I'm a little hazy.

Anyway, I'd recommend at least having a serious chat with someone well versed in the mathematical and scientific side of Korzybski and Science and Sanity, as there is a lot of good stuff in there that doesn't get a lot of attention even from the General Semantics crowd, who, like Hayakawa, focus on the verbal aspects of the theory.

Comment author: buybuydandavis 08 October 2012 11:50:22PM *  1 point [-]

Thank you for this response. This has removed a confusion I've had since I've come to the site.

You say in the article:

Sometimes it still amazes me to contemplate that this proverb was invented at some point, and some fellow named Korzybski invented it,

At least in my recollection, you refer to AK as the inventor of "The Map is not the Territory" when you bring it up, and that always gave me the impression that you had read him. But then I would be puzzled because many of the other things he said were appropriate to the conversation, and you wouldn't bring up those at all. And you didn't even mention Hayakawa in the article.

When someone mentions an author as the originator of an idea they're talking about, I assume he has read them, and bring that context to a reading of what they have written in turn. It would have been helpful to me if you had identified Hayakawa and Language in Thought and Action as where you had been exposed to the idea, distinguishing that from where Hayakawa had gotten the idea - AK. Maybe there aren't a lot of people who have actually read AK, but I think it would be a good general practice to make your sources clear to your readers.

Comment author: RichardKennaway 07 October 2012 09:39:23AM 1 point [-]

For me it was Heinlein --> Korzybski --> van Vogt in my early teens. I doggedly ploughed through Korzybski, but the curious thing is, in my early twenties I reread him, and found him, not exactly light reading, but far clearer than he had been on my first attempt.

Comment author: JulianMorrison 03 October 2012 11:03:50PM 7 points [-]

Two beliefs, one world is an oversimplification and misses an important middle step.

Two beliefs, two sets of evidence that may but need not overlap, and one world, is closer.

This becomes an issue when for example, one observer is differently socially situated than the other* and so one will say "pshaw, I have no evidence of such a thing" when the other says "it is my everyday life". They disagree, and they are both making good use of the evidence reality presents to each of them differently.

(* Examples of such social situational differences omitted to minimize politics, but can be provided on request.)

Comment author: JulianMorrison 03 October 2012 11:45:30PM 3 points [-]

Expanding a little on this, it's not a counter argument, but a caveat to "Trust not those who claim there is no truth". When people say things like "western imperialist science", sometimes they are talking jibber-jabber, but sometimes they are pointing out that the victors write the ontologies and in an anthropocene world, their ideas are literally made concrete.

Comment author: JackV 05 October 2012 09:41:24AM 3 points [-]

Anna Salamon and I usually apply the Tarski Method by visualizing a world that is not-how-we'd-like or not-how-we-previously-believed, and ourselves as believing the contrary, and the disaster that would then follow.

I find just that description really, really useful. I knew about the Litany of Tarski (or Diax's Rake, or believing something just because you wanted it to be true) and have the habit of trying to preemptively prevent it. But that description makes it a lot easier to grok it at a gut level.

Comment author: RichardKennaway 04 October 2012 07:23:27AM 3 points [-]
Comment author: beoShaffer 04 October 2012 03:24:53AM *  3 points [-]

When I was trying to solve the koan I focused on a few interrelated subproblems of skill one. It seems like this sort of thinking is particularly useful for reminding yourself to consider the outside view and/or the difference between confidence levels inside and outside an argument.
Also, I think the koan left out something pretty important.
Under what circumstances, if any, is it harmful to consciously think of the distinction between the map and the territory - to visualize your thought bubble containing a belief, and a reality outside it, rather than just using your map to think about reality directly? How exactly does it hurt, on what sort of problem?

.

.

.

.

.

It looks pretty solid for describing unbounded epistemic rationality. It's slightly iffier from a bounded instrumental perspective in that it probably imposes some mental cost to apply it and there are many circumstances where it's not noticeably helpful. There's also the matter of political situations and similar where it's -arguably- good to be generally overconfident.

Comment author: Morendil 04 October 2012 09:43:00AM 5 points [-]

How exactly does it hurt, on what sort of problem?

Beliefs are part of reality too. The image "thought bubble containing a belief, and a reality outside it" is a good map, but it's not itself the territory.

In particular, the mantra "Reality is that which, when we stop believing in it, doesn't go away" can be harmful in areas such as psychology and sociology, and in domains which have a large component of these, such as finance, politics or software engineering. In these domains you must account for phenomena such as self-fulfilling or self-cancelling prophecies. Concrete example: stock market crashes.

Comment author: [deleted] 04 October 2012 01:20:31PM 1 point [-]

So you're saying if you stop believing in stock market crashes, they go away?

I think what you mean is that if you intervened to change everyone's beliefs away from "oh shit, sell!", then stock market crashes would not happen. That is a different matter than talking about just my or your belief.

Comment author: Morendil 04 October 2012 02:31:57PM 5 points [-]

So you're saying if you stop believing in stock market crashes, they go away?

More often it works the other way around: the fact that someone stops believing in an overinflated stock market (i.e. claims a "bubble" is about to burst) acts as a self-fulfilling prophecy, causing others to also stop believing which -if this information cascade propagates enough- will cause a crash, therefore bringing reality in line with the original belief.

But information cascades can also cause booms, as I understand it more likely of individual stocks.

The "someone" above is underspecified: it can be one particularly influential person - Nate Silver recounts how Amazon stock surged 25% after Henry Blodget hyped it up in 1998. But it can also be a larger group, who, looking at small fluctuations in the market, panic and start a stampede.

My point is that "thought bubbles" in general are part of reality. Your believing in things has causal influence on reality (another concrete example: romantic relationships - the concept "love", which can be cashed out in terms of blood levels of various hormones, is one of those things that go away because people stop believing in it). It is generally bad epistemic practice to overstate this influence, but it can also be bad to understate it.

Comment author: [deleted] 04 October 2012 02:56:36PM 1 point [-]

Agreed.

My point was that your examples were a part of reality in a way that the ideal belief-of-observer used in the "reality is that which..." mantra isn't.

Comment author: [deleted] 04 October 2012 01:17:02PM 2 points [-]

There's also the matter of political situations and similar where it's -arguably- good to be generally overconfident.

No. It may be good to talk shit like you're overconfident. Actually being overconfident is just unnecessarily shooting yourself in the foot.

Comment author: RichardKennaway 04 October 2012 10:21:06AM 2 points [-]

Under what circumstances, if any, is it harmful to consciously think of the distinction between the map and the territory

If you can ever gain by being ignorant, you can gain more by better knowledge still.

Cf. E.T. Jaynes: "It appears to be a quite general principle that, whenever there is a randomized way of doing something, then there is a nonrandomized way that delivers better performance but requires more thought", quoted here.
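Jaynes's principle can be seen in miniature by pitting plain Monte Carlo against a deterministic midpoint rule on the same budget of function evaluations. This is only a toy sketch with made-up numbers, but it shows the pattern: the "randomized way" is beaten by a "nonrandomized way that requires more thought" about where to place the evaluations.

```python
import random

random.seed(0)

def f(x):
    return x * x  # the true mean of f over [0, 1] is 1/3

N = 100
true_value = 1.0 / 3.0

# Randomized: plain Monte Carlo with N uniform samples.
mc_estimate = sum(f(random.random()) for _ in range(N)) / N

# Nonrandomized: midpoint rule with the same N evaluations.
mid_estimate = sum(f((i + 0.5) / N) for i in range(N)) / N

mc_error = abs(mc_estimate - true_value)
mid_error = abs(mid_estimate - true_value)
```

The midpoint rule's error here is on the order of 1e-5, while Monte Carlo's is typically a few hundredths: same number of evaluations, but the deliberate deterministic placement wins by several orders of magnitude.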

Comment author: Alicorn 03 October 2012 10:45:07PM 3 points [-]

'Luminosity' and 'Harry Potter and the Methods of Rationality'

Not the Hamlet one?

Comment author: Eliezer_Yudkowsky 03 October 2012 10:56:31PM 7 points [-]

Fair and added. Also there's a lovely new bit of Munchkin fiction called Harry Potter and the Natural 20 (the author has confirmed this was explicitly HPMOR-inspired) but I don't know if it's 'explicit rationalist fiction' yet, although it's possibly already a good fic to teach Munchkinism in particular.

Comment author: Vaniver 03 October 2012 11:47:47PM 6 points [-]

Harry Potter and the Natural 20

I thought it was starting poorly, but then I got to:

"Someone send for Dumbledore, this kid needs help."

"I'm right here in front of you."

"No, not you, the other Dumbledore."

"Oh," said Aberforth, slightly disappointed. "Nobody ever wants to send for me."

Comment author: chaosmosis 04 October 2012 03:55:18AM 0 points [-]

This means I'll try it, thanks for that quote.

Comment author: gwern 04 October 2012 02:54:12AM 0 points [-]

I thought there were a lot of quotable bits; fun fic.

Comment author: chaosmosis 04 October 2012 04:07:29AM 3 points [-]

Oh yes.

"Er, before we, uh, um, start choosing one," Milo stammered awkwardly. "There's something I've been, ah, meaning to ask of you, Mr. Ollivander."

"Yes?" he said softly. Gods, but this guy is weird.

"Your store name – I mean, Ollivanders: Makers of Fine Wands Since 382 BCE – well, it's just that, er…"

"Yes?"

"Shouldn't – shouldn't Ollivanders have an apostrophe in it?" Milo said, and instantly regretted it.

Mr. Ollivander chuckled, slowly and irregularly. It was a disconcertingly unnatural sound.

"Not if it's plural," Ollivander said.

Milo swallowed nervously.

Comment author: gwern 04 October 2012 04:28:15AM 1 point [-]

That was good, but the blood was better.

Comment author: Armok_GoB 04 October 2012 12:43:35AM 4 points [-]

There are also like 3 different MLP ones!

Comment author: beoShaffer 04 October 2012 01:11:52AM 3 points [-]

Given all the rationalist fiction that is surfacing, may I suggest the wording: "in fact the only explicitly rationalist fiction I know of that is not a result of Less Wrong."

Comment author: Eliezer_Yudkowsky 04 October 2012 07:56:12AM 1 point [-]

Fair and edited. Also I left out "David's Sling".

Comment author: Alicorn 06 October 2012 10:04:55PM 0 points [-]

Now that is a lovely fic. I want more of it. Why must things be works in progress?

Comment author: gwern 06 October 2012 11:30:14PM 4 points [-]
Comment author: Alicorn 07 October 2012 12:50:30AM *  0 points [-]

I don't think that's really a good response to this complaint.

Comment author: gwern 07 October 2012 01:10:43AM *  9 points [-]

Yeah, but 40 years ago you wouldn't be saying 'gosh what I really need is a good munchkin HP/D&D crossover!'

You'd be saying something like, 'that P.G. Wodehouse/Piers Anthony/etc., what a hilarious writer! If only he'd write his next book faster!' or 'I'm really looking forward to the new anthology of G.K. Chesterton's uncollected Father Brown tales!'

EDIT: Thanks for ninjaing your comment so my response looks like a complete non sequitur. -_-

Comment author: simplicio 07 October 2012 01:34:59AM 3 points [-]

I'd STILL like Wodehouse to write a few more. Unfortunately...

Comment author: Alicorn 07 October 2012 04:34:15AM *  2 points [-]

Well, 40 years ago I wasn't born. I tend not to like old fiction. I would be less happy and enjoy fiction less in a world where that was all I had to read, although perhaps I wouldn't know what I was missing (there may even in reality be some genre I haven't found yet that I would adore and am the poorer for not having located yet).

I edited my comment because my first writing was based solely on seeing what article you linked to and then I searched for the specific law you named and decided my reply was inapt. Sorry.

Comment author: gwern 07 October 2012 10:55:33PM 7 points [-]

I would be less happy and enjoy fiction less in a world where that was all I had to read, although perhaps I wouldn't know what I was missing (there may even in reality be some genre I haven't found yet that I would adore and am the poorer for not having located yet).

This is pretty much what my entire article is about: there are something like 300 million books out there, more than 90% of which are 'old', with no real reason to expect an incredible quality imbalance (fantasy humor is an old genre, so old that practitioners like Robert Asprin have died); and yet the reading ratio is perhaps quite the inverse, with 90% of reading being of new books, and someone like you can tell me in all apparent seriousness 'I don't like old fiction; I would be less happy in a world in which that was all I had!'

Comment author: katydee 07 October 2012 11:43:47PM *  5 points [-]

Counterargument: Old writing was written in accordance with old ideas.

The inferential distance between a modern reader and an old writer is likely to be larger than the inferential distance between a modern reader and a modern writer. For this reason, modern writing is generally both easier and more relatable for the modern reader, and we should not be surprised that most modern readers read modern writing.

The exceptions-- old works that are considered classic and revered even by modern readers-- are (nominally) those that have touched something timeless, and therefore ring true across the ages.

Comment author: gwern 08 October 2012 12:22:09AM 3 points [-]

Is this distance sufficient to explain the recentism bias? Can you give an example of how a great SF novel like Dune has 'inferential distance' so severe as to explain why, at any point, more people are buying the (incredibly shitty) NYT-bestselling sequels by Kevin J. Anderson & Brian Herbert than the original?

Comment author: RichardKennaway 07 October 2012 11:40:04PM 2 points [-]

Books, music, and all other art forms, unlike apples, are not fungible, not even items of the same "quality" (however defined).

BTW, I have that collection of the complete Bach in 160 CDs (and have listened to all of it at least twice). And I'm collecting the complete Masaaki Suzuki recordings of the Bach cantatas (which are completely different from the Leonhardt/Harnoncourt performances in the Bach 2000 set), and I might spring for the John Eliot Gardiner cantatas if he manages to issue them as a complete set. I also went to this performance yesterday of an art form dating back all of 60 years (the drums are from the long-long-ago, but this use of them is not), and buy everything Greg Egan writes as soon as it comes out.

Yes, no-one can read/listen to/view more than the tiniest fraction of what there is, but to read nothing old, or to read nothing new, are selection rules that have only simplicity in their favour. There is no one-dimensional scale of "quality".

Comment author: gwern 08 October 2012 12:15:29AM 4 points [-]

Books, music, and all other art forms, unlike apples, are not fungible, not even items of the same "quality" (however defined).

A point which applies equally to old and new. And ultimately every choice comes down to read or don't read...

Yes, no-one can read/listen to/view more than the tiniest fraction of what there is, but to read nothing old, or to read nothing new, are selection rules that have only simplicity in their favour. There is no one-dimensional scale of "quality".

I think you're deprecating them too quickly. Let's take the 90% guess at face value: if you are selecting primarily from just the most recent 10%, then, however multidimensionally you choose to define quality, you need to somehow make up for throwing out 9/10ths of all the best books, the ones which happened to be old!

It'd be like running a machine learning or statistical algorithm which starts by throwing out 90% of the data from consideration; yeah, maybe that's a good idea, but you're going to have a hard time selecting from the remaining 10% so much better that it makes up for it.
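That analogy can be made concrete with a toy simulation. Assuming book quality is i.i.d. (roughly what "no incredible quality imbalance" means here), the single best item has only a 10% chance of sitting inside any 10% slice; all the numbers below are invented for illustration.

```python
import random

random.seed(1)

# Toy model: 100,000 "books" with i.i.d. quality scores; an arbitrary
# 10% slice stands in for "new books".
quality = [random.gauss(0.0, 1.0) for _ in range(100_000)]

best_overall = max(quality)
best_recent = max(quality[:10_000])

# Under i.i.d. quality, restricting attention to the slice throws away
# roughly 90% of the upper tail, so the slice's best is expected to
# fall short of the corpus's best.
```

The same reasoning goes through for the top k rather than the single best: a reader confined to the 10% slice is, in expectation, choosing from a strictly worse shortlist.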

Comment author: Alicorn 07 October 2012 11:08:48PM 2 points [-]

Yes, I read your article. I just disagree with you about most of it.

I like some fiction-by-people-now-dead, but I don't like elderly "classics", and if a ban on new books had been implemented at any point in the past I would be the poorer for not having things that have come out since then, even if you grandfathered in series-in-progress. This is not ridiculous just because you think some "quality" metric is holding steady.

There are other things to like about books than your invented bullshit "quality" metric. You know what? I like books that were written originally in my language. That doesn't include Shakespeare; my language updates constantly and books don't. I like fanfiction, and active living fandoms where people will write each other presents according to specific prompts because someone really wanted something really specific that didn't exist a minute ago and riff on and respond to and parody each other in prose around a shared touchstone. That couldn't exist if there were some ban on new material and all these people spent their time quilting instead. I like books with fancy tech in them, and exactly what can get past my suspension-of-disbelief filter changes alongside real technology. I can read Heinlein even with slide rules in space, but damn, that would get old. Hell, I like writing. I like a lot of things that you see no value in and wish to slay. Please step back with the pointy objects.

Comment author: gwern 07 October 2012 11:33:47PM 3 points [-]

Hell, I like writing. I like a lot of things that you see no value in and wish to slay. Please step back with the pointy objects.

Calm down, it's just an essay...

I like fanfiction, and active living fandoms where people will write each other presents according to specific prompts because someone really wanted something really specific that didn't exist a minute ago and riff on and respond to and parody each other in prose around a shared touchstone. That couldn't exist if there were some ban on new material and all these people spent their time quilting instead.

I dunno, people used to get a lot out of quilting and knitting - the phrase 'knitting circle' comes to mind. But your contempt for various subcultures aside:

So, 'writing is not about writing', which is pretty much one of the major themes: whatever is justifying all this new fiction, it's not nebulous claims about slide rules in space or new books being 'better' than old ones or reading like Shakespeare (most of those 300m books are, uh, not from Elizabethan times -_-).

Community is as good an explanation as any I've seen.

Comment author: Jonathan_Graehl 12 October 2012 01:41:51AM 0 points [-]

Not that gwern was wrong in any way in his general point, but I also tremendously enjoyed this particular crossover and second everyone's recommendation (at least, if you've ever attempted "roleplaying" of the non-sexual type).

Comment author: RomeoStevens 04 October 2012 01:36:48AM *  1 point [-]

is Hamlet still available online? I don't see it.

Comment author: Alicorn 04 October 2012 01:48:13AM 3 points [-]

Under normal circumstances, you have to buy it.

Comment author: [deleted] 04 October 2012 10:10:45PM *  6 points [-]

Sometimes it still amazes me to contemplate that this proverb was invented at some point, and some fellow named Korzybski invented it, and this happened as late as the 20th century.

It is less surprising once you realize that other proverbs have conveyed the same idea, and, I think, more aptly: "Theory is gray, but the golden tree of life is green." --Johann Wolfgang von Goethe

The Goethe quote (substitute "reality" for "tree of life" to be more prosaic) brings out that the difference between the best theory and reality is reality's greater richness.

On the other hand, the standard "map versus territory" line conflates two distinct points: 1) the map leaves things out (by design), and 2) the map gets things wrong (by error).

Because of this conflation, "map versus territory" is one of the most abusable cliches around, perhaps second only to "the exception that proves the rule."

Comment author: army1987 05 October 2012 06:29:23PM *  2 points [-]

My favourite one is

There are more things in heaven and earth, Horatio,
Than are dreamt of in your philosophy.

-- Hamlet Act 1, scene 5, 166–167

Comment author: Decius 06 October 2012 12:05:57AM 2 points [-]

It's important to distinguish "The map is not the territory" from "The map is not a perfect representation of the territory."

The major difference is that beliefs cannot easily be used as direct or indirect concrete objects: I cannot look inside my belief about what's in the basket and find (or fail to find) a marble. I cannot test my beliefs by experimenting on them to see whether they correspond to reality; I must test reality to find out whether my beliefs correspond to it.

Comment author: [deleted] 04 October 2012 03:04:08AM *  2 points [-]

If my socks will stain, I want to believe my socks will stain; If my socks won't stain, I don't want to believe my socks will stain; Let me not become attached to beliefs I may not want.

That was beautiful. I will definitely keep that mantra in mind.

Comment author: Tyrrell_McAllister 03 October 2012 11:37:05PM 2 points [-]

(Mainstream status here.)

When I follow this link, I get the text

You aren't allowed to do that.

Comment author: Vladimir_Nesov 04 October 2012 01:10:51AM 1 point [-]

Fixed.

Comment author: Vaniver 03 October 2012 11:43:47PM *  1 point [-]

Notice the link's text has Eliezer_Yudkowsky-drafts in it.

Comment author: Johnicholas 06 October 2012 03:22:34PM 1 point [-]

There are some aspects of maps - for example, edges, blank spots, and so on, that seem, if not necessary, extremely convenient to keep as part of the map. However, if you use these features of a map in the same way that you use most features of a map - to guide your actions - then you will not be guided well. There's something in the sequences like "the world is not mysterious" about people falling into the error of moving from blank/cloudy spots on the map to "inherently blank/cloudy" parts of the world.

The slogan "the map is not the territory" might encourage focusing on the delicate corrections necessary to act upon SOME aspects of one's representation of the world, but not act on other aspects which are actually intrinsic to the representation.

Comment author: loldrup 06 October 2012 01:06:01PM 1 point [-]

Thanks for the clear illustration

Comment author: Error 22 March 2013 11:58:48AM 0 points [-]

Under what circumstances is it helpful to consciously think of the distinction between the map and the territory

I thought about this before reading the rest of the post, and came up with: "When I find myself surprised by something." Surprise may indicate that something improbable has happened, but may also indicate an error in my estimation of what's probable. Given that the observation appears improbable to begin with (or I wouldn't be surprised), I should suspect the map first.
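That heuristic is just Bayes applied to the map itself. A minimal sketch, with all probabilities made up for illustration, shows how a single surprising observation can move most of the credence onto "my model is wrong":

```python
# All probabilities here are invented for illustration.
prior_map_ok = 0.95          # I start out fairly confident in my map
p_surprise_if_ok = 0.01      # the observation is improbable if the map is right
p_surprise_if_wrong = 0.30   # but fairly likely if the map is wrong

joint_ok = prior_map_ok * p_surprise_if_ok
joint_wrong = (1.0 - prior_map_ok) * p_surprise_if_wrong

# Bayes' rule: posterior confidence that the map is still right.
posterior_map_ok = joint_ok / (joint_ok + joint_wrong)
# One surprise drops confidence in the map from 0.95 to about 0.39.
```

Because the likelihood ratio favors "map is wrong" by 30 to 1, even a 95% prior doesn't survive one genuinely surprising observation, which is exactly the "suspect the map first" policy.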

Comment author: jslocum 27 February 2013 03:39:50PM 0 points [-]

I find myself to be particularly susceptible to the pitfalls avoided by skill 4. I'll have to remember to explicitly invoke the Tarski method next time I find myself in the act of attempting to fool myself.

One scenario not listed here in which I find it particularly useful to explicitly think about my own map is in cases where the map is blurry (e.g. low precision knowledge: "the sun will set some time between 5pm and 7pm") or splotchy (e.g. explicit gaps in my knowledge: "I know where the red and blue cups are, but not the green cup"). When I bring my map's flaws explicitly into my awareness, it allows me to make plans which account for the uncertainty of my knowledge, and come up with countermeasures.
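The "blurry map" case translates directly into worst-case planning: represent the low-precision knowledge as an interval and plan against its bad edge. The sunset interval below is the example above; the 45-minute walk home is an invented detail.

```python
# Times in minutes after noon; the sunset interval is the example above,
# the 45-minute walk home is an invented detail.
sunset_earliest = 5 * 60   # 5 pm
sunset_latest = 7 * 60     # 7 pm
walk_home_minutes = 45

# Planning against the blurry map means planning against the worst case:
# leave by the earliest possible sunset minus the walk, not the midpoint.
safe_departure = sunset_earliest - walk_home_minutes            # 4:15 pm
midpoint_departure = (sunset_earliest + sunset_latest) // 2 - walk_home_minutes
```

Making the interval explicit is what lets the countermeasure (the 60-minute safety margin between the two departure times) be computed at all, instead of quietly planning around a point estimate.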

Comment author: Stuart_Armstrong 05 October 2012 01:39:37PM 0 points [-]

one of only two explicitly rationalist fictions I know of that didn't descend from HPMOR, the other being "David's Sling" by Marc Stiegler

You might consider Mark Clifton's novel "Eight Keys to Eden" (1960) as another rationalist fiction (though it's more debatable). Available from Gutenberg at http://www.gutenberg.org/ebooks/27595

Comment author: thomblake 04 October 2012 02:26:28PM 0 points [-]

The illustrations are great. I wish there were one or two more in this post.

Comment author: Kaj_Sotala 04 October 2012 10:05:11AM 0 points [-]

This time, I wrote down my answer to the koan: the basic idea was correct, but I didn't come up with as many examples of subskills as Eliezer listed.

It helps to realize that there may be mistakes in the process of constructing a map, and that you may need to correct them. If there is a problem where it's important to be right, like when figuring out whether you should invest in a company, or if you are feeling bad about your life and wonder whether it's justified, you need to be able to make the map-territory distinction in order to evaluate the accuracy of your beliefs.

Though I'm somewhat pleased that, at least, I don't remember Eliezer ever explicitly making the jump from beliefs to emotions and applying "are your emotions correct?" as a special case of "the map is not the territory"; I can't claim it as original to me (I think I might have gotten it from Jasen Murray or Michael Vassar or some book), but at least I've helped popularize it on LW somewhat.

Comment author: jsalvatier 07 October 2012 03:22:14AM 0 points [-]

In your verbal description it says 40 miles, but in the matrix it says 40 minutes.

Comment author: sboo 14 October 2012 06:13:00AM 0 points [-]

60mph?

Comment deleted 10 October 2012 01:50:54AM *  [-]
Comment author: Eliezer_Yudkowsky 10 October 2012 02:27:10AM 0 points [-]

Deleted due to the attempt to evade the -5 penalty.

Comment author: Eugine_Nier 10 October 2012 06:35:43AM 4 points [-]

I thought part of the point of the -5 penalty was to keep interesting discussions from happening down stream of downvoted comments. In that case isn't responding to heavily downvoted comments in a different thread exactly what should happen?

Comment author: wedrifid 10 October 2012 06:42:33AM *  2 points [-]

I thought part of the point of the -5 penalty was to keep interesting discussions from happening down stream of downvoted comments. In that case isn't responding to heavily downvoted comments in a different thread exactly what should happen?

I assumed that either Eliezer just didn't like the subject or that the comment actually quoted a -5 comment. Hang on. This can be checked. We can see from Eliezer's page which author Eliezer was replying to and look at that user's page.

(From what I can tell everything the user in question has written has been downvoted.)

Comment author: Risto_Saarelma 10 October 2012 07:18:24AM 3 points [-]

I understood that the system actually stops the thread starter from replying to replies to their own comment if they have less than +5 total karma. Stop people from talking to the people talking to them, and they will go looking for a circumvention.

Maybe just let people accrue more negative karma when replying to downvoted threads, rather than stopping them when they hit the arbitrary zero point?
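The two rules can be sketched side by side. The actual LessWrong karma mechanics aren't specified in this thread, so the penalty amount, the downvote threshold, and the function names below are all invented for illustration:

```python
PENALTY = 5          # invented value for the reply penalty
THRESHOLD = -3       # invented score below which a comment counts as "downvoted"

def can_reply_blocking(user_karma, parent_score):
    """Current-style rule: block the reply outright if the user cannot
    afford the penalty for posting under a downvoted comment."""
    if parent_score > THRESHOLD:
        return True
    return user_karma - PENALTY >= 0

def reply_cost_accruing(parent_score):
    """Proposed rule: always allow the reply, but charge karma for
    posting downstream of a downvoted comment."""
    return PENALTY if parent_score <= THRESHOLD else 0
```

The difference is that the second rule lets karma go negative instead of imposing a hard stop at zero, which removes the incentive to route around the block by replying in a different thread.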