Followup to: The Useful Idea of Truth (minor post)

So far as I know, the first piece of rationalist fiction - one of only two explicitly rationalist fictions I know of that didn't descend from HPMOR, the other being "David's Sling" by Marc Stiegler - is the Null-A series by A. E. van Vogt. In van Vogt's story, the protagonist, Gilbert Gosseyn, has mostly non-duplicable abilities that you can't pick up and use even if they're supposedly mental - e.g. the ability to use all of his muscular strength in emergencies, thanks to his alleged training. The main explicit-rationalist skill someone could actually pick up from Gosseyn's adventure is embodied in his slogan:

"The map is not the territory."

Sometimes it still amazes me to contemplate that this proverb was invented at some point, that some fellow named Korzybski invented it, and that this happened as late as the 20th century. I read van Vogt's story and absorbed that lesson when I was rather young, so to me this phrase sounds like a sheer background axiom of existence.

But as the Bayesian Conspiracy enters into its second stage of development, we must all accustom ourselves to translating mere insights into applied techniques. So:

Meditation: Under what circumstances is it helpful to consciously think of the distinction between the map and the territory - to visualize your thought bubble containing a belief, and a reality outside it, rather than just using your map to think about reality directly?  How exactly does it help, on what sort of problem?

...

...

...

Skill 1: The conceivability of being wrong.

In the story, Gilbert Gosseyn is most liable to be reminded of this proverb when some belief is uncertain; "Your belief in that does not make it so." It might sound basic, but this is where some of the earliest rationalist training starts - making the jump from living in a world where the sky just is blue, the grass just is green, and people from the Other Political Party just are possessed by demonic spirits of pure evil, to a world where it's possible that reality is going to be different from these beliefs and come back and surprise you. You might assign low probability to that in the grass-is-green case, but in a world where there's a territory separate from the map it is at least conceivable that reality turns out to disagree with you. There are people who could stand to rehearse this, maybe by visualizing themselves with a thought bubble, first in a world like X, then in a world like not-X, in cases where they are tempted to entirely neglect the possibility that they might be wrong. "He hates me!" and other beliefs about other people's motives seem to be a domain in which "I believe that he hates me" or "I hypothesize that he hates me" might work a lot better.

Probabilistic reasoning is also a remedy for similar reasons: Implicit in a 75% probability of X is a 25% probability of not-X, so you're hopefully automatically considering more than one world. Assigning a probability also inherently reminds you that you're occupying an epistemic state, since only beliefs can be probabilistic, while reality itself is either one way or another.
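As a trivial sketch of the point (the 75% figure is just the example number from above): an explicit probability assignment carries its complement with it, so both worlds stay in view.

```python
# A probability assignment implicitly describes two worlds at once:
# credence 0.75 in X just is credence 0.25 in not-X.
p_x = 0.75              # hypothetical credence in X
p_not_x = 1 - p_x       # the complementary world, automatically in view

assert abs(p_x + p_not_x - 1.0) < 1e-9   # a map's probabilities must sum to 1
print(f"P(X) = {p_x:.2f}, P(not-X) = {p_not_x:.2f}")
```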

Skill 2: Perspective-taking on beliefs.

What we really believe feels like the way the world is; from the inside, other people feel like they are inhabiting different worlds from you. They aren't disagreeing with you because they're obstinate, they're disagreeing because the world feels different to them - even if the two of you are in fact embedded in the same reality.

This is one of the secret writing rules behind Harry Potter and the Methods of Rationality. When I write a character, e.g. Draco Malfoy, I don't just extrapolate their mind, I extrapolate the surrounding subjective world they live in, which has that character at the center; all other things seem important, or are considered at all, in relation to how important they are to that character. Most other books are never told from more than one character's viewpoint, but if they are, it's strange how often the other characters seem to be living inside the protagonist's universe and to think mostly about things that are important to the main protagonist. In HPMOR, when you enter Draco Malfoy's viewpoint, you are plunged into Draco Malfoy's subjective universe, in which Death Eaters have reasons for everything they do and Dumbledore is an exogenous reasonless evil. Since I'm not trying to show off postmodernism, everyone is still recognizably living in the same underlying reality, and the justifications of the Death Eaters only sound reasonable to Draco, rather than having been optimized to persuade the reader. It's not like the characters literally have their own universes, nor is morality handed out in equal portions to all parties regardless of what they do. But different elements of reality have different meanings and different importances to different characters.

Joshua Greene has observed - I think this is in his Terrible, Horrible, No Good, Very Bad paper - that political discourse rarely gets beyond the point of lecturing naughty children who are just refusing to acknowledge the evident truth. As a special case, one may also appreciate internally that being wrong feels just like being right, unless you can actually perform some sort of experimental check.

Skill 3: You are less bamboozleable by anti-epistemology or motivated neutrality which explicitly claims that there's no truth.

This is a negative skill - avoiding one more wrong way to do it - and mostly about quoted arguments rather than positive reasoning you'd want to conduct yourself. Hence the sort of thing we want to put less emphasis on in training. Nonetheless, it's easier not to fall for somebody's line about the absence of objective truth, if you've previously spent a bit of time visualizing Sally and Anne with different beliefs, and separately, a marble for those beliefs to be compared-to. Sally and Anne have different beliefs, but there's only one way-things-are, the actual state of the marble, to which the beliefs can be compared; so no, they don't have 'different truths'.  A real belief (as opposed to a belief-in-belief) will feel true, yes, so the two have different feelings-of-truth, but the feeling-of-truth is not the territory.
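The Sally-Anne setup can be sketched in a few lines (a toy illustration; the dictionary representation is mine): two maps, one territory, and truth as a comparison against the territory rather than against either map.

```python
# One territory: where the marble actually is.
world = {"marble": "box"}

# Two maps: each believer's picture of the same world.
beliefs = {
    "Sally": {"marble": "basket"},   # Sally left it in the basket and didn't see the move
    "Anne":  {"marble": "box"},      # Anne moved it, so her map matches the territory
}

# Truth is belief-versus-world, not belief-versus-belief or feeling-of-truth.
for name, belief in beliefs.items():
    verdict = "true" if belief["marble"] == world["marble"] else "false"
    print(f"{name}'s belief is {verdict}")
```

Both beliefs feel true from the inside; only one matches the marble.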

To rehearse this, I suppose, you'd try to notice this kind of anti-epistemology when you ran across it, and maybe respond internally by actually visualizing two figures with thought bubbles and their single environment. Though I don't think most people who understood the core insight would require any further persuasion or rehearsal to avoid contamination by the fallacy.

Skill 4: World-first reasoning about decisions, a.k.a. the Tarski Method, a.k.a. the Litany of Tarski.

Suppose you're considering whether to wash your white athletic socks with a dark load of laundry, and you're worried the colors might bleed into the socks, but on the other hand you really don't want to have to do another load just for the white socks. You might find your brain selectively rationalizing reasons why it's not all that likely for the colors to bleed - there's no really new dark clothes in there, say - trying to persuade itself that the socks won't be ruined. At which point it may help to say:

"If my socks will stain, I want to believe my socks will stain;
If my socks won't stain, I don't want to believe my socks will stain;
Let me not become attached to beliefs I may not want."

To stop your brain trying to persuade itself, visualize that you are either already in the world where your socks will end up discolored, or already in the world where your socks will be fine, and in either case it is better for you to believe you're in the world you're actually in. Related mantras include "That which can be destroyed by the truth should be" and "Reality is that which, when we stop believing in it, doesn't go away". Appreciating that belief is not reality can help us to appreciate the primacy of reality, and either stop arguing with it and accept it, or actually become curious about it.

Anna Salamon and I usually apply the Tarski Method by visualizing a world that is not-how-we'd-like or not-how-we-previously-believed, and ourselves as believing the contrary, and the disaster that would then follow.  For example, let's say that you've been driving for a while, haven't reached your hotel, and are starting to wonder if you took a wrong turn... in which case you'd have to go back and drive another 40 miles in the opposite direction, which is an unpleasant thing to think about, so your brain tries to persuade itself that it's not lost.  Anna and I use the form of the skill where we visualize the world where we are lost and keep driving.

Note that in principle, this is only one quadrant of a 2 x 2 matrix:

- You believe you're heading in the right direction, and in reality you are: no need to change anything - just keep doing what you're doing, and you'll get to the conference hotel.
- You believe you're heading in the right direction, but in reality you're totally lost: just keep doing what you're doing, and you'll eventually drive your rental car directly into the sea.
- You believe you're lost, but in reality you're heading in the right direction: alas! You spend 5 whole minutes of your life pulling over and asking for directions you didn't need.
- You believe you're lost, and in reality you're totally lost: after spending 5 minutes getting directions, you've got to turn around and drive 40 minutes the other way.
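The four quadrants can also be run as a toy expected-cost calculation (all costs hypothetical, in minutes): weight each possible world by your credence that you're in it, and compare acting on each belief.

```python
# Toy expected-cost version of the 2x2 matrix above (costs in minutes, invented).
# Key: (what you act on, how the world actually is) -> cost.
cost = {
    ("keep_driving", "on_course"): 0,    # arrive at the conference hotel
    ("keep_driving", "lost"):      240,  # drive your rental car into the sea
    ("pull_over",    "on_course"): 5,    # directions you didn't need
    ("pull_over",    "lost"):      45,   # 5 min directions + 40 min backtrack
}

def expected_cost(action, p_lost):
    """Average the action's cost over both possible worlds."""
    return (1 - p_lost) * cost[(action, "on_course")] + p_lost * cost[(action, "lost")]

p_lost = 0.3  # hypothetical credence that you took a wrong turn
for action in ("keep_driving", "pull_over"):
    print(action, expected_cost(action, p_lost))
```

At these made-up numbers, pulling over wins in expectation whenever your credence in being lost exceeds about 2.5% - which is the point of looking at all four quadrants instead of only the comfortable one.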


Michael "Valentine" Smith says that he practiced this skill by actually visualizing all four quadrants in turn, and that with a bit of practice he could do it very quickly, and that he thinks visualizing all four quadrants helped.

(Mainstream status here.)

Part of the sequence Highly Advanced Epistemology 101 for Beginners

Next post: "Rationality: Appreciating Cognitive Algorithms"

Previous post: "The Useful Idea of Truth"

179 comments

"grass is green" and "sky is blue" are always funny examples to me, since whenever I hear them I go check, and they're usually not true. Right now from my window, I can see brown grass and a white/gray sky.

So they're especially good examples, as people will actually use them as paradigms of indisputably true empirical propositions, and even those seem almost always to be a mismatch between the map and the territory.

Error (11y):
I wish I could upvote this twice, just for pointing out an obvious error that I've never previously twigged on. I shall try to keep it close to the front of memory the next time I feel really certain about something.
Chriswaterguy (8y):
As an experiment, a couple raised their child without telling them what colour the sky was. When they eventually asked, the child... thought about it. Eventually... "white". (I'd assumed it was a clear sky. Just realised it's a pointless story if it was cloudy.) Why Isn't the Sky Blue? - starts with colours in Homer.

Implicit in a 75% probability of X is a 25% probability of not-X

This may strike everyone as obvious...

My experience with the GJP suggests that it's not. Some people there, for instance, are on record as assigning a 75% probability to the proposition "The number of registered Syrian conflict refugees reported by the UNHCR will exceed 250,000 at any point before 1 April 2013".

Currently this number is 242,000, the trend in the past few months has been an increase of 1000 to 2000 a day, and the UNHCR have recently provided estimates that this number will eventually reach 700,000. This was clear as early as August. The kicker is that the 242K number is only the count of people who are fully processed by the UNHCR administration and officially in their database; there are tens of thousands more in the camp who only have "appointments to be registered".
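The extrapolation the commenter is gesturing at fits in a few lines (numbers taken from the comment; the arithmetic only shows how little room the trend leaves):

```python
# Back-of-envelope: at the reported trend, how long until the count crosses 250,000?
current = 242_000                    # registered refugees at the time of the comment
threshold = 250_000                  # the question's cutoff
low_rate, high_rate = 1_000, 2_000   # reported daily increase

days_fast = (threshold - current) / high_rate   # if registrations run at the high end
days_slow = (threshold - current) / low_rate    # if they run at the low end
print(f"crossing in {days_fast:.0f} to {days_slow:.0f} days")
```

With months remaining before 1 April 2013, the trend would have to halt almost immediately for the proposition to come out false - which is the commenter's point.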

It's hard for me to understand why people are not updating to, maybe not 100%, but at least 99%, and that these are the only answers worth considering. To state your probability as 85% or 91% (as some have quite recently) is to say, "There is a one in ten chance that the Syrian conflict will suddenly stop and all... (read more)

bentarm (11y):
I am a registered participant in one of the Good Judgement Project teams. I have literally no idea what my estimates of the probabilities are for quite a few of the events for which I have 'current' predictions. Depending on what you mean by 'some people', you might just be picking up on the fact that some people just don't care as much about the accuracy of their predictions on GJP as you do.
Morendil (11y):
Agreed. Insofar as GJP is a contest, and the objective is to win, my remarks should be read with the implied proviso "assuming you care about winning". In the prelude to the post where I discuss my GJP participation in more detail I used an analogy with playing Poker. I acknowledge that some people play Poker for the thrill of the game, and don't actually mind losing their money - and there are variable levels of motivation all the way up to dedicated players.
[anonymous] (11y):
I think you are entirely right, that people don't visualize.
Omegaile (11y):
I think you are 75% right.
[anonymous] (11y):
Let's do 1000 trials and see if it converges, verify that p<0.05, write a paper and publish.
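Playing along with the joke, the proposed experiment really is only a few lines (the seed and tolerance are arbitrary choices of mine):

```python
import random

# Simulate 1000 Bernoulli trials at p = 0.75 and check that the observed
# frequency converges on the stated probability.
random.seed(0)                       # arbitrary seed, for reproducibility
trials = 1000
hits = sum(random.random() < 0.75 for _ in range(trials))
freq = hits / trials
print(f"observed frequency: {freq:.3f}")
assert abs(freq - 0.75) < 0.05       # well within sampling error at n = 1000
```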

I've been enjoying the new set of Sequences. I wasn't around when the earlier Sequences were being written; it's like the difference between reading a series of books all in one go, versus being part of the culture, reading them one at a time, and engaging in discussion in between. So thanks to Eliezer for posting them!

I really liked how there was an ending koan in the last post. It prompted discussion. I tried to think of a good prompt to post for this one, but couldn't. Anyone have some good ideas?

Also, Skill #2 made me think of this optical illusion

johnlawrenceaspden (11y):
I was planning to paint my boat today. There's already a coat of paint on it, drying. If I overpaint today, that's optimal. If I wait till tomorrow, then I'll have to sand it down first. It looks like it might rain, but the forecast is good. I don't know what effect rain will have on newly applied paint, or indeed on the current partly dried surface. Do I spend the afternoon painting the boat or carry on sitting in a coffee shop reading Less Wrong?
Richard_Kennaway (11y):
LessWrong will still be there tomorrow. The optimal opportunity to paint the boat won't be.
CCC (11y):
Is it possible to protect the boat from rain in some manner, such as leaving it under a roof?
johnlawrenceaspden (11y):
Impractical, as it happens. I eventually solved the problem by going home, changing into painting clothes, cleaning brushes, arranging tools and stirring paint. At that point it started raining heavily. So I undid all that in the rain, changed back into dry clothes, went back to the coffee shop and am now reading Less Wrong again. I think I just failed rationality for ever.
CCC (11y):
I don't think it's possible to fail rationality "for ever", as long as you are in a state where you can make observations, record memories, formulate goals, plan and take actions. Though you do seem to have been a bit unfortunate in the timing of the precipitation.
arundelo (11y):
You may already know this, but the phrase "fail x forever" is a thing.
wedrifid (11y):
Merely humanly impossible. If you are a more pure agent just assign probability "1" to enough things and you'll be set.
CCC (11y):
Hmmm. It seems that I should add "as long as you are able to reassign all priors of 1 to priors of 0.999999999, and all priors of 0 to priors of 0.000000001" to my list of exceptions. (It won't fix the agent immediately, but it will place the agent in a situation of being able to fix itself, given sufficient observations and updates).
Eugine_Nier (11y):
That's not the only problem. An agent that assigns equal probability to all possible experiences will never update.
CCC (11y):
Oh, that's sneaky. Perhaps a perfect agent should occasionally - very occasionally - perturb a random selection of its own priors by some very small factor (10^-10 or smaller) in order to avoid such a potential mathematical dead end?
Eugine_Nier (11y):
Nice try, but random perturbations won't help here.
CCC (11y):
I think that this re-emphasises the importance of good priors.
daenerys (11y):
I couldn't think of a koan-y question, but here is a discussion prompt. Let's make a Worksheet! Let's come up with some practice examples of the 2x2 matrix (such as the "Being Lost or Not" example in the OP), that people can fill out. The examples should be short (single paragraph) everyday type problems that people can relate to. Submit examples in the comments. I'll take the best and put them in a worksheet in Google docs, and link to it here. That way, when people in the future come and read this post, they have an activity to help them practice it. Also, people can use them at meetups if they want. Worksheets, of course, aren't the BEST way to learn, but they're better than nothing.

You're at work, and you find yourself wanting very badly to make a certain, particularly funny-but-possibly-taken-as-offensive remark to your boss. The comment feels particularly witty, quick-minded and insightful.

(trying to think of stuff that's fairly common and happens relatively often in everyday life)

You are leaving your home in the morning, to return in the evening; your day will involve quite a bit of walking and public transport. It is now warm and sunny, but you know that a temperature drop with heavy rains is forecasted for the afternoon. Looking out at the window and thinking of the walk in the sun and the crowded bus, you don't feel like carrying around a coat and umbrella. You start thinking maybe the forecast is wrong...

A1987dM (11y):
I put a pocket umbrella and/or a foldable raincoat into my handbag. Duh.
Alejandro1 (11y):
Yes, that is clearly the optimal solution. I was assuming you don't own those two items, or that you don't have a handbag the right size or don't want to use it--more plausible for a man than for a woman, I guess.
DaFranker (11y):
Carrying around a handbag in the first place happens to be something that I find annoying and risky. I'm prone to leaving it in easy-to-notice, easy-to-steal places or outright forgetting it in some public location.
A1987dM (11y):
Now that I think about that, that happened to me exactly once (as far as I can remember) with a handbag, though it happens to me very often¹ with other items such as keys, jackets, sweatshirts and sometimes my iPod. (I usually² eventually manage to recover them, but not always.) I guess that's because I'm more likely to immediately notice that I'm missing my bag than that I'm missing my keys.

1. Around once per month on average.
2. Around 90% of the time.
[anonymous] (11y):
What immediately comes to mind for me: You are knitting a fitted garment. Let's say it's a sweater. You've been knitting for a while, and you're starting to get concerned it won't fit the intended recipient. You can't tell for sure, because your needle is too short to fully stretch it out, but you just have this feeling. This feeling you hope is wrong, because you don't want to rip out and re-do all the ribbing you've just knit...
EvelynM (11y):
That's time for a new set of knitting needles, and empiricism. I have 60in cables.
shminux (11y):
You are an ex-smoker overcome with a sudden craving after a particularly bad day, and your helpful friend offers you a cigarette "have just this one smoke!" to relieve tension. You know that anything less than a complete abstinence has a chance of kickstarting the habit.
apotheon (11y):
If a stressful day is enough to give you a craving difficult to resist, I think that saying "anything less than complete abstinence has a chance of kickstarting the habit" is a misleading statement of how it works. It might be more accurate to say that every cigarette you have is one cigarette closer to having a habit you need to kick. It seems, in fact, that there's sort of a gradient of average craving from abstinence all the way up to two packs a day, with variances around those averages. It seems a bit obfuscatory to suggest that "complete abstinence" is the deciding factor, especially when considering the question "When does complete abstinence start? Why doesn't it start after the next cigarette?" After all, the "real" complete abstinence has already failed, if you had to quit smoking in the first place. . . . but that's kind of off the topic of the worksheet example.
Maelin (11y):
Sharing this sentiment. I'm particularly impressed with the cartoon diagrams. They're visually very appealing, and they encapsulate an idea in a way that takes just enough thought to untangle that I feel like it makes me engage with the conceptual message.
DaFranker (11y):
Same here, I'm certainly happy that this new sequence is starting. I devoured the old sequences, but being forced to stop and digest these makes them feel more impactful. I'd be curious to see how much more powerful the sequences could be if they all had Koans, too, especially if they were wrapped up in an interactive shell and you had to answer them before the rest of the article (and/or the next one(s)) would show up. Not as good as a Bayesian Dojo, but there doesn't seem to be enough Beisusenseitachi around to really be effective on that front.

Mainstream status:

"The conceivability of being wrong" and "perspective-taking on beliefs" are old indeed; I wouldn't be the least bit surprised to find explicit precedent in Ancient Greece.

Skill 3 in the form "Trust not those who claim there is no truth" is widely advocated by modern skeptics fighting anti-epistemology.

Payoff matrices as used in the grid-visualization method are ancient; using the grid-visualization method in response to a temptation to rationalize was invented on LW as far as I currently know, as was the Litany of Tarski. (Not to be confused with Alfred Tarski's original truth-schemas.)

"The conceivability of being wrong" aka "Consider the opposite" is the standard recommended debiasing technique in psychology. See e.g. Larrick (2004).

"The conceivability of being wrong" and "perspective-taking on beliefs" are old indeed; I wouldn't be the least bit surprised to find explicit precedent in Ancient Greece.

The most famous expression of this that I'm aware of originates with Lord Cromwell:

I beseech you, in the bowels of Christ, think it possible you may be mistaken.

Arguably, Socrates's claims of ignorance are a precursor, but they may stray dangerously close to anti-epistemology. I'm not a good enough classical scholar to identify anything closer.

The grid-visualization method / Litany of Tarski was invented on LW as far as I currently know.

The grid-visualization method seems like a relatively straightforward application of the normal-form game, with your beliefs as your play and the state of the world as your opponent's play. The advocacy to visualize it might come from LW, but actually applying game theory to life has a (somewhat) long and storied tradition.

[edit] I agree that doing it in response to a temptation to rationalize is probably new to LW; doing it in response to uncertainty in general isn't.

[anonymous] (11y):
I've seen it used before in the treatment of Pascal's wager: Believe in god x god exists = heaven, believe in god x god not exists = wasted life.... etc. Can't cite specific texts, but it was definitely pre-LW for me, from people who had not heard of LW.
Eliezer Yudkowsky (11y):
Ah yes, sorry. Payoff matrices are ancient; the Tarski Method is visualizing one in response to a temptation to rationalize. Edited.
MaoShan (11y):
That sounds like a good idea in two ways: It gives you practice at visualizing the alternatives (which is always good if it can be honed to greater availability/reflex by practice), and by choosing those specific situations, you are automatically providing real-world examples in which to apply it; that way, it is a practical skill.
Manfred (11y):
The intent seems different there, and that shapes the details. Pascal's wager isn't about how you act because of your beliefs - the belief is considered to be the action, and the outcomes are declared by fiat (or perhaps, fide) at the start of the problem, rather than modeled in your head as part of the purpose of the exercise.
pragmatist (11y):
The Litany of Tarski has connections to certain versions of the direction-of-fit model of beliefs and desires. The model is usually considered a descriptive attempt at cashing out the difference between the functional roles played by beliefs and desires. Both beliefs and desires are intentional states; they have propositional content (we believe that p, we desire that p). According to the direction-of-fit model, the crucial difference between beliefs and desires is the relation between the content of these states and the world -- specifically, the direction of fit between the content and the world differs. In the case of beliefs, subjects try to fit the content to the world, whereas in the case of desires, subjects try to fit the world to the content.

However, some philosophers treat the direction-of-fit model not as descriptive but as normative. The model tells us that the representational contents of our beliefs and desires should be kept rigorously separate (don't let your conception of how the world is be contaminated by your conception of how you would like it to be) and that we should have different attitudes to the contents of these mental states. Here's Mark Platts, from his book Ways of Meaning:

Also related (but not referring to the map/territory distinction as explicitly) is what Ken Binmore calls "Aesop's principle" (in reference to the fable in which a fox who cannot reach some grapes decides that the grapes must be sour). From his book Rational Decisions:

I should note that Binmore is talking about terminal preferences here. Of course, instrumental preferences need not (indeed, should not) be independent of our beliefs about the world and our assessments of what is feasible.
bryjnar (11y):
As someone else engaged with mainstream philosophy, I'd like to mention that I personally think that direction of fit is one of the biggest red herrings in modern philosophy. It's pretty much just an unhelpful metaphor. Just sayin'.
Decius (11y):
I never saw it as a real 'model', just a way of clarifying definitions, and making statements such as "I believe that {anything not a matter of fact}" null. It provides a way to distinguish between "I don't believe in invisible dragons in my basement." and "I don't believe in {immoral action}". I suspect the original intention was to validate a philosopher who got fed up with someone who hid behind 'I don't believe in that' in a discussion, after which the philosopher responded with evidence that the subject under discussion was factual.
pragmatist (11y):
It's really not my area at all, so I don't really have any well-developed opinions on this. My comment wasn't meant to be an endorsement of the model, I was just pointing out a similarity with a view in the mainstream literature. From a pretty uninformed perspective, it does seem to me that the direction-to-fit thing doesn't really get at what's important about the distinct functional roles of belief and desire, so I'm inclined to agree with your assessment.
bryjnar (11y):
Yeah, I did realise that you weren't necessarily supporting it, I just wanted to make it clear that it's not orthodoxy in mainstream philosophy! Sorry if it came off as a bit critical.
Unnamed (11y):
In psychology, this is called construal. A person's beliefs, emotions, behaviors, etc. depend on their construal (understanding/interpretation) of the world.
MarkL (11y):
Some versions of cognitive behavioral therapy ask you to write down the pros and cons of holding a particular belief.

It's too bad that these how-to posts tend to be not as popular as the philosophical posts. Good philosophy is important but I doubt it can produce rationalists of the quality that can be produced by consistent rationalist skills-training over months and years.

Philosophy posts are useful if they're interesting whereas how-to's are only useful if they work. While I greatly enjoy these posts, their effectiveness is admittedly speculative.

Philosophy posts are enjoyable if they're interesting. They're useful if they're right.

wedrifid (11y):
Philosophy being right isn't enough to make it necessarily useful. There is a potentially unbounded space of philosophical concepts to explore and most of them are not of instrumental use at this particular time. We can't say much more than "They are useful if they are right and they are, well, in some way useful". (I hesitate before pointing out the other side of the equation where a philosophy can be useful while actually being wrong because in such cases, and when unbounded processing capability is assumed, there is always going to be a 'right' philosophical principle that is at least as useful even if it is more complex, along the lines of randomized algorithms being not-better-than more thought out deterministic ones.)
chaosmosis (11y):
They can also inspire tangentially related thoughts which are enjoyable or useful. This is why Calculus is helpful even to people who don't do math for a living or for fun.
Eliezer Yudkowsky (11y):
...I honestly can't remember anymore what it's like to look at the world without knowing calculus. How do you figure out how any rate of change relates to anything else?
wedrifid (11y):
By, basically, intuitively grasping the most rudimentary aspects of and implications of calculus. (Or by learning the relationship explicitly or by learning one such relationship and intuitively extrapolating principles from one domain to another.)
Pentashagon (11y):
It might be good practice to imagine maps without calculus since so many people use them. I wouldn't be surprised if beliefs in things like global warming were divided by the knows-calculus line. How could you even explain climate change to someone who didn't understand that Temperature = dEnergy_in/dt - dEnergy_out/dt + C?
TheOtherDave (11y):
I would probably start by talking about electric heaters and how they convert energy to heat, and generalize a little to talk about the atmosphere being kind of like that. The harder part is explaining that the same energy input can cause not only temperature increases, but changes to wind and precipitation patterns.

I enjoy having posts which show how to apply rational thought processes to everyday situations, so thank you.

However, there is a failure mode on the 2x2 matrix method, that I think should be mentioned-- it ignores probabilities of various options, and focuses solely on their payoff (example given below). I think when making the 2x2 matrix, there should be an explicit step where you assign probabilities to the beliefs in question, and keep those probabilities in mind when making your decision.

I think this is obvious to most long-time LWers, but worry about someone new coming across this decision method, and utilizing it without thinking it through.

Here is an example of how this can backfire, otherwise:

Your new babysitter seems perfect in every way: Clean background check, and her organization skills helps offset your absent-mindedness. One day, you notice your priceless family heirloom diamond earrings aren't where you normally keep them. The probability is much higher that you accidentally misplaced them (you have a habit of doing that), but there is a small suspicion on your part that the babysitter might have taken them.

You BELIEVE she took them, in REALITY she took them- You f... (read more)

Two beliefs, one world is an oversimplification and misses an important middle step.

Two beliefs, two sets of evidence that may but need not overlap, and one world, is closer.

This becomes an issue when for example, one observer is differently socially situated than the other* and so one will say "pshaw, I have no evidence of such a thing" when the other says "it is my everyday life". They disagree, and they are both making good use of the evidence reality presents to each of them differently.

(* Examples of such social situational differences omitted to minimize politics, but can be provided on request.)

JulianMorrison (11y):
Expanding a little on this, it's not a counter-argument but a caveat to "Trust not those who claim there is no truth". When people say things like "western imperialist science", sometimes they are talking jibber-jabber, but sometimes they are pointing out that the victors write the ontologies, and in an Anthropocene world their ideas are literally made concrete.

Thinking about the map-territory distinction reminds me of Knoll's Law of Media Accuracy:

Everything you read in the newspapers is absolutely true except for the rare story of which you happen to have firsthand knowledge.

When I write a character, e.g. Draco Malfoy, I don't just extrapolate their mind, I extrapolate the surrounding subjective world they live in, which has that character at the center; all other things seem important, or are considered at all, in relation to how important they are to that character. Most other books are never told from more than one character's viewpoint, but if they are, it's strange how often the other characters seem to be living inside the protagonist's universe and to think mostly about things that are important to the main protagonist.

...

I think that part of this advice can be restated as "every character must think themselves the protagonist of their own lives" which I think I remember Orson Scott Card giving; though Eliezer's advice more explicitly focuses on how this affects their models of the universe.

A decade back, I was consciously attempting to use OSC's (if that's who I got it from) advice in a piece of Gargoyles fanfiction, "Names and Forms", set in mythological-era Crete. In that story I had a character who saw everything through the prism of ethnic relations (Eteocretans vs Achaeans vs Lycians); there's another who, because of his partly-divine heritage, couldn't help thinking about how gods and humans and gargoyles interact with each other; and Daedalus in his cameo appearance treated everything as just puzzles to be solved, whether it's a case of murder or a case of how-to-build-a-folding-chair... (Note: It's not a piece of rationalist fanfiction, nor does it involve anything particularly relevant to LessWrong-related topics.)

Morendil (11y):
That's a very nice way of stating it, and in application to real life is one of my personal mantras. It helps me a lot, for instance in avoiding fundamental attribution error.
gwern (11y):
David Weber places a lot of emphasis on this too; I wrote down what I could remember of his discussion of the topic at ICON 2012:
Chriswaterguy (8y):
The other writer who also does this extremely well is Vikram Seth, in A Suitable Boy.
Pentashagon (11y):
It's also an awesome trick for interacting with real people who have an actual subjective world-view different from mine. Unfortunately my mind can only effectively hold one human-size worldview at a time and so I am often confused by other people's actions or at best I second-guess my imagined cause of their behavior.

There are people who could stand to rehearse this, maybe by visualizing themselves with a thought bubble

Or with this teaching aid designed by Korzybski. He called the skill "consciousness of abstraction" and distinguished more levels than just "map" and "reality".

buybuydandavis (11y):
I've found myself pointing people to Korzybski a lot lately. It has been troubling me for a while that EY starts with a couple of the most basic statements of Korzybski, and then busies himself reinventing the wheel, instead of at least starting from what Korzybski and the General Semantics crowd have already worked out. EY is clearing brush through the wilderness while there's a paved road 10 feet away, and you're the first person on the list who has seemed to notice. There have been other smart people in the world. You can stand on the shoulders of giants, stand on the shoulders of stacks of midgets, or you can just keep on jumping in the air and flapping your arms.

Korzybski, for all his merits, is turgid, repetitive, and full of out-of-date science. The last is not his fault: he was as up to date for his time as Eliezer is now, but, for example, he was writing before the Bayesian revolution in statistics and mostly before the invention of the computer. Neither topic makes any appearance in his magnum opus, "Science and Sanity". I wouldn't recommend him except for historical interest. People should know about him, which is why I referenced him, and his work did start a community that continues to this day. However, having been a member of one of the two main general semantics organisations years back, I cannot say that he and they produced anything to compare with Eliezer's work here. If Eliezer is reinventing the wheel, compared with Korzybski he's making it round instead of square, and has thought of adding axle bearings and pneumatic tyres.

Some things should be reinvented.

buybuydandavis (11y):
EY talks about things they don't, but on the Map is Not the Territory, I don't see that EY or the usual discussions here have met Korzybski's level for consciousness of abstraction, let alone surpassed it. General Semantics provides a tidy metamodel of abstracting, identifies and names important concepts within the model, and adds some basic tools and practices for semantic hygiene. I find them generally useful, and I generally recommend them. For consciousness of abstraction, where and how has EY exceeded Korzybski? What are the new and improved bits? Where was K wrong, and EY right?

On second thoughts, when I said "[not] anything to compare with" that was wildly exaggerated. Of course they're comparable -- we are comparing them, and they are not so far apart that the result is a slam-dunk. But I don't want to get into a blue vs. green dingdong (despite having already veered in that direction in the grandparent).

Here are some brief remarks towards a comparison on the issues that occur to me. I'm sure there's a lot more to be said on this, but that would be a top-level post that would take (at least for me) weeks to write, with many hours of re-studying the source materials.

  1. Clarity of exposition. There really is no contest here: E wins hands down, and I have "Science and Sanity" in front of me.

  2. Informed by current science. Inevitably, E wins this one as well, just by being informed of another half-century of science. That doesn't just mean better examples to illustrate the same ideas, but new ideas to build on. I already mentioned Bayesian reasoning and computers, both unavailable to K.

  3. Consciousness of abstraction. Grokking, to use Heinlein's word, the map-territory distinction. Both E and K have hammered on this one. K refined it mo

...
buybuydandavis (11y):
Thanks for the elaboration. I agree with the comparative aspects. For 1), I'd say that although Korzybski was a painfully tedious windbag in Science and Sanity, I've seen lots of summaries that were concise and well written, though I don't remember a comprehensive summary of Science and Sanity that fits the bill. I was mainly getting at 3), with order of abstraction, multiordinal terms, and the concrete practices of semantic hygiene such as indexing, etc., and hyphenated non-elementalism. I'd add to your list that Korzybski's aversion to the izzes of identity and predication, along with his intensional vs. extensional distinction, really complement Tabooing a Word and Replacing the Symbol with the Substance. AK elaborates the full evaluative response - the intensional response - of a flesh and blood creature, identifies particularly problematic semantic practices which maladaptively evoke that response, and EY gives the practical method for semantic hygiene in terms of what you should be doing instead. AK always keeps in view the abstracting nervous system in a way that EY doesn't, and I think that added reductionism helps. A reductionist model which includes the salient points of human abstraction provides a generative method to make sense of the series of narratives that EY provides on different points of rationality. Also, AK's insistence on a physical structural differential, and knowledge based in the structure of various sensory modalities, is really a gusher of good ideas. AK stays closer to the wetware, and whatever the relative limits of science available to him, I think that reductionist focus works to provide a deep model for thinking about abstraction. Focus on a reductionist physical reality, and all sorts of supposed conundrums for speciation, life, and mind evaporate. I've been going off on this because there's just a ton of material from AK on semantic hygiene, which I take as a core method of getting Less Wrong, and all I usually see mentio...
Eliezer Yudkowsky (11y):
S. I. Hayakawa was a way better writer - that's where I got all my reprocessed Korzybski as a kid, and that's where I point people: Language in Thought and Action instead of Science and Sanity. I tried once to read the latter book as a kid, after being referred to it by Null-A. I was probably about... eleven years old? Thirteen? I gave up very, very rapidly, which I did not do for physics texts with math in them.
buybuydandavis (11y):
I won't argue with the literary analysis; K was stupendously tedious. I can't think of anyone more tiresome, although I have a feeling that his style was in vogue with various systematizers in the first half of the 20th century. I remember similar pain in reading Buckminster Fuller and Ludwig von Mises, though I couldn't finish Fuller (tried him in my teens), and von Mises wasn't quite as awful. Someone in the body-awareness field as well - Joseph Pilates or Alexander. Less sure on the last one. I trudged through Science and Sanity, often gritting my teeth, and think it was worth it. My impression of Hayakawa is that he takes the conclusions but leaves out the metamodel which generates the conclusions and ties them together. I felt that K gave me a way of thinking, while Hayakawa packaged a lot of results, but left out the way of thinking. I read K first, so Hayakawa tasted like relatively weak tea and didn't leave a big impression. K was more meaty, particularly on the Science/Mathematics side. Mathematics as an abstraction of functional relations of actions in the world - I don't know if it was literally tossing pebbles in a bucket, but it was close. It was the physical action of counting. Science as a semantic enterprise - finding new semantic structures to model the world. Space-Time as providing a static view of dynamic change. There was something good on differential equations too, something like reductionist locality turning nonlinear relations into linear relations. It's been almost 20 years now, so I'm a little hazy. Anyway, I'd recommend at least having a serious chat with someone well versed in the mathematical and scientific side of Korzybski and Science and Sanity, as there is a lot of good stuff in there that doesn't get a lot of attention even from the General Semantics crowd, who, like Hayakawa, focus on the verbal aspects of the theory.
buybuydandavis (11y):
Thank you for this response. This has removed a confusion I've had since I came to the site. You say in the article: At least in my recollection, you refer to AK as the inventor of "The Map is not the Territory" when you bring it up, and that always gave me the impression that you had read him. But then I would be puzzled because many of the other things he said were appropriate to the conversation, and you wouldn't bring those up at all. And you didn't even mention Hayakawa in the article. When someone mentions an author as the originator of an idea they're talking about, I assume they have read them, and bring that context to a reading of what they have written in turn. It would have been helpful to me if you had identified Hayakawa and Language in Thought and Action as where you had been exposed to the idea, distinguishing that from where Hayakawa had gotten the idea - AK. Maybe there aren't a lot of people who have actually read AK, but I think it would be a good general practice to make your sources clear to your readers.
Richard_Kennaway (11y):
For me it was Heinlein --> Korzybski --> van Vogt in my early teens. I doggedly ploughed through Korzybski, but the curious thing is, in my early twenties I reread him, and found him, not exactly light reading, but far clearer than he had been on my first attempt.

"Just keep doing what you're doing, and you'll eventually drive your rental car directly into the sea"

This works as a rhetorical device, but if one were to try to accurately weigh two options against each other, it might pay not to use reductio ad absurdum and instead have something like "Continue on in the wrong direction until the ETA has passed or events make the incorrect direction obvious, then try a new route, having lost up to the ETA." Which is still bad, but if no safe/available places to stop for directions presented themselves, might n...

The "koan" prompts are nice.

But please be responsible in employing them. Whatever the prompted reader generates as their own idea, and finds also in the following text, will be believed without the usual skepticism (at least, I noticed this "of course!" feeling). So be sure to write only true responses :)

My koan answer: a map-territory distinction can help you update in response to information about cognitive biases that could be affecting you. For instance, if I learn that people tend to be biased towards thinking that people from the Other Political Party are possessed by demonic spirits of pure evil, with a map-territory distinction, I can adjust my confidence that Republicans are possessed by demonic spirits of pure evil downwards, since I know that the cognitive bias means that my map is likely to be skewed from reality in a predictable direction.
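The adjustment described above can be made concrete with a toy Bayes calculation. All numbers here are invented for illustration, and `posterior` is a hypothetical helper, not anything from the post:

```python
def posterior(prior, p_evidence_if_true, p_evidence_if_false):
    """Bayes' rule: P(H | E) given a prior and the two likelihoods."""
    joint_true = prior * p_evidence_if_true
    joint_false = (1 - prior) * p_evidence_if_false
    return joint_true / (joint_true + joint_false)

prior = 0.01  # P(the Other Party really is that bad), before any evidence

# Naive update: treating my outraged impression as strong evidence.
naive = posterior(prior, 0.9, 0.1)

# Bias-aware update: knowing the bias, I'd expect the same impression in
# most worlds where they're ordinary people, so the evidence is much weaker
# than it feels from the inside.
adjusted = posterior(prior, 0.9, 0.6)

print(naive)     # ~0.083
print(adjusted)  # ~0.015
```

The bias-aware posterior stays close to the prior because the known bias means the impression barely distinguishes the two worlds, which is exactly the map-territory skew the comment describes.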

shminux (11y):
If you assign a non-infinitesimal probability to this literal case, odds are that your map is so bad, you don't have much to update to begin with.
AlexMennen (11y):
Yes, I was not being literal.
[anonymous] (11y):

Sometimes it still amazes me to contemplate that this proverb was invented at some point, and some fellow named Korzybski invented it, and this happened as late as the 20th century.

It is less surprising once you realize that other proverbs have conveyed the same idea - more aptly, I think: "Theory is gray, but the golden tree of life is green." --Johann Wolfgang von Goethe

The Goethe quote (substitute "reality" for "tree of life" to be more prosaic) brings out that the difference between the best theory and reality is reality's...

A1987dM (11y):
My favourite one is -- Hamlet Act 1, scene 5, 166–167

(Mainstream status here.)

When I follow this link, I get the text

You aren't allowed to do that.

Vladimir_Nesov (11y):
Fixed.
Vaniver (11y):
Notice the link's text has Eliezer_Yudkowsky-drafts in it.

'Luminosity' and 'Harry Potter and the Methods of Rationality'

Not the Hamlet one?

Fair and added. Also there's a lovely new bit of Munchkin fiction called Harry Potter and the Natural 20 (the author has confirmed this was explicitly HPMOR-inspired) but I don't know if it's 'explicit rationalist fiction' yet, although it's possibly already a good fic to teach Munchkinism in particular.

Vaniver (11y):
I thought it was starting poorly, but then I got to:
chaosmosis (11y):
This means I'll try it, thanks for that quote.
gwern (11y):
I thought there were a lot of quotable bits; fun fic.
chaosmosis (11y):
Oh yes.
gwern (11y):
That was good, but the blood was better.
Armok_GoB (11y):
There are also like 3 different MLP ones!
beoShaffer (11y):
Given all the rationalist fiction that is surfacing, may I suggest the wording: "in fact the only explicitly rationalist fiction I know of that is not a result of Less Wrong."
Eliezer Yudkowsky (11y):
Fair and edited. Also I left out "David's Sling".
Alicorn (11y):
Now that is a lovely fic. I want more of it. Why must things be works in progress?
gwern (11y):
Gresham's law.
Alicorn (11y):
I don't think that's really a good response to this complaint.

Yeah, but 40 years ago you wouldn't be saying 'gosh what I really need is a good munchkin HP/D&D crossover!'

You'd be saying something like, 'that P.G. Wodehouse/Piers Anthony/etc., what a hilarious writer! If only he'd write his next book faster!' or 'I'm really looking forward to the new anthology of G. K. Chesterton's uncollected Father Brown tales!'

EDIT: Thanks for ninjaing your comment so my response looks like a complete non sequitur. -_-

Alicorn (11y):
Well, 40 years ago I wasn't born. I tend not to like old fiction. I would be less happy and enjoy fiction less in a world where that was all I had to read, although perhaps I wouldn't know what I was missing (there may even in reality be some genre I haven't found yet that I would adore and am the poorer for not having located yet). I edited my comment because my first writing was based solely on seeing what article you linked to and then I searched for the specific law you named and decided my reply was inapt. Sorry.
gwern (11y):
This is pretty much what my entire article is about: there are something like 300 million books out there, >90% of which are 'old', with no real reason to expect an incredible quality imbalance (fantasy humor is an old genre, so old that practitioners like Robert Asprin have died), and yet the reading ratio is perhaps quite the inverse, with 90% of reading being new books, and someone like you can tell me in all apparent seriousness 'I don't like old fiction, I would be less happy in a world in which that was all I had!'
katydee (11y):
Counterargument: Old writing was written in accordance with old ideas. The inferential distance between a modern reader and an old writer is likely to be larger than the inferential distance between a modern reader and a modern writer. For this reason, modern writing is generally both easier and more relatable for the modern reader, and we should not be surprised that most modern readers read modern writing. The exceptions - old works that are considered classic and revered even by modern readers - are (nominally) those that have touched something timeless, and therefore ring true across the ages.
gwern (11y):
Is this distance sufficient to explain the recentism bias? Can you give an example of how a great SF novel like Dune has 'inferential distance' so severe as to explain why more people are at any point buying the (incredibly shitty terrible) NYT-bestselling sequels by Kevin J. Anderson & Brian Herbert than the original?
katydee (11y):
"At any point" seems highly unlikely, since the sequels didn't exist during the same timespan as the original. I would be surprised if the number of readers of any given Dune sequel were greater than the number of readers of Dune itself; such would indeed constitute evidence in favor of unreasonable recentism. However, I think that the fact that the sequels are bought more often now is more likely to be the result of sampling bias rather than an actual reflection of the popularity of the original relative to its sequels.
gwern (11y):
Well, that's where the sales figures come into play, and why I mentioned them. If every reader first buys Dune and only later - maybe - buys any sequel or prequel, then we would expect Dune to always outrank any of the others. To the extent that Dune does not appear on the rankings... The flow of buyers will reflect popularity. Of course, some readers will not buy Dune and will read it a different way, but this is equally true of the sequels/prequels! Filesharing networks and libraries stock them too.
katydee (11y):
I expect that Dune is much, much more common in libraries than any of its sequels, or at least is checked out more often. This is supported by a quick search of my local library catalog, which reveals that the library system here has zero to two copies of any given Dune sequel, nearly all of which are currently available, but six copies of Dune, only one of which is currently available. The other library I sometimes visit appears to have zero to one copy of each Dune sequel, nearly all of which are currently available, but four copies of Dune, zero of which are available. Obviously, this is a limited sample, but I expect that similar trends generally prevail.
hairyfigment (11y):
Why would you think this? Besides what katydee says about libraries, I've gotten many SF books from my parents' stash over the years. To the point where I had to stop myself from generalizing and rejecting your claim out of hand.
Alicorn (11y):
Yes, I read your article. I just disagree with you about most of it. I like some fiction-by-people-now-dead, but I don't like elderly "classics", and if a ban on new books had been implemented at any point in the past I would be the poorer for not having things that have come out since then, even if you grandfathered in series-in-progress. This is not ridiculous just because you think some "quality" metric is holding steady. There are other things to like about books than your invented bullshit "quality" metric. You know what? I like books that were written originally in my language. That doesn't include Shakespeare; my language updates constantly and books don't. I like fanfiction, and active living fandoms where people will write each other presents according to specific prompts because someone really wanted something really specific that didn't exist a minute ago and riff on and respond to and parody each other in prose around a shared touchstone. That couldn't exist if there were some ban on new material and all these people spent their time quilting instead. I like books with fancy tech in them, and exactly what can get past my suspension-of-disbelief filter changes alongside real technology. I can read Heinlein even with slide rules in space, but damn, that would get old. Hell, I like writing. I like a lot of things that you see no value in and wish to slay. Please step back with the pointy objects.
gwern (11y):
Calm down, it's just an essay... I dunno, people used to get a lot out of quilting and knitting - the phrase 'knitting circle' comes to mind. But your contempt for various subcultures aside: So, 'writing is not about writing'; which is pretty much one of the major themes - whatever is justifying all this new fiction, it's not nebulous claims about sliderules in space or new books being 'better' than old ones or reading like Shakespeare (most of those 300m books are, uh, not from Elizabethan times -_-). Community is as good an explanation as any I've seen.

Calm down, it's just an essay...

I intensely resent this as a debate tactic. Your ability to ask me to calm down is unrelated to what emotions I'm having, whether I'm expressing them appropriately, or whether they are justified; it's a fully general silencing tactic. If I resorted to abuse or similar it might be warranted, but I haven't (unless you count "bullshit", but that's not what you quoted). I do in fact feel attacked by the suggestion that huge swaths of things valuable to me are worthless and ought to be done away with! You did in fact suggest that! I'm a human, and you cannot necessarily poke me without getting growled at.

Do you finish every book you pick up? I don't. I put them down if they don't reach a certain threshold of engagingness &c. The bigger the pile of books next to me, the pickier I can be: I can hold out for perfect 10s instead of sitting through lots of 8s because I can only get so many things out of the library at once. This includes pickiness for things other than "quality". If I want to go on a binge of mediocre YA paranormal romance (I did, a few months ago), I am fully equipped to find only the half-dozen most-Ali...

I do in fact feel attacked by the suggestion that huge swaths of things valuable to me are worthless and ought to be done away with!

Unless you enjoy being outraged at a low threshold by something outside your control, this is a trait that you should be dissatisfied with and attempt to modify, not something to be stated as immovable fact. I note, however, that acting like that trait is an immovable fact makes for more favorable status dynamics and a better emotion-bargaining position...

Unless you enjoy being outraged at a low threshold by something outside your control, this is a trait that you should be dissatisfied with and attempt to modify

Does not follow. I prefer to feel in ways that reflect the world around me. As long as I also think this sort of thing is an attack, feeling that way is in accord with that preference whether it makes me happier or not. As long as I don't care to occupy a pushover role where I make myself okay with whatever happens to be going on so that people don't have to account for my values, drawing a line beyond which I will not self-modify makes perfect sense; and in fact I do not want to occupy that pushover role.

I note however, that acting like that trait is an immovable fact makes for more favorable status dynamics and a better emotion-bargaining position...

I derive some of my status from cultivating the ability to modify myself as I please; I'd actually sacrifice some of that if I declared this unchangeable. And I do not declare it unchangeable! I just have other values than happiness.

Athrelon (11y):
In any normal social context it would be reasonable to assume that this is an overconfident statement deliberately made without caveats in order to enhance bargaining power. Which is fine - humans are selfish. This being LW, where there's a good chance that this was intended literally - this sort of rigidity is exactly why "learning how to lose" is a skill.
wedrifid (11y):
That isn't true. There are times when overconfidence is used to enhance bargaining power. But people really not liking people doing things that hurt them is just considered normal and healthy human behavior. No, it isn't. Learning to lose is a skill independent of knowing what 'losing' means and not liking to lose.
wedrifid (11y):
Have 7.34 status points for not wireheading (more than you reflectively desire to wirehead). Some things you can counter-signal.
wedrifid (11y):
I'd add that it is also a general discrediting tactic. It seems to have been rather effective in this case. According to my analysis of the conversation your comments don't seem any more intemperate, mind-killed or confrontational---in some ways they seem less so. You expressed disagreement with reasoning on something that is significantly subjective. Yet there are indications that perception has been swayed such that you are considered to have been emotional and irrational while gwern is noble and to be honored for what seems to be just claiming the moral high ground and exploiting that advantage.
gwern (11y):
I don't like arguing with angry or growling people, so I'm going to stop here.
[anonymous] (11y):
Jonathan_Graehl (11y):
To the extent that people can go on a subgenre binge and be right to do so, perhaps we can afford a few writers for relatively virgin genres. Otherwise I find gwern's argument that we'd be nearly as happy reading 20+ year old books pretty compelling (oddly, I don't buy a similar argument for movies, due only in part to movie-making tech advances).
Richard_Kennaway (11y):
Books, music, and all other art forms, unlike apples, are not fungible, not even items of the same "quality" (however defined). BTW, I have that collection of the complete Bach in 160 CDs (and have listened to all of it at least twice). And I'm collecting the complete Masaaki Suzuki recordings of the Bach cantatas (which are completely different from the Leonhardt/Harnoncourt performances in the Bach 2000 set), and I might spring for the John Eliot Gardiner cantatas if he manages to issue them as a complete set. I also went to this performance yesterday of an art form dating back all of 60 years (the drums are from the long-long-ago, but this use of them is not), and buy everything Greg Egan writes as soon as it comes out. Yes, no-one can read/listen to/view more than the tiniest fraction of what there is, but to read nothing old, or to read nothing new, are selection rules that have only simplicity in their favour. There is no one-dimensional scale of "quality".
gwern (11y):
A point which applies equally to old and new. And ultimately every choice comes down to read or don't read... I think you're deprecating them too quickly. Let's take the 90% guess at face value: if you are selecting primarily from just the most recent 10%, then whatever quality is - however multidimensional you choose to define it - you need to somehow make up for throwing out 9/10ths of all the best books, the ones which happened to be old! It'd be like running a machine learning or statistical algorithm which starts by throwing out 90% of the data from consideration; yeah, maybe that's a good idea, but you're going to have a hard time selecting from the remaining 10% so much better that it makes up for it.
simplicio (11y):
I'd STILL like Wodehouse to write a few more. Unfortunately...
Jonathan_Graehl (11y):
Not that gwern was wrong in any way in his general point, but I also tremendously enjoyed this particular crossover and second everyone's recommendation (at least, if you've ever attempted "roleplaying" of the non-sexual type).
RomeoStevens (11y):
Is Hamlet still available online? I don't see it.
Alicorn (11y):
Under normal circumstances, you have to buy it.
[anonymous] (11y):
http://www.gutenberg.org/ebooks/100

Deleted due to the attempt to evade the -5 penalty.

Eugine_Nier (11y):
I thought part of the point of the -5 penalty was to keep interesting discussions from happening downstream of downvoted comments. In that case, isn't responding to heavily downvoted comments in a different thread exactly what should happen?
wedrifid (11y):
I assumed that either Eliezer just didn't like the subject or that the comment actually quoted a -5 comment. Hang on. This can be checked. We can see from Eliezer's page which author Eliezer was replying to and look at that user's page. (From what I can tell everything the user in question has written has been downvoted.)
Risto_Saarelma (11y):
I understood that the system actually stops the thread starter from replying to replies to their own comment if they have less than +5 total karma. Stop people from talking to the people talking to them and they will go for a circumvention. Maybe just let people accrue more negative karma when replying to downvoted threads, rather than stopping them when they hit the arbitrary zero point?

There are some aspects of maps - for example, edges, blank spots, and so on - that seem, if not necessary, extremely convenient to keep as part of the map. However, if you use these features of a map in the same way that you use most features of a map - to guide your actions - then you will not be guided well. There's something in the Sequences like "the world is not mysterious" about people falling into the error of moving from blank/cloudy spots on the map to "inherently blank/cloudy" parts of the world.

The slogan "the map is not ...

Anna Salamon and I usually apply the Tarski Method by visualizing a world that is not-how-we'd-like or not-how-we-previously-believed, and ourselves as believing the contrary, and the disaster that would then follow.

I find just that description really, really useful. I knew about the Litany of Tarski (or Diax's Rake, or believing something just because you wanted it to be true) and have the habit of trying to preemptively prevent it. But that description makes it a lot easier to grok it at a gut level.

When I was trying to solve the koan I focused on a few interrelated subproblems of skill one. It seems like this sort of thinking is particularly useful for reminding yourself to consider the outside view and/or the difference between confidence levels inside and outside an argument.
Also, I think the koan left out something pretty important.
Under what circumstances, if any, is it harmful to consciously think of the distinction between the map and the territory - to visualize your thought bubble containing a belief, and a reality outside it, rather than ju...

Richard_Kennaway (+5, 11y):
If you can ever gain by being ignorant, you can gain more by better knowledge still. Cf. E.T. Jaynes: "It appears to be a quite general principle that, whenever there is a randomized way of doing something, then there is a nonrandomized way that delivers better performance but requires more thought", quoted here.
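Jaynes's principle can be illustrated with the classic probability-matching example (my own illustration, not from the quoted source): when predicting flips of a coin biased toward heads with probability p, the randomized strategy of guessing heads with probability p does strictly worse in expectation than the non-randomized rule of always guessing the more likely side.

```python
def probability_matching_accuracy(p):
    """Expected accuracy when guesses are randomized to match the bias p:
    P(guess heads)*P(heads) + P(guess tails)*P(tails)."""
    return p * p + (1 - p) * (1 - p)

def majority_rule_accuracy(p):
    """Expected accuracy of the deterministic rule: always guess the
    more likely outcome."""
    return max(p, 1 - p)

# For a 70% heads coin: matching yields 0.58, always-heads yields 0.70.
```

The deterministic rule requires "more thought" only in the sense of first working out which side is more likely; having done so, randomizing can only throw that knowledge away.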
Morendil (+5, 11y):
Beliefs are part of reality too. The image "thought bubble containing a belief, and a reality outside it" is a good map, but it's not itself the territory. In particular, the mantra "Reality is that which, when we stop believing in it, doesn't go away" can be harmful in areas such as psychology and sociology, and in domains which have a large component of these, such as finance, politics or software engineering. In these domains you must account for phenomena such as self-fulfilling or self-cancelling prophecies. Concrete example: stock market crashes.
[anonymous] (+0, 11y):
So you're saying that if I stop believing in stock market crashes, they go away? I think what you mean is that if you intervened to change everyone's beliefs away from "oh shit, sell!", then stock market crashes would not happen. That is a different matter from talking about just my belief or yours.
Morendil (+5, 11y):
More often it works the other way around: the fact that someone stops believing in an overinflated stock market (i.e. claims a "bubble" is about to burst) acts as a self-fulfilling prophecy, causing others to also stop believing, which - if this information cascade propagates enough - will cause a crash, bringing reality in line with the original belief. Information cascades can also cause booms, which, as I understand it, is more likely with individual stocks.

The "someone" above is underspecified: it can be one particularly influential person - Nate Silver recounts how Amazon stock surged 25% after Henry Blodget hyped it up in 1998. But it can also be a larger group who, looking at small fluctuations in the market, panic and start a stampede.

My point is that "thought bubbles" in general are part of reality. Your believing in things has causal influence on reality (another concrete example: romantic relationships - the concept "love", which can be cashed out in terms of blood levels of various hormones, is one of those things that go away when people stop believing in it). It is generally bad epistemic practice to overstate this influence, but it can also be bad to understate it.
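The tipping-point dynamic described above can be sketched with a toy threshold model (entirely illustrative; the numbers are assumptions, not a model of any real market): each trader in turn sells only if the fraction of the market that has already sold exceeds a herding threshold, so a tiny difference in initial sellers separates a non-event from a full crash.

```python
def run_market(initial_sellers, n_agents=100, herd_threshold=0.3):
    """Each remaining agent sells iff the observed fraction of sellers
    so far exceeds the herding threshold. Returns the final seller count."""
    sellers = initial_sellers
    for _ in range(n_agents - initial_sellers):
        if sellers / n_agents > herd_threshold:
            sellers += 1
    return sellers

# 30 initial sellers: nothing happens. 31: the whole market sells.
```

The discontinuity at the threshold is the self-fulfilling prophecy in miniature: once enough people believe the crash is happening, their belief makes it happen.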
[anonymous] (+0, 11y):
Agreed. My point was that your examples were a part of reality in a way that the ideal belief-of-observer used in the "reality is that which..." mantra isn't.
[anonymous] (+4, 11y):
No. It may be good to talk shit like you're overconfident. Actually being overconfident is just unnecessarily shooting yourself in the foot.

Then he'd probably ignore Alicorn's scornful comment.

Yes. It would probably also involve expressing agreement with part of what Alicorn said (ideally part that he could sincerely agree with) and perhaps paraphrasing another part back with an elaboration. That seems to work sometimes.

I don't think gwern's required to turn the other cheek, and you obviously don't think you are so required, either.

No, I don't (where all the negatives add up to agreement with this quote). That is just what would earn him immense respect for social grace (and plain grace).

It's important to distinguish "The map is not the territory" from "The map is not a perfect representation of the territory."

The major difference is that beliefs cannot easily be used as direct or indirect concrete objects; I cannot look in my belief of what's in the basket and find (or not find) a marble. I cannot test my beliefs by experimentation to find if they correspond to reality; I must test reality to find if my beliefs correspond to it.

[anonymous] (+20, 11y):

If my socks will stain, I want to believe my socks will stain; If my socks won't stain, I don't want to believe my socks will stain; Let me not become attached to beliefs I may not want.

That was beautiful. I will definitely keep that mantra in mind.

[...] while reality itself is either one way or another.

Is this true?

kris_buote (+1, 5y):
Quantum mechanics doesn't seem so clear-cut.
Elo (-11, 5y):
[anonymous] (+10, 9y):

Sometimes it still amazes me to contemplate that this proverb was invented at some point(...) to me this phrase sounds like a sheer background axiom of existence.

Because "the map is not the territory" is applied atheism. To a theist, the map in god's mind caused the territory to happen, so that map is even more real than the territory. And every human map is as accurate as it approaches the primordial divine map, the fact that it also happens to predict the terrain merely being a nice bonus. Even Einstein believed this. To invent "the map...

[anonymous] (+10, 11y):

Thanks for the clear illustration

The illustrations are great. I wish there were one or two more in this post.

This time, I wrote down my answer to the koan: the basic idea was correct, but I didn't come up with as many examples of subskills as Eliezer listed.

It helps to realize that there may be mistakes in the process of constructing a map, and that you may need to correct them. If there is a problem where it's important to be right, like when figuring out whether you should invest in a company, or if you are feeling bad about your life and wonder whether it's justified, you need to be able to make the map-territory distinction in order to evaluate the accuracy of your

...

With apologies for being so late to the party, I'm somewhat perplexed by a post entitled "The Map is Not the Territory" that then dismisses the originator with a pithy "...some fellow named Korzybski...". Given that the site deals with AI/ML, and that Korzybski is also credited with developing General Semantics (full of implications for AI), I'm guessing this apparently pithy dismissal conceals an appreciation for Korzybski hidden elsewhere. I could be wrong though.

Under what circumstances is it helpful to consciously think of the distinction between the map and the territory

I thought about this before reading the rest of the post, and came up with: "When I find myself surprised by something." Surprise may indicate that something improbable has happened, but may also indicate an error in my estimation of what's probable. Given that the observation appears improbable to begin with (or I wouldn't be surprised), I should suspect the map first.
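That reasoning is just Bayes' rule; here is a small worked computation (all the numbers are invented for illustration): with a 10% prior that my map is wrong, an observation that would be fairly likely (50%) if the map is wrong but rare (1%) if it is right pushes the posterior probability that the map is wrong to roughly 85% - so "suspect the map first" is exactly right.

```python
def posterior_map_wrong(prior_wrong, p_obs_if_wrong, p_obs_if_right):
    """Bayes' rule: P(map wrong | surprising observation)."""
    numerator = prior_wrong * p_obs_if_wrong
    evidence = numerator + (1 - prior_wrong) * p_obs_if_right
    return numerator / evidence

# posterior_map_wrong(0.10, 0.50, 0.01) comes out to about 0.847
```

The key ratio is p_obs_if_wrong / p_obs_if_right: the more surprising the observation is under your current map, the more the posterior shifts toward "the map is in error".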

I find myself to be particularly susceptible to the pitfalls avoided by Skill 4. I'll have to remember to explicitly invoke the Tarski Method next time I catch myself in the act of attempting to fool myself.

One scenario not listed here in which I find it particularly useful to explicitly think about my own map is in cases where the map is blurry (e.g. low precision knowledge: "the sun will set some time between 5pm and 7pm") or splotchy (e.g. explicit gaps in my knowledge: "I know where the red and blue cups are, but not the green cup"). When I bring my map's flaws explicitly into my awareness, it allows me to make plans which account for the uncertainty of my knowledge, and come up with countermeasures.

In your verbal description it says 40 miles, but in the matrix it says 40 minutes.

sboo (+0, 11y):
60 mph?

one of only two explicitly rationalist fictions I know of that didn't descend from HPMOR, the other being "David's Sling" by Marc Stiegler

You might consider Mark Clifton's novel "Eight Keys to Eden" (1960) as another rationalist fiction (though it's more debatable). Available from Gutenberg at http://www.gutenberg.org/ebooks/27595

TheWakalix (+1, 6y):
Interestingly, that seems to take an opposite view on "map and territory" from Vogt.

I don't think gwern's required to turn the other cheek

Someone like you can tell me in all apparent seriousness that Alicorn slapped first, but that doesn't make it so.

Now it may be that her edited comments contained emotional attacks with nonstop profanity, starting before the linked comment. But the record only shows her apologizing and getting the contemptuous line I quoted in response.