"grass is green" and "sky is blue" are always funny examples to me, since whenever I hear them I go check, and they're usually not true. Right now from my window, I can see brown grass and a white/gray sky.
So they're especially good examples, as people will actually use them as paradigms of indisputably true empirical propositions, and even those seem almost always to be a mismatch between the map and the territory.
Implicit in a 75% probability of X is a 25% probability of not-X
This may strike everyone as obvious...
My experience with the GJP suggests that it's not. Some people there, for instance, are on record as assigning a 75% probability to the proposition "The number of registered Syrian conflict refugees reported by the UNHCR will exceed 250,000 at any point before 1 April 2013".
Currently this number is 242,000, the trend in the past few months has been an increase of 1000 to 2000 a day, and the UNHCR have recently provided estimates that this number will eventually reach 700,000. This was clear as early as August. The kicker is that the 242K number is only the count of people who are fully processed by the UNHCR administration and officially in their database; there are tens of thousands more in the camp who only have "appointments to be registered".
It's hard for me to understand why people are not updating to, maybe not 100%, but at least 99%; those seem like the only answers worth considering. To state your probability as 85% or 91% (as some have quite recently) is to say, "There is a one in ten chance that the Syrian conflict will suddenly stop and all...
I've been enjoying the new set of Sequences. I wasn't around when the earlier Sequences were being written; it's like the difference between reading a series of books all in one go, versus being part of the culture, reading them one at a time, and engaging in discussion in between. So thanks to Eliezer for posting them!
I really liked how there was an ending koan in the last post. It prompted discussion. I tried to think of a good prompt to post for this one, but couldn't. Anyone have some good ideas?
Also, Skill #2 made me think of this optical illusion
You're at work, and you find yourself wanting very badly to make a certain funny-but-possibly-taken-as-offensive remark to your boss. The remark feels particularly witty, quick-minded and insightful.
(trying to think of stuff that's fairly common and happens relatively often in everyday life)
You are leaving your home in the morning, to return in the evening; your day will involve quite a bit of walking and public transport. It is now warm and sunny, but you know that a temperature drop with heavy rain is forecast for the afternoon. Looking out of the window and thinking of the walk in the sun and the crowded bus, you don't feel like carrying around a coat and umbrella. You start thinking maybe the forecast is wrong...
"The conceivability of being wrong" and "perspective-taking on beliefs" are old indeed; I wouldn't be the least bit surprised to find explicit precedent in Ancient Greece.
Skill 3 in the form "Trust not those who claim there is no truth" is widely advocated by modern skeptics fighting anti-epistemology.
Payoff matrices as used in the grid-visualization method are ancient; using the grid-visualization method in response to a temptation to rationalize was invented on LW as far as I currently know, as was the Litany of Tarski. (Not to be confused with Alfred Tarski's original truth-schemas.)
"The conceivability of being wrong" aka "Consider the opposite" is the standard recommended debiasing technique in psychology. See e.g. Larrick (2004).
"The conceivability of being wrong" and "perspective-taking on beliefs" are old indeed; I wouldn't be the least bit surprised to find explicit precedent in Ancient Greece.
The most famous expression of this that I'm aware of originates with Lord Cromwell:
I beseech you, in the bowels of Christ, think it possible you may be mistaken.
Arguably, Socrates's claims of ignorance are a precursor, but they may stray dangerously close to anti-epistemology. I'm not a good enough classical scholar to identify anything closer.
The grid-visualization method / Litany of Tarski was invented on LW as far as I currently know.
The grid-visualization method seems like a relatively straightforward application of the normal-form game, with your beliefs as your play and the state of the world as your opponent's play. The advocacy to visualize it might come from LW, but actually applying game theory to life has a (somewhat) long and storied tradition.
[edit] I agree that doing it in response to a temptation to rationalize is probably new to LW; doing it in response to uncertainty in general isn't.
It's too bad that these how-to posts tend to be not as popular as the philosophical posts. Good philosophy is important but I doubt it can produce rationalists of the quality that can be produced by consistent rationalist skills-training over months and years.
Philosophy posts are useful if they're interesting whereas how-to's are only useful if they work. While I greatly enjoy these posts, their effectiveness is admittedly speculative.
I enjoy having posts which show how to apply rational thought processes to everyday situations, so thank you.
However, there is a failure mode in the 2x2 matrix method that I think should be mentioned: it ignores the probabilities of the various options and focuses solely on their payoffs (example given below). I think that when making the 2x2 matrix, there should be an explicit step where you assign probabilities to the beliefs in question, and keep those probabilities in mind when making your decision.
I think this is obvious to most long-time LWers, but worry about someone new coming across this decision method, and utilizing it without thinking it through.
Here is an example of how this can backfire, otherwise:
Your new babysitter seems perfect in every way: clean background check, and her organizational skills help offset your absent-mindedness. One day, you notice your priceless family heirloom diamond earrings aren't where you normally keep them. The probability is much higher that you accidentally misplaced them (you have a habit of doing that), but there is a small suspicion on your part that the babysitter might have taken them.
You BELIEVE she took them, in REALITY she took them- You f...
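The commenter's fix (add an explicit probability step to the 2x2 matrix) can be sketched as an expected-value computation. All the numbers below are hypothetical, chosen only to illustrate the failure mode: a small chance of a good outcome in one quadrant can be swamped by the probable world.

```python
# A minimal sketch of weighting the 2x2 payoff matrix by probabilities.
# Payoffs and probabilities are illustrative, not from the original comment.

def expected_payoff(payoffs_by_world, p_world):
    """Probability-weighted payoff of one action across possible worlds."""
    return sum(p_world[w] * payoffs_by_world[w] for w in p_world)

p_world = {"babysitter_took_them": 0.05, "i_misplaced_them": 0.95}

actions = {
    # Accusing pays off only in the unlikely world where she took them,
    # and is very costly in the likely world where you misplaced them.
    "accuse":     {"babysitter_took_them": 10, "i_misplaced_them": -50},
    "keep_quiet": {"babysitter_took_them": -10, "i_misplaced_them": 0},
}

for action, payoffs in actions.items():
    print(action, expected_payoff(payoffs, p_world))
# keep_quiet has the higher expected payoff despite its worse entry in the
# "she took them" quadrant, because the probabilities matter.
```

Looking only at the raw payoff matrix (as a naive reading of the method might suggest) hides exactly this asymmetry.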
Two beliefs, one world is an oversimplification and misses an important middle step.
Two beliefs, two sets of evidence that may but need not overlap, and one world, is closer.
This becomes an issue when for example, one observer is differently socially situated than the other* and so one will say "pshaw, I have no evidence of such a thing" when the other says "it is my everyday life". They disagree, and they are both making good use of the evidence reality presents to each of them differently.
(* Examples of such social situational differences omitted to minimize politics, but can be provided on request.)
Thinking about the map-territory distinction reminds me of Knoll's Law of Media Accuracy:
Everything you read in the newspapers is absolutely true except for the rare story of which you happen to have firsthand knowledge.
...When I write a character, e.g. Draco Malfoy, I don't just extrapolate their mind, I extrapolate the surrounding subjective world they live in, which has that character at the center; all other things seem important, or are considered at all, in relation to how important they are to that character. Most other books are never told from more than one character's viewpoint, but if they are, it's strange how often the other characters seem to be living inside the protagonist's universe and to think mostly about things that are important to the main protagonist
I think that part of this advice can be restated as "every character must think themselves the protagonist of their own lives" which I think I remember Orson Scott Card giving; though Eliezer's advice more explicitly focuses on how this affects their models of the universe.
A decade back, I was consciously attempting to use OSC's (if that's who I got it from) advice in a piece of Gargoyles fanfiction "Names and Forms" set in mythological-era Crete. In that story I had a character who saw everything through the prism of ethnic relations (Eteocretans vs Achaeans vs Lycians), and there was another who, because of his partly divine heritage, couldn't help thinking about how gods and humans and gargoyles interact with each other, and Daedalus in his cameo appearance treated everything as just puzzles to be solved, whether it's a case of murder or a case of how-to-build-a-folding-chair... (Note: It's not a piece of rationalist fanfiction, nor does it involve anything particularly relevant to LessWrong-related topics.)
Korzybski, for all his merits, is turgid, repetitive, and full of out of date science. The last is not his fault: he was as up to date for his time as Eliezer is now, but, for example, he was writing before the Bayesian revolution in statistics and mostly before the invention of the computer. Neither topic makes any appearance in his magnum opus, "Science and Sanity". I wouldn't recommend him except for historical interest. People should know about him, which is why I referenced him, and his work did start a community that continues to this day. However, having been a member of one of the two main general semantics organisations years back, I cannot say that he and they produced anything to compare with Eliezer's work here. If Eliezer is reinventing the wheel, compared with Korzybski he's making it round instead of square, and has thought of adding axle bearings and pneumatic tyres.
Some things should be reinvented.
On second thoughts, when I said "[not] anything to compare with" that was wildly exaggerated. Of course they're comparable -- we are comparing them, and they are not so far apart that the result is a slam-dunk. But I don't want to get into a blue vs. green dingdong (despite having already veered in that direction in the grandparent).
Here are some brief remarks towards a comparison on the issues that occur to me. I'm sure there's a lot more to be said on this, but that would be a top-level post that would take (at least for me) weeks to write, with many hours of re-studying the source materials.
Clarity of exposition. There really is no contest here: E wins hands down, and I have "Science and Sanity" in front of me.
Informed by current science. Inevitably, E wins this one as well, just by being informed of another half-century of science. That doesn't just mean better examples to illustrate the same ideas, but new ideas to build on. I already mentioned Bayesian reasoning and computers, both unavailable to K.
Consciousness of abstraction. Grokking, to use Heinlein's word, the map-territory distinction. Both E and K have hammered on this one. K refined it mo
"Just keep doing what you're doing, and you'll eventually drive your rental car directly into the sea"
This works as a rhetorical device, but if one were to try to accurately weigh two options against each other, it might pay not to use reductio ad absurdum and have something like "Continue on in the wrong direction until the ETA were passed or events made the incorrect direction obvious, then try a new route, having lost up to ETA." Which is still bad, but if no safe/available places to stop for directions presented themselves, might n...
The "koan" prompts are nice.
But please be responsible in employing them. Whatever the prompted reader generates as their own idea, and finds also in the following text, will be believed without the usual skepticism (at least, I noticed this "of course!" feeling). So be sure to write only true responses :)
My koan answer: a map-territory distinction can help you update in response to information about cognitive biases that could be affecting you. For instance, if I learn that people tend to be biased towards thinking that people from the Other Political Party are possessed by demonic spirits of pure evil, with a map-territory distinction, I can adjust my confidence that Republicans are possessed by demonic spirits of pure evil downwards, since I know that the cognitive bias means that my map is likely to be skewed from reality in a predictable direction.
Sometimes it still amazes me to contemplate that this proverb was invented at some point, and some fellow named Korzybski invented it, and this happened as late as the 20th century.
It's less surprising if you realize that other proverbs have conveyed the same idea, and I think more aptly: "Theory is gray, but the golden tree of life is green." --Johann Wolfgang von Goethe
The Goethe quote (substitute "reality" for "tree of life" to be more prosaic) brings out that the difference between the best theory and reality is reality's...
Fair and added. Also there's a lovely new bit of Munchkin fiction called Harry Potter and the Natural 20 (the author has confirmed this was explicitly HPMOR-inspired) but I don't know if it's 'explicit rationalist fiction' yet, although it's possibly already a good fic to teach Munchkinism in particular.
Yeah, but 40 years ago you wouldn't be saying 'gosh what I really need is a good munchkin HP/D&D crossover!'
You'd be saying something like, 'that P.G. Wodehouse/Piers Anthony/etc., what a hilarious writer! If only he'd write his next book faster!' or 'I'm really looking forward to the new anthology of G.K Chesterton's uncollected Father Brown tales!'
EDIT: Thanks for ninjaing your comment so my response looks like a complete non sequitur. -_-
Calm down, it's just an essay...
I intensely resent this as a debate tactic. Your ability to ask me to calm down is unrelated to what emotions I'm having, whether I'm expressing them appropriately, or whether they are justified; it's a fully general silencing tactic. If I resorted to abuse or similar it might be warranted, but I haven't (unless you count "bullshit", but that's not what you quoted). I do in fact feel attacked by the suggestion that huge swaths of things valuable to me are worthless and ought to be done away with! You did in fact suggest that! I'm a human, and you cannot necessarily poke me without getting growled at.
Do you finish every book you pick up? I don't. I put them down if they don't reach a certain threshold of engagingness &c. The bigger the pile of books next to me, the pickier I can be: I can hold out for perfect 10s instead of sitting through lots of 8's because I can only get so many things out of the library at once. This includes pickiness for things other than "quality". If I want to go on a binge of mediocre YA paranormal romance (I did, a few months ago), I am fully equipped to find only the half-dozen most-Ali...
I do in fact feel attacked by the suggestion that huge swaths of things valuable to me are worthless and ought to be done away with!
Unless you enjoy being outraged at a low threshold by something outside your control, this is a trait that you should be dissatisfied with and attempt to modify, not something to be stated as immovable fact. I note, however, that acting like that trait is an immovable fact makes for more favorable status dynamics and a better emotion-bargaining position...
Unless you enjoy being outraged at a low threshold by something outside your control, this is a trait that you should be dissatisfied with and attempt to modify
Does not follow. I prefer to feel in ways that reflect the world around me. As long as I also think this sort of thing is an attack, feeling that way is in accord with that preference whether it makes me happier or not. As long as I don't care to occupy a pushover role where I make myself okay with whatever happens to be going on so that people don't have to account for my values, drawing a line beyond which I will not self-modify makes perfect sense; and in fact I do not want to occupy that pushover role.
I note, however, that acting like that trait is an immovable fact makes for more favorable status dynamics and a better emotion-bargaining position...
I derive some of my status from cultivating the ability to modify myself as I please; I'd actually sacrifice some of that if I declared this unchangeable. And I do not declare it unchangeable! I just have other values than happiness.
There are some aspects of maps - for example, edges, blank spots, and so on, that seem, if not necessary, extremely convenient to keep as part of the map. However, if you use these features of a map in the same way that you use most features of a map - to guide your actions - then you will not be guided well. There's something in the sequences like "the world is not mysterious" about people falling into the error of moving from blank/cloudy spots on the map to "inherently blank/cloudy" parts of the world.
The slogan "the map is not ...
Anna Salamon and I usually apply the Tarski Method by visualizing a world that is not-how-we'd-like or not-how-we-previously-believed, and ourselves as believing the contrary, and the disaster that would then follow.
I find just that description really, really useful. I knew about the Litany of Tarski (or Diax's Rake, or believing something just because you wanted it to be true) and have the habit of trying to preemptively prevent it. But that description makes it a lot easier to grok it at a gut level.
When I was trying to solve the koan I focused on a few interrelated subproblems of skill one. It seems like this sort of thinking is particularly useful for reminding yourself to consider the outside view and/or the difference between confidence levels inside and outside an argument.
Also, I think the koan left out something pretty important.
Under what circumstances, if any, is it harmful to consciously think of the distinction between the map and the territory - to visualize your thought bubble containing a belief, and a reality outside it, rather than ju...
Then he'd probably ignore Alicorn's scornful comment
Yes. It would probably also involve expressing agreement with part of what Alicorn said (ideally part that he could sincerely agree with) and perhaps paraphrasing another part back with an elaboration. That seems to work sometimes.
I don't think gwern's required to turn the other cheek, and you obviously don't think you are so required, either.
No, I don't (where all the negatives add up to agreement with this quote). That is just what would earn him immense respect for social grace (and plain grace).
It's important to distinguish "The map is not the territory" from "The map is not a perfect representation of the territory."
The major difference is that beliefs cannot easily be used as direct or indirect concrete objects; I cannot look in my belief of what's in the basket and find (or not find) a marble. I cannot test my beliefs by experimentation to find if they correspond to reality; I must test reality to find if my beliefs correspond to it.
If my socks will stain, I want to believe my socks will stain; If my socks won't stain, I don't want to believe my socks will stain; Let me not become attached to beliefs I may not want.
That was beautiful. I will definitely keep that mantra in mind.
Error: the mainstream-status link at the bottom of the post links back to the post itself instead of to the comments.
Sometimes it still amazes me to contemplate that this proverb was invented at some point (...) to me this phrase sounds like a sheer background axiom of existence.
Because "the map is not the territory" is applied atheism. To a theist, the map in god's mind caused the territory to happen, so that map is even more real than the territory. And every human map is as accurate as it approaches the primordial divine map, the fact that it also happens to predict the terrain merely being a nice bonus. Even Einstein believed this. To invent "the map...
This time, I wrote down my answer to the koan: the basic idea was correct, but there weren't as many examples of subskills as Eliezer listed.
...It helps to realize that there may be mistakes in the process of constructing a map, and that you may need to correct them. If there is a problem where it's important to be right, like when figuring out whether you should invest in a company, or if you are feeling bad about your life and wonder whether it's justified, you need to be able to make the map-territory distinction in order to evaluate the accuracy of your
With apologies for being so late to the party, I'm somewhat perplexed by a post entitled "The Map is Not the Territory" that then dismisses the originator with a pithy, "...some fellow named Korzybski..." Given that the site deals with AI/ML and that Korzybski is also credited with developing General Semantics (full of implications for AI) I'm guessing this apparently pithy dismissal belies an appreciation for Korzybski hidden elsewhere. I could be wrong tho.
Under what circumstances is it helpful to consciously think of the distinction between the map and the territory
I thought about this before reading the rest of the post, and came up with: "When I find myself surprised by something." Surprise may indicate that something improbable has happened, but may also indicate an error in my estimation of what's probable. Given that the observation appears improbable to begin with (or I wouldn't be surprised), I should suspect the map first.
I find myself to be particularly susceptible to the pitfalls avoided by skill 4. I'll have to remember to explicitly invoke the Tarski method next time I find myself in the act of attempting to fool myself.
One scenario not listed here in which I find it particularly useful to explicitly think about my own map is in cases where the map is blurry (e.g. low precision knowledge: "the sun will set some time between 5pm and 7pm") or splotchy (e.g. explicit gaps in my knowledge: "I know where the red and blue cups are, but not the green cup"). When I bring my map's flaws explicitly into my awareness, it allows me to make plans which account for the uncertainty of my knowledge, and come up with countermeasures.
one of only two explicitly rationalist fictions I know of that didn't descend from HPMOR, the other being "David's Sling" by Marc Stiegler
You might consider Mark Clifton's novel "Eight Keys to Eden" (1960) as another rationalist fiction (though it's more debatable). Available from Gutenberg at http://www.gutenberg.org/ebooks/27595
I don't think gwern's required to turn the other cheek
Someone like you can tell me in all apparent seriousness that Alicorn slapped first, but that doesn't make it so.
Now it may be that her edited comments contained emotional attacks with nonstop profanity, starting before the linked comment. But the record only shows her apologizing and getting the contemptuous line I quoted in response.
Followup to: The Useful Idea of Truth (minor post)
So far as I know, the first piece of rationalist fiction - one of only two explicitly rationalist fictions I know of that didn't descend from HPMOR, the other being "David's Sling" by Marc Stiegler - is the Null-A series by A. E. van Vogt. In van Vogt's story, the protagonist, Gilbert Gosseyn, has mostly non-duplicable abilities that you can't pick up and use even if they're supposedly mental - e.g. the ability to use all of his muscular strength in emergencies, thanks to his alleged training. The main explicit-rationalist skill someone could actually pick up from Gosseyn's adventure is embodied in his slogan:
"The map is not the territory."
Sometimes it still amazes me to contemplate that this proverb was invented at some point, and some fellow named Korzybski invented it, and this happened as late as the 20th century. I read van Vogt's story and absorbed that lesson when I was rather young, so to me this phrase sounds like a sheer background axiom of existence.
But as the Bayesian Conspiracy enters into its second stage of development, we must all accustom ourselves to translating mere insights into applied techniques. So:
Meditation: Under what circumstances is it helpful to consciously think of the distinction between the map and the territory - to visualize your thought bubble containing a belief, and a reality outside it, rather than just using your map to think about reality directly? How exactly does it help, on what sort of problem?
...
...
...
Skill 1: The conceivability of being wrong.
In the story, Gilbert Gosseyn is most liable to be reminded of this proverb when some belief is uncertain; "Your belief in that does not make it so." It might sound basic, but this is where some of the earliest rationalist training starts - making the jump from living in a world where the sky just is blue, the grass just is green, and people from the Other Political Party just are possessed by demonic spirits of pure evil, to a world where it's possible that reality is going to be different from these beliefs and come back and surprise you. You might assign low probability to that in the grass-is-green case, but in a world where there's a territory separate from the map it is at least conceivable that reality turns out to disagree with you. There are people who could stand to rehearse this, maybe by visualizing themselves with a thought bubble, first in a world like X, then in a world like not-X, in cases where they are tempted to entirely neglect the possibility that they might be wrong. "He hates me!" and other beliefs about other people's motives seem to be a domain in which "I believe that he hates me" or "I hypothesize that he hates me" might work a lot better.
Probabilistic reasoning is also a remedy for similar reasons: Implicit in a 75% probability of X is a 25% probability of not-X, so you're hopefully automatically considering more than one world. Assigning a probability also inherently reminds you that you're occupying an epistemic state, since only beliefs can be probabilistic, while reality itself is either one way or another.
Skill 2: Perspective-taking on beliefs.
What we really believe feels like the way the world is; from the inside, other people feel like they are inhabiting different worlds from you. They aren't disagreeing with you because they're obstinate, they're disagreeing because the world feels different to them - even if the two of you are in fact embedded in the same reality.
This is one of the secret writing rules behind Harry Potter and the Methods of Rationality. When I write a character, e.g. Draco Malfoy, I don't just extrapolate their mind, I extrapolate the surrounding subjective world they live in, which has that character at the center; all other things seem important, or are considered at all, in relation to how important they are to that character. Most other books are never told from more than one character's viewpoint, but if they are, it's strange how often the other characters seem to be living inside the protagonist's universe and to think mostly about things that are important to the main protagonist. In HPMOR, when you enter Draco Malfoy's viewpoint, you are plunged into Draco Malfoy's subjective universe, in which Death Eaters have reasons for everything they do and Dumbledore is an exogenous reasonless evil. Since I'm not trying to show off postmodernism, everyone is still recognizably living in the same underlying reality, and the justifications of the Death Eaters only sound reasonable to Draco, rather than having been optimized to persuade the reader. It's not like the characters literally have their own universes, nor is morality handed out in equal portions to all parties regardless of what they do. But different elements of reality have different meanings and different importances to different characters.
Joshua Greene has observed - I think this is in his Terrible, Horrible, No Good, Very Bad paper - that most political discourse rarely gets beyond the point of lecturing naughty children who are just refusing to acknowledge the evident truth. As a special case, one may also appreciate internally that being wrong feels just like being right, unless you can actually perform some sort of experimental check.
Skill 3: You are less bamboozleable by anti-epistemology or motivated neutrality which explicitly claims that there's no truth.
This is a negative skill - avoiding one more wrong way to do it - and mostly about quoted arguments rather than positive reasoning you'd want to conduct yourself. Hence the sort of thing we want to put less emphasis on in training. Nonetheless, it's easier not to fall for somebody's line about the absence of objective truth, if you've previously spent a bit of time visualizing Sally and Anne with different beliefs, and separately, a marble for those beliefs to be compared-to. Sally and Anne have different beliefs, but there's only one way-things-are, the actual state of the marble, to which the beliefs can be compared; so no, they don't have 'different truths'. A real belief (as opposed to a belief-in-belief) will feel true, yes, so the two have different feelings-of-truth, but the feeling-of-truth is not the territory.
To rehearse this, I suppose, you'd try to notice this kind of anti-epistemology when you ran across it, and maybe respond internally by actually visualizing two figures with thought bubbles and their single environment. Though I don't think most people who understood the core insight would require any further persuasion or rehearsal to avoid contamination by the fallacy.
Skill 4: World-first reasoning about decisions a.k.a. the Tarski Method aka Litany of Tarski.
Suppose you're considering whether to wash your white athletic socks with a dark load of laundry, and you're worried the colors might bleed into the socks, but on the other hand you really don't want to have to do another load just for the white socks. You might find your brain selectively rationalizing reasons why it's not all that likely for the colors to bleed - there's no really new dark clothes in there, say - trying to persuade itself that the socks won't be ruined. At which point it may help to say:
"If my socks will stain, I want to believe my socks will stain;
If my socks won't stain, I don't want to believe my socks will stain;
Let me not become attached to beliefs I may not want."
To stop your brain trying to persuade itself, visualize that you are either already in the world where your socks will end up discolored, or already in the world where your socks will be fine, and in either case it is better for you to believe you're in the world you're actually in. Related mantras include "That which can be destroyed by the truth should be" and "Reality is that which, when we stop believing in it, doesn't go away". Appreciating that belief is not reality can help us to appreciate the primacy of reality, and either stop arguing with it and accept it, or actually become curious about it.
Anna Salamon and I usually apply the Tarski Method by visualizing a world that is not-how-we'd-like or not-how-we-previously-believed, and ourselves as believing the contrary, and the disaster that would then follow. For example, let's say that you've been driving for a while, haven't reached your hotel, and are starting to wonder if you took a wrong turn... in which case you'd have to go back and drive another 40 miles in the opposite direction, which is an unpleasant thing to think about, so your brain tries to persuade itself that it's not lost. Anna and I use the form of the skill where we visualize the world where we are lost and keep driving.
Note that in principle, this is only one quadrant of a 2 x 2 matrix:
Michael "Valentine" Smith says that he practiced this skill by actually visualizing all four quadrants in turn, and that with a bit of practice he could do it very quickly, and that he thinks visualizing all four quadrants helped.
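For concreteness, the full matrix of the Tarski visualization can be enumerated as follows, using the sock example from the post; the quadrant labels are my own illustration, not the post's wording:

```python
from itertools import product

# The four (belief, world) quadrants of the Tarski visualization.
beliefs = ["believe: socks will stain", "believe: socks won't stain"]
worlds = ["world: socks will stain", "world: socks won't stain"]

quadrants = {}
for belief, world in product(beliefs, worlds):
    # A belief is accurate exactly when it matches the world it is in.
    accurate = ("won't" in belief) == ("won't" in world)
    quadrants[(belief, world)] = "accurate belief" if accurate else "mistaken belief"

for (belief, world), verdict in quadrants.items():
    print(f"{belief} | {world} -> {verdict}")
```

The basic Tarski Method dwells on one mistaken-belief quadrant; Valentine's variant walks through all four in turn, two accurate and two mistaken.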
(Mainstream status here.)
Part of the sequence Highly Advanced Epistemology 101 for Beginners
Next post: "Rationality: Appreciating Cognitive Algorithms"
Previous post: "The Useful Idea of Truth"