
A Dialogue On Doublethink

53 BrienneYudkowsky 11 May 2014 07:38PM

Followup to: Against Doublethink (sequence), Dark Arts of Rationality, Your Strength as a Rationalist


Doublethink

It is obvious that the same thing will not be willing to do or undergo opposites in the same part of itself, in relation to the same thing, at the same time. --Book IV of Plato's Republic

Can you simultaneously want sex and not want it? Can you believe in God and not believe in Him at the same time? Can you be fearless while frightened?

To be fair to Plato, this was meant not as an assertion that such contradictions are impossible, but as an argument that the soul has multiple parts. It seems we can, in fact, want something while also not wanting it. This is awfully strange, and it led Plato to conclude the soul must have multiple parts, for surely no one part could contain both sides of the contradiction.

Often, when we attempt to accept contradictory statements as correct, it causes cognitive dissonance--that nagging, itchy feeling in your brain that won't leave you alone until you admit that something is wrong. Like when you try to convince yourself that staying up just a little longer playing 2048 won't have adverse effects on the presentation you're giving tomorrow, when you know full well that's exactly what's going to happen.

But it may be that cognitive dissonance is the exception in the face of contradictions, rather than the rule. How would you know? If it doesn't cause any emotional friction, the two propositions will just sit quietly together in your brain, never mentioning that it's logically impossible for both of them to be true. When we accept a contradiction wholesale without cognitive dissonance, it's what Orwell called "doublethink".

When you're a mere mortal trying to get by in a complex universe, doublethink may be adaptive. If you want to be completely free of contradictory beliefs without spending your whole life alone in a cave, you'll likely waste a lot of your precious time working through conundrums, which will often produce even more conundrums.

Suppose I believe that my husband is faithful, and I also believe that the unfamiliar perfume on his collar indicates he's sleeping with other women without my permission. I could let that pesky little contradiction turn into an extended investigation that may ultimately ruin my marriage. Or I could get on with my day and leave my marriage intact.

It's better to just leave those kinds of thoughts alone, isn't it? It probably makes for a happier life.

Against Doublethink

Suppose you believe that driving is dangerous, and also that, while you are driving, you're completely safe. As established in Doublethink, there may be some benefits to letting that mental configuration be.

There are also some life-shattering downsides. One of the things you believe is false, you see, by the law of non-contradiction. In point of fact, it's the one that goes "I'm completely safe while driving". Believing false things has consequences.

Be irrationally optimistic about your driving skills, and you will be happily unconcerned where others sweat and fear. You won't have to put up with the inconvenience of a seatbelt. You will be happily unconcerned for a day, a week, a year. Then CRASH, and spend the rest of your life wishing you could scratch the itch in your phantom limb. Or paralyzed from the neck down. Or dead. It's not inevitable, but it's possible; how probable is it? You can't make that tradeoff rationally unless you know your real driving skills, so you can figure out how much danger you're placing yourself in. --Eliezer Yudkowsky, Doublethink (Choosing to be Biased)

What are beliefs for? Please pause for ten seconds and come up with your own answer.

Ultimately, I think beliefs are inputs for predictions. We're basically very complicated simulators that try to guess which actions will cause desired outcomes, like survival or reproduction or chocolate. We input beliefs about how the world behaves, make inferences from them to which experiences we should anticipate given various changes we might make to the world, and output behaviors that get us what we want, provided our simulations are good enough.

My car is making a mysterious ticking sound. I have many beliefs about cars, and one of them is that if my car makes noises it shouldn't, it will probably stop working eventually, and possibly explode. I can use this input to simulate the future. Since I've observed my car making a noise it shouldn't, I predict that my car will stop working. I also believe that there is something causing the ticking. So I predict that if I intervene and stop the ticking (in non-ridiculous ways), my car will keep working. My belief has thus led me to research the ticking noise and plan some simple tests, and it will probably lead to cleaning the sticky lifters.

If it's true that solving the ticking noise will keep my car running, then my beliefs will cash out in correctly anticipated experiences, and my actions will cause desired outcomes. If it's false, perhaps because the ticking can be solved without addressing a larger underlying problem, then the experiences I anticipate will not occur, and my actions may lead to my car exploding.

Doublethink guarantees that you believe falsehoods. Some of the time you'll call upon the true belief ("driving is dangerous"), anticipate future experiences accurately, and get the results you want from your chosen actions ("don't drive three times the speed limit at night while it's raining"). But some of the time, if you actually believe the false thing as well, you'll call upon the opposite belief, anticipate inaccurately, and choose the last action you'll ever take.

Without any principled algorithm determining which of the contradictory propositions to use as an input for the simulation at hand, you'll fail as often as you succeed. So it makes no sense to anticipate more positive outcomes from believing contradictions.

Contradictions may keep you happy as long as you never need to use them. Should you call upon them, though, to guide your actions, the debt on false beliefs will come due. You will drive too fast at night in the rain, you will crash, you will fly out of the car with no seat belt to restrain you, you will die, and it will be your fault.

Against Against Doublethink

What if Plato was pretty much right, and we sometimes believe contradictions because we're sort of not actually one single person?

It is not literally true that Systems 1 and 2 are separate individuals the way you and I are. But the idea of Systems 1 and 2 suggests to me something quite interesting with respect to the relationship between beliefs and their role in decision making, and modeling them as separate people with very different personalities seems to work pretty darn well when I test my suspicions.

I read Atlas Shrugged probably about a decade ago. I was impressed with its defense of capitalism, which really hammers home the reasons it’s good and important on a gut level. But I was equally turned off by its promotion of selfishness as a moral ideal. I thought that was *basically* just being a jerk. After all, if there’s one thing the world doesn’t need (I thought) it’s more selfishness.

Then I talked to a friend who told me Atlas Shrugged had changed his life. That he’d been raised in a really strict family that had told him that ever enjoying himself was selfish and made him a bad person, that he had to be working at every moment to make his family and other people happy or else let them shame him to pieces. And the revelation that it was sometimes okay to consider your own happiness gave him the strength to stand up to them and turn his life around, while still keeping the basic human instinct of helping others when he wanted to and he felt they deserved it (as, indeed, do Rand characters). --Scott of Slate Star Codex in All Debates Are Bravery Debates

If you're generous to a fault, "I should be more selfish" is probably a belief that will pay off in positive outcomes should you install it for future use. If you're selfish to a fault, the same belief will be harmful. So what if you were too generous half of the time and too selfish the other half? Well, then you would want to believe "I should be more selfish" with only the generous half, while disbelieving it with the selfish half.

Systems 1 and 2 need to hear different things. System 2 might be able to understand the reality of biases and make appropriate adjustments that would work if System 1 were on board, but System 1 isn't so great at being reasonable. And it's not System 2 that's in charge of most of your actions. If you want your beliefs to positively influence your actions (which is the point of beliefs, after all), you need to tailor your beliefs to System 1's needs.

For example: The planning fallacy is nearly ubiquitous. I know this because for the past three years or so, I've gotten everywhere five to fifteen minutes early. Almost every single person I meet with arrives five to fifteen minutes late. It is very rare for someone to be on time, and only twice in three years have I encountered the (rather awkward) circumstance of meeting with someone who also arrived early.

Before three years ago, I was also usually late, and I far underestimated how long my projects would take. I knew, abstractly and intellectually, about the planning fallacy, but that didn't stop System 1 from thinking things would go implausibly quickly. System 1's just optimistic like that. It responds to, "Dude, that is not going to work, and I have a twelve point argument supporting my position and suggesting alternative plans," with "Naaaaw, it'll be fine! We can totally make that deadline."

At some point (I don't remember when or exactly how), I gained the ability to look at the true due date, shift my System 1 beliefs to make up for the planning fallacy, and then hide my memory that I'd ever seen the original due date. I would see that my flight left at 2:30, and be surprised to discover on travel day that I was not late for my 2:00 flight, but a little early for my 2:30 one. I consistently finished projects on time, and only disasters caused me to be late for meetings. It took me about three months before I noticed the pattern and realized what must be going on.

I got a little worried I might make a mistake, such as leaving a meeting thinking the other person just wasn't going to show when the actual meeting time hadn't arrived. I did have a couple close calls along those lines. But it was easy enough to fix: in important cases, I started receiving Boomeranged notes from past-me, timed to arrive around when present-me expected things to start, that said, "Surprise! You've still got ten minutes!"

This unquestionably improved my life. You don't realize just how inconvenient the planning fallacy is until you've left it behind. Clearly, considered in isolation, the action of believing falsely in this domain was instrumentally rational.

Doublethink, and the Dark Arts generally, applied to carefully chosen domains is a powerful tool. It's dumb to believe false things about really dangerous stuff like driving, obviously. But you don't have to doublethink indiscriminately. As long as you're careful, as long as you suspend epistemic rationality only when it's clearly beneficial to do so, employing doublethink at will is a great idea.

Instrumental rationality is what really matters. Epistemic rationality is useful, but what use is holding accurate beliefs in situations where that won't get you what you want?

Against Against Against Doublethink

There are indeed epistemically irrational actions that are instrumentally rational, and instrumental rationality is what really matters. It is pointless to believe true things if doing so doesn't get you what you want. This has always been very obvious to me, and it remains so.

There is a bigger picture.

Certain epistemic rationality techniques are not compatible with dark side epistemology. Most importantly, the Dark Arts do not play nicely with "notice your confusion", which is essentially your strength as a rationalist. If you use doublethink on purpose, confusion doesn't always indicate that you need to find out what false thing you believe so you can fix it. Sometimes you have to bury your confusion. There's an itsy bitsy pause where you try to predict whether it's useful to bury it.

As soon as I finally decided to abandon the Dark Arts, I began to sweep out corners I'd allowed myself to neglect before. They were mainly corners I didn't know I'd neglected.

The first one I noticed was the way I responded to requests from my boyfriend. He'd mentioned before that I often seemed resentful when he made requests of me, and I'd insisted that he was wrong, that I was actually happy all the while. (Notice that in the short term, since I was probably going to do as he asked anyway, attending to the resentment would probably have made things more difficult for me.) This self-deception went on for months.

Shortly after I gave up doublethink, he made a request, and I felt a little stab of dissonance. Something I might have swept away before, because it seemed more immediately useful to bury the confusion than to notice it. But I thought (wordlessly and with my emotions), "No, look at it. This is exactly what I've decided to watch for. I have noticed confusion, and I will attend to it."

It was very upsetting at first to learn that he'd been right. I feared the implications for our relationship. But that fear didn't last, because we both knew the only problems you can solve are the ones you acknowledge, so it is a comfort to know the truth.

I was far more shaken by the realization that I really, truly was ignorant that this had been happening. Not because the consequences of this one bit of ignorance were so important, but because who knows what other epistemic curses have hidden themselves in the shadows? I realized that I had not been in control of my doublethink, that I couldn't have been.

Pinning down that one tiny little stab of dissonance took great preparation and effort, and there's no way I'd been working fast enough before. "How often," I wondered, "does this kind of thing happen?"

Very often, it turns out. I began noticing and acting on confusion several times a day, where before I'd been doing it a couple of times a week. I wasn't just noticing things that I'd have ignored on purpose before; I was noticing things that would have slipped by because my reflexes slowed as I weighed the benefit of paying attention. "Ignore it" was not an available action in the face of confusion anymore, and that was a dramatic change. Because nothing interrupts the reflex anymore, acting on confusion is becoming automatic.

I can't know for sure which bits of confusion I've noticed since the change would otherwise have slipped by unseen. But here's a plausible instance. Tonight I was having dinner with a friend I've met very recently. I was feeling a little bit tired and nervous, so I wasn't putting as much effort as usual into directing the conversation. At one point I realized we had stopped making any progress toward my goals, since it was clear we were drifting toward small talk. In a tired and slightly nervous state, I imagine that I might have buried that bit of information and abdicated responsibility for the conversation--not by means of considering whether allowing small talk to happen was actually a good idea, but by not pouncing on the dissonance aggressively, and thereby letting it get away. Instead, I directed my attention at the feeling (without effort this time!), inquired of myself what precisely was causing it, identified the prediction that the current course of conversation was leading away from my goals, listed potential interventions, weighed their costs and benefits against my simulation of small talk, and said, "What are your terminal values?"

(I know that sounds like a lot of work, but it took at most three seconds. The hard part was building the pouncing reflex.)

When you know that some of your beliefs are false, and you know that leaving them be is instrumentally rational, you do not develop the automatic reflex of interrogating every suspicion of confusion. You might think you can do this selectively, but if you do, I strongly suspect you're wrong in exactly the way I was.

I have long been more viscerally motivated by things that are interesting or beautiful than by things that correspond to the territory. So it's not too surprising that toward the beginning of my rationality training, I went through a long period of being so enamored with a-veridical instrumental techniques--things like willful doublethink--that I double-thought myself into believing accuracy was not so great.

But I was wrong. And that mattered. Having accurate beliefs is a ridiculously convergent incentive. Every utility function that involves interaction with the territory--interaction of just about any kind!--benefits from a sound map. Even if "beauty" is a terminal value, "being viscerally motivated to increase your ability to make predictions that lead to greater beauty" increases your odds of success.

Dark side epistemology prevents total dedication to continuous improvement in epistemic rationality. Though individual dark side actions may be instrumentally rational, the patterns of thought required to allow them are not. Though instrumental rationality is ultimately the goal, your instrumental rationality will always be limited by your epistemic rationality.

That was important enough to say again: Your instrumental rationality will always be limited by your epistemic rationality.

It only takes a fraction of a second to sweep an observation into the corner. You don't have time to decide whether looking at it might prove problematic. If you take the time to protect your compartments, false beliefs you don't endorse will slide in from everywhere through those split-second cracks in your art. You must attend to your confusion the very moment you notice it. You must be relentless and unmerciful toward your own beliefs.

Excellent epistemology is not the natural state of a human brain. Rationality is hard. Without extreme dedication and advanced training, without reliable automatic reflexes of rational thought, your belief structure will be a mess. You can't have totally automatic anti-rationalization reflexes if you use doublethink as a technique of instrumental rationality.

This has been a difficult lesson for me. I have lost some benefits I'd gained from the Dark Arts. I'm late now, sometimes. And painful truths are painful, though now they are sharp and fast instead of dull and damaging.

And it is so worth it! I have much more work to do before I can move on to the next thing. But whatever the next thing is, I'll tackle it with far more predictive power than I otherwise would have--though I doubt I'd have noticed the difference.

So when I say that I'm against against against doublethink--that dark side epistemology is bad--I mean that there is more potential on the light side, not that the dark side has no redeeming features. Its fruits hang low, and they are delicious.

But the fruits of the light side are worth the climb. You'll never even know they're there if you gorge yourself in the dark forever.

Don’t Apply the Principle of Charity to Yourself

49 UnclGhost 19 November 2011 07:26PM

In philosophy, the Principle of Charity is a technique in which you evaluate your opponent's position as if it made as much sense as possible given the wording of the argument. That is, if you could interpret your opponent's argument in multiple ways, you would go for the most reasonable version. This is a good idea for several reasons: it counteracts the illusion of transparency and correspondence bias; it makes you look gracious; if your opponent really does believe a bad version of the argument, sometimes he'll say so; and, most importantly, it helps you focus on getting to the truth rather than just trying to win a debate.

Recently I was in a discussion online, and someone argued against a position I'd taken. Rather than evaluating his argument, I looked back at the posts I'd made. I realized that my previous posts would be just as coherent if I'd written them while believing a position that was slightly different from my real one, so I replied to my opponent as if I had always believed the new position. There was no textual evidence that showed that I hadn't. In essence, I got to accuse my opponent of using a strawman regardless of whether or not he actually was. It wasn't until much later that I realized I'd applied the Principle of Charity to myself.

Now, this is bad for basically every reason that applying it to other people is good. You get undeserved status points for being good at arguing. You exploit the fact that your beliefs aren't transparent to others. It helps you win a debate rather than maintain consistent and true beliefs. And maybe worst of all, if you're good enough at getting away with it, no one knows you're doing it but you... and sometimes not even you.

As with most bad argument techniques, I wasn't aware I was doing this at a conscious level. I've probably been doing it for a long time but just didn't recognize it. I'd heard about not giving yourself too much credit, and not just trying to "win" arguments, but I had no idea I was doing both of those in this particular way. I think it's likely that this habit started from realizing that posting your opinion doesn't give people a temporary flash of insight and the ability to look into your soul and see exactly what you mean--all they have to go by is the words, and (what you hope are) connotations similar to your own. Once you've internalized this truth, be very careful not to abuse it and take advantage of the fact that people don't know that you don't always believe the best form of the argument.

It's also unfair to your opponent to make them think they've misunderstood your position when they haven't. If this happens enough, they could recalibrate their argument decoding techniques, when really they were accurate to start with, and you'll have made both of you that much worse at looking for the intended version of arguments.

Ideally, this would be frequently noticed, since you are in effect lying about a large construct of beliefs, and there's probably some inconsistency between the new version and your past positions on the subject. Unfortunately though, most people aren't going to go back and check nearly as many of your past posts as you just did. If you suspect someone's doing this to you, and you're reasonably confident you don't just think so because of correspondence bias, read through their older posts (try not to go back too far though, in case they've just silently changed their mind). If that fails, it's risky, but you can try to call them on it by asking about their true rejection.

How do you prevent yourself from doing this? If someone challenges your argument, don't look for ways to have (retroactively) been right all along. Say "Hm, I didn't think of that" to both yourself and your opponent, and then suggest the new version of your argument as a new version. You'll be more transparent to both yourself and your opponent, which is vital for actually gaining something from any debate.


tl;dr: If someone doesn't apply the Principle of Charity to you, and they're right, don't apply it to yourself--realize that you might just have been wrong.

How are critical thinking skills acquired? Five perspectives

9 matt 22 October 2010 02:29AM

Link to source: http://timvangelder.com/2010/10/20/how-are-critical-thinking-skills-acquired-five-perspectives/
Previous LW discussion of argument mapping: Argument Maps Improve Critical Thinking; Debate tools: an experience report

In "How are critical thinking skills acquired? Five perspectives", Tim van Gelder discusses the acquisition of critical thinking skills, suggesting several theories of skill acquisition that don't work, and one with which he and hundreds of his students have had significant success.

In our work in the Reason Project at the University of Melbourne we refined the Practice perspective into what we called the Quality (or Deliberate) Practice Hypothesis.   This was based on the foundational work of Ericsson and others who have shown that skill acquisition in general depends on extensive quality practice.  We conjectured that this would also be true of critical thinking; i.e. critical thinking skills would be (best) acquired by doing lots and lots of good-quality practice on a wide range of real (or realistic) critical thinking problems.   To improve the quality of practice we developed a training program based around the use of argument mapping, resulting in what has been called the LAMP (Lots of Argument Mapping) approach.   In a series of rigorous (or rather, as-rigorous-as-possible-under-the-circumstances) studies involving pre-, post- and follow-up testing using a variety of tests, and setting our results in the context of a meta-analysis of hundreds of other studies of critical thinking gains, we were able to establish that critical thinking skills gains could be dramatically accelerated, with students reliably improving 7-8 times faster, over one semester, than they would otherwise have done just as university students.   (For some of the detail on the Quality Practice hypothesis and our studies, see this paper, and this chapter.)

LW has been introduced to argument mapping before.

Subtext is not invariant under linear transformations

36 PhilGoetz 23 March 2010 03:49PM

You can download the audio and PDFs from the 2007 Cognitive Aging Summit in Washington DC here; they're good listening.  But I want to draw your attention to the graphs on page 6 of Archana Singh-Manoux's presentation.  It shows the "social gradient" of intelligence.  The X-axis is decreasing socioeconomic status (SES); the Y-axis is increasing performance on tests of reasoning, memory, phonemic fluency, and vocabulary.  Each graph shows a line sloping from the upper left (high SES, high performance) downwards and to the right.

Does anything leap out at you as strange about these graphs?

continue reading »

Fictional Evidence vs. Fictional Insight

31 Wei_Dai 08 January 2010 01:59AM

This is a response to Eliezer Yudkowsky's The Logical Fallacy of Generalization from Fictional Evidence and Alex Flint's When does an insight count as evidence? as well as komponisto's recent request for science fiction recommendations.

My thesis is that insight forms a category that is distinct from evidence, and that fiction can provide insight, even if it can't provide much evidence. To give some idea of what I mean, I'll list the insights I gained from one particular piece of fiction (published in 1992), which have influenced my life to a large degree:

  1. Intelligence may be the ultimate power in this universe.
  2. A technological Singularity is possible.
  3. A bad Singularity is possible.
  4. It may be possible to nudge the future, in particular to make a good Singularity more likely, and a bad one less likely.
  5. Improving network security may be one possible way to nudge the future in a good direction. (Side note: here are my current thoughts on this.)
  6. An online reputation for intelligence, rationality, insight, and/or clarity can be a source of power, because it may provide a chance to change the beliefs of a few people who will make a crucial difference.

So what is insight, as opposed to evidence? First of all, notice that logically omniscient Bayesians have no use for insight. They would have known all of the above without having observed anything (assuming they had a reasonable prior). So insight must be related to logical uncertainty, and a feature only of minds that are computationally constrained. I suspect that we won't fully understand the nature of insight until the problem of logical uncertainty is solved, but here are some of my thoughts about it in the mean time:

  • A main form of insight is a hypothesis that one hadn't previously entertained, but should be assigned a non-negligible prior probability.
  • An insight is kind of like a mathematical proof: in theory you could have thought of it yourself, but reading it saves you a bunch of computation.
  • Recognizing an insight seems easier than coming up with it, but still of nontrivial difficulty.

So a challenge for us is to distinguish true insights from unhelpful distractions in fiction. Eliezer mentioned people who let the Matrix and Terminator dominate their thoughts about the future, and I agree that we have to be careful not to let our minds consider fiction as evidence. But is there also some skill that can be learned, to pick out the insights, and not just to ignore the distractions?

P.S., what insights have you gained from fiction?

P.P.S., I guess I should mention the name of the book for the search engines: A Fire Upon the Deep by Vernor Vinge.

Why Real Men Wear Pink

51 Yvain 06 August 2009 07:39AM

"Fashion is a form of ugliness so intolerable we have to alter it every six months."

-- Oscar Wilde

For the past few decades, I and many other men my age have been locked in a battle with the clothing industry. I want simple, good-looking apparel that covers my nakedness and maybe even makes me look attractive. The clothing industry believes someone my age wants either clothing laced with profanity, clothing that objectifies women, clothing that glorifies alcohol or drug use, or clothing that makes them look like a gangster. And judging by the clothing I see people wearing, on the whole they are right.

I've been working my way through Steven Pinker's How The Mind Works, and reached the part where he quotes approvingly Quentin Bell's theory of fashion. The theory provides a good explanation for why so much clothing seems so deliberately outrageous.

continue reading »

31 Laws of Fun

34 Eliezer_Yudkowsky 26 January 2009 10:13AM

So this is Utopia, is it?  Well
I beg your pardon, I thought it was Hell.
        -- Sir Max Beerbohm, verse entitled
        In a Copy of More's (or Shaw's or Wells's or Plato's or Anybody's) Utopia

This is a shorter summary of the Fun Theory Sequence with all the background theory left out - just the compressed advice to the would-be author or futurist who wishes to imagine a world where people might actually want to live:

  1. Think of a typical day in the life of someone who's been adapting to Utopia for a while.  Don't anchor on the first moment of "hearing the good news".  Heaven's "You'll never have to work again, and the streets are paved with gold!" sounds like good news to a tired and poverty-stricken peasant, but two months later it might not be so much fun.  (Prolegomena to a Theory of Fun.)
  2. Beware of packing your Utopia with things you think people should do that aren't actually fun.  Again, consider Christian Heaven: singing hymns doesn't sound like loads of endless fun, but you're supposed to enjoy praying, so no one can point this out.  (Prolegomena to a Theory of Fun.)
  3. Making a video game easier doesn't always improve it.  The same holds true of a life.  Think in terms of clearing out low-quality drudgery to make way for high-quality challenge, rather than eliminating work.  (High Challenge.)
  4. Life should contain novelty - experiences you haven't encountered before, preferably teaching you something you didn't already know.  If there isn't a sufficient supply of novelty (relative to the speed at which you generalize), you'll get bored.  (Complex Novelty.)
continue reading »

She has joined the Conspiracy

9 Eliezer_Yudkowsky 13 January 2009 07:48PM

[Image: Kimiko]

I have no idea whether I had anything to do with this.

Say It Loud

31 Eliezer_Yudkowsky 19 September 2008 05:34PM

Reply to: Overconfidence is Stylish

I respectfully defend my lord Will Strunk:

"If you don't know how to pronounce a word, say it loud! If you don't know how to pronounce a word, say it loud!"  This comical piece of advice struck me as sound at the time, and I still respect it. Why compound ignorance with inaudibility?  Why run and hide?

How does being vague, tame, colorless, irresolute, help someone to understand your current state of uncertainty?  Any more than mumbling helps them understand a word you aren't sure how to pronounce?

Goofus says:  "The sky, if such a thing exists at all, might or might not have a property of color, but, if it does have color, then I feel inclined to state that it might be green."

Gallant says:   "70% probability the sky is green."

Which of them sounds more confident, more definite?

But which of them has managed to quickly communicate their state of uncertainty?

(And which of them is more likely to actually, in real life, spend any time planning and preparing for the eventuality that the sky is blue?)

continue reading »

Lawrence Watt-Evans's Fiction

23 Eliezer_Yudkowsky 15 July 2008 03:00AM

One of my pet topics, on which I will post more one of these days, is the Rationalist in Fiction.  Most of the time - it goes almost without saying - the Rationalist is done completely wrong.  In Hollywood, the Rationalist is a villain, or a cold emotionless foil, or a child who has to grow into a real human being, or a fool whose probabilities are all wrong, etcetera.  Even in science fiction, the Rationalist character is rarely done right - bearing the same resemblance to a real rationalist, as the mad scientist genius inventor who designs a new nuclear reactor in a month, bears to real scientists and engineers.

Perhaps this is because most speculative fiction, generally speaking, is interested in someone battling monsters or falling in love or becoming a vampire, or whatever, not in being rational... and it would probably be worse fiction, if the author tried to make that the whole story.  But that can't be the entire problem.  I've read at least one author whose plots are not about rationality, but whose characters are nonetheless, in passing, realistically rational.

That author is Lawrence Watt-Evans.  His work stands out for a number of reasons, the first being that it is genuinely unpredictable.  Not because of a postmodernist contempt for coherence, but because there are events going on outside the hero's story, just like real life.

With most authors, if they set up a fantasy world with a horrible evil villain and give their main character the one sword that can kill that villain, you could guess that, at the end of the book, the main character is going to kill the evil villain with the sword.

Not Lawrence Watt-Evans.  In a Watt-Evans book, it's entirely possible that the evil villain will die of a heart attack halfway through the book, then the character will decide to sell the sword because they'd rather have the money, and then the character uses the money to set up an investment banking company.

continue reading »
