
A Dialogue On Doublethink

Post author: BrienneStrohl 11 May 2014 07:38PM

Followup to: Against Doublethink (sequence), Dark Arts of Rationality, Your Strength as a Rationalist


Doublethink

It is obvious that the same thing will not be willing to do or undergo opposites in the same part of itself, in relation to the same thing, at the same time. --Book IV of Plato's Republic

Can you simultaneously want sex and not want it? Can you believe in God and not believe in Him at the same time? Can you be fearless while frightened?

To be fair to Plato, this was meant not as an assertion that such contradictions are impossible, but as an argument that the soul has multiple parts. It seems we can, in fact, want something while also not wanting it. This is awfully strange, and it led Plato to conclude the soul must have multiple parts, for surely no one part could contain both sides of the contradiction.

Often, when we attempt to accept contradictory statements as correct, it causes cognitive dissonance--that nagging, itchy feeling in your brain that won't leave you alone until you admit that something is wrong. Like when you try to convince yourself that staying up just a little longer playing 2048 won't have adverse effects on the presentation you're giving tomorrow, when you know full well that's exactly what's going to happen.

But it may be that cognitive dissonance is the exception in the face of contradictions, rather than the rule. How would you know? If it doesn't cause any emotional friction, the two propositions will just sit quietly together in your brain, never mentioning that it's logically impossible for both of them to be true. When we accept a contradiction wholesale without cognitive dissonance, it's what Orwell called "doublethink".

When you're a mere mortal trying to get by in a complex universe, doublethink may be adaptive. If you want to be completely free of contradictory beliefs without spending your whole life alone in a cave, you'll likely waste a lot of your precious time working through conundrums, which will often produce even more conundrums.

Suppose I believe that my husband is faithful, and I also believe that the unfamiliar perfume on his collar indicates he's sleeping with other women without my permission. I could let that pesky little contradiction turn into an extended investigation that may ultimately ruin my marriage. Or I could get on with my day and leave my marriage intact.

It's better to just leave those kinds of thoughts alone, isn't it? It probably makes for a happier life.

Against Doublethink

Suppose you believe that driving is dangerous, and also that, while you are driving, you're completely safe. As established in Doublethink, there may be some benefits to letting that mental configuration be.

There are also some life-shattering downsides. One of the things you believe is false, you see, by the law of non-contradiction. In point of fact, it's the one that goes "I'm completely safe while driving". Believing false things has consequences.

Be irrationally optimistic about your driving skills, and you will be happily unconcerned where others sweat and fear. You won't have to put up with the inconvenience of a seatbelt. You will be happily unconcerned for a day, a week, a year. Then CRASH, and spend the rest of your life wishing you could scratch the itch in your phantom limb. Or paralyzed from the neck down. Or dead. It's not inevitable, but it's possible; how probable is it? You can't make that tradeoff rationally unless you know your real driving skills, so you can figure out how much danger you're placing yourself in. --Eliezer Yudkowsky, Doublethink (Choosing to be Biased)

What are beliefs for? Please pause for ten seconds and come up with your own answer.

Ultimately, I think beliefs are inputs for predictions. We're basically very complicated simulators that try to guess which actions will cause desired outcomes, like survival or reproduction or chocolate. We input beliefs about how the world behaves, make inferences from them to which experiences we should anticipate given various changes we might make to the world, and output behaviors that get us what we want, provided our simulations are good enough.

My car is making a mysterious ticking sound. I have many beliefs about cars, and one of them is that if my car makes noises it shouldn't, it will probably stop working eventually, and possibly explode. I can use this input to simulate the future. Since I've observed my car making a noise it shouldn't, I predict that my car will stop working. I also believe that there is something causing the ticking. So I predict that if I intervene and stop the ticking (in non-ridiculous ways), my car will keep working. My belief has thus led to the action of researching the ticking noise, planning some simple tests, and will probably lead to cleaning the sticky lifters.

If it's true that solving the ticking noise will keep my car running, then my beliefs will cash out in correctly anticipated experiences, and my actions will cause desired outcomes. If it's false, perhaps because the ticking can be solved without addressing a larger underlying problem, then the experiences I anticipate will not occur, and my actions may lead to my car exploding.

Doublethink guarantees that you believe falsehoods. Some of the time you'll call upon the true belief ("driving is dangerous"), anticipate future experiences accurately, and get the results you want from your chosen actions ("don't drive three times the speed limit at night while it's raining"). But some of the time, if you actually believe the false thing as well, you'll call upon the opposite belief, anticipate inaccurately, and choose the last action you'll ever take.

Without any principled algorithm determining which of the contradictory propositions to use as an input for the simulation at hand, you'll fail as often as you succeed. So it makes no sense to anticipate more positive outcomes from believing contradictions.

Contradictions may keep you happy as long as you never need to use them. Should you call upon them, though, to guide your actions, the debt on false beliefs will come due. You will drive too fast at night in the rain, you will crash, you will fly out of the car with no seat belt to restrain you, you will die, and it will be your fault.

Against Against Doublethink

What if Plato was pretty much right, and we sometimes believe contradictions because we're sort of not actually one single person?

It is not literally true that Systems 1 and 2 are separate individuals the way you and I are. But the idea of Systems 1 and 2 suggests to me something quite interesting with respect to the relationship between beliefs and their role in decision making, and modeling them as separate people with very different personalities seems to work pretty darn well when I test my suspicions.

I read Atlas Shrugged probably about a decade ago. I was impressed with its defense of capitalism, which really hammers home the reasons it’s good and important on a gut level. But I was equally turned off by its promotion of selfishness as a moral ideal. I thought that was *basically* just being a jerk. After all, if there’s one thing the world doesn’t need (I thought) it’s more selfishness.

Then I talked to a friend who told me Atlas Shrugged had changed his life. That he’d been raised in a really strict family that had told him that ever enjoying himself was selfish and made him a bad person, that he had to be working at every moment to make his family and other people happy or else let them shame him to pieces. And the revelation that it was sometimes okay to consider your own happiness gave him the strength to stand up to them and turn his life around, while still keeping the basic human instinct of helping others when he wanted to and he felt they deserved it (as, indeed, do Rand characters). --Scott of Slate Star Codex in All Debates Are Bravery Debates

If you're generous to a fault, "I should be more selfish" is probably a belief that will pay off in positive outcomes should you install it for future use. If you're selfish to a fault, the same belief will be harmful. So what if you were too generous half of the time and too selfish the other half? Well, then you would want to believe "I should be more selfish" with only the generous half, while disbelieving it with the selfish half.

Systems 1 and 2 need to hear different things. System 2 might be able to understand the reality of biases and make appropriate adjustments that would work if System 1 were on board, but System 1 isn't so great at being reasonable. And it's not System 2 that's in charge of most of your actions. If you want your beliefs to positively influence your actions (which is the point of beliefs, after all), you need to tailor your beliefs to System 1's needs.

For example: The planning fallacy is nearly ubiquitous. I know this because for the past three years or so, I've gotten everywhere five to fifteen minutes early. Almost every single person I meet with arrives five to fifteen minutes late. It is very rare for someone to be on time, and only twice in three years have I encountered the (rather awkward) circumstance of meeting with someone who also arrived early.

Before three years ago, I was also usually late, and I far underestimated how long my projects would take. I knew, abstractly and intellectually, about the planning fallacy, but that didn't stop System 1 from thinking things would go implausibly quickly. System 1's just optimistic like that. It responds to, "Dude, that is not going to work, and I have a twelve point argument supporting my position and suggesting alternative plans," with "Naaaaw, it'll be fine! We can totally make that deadline."

At some point (I don't remember when or exactly how), I gained the ability to look at the true due date, shift my System 1 beliefs to make up for the planning fallacy, and then hide my memory that I'd ever seen the original due date. I would see that my flight left at 2:30, and be surprised to discover on travel day that I was not late for my 2:00 flight, but a little early for my 2:30 one. I consistently finished projects on time, and only disasters caused me to be late for meetings. It took me about three months before I noticed the pattern and realized what must be going on.

I got a little worried I might make a mistake, such as leaving a meeting thinking the other person just wasn't going to show when the actual meeting time hadn't arrived. I did have a couple close calls along those lines. But it was easy enough to fix; in important cases, I started receiving Boomeranged notes from past-me around the time present-me expected things to start that said, "Surprise! You've still got ten minutes!"

This unquestionably improved my life. You don't realize just how inconvenient the planning fallacy is until you've left it behind. Clearly, considered in isolation, the action of believing falsely in this domain was instrumentally rational.

Doublethink, and the Dark Arts generally, applied to carefully chosen domains is a powerful tool. It's dumb to believe false things about really dangerous stuff like driving, obviously. But you don't have to doublethink indiscriminately. As long as you're careful, as long as you suspend epistemic rationality only when it's clearly beneficial to do so, employing doublethink at will is a great idea.

Instrumental rationality is what really matters. Epistemic rationality is useful, but what use is holding accurate beliefs in situations where that won't get you what you want?

Against Against Against Doublethink

There are indeed epistemically irrational actions that are instrumentally rational, and instrumental rationality is what really matters. It is pointless to believe true things if doing so doesn't get you what you want. This has always been very obvious to me, and it remains so.

There is a bigger picture.

Certain epistemic rationality techniques are not compatible with dark side epistemology. Most importantly, the Dark Arts do not play nicely with "notice your confusion", which is essentially your strength as a rationalist. If you use doublethink on purpose, confusion doesn't always indicate that you need to find out what false thing you believe so you can fix it. Sometimes you have to bury your confusion. There's an itsy bitsy pause where you try to predict whether it's useful to bury.

As soon as I finally decided to abandon the Dark Arts, I began to sweep out corners I'd allowed myself to neglect before. They were mainly corners I didn't know I'd neglected.

The first one I noticed was the way I responded to requests from my boyfriend. He'd mentioned before that I often seemed resentful when he made requests of me, and I'd insisted that he was wrong, that I was actually happy all the while. (Notice that in the short term, since I was probably going to do as he asked anyway, attending to the resentment would probably have made things more difficult for me.) This self-deception went on for months.

Shortly after I gave up doublethink, he made a request, and I felt a little stab of dissonance. Something I might have swept away before, because it seemed more immediately useful to bury the confusion than to notice it. But I thought (wordlessly and with my emotions), "No, look at it. This is exactly what I've decided to watch for. I have noticed confusion, and I will attend to it."

It was very upsetting at first to learn that he'd been right. I feared the implications for our relationship. But that fear didn't last, because we both knew the only problems you can solve are the ones you acknowledge, so it is a comfort to know the truth.

I was far more shaken by the realization that I really, truly was ignorant that this had been happening. Not because the consequences of this one bit of ignorance were so important, but because who knows what other epistemic curses have hidden themselves in the shadows? I realized that I had not been in control of my doublethink, that I couldn't have been.

Pinning down that one tiny little stab of dissonance took great preparation and effort, and there's no way I'd been working fast enough before. "How often," I wondered, "does this kind of thing happen?"

Very often, it turns out. I began noticing and acting on confusion several times a day, where before I'd been doing it a couple times a week. I wasn't just noticing things that I'd have ignored on purpose before; I was noticing things that would have slipped by because my reflexes slowed as I weighed the benefit of paying attention. "Ignore it" was not an available action in the face of confusion anymore, and that was a dramatic change. Because there are no disruptions, acting on confusion is becoming automatic.

I can't know for sure which bits of confusion I've noticed since the change would otherwise have slipped by unseen. But here's a plausible instance. Tonight I was having dinner with a friend I've met very recently. I was feeling a little bit tired and nervous, so I wasn't putting as much effort as usual into directing the conversation. At one point I realized we had stopped making any progress toward my goals, since it was clear we were drifting toward small talk. In a tired and slightly nervous state, I imagine that I might have buried that bit of information and abdicated responsibility for the conversation--not by means of considering whether allowing small talk to happen was actually a good idea, but by not pouncing on the dissonance aggressively, and thereby letting it get away. Instead, I directed my attention at the feeling (without effort this time!), inquired of myself what precisely was causing it, identified the prediction that the current course of conversation was leading away from my goals, listed potential interventions, weighed their costs and benefits against my simulation of small talk, and said, "What are your terminal values?"

(I know that sounds like a lot of work, but it took at most three seconds. The hard part was building the pouncing reflex.)

When you know that some of your beliefs are false, and you know that leaving them be is instrumentally rational, you do not develop the automatic reflex of interrogating every suspicion of confusion. You might think you can do this selectively, but if you do, I strongly suspect you're wrong in exactly the way I was.

I have long been more viscerally motivated by things that are interesting or beautiful than by things that correspond to the territory. So it's not too surprising that toward the beginning of my rationality training, I went through a long period of being so enamored with a-veridical instrumental techniques--things like willful doublethink--that I double-thought myself into believing accuracy was not so great.

But I was wrong. And that mattered. Having accurate beliefs is a ridiculously convergent incentive. Every utility function that involves interaction with the territory--interaction of just about any kind!--benefits from a sound map. Even if "beauty" is a terminal value, "being viscerally motivated to increase your ability to make predictions that lead to greater beauty" increases your odds of success.

Dark side epistemology prevents total dedication to continuous improvement in epistemic rationality. Though individual dark side actions may be instrumentally rational, the patterns of thought required to allow them are not. Though instrumental rationality is ultimately the goal, your instrumental rationality will always be limited by your epistemic rationality.

That was important enough to say again: Your instrumental rationality will always be limited by your epistemic rationality.

It only takes a fraction of a second to sweep an observation into the corner. You don't have time to decide whether looking at it might prove problematic. If you take the time to protect your compartments, false beliefs you don't endorse will slide in from everywhere through those split-second cracks in your art. You must attend to your confusion the very moment you notice it. You must be relentless and unmerciful toward your own beliefs.

Excellent epistemology is not the natural state of a human brain. Rationality is hard. Without extreme dedication and advanced training, without reliable automatic reflexes of rational thought, your belief structure will be a mess. You can't have totally automatic anti-rationalization reflexes if you use doublethink as a technique of instrumental rationality.

This has been a difficult lesson for me. I have lost some benefits I'd gained from the Dark Arts. I'm late now, sometimes. And painful truths are painful, though now they are sharp and fast instead of dull and damaging.

And it is so worth it! I have much more work to do before I can move on to the next thing. But whatever the next thing is, I'll tackle it with far more predictive power than I otherwise would have--though I doubt I'd have noticed the difference.

So when I say that I'm against against against doublethink--that dark side epistemology is bad--I mean that there is more potential on the light side, not that the dark side has no redeeming features. Its fruits hang low, and they are delicious.

But the fruits of the light side are worth the climb. You'll never even know they're there if you gorge yourself in the dark forever.

Comments (105)

Comment author: Decius 11 May 2014 05:07:23PM 9 points [-]

Would an apt summary be "Expertly used Dark Side techniques have a high local maximum of instrumental rationality, but there is a region of higher instrumental rationality that involves epistemic rationality techniques that are incompatible with Dark Side techniques"?

Comment author: brazil84 08 May 2014 09:07:40AM 9 points [-]

Ultimately, I think beliefs are inputs for predictions

As Robin Hanson has pointed out, beliefs are also a way of showing something about oneself. Tribal membership, moral superiority, etc. A good Cimmerian believes in Crom, the grim gloomy unforgiving god.

Often, when we attempt to accept contradictory statements as correct, it causes cognitive dissonance--that nagging, itchy feeling in your brain that won't leave you alone until you admit that something is wrong.

My impression is that most people never admit that their beliefs are contradictory, instead they either lash out at whoever is bringing the contradictions to the forefront of their mind or start ignoring him.

But I was wrong. And that mattered. Having accurate beliefs is a ridiculously convergent incentive. Every utility function that involves interaction with the territory--interaction of just about any kind!--benefits from a sound map.

Can you give three examples of improvements in your life since your epiphany?

Comment author: TheAncientGeek 08 May 2014 09:38:57AM 7 points [-]

Caring about contradictions signals geekishness, which is generally undesirable.

Pointing out contradictions is generally seen as an attack, an attempt to lower status, rather than as something neutral or positive. Rationality and knowledge are high status as end states, for all that what you have to do to get there is seen as low status nerdishness.

The ultimate in high status is effortless omniscience, as displayed by James Bond, who always knows everything about everything from nuclear reactors to the international diamond trade without ever reading a book.

Comment author: Lumifer 08 May 2014 02:57:52PM 3 points [-]

signals geekishness, which is generally undesirable.

Um. Undesirable to whom and for what?

Comment author: TheAncientGeek 08 May 2014 03:23:07PM 6 points [-]

To most people for signaling purposes.

Comment author: Lumifer 08 May 2014 03:34:01PM 1 point [-]

I don't believe this to be true.

We might have a different concept of geekishness, though.

Comment author: TheAncientGeek 08 May 2014 04:12:57PM 1 point [-]

Who gets laughed at... Bill Gates or Warren Buffett?

Comment author: gjm 08 May 2014 04:27:55PM 6 points [-]

Neither, these days.

Comment author: tedks 08 May 2014 10:10:32PM *  -1 points [-]

How many derogatory memes (in the internet sense, pictures with words on them) exist about Warren Buffett compared to Bill Gates?

You can't deny that one of the two is easier to laugh at. You might believe this to be morally wrong or undesirable for other reasons, but it seems to be obviously true.

Comment author: Lumifer 08 May 2014 11:40:28PM 11 points [-]

Warren Buffett compared to Bill Gates? You can't deny that one of the two is easier to laugh at.

Let's throw in another non-geek/geek pair: Justin Bieber and Mark Zuckerberg.

You can't deny that one of the two is easier to laugh at.

Comment author: gjm 08 May 2014 11:21:04PM 3 points [-]

I suppose I must just not frequent the right corners of the internet, because I can't remember the last time I saw a derogatory internet-meme about either of them.

And the things I can recall Gates getting laughed at for are mostly geeky inside-baseball. For instance: "640K should be enough for anybody" -- he got laughed at for saying that (even though, so far as anyone can tell, he didn't ever actually say it) but mostly by other geeks. I've seen him admired for being very smart, admired for being a shrewd businessman and building a hugely valuable company, excoriated for giving the world a lot of bad software, hated for shady business tactics, laughed at for things other geeks find funny -- but I really can't think of any occasion I've witnessed where he's been laughed at for being geeky. Perhaps I just have friends and colleagues who are too geeky, and all these years the Normal People have been pointing and laughing at Bill Gates for being a geek?

I'm sure there's been a lot more said and done (positive and negative) about Gates than about Buffett, because Gates was founder and CEO of Microsoft -- a company whose products just about everyone in the Western world uses daily and many people have strong feelings about, and that engaged in sufficiently, ah, colourful business practices to get it into hot water with more than one large national government -- and Buffett, well, wasn't. Can you, off the top of your head, think of three things Microsoft has done that you feel strongly about? OK, now what about Berkshire Hathaway?

So, I dunno, maybe Gates is easier to laugh at because he's geekier, but it seems to me there are other more obvious explanations for any difference in laughed-at-ness.

Comment author: gwern 08 May 2014 11:41:00PM *  5 points [-]

And the things I can recall Gates getting laughed at for are mostly geeky inside-baseball. For instance: "640K should be enough for anybody" -- he got laughed at for saying that (even though, so far as anyone can tell, he didn't ever actually say it) but mostly by other geeks. I've seen him admired for being very smart, admired for being a shrewd businessman and building a hugely valuable company, excoriated for giving the world a lot of bad software, hated for shady business tactics, laughed at for things other geeks find funny -- but I really can't think of any occasion I've witnessed where he's been laughed at for being geeky.

The Simpsons comes to mind as mocking Gates for being geeky, and I'd suggest that Gates gets mocked more than Buffett (I struggle to think of anyone mocking Buffett except Bitcoiners recently after he criticized it); that said, Gates gets mocked a lot less these days than he did in the '90s, and your inability to think of many examples is due to the disappearance of '90s popular media, magazines, Usenet posts, /. comments, etc, from consciousness.

Comment author: Eugine_Nier 08 May 2014 11:11:00PM 3 points [-]

To be fair, I suspect a large number of the anti-Gates memes are by other geeks fighting the open/closed source holy war.

Comment author: gothgirl420666 09 May 2014 09:29:36PM 3 points [-]

Geeks have most likely absorbed the "geeks are lesser, should be laughed at" meme to a certain extent as well.

Comment author: blacktrance 08 May 2014 11:56:47PM 4 points [-]

Who's more prominent, Bill Gates or Warren Buffett? Yes, Bill Gates gets made fun of more, but he gets more attention in general.

Comment author: TheAncientGeek 09 May 2014 02:32:05PM 0 points [-]

If he gets mocked more, he would get more attention.

Comment author: Gunnar_Zarncke 09 May 2014 05:52:22AM 1 point [-]

Making fun of a high status person is a compensating action by low status people. Which person is made fun of depends more on the availability of trivia about that person than on their accomplishments (and geekiness surely is one such trivia). Also at the status high end the variance in any dimension is probably high.

Comment author: TheAncientGeek 09 May 2014 02:36:12PM 3 points [-]

Most trivia aren't funny. Simultaneous high and low status is funny. Dumb sports stars are another example.

Comment author: wedrifid 09 May 2014 05:48:38PM 1 point [-]

Making fun of a high status person is a compensating action by low status people.

(And even if it isn't you will tend to be well served by claiming that is what the behaviors mean. Because that is the side with the power.)

Comment author: army1987 11 May 2014 08:39:35AM 2 points [-]

Because that is the side with the power.

Which one do you mean, social power or structural power?

Comment author: Eugine_Nier 13 May 2014 12:12:42AM *  8 points [-]

I'm not sure I agree with Yvain's post.

One issue, with the abortion example:

Moldbug later uses the example of pro-lifers protesting abortion as an example of an unsympathetic and genuinely powerless cause. Yet as far as I can tell abortion protesters and Exxon Mobil protesters are treated more or less the same.

Well, there are laws limiting the ability of pro-life activists to protest outside abortion clinics. There are no analogous laws for Exxon Mobil.

His claim about how social power can't overcome structural power is dubious. Tell that to Mozilla co-founder Brendan Eich or GitHub co-founder Tom Preston-Werner. To be fair to Yvain both these incidents happened after the article was written and it appears he has at least moved in the direction of updating on them.

Also Yvain says:

Social power is much easier to notice than structural power, especially if you're not the one on the wrong end of the structural power.

This is pure BS. Structural power is very easy to notice, look at the org-chart. It is social power, as Yvain defines it, that is much harder to notice.

Comment author: wedrifid 12 May 2014 03:54:34PM 2 points [-]

Which one do you mean, social power or structural power?

I mean power. The ability to significantly influence decision relevant outcomes without excessive cost to self. The statement doesn't care where the power is derived and it would sacrifice meaning to make either substitution.

Comment author: [deleted] 30 May 2014 10:15:52PM 1 point [-]

If Bill Gates is mocked more than Warren Buffett, there are other, arguably more plausible reasons for this. Anecdotally, the most frequent cause I've encountered for criticizing or mocking Bill Gates is a dislike of his company, its products, practices, and prevalence.

Comment author: brazil84 08 May 2014 12:15:08PM 3 points [-]

The ultimate in high status is effortless omniscience, as displayed by James Bond, who always knows everything about everything from nuclear reactors to the international diamond trade without ever reading a book.

If James Bond wandered into a discussion of jewelry and started pontificating about the international diamond trade, I wonder if it would be seen as high status or low status.

Comment author: BrienneStrohl 09 May 2014 02:40:15AM 2 points [-]

Can you give three examples of improvements in your life since your epiphany?

Sure!

1) My conversations with friends are more efficiently illuminating. 2) I learn more quickly from mistakes. 3) I prevent more mistakes before they get the chance to happen.

If I hadn't given those examples, could you have predicted positive changes resulting from having generally more accurate beliefs? It really doesn't seem that surprising to me that someone's life would improve in a zillion different ways if they weren't wrong so much.

Comment author: brazil84 09 May 2014 07:43:51AM 4 points [-]

1) My conversations with friends are more efficiently illuminating. 2) I learn more quickly from mistakes. 3) I prevent more mistakes before they get the chance to happen.

Well can you give specific examples of mistakes you learned more quickly from and/or prevented? And can you give an example of some illumination you got more efficiently out of a conversation?

If I hadn't given those examples, could you have predicted positive changes resulting from having generally more accurate beliefs?

Not necessarily -- peoples' maps of reality tend to be pretty good when important personal interests are at stake. Perhaps a good Cimmerian believes, in theory, that if he dies then Crom will instantly take him to eternal paradise. But somehow that doesn't stop our Good Cimmerian from expending a lot of effort trying to stay alive, possibly including breaking some of Crom's rules.

Also, it costs mental energy to make your beliefs more accurate and there is no guarantee that it will be worth the trouble to do so.

Last, as mentioned above, beliefs serve other purposes besides being inputs for predictions.

Comment author: So8res 08 May 2014 05:07:44PM *  25 points [-]

Thanks for writing this!

I remain unconvinced. I agree with most of your points, and I think most of my disagreement stems from modeling my mind, the world, and/or 'dark techniques' in a different way than you do. I'd be happy to get together and try to converge sometime.

I do have one direct disagreement with the text, which is somewhat indicative of my more general disagreements.

Your instrumental rationality will always be limited by your epistemic rationality.

In my experience, many rationalists are motivation-limited, not accuracy-limited. I have met many people who are smarter than I am, who think faster than I do, who are better epistemic rationalists than I am---and who suffer greatly from akrasia or other stamina issues.

I seem to be quite good at achieving my goals. I am by no means convinced that this is due to some excess of willpower: my successes could alternatively be attributed to chance, genetics, self-delusion, or other factors. Even conditioned upon the assumption that my ability to avoid akrasia is a large part of my success, I am not convinced that my motivational techniques are the source of this ability.

However, I do see many "light-side" epistemic rationalists suffering from more akrasia than I do. In the real world, I am not convinced that epistemic rationality is enough. As such, I am cautious about removing motivational techniques in the name of the light.

(I also am under the impression that I can use my motivational techniques in such a way as to avoid many of the adverse effects you mention, which gets back to us modeling things differently. This is, of course, exactly what my brain would tell me, and the objection should largely be disregarded until we have a chance to converge.)

There is, of course, some degree to which the above argument only indicates my ability to find self-protecting arguments that I myself find convincing. This topic is somewhat emotionally laden for me, so next time I find a few spare hours I will spend them strongly considering whether I am wrong. However, after cursory examination, I don't expect any particular update.

Comment author: ChristianKl 09 May 2014 11:27:01AM 2 points [-]

Which particular motivation techniques do you use?

Comment author: So8res 09 May 2014 04:19:19PM 3 points [-]

There are many. I was particularly referring to the ones I discussed in the dark arts post, to which the above post is a followup.

Comment author: ChristianKl 09 May 2014 04:42:32PM 2 points [-]

Okay, I didn't remember that you were the person who wrote that post.

Comment author: tedks 08 May 2014 10:25:59PM -1 points [-]

It seems somewhat absurd to say that your ability to achieve goals is limited by the thingspace cluster we refer to as epistemic rationality. After all, caring too much about epistemic rationality leads to needing things like this.

Epistemic rationality seems like something you should care about when it matters to care about, and not care about when it doesn't matter. Like any other investment, the capital you invest should be proportional to your expected rate of return. Similarly, you should always be willing to sacrifice some epistemic cleanliness if it means winning. You can clean up the dark nasty corners of your mind on top of your pile of utility.

Comment author: So8res 09 May 2014 12:08:24AM 12 points [-]

I think the point Brienne made is that seemingly small tradeoffs of epistemic accuracy for instrumental power actually cost much more than you might expect. You can't pay a little epistemic accuracy for a lot of instrumental power, because epistemic rationality requires that you leave yourself no outs. If you sanction even one tiny exception, you lose the benefits of purity that you didn't even know were available.

Comment author: tedks 09 May 2014 03:38:21AM -1 points [-]

You're definitely paying for epistemic rationality with instrumental power if you spend all of your time contemplating metaethics so that you have a pure epistemic notion of what your goals are.

Humans start with effectively zero rationality. At some point, it becomes less winning to spend time gaining epistemic rationality than to spend time effecting your goals.

So, it seems like you can spend potential epistemic rationality for instrumental power by using time to effect change rather than becoming epistemically pure.

To respond to some of your later points:

Take a programming language like OCaml. OCaml supports mutable and immutable state, and you could write an analysis over OCaml that would indeed barf and die on the first instances of mutation. Mutable state does make it incredibly difficult, sometimes to the point of impossibility, to use conventional analyses on modern computers to prove facts about programs.

But this doesn't mean that a single ref cell destroys the benefits of purity in an OCaml program. To a human reading the program (which is really the most important use case), a single ref cell can be the best way to solve a problem, and the language's modularity can easily abstract it away so that the interface on the whole is pure.
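The pattern tedks describes translates to any language: a single piece of mutable state can hide behind an interface that looks pure from the outside. A toy sketch in Python rather than OCaml (the names here are mine, purely illustrative):

```python
def make_fib():
    """Return a fib function that is observationally pure: same input,
    same output, no visible side effects."""
    cache = {}  # the single piece of mutable state, invisible to callers

    def fib(n):
        if n in cache:
            return cache[n]
        value = n if n < 2 else fib(n - 1) + fib(n - 2)
        cache[n] = value
        return value

    return fib

fib = make_fib()
print(fib(10))  # -> 55
```

A caller cannot distinguish this from a genuinely pure function, which is the sense in which one internal mutation needn't destroy the benefits of purity.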

Similarly, I agree that it's important to strive for perfection, in all cases. But striving for perfection doesn't mean taking every available sacrifice for perfection. I can strive for epistemic perfection while still choosing locally to not improve my epistemic state. An AI might have a strict total ordering of terminal goals, but a human never will. So as a human, I can simultaneously strive for epistemic perfection and instrumental usefulness.

In any case, I still think there's a limit where the return on investment into epistemic rationality diminishes into nothingness, and I think that limit is much closer than most less wrongers think, primarily because what matters most isn't absolute rationality, but relative rationality in your particular social setting. You only need to be more able to win than everyone you compete with; becoming more able to win without actually winning is not only a waste of time, but actively harmful. It's better to win two battles than to waste time overpreparing. Overfocusing on epistemic rationality ignores the opportunity cost of neglecting to use your arts for something outside themselves.

Comment author: Lumifer 09 May 2014 12:56:09AM -1 points [-]

If you sanction even one tiny exception, you lose the benefits of purity

What is that "purity" you're talking about? I didn't realize humans could achieve epistemic perfection.

Comment author: So8res 09 May 2014 01:12:23AM 10 points [-]

Keep in mind here that I'm steelmanning someone else's argument, perhaps improperly. I don't want to put words in anyone else's mouth. That said, I used the term 'purity' in loose analogy to a 'pure' programming language, wherein one exception is sufficient to remove much of the possible gains.

Continuing the steelmanning, however, I'd say that while no human can achieve epistemic perfection, there's a large class of epistemic failures that you only recognize if you're striving for perfection. Striving for purity, not purity itself, is what gets you the gains.

Comment author: BrienneStrohl 09 May 2014 02:34:10AM 9 points [-]

So8res, you're completely accurate in your interpretation of my argument. I'm going to read some more of your previous posts before responding much to your first comment here.

Comment author: Eugine_Nier 09 May 2014 01:38:07AM 6 points [-]

Yes, as Eliezer put it somewhat dramatically here:

If you once tell a lie, the truth is ever after your enemy.

To expand on this in context, as long as you are striving for the truth any evidence you come across helps you, but once you choose to believe a lie you must forever avoid dis-confirming evidence.

Comment author: fezziwig 09 May 2014 07:41:33PM 3 points [-]

You've drawn an important distinction, between believing a lie and telling one. Right now we're talking about lying to ourselves so the difference isn't very great, but be very careful with that quote in general.

Comment author: fezziwig 09 May 2014 07:47:45PM 2 points [-]

You've drawn an important distinction, between believing a lie and telling one. Your formulation is correct, but Eliezer's is wrong.

Comment author: Eugine_Nier 12 May 2014 02:02:47AM 7 points [-]

Telling a lie has its own problems, as I discuss here.

Comment author: fezziwig 12 May 2014 08:08:10PM 1 point [-]

Yes, it's pretty much impossible to tell a lie without hurting other people, or at least interfering with them; that's the point of lying, after all. But right now we're talking about the harm one does to oneself by lying; I submit that there needn't be any.

Comment author: Eugine_Nier 12 May 2014 09:57:44PM 1 point [-]

Did you even read the comment I linked to? Its whole point was about the harm you do to yourself and your cause by lying.

Comment author: Armok_GoB 14 May 2014 12:15:17AM *  1 point [-]

One distinction (I don't know if it matters, but many discussions fail to mention it at all) is that between telling a lie and maintaining it/keeping the secret. Many of the epistemic arguments seem to disappear if you've previously made it clear you might lie to someone, you intend to tell the truth a few weeks down the line, and if pressed or questioned you confess and tell the actual truth rather than try to cover it with further lies.

Edit: also, have some kind of oath and special circumstance where you will in fact never lie, but precommit to only use it for important things or give it a cost in some way so you won't be pressed to give it for everything.

Comment author: BrienneStrohl 09 May 2014 10:25:49PM 3 points [-]

I can already predict, though, that much of my response will include material from here and here.

Comment author: Lumifer 09 May 2014 01:35:34AM 0 points [-]

in loose analogy to a 'pure' programming language, wherein one exception is sufficient to remove much of the possible gains.

Could you give some examples?

there's a large class of epistemic failures that you only recognize if you're striving for perfection.

I am not sure which class you're talking about... again, can you provide some examples?

Comment author: army1987 08 May 2014 04:00:11PM 12 points [-]

You'd better remove Scott's real last name from your post before search engines index it, because he doesn't want it to be easy to find his blog given his full name.

Comment author: BrienneStrohl 08 May 2014 06:15:41PM 11 points [-]

done. sorry, didn't know.

Comment author: gjm 09 May 2014 10:15:47AM 1 point [-]

Agreed, but unless a bunch of other things that have been there for ages get removed it's never going to take much effort to do. (E.g., Scott might want to look for ways to make typing in his real name not produce the top Google result that it currently does.)

Comment author: army1987 09 May 2014 03:31:19PM *  3 points [-]

Raikoth.net only links to his old blog. AFAICT none of the results in the first page allow someone who doesn't already know where his new blog is to find it in less than five minutes, and he isn't trying to make it impossible, only inconvenient.

(Edit: OTOH raikoth.net does show his new pen name, and someone might google it, but I don't think it'd occur to Aunt Tillie to do so.)

Comment author: gjm 09 May 2014 07:01:01PM *  1 point [-]

Your edit explains exactly what I had in mind.

[EDITED to fix a typo.]

Comment author: Swimmer963 10 May 2014 10:37:36AM *  4 points [-]

Thoughts on this:

Obviously it's possible to want multiple things and believe multiple things. My mind, at least, is best approximated as a society of sub-agents rather than as a single unified self. I think "System 1 vs System 2" is already too much of an approximation–my System 1 definitely isn't unified, and even my System 2 doesn't agree on a single set of beliefs.

Can you simultaneously want sex and not want it?

Yes, and even large amounts of luminosity haven't made this divide go away. I used to not want sex because it was unpleasant, but want to want it because it was a way to profess love and, damn it, I wanted to do that. The not-wanting-sex happened on a more basic, less endorsed level, leading to weird mental resistance and frustration whenever I overrode it and had sex anyway because it was a thing I ought to do. I now do almost the opposite–I listen to my System 1 instincts and don't have sex, but I'm not totally happy with this state of affairs. There's good evidence that humans can't change their sexual orientations, so I've accepted it for now, but if that status quo changed, I would have some rethinking to do, and might press a button to make it different. These are different 'file formats' of belief–System 2 verbal beliefs don't automatically propagate into System 1 visceral urges–but they're nevertheless contradictory, and years of thinking about and paying a lot of attention to the issue hasn't allowed me to resolve that.

Another example: I want kids. By that, I mean that seeing a baby makes me feel all warm and fuzzy inside; that I daydream about it; that the first thought that comes when I see or learn many things is "I'm going to teach this to my kids!" I'm also fairly sure that having kids now is not the correct thing to do. It may not be the correct thing to do for a few years. In this case, System 2 rules win out, while System 1 whispers quietly in the background that why don't I have a baby already, and hey, you could put up with some unpleasantness and have a baby in nine months. I'm sure as hell not going to change my System 1, but there is or is not an instrumentally rational thing to do, and what my System 1 wants is only a small part of the calculation. So, if all the other variables push me in the other direction, I might end up not having kids for a long time–and having a mental contradiction for the same length of time.

Is this inevitable? Maybe, maybe not. But it certainly seems to be the default, even for people who spend a lot of time thinking about their beliefs.

Comment author: dhasenan 13 May 2014 05:05:23AM 3 points [-]

So I predict that if I intervene and stop the ticking (in non-ridiculous ways), my car will keep working.

Nitpick: "ridiculous" is relative to your goals here. A slightly better wording might be "fix the root cause of the ticking".

Comment author: Kawoomba 08 May 2014 05:36:16PM 3 points [-]

Well, I for one am confused much of the time, and whenever I encounter someone who ostensibly isn't, I get nervous. Believing falsehoods isn't just the domain of dark artisans, it comes courtesy of having a brain. My belief that "most of my beliefs must have large error bounds" probably has among my lowest error bounds; I'm surest about being unsure.

I do wonder if convincing oneself of having given up deluding oneself isn't the greatest dark side achievement of all -- after all, how would you know it's not? Maybe you got tricked by your System 0.

But I'm being contrarian. Good post overall. I guess the metric I'd prefer in terms of belief improvement is "number of times I've noticed my confusion and bent myself to accept what I perceive, instead of bending my perceptions to myself". More of an engineering approach, still allowing for a few holy cows that don't get slaughtered (without overly compromising your overall strength as a rationalist).

Comment author: ChristianKl 08 May 2014 01:26:09PM 3 points [-]

When you know that some of your beliefs are false, and you know that leaving them be is instrumentally rational, you do not develop the automatic reflex of interrogating every suspicion of confusion.

Noticing confusion is about noticing your feelings and reacting towards them. Acknowledging your feelings and thinking about their causes is useful whether the feeling is confusion, anger or fear.

Comment author: drethelin 03 June 2014 08:24:41PM 3 points [-]

Also joy! And happiness! Noticing what kinds of things leave me joyful has been helpful for me.

Comment author: Viliam_Bur 08 May 2014 02:02:48PM 9 points [-]

Being wrong about something may harm you in the long term. Being right when others are wrong can get you killed right now.

Not sure how exactly this relates to the article (maybe it doesn't), but I feel weird when this obvious part is missing from a debate about instrumental rationality. As if there is just me and the universe, and if I have the correct beliefs, the universe will reward me, and if I have incorrect beliefs, the universe will punish me, on average. Therefore, let's praise the universe and let's have correct beliefs! I agree that if I were a Robinson on an empty island, trying to have correct beliefs would probably be the best way. But most people are not in this situation.

It is a great privilege to live in the time and space when having the right beliefs doesn't get you killed immediately. It probably contributes to our epistemic rationality more than anything else. And I enjoy it, a lot! But it doesn't mean that the social punishments are gone completely. Even in the same country, different people live in different situations, so probably an important strategic move in becoming more rational is to navigate yourself into situations where the punishment for having correct beliefs is smaller. If you can't... then you play by the more complex rules; the outcomes of epistemic rationality may be smaller, and you might need some dose of Dark Arts just to survive. (And by the way, this is the situation we are optimized for by evolution.)

Uhm... not sure where I wanted to get by saying this. I guess I wanted to say that "epistemic rationality is the best way to win" depends on the environment. In theory, you could have epistemically correct beliefs and yet behave in public according to other people's wrong beliefs and expectations; but I think this is rather difficult for a human.

Comment author: ChristianKl 08 May 2014 08:48:28PM 11 points [-]

Having correct beliefs and telling people about them are two separate things.

Comment author: JTHM 09 May 2014 02:43:28AM *  10 points [-]

Lying constantly about what you believe is all well and good if you have Professor Quirrell-like lying skills and your conscience doesn't bother you if you lie to protect yourself from others' hostility to your views. I myself lie effortlessly, and felt not a shred of guilt when, say, I would hide my atheism to protect myself from the hostility of my very anti-anti-religious father (he's not a believer himself, he's just hostile to atheism for reasons which elude me).

Other people, however, are not so lucky. Some people are obliged to publicly profess belief of some sort or face serious reprisals, and also feel terrible when they lie. Defiance may not be feasible, so they must either use Dark Side Epistemology to convince themselves of what others demand they be convinced, or else be cursed with the retching pain of a guilty conscience.

If you've never found yourself in such a situation, lucky you. But realize that you have it easy.

Comment author: brazil84 10 May 2014 12:11:22PM 6 points [-]

Lying constantly about what you believe is all well and good if you have Professor Quirrell-like lying skills and your conscience doesn't bother you if you lie to protect yourself from others' hostility to your views.

Even then, it's more cognitively demanding to lie. It's like running a business with two sets of books -- the set you show to the IRS and the set you actually use to run the business. It may save you a lot in taxes but you still have to spend double the time keeping your books.

Comment author: TheAncientGeek 10 May 2014 03:18:18PM *  0 points [-]

Agreeing with the people around you isn't demanding. And most people don't need to maintain any "true" beliefs about politics, religion and philosophy. They butter no parsnips in practice, and parsnip-buttering beliefs are not varied or unpredictable enough for ingroup signalling purposes.

Comment author: brazil84 10 May 2014 05:01:29PM 3 points [-]

Agreeing with the people around you isn't demanding.

I would say it depends on whether you really agree with them or not. If you believe X and you are surrounded by people who believe Y, and you need to conceal your belief in X, then you constantly have to be asking yourself "what would someone who believes in Y do or say?"

And most people don't need to maintain any "true" beliefs about politics, religion and philosophy.

I'm not sure what it means to "maintain 'true' beliefs." If you go through life, you will naturally develop a mental model (at least one, I suppose) of how the world works. If that model contains an Almighty Creator, then you are a theist. If it doesn't, then you are an atheist. Perhaps there is a third possibility, that your model is uncertain on this point, making you an agnostic.

If you are an atheist or an agnostic, and you are in a time or place where everyone is expected to be a theist, especially anyone who wants to get ahead in life, then that's a potential problem. Agreed?

Comment author: TheAncientGeek 11 May 2014 11:02:54AM *  1 point [-]

I believe your first point is answered by my second.

You don't need mental models involving God or not god for any practical purpose .. other than solidarity with your community.

If you are one of the people, typical on LW but not in the population at large, who like to have beliefs on the "big" but practically unimportant questions, you will find dissimulation difficult. If not, not.

Comment author: brazil84 11 May 2014 06:59:11PM 4 points [-]

You don't need mental models involving God or not god for any practical purpose .. other than solidarity with your community.

I disagree with that. For example, suppose you are hunting in the woods and you find 10 gold coins. According to your village elders, Crom the grim gloomy unforgiving god commands that you donate any such windfall to the Village Shrine to Crom, and that to do so will guarantee you eternal paradise. And that to fail to do so will guarantee eternal damnation.

If your mental model of the universe includes Crom the grim gloomy unforgiving god, then of course you will make the donation. Otherwise you are likely to keep the windfall to yourself. Of course a decision must be made.

Of course you might object that those days are gone, that nobody is expected to follow religious precepts anymore, at least not in the United States. And I would disagree with that too. In today's United States, you must still decide where to live and whom to do business with. Does your mental model of the universe include the fact that certain groups are more prone to crime and disruptive behavior than others? If so, you would be wise to have a rationalization in mind for why you don't want to live anywhere near such groups. Or at least a few euphemisms.

Anyway, please answer my question from before:

If you are an atheist or an agnostic, and you are in a time or place where everyone is expected to be a theist, especially anyone who wants to get ahead in life, then that's a potential problem. Agreed?

Comment author: army1987 12 May 2014 11:19:56AM 1 point [-]

Otherwise you are likely to keep the windfall to yourself.

Unless your model of the world includes people ostracizing you for doing so.

Comment author: brazil84 12 May 2014 12:36:38PM 4 points [-]

Unless your model of the world includes people ostracizing you for doing so.

I completely agree, but you are kinda fighting the hypothetical here.

Comment author: TheAncientGeek 11 May 2014 11:18:44AM *  5 points [-]

A man was telling one of his friends the secret of his contented married life: "My wife makes all the small decisions," he explained, "and I make all the big ones, so we never interfere in each other's business and never get annoyed with each other. We have no complaints and no arguments." "That sounds reasonable," answered his friend sympathetically. "And what sort of decisions does your wife make?" "Well," answered the man, "she decides what jobs I apply for, what sort of house we live in, what furniture we have, where we go for our holidays, and things like that." His friend was surprised. "Oh?" he said. "And what do you consider important decisions then?" "Well," answered the man, "I decide who should be Prime Minister, whether we should increase our help to poor countries, what we should do about the atom bomb, and things like that."

Comment author: ChristianKl 11 May 2014 09:05:54PM 2 points [-]

You don't need mental models involving God or not god for any practical purpose .. other than solidarity with your community.

Oaths do work as a commitment device if you think that the God on whom you swear is real and will punish you. No automatic tracking like Beeminder, but still a decent alternative.

Comment author: army1987 12 May 2014 11:21:07AM 1 point [-]

But such punishment is even further away in time than staying fat or failing the exam, so if the latter can't motivate you to diet or study...

Comment author: ChristianKl 09 May 2014 11:25:31AM 5 points [-]

Not talking about religion, politics and sex is a position that's acceptable in many places.

Being an atheist is also an identity label. You don't need an identity label to have accurate beliefs. If you label yourself as an atheist, then you will feel uncomfortable participating in certain religious rituals when your family expects you to be at church.

If you just don't believe, the ritual becomes a silly game that won't make you uncomfortable.

Comment author: NancyLebovitz 03 June 2014 11:25:44AM 1 point [-]

How skillful you need to be at lying depends on the culture you're in and the personalities of the people you're surrounded by.

Some cultures leave a lot of room for hypocrisy.

Comment author: christopherj 16 May 2014 04:32:38AM 1 point [-]

I myself lie effortlessly, and felt not a shred of guilt when, say, I would hide my atheism to protect myself from the hostility of my very anti-anti-religious father (he's not a believer himself, he's just hostile to atheism for reasons which elude me).

Hm, an atheist who hides his atheism, from his father who also seems to be an atheist (aka non-believer) but acts hostile towards atheists? Just out of curiosity, do you also act hostile towards atheists when you're around him?

Comment author: fezziwig 08 May 2014 07:17:56PM *  9 points [-]

I think you've identified a special case of a more general problem, which is that true beliefs do not have equal value, and that their values can vary wildly with your circumstances. To borrow blacktrance's example: if you're living in 6th-century Rome then it's useful to know that Jews aren't inherently evil...but it's more useful to know what happens to people who say so. And if you don't know how to profess that Jews are inherently evil without being corrupted by that lie, then it's more important to learn that than it is to believe true things about Jews.

This discipline, of predicting the value of information before you've learned it, is very difficult. For me, it's the most difficult thing. But it's also the center of the art; if it weren't, we could all level up endlessly by browsing Wikipedia.

Comment author: nydwracu 09 May 2014 06:29:06AM 4 points [-]

The concept is called 'ketman' -- that term was popularized by Czeslaw Miłosz, who wrote about its practice under Communism.

I'm not sure if the pressure comes from lying per se -- it's not as if the practice is recent or uncommon -- or from having no place to go where you can escape the necessity to lie. Dalrymple was on to something when he said that the purpose of forcing public profession of the official idea under Communism was to humiliate; any place to tell the truth is a blow against the regime's goal of humiliation. Underground acts of non-public defiance aren't a new concept.

Secret societies aren't a new concept either; they don't seem to be as common anymore as they once were (but then again, how would I know?), but that's because they've been replaced by open but obscure/anonymous pseudosocieties online.

But there's a problem with the act of practicing ketman and going underground. Say you get n utility from having a secret society or similar, having an outlet to assert the truth outside the watch of the authority demanding that you lie -- but you'd get n^2 utility from getting the official lies dethroned. But you'd lose a great deal of utility if you got caught not believing in the lie.

That's a difficult coordination problem, since you clearly can't dethrone the official idea yourself. Perhaps it is deserving of study.

Comment author: Viliam_Bur 10 May 2014 07:11:04PM 6 points [-]

I'm not sure if the pressure comes from lying per se -- it's not as if the practice is recent or uncommon -- or from having no place to go where you can escape the necessity to lie.

I believe it's the latter. On emotional level, if I can't speak openly with a person, I have a feeling like they "don't belong to my tribe", they are a stranger. There is a difference between being sometimes with strangers, and being alone among strangers, all the time.

It is much easier to have clear rules about when to use my "public" face, and when to relax and be myself. Using my "public" face increases my internal pressure; I need a place to talk about it and relax. If I don't have that place, then I will lose attention in random moments, and expose my internal heresies. It is easier to keep control, if I have clear boundaries for when the hypocrisy begins and when it ends.

Having just one person to talk honestly with already helps a lot. (I am too tired to google now, but there is probably some article on LW about how the first voice of dissent is most important.) It is much easier for me to think if I can talk. Talking makes my thought processes much clearer. Not having a sane person to talk with is like not having a part of your brain, or for a more realistic analogy, like being drunk or exhausted all the time.

Comment author: NancyLebovitz 03 June 2014 11:29:26AM 3 points [-]

The recent history of getting homosexuality mainstreamed is an interesting example.

Comment author: MugaSofer 10 May 2014 06:38:04PM *  5 points [-]

As if there is just me and the universe, and if I have the correct beliefs, the universe will reward me, and if I have incorrect beliefs, the universe will punish me, on average. Therefore, let's praise the universe and let's have correct beliefs!

It is just you and the universe. "Other people" are a part of the universe.

(I actually kind of agree with you, though - the larger point is that your beliefs can impact outcomes directly rather than only via predictions. A non-sentient example of this would be Placebo effects. This seems not to have been included in the OP's discussion.)

Comment author: blacktrance 08 May 2014 04:53:50PM *  4 points [-]

Having correct beliefs does not mean expressing them. If I traveled back in time to medieval Rome, I would still believe that Jews aren't inherently evil and that Christ did not rise from the dead, but it would be unwise for me to be too public about those beliefs.

Comment author: Eugine_Nier 08 May 2014 11:17:34PM *  4 points [-]

Nitpick: my understanding is that even in medieval Rome a lot of people didn't consider Jews inherently evil. At least to the extent that they were willing to engage in business dealings with them.

Comment author: JoshuaFox 10 May 2014 05:47:05PM 2 points [-]

cache out -> cash out.

Not that I want to nitpick spelling, but "cached thoughts" and "cashing out your beliefs" are both used for different things.

Comment author: moridinamael 09 May 2014 02:23:08PM 2 points [-]

So, it seems like there's been an upswing in interest regarding meditation around here recently. I mention this because in this article Brienne advocates for several mental habits, such as catching yourself having millisecond-scale mental events and arresting or reversing them, or dispassionately watching yourself being uncomfortable and then acting on that discomfort in an effectively dissociated fashion. I have done exactly the same thing where I've suggested in a post that the solution to somebody's problem was to simply execute a highly specific mental contortion, with the "how" of it left as an exercise to the reader. Plug for MarkL's excellent meditation blog.

If I were to be honored with a seat on the Less Wrong High Council, I would probably lobby for some kind of short daily meditative practice to be incorporated into our dogma. Aside from various peer-reviewed health benefits, I can anecdotally report that mindfulness meditation trains exactly the type of command-and-control abilities Brienne is describing.

Comment author: NancyLebovitz 03 June 2014 11:48:31AM 1 point [-]

I've believed that some thoughts are hard to notice because they happen quickly, but now I'm wondering whether it's not so much that the thoughts are fast as that blanking the thoughts out of consciousness is what happens quickly.

Your "mental events" would include at least both the thoughts and the blanking out process. Have you noticed a blanking out process, and if so, what did you notice about it?

Comment author: CronoDAS 08 May 2014 08:42:17AM 3 points [-]

It's not that hard to accidentally believe a contradiction, since we're not logically omniscient and "consistency checking" is a computationally intractable problem except in simple cases. Proving that an arbitrary sentence of propositional logic isn't a contradiction is an NP-complete problem, and human beliefs are more complicated than statements in propositional logic.
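The exponential blowup behind that NP-completeness claim is easy to make concrete. Below is an illustrative sketch (the `consistent` helper and the example beliefs are mine, not from the comment): consistency checking by brute force tries every truth assignment, so the cost doubles with each added proposition.

```python
from itertools import product

def consistent(beliefs, variables):
    """Return True if some truth assignment satisfies every belief.

    Brute force: tries all 2**n assignments, so the cost doubles with
    each extra proposition -- the blowup behind NP-completeness.
    """
    for values in product([False, True], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if all(belief(assignment) for belief in beliefs):
            return True   # a model exists: no contradiction
    return False          # unsatisfiable: the beliefs contradict

# Three beliefs: "p", "p implies q", "not q" -- jointly contradictory.
beliefs = [
    lambda a: a["p"],
    lambda a: (not a["p"]) or a["q"],
    lambda a: not a["q"],
]
print(consistent(beliefs, ["p", "q"]))      # False
print(consistent(beliefs[:2], ["p", "q"]))  # True
```

With only two variables the contradiction is obvious; with a few hundred beliefs over a few hundred propositions, the search space is astronomically large, which is the comment's point.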

Comment author: brazil84 09 May 2014 11:45:19PM 2 points [-]

It's not that hard to accidentally believe a contradiction, since we're not logically omniscient and "consistency checking" is a computationally intractable problem except in simple cases.

I agree with you in theory, but in practice there are plenty of contradictions which are pretty darned obvious. That is, the limiting factor in human rationality seems to be the human tendency toward self-deception and hypocrisy, rather than the computational difficulty of finding contradictions.

Comment author: shminux 08 May 2014 03:06:30PM *  3 points [-]

Against Against Against Doublethink

What, only 3 levels-deep meta? This is like approximating e^x with only 1+x+x^2/2+x^3/6. Back to the drawing board.

Comment author: Benito 08 May 2014 06:29:45PM 1 point [-]

According to this analogy, we previously thought e to be 2. Now it's 2.6 recurring. We're making progress, of a sort.

Comment author: itaibn0 10 May 2014 10:58:26PM 0 points [-]

Not if what you're trying to calculate is e^(-5).
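The joke can be checked numerically. A minimal sketch (the `taylor_exp` helper is illustrative): near x = 1 a four-term partial sum is already decent, but at x = -5 the alternating terms are large and the early partial sums are wildly wrong.

```python
import math

def taylor_exp(x, n_terms):
    """Partial sum of the Taylor series for e**x around 0."""
    return sum(x**k / math.factorial(k) for k in range(n_terms))

print(taylor_exp(1, 4))   # 2.666..., vs e = 2.718...
print(taylor_exp(-5, 4))  # -12.333..., vs e**-5 = 0.0067...
```

With enough terms the series does converge for any x, but for e^(-5) the partial sums swing through large positive and negative values first, so "more meta levels" only helps if you go deep enough.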

Comment author: Kawoomba 08 May 2014 10:01:04PM 1 point [-]
Comment author: TheAncientGeek 08 May 2014 09:15:23AM -1 points [-]

Belief is for many things, including signaling.

Instrumental rationality and epistemic rationality aren't the same. Epistemic rationality seeks to maximise knowledge, truth, and consistency. Instrumental rationality seeks to maximise efficiency, gain, and personal utility. One area where they come apart is signalling, the implicit and explicit ways we tell others what kind of person we are. The instrumentally rational way to signal is to maximise your utility by sending out agreeable signals to whichever individual or group you happen to need something from. This Vicar-of-Bray style behaviour will lead to your making highly inconsistent statements in the limit. If you want to signal sincerity, you will need to believe them too. So you will end up with inconsistent beliefs. So IR + signalling is inconsistent with ER.

Comment author: Eugine_Nier 08 May 2014 11:15:00PM 4 points [-]

Of course the more times you switch sides, the harder it becomes for anyone to take your sincerity seriously.

Comment author: TheAncientGeek 10 May 2014 03:08:45PM 2 points [-]

Assuming you're found out.

If you are scrutinised, in different situations, by someone who cares about consistency, the benefit of inconsistent signalling vanishes. And no one is scrutinised more than a politician in a healthy democracy. People read reports of politicians contradicting themselves and being inconsistent, and infer that politicians are unusually hypocritical.

But absence of evidence is not evidence of absence. The ordinary person's hypocrisy is not publicised because the ordinary person does not have reporters following them around. The ordinary person typically moves in a number of fairly disjoint circles -- the workplace, family, same-sex friends, and so on -- signalling different loyalties to each. The existence of Chinese walls is even humorously acknowledged: "what happens in X stays in X".

Inconsistency reaches a peak when communicating with completely unconnected individuals and groups. My go-to example is a telesales operative who would ring various people during the course of a day and agree with every word they said. Her customers were of course unknown to each other and in no position to compare notes.

Comment author: Eugine_Nier 11 May 2014 03:40:00AM 2 points [-]

Well, in the example you cited, the Vicar of Bray, one is dealing with the kind of religious fanatics who are likely to have low tolerance for hypocrisy and may very well do some investigation into one's history.

Comment author: nydwracu 09 May 2014 06:33:37AM 3 points [-]

This Vicar-of-Bray style behaviour will lead to your making highly inconsistent statements in the limit.

Will it?

Consider the regime of the official idea. Under certain regime structures, its direction of development is as obvious as its current state, and its current state is obvious. That is, there's one group that you consistently need something from, and the only inconsistencies arise from its idea-drift over time -- which can be predicted with a good deal of accuracy.

Comment author: TheAncientGeek 09 May 2014 12:38:00PM 0 points [-]

Where is this regime ....?

Comment author: nydwracu 14 May 2014 06:15:14PM 2 points [-]

It's a type, a pattern. I don't mean to single out any particular regime. I suspect there are instances of this type, but I'll leave that as an exercise for the reader.

Comment author: TheAncientGeek 14 May 2014 06:18:02PM 0 points [-]

I suspect that there aren't instances, hence my question.

Comment author: drethelin 03 June 2014 08:20:00PM *  1 point [-]

This reminds me of the studies that found that "releasing stress" via punching pillows and screaming only trained you to respond to stressful situations in violent ways, rather than actually having beneficial effects. Training is a question of learning to do a conscious activity unconsciously, and training yourself in dark side methods is making your reliance on and use of falsities unconscious.

Comment author: Gunnar_Zarncke 29 May 2014 05:00:00PM 1 point [-]

Finally got around to reading this completely. Great exposition.

[T]oward the beginning of my rationality training, I went through a long period of being so enamored with a-veridical instrumental techniques [...] that I double-thought myself into believing accuracy was not so great. But I was wrong. And that mattered. Having accurate beliefs is a ridiculously convergent incentive.

It reminds me of Wittgenstein's ladder: you seem to have climbed the ladder through practical rationality and dark arts, and now no longer need them explicitly. You have unconscious competence.

I wonder if it is just coincidence that there are also Four Stages of Competence.

Comment author: TobyBartels 12 May 2014 05:53:16AM 1 point [-]

Would you advocate never using Dark Side techniques, or are these techniques reasonable in some situations, even though, once you become a real master at rationality, they have to be left behind?

(Before I studied the Dark Arts, truth was truth and lies were lies. While I studied the Dark Arts, truth was not truth and lies were not lies. After I studied the Dark Arts, truth was truth and lies were lies.)

Comment author: Viliam_Bur 12 May 2014 08:07:30AM 4 points [-]

Just a sidenote: studying is not the same as using. It is worth studying Dark Arts techniques so you recognize when other people are trying to use them against you.

Comment author: TobyBartels 14 May 2014 10:41:37PM 0 points [-]

True indeed. Although in this case, even the using would be using them on yourself.

Comment author: tristanhaze 01 October 2014 12:09:41PM 0 points [-]

I just want to say that the title of this post is fantastic, and in a deep sort of mathy way, beautiful. It's probably usually not possible, but I love it when an appropriate title - especially a nice not-too-long one - manages to contain, by itself, so much intellectual interest. Even just seeing that title listed somewhere could plant an important seed in someone's mind.

Comment author: christopherj 16 May 2014 05:33:27AM 0 points [-]

It is pretty much a necessity that humans will believe contradictory things, if only because consistency-checking each new belief against each of your current beliefs is impossibly difficult. Cognitive dissonance won't occur if the contradiction is so obscure that you haven't noticed it, or perhaps wouldn't even understand exactly how a belief contradicts a set of 136 other beliefs even if it were explained to you. Even if you could check for contradictions, your values change drastically from one hour to the next (how much you value food, water, company, solitude, leisure, etc.), and that will change all your beliefs that start with "I want...". Most likely you actually have different bits of brain with different values vying for dominance.

Moreover, many times a belief is part of a group membership (e.g. "I support [cause]"), or simply feels good (e.g. "I am a good person"). People will not appreciate it if you point out contradictions in these things, possibly because they are instrumental and not epistemic beliefs. There is no doubt that professing contradictory beliefs can be highly beneficial (e.g. "Republicans are fiscally conservative, want small government, cut taxes, more money for the military, and enforcing morality" -- if you reject any of that you're not a viable candidate).