Today's post, The Modesty Argument, was originally published on December 10, 2006. A summary (taken from the LW wiki):

Factor in what other people think, but not symmetrically, if they are not epistemic peers.


Discuss the post here (rather than in the comments to the original post).

This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was The Proper Use of Humility, and you can use the sequence_reruns tag or rss feed to follow the rest of the series.

Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.

13 comments

I think it's interesting how much of Eliezer's early writing focuses on being willing to take controversial stands, even when other people disagree.

Had I not been "coming out of the rationality closet" to my family and friends (as a result of college decision conversations) for the past few days, I would be confused about why this is.


I'm intrigued yet confused. What on earth do you mean by coming out of the rationality closet? Making a few well thought out decisions about college doesn't sound especially out there. Or did the coming out also involve confessing apostasy?

My parents already knew that I'm an atheist.

The issue comes when you start thinking weird things about what you're trying to do and how you're trying to do it, and when those weird things don't send particularly loyal signals.

The particular example I have in mind is the idea of rationality outreach. I explained that there's some amount that I can do, but that if I help people who have goals similar to mine pursue them more efficiently, and influence other people to have goals more similar to mine, then more utility will be generated over the long run than if I tried to do something myself. And that it's possible that Harvard people would be more likely to create utility in the long term.

Which just sounds icky to a lot of people. A lot of people. Reactions have ranged from "mumbo jumbo" to "I guess you're right, but eurgh" to "that's ballsy, and I don't know whether to laugh at you or admire you".

I ultimately compared it to lobbying (it's ugly, and you can try to create political change without it, but it's harder and it probably won't work), and summarized it as "helping the world by helping others help others". This seems to have worked.

It also interferes with my ability to signal loyalty via politics, since I disagree with most people's political statements, think discussing politics is nigh useless for actually changing anything, and can see people going for applause lights in conversation. Almost nobody seems to talk about the effectiveness of proposed policies as anything other than a right/wrong issue, and that spectrum seems to be pretty much uncorrelated with whether the policies actually work. Trying to talk about the economics of something gets slight glares, since you're arguing for the other team.

...so I just stay out of political conversations now.

Out of curiosity, if you had to label yourself politically, what would it be?

AAT (Aumann's Agreement Theorem) has two very important qualifiers that you have to include if you're discussing it. First, both individuals have to have the same priors. If I have some reason to believe that the other party and I have different priors, then my confidence that AAT applies will decrease as well. In the case of the creationist, I think it is at least likely that we have some different priors (Occam's razor, the nature of causality, etc.), so I won't be terribly surprised if AAT doesn't seem to work in this case. Second, there is that picky clause about "perfect Bayesians". If a creationist is not accurately updating on the evidence, then I shouldn't expect AAT to work. If you really want to check yourself with AAT, you could (assuming you have the same priors) determine what an ideal Bayesian would do with the evidence the other person has and check whether AAT holds then. That math seems like it would get really complicated really fast.
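
To make the "same priors" qualifier concrete, here is a minimal sketch of my own (the hypothesis, likelihoods, and evidence are made up for illustration): two ideal Bayesians who start from the same prior and update on the same pooled evidence land on exactly the same posterior, while the same update from different priors leaves them disagreeing.

```python
def posterior(prior_h, p_obs_given_h, p_obs_given_not_h, evidence):
    """Update P(H) on a sequence of independent yes/no observations via Bayes' rule."""
    p = prior_h
    for saw_it in evidence:
        like_h = p_obs_given_h if saw_it else 1 - p_obs_given_h
        like_not_h = p_obs_given_not_h if saw_it else 1 - p_obs_given_not_h
        numerator = like_h * p
        p = numerator / (numerator + like_not_h * (1 - p))
    return p

# Pooled evidence both parties end up with (True = observation favouring H).
evidence = [True, True, False, True]

# Same prior, same evidence: identical posteriors, no room to "agree to disagree".
print(posterior(0.50, 0.8, 0.3, evidence))  # agent A
print(posterior(0.50, 0.8, 0.3, evidence))  # agent B

# Different priors, identical evidence: the disagreement persists.
print(posterior(0.50, 0.8, 0.3, evidence))  # agent A, prior 0.50
print(posterior(0.05, 0.8, 0.3, evidence))  # agent B, prior 0.05
```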

With regard to the superintelligence/schizophrenics question: we can all agree that the AI would actually get the wrong answer if it inferred, "I think I am a superintelligence, therefore I probably am a schizophrenic". However, every schizophrenic with delusions of deification would draw the right conclusion. If I were trying to code a superintelligence, I wouldn't deliberately write the AI to do this, but if anyone ever reads this who thinks that they are a superintelligence, maybe you should try to test that claim, just to be sure. Write down some predictions and test how accurate you actually are.
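
If anyone does want to run that self-test, here is a minimal sketch (the predictions and outcomes are hypothetical) of scoring written-down forecasts with the Brier score, where 0 is perfect and 0.25 is what always saying "50%" would earn:

```python
# Each entry: (stated probability the event happens, whether it actually happened).
predictions = [
    (0.9, True),
    (0.7, True),
    (0.8, False),
    (0.6, True),
]

# Brier score: mean squared gap between stated probability and outcome (1 or 0).
brier = sum((p - (1.0 if happened else 0.0)) ** 2
            for p, happened in predictions) / len(predictions)
print(f"Brier score: {brier:.3f}")
```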

On a side note, I've often wondered what would happen if a schizophrenic were taught the methods of rationality, or a rationalist developed schizophrenia. Have any such cases ever occurred? Is it something we should try to test?

Not schizophrenia, but reading LW has had untypeably immense positive effects on my mental health, which was rotten to the point that I'm very confident there wouldn't exist a "me" one way or another if I hadn't.

I'd be surprised if it didn't help with schizophrenia as well.

I am curious about the details of how LW had those immense positive effects, Armok.

The obvious, boring way: removing delusions and granting the tools and the will for gradual self-improvement.

And perhaps most importantly, realizing that being insane was a bad thing and that I should do something about it.

I was inspired by the later scenes in A Beautiful Mind, where Nash was still hallucinating as he went about his day, but chose to just ignore the visions of people he knew were not real.

That movie was very interesting. The scene that caught my attention the most was when he realized the little girl couldn't be real because she never aged.

I wonder what would happen if you went to a psych ward and started teaching a schizophrenic patient the scientific method? Not specifically related to their visions, but just about natural phenomena. Would they be able to shake off their delusions?

I think a better test would be to teach rationality to people prone to developing schizophrenia, and then see whether it helps those who do go on to develop it. It would also be much easier to teach rationality before the onset of schizophrenia.

Absolutely we should run that test, and I suspect it would help. The experiment I proposed, however, was motivated more by the question, "would it be possible to teach rationality to someone who cannot trust their own perceptions, and in fact may not yet realize that their perceptions are untrustworthy?" Is rationality genuinely not possible in that case? Or is it possible to give them enough rational skills to recover from the deepest-set delusions humans can have?


People affected by Charles Bonnet syndrome, according to Wikipedia, are often sane and able to recognize their hallucinations as hallucinations.