lukeprog comments on Why We Can't Take Expected Value Estimates Literally (Even When They're Unbiased) - Less Wrong

Post author: HoldenKarnofsky 18 August 2011 11:34PM




Comment author: Eliezer_Yudkowsky 18 August 2011 11:56:58PM 51 points

Quick comment one:

This jumped out instantly when I looked at the charts: Your prior and evidence can't possibly both be correct at the same time. Everywhere the prior has non-negligible density has negligible likelihood. Everywhere that has substantial likelihood has negligible prior density. If you try multiplying the two together to get a compromise probability estimate instead of saying "I notice that I am confused", I would hold this up as a pretty strong example of the real sin that I think this post should be arguing against, namely that of trying to use math too blindly without sanity-checking its meaning.
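The incompatibility Eliezer points to can be checked numerically. As a sketch under hypothetical assumptions (the distributions and numbers below are illustrative, not taken from the post's actual charts): if both the prior and the evidence are modeled as normal distributions, the marginal probability of observing the estimate given the prior measures how compatible they are, and a vanishingly small value is exactly the "I notice that I am confused" signal rather than license to multiply them together.

```python
import math

def normal_pdf(x, mu, sigma):
    """Density of a normal distribution N(mu, sigma^2) at x."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

# Hypothetical numbers: a prior over an intervention's (log) cost-effectiveness,
# and an evidence estimate centered far outside the prior's support.
prior_mu, prior_sigma = 0.0, 1.0   # prior: N(0, 1)
est, est_sigma = 20.0, 1.0         # estimate: N(20, 1)

# For conjugate normals, the marginal probability of seeing this estimate
# if the prior is right is N(est; prior_mu, prior_sigma^2 + est_sigma^2).
marginal = normal_pdf(est, prior_mu, math.sqrt(prior_sigma**2 + est_sigma**2))

# An astronomically small marginal (here on the order of 1e-44) means the
# prior and the evidence cannot both be correct: the model is misspecified,
# and blindly multiplying the two densities would launder that confusion
# into a false-precision posterior.
print(marginal)
```

The point of the sketch is that the tiny normalization constant is itself the diagnostic: a posterior computed from two near-disjoint distributions is dominated by whichever tail assumption was least justified.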

Quick comment two:

I'm a major fan of Down-To-Earthness as a virtue of rationality, and I have told other SIAI people over and over that I really think they should stop using "small probability of large impact" arguments. I've told cryonics people the same. If you can't argue for a medium probability of a large impact, you shouldn't bother.

Part of my reason for saying this is, indeed, that trying to multiply a large utility interval by a small probability is an argument-stopper: an attempt to shut down further debate. And when someone sees such an attempt, they are justified in holding a strong prior that further argument, if explored, would produce further negative shifts from the perspective of the side trying to shut the debate down.

With that said, any overall scheme of planetary philanthropic planning that doesn't spend ten million dollars annually on Friendly AI is just stupid. It doesn't just fail the Categorical Imperative test of "What if everyone did that?", it fails the Predictable Retrospective Stupidity test of, "Assuming civilization survives, how incredibly stupid will our descendants predictably think we were to do that?"

Of course, I believe this because I think the creation of smarter-than-human intelligence has a (very) large probability of an (extremely) large impact, and that most of the probability mass there is concentrated into AI, and I don't think there's nothing that can be done about that, either.

I would summarize my quick reply by saying,

"I agree that it's a drastic warning sign when your decision process is spending most of its effort trying to achieve unprecedented outcomes of unquantifiable small probability, and that what I consider to be down-to-earth common sense is a great virtue of a rationalist. That said, down-to-earth common-sense says that AI is a screaming emergency at this point in our civilization's development, and I don't consider myself to be multiplying small probabilities by large utility intervals at any point in my strategy."

Comment author: lukeprog 26 June 2013 12:32:25AM 0 points

I have told other SIAI people over and over that I really think they should stop using "small probability of large impact" arguments.

I confirm this.