timtyler comments on Why We Can't Take Expected Value Estimates Literally (Even When They're Unbiased) - Less Wrong

75 Post author: HoldenKarnofsky 18 August 2011 11:34PM




Comment author: timtyler 21 August 2011 08:53:24PM 2 points [-]

If I were 90% sure that humanity is facing extinction as a result of badly done AI, but my confidence that averting that risk is possible were only 0.1%, while I estimated another existential risk to kill off humanity with 5% probability and my confidence in averting it were 1%, shouldn't I concentrate on the less probable but solvable risk?

I don't think so - assuming we are trying to maximise p(save all humans).

It appears that at least one of us is making a math mistake.
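Working the quoted numbers through (an illustrative sketch only; it reads "confidence in averting" as P(avert | risk otherwise impending) and treats the two risks as independent):

```python
# Contribution to P(save all humans) from working on each risk,
# using the figures from the quoted comment.
p_ai_risk = 0.90       # P(extinction-level AI disaster otherwise impending)
p_avert_ai = 0.001     # 0.1% confidence in averting it
p_other_risk = 0.05    # P(other existential disaster otherwise impending)
p_avert_other = 0.01   # 1% confidence in averting it

save_via_ai = p_ai_risk * p_avert_ai          # ~0.0009
save_via_other = p_other_risk * p_avert_other  # ~0.0005

print(save_via_ai > save_via_other)
```

On this reading the "hopeless" AI risk still offers the larger chance of saving humanity (about 0.09% vs. 0.05%), which is consistent with timtyler's "I don't think so."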

Comment author: CarlShulman 22 August 2011 03:14:09AM *  1 point [-]

I don't think so - assuming we are trying to maximise p(save all humans).

Likewise. ETA: on what I take as the default meaning of "confidence in averting" in this context, P(avert disaster|disaster otherwise impending).

Comment author: saturn 21 August 2011 09:00:12PM 2 points [-]

It's not clear whether "confidence in averting" means P(avert disaster) or P(avert disaster|disaster).
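The two readings saturn distinguishes differ by a factor of P(disaster), so conflating them double- or under-counts that factor. A quick sketch with hypothetical numbers (reusing the 90% / 0.1% figures from the quoted comment):

```python
# Two readings of "confidence in averting", with hypothetical numbers.
p_disaster = 0.90               # P(disaster otherwise impending)
p_avert_given_disaster = 0.001  # reading 1: P(avert | disaster)

# Reading 2, the unconditional P(avert disaster), counts only worlds
# where a disaster was impending AND it was averted:
p_avert_unconditional = p_disaster * p_avert_given_disaster

# Multiplying the unconditional reading by p_disaster again would
# double-count the probability that the disaster was impending.
print(p_avert_unconditional < p_avert_given_disaster)
```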