timtyler comments on Why We Can't Take Expected Value Estimates Literally (Even When They're Unbiased) - Less Wrong

Post author: HoldenKarnofsky 18 August 2011 11:34PM


Comment author: XiXiDu 19 August 2011 04:02:21PM 2 points

I can't endorse treating the parts of an argument that lack strong evidence (e.g. funding SIAI is the best way to help FAI) as justifications for ignoring the parts that have strong evidence (e.g. FAI is the highest EV priority around). In a case like that, the rational thing to do is to investigate more or find a third alternative, not to go on with business as usual.

I agree with the first sentence but don't know if the second sentence is always true. Even if my calculations show that solving friendly AI would avert the most probable cause of human extinction, I might estimate that any investigation into it will very likely turn out to be fruitless and that success is virtually impossible.

If I were 90% sure that humanity is facing extinction as a result of badly done AI, but my confidence that averting the risk is possible was only 0.1%, while I estimated another existential risk to kill off humanity with 5% probability and my confidence in averting it was 1%, shouldn't I concentrate on the less probable but solvable risk?

In other words, the question is not just how much evidence I have in favor of risks from AI but how confident I can be of mitigating it compared to other existential risks.

Could you outline your estimates of the expected value of contributing to SIAI, and of the probability that a negative Singularity can be averted as a result of work done by SIAI?

Comment author: timtyler 21 August 2011 08:53:24PM 2 points

If I were 90% sure that humanity is facing extinction as a result of badly done AI, but my confidence that averting the risk is possible was only 0.1%, while I estimated another existential risk to kill off humanity with 5% probability and my confidence in averting it was 1%, shouldn't I concentrate on the less probable but solvable risk?

I don't think so - assuming we are trying to maximise p(save all humans).

It appears that at least one of us is making a math mistake.

Comment author: saturn 21 August 2011 09:00:12PM 2 points

It's not clear whether "confidence in averting" means P(avert disaster) or P(avert disaster|disaster).

Comment author: CarlShulman 22 August 2011 03:14:09AM 1 point

I don't think so - assuming we are trying to maximise p(save all humans).

Likewise. ETA: on what I take as the default meaning of "confidence in averting" in this context, P(avert disaster|disaster otherwise impending).
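
The disagreement turns on arithmetic that can be made explicit. A minimal sketch in Python, using the figures from XiXiDu's hypothetical and the two readings of "confidence in averting" that saturn distinguishes (the variable names are mine, not from the thread):

```python
# Figures from XiXiDu's example
p_ai_disaster = 0.90       # P(extinction from badly done AI)
conf_avert_ai = 0.001      # "confidence in averting" the AI risk (0.1%)
p_other_disaster = 0.05    # P(extinction from the other existential risk)
conf_avert_other = 0.01    # "confidence in averting" the other risk (1%)

# Reading 1: confidence = P(avert disaster | disaster otherwise impending),
# CarlShulman's default. Expected reduction in P(extinction) from working
# on a risk is P(disaster) * P(avert | disaster).
reduction_ai = p_ai_disaster * conf_avert_ai           # 0.9 * 0.001 = 0.0009
reduction_other = p_other_disaster * conf_avert_other  # 0.05 * 0.01 = 0.0005
print(reduction_ai > reduction_other)   # True: the AI risk wins

# Reading 2: confidence = unconditional P(avert disaster). Here the
# confidence figure already is the expected reduction in P(extinction).
print(conf_avert_ai > conf_avert_other)  # False: the other risk wins
```

So under the conditional reading the AI risk dominates (0.0009 > 0.0005), matching timtyler's and CarlShulman's conclusion, while under the unconditional reading the "less probable but solvable" risk dominates, matching XiXiDu's intuition; the apparent math mistake is a disagreement about which quantity the 0.1% and 1% refer to.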