timtyler comments on Why We Can't Take Expected Value Estimates Literally (Even When They're Unbiased) - Less Wrong
I agree with the first sentence but don't know if the second sentence is always true. Even if my calculations show that solving friendly AI would avert the most probable cause of human extinction, I might estimate that any investigation into it will very likely be fruitless and that success is virtually impossible.
If I were 90% sure that humanity is facing extinction as a result of badly done AI, but my confidence that averting that risk is possible were only 0.1%, while I estimated that another existential risk would kill off humanity with 5% probability and my confidence in averting it were 1%, shouldn't I concentrate on the less probable but more solvable risk?
In other words, the question is not just how much evidence I have in favor of risks from AI but how confident I can be of mitigating it compared to other existential risks.
Could you outline your estimates of the expected value of contributing to the SIAI, and of the probability that a negative Singularity can be averted as a result of work done by the SIAI?
I don't think so - assuming we are trying to maximise p(save all humans).
It appears that at least one of us is making a math mistake.
It's not clear whether "confidence in averting" means P(avert disaster) or P(avert disaster|disaster).
Likewise. ETA: on what I take as the default meaning of "confidence in averting" in this context, P(avert disaster|disaster otherwise impending).
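For what it's worth, reading "confidence in averting" as P(avert disaster|disaster otherwise impending), the comparison implied by the hypothetical numbers above can be checked directly. A minimal sketch in Python (the probabilities are the hypothetical ones from the comment, not anyone's actual estimates):

```python
# Probability of saving humanity from a given risk, modelled as
# P(risk materialises) * P(avert risk | risk otherwise impending).

p_ai_risk = 0.90        # 90% sure of extinction from badly done AI
p_avert_ai = 0.001      # 0.1% confidence that the risk can be averted

p_other_risk = 0.05     # 5% probability for the other existential risk
p_avert_other = 0.01    # 1% confidence in averting it

p_saved_ai = p_ai_risk * p_avert_ai              # 0.0009
p_saved_other = p_other_risk * p_avert_other     # 0.0005

print(f"p(saved) via AI risk work:    {p_saved_ai:.4f}")     # 0.0009
print(f"p(saved) via other risk work: {p_saved_other:.4f}")  # 0.0005
```

Under these numbers, working on the AI risk still gives the higher probability of saving humanity (0.09% vs. 0.05%), which appears to be the math mistake being pointed to.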