XiXiDu comments on Why We Can't Take Expected Value Estimates Literally (Even When They're Unbiased) - Less Wrong

75 Post author: HoldenKarnofsky 18 August 2011 11:34PM


Comment author: MichaelVassar 19 August 2011 02:17:49PM 21 points [-]

I'm pretty sure that I endorse the same method you do, and that the "EEV" approach is a straw man.
It's also the case that while I can endorse "being hesitant to embrace arguments that seem to have anti-common-sense implications (unless the evidence behind these arguments is strong)", I can't endorse treating the parts of an argument that lack strong evidence (e.g. funding SIAI is the best way to help FAI) as justifications for ignoring the parts that have strong evidence (e.g. FAI is the highest EV priority around). In a case like that, the rational thing to do is to investigate more or find a third alternative, not to go on with business as usual.

Comment author: XiXiDu 19 August 2011 04:02:21PM 2 points [-]

I can't endorse treating the parts of an argument that lack strong evidence (e.g. funding SIAI is the best way to help FAI) as justifications for ignoring the parts that have strong evidence (e.g. FAI is the highest EV priority around). In a case like that, the rational thing to do is to investigate more or find a third alternative, not to go on with business as usual.

I agree with the first sentence but don't know if the second sentence is always true. Even if my calculations show that solving friendly AI will avert the most probable cause of human extinction, I might estimate that any investigations into it will very likely turn out to be fruitless and success to be virtually impossible.

If I were 90% sure that humanity is facing extinction as a result of badly done AI but my confidence that averting the risk is possible was only 0.1%, while I estimated another existential risk to kill off humanity with a 5% probability and my confidence in averting it was 1%, shouldn't I concentrate on the less probable but solvable risk?

In other words, the question is not just how much evidence I have in favor of risks from AI but how certain I can be to mitigate it compared to other existential risks.

Could you outline your estimates of the expected value of contributing to the SIAI, and of the probability that a negative Singularity can be averted as a result of work done by the SIAI?

Comment author: MichaelVassar 20 August 2011 12:40:19AM 4 points [-]

In practice, when I see a chance to do high-return work on other x-risks, such as synthetic bio, I do such work. It can't always be done publicly though. It doesn't seem likely at all to me that UFAI isn't a solvable problem, given enough capable people working hard on it for a couple of decades, and at the margin it's by far the least well funded major x-risk. So the real question, IMHO, is simply which organization has the best chance of actually turning funds into a solution. SIAI, FHI, or build your own org; but saying it's impossible without checking is just being lazy/stingy, and is particularly non-credible from someone who isn't making a serious effort on any other x-risk either.

Comment author: timtyler 21 August 2011 08:53:24PM 2 points [-]

If I were 90% sure that humanity is facing extinction as a result of badly done AI but my confidence that averting the risk is possible was only 0.1%, while I estimated another existential risk to kill off humanity with a 5% probability and my confidence in averting it was 1%, shouldn't I concentrate on the less probable but solvable risk?

I don't think so - assuming we are trying to maximise p(save all humans).

It appears that at least one of us is making a math mistake.
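Reading "confidence in averting" as P(avert disaster | disaster otherwise impending), the numbers in the quoted scenario can be multiplied out directly. A minimal sketch (variable names are mine, and the independence of the two risks is an assumption):

```python
# Hypothetical numbers from XiXiDu's scenario, read as
# P(avert | disaster otherwise impending) for each risk.
p_ai_disaster = 0.90     # probability of extinction from badly done AI
p_ai_avert = 0.001       # confidence that the AI risk can be averted
p_other_disaster = 0.05  # probability of the other existential risk
p_other_avert = 0.01     # confidence that the other risk can be averted

# Expected reduction in extinction probability from working on each
# risk, treating the two risks as independent.
ai_gain = p_ai_disaster * p_ai_avert           # 0.90 * 0.001 = 0.0009
other_gain = p_other_disaster * p_other_avert  # 0.05 * 0.01  = 0.0005

# On these numbers, work on the AI risk still buys more expected
# survival probability, despite the much lower chance of success.
assert ai_gain > other_gain
```

On this reading the more probable risk dominates (0.0009 > 0.0005). If "confidence in averting" instead means the unconditional P(avert disaster), the products are not the right comparison, which is the ambiguity saturn points out below.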

Comment author: saturn 21 August 2011 09:00:12PM 2 points [-]

It's not clear whether "confidence in averting" means P(avert disaster) or P(avert disaster|disaster).

Comment author: CarlShulman 22 August 2011 03:14:09AM *  1 point [-]

I don't think so - assuming we are trying to maximise p(save all humans).

Likewise. ETA: on what I take as the default meaning of "confidence in averting" in this context, P(avert disaster|disaster otherwise impending).