Comment author: hawkice 01 December 2014 01:56:33AM 0 points [-]

I'm having trouble imagining how risk would ever go down, sans entering a machine-run totalitarian state, so I clearly don't have the same assessment of bad things happening "sooner rather than later". I can't think of a single dangerous activity that is harder or less dangerous now than it was in the past, and I suspect this will continue. The only things that will happen sooner rather than later are establishing stable and safe equilibria (like post-Cold War nuclear politics). If me personally being alive meaningfully affects an equilibrium (implicit or explicit), then humanity is quite completely screwed.

Comment author: 3p1cd3m0n 04 December 2014 01:45:22AM 0 points [-]

For one, Yudkowsky, in Artificial Intelligence as a Positive and Negative Factor in Global Risk, says that an artificial general intelligence could potentially use its superintelligence to decrease existential risk in ways we haven't thought of. Additionally, I suspect (though I am rather uninformed on the topic) that Earth-originating life will be much less vulnerable once it spreads away from Earth, as I think many catastrophes would be local to a single planet. I suspect catastrophes from nanotechnology are one such example.

Comment author: 3p1cd3m0n 01 December 2014 12:14:01AM 0 points [-]

How important is trying to personally live longer for decreasing existential risk? IMO, it seems that most of the risk of existential catastrophe occurs sooner rather than later, so I doubt living much longer is extremely important. For example, Wikipedia says that a study at the Singularity Summit found that the median predicted date for the singularity is 2040, and one person gave an 80% confidence interval of 5-100 years. Nanotechnology seems to be predicted to come sooner rather than later as well. What does everyone else think?

Comment author: 3p1cd3m0n 30 November 2014 11:19:14PM 0 points [-]

Is there any justification for the leverage penalty? I understand that it would apply if there were a finite number of agents, but if there's an infinite number of agents, couldn't every agent have an effect on an arbitrarily large number of other agents? Shouldn't the prior probability instead be P(event A | n agents will be affected) = 1/n + P(there being infinitely many agents)? If this is the case, then it seems the leverage penalty won't stop one from being mugged.
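To sketch the worry numerically (my own illustration, not from the original comment): write the second term as e = P(there being infinitely many agents) > 0. The penalized prior for being able to affect n agents is then roughly 1/n + e, so the expected number of agents affected by taking the mugger's deal is n * (1/n + e) = 1 + n*e, which grows without bound as the mugger names a larger n. If that is the right prior, the leverage penalty no longer caps the payoff.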

Comment author: ike 20 November 2014 02:39:48PM 1 point [-]
Comment author: 3p1cd3m0n 21 November 2014 02:30:03AM 0 points [-]

Thanks. That really helps. Do you know of any decent arguments suggesting that working on trying to develop safe tool AI (or some other non-AGI AI) would increase existential risk?

Comment author: 3p1cd3m0n 20 November 2014 03:31:22AM *  0 points [-]

Are there any decent arguments saying that working on trying to develop safe AGI would increase existential risk? I've found none, but I'd like to know because I'm considering developing AGI as a career.

Edit: What about AI that's not AGI?

Comment author: Jiro 27 August 2014 06:03:57PM *  0 points [-]

But your earlier quote says that it makes sense to reduce risk by a millionth of a percentage point because the expected value of the lives saved is still large. It doesn't propose reducing the risk from 19% to nothing; it proposes reducing the risk by a tiny amount. Only in the unlikely event that this tiny change happens to be the tipping point that prevents extinction would the reduction be beneficial; the expected value is derived by multiplying that unlikelihood by the large number of lives saved if it were. That sounds like Pascal's Mugging. I agree that it wouldn't be Pascal's Mugging to reduce the 19% to 0, but I think that reducing it to 18.999999% is.

In response to comment by Jiro on Efficient Charity
Comment author: 3p1cd3m0n 28 August 2014 05:00:18PM 0 points [-]

I see what you mean. I don't really know enough about Pascal's mugging to determine whether decreasing existential risk by one millionth of a percentage point is worth it, but it's a moot point, as it seems reasonable that existential risk could be reduced by far more than one millionth of one percentage point.

Comment author: Jiro 27 August 2014 02:41:33PM 0 points [-]

Doesn't that fall prey to Pascal's Mugging?

In response to comment by Jiro on Efficient Charity
Comment author: 3p1cd3m0n 27 August 2014 05:54:17PM 0 points [-]

I don't think decreasing existential risk falls into it, because the probability of an existential catastrophe isn't extremely small. One survey taken at Oxford estimated a ~19% chance of human extinction prior to 2100. Determining the probability of existential catastrophe is very challenging and the aforementioned statistic should be viewed skeptically, but a probability anywhere near 19% would still (as far as I can tell) prevent it from falling prey to Pascal's mugging.

In response to Efficient Charity
Comment author: 3p1cd3m0n 26 August 2014 09:47:22PM *  0 points [-]

For many utility functions, I think donating to an organisation working on decreasing existential risk would be incredibly efficient, as:

Even if we use the most conservative of [estimates of the utility of decreasing existential risk], which entirely ignores the possibility of space colonisation and software minds, we find that the expected loss of an existential catastrophe is greater than the value of 10^16 human lives. This implies that the expected value of reducing existential risk by a mere one millionth of one percentage point is at least a hundred times the value of a million human lives. (Bostrom, Existential Risk Prevention as Global Priority)
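To spell out the arithmetic in that quote (my own check, not part of Bostrom's text): one millionth of one percentage point is 10^-8, so the expected value of such a reduction is at least 10^-8 * 10^16 lives = 10^8 lives, i.e. a hundred times a million (10^6) lives.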
