Mac comments on Report -- Allocating risk mitigation across time - LessWrong

Post author: owencb 20 February 2015 04:37PM


Comment author: Mac 23 February 2015 12:34:52AM

I argue that we may be underinvesting in scenarios where AI comes soon even though these scenarios are relatively unlikely, because we will not have time later to address them.

Edit: Separately...

p(X) denotes the probability that we will face problem X. Note that this is meant to be an absolute probability, not conditional on getting to the the point where we might face X.

Are you assuming a hard takeoff intelligence explosion? If not, shouldn’t you also be interested in the probability of UFAI given future advances that may lead to it?
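For concreteness, the distinction between the absolute p(X) and a conditional probability can be sketched numerically (the numbers below are made up for illustration, not taken from the post):

```python
# Illustrative sketch: absolute vs. conditional probability of facing
# problem X. All numbers are hypothetical.

# Probability that we get to the point where problem X could arise at all.
p_reach = 0.8

# Conditional probability of facing X, given that we reach that point.
p_x_given_reach = 0.5

# The post's p(X) is the absolute (unconditional) probability:
p_x = p_reach * p_x_given_reach
print(p_x)  # 0.4
```

On this reading, a scenario can have a high conditional probability but a modest absolute p(X) if reaching the relevant point is itself uncertain, which is why the question of conditioning matters for how resources get allocated.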

Kurzweil seems to think we will pass some unambiguous signposts on the way to superhuman AI. I would grant this scenario a nonzero probability.

Nitpick: “the the” is a typo.