Mac comments on Report -- Allocating risk mitigation across time - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Edit: Separately...
Are you assuming a hard-takeoff intelligence explosion? If not, shouldn’t you also be interested in the probability of UFAI given future advances that may lead to it?
Kurzweil seems to think we will pass some unambiguous signposts on the way to superhuman AI. I would grant this scenario a nonzero probability.
Nitpick: “the the” is a typo.