ChrisHallquist comments on Reply to Holden on The Singularity Institute - Less Wrong

Post author: lukeprog, 10 July 2012 11:20PM


Comment author: ChrisHallquist, 13 July 2012 08:15:24AM, 1 point

Though initially impressed by this, I found one thing to pick at:

Reason 1: Mitigating AI risk could mitigate all other existential risks, but not vice-versa.

"Could" here tells you very little. The question isn't whether "build FAI" could work as a strategy for mitigating all other existential risks, it's whether that strategy has a good enough chance of working to be superior to other strategies for mitigating the other risks. What's missing is an argument for saying "yes" to that second question.