army1987 comments on Reply to Holden on The Singularity Institute - Less Wrong

46 Post author: lukeprog 10 July 2012 11:20PM




Comment author: Vladimir_Nesov 13 July 2012 12:54:16AM *  6 points [-]

For example, 100 years ago it would seem to have been too early to fund work on AI risk mitigation

Disagree. There are many remaining theoretical (philosophical and mathematical) difficulties whose investigation doesn't depend on the current level of technology. It would've been better to start working on the problem 300 years ago, when AI risk was still far away. The value of information on this problem is high: we didn't (and still don't) know that there is nothing to be discovered, so it wouldn't have been surprising if some kind of progress had been made.

Comment author: [deleted] 13 July 2012 10:14:03PM 1 point [-]

That's hindsight. Nobody could have reasonably foreseen the rise of very powerful computing machines that long ago.