akvadrako comments on Top 9+2 myths about AI risk - Less Wrong

Post author: Stuart_Armstrong 29 June 2015 08:41PM 44 points




Comment author: Andy_McKenzie 30 June 2015 01:28:04PM 4 points

“That we want to stop AI research. We don’t. Current AI research is very far from the risky areas and abilities. And it’s risk aware AI researchers that are most likely to figure out how to make safe AI.”

Is it really the case that nobody interested in AI risk/safety wants to stop or slow down progress in AI research? It seemed to me that at least a substantial minority wanted to do this, to buy time.

Comment author: akvadrako 14 August 2015 09:18:44AM 1 point

I am one of those proponents of stopping all AI research, and I will explain why.

(1) Don't stand too close to the cliff. We don't know how AGI will emerge, and by the time we are close enough to know, it will probably be too late; either human error or malfeasance will bring us over the edge.

(2) Friendly AGI might be impossible. Computer scientists cannot reliably predict the behavior of even simple programs. The halting problem, one specific kind of behavioral prediction (will this program ever finish?), is provably undecidable for non-trivial code. I doubt we'll even grasp why the first AGI we build works.
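To make the undecidability claim concrete, here is a minimal sketch of the classic diagonalization argument in Python. The names halts and contrarian are illustrative placeholders, not any real library's API:

```python
# Sketch of the classic halting-problem diagonalization argument.
# `halts` and `contrarian` are illustrative names, not a real API.

def halts(program, program_input):
    """Hypothetical oracle: True iff program(program_input) halts.

    The argument below shows that no total, always-correct version of
    this function can exist, so it is deliberately left unimplemented.
    """
    raise NotImplementedError("provably impossible in general")

def contrarian(program):
    """Do the opposite of whatever `halts` predicts about running
    `program` on its own source."""
    if halts(program, program):
        while True:   # predicted to halt, so loop forever
            pass
    else:
        return        # predicted to loop, so halt immediately

# Now ask what halts(contrarian, contrarian) should return:
#   True  => contrarian(contrarian) loops forever: the oracle was wrong.
#   False => contrarian(contrarian) halts:         the oracle was wrong.
# Either way the oracle errs, so no correct `halts` can be written.
```

This is the strong sense in which prediction fails: even a single yes/no question about program behavior has no general algorithmic answer, and Rice's theorem extends the same impossibility to essentially any non-trivial behavioral property.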

Neither of these statements seems controversial, so if we are determined not to produce unfriendly AGI, the only safe approach is to stop AI research well before it becomes dangerous. It's like playing with fire in a straw cabin that is our only shelter on a deserted island. Things would be different if we someday solve the friendliness problem, build a provably secure "box", or become well distributed across the galaxy.