Arenamontanus comments on Top 9+2 myths about AI risk - Less Wrong Discussion

Post author: Stuart_Armstrong | 29 June 2015 08:41PM | 44 points

Comment author: Andy_McKenzie | 30 June 2015 01:28:04PM | 4 points

> That we want to stop AI research. We don't. Current AI research is very far from the risky areas and abilities. And it's risk-aware AI researchers who are most likely to figure out how to make safe AI.

Is it really the case that nobody interested in AI risk/safety wants to stop or slow down progress in AI research? It seemed to me that there was at least a substantial minority that wanted to do this, to buy time.

Comment author: Arenamontanus | 01 July 2015 09:10:19AM | 3 points

I remember that we were joking at the NYC Singularity Summit workshop a few years back that maybe we should provide AI researchers with heroin and philosophers to slow them down.

As far as I have noticed, there are few if any voices in the academic and nearby AI safety community that promote slowing AI research as the best (or even a good) option. Those who do talk about relinquishment or slowing seem to be far outside the main discourse: typically people who have only a passing acquaintance with the topic, or who hold a broader scepticism of technology.

The best antidote is to start thinking through the details of how one would actually go about slowing research: that exercise generally shows why differential development is the more sensible approach.