Artaxerxes comments on Musk on AGI Timeframes - Less Wrong

19 Post author: Artaxerxes 17 November 2014 01:36AM



Comment author: Artaxerxes 17 November 2014 04:07:29AM 5 points

Well, his comment was deleted, possibly by him, so we should take that into account - maybe he thought he was being a bit overly Cassandra-like too.

The other thing to remember is that Musk's comments reach a slightly different audience than usual with regard to AI risk. So it's at least somewhat relevant to see the perspective of the person communicating to these people.

Comment author: examachine 18 November 2014 04:30:15AM 0 points

I think it would actually be helpful if researchers ran more experiments with AGI agents, showing what could go wrong and how to deal with such error conditions. I don't think the "social sciences" approach to this works.

Comment author: JoshuaZ 17 December 2014 03:44:00AM *  2 points

This misses the basic problem: most of the ways things can go seriously wrong would occur only after the AGI is already an AGI, and once they've happened one cannot recover.

More concretely, what experiment in your view should they be doing?