Artaxerxes comments on Musk on AGI Timeframes - Less Wrong
Well, his comment was deleted, possibly by him, so we should take that into account - maybe he thought he was being a bit overly Cassandra-like too.
The other thing to remember is that Musk's comments reach a somewhat different audience than the usual one for AI risk discussions. So it's at least somewhat relevant to consider the perspective of the person communicating with those people.
I think it would actually be helpful if researchers ran more experiments with AGI agents showing what could go wrong and how to handle such error conditions. I don't think the "social sciences" approach to this works.
This misses the basic problem: most of the ways things can go seriously wrong would only occur after the AGI is already an AGI, and once they have happened there is no way to recover.
More concretely, what experiments, in your view, should they be doing?