Artaxerxes comments on Musk on AGI Timeframes - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
This is not Musk's field of expertise. I do not give his words special weight.
The fact that he can sit in on some cutting edge tech demos, or even chat with CEOs, still doesn't make him an expert.
I have a technical background in AI; there are still massive hurdles to overcome, and they aren't 5-10 year hurdles. Nothing from DeepMind will "escape onto the internet" any time soon. Its work is very much grounded in "narrow AI" technologies like machine learning.
I feel pretty confident calling him a Cassandra.
Well, his comment was deleted, possibly by him, so we should take that into account - maybe he thought he was being a bit overly Cassandra-like too.
The other thing to remember is that Musk's comments reach a somewhat different audience than the one usually exposed to AI risk arguments. So it's at least somewhat relevant to see the perspective of the person communicating with those people.
I think it would actually be helpful if researchers made more experiments with AGI agents showing what could go wrong and how to deal with such error conditions. I don't think that the "social sciences" approach to that works.
This misses the basic problem: most of the ways things can go seriously wrong would only occur after the AGI is already an AGI, and once they've happened, there is no recovering from them.
More concretely, what experiment in your view should they be doing?