TheAncientGeek comments on Debunking Fallacies in the Theory of AI Motivation - LessWrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
The debate is about an AGI that is essentially all powerful.
No. I don't assume that humans would be able to understand what the AGI would ask. There's no way to ask a human once what the AGI does rises above a threshold of complexity that humans can understand.
Have you read Friendship is Optimal?
Might be better expressed as "able to exploit our technologies and psychology in ways we couldn't guess".