TheAncientGeek comments on Debunking Fallacies in the Theory of AI Motivation - Less Wrong

Post author: Richard_Loosemore 05 May 2015 02:46AM




Comment author: ChristianKl 11 May 2015 10:21:26PM

I don't know where you got these ideas about "choice engineering" and about the limits of what the AGI could achieve in the way of persuasion if it were "smart enough,"

The debate is about an AGI that is essentially all powerful.

Engage in some activity which, if the humans were asked about it beforehand, they would refuse to consent

No. I don't assume that humans would be able to understand it if the AGI were to ask. There's no way to ask a human once what the AGI does rises above a threshold of complexity that humans can understand.

Have you read Friendship is Optimal?

Comment author: TheAncientGeek 16 May 2015 10:56:20AM

The debate is about an AGI that is essentially all powerful.

Might be better expressed as "able to exploit our technologies, and psychology, in ways we couldn't guess".