nshepperd comments on Debunking Fallacies in the Theory of AI Motivation - Less Wrong

8 Post author: Richard_Loosemore 05 May 2015 02:46AM




Comment author: nshepperd 11 May 2015 09:36:56AM 2 points [-]

Using "good" to refer only to what is actually good is, however, vastly better as precision goes. What I am taking issue with here is the careless equivocation between maximising pleasure and good intentions. A correct description of the "nanny AI" scenario would read something like this:

[The AI] has bad intentions (it was programmed to maximise human pleasure), and indeed by using its superior intelligence it successfully achieves that goal and does in fact maximise human pleasure -- by connecting all human brains up to dopamine drips.

Of course it is true that an AI programmed to do what is good would most likely increase happiness (and even pleasure) to some extent, but to conclude from this that the two are interchangeable is pure folly.