nshepperd comments on Debunking Fallacies in the Theory of AI Motivation - LessWrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Using "good" to refer only to what is actually good is, however, vastly better as far as precision goes. What I am taking issue with here is the careless equivocation between maximising pleasure and good intentions. A correct description of the "nanny AI" scenario would read something like this:
Of course it is true that an AI programmed to do what is good would most likely increase happiness (and even pleasure) to some extent, but to conclude from that that these things are interchangeable is pure folly.