dripgrind comments on My true rejection - Less Wrong

-16 Post author: dripgrind 14 July 2011 10:04PM




Comment author: orthonormal 14 July 2011 11:06:49PM 5 points [-]

Your proposals are the kind of strawman utilitarianism that turns out, for several reasons, to be both wrong and stupid.

Also, I don't think you understand what the SIAI argues an unFriendly intelligence would do if programmed to maximize, say, the personal wealth of its programmers. The short version: it would be suicide, or worse, in terms of what the programmers actually want. The point at which a smarter-than-human AI could be successfully abused by a selfish few comes after the problem of Friendliness has been solved, not before.

Comment author: dripgrind 15 July 2011 12:06:58AM 0 points [-]

Ah, another point about maximising: what if the AI uses the CEV of the programmers or the corporation? In other words, what if it's programmed to maximise their wealth in a way they would actually want? Solving that problem is a subset of Friendliness.

Comment author: orthonormal 15 July 2011 01:18:44AM *  1 point [-]

That's not how the term is used here. Friendliness is prior to, and separate from, CEV, if I understand it correctly.

From the CEV document:

Suppose we get the funding, find the people, pull together the project, solve the technical problems of AI and Friendly AI, carry out the required teaching, and finally set a superintelligence in motion, all before anyone else throws something together that only does recursive self-improvement. It is still pointless if we do not have a nice task for that optimization process to carry out.