Kindly comments on Holden Karnofsky's Singularity Institute Objection 1 - Less Wrong

Post author: ciphergoth 11 May 2012 07:16AM


Comment author: Kindly 13 May 2012 05:52:37PM

Well, there's the problem of getting the human to be sufficiently well-meaning, as opposed to using Earth as The Sims 2100 before moving on to bigger and better galaxies. But if Friendliness is a coherent concept to begin with, why wouldn't the well-meaning superhuman figure it out after spending some time thinking about it?

Edit: What I'm saying is that if the candidate Friendly AI is actually a superhuman, then we don't have to worry about Step 1 of Friendliness: explaining the problem. Step 2 is convincing the superhuman to care about the problem, and I don't know how likely that is. And finally, Step 3 is figuring out the solution; assuming the human is sufficiently super, that wouldn't be difficult (all it requires is intelligence, which is what we're giving the human to begin with).

Comment author: TheOtherDave 13 May 2012 06:17:40PM

Agreed that a sufficiently intelligent human would be no less capable of understanding human values, given data and time, than an equally intelligent nonhuman.