jacob_cannell comments on David Chalmers' "The Singularity: A Philosophical Analysis" - Less Wrong

Post author: lukeprog 29 January 2011 02:52AM

Comment author: jacob_cannell 30 January 2011 09:05:48PM 0 points

The same issue applies to children - they don't necessarily have the same 'utility function'; sometimes they even literally kill us, but usually they help us.

That would be not so much a benevolence explosion as a single AI creating "slave" AIs for its own purposes

So do we create children as our 'slaves' for our own purposes? You seem to be categorically ruling out the entire possibility of humans creating human-like AIs that have a parent-child relationship with their creators.

So, just to make it perfectly clear, I'm talking about that type of AI specifically. The importance and feasibility of that type of AGI versus other types is a separate discussion.

Sure it is - this part at least is easy. For example, an AGI that is fully altruistic and experiences love as its single emotion would clearly be "somewhat better than us" from our perspective, in every sense that matters.

If you mean that the AI doesn't [ .. ] That's the AI being more rational than us, and therefore better optimising for its utility function.

I don't see it as having anything to do with rationality.

The altruistic human-ish AGI mentioned above would be better than current humans from our current perspective - more like what we wish ourselves to be, and more able to improve our world.

Moreover, if our utility function describes what we truly want (which is the whole point of a utility function), it follows that we truly want an AI that optimizes for our utility function.

Yes.

This is obvious if its 'utility function' is just a projection of my own - i.e., it simulates what I would want and uses that as its utility function - but that isn't even necessary: its utility function could be somewhat more complex than just a simulated projection of my own and still help fulfill my utility function.

That's why the plan is for the AI to figure it out by inspecting us. Morality is very much not simple to code.

If by inspection you just mean teaching the AI morality in human language, then I agree, but that's a side point.