jacob_cannell comments on David Chalmers' "The Singularity: A Philosophical Analysis" - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
So do we create children as our 'slaves' for our own purposes? You seem to be categorically ruling out the entire possibility of humans creating human-like AIs that have a parent-child relationship with their creators.
So, just to make it precisely clear: I'm talking about that type of AI specifically. The importance and feasibility of that type of AGI versus other types is a separate discussion.
I don't see it as having anything to do with rationality.
The altruistic human-ish AGI mentioned above would be better than current humans from our current perspective - more like what we wish ourselves to be, and more able to improve our world than current humans.
Yes.
This is obvious if its 'utility function' is just a projection of my own, i.e. it simulates what I would want and uses that as its utility function. But that isn't even necessary: its utility function could be somewhat more complex than a simulated projection of my own and still help fulfill my utility function.
If by inspection you just mean teaching the AI morality in human language, then I agree, but that's a side point.