timtyler comments on My true rejection - Less Wrong

Post author: dripgrind | 14 July 2011 10:04PM | -16 points


Comment author: timtyler | 15 July 2011 12:09:18PM | -2 points

"Maximize the utility of this one guy" isn't much easier than "Maximize the utility of all humanity" when the real problem is defining "maximize utility" in a stable way.

It seems quite a bit easier to me! Maybe not 7 billion times easier - but heading that way.

> If it were, you could create a decent (though probably not recommended) approximation to Friendly AI just by saying "Maximize the utility of this one guy here, who's clearly very nice and wants what's best for humanity."

That would work, if everyone agreed to trust them and that faith were justified. However, there doesn't seem to be much chance of that happening.