Yvain comments on My true rejection - Less Wrong

-16 Post author: dripgrind 14 July 2011 10:04PM


Comments (45)


Comment author: Yvain 15 July 2011 01:36:34AM 5 points

Not really. "Maximize the utility of this one guy" isn't much easier than "Maximize the utility of all humanity" when the real problem is defining "maximize utility" in a stable way. If it were, you could create a decent (though probably not recommended) approximate solution to the Friendly AI problem just by saying "Maximize the utility of this one guy here who's clearly very nice and wants what's best for humanity."

There are some serious problems with getting something that takes interpersonal conflicts into account in a reasonable way, but that's not where the majority of the difficulty lies.

I'd even go so far as to say that if someone built a successful IBM-CEO-utility-maximizer it'd be a net win for humanity, compared to our current prospects. With absolute power there's not a lot of incentive to be an especially malevolent dictator (see Moldbug's Fhnargl thought experiment for something similar) and in a post-scarcity world there'd be more than enough for everyone including IBM executives. It'd be sub-optimal, but compared to Unfriendly AI? Piece of cake.

Comment author: arundelo 15 July 2011 02:28:39AM 7 points

Moldbug's Fhnargl thought experiment

Fnargl.

Comment author: Yvain 15 July 2011 10:36:02AM 7 points

[Yvain crosses "get corrected on spelling of 'Fnargl'" off his List Of Things To Do In Life]

Comment author: arundelo 15 July 2011 01:05:04PM 0 points

Glad to be of service!

Comment author: ikrase 26 November 2012 06:21:07AM 0 points

If somebody were going to build an IBM profit AI (of the sort of godlike AI that people here talk about), it would almost certainly end up doubling as the IBM CEO Charity Foundation AI.

Comment author: timtyler 15 July 2011 12:09:18PM -2 points

"Maximize the utility of this one guy" isn't much easier than "Maximize the utility of all humanity" when the real problem is defining "maximize utility" in a stable way.

It seems quite a bit easier to me! Maybe not 7 billion times easier - but heading that way.

If it were, you could create a decent (though probably not recommended) approximate solution to the Friendly AI problem just by saying "Maximize the utility of this one guy here who's clearly very nice and wants what's best for humanity."

That would work - if everyone agreed to trust that person, and that trust were justified. However, there doesn't seem to be much chance of that happening.