PhilGoetz comments on What if AI doesn't quite go FOOM? - Less Wrong

Post author: Mass_Driver 20 June 2010 12:03AM


Comment author: xxd 30 November 2011 04:50:01PM

Phil: an AI that seeks resources to further its own goals at the expense of everyone else is, by definition, an unfriendly AI.

A transhuman AI PhilGoetz would be such a being.

Now consider this: I'd prefer the average of all human utility functions over my own maximized utility function, even if it means I have less utility.

I don't want humanity to die, and I am prepared to die myself to prevent that from happening.

Which of the two utility functions would most of humanity prefer, hmm?

Comment author: PhilGoetz 06 December 2011 10:16:03PM

"Phil: an AI that seeks resources to further its own goals at the expense of everyone else is, by definition, an unfriendly AI."

The question is whether the PhilGoetz utility function or the average human utility function is better. Assume both are implemented in AIs of equal power. What makes the average human utility function "friendlier"? It would have you outlaw homosexuality and sex before marriage, remove all environmental protection laws, make child abuse and wife abuse legal, take away legal rights from women, give wedgies to smart people, and so on.

"Now consider this: I'd prefer the average of all human utility functions over my own maximized utility function, even if it means I have less utility."

I don't think you understand utility functions.
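To spell out the difficulty (a sketch, assuming von Neumann-Morgenstern utilities; the notation is illustrative, not anything from the thread above): a utility function just encodes your preferences, so "preferring an outcome that gives me less utility" is incoherent unless you redefine "utility" to mean something narrower, like selfish welfare. And "the average of all human utility functions" is underspecified, because each person's utility function is only determined up to a positive affine transformation:

\bar{U}(x) = \frac{1}{N} \sum_{i=1}^{N} U_i(x), \quad \text{where each } U_i \text{ is equivalent to } a_i U_i + b_i \text{ for any } a_i > 0.

Rescaling any one person's a_i can change which option maximizes \bar{U}, so the average is not well-defined without some interpersonal normalization, and choosing that normalization smuggles a value judgment back in.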

Comment author: xxd 16 December 2011 12:27:54AM

"The question is whether the PhilGoetz utility function, or the average human utility function, are better. "

That is indeed the question. But I think you've stacked the deck with your description of what you believe the average human utility function is, in order to claim the moral high ground rather than argue against my point, which is this:

How do you maximize the preferred utility function for everyone instead of just a small group?