AlexMennen comments on Hacking the CEV for Fun and Profit - Less Wrong

Post author: Wei_Dai 03 June 2010 08:30PM




Comment author: AlexMennen 13 June 2010 03:08:42PM 1 point [-]

There is a worrying tendency on LW to verbally acknowledge moral antirealism, but then argue as if moral realism were true.

I did not intend to imply that moral realism was true. If I somehow seemed to indicate that, please explain so I can make the wording less confusing.

There is no in-principle reason for humans to agree on what to do under extrapolation, and in practice we tend to disagree a lot before extrapolation.

True, but many of the disagreements between people concern methods rather than goals or morals, and those disagreements are not relevant under extrapolation. Moreover, I want other people to get what they want. So if an AI programmed to optimize the universe according to my utility function does not do something fairly similar to optimizing it according to the average human utility function, then either the AI is misprogrammed or the average human utility function has been changed radically by unfavorable circumstances like the one described in the top-level post. I suspect that the same is true of you. And if you do not want other people to get what they want, what is the point of using the average human utility function in the first place?

Comment deleted 13 June 2010 04:19:34PM [-]
Comment author: AlexMennen 13 June 2010 04:39:49PM 2 points [-]

Most fundamentalist Christians believe that there is a hell, believe that people like me are destined for it, and want their religion to be right; but they probably would not want an approximation of their religion created conditional on its not already being right. An AI cannot make Bob right.

That said, there probably are some people who would want me thrown into hell even if the religion stipulating that fate was not right in the first place. So I should amend my statement: I want people to get what they want in ways that do not conflict, or conflict only minimally, with what other people want. Also, the possibility that there are a great many people like Bob (as I said, I'm not quite sure how many fundamentalists would want to make their religion true even if it isn't) is a very good reason not to use the average human utility function for the CEV. As you said, I do not want Bob to get what he wants, and I suspect that you don't either. So why would you want to create an FAI with a CEV that is inclined to accommodate Bob's wish (which greatly conflicts with what other people want) if it proves especially popular?

Comment deleted 13 June 2010 07:03:42PM [-]
Comment author: AlexMennen 14 June 2010 03:55:56AM 2 points [-]

Well, I suppose we can reliably expect that there are not enough people like Bob, and that my getting tortured removes much more utility from me than it gives Bob, but that's missing the point.

Imagine yourself in a world in which the vast majority of people want to subject a certain minority group to eternal torture. The majority is so vast that an FAI with a CEV based on the average human utility function would be likely to subject the members of that minority group to eternal torture. You have the ability to create an FAI with a CEV based on the average human utility function, an FAI with a CEV based on your personal utility function, or no FAI at all. What do you do?

Comment deleted 14 June 2010 09:12:29AM *  [-]
Comment author: AlexMennen 15 June 2010 04:43:29AM 1 point [-]

Silly me; I thought that we were arguing about whether using a personal utility function is a better substitute, and I was rather confused by what appeared to be a sudden concession. Looking at the comments above, I notice that you in fact only disputed my claim that the results would be very similar.

Comment author: Blueberry 13 June 2010 05:20:13PM -1 points [-]

CEV doesn't just average people's wishes. It extrapolates what people would want if they were better informed. Even if Bob wants to create a hell right now, his extrapolated volition may be for something else.

Comment author: purpleposeidon 09 July 2010 07:25:31AM 1 point [-]

I want Bob to think he gets what he wants.