Blueberry comments on Hacking the CEV for Fun and Profit - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (194)
Most fundamentalist Christians, although believing that there is a hell, that people like me are destined for it, and wanting their religion to be right, probably would not want an approximation of their religion created conditional on its not already being right. An AI cannot make Bob right.
That being said, there probably are some people who would want me thrown into hell anyway, even if the religion stipulating that fate was not right in the first place. So I should amend my statement: I want people to get what they want in ways that do not conflict, or conflict only minimally, with what other people want. Also, the possibility that there are a great many people like Bob (as I said, I'm not sure how many fundamentalists would want to make their religion true even if it isn't) is a very good reason not to use the average human utility function for the CEV. As you said, I do not want Bob to get what he wants, and I suspect that you don't either. So why would you want to create an FAI with a CEV that is inclined to accommodate Bob's wish (which greatly conflicts with what other people want) if it proves especially popular?
CEV doesn't just average people's wishes. It extrapolates what people would want if they were better informed. Even if Bob wants to create a hell right now, his extrapolated volition may be for something else.