
Oscar_Cunningham comments on Harsanyi's Social Aggregation Theorem and what it means for CEV - Less Wrong Discussion

21 Post author: AlexMennen 05 January 2013 09:38PM



Comment author: Oscar_Cunningham 07 January 2013 09:49:30AM 4 points [-]

The situation analogous to Simpson's paradox can only occur if for some reason we care about some people's opinions more than others' in some situations. (This is analogous to the situation in Simpson's paradox where we have more data points in some parts of the table than in others; it is a necessary condition for the paradox to occur.)

For example: Suppose Alice (female) values a cure for prostate cancer at 10 utils and a cure for breast cancer at 15 utils. Bob (male) values a cure for prostate cancer at 100 utils and a cure for breast cancer at 150 utils. Suppose that, because prostate cancer largely affects men and breast cancer largely affects women, we weight Alice's opinion twice as much on breast cancer and Bob's opinion twice as much on prostate cancer. Then in the aggregate, curing prostate cancer is worth 210 utils and curing breast cancer 180 utils: a preference reversal compared to both Alice and Bob, each of whom prefers the breast cancer cure.
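The arithmetic above can be sketched in a few lines. This is just an illustration of the numbers in the comment; the dictionary names and the `aggregate` helper are my own labels, not anything from the original.

```python
# Utilities from the example: each person's value for each cure, in utils.
alice = {"prostate": 10, "breast": 15}
bob = {"prostate": 100, "breast": 150}

# Issue-dependent weights: Alice counts double on breast cancer,
# Bob counts double on prostate cancer.
weights = {
    "prostate": {"alice": 1, "bob": 2},
    "breast": {"alice": 2, "bob": 1},
}

def aggregate(issue):
    """Weighted sum of Alice's and Bob's utilities for the given issue."""
    return (weights[issue]["alice"] * alice[issue]
            + weights[issue]["bob"] * bob[issue])

print(aggregate("prostate"))  # 210
print(aggregate("breast"))    # 180
```

Both individuals rank the breast cancer cure above the prostate cancer cure (15 > 10 and 150 > 100), yet the aggregate ranks them the other way round, because the weights vary by issue.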

Comment author: AlexMennen 07 January 2013 08:11:34PM 2 points [-]

This is essentially just an example of Harsanyi's Theorem in action. And I think it makes a compelling demonstration of why you should not program an AI in that fashion.

Comment author: 615C68A6 07 January 2013 03:05:26PM 0 points [-]

can only occur if for some reason we care about some people's opinion more than others in some situations

Isn't that the description of a utility maximizer (or optimizer) taking into account the preferences of a utility monster?

Comment author: Oscar_Cunningham 08 January 2013 12:29:57PM 0 points [-]

To get the effect we need an optimiser that cares more about some people's opinions on some things but, on other things, cares more about someone else's opinion. If we just have a utility monster whom the optimiser always values more than others, we can't get the effect. The important thing is that it sometimes cares more about one person and sometimes more about another.
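The contrast with a utility monster can be checked with the same numbers: if Bob's opinion always counts the same fixed multiple, the aggregate is a positive linear combination and preserves any ranking the individuals share, so no reversal appears. This is an illustrative sketch using the example's figures; the constant weight of 2 is an arbitrary choice for the monster.

```python
# Same utilities as in the example above.
alice = {"prostate": 10, "breast": 15}
bob = {"prostate": 100, "breast": 150}

def aggregate_constant(issue, bob_weight=2):
    """Bob is a 'utility monster': his opinion always counts bob_weight
    times as much, on every issue alike."""
    return alice[issue] + bob_weight * bob[issue]

print(aggregate_constant("prostate"))  # 10 + 2*100 = 210
print(aggregate_constant("breast"))    # 15 + 2*150 = 315
```

With a constant weight the breast cancer cure still comes out ahead, agreeing with both Alice and Bob; only issue-dependent weights can reverse a ranking that everyone shares.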