Will_Sawin comments on Circular Altruism - Less Wrong

Post author: Eliezer_Yudkowsky 22 January 2008 06:00PM


Comment author: Will_Sawin 17 January 2011 03:40:33AM 0 points

I do see what you're driving at. I, however, think that the right way to incorporate egalitarianism into our decision-making is through a risk-averse utility function.

> But it doesn't benefit the vast majority of them, and by my standards it doesn't benefit humanity as a whole. So each individual person is thinking "this may benefit me, but it's much more likely to harm me."

You are denying people the ability to calculate expected utility, which VNM says they must use in making decisions!

> Not enough; I want something book-length to read about this subject.

Ask someone else.

Comment author: bgaesop 17 January 2011 07:52:02PM 0 points

> You are denying people the ability to calculate expected utility, which VNM says they must use in making decisions!

Could you go more into what exactly risk-averse means? I am under the impression that it means they are unwilling to take certain bets, even though the bets increase their expected utility, if the odds of actually winning are low enough, which is more or less what I was trying to say there. Again, this is the reason I would not play even a fair lottery.

> Ask someone else.

Okay. I'll try to respond to certain posts on the subject and see what people recommend. Is there a place here to just ask for recommended reading on various subjects? It seems like it would probably be wasteful and ineffective to make a new post asking for that advice.

Comment author: Will_Sawin 17 January 2011 08:42:47PM 3 points

> Could you go more into what exactly risk-averse means? I am under the impression that it means they are unwilling to take certain bets, even though the bets increase their expected utility, if the odds of actually winning are low enough, which is more or less what I was trying to say there. Again, this is the reason I would not play even a fair lottery.

Risk-averse means that your utility function is not linear in wealth. A simple utility function that is often used is utility = log(wealth). So having $1,000 would be a utility of 3, $10,000 a utility of 4, $100,000 a utility of 5, and so on. In this case one would be indifferent between a gamble offering a 50% chance of $1,000 and a 50% chance of $100,000, and a sure $10,000.
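For concreteness, the arithmetic above can be checked with a short Python sketch (assuming base-10 logarithms, as the 3/4/5 figures imply):

```python
import math

def utility(wealth):
    # Log-base-10 utility, as in the example: concave, hence risk-averse.
    return math.log10(wealth)

# Utilities of the three wealth levels: roughly 3, 4, and 5.
for w in (1_000, 10_000, 100_000):
    print(w, utility(w))

# The 50/50 gamble between $1,000 and $100,000 has the same expected
# utility as a sure $10,000, so this agent is indifferent between them.
gamble = 0.5 * utility(1_000) + 0.5 * utility(100_000)
print(math.isclose(gamble, utility(10_000)))  # True
```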

This creates behavior which is quite risk-averse. If you have $100,000, a one-in-a-million chance of $10,000,000 would be worth about 50 cents. The expected profit is $10, but the expected utility gain is only about 0.000002. A lottery which is fair in money would charge $10, while one that is fair in utility would charge $0.50. This particular agent would play the second but not the first.
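The 50-cent figure can likewise be derived numerically; a sketch assuming the same log10 utility and a starting wealth of $100,000:

```python
import math

def utility(wealth):
    return math.log10(wealth)

wealth, prize, p = 100_000, 10_000_000, 1e-6

# Expected utility gain from the one-in-a-million ticket.
eu_gain = p * (utility(wealth + prize) - utility(wealth))

# Certainty equivalent: the sure payment x such that
# utility(wealth + x) - utility(wealth) = eu_gain.
cert_equiv = wealth * (10 ** eu_gain - 1)

print(eu_gain)     # about 0.000002
print(cert_equiv)  # about 0.46, i.e. roughly 50 cents
print(p * prize)   # expected profit in money terms: $10
```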

The Von Neumann-Morgenstern theorem says that, even if an agent does not maximize expected profit, it must maximize expected utility for some utility function, as long as it satisfies certain basic rationality constraints (completeness, transitivity, continuity, and independence).

> Okay. I'll try to respond to certain posts on the subject and see what people recommend. Is there a place here to just ask for recommended reading on various subjects? It seems like it would probably be wasteful and ineffective to make a new post asking for that advice.

Posting in that thread where people are providing textbook recommendations with a request for that specific recommendation might make sense. I know of nowhere else to check.

Comment author: bgaesop 17 January 2011 10:41:59PM 0 points

Thanks for the explanation of risk aversion.

> Posting in that thread where people are providing textbook recommendations with a request for that specific recommendation might make sense. I know of nowhere else to check.

I just checked the front page after posting that reply and did just that.

Comment author: Perplexed 17 January 2011 08:59:25PM 1 point

Here is an earlier comment where I said essentially the same thing that Will_Sawin just said on this thread. Maybe it will help to have the same thing said twice in different words.