SaidAchmiz comments on To what extent does improved rationality lead to effective altruism? - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Point 1:
If option 1 leads only to a goal being 50% fulfilled, and option 2 leads only to the same goal being 51% fulfilled, then there is a sub-goal that option 2 satisfies (i.e., 51% fulfillment) but option 1 doesn't, and not vice versa. Thus option 2 is better under any reasonable attitude. The payoff is the goal, by definition: the greater the payoff, the more goals are fulfilled.
But risk is integral to the calculation of utility. 'Risk avoidance' and 'value' are synonyms.
Point 2:
Thanks for the reference.
But, if we are really talking about a payoff as an increased amount of utility (and not some surrogate, e.g. money), then I find it hard to see how choosing an option that is less likely to provide the payoff can be better.
If it is really safer (i.e., better, in expectation) to choose option 1, despite its having a lower expected payoff than option 2, then is our distribution really over utility?
Perhaps you could outline Dawes' argument? I'm open to the possibility that I'm missing something.
Re: your response to point 1: again, the options in question are probability distributions over outcomes. The question is not one of your goals being 50% fulfilled or 51% fulfilled, but, e.g., a 51% probability of your goals being 100% fulfilled vs. a 95% probability of your goals being 50% fulfilled. (The numbers are not significant; they're only illustrative.)
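For concreteness, here is a minimal sketch of the expected-utility comparison between the two illustrative lotteries above. The function name and the assumption that "X% fulfilled" maps to X utility points are mine, not the commenters'; the probabilities and payoffs are the illustrative numbers from the comment.

```python
def expected_utility(prob_success, utility_if_success, utility_otherwise=0.0):
    """Expected utility of a simple binary lottery."""
    return prob_success * utility_if_success + (1 - prob_success) * utility_otherwise

# Lottery A: 51% probability of goals being 100% fulfilled.
ev_a = expected_utility(0.51, 100.0)

# Lottery B: 95% probability of goals being 50% fulfilled.
ev_b = expected_utility(0.95, 50.0)

print(ev_a, ev_b)  # A has the higher expectation, B the higher chance of some payoff
```

Note that with these particular numbers lottery A has the higher expected utility (51.0 vs. 47.5) even though lottery B pays off far more often, which is exactly the tension the thread is circling: whether "safer" can mean anything once the distribution is genuinely over utility.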
"Risk avoidance" and "value" are not synonyms. I don't know why you would say that. I suspect one or both of us is seriously misunderstanding the other.
Re: point #2: I don't have the time right now, but sometime over the next couple of days I should have some time and then I'll gladly outline Dawes' argument for you. (I'll post a sibling comment.)
If I'm talking about a goal actually being 50% fulfilled, then it is.
Really?
I consider risk to be the possibility of losing, or failing to gain (which amounts to the same thing), something of value. I don't know much about economics, but if somebody could help people avoid that, would people be willing to pay for such a service?
If I'm terrified of spiders, then that is something that must be reflected in my utility function, right? My payoff from being close to a spider is less than otherwise.
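The point about the spider can be sketched as a toy utility function in which fear of spiders is priced directly into the payoff. Everything here (the function name, the penalty size) is illustrative, not anything from the thread beyond the idea that proximity to a spider lowers the payoff of an otherwise identical outcome.

```python
def utility(base_payoff, near_spider, spider_penalty=30.0):
    """Utility of an outcome for an arachnophobe: proximity to a
    spider subtracts a fixed penalty from the base payoff."""
    return base_payoff - (spider_penalty if near_spider else 0.0)

# The same outcome is worth less when a spider is nearby.
print(utility(100.0, near_spider=False))  # 100.0
print(utility(100.0, near_spider=True))   # 70.0
```

On this picture the fear isn't an extra consideration layered on top of the utility calculation; it is already inside the utility function, which is the claim the comment is making.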
That would be very kind :) No need to hurry.