(1) You (and possibly others you refer to) seem to use the word 'consequentialism' to point to something more specific, e.g. classic utilitarianism, or some other variant.
I didn't quite have classical utilitarianism in mind. I had in mind principles like
- Not helping somebody is equivalent to hurting the person
- An action that doesn't help or hurt someone doesn't have moral value.
(2) Your described principle of indifference seems to me to be manifestly false.
I did mean after controlling for ability to have an impact.
Problems with your position:
1. "goals being fulfilled" is a qualitative criterion, or perhaps a binary one. The payoffs at stake in scenarios where we talk about risk aversion are quantitative and continuous.
Given two options, of which I prefer the one with lower risk but a lower expected value, my goals may be fulfilled to some degree in both cases. The question then is one of balancing my preferences regarding risks with my preferences regarding my values or goals.
2. The alternatives at stake are probabilistic scenarios, i.e. each alternative is some probability distribution over some set of outcomes. The expectation of a distribution is not the only feature that differentiates distributions from each other; the form of the distribution may also be relevant.
Taking risk aversion to be irrational means that you think the form of a probability distribution is irrelevant. This is not an obviously correct claim. In fact, in Rational Choice in an Uncertain World [1], Robyn Dawes argues that the form of a probability distribution over outcomes is not irrelevant, and that it's not inherently irrational to prefer some distributions over others with the same expectation. It stands to reason (although Dawes doesn't come out and say this outright, he heavily implies it) that it may also be rational to prefer one distribution to another with a higher expectation.
[1] pp. 159-161 in the 1988 edition, if anyone's curious enough to look this up. Extra bonus: This section of the book (chapter 8, "Subjective Expected Utility Theory", where Dawes explains VNM utility) doubles as an explanation of why my preferences do not adhere to the von Neumann-Morgenstern axioms.
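To make point 2 concrete, here is a minimal sketch (my own illustration, not an example from Dawes) of two lotteries with identical expectation but very different forms; the lottery values and names are hypothetical:

```python
# Two lotteries over monetary outcomes, each a list of (outcome, probability)
# pairs. Lottery A is a sure 50; lottery B is 0 or 100 with equal probability.
lottery_a = [(50, 1.0)]
lottery_b = [(0, 0.5), (100, 0.5)]

def expectation(lottery):
    # Expected value: sum of outcome * probability.
    return sum(x * p for x, p in lottery)

def variance(lottery):
    # Spread around the mean, one simple measure of a distribution's "form".
    mu = expectation(lottery)
    return sum(p * (x - mu) ** 2 for x, p in lottery)

print(expectation(lottery_a), variance(lottery_a))  # 50.0 0.0
print(expectation(lottery_b), variance(lottery_b))  # 50.0 2500.0
```

Expected value alone cannot distinguish A from B, so if the form of the distribution is declared irrelevant, an agent is forced to be indifferent between a sure 50 and a coin flip for 0 or 100.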
Point 1:
If option 1 leads only to a goal being 50% fulfilled, and option 2 leads only to the same goal being 51% fulfilled, then there is a sub-goal that option 2 satisfies (ie 51% fulfillment) but option 1 doesn't, but not vice versa. Thus option 2 is better under any reasonable attitude. The payoff is the goal, by definition. The greater the payoff, the more goals are fulfilled.
But risk is integral to the calculation of utility. 'Risk avoidance' and 'value' are synonyms.
Point 2:
Thanks for the reference.
But, if we are really talking about a payoff as an increased amount of utility (and not some surrogate, e.g. money), then I find it hard to see how choosing an option that is less likely to provide the payoff can be better.
If it is really safer (ie better, in expectation) to choose option 1, despite having a lower expected payoff than option 2, then is our distribution really over utility?
Perhaps you could outline Dawes' argument? I'm open to the possibility that I'm missing something.
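The money-versus-utility distinction above can be sketched numerically. Assuming a concave (risk-averse) utility function over money, say u(x) = sqrt(x) (my choice for illustration; nothing here is specific to Dawes' argument), the option with the lower expected monetary payoff can still have the higher expected utility:

```python
import math

def u(x):
    # A concave utility over money; concavity is what encodes risk aversion.
    return math.sqrt(x)

def expected_utility(lottery, util):
    # Lottery = list of (outcome, probability) pairs.
    return sum(p * util(x) for x, p in lottery)

# Option 1: a sure 36 (expected money = 36).
# Option 2: 0 or 100 with equal probability (expected money = 50).
option_1 = [(36, 1.0)]
option_2 = [(0, 0.5), (100, 0.5)]

# In money terms option 2 has the higher expectation, but in utility terms
# option 1 wins: u(36) = 6 versus 0.5*u(0) + 0.5*u(100) = 5.
print(expected_utility(option_1, u))  # 6.0
print(expected_utility(option_2, u))  # 5.0
```

This is the sense of the question in the comment above: over money (a surrogate), preferring the lower-expectation option is perfectly consistent with maximizing expected utility; but if the distribution is already over utility itself, that move is no longer available.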