Comment author: SaidAchmiz 20 March 2014 08:48:17PM *  0 points [-]

Problems with your position:

1. "goals being fulfilled" is a qualitative criterion, or perhaps a binary one. The payoffs at stake in scenarios where we talk about risk aversion are quantitative and continuous.

Given two options, of which I prefer the one with lower risk but a lower expected value, my goals may be fulfilled to some degree in both cases. The question then is one of balancing my preferences regarding risks against my preferences regarding my values or goals.

2. The alternatives at stake are probabilistic scenarios, i.e. each alternative is some probability distribution over some set of outcomes. The expectation of a distribution is not the only feature that differentiates distributions from each other; the form of the distribution may also be relevant.

Taking risk aversion to be irrational means that you think the form of a probability distribution is irrelevant. This is not an obviously correct claim. In fact, in Rational Choice in an Uncertain World [1], Robyn Dawes argues that the form of a probability distribution over outcomes is not irrelevant, and that it's not inherently irrational to prefer some distributions over others with the same expectation. It stands to reason (although Dawes doesn't come out and say this outright, he heavily implies it) that it may also be rational to prefer one distribution to another with a higher expectation.
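To make this concrete, here is a minimal sketch (toy numbers of my own, not an example from Dawes): under any concave utility function, an expected-utility maximizer prefers the narrower of two lotteries with identical expected monetary value, so a risk-averse preference needn't be irrational.

    import math

    def expected_utility(lottery, utility):
        # lottery is a list of (probability, payoff) pairs
        return sum(p * utility(x) for p, x in lottery)

    u = math.sqrt  # any concave utility: each extra unit of payoff matters less

    safe  = [(1.0, 100)]            # 100 for certain; expectation = 100
    risky = [(0.5, 0), (0.5, 200)]  # same expectation, wider distribution

    print(expected_utility(safe, u))   # 10.0
    print(expected_utility(risky, u))  # ~7.07, so the safe lottery wins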

[1] pp. 159-161 in the 1988 edition, if anyone's curious enough to look this up. Extra bonus: This section of the book (chapter 8, "Subjective Expected Utility Theory", where Dawes explains VNM utility) doubles as an explanation of why my preferences do not adhere to the von Neumann-Morgenstern axioms.

Comment author: tom_cr 20 March 2014 10:06:17PM 1 point [-]

Point 1:

my goals may be fulfilled to some degree

If option 1 leads only to a goal being 50% fulfilled, and option 2 leads only to the same goal being 51% fulfilled, then there is a sub-goal that option 2 satisfies (i.e. 51% fulfillment) but option 1 doesn't, and not vice versa. Thus option 2 is better under any reasonable attitude. The payoff is the goal, by definition: the greater the payoff, the more goals are fulfilled.

The question then is one of balancing my preferences regarding risks with my preferences regarding my values or goals.

But risk is integral to the calculation of utility. 'Risk avoidance' and 'value' are synonyms.

Point 2:

Thanks for the reference.

But, if we are really talking about a payoff as an increased amount of utility (and not some surrogate, e.g. money), then I find it hard to see how choosing an option that is less likely to provide the payoff can be better.

If it is really safer (i.e. better, in expectation) to choose option 1, despite having a lower expected payoff than option 2, then is our distribution really over utility?
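To put my point in code (a toy example of my own construction, not anything from the book): what looks like preferring a lower expectation over money is just the certainty equivalent falling below the monetary expectation; measured in utils, the agent always takes the higher expectation.

    import math

    u = math.sqrt               # hypothetical concave utility over money
    u_inv = lambda v: v ** 2    # invert the utility to get back to money

    risky = [(0.5, 0.0), (0.5, 200.0)]    # money; expectation = 100
    eu = sum(p * u(x) for p, x in risky)  # expected utility ~ 7.07
    print(u_inv(eu))                      # certainty equivalent = 50.0

A sure 50 (in money) already matches the gamble, so no option with lower expected utility is ever chosen; the "safety" lives entirely in the shape of the utility function.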

Perhaps you could outline Dawes' argument? I'm open to the possibility that I'm missing something.

Comment author: JonahSinick 20 March 2014 08:31:41PM 0 points [-]

(1) You (and possibly others you refer to) seem to use the word 'consequentialism' to point to something more specific, e.g. classic utilitarianism, or some other variant.

I didn't quite have classical utilitarianism in mind. I had in mind principles like

  • Not helping somebody is equivalent to hurting that person.
  • An action that doesn't help or hurt anyone doesn't have moral value.

(2) Your described principle of indifference seems to me to be manifestly false.

I did mean after controlling for ability to have an impact.

Comment author: tom_cr 20 March 2014 08:44:36PM 0 points [-]

I did mean after controlling for ability to have an impact

Strikes me as a bit like saying "once we forget about all the differences, everything is the same." Is there a valid purpose to this indifference principle?

Don't get me wrong, I can see that quasi-general principles of equality are worth establishing and defending, but here we are usually talking about something like equality in the eyes of the state, i.e. the equality of all people in the collective eyes of all people, which has a (different) sound basis.

Comment author: SaidAchmiz 20 March 2014 07:46:13PM 0 points [-]

It's not a bias, it's a preference. Insofar as we reserve the term bias for irrational "preferences" or tendencies or behaviors, risk aversion does not qualify.

Comment author: tom_cr 20 March 2014 08:28:15PM 0 points [-]

I would call it a bias because it is irrational.

It (as I described it - my understanding of the terminology might not be standard) involves choosing an option that is not the one most likely to lead to one's goals being fulfilled (this is the definition of 'payoff', right?).

Or, as I understand it, risk aversion may amount to consistently identifying one alternative as better when there is no rational difference between them. This is also an irrational bias.

Comment author: SaidAchmiz 20 March 2014 07:38:55PM 0 points [-]

Thank you for bringing this up. I've found myself having to point out this distinction (between consequentialism and utilitarianism) a number of times; it seems a commonplace confusion around here.

Comment author: tom_cr 20 March 2014 07:46:52PM 0 points [-]

I see Sniffnoy also raised the same point.

Comment author: V_V 20 March 2014 01:57:34PM 1 point [-]

risk aversion is not a bias.

Comment author: tom_cr 20 March 2014 07:37:20PM -1 points [-]

I understood risk aversion to be a tendency to prefer a relatively certain payoff to one that comes with a wider probability distribution but a higher expectation. In which case, I would call it a bias.

Comment author: tom_cr 20 March 2014 06:57:08PM 3 points [-]

A couple of points:

(1) You (and possibly others you refer to) seem to use the word 'consequentialism' to point to something more specific, e.g. classic utilitarianism or some other variant. For example, you say

[Yvain] argues that consequentialism follows from the intuitively obvious principles "Morality Lives In The World" and "Others Have Non Zero Value"

Actually, consequentialism follows independently of "others have non zero value." Hence, classic utilitarianism's axiomatic call to maximize the good for the greatest number is dubious. Obviously, this principle is a damn fine heuristic, but it follows from consequentialism (as long as the social contract can be inferred to be useful), and isn't a foundation for it. The paper-clipping robot is still a consequentialist.

(2) Your described principle of indifference seems to me to be manifestly false.

When we talk of the value of any thing, we are not talking of an intrinsic property of the thing, but of a property of the relationship between the thing and the entity holding the value. (People are also things.) If an entity holds any value in some object, the object must exhibit some causal effect on the entity, and the nature and magnitude of the value held must be consequences of that causality. Thus, we should expect value to scale with some generalized measure of proximity, or causal connectedness (and, correspondingly, to fall off with causal distance). It is not rational for me to care as much about somebody outside my observable universe as I do about a member of my family.

Comment author: tom_cr 17 March 2014 07:30:49PM 1 point [-]

Thanks for taking the time to try to debunk some of the sillier aspects of classic utilitarianism. :)

‘Actual value’ exists only theoretically, even after the fact.

You've come close to an important point here, though I believe its expression needs to be refined. My conclusion is that value has real existence. This conclusion is based primarily on my personal experience of possessing real preferences, and on my inference (to a high level of confidence) that other humans routinely possess them too. We might reasonably doubt the a priori correspondence between actual preference and the perception of preference, but even so, the assumption that I make decisions entails that I'm motivated by the pursuit of value.

Perhaps, then, you would agree that it is more correct to say that the relative value of an action can be judged only theoretically.

Thus we account for the fact that, had the action not been performed, the outcome would have been something different, whose value we can at best make an educated guess about; this makes a non-theory-laden assessment of relative value impossible. The further substitution of my 'can be judged' for your 'exists' seems to me necessary to avoid committing the mind projection fallacy.

The main question in this essay, the harder question, is if we can judge previous decisions based on their respective expected values, ...

If it is the decision that is being judged (as the question specifies), rather than its outcome, then clearly the answer is "yes": there cannot be anything better than expected value to base a decision on. In a determined bid to be voted captain obvious, I examined this in some detail in a blog post, Is rationality desirable?

... and how to possibly come up with the relevant expected values to do so.

This is called science! You are right, though, to be cautious. It strikes me that many assume they can draw conclusions about the relative rationality of two agents when, really, they ought to do more work for their conclusions to be sound. I once listened to a talk in which it was concluded that the test subjects in some psychological study were not 'Bayesian optimal.' I asked the speaker how he knew this. Had he measured their prior distributions? Their probability models? Their utility functions? These things are all part of the process of determining a course of action.
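To illustrate (a toy model I've made up for the purpose, not anything from the talk): the very same observed choice can be Bayes-optimal under one prior and suboptimal under another, so without measuring the subject's prior, probability model, and utility function, the charge doesn't get off the ground.

    def best_action(prior_h, payoffs):
        # payoffs[action] = (utility if hypothesis H is true, utility if false)
        def eu(action):
            u_true, u_false = payoffs[action]
            return prior_h * u_true + (1 - prior_h) * u_false
        return max(payoffs, key=eu)

    payoffs = {"accept": (10, -5), "reject": (0, 0)}

    print(best_action(0.7, payoffs))  # 'accept': optimal given a confident prior
    print(best_action(0.2, payoffs))  # 'reject': same Bayesian rule, different prior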

Comment author: nshepperd 13 March 2014 09:23:43PM -1 points [-]

If "X is good" was simply an empirical claim about whether an object conforms to a person's values, people would frequently say things like "if my values approved of X, then X would be good" and would not say things like "taking a murder pill doesn't affect the fact that murder is bad".

Alternative: what if "X is good" was a mathematical claim about the value of a thing according to whatever values the speaker actually holds?

Comment author: tom_cr 13 March 2014 10:03:10PM 0 points [-]

If "X is good" was simply an empirical claim about whether an object conforms to a person's values, people would frequently say things like "if my values approved of X, then X would be good"....

If that is your basis for a scientific standard, then I'm afraid I must withdraw from this discussion.

Ditto, if this is your idea of humor.

what if "X is good" was a mathematical claim about the value of a thing according to whatever values the speaker actually holds?

That's just silly. What if c = 299,792,458 m/s is a mathematical claim about the speed of light, according to what the speed of light actually is? May I suggest that you not invent unnecessary complexity to disguise the demise of a long-deceased argument.

No further comment from me.

Comment author: Strange7 13 March 2014 05:35:27AM 0 points [-]

My theory is that the dualistic theory of mind is an artifact of the lossy compression algorithm which, conveniently, prevents introspection from turning into infinite recursion. Lack of neurosurgery in the environment of ancestral adaptation made that an acceptable compromise.

Comment author: tom_cr 13 March 2014 05:11:29PM 0 points [-]

I quite like Bob Trivers' self-deception theory, though I have only a tangential acquaintance with it. We might anticipate that self-deception is harder if we are inclined to recognize the bit we call "me" as caused by some inner mechanism; hence, if Trivers is on to something, it may be profitable to suppress that recognition.

Wild speculation on my part, of course. There may simply be no good reason, from the point of view of historical genetic fitness, to be good at self-analysis, and you're quite possibly on to something: the computational overhead just doesn't pay off.

Comment author: nshepperd 13 March 2014 12:05:03PM *  1 point [-]

That's not at all what I meant. Obviously minds and brains are just blobs of matter.

You are conflating the claims "lukeprog thinks X is good" and "X is good". One is an empirical claim; the other is a value judgement. More to the point, when someone says "P is a contrarian value judgement, not a contrarian world model", they obviously intend "world model" to encompass empirical claims and not value judgements.

Comment author: tom_cr 13 March 2014 04:45:47PM 0 points [-]

I'm not conflating anything. Those are different statements, and I've never implied otherwise.

The statement "X is good," which is a value judgement, is also an empirical claim, as was my initial point. Simply restating your denial of that point does not constitute an argument.

"X is good" is a claim about the true state of X, and its relationship to the values of the person making the claim. Since you agree that values derive from physical matter, you must (if you wish to be coherent) also accept that "X is good" is a claim about physical matter, and therefore part of the world model of anybody who believes it.

If there is some particular point or question I can help with, don't hesitate to ask.
