Squark comments on On not diversifying charity - Less Wrong
I think it is the correct definition [of a rational agent] in the sense that you should behave like one.
Why should I behave as if my values satisfy the VNM axioms?
Rationality here is typically defined as either epistemic (make sure your mental models match reality well) or instrumental (make sure the steps you take actually lead to your goals).
Defining rationality as "you MUST have a single utility function which MUST follow VNM" doesn't strike me as a good idea.
Because the VNM axioms seem so intuitively obvious that violating them strongly feels like making an error. Of course, I cannot prove them without introducing another set of axioms, which can be questioned in turn, and so on. You always need to start with some assumptions.
Which VNM axiom would you reject?
So you care more about following the VNM axioms than about which utility function you are maximizing? That behavior is itself not VNM-rational.
If you don't follow the VNM axioms you are not maximizing any utility function.
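For reference, here is a rough paraphrase of the four axioms and the representation theorem being invoked here (precise formulations vary by source). For lotteries $L, M, N$ and probabilities $p \in (0,1)$:

$$
\begin{aligned}
&\textbf{Completeness:}\ \ L \succeq M \ \text{or}\ M \succeq L.\\
&\textbf{Transitivity:}\ \ L \succeq M \ \text{and}\ M \succeq N \implies L \succeq N.\\
&\textbf{Continuity:}\ \ L \succeq M \succeq N \implies \exists\, p:\ pL + (1-p)N \sim M.\\
&\textbf{Independence:}\ \ L \succeq M \implies pL + (1-p)N \succeq pM + (1-p)N.
\end{aligned}
$$

The VNM theorem says $\succeq$ satisfies all four if and only if there exists a utility function $u$ (unique up to positive affine transformation) such that $L \succeq M \iff \mathbb{E}_L[u] \geq \mathbb{E}_M[u]$. That is exactly the sense in which an agent violating any axiom is not maximizing any utility function.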
So why do you care about maximizing any utility function?
What would constitute a valid answer to that question, from your point of view?
I can't think of one. You're the one arguing for what appears to be an inconsistent position.
What is the inconsistency?
Saying one should maximize a utility function, but not caring which utility function is maximized.
I would reject the completeness axiom. I often face choices where I don't know which option I prefer, but where I would not agree that I am indifferent. And I'm okay with this fact.
I also reject the transitivity axiom -- intransitive preference is an observed fact for real humans in a wide variety of settings. You might say this is irrational, but my preferences are what they are.
Can you give an example of situations A, B, C for which your preferences are A > B, B > C, C > A? What would you do if you needed to choose between A, B, and C?
Sure. I'll go to the grocery store and face three kinds of tomato sauce: I'll look at A and B and pick B, then B and C and pick C, then C and A and pick A. And I'll stare at them indecisively until my preferences shift. It's sort of ridiculous -- it can take something like a minute to decide. This is NOT the same as feeling indifferent, in which case I would just pick one and go.
I have similar experiences when choosing between entertainment options, transport, etc. My impression is that this is an experience that many people have.
If you google "intransitive preference" you get a bunch of references -- this one cites the original experiments: http://www.stanford.edu/class/symbsys170/Preference.pdf
It seems to me that what you're describing are not preferences but spur-of-the-moment decisions. A preference should be thought of as in CEV: the thing you would prefer if you thought about it long enough, knew enough, were more the person you want to be, etc. The mere fact that you somehow decide between the sauces in the end suggests you're not describing a preference. Also, I doubt that you have terminal values related to tomato sauce. More likely, your terminal values involve something like "experiencing pleasure", and your problem here is epistemic rather than "moral": you're not sure which sauce would give you more pleasure.
You are using preference to mean something other than I thought you were.
I'm not convinced that the CEV definition of preference is useful. No actual human ever has infinite time or information; we are always making decisions while we are limited computationally and informationally. You can't just define away those limits. And I'm not at all convinced that our preferences would converge even given infinite time. That's an assumption, not a theorem.
When buying pasta sauce, I have multiple incommensurable values: money, health, and taste. And in general, when you have multiple criteria, there's no non-paradoxical way to do rankings (this is basically Arrow's theorem). I suspect that's the cause of my lack of a preference ordering.
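As an aside for readers, here is a minimal sketch of how several criteria can generate exactly this kind of cycle; this is the Condorcet paradox, the phenomenon behind Arrow's theorem. The sauces, criteria, and per-criterion rankings below are hypothetical, chosen only to exhibit the effect:

```python
# Hypothetical illustration: three tomato sauces A, B, C scored on
# three criteria. Each criterion induces a strict ranking, best first.
rankings = {
    "price":  ["A", "B", "C"],
    "health": ["B", "C", "A"],
    "taste":  ["C", "A", "B"],
}

def prefers(x, y):
    """x beats y if a majority of criteria rank x above y."""
    votes = sum(r.index(x) < r.index(y) for r in rankings.values())
    return votes > len(rankings) / 2

for x, y in [("A", "B"), ("B", "C"), ("C", "A")]:
    print(f"{x} beats {y}: {prefers(x, y)}")
# Prints True for all three pairs: aggregating the criteria by pairwise
# majority yields the cycle A > B > C > A, so no consistent overall
# ranking exists.
```

Pairwise majority is just one aggregation rule, but Arrow's theorem says that any rule for combining several rankings that satisfies some mild fairness conditions is vulnerable to this sort of paradox.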
Of course. But rationality means your decisions should be as close as possible to the decisions you would make if you had infinite time and information.
Money is not a terminal value for most people. I suspect you want money because of the things it can buy you, not as a value in itself. I think health is also instrumental. We value health because illness is unpleasant, might lead to death and generally interferes with taking actions to optimize our values. The unpleasant sensations of illness might well be commensurable with the pleasant sensations of taste. For example you would probably pass up a gourmet meal if eating it implies getting cancer.
However, you cannot know what decisions you would make if you had infinite time and information. You can make guesses based on your ideas of convergence, but that's about it.
"feels like" is a notoriously bad criterion :-)
Before we even get to the VNM axioms, I would like to point out that humans do not operate in a VNM setting, where a single consequentialist entity is faced with a sequence of lotteries and is able to express its preferences as one-dimensional rankings.
Hasn't there been a lot of discussion about the applicability of VNM to human ethical systems? It looks like well-trodden ground to me.
He doesn't express the entire ranking, but he does still have to choose the best option.
What would be a good criterion? You cannot pull yourself up by your bootstraps. You need to start from something.
How would you want to operate? You mentioned instrumental rationality. I don't know how to define instrumental rationality without the VNM setting (or something similar).
Mismatch with reality.
Well, the locally canonical definition is this:
I see nothing about VNM there.
I'm not following
This is a nice motto, but how do you make a mathematical model out of it?
Well, you originally said "violating them strongly feels like making an error." I said that "feels like" is a weak point. You asked for an alternative. I suggested mismatch with reality, as in "violating X leads to results which do not agree with what we know of reality".
We were talking about how would a human qualify as a "rational agent". I see no need to make mathematical models here.
This only makes sense in an epistemic context, not in an instrumental one. How can a way of making decisions "not agree with what we know of reality"? Note that I'm making a normative statement (what one should do), not a descriptive one ("people usually behave in such-and-such a way").
There is always a need to make mathematical models, since before you have a mathematical model your understanding is imprecise. For example, a mathematical model allows you to prove that, under certain assumptions, diversifying donations is irrational.
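To make that concrete, here is a minimal version of such a model (the linearity assumption is mine; it is the usual simplification for a donor too small to change any charity's marginal impact). Suppose a budget $B$ is split as $x_1, \dots, x_n \ge 0$ among $n$ charities, with charity $i$ producing constant marginal utility $u_i$ per dollar:

$$\max_{x_1,\dots,x_n \ge 0} \ \sum_{i=1}^n u_i x_i \quad \text{subject to} \quad \sum_{i=1}^n x_i = B.$$

A linear objective on a simplex is maximized at a vertex, so the optimum is $x_{i^*} = B$ for some $i^* \in \arg\max_i u_i$ and $x_j = 0$ for all other $j$: under these assumptions, any diversified split strictly underperforms unless several charities are exactly tied for the highest marginal utility.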
Ever heard of someone praying for a miracle?
Bollocks! I guess next you'll be telling me I can not properly understand anything which is not expressed in numbers... :-P