There are a number of experiments over the years showing that expected utility theory (EUT) fails to predict the actual observed behavior of people in decision situations; take, for example, the Allais paradox. Whether an average human being can be considered a rational agent has been debated for a long time, and critics of EUT point to the inconsistency between theory and observation and conclude that the theory is flawed. I will begin with the Allais paradox, but the aim of this discussion is actually to reach something much broader: asking whether distrust in one's own ability to reason should itself be included in the chain of reasoning.


From Wikipedia:

The Allais paradox arises when comparing participants' choices in two different experiments, each of which consists of a choice between two gambles, A and B. The payoffs for each gamble in each experiment are as follows:

Experiment 1
  Gamble 1A: $1 million with 100% chance
  Gamble 1B: $1 million with 89% chance; nothing with 1% chance; $5 million with 10% chance

Experiment 2
  Gamble 2A: $1 million with 11% chance; nothing with 89% chance
  Gamble 2B: $5 million with 10% chance; nothing with 90% chance

Several studies involving hypothetical and small monetary payoffs, and recently involving health outcomes, have supported the assertion that when presented with a choice between 1A and 1B, most people would choose 1A. Likewise, when presented with a choice between 2A and 2B, most people would choose 2B. Allais further asserted that it was reasonable to choose 1A alone or 2B alone.

However, that the same person (who chose 1A alone or 2B alone) would choose both 1A and 2B together is inconsistent with expected utility theory. According to expected utility theory, the person should choose either 1A and 2A or 1B and 2B.
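To spell out why: for any utility function u, EU(1A) - EU(1B) = 0.11·u($1M) - 0.01·u($0) - 0.10·u($5M), which is exactly EU(2A) - EU(2B), so an expected-utility maximizer must rank the two pairs the same way. A quick numerical check (my own sketch, not part of the Wikipedia material):

```python
# Quick numerical check (my own sketch): for *any* utility function u,
# EU(1A) - EU(1B) equals EU(2A) - EU(2B), so an expected-utility maximizer
# must rank the two pairs the same way.
import math

def eu(gamble, u):
    """Expected utility of a gamble given as [(probability, payoff), ...]."""
    return sum(p * u(x) for p, x in gamble)

M = 1_000_000
g1a = [(1.00, 1 * M)]
g1b = [(0.89, 1 * M), (0.01, 0), (0.10, 5 * M)]
g2a = [(0.11, 1 * M), (0.89, 0)]
g2b = [(0.10, 5 * M), (0.90, 0)]

for name, u in [("linear", lambda x: x),
                ("sqrt", math.sqrt),
                ("log", lambda x: math.log(1 + x))]:
    d1 = eu(g1a, u) - eu(g1b, u)
    d2 = eu(g2a, u) - eu(g2b, u)
    print(f"{name:>6}: EU(1A)-EU(1B) = {d1:,.3f}   EU(2A)-EU(2B) = {d2:,.3f}")
```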


I would say that there is a difference between E1 and E2 that EUT does not take into account: in E1, understanding 1B is a more complex computational task than understanding 1A, while in E2 understanding 2A and 2B is roughly equally demanding. There could therefore exist a bunch of semi-rational people out there who have difficulty understanding the details of 1B and therefore assign a certain level of uncertainty to their own "calculations". 1A involves no calculations; they are sure to receive $1,000,000! This uncertainty then makes it rational to choose the alternative they are more comfortable with. In E2, by contrast, the task is simpler, almost a no-brainer.

Now, if by "rational agent" we mean any information-processing entity capable of making choices (human, AI, etc.), and we consider more complex cases, it is reasonable to assume that this uncertainty grows with the complexity of the computational task. At some point it should then become rational to make the "irrational" set of choices, once the agent's uncertainty about its own ability to make calculated choices is weighed in.
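To make this concrete, here is a toy sketch of such an agent (my own illustration, not an established model; the size of the per-outcome penalty is an arbitrary, hypothetical parameter): the agent docks each gamble's expected value by an "internal uncertainty" cost that grows with the number of outcomes it has to reason about.

```python
# Toy model (my own illustration, not an established theory): the agent
# penalizes each gamble's expected value by an "internal uncertainty" cost
# that grows with the number of outcomes it has to reason about.
M = 1_000_000
gambles = {
    "1A": [(1.00, 1 * M)],
    "1B": [(0.89, 1 * M), (0.01, 0), (0.10, 5 * M)],
    "2A": [(0.11, 1 * M), (0.89, 0)],
    "2B": [(0.10, 5 * M), (0.90, 0)],
}

PENALTY_PER_EXTRA_OUTCOME = 250_000   # hypothetical "self-distrust" cost

def adjusted_value(gamble):
    expected = sum(p * x for p, x in gamble)
    complexity = len(gamble) - 1        # extra outcomes beyond the first
    return expected - PENALTY_PER_EXTRA_OUTCOME * complexity

for pair in (("1A", "1B"), ("2A", "2B")):
    scores = {name: adjusted_value(gambles[name]) for name in pair}
    print(pair, "->", max(scores, key=scores.get))   # 1A, then 2B
```

With this particular penalty the agent picks 1A in the first experiment and 2B in the second, i.e. exactly the "irrational" Allais pattern, while remaining perfectly consistent with its own complexity-penalized decision rule.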

Decision models usually take into account external sources of uncertainty and risk when dealing with rational choice: expected utility, risk aversion, etc. My question is: shouldn't a rational agent also take into account an internal (introspective) analysis of its own reasoning when making choices? (Humans may well do so, and that would explain the Allais paradox as an effect of rational behavior.)

Basically: could decision models that include this kind of introspective analysis do better at 1. explaining human behavior, and 2. building AIs?

19 comments:

I'm envisaging an alternate universe much like our own, in which there is one additional social convention. Here is a list of some famous books in that universe:

The Portrait of a Rationalist
On the Rationality of the Heavenly Spheres
Discourse on Rationalism
The History of the Rationality and Irrationality of the Roman Empire
Dr Rational and Mr Irrational
The Rationalist of Monte Cristo
The Rational Gatsby
Things Irrational Apart
How Rational is Your Parachute?

gjm:

This post really is about whether it is rational (not "good", "nice", "approved-of", etc.) to mistrust one's own rationality (not "correctness", "omniscience", "good taste", "general awesomeness", etc.).

I think you are taking your campaign against overuse of "rational" and its cognates too far.

I think you are taking your campaign against overuse of "rational" and its cognates too far.

This is not unlikely.

gjm:

I beg your pardon; I should have said "irrationally far" :-).

Yep. We shouldn't use "rational" when we merely mean "correct", "optimal", "winning", or "successful".

Rationality is a collection of techniques for improvement of beliefs and actions. It is not a destination.

'Rational', as in 'rational agent', is a pretty well-defined concept in rational choice theory, game theory, and decision theory. That is what I refer to when I use the word.

[anonymous]:

I may have an insight into this, and I thought of two Allais-related questions, in scenarios separate from the one in the OP, to express my idea. I'll give them a different set of numbers to make them easier to tell apart. Here is the first scenario:

3A: You are given 2 million, with a 100% chance.

3B: You are given 2 million with an 89% chance, 1 million with a 1% chance, and 6 million with a 10% chance.

4A: You are given 1 million with an 89% chance, and 2 million with an 11% chance.

4B: You are given 1 million with a 90% chance, and 6 million with a 10% chance.

As far as I can tell, this is identical to the original scenario, but with an extra 1 million dollars added to every position. Personally, I now feel much better about picking 3A as opposed to 3B (I'm worried, but not panicky/fretting), whereas in the comparable case (1A/1B) I'm MUCH more fearful.

I was curious about why I might think this, and then thought of my second scenario:

This bet repeats 20 times. You must pick the same strategy each time. A rabbit is 1 day of food, and a deer is 5 days of food. You begin with no food. If you go a full day without food, you will not have the energy to run from the wolves at night and they will kill you. Food does not go bad.

5A: 100% chance of Rabbit.

5B: 89% chance of Rabbit, 1% chance of No food, 10% chance of Deer.

6A: 11% chance of Rabbit. 89% chance of No food.

6B: 10% chance of Deer. 90% chance of No food.

Under which do you die less often, 5A or 5B? And under which, 6A or 6B?
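For what it's worth, here is a quick Monte Carlo sketch of this hunting scenario (my own code, with some filled-in assumptions: one hunt per day, one day's worth of food eaten per day, and death at the end of any day on which the larder is empty):

```python
# Quick Monte Carlo sketch of the hunting scenario (my own code, with some
# filled-in assumptions: one hunt per day, one day's worth of food eaten per
# day, and death at the end of any day on which the larder is empty).
import random

strategies = {
    "5A": [(1.00, 1)],
    "5B": [(0.89, 1), (0.01, 0), (0.10, 5)],
    "6A": [(0.11, 1), (0.89, 0)],
    "6B": [(0.10, 5), (0.90, 0)],
}

def hunt(strategy):
    """Days of food caught on a single day."""
    amounts = [amount for _, amount in strategy]
    weights = [p for p, _ in strategy]
    return random.choices(amounts, weights=weights)[0]

def survives(strategy, days=20):
    food = 0
    for _ in range(days):
        food += hunt(strategy)
        if food == 0:
            return False   # a full day with nothing to eat: the wolves win
        food -= 1          # eat one day's worth of food
    return True

N = 100_000
for name, strategy in strategies.items():
    alive = sum(survives(strategy) for _ in range(N))
    print(f"{name}: survival rate ~ {alive / N:.3f}")
```

Running it should make the asymmetry plain: 5A never kills you, 5B sometimes does, and both 6A and 6B are close to certain death, which may be why my gut treats the monetary version the way it does.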

This does not match how I observe my own brain to work. I see the guaranteed million vs. the 1% risk of nothing and think, "Oh no, what if I lose? I'd sure feel bad about my choice then." Of course, thinking more deeply, I realize that the 10% chance of an extra $4 million outweighs that downside, but it is not as obvious to my brain even though I value it more. If I were less careful, less intelligent, or less introspective, I feel that I would have 'gone with my instinct' and chosen 1A. (It is probably a good thing I am slightly tired right now, since this process happened more slowly than usual, so I think I got a better look at it happening.)

You see, the reason it is discussed as an "effect" or "paradox" is that even if your risk aversion ("oh no, what if I lose") is taken into account, it is strange to take 1A together with 2B. A risk-averse person might "correctly" choose 1A, but to be consistent that person then has to choose 2A as well, not 1A and 2B together.

My suggestion is that the slight increase in complexity in 1B adds to your total risk (external risk + internal risk) and therefore, within your given risk profile, makes 1A and 2B a consistent combination.

Well, when I look at experiment 1, I feel the risk. My brain simulates my reaction upon getting nothing and does not reduce its emotional weight in accordance with its unlikeliness. Looking at experiment 2, I see the possibility of getting nothing and think, "Well, I'd be screwed either way if I'm not lucky, so I'll just look at the other possibility." My System 1 thinking ignores the 89% vs. 90% distinction as pointless, and, while that is not consistent with its other decision, it is right to do so.

Yes, but I doubt it explains the Allais paradox. Not even 1% of it. The math is too easy, and the effect too tenacious. It's almost as if humans are sometimes irrational.

For what it's worth, prospect theory helps explain this paradox.

The first aspect is how probability functions in the mind. In experiment one, you're comparing 1% to 10%. A 1% objective probability gets stretched drastically to a 10% subjective probability, while a 10% probability gets stretched to 15%. Functionally, your brain feels like there's a 1.5x difference in probability when there's really a 10x difference. In experiment two, you're comparing 10% to 11%. The subjective probabilities are going to be almost equally stretched (say, 10% -> 15% and 11% -> 16%). Functionally, your brain feels like there's a 1.07x difference between them when there's really a 1.10x difference. In other terms, your brain discounts the difference between 1% and 10% disproportionately to the difference between 10% and 11%.

The second aspect has to do with how we compress expected values. In experiment two, we're comparing an expected gain of $1m with one of $5m. If the most utilons we can gain or lose is 100, then $5m would be around 90 utilons (best thing ever!) and $1m would be 70 utilons (omg, great thing!), a difference of 20 utilons. In experiment one, we're looking at gaining 90 utilons (win $5m, best thing ever!) or losing 90 utilons (lose the $1m, worst thing ever!), a 180-utilon difference. Basically, losses and gains are compressed separately and differently, and our expected utilities are computed relative to whatever our set point is.

Combined, your brain feels like experiment one is asking you to choose between 70 expected utilons (70 utilons @ 100%) and ~67 expected utilons (70 utilons @ 89%, plus 90 utilons @ 15%, minus 90 utilons @ 10%). Experiment two feels like choosing between 11.2 expected utilons (70 utilons @ 16%) and 13.5 expected utilons (90 utilons @ 15%). In other words, the math roughly works out to "really close, but I'd rather have the sure thing" in experiment one and "yeah, the better payoff overcomes the worse odds" in experiment two.
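For concreteness, here is that arithmetic in code (the subjective probability weights and utilon values are the ones assumed above, not fitted prospect-theory parameters):

```python
# Reproducing the arithmetic above (the subjective probability weights and
# utilon values are the ones assumed in this comment, not fitted
# prospect-theory parameters).
subjective_p = {0.01: 0.10, 0.10: 0.15, 0.11: 0.16, 0.89: 0.89, 1.00: 1.00}
u_gain_1m, u_gain_5m = 70, 90   # gaining $1m / $5m, in utilons
u_loss_1m = 90                  # losing the "sure" $1m feels like -90 utilons

# Experiment 1: the sure $1m is the reference point, so "nothing" is a loss.
v_1a = u_gain_1m * subjective_p[1.00]
v_1b = (u_gain_1m * subjective_p[0.89]
        + u_gain_5m * subjective_p[0.10]
        - u_loss_1m * subjective_p[0.01])
print(f"1A: {v_1a:.1f} utilons   1B: {v_1b:.1f} utilons")   # 70.0 vs ~66.8

# Experiment 2: "nothing" is the reference point, so it contributes 0.
v_2a = u_gain_1m * subjective_p[0.11]
v_2b = u_gain_5m * subjective_p[0.10]
print(f"2A: {v_2a:.1f} utilons   2B: {v_2b:.1f} utilons")   # 11.2 vs 13.5
```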

I'm not trying to say if this is how we should calculate subjective probabilities, but it seems to be how we actually do it. Personally, this decision feels so right that I'd err on the side of evolution for now. I would not be surprised if the truly rational answer is to trust our heuristics because the naively rational answer only works in hypothetical models.

Test method for the hypothesis: use two samples of people, those who have reason to trust their mathematical ability more (say, undergraduate math majors) and those who don't (the general undergrad population). If your hypothesis is correct, then the math majors should display less of this irrationality. That effect would be hard to distinguish from their simply being more rational in general, so this should be controlled for in some way using other tests of rationality that aren't as mathematical (such as, say, vulnerability to the conjunction fallacy in story form).

This seems worth testing. I hypothesize that if one does so and controls for any increase in general rationality, one won't get a difference between the math majors and the general undergraduates. Moreover, I suspect, though with much less certainty, that even without controlling for any general increase in rationality, the math majors will show about as much of an Allais effect as the other undergrads.

One way of testing: have two questions just like in the Allais experiment, but make five different versions in which choice 1B has increasing complexity but the same expected value. See if 1B-aversion correlates with increasing complexity.
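Here is a sketch of how such variants could be generated (my own code, a hypothetical design: it keeps 1B's expected monetary value fixed while splitting the 10% / $5 million branch into more and more outcomes spread symmetrically around $5 million):

```python
# Sketch of a possible design (my own code, hypothetical): keep 1B's expected
# monetary value fixed while splitting its 10% / $5 million branch into more
# and more sub-branches spread symmetrically around $5 million.
M = 1_000_000

def make_1b_variant(extra_branches):
    base = [(0.89, 1 * M), (0.01, 0)]
    k = extra_branches
    p = 0.10 / k
    payoffs = [5 * M + (i - (k - 1) / 2) * M for i in range(k)]
    return base + [(p, x) for x in payoffs]

for k in range(1, 6):
    variant = make_1b_variant(k)
    ev = sum(p * x for p, x in variant)
    print(f"version {k}: {len(variant)} outcomes, EV = ${ev:,.0f}")
```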

Ooh. I like that. That's a much more direct test than my suggestion.

I don't want to generalize from one example, but I'm sharing my personal experience in the hope that somebody else will follow me and we can collect at least some small amount of evidence. I have a Ph.D. in theoretical physics (meaning I'm at ease with simple math), but when I first encountered the Allais paradox my first gut answer was 1A & 2B, even though I could immediately tell that something was wrong with this choice. I mean: I knew that my answer was inconsistent, but I still had to make a conscious effort to persuade myself. To be honest, it's still like this every time I read about the paradox again: I know what the rational answer is, but the irrational one still makes me feel more comfortable. In conclusion, in my case there's definitely something beyond computational complexity at work.

SOLVED!

They are using a basic decision-making process that works great in real life (uncertain probabilities) but not so well in experiments like these (known, certain probabilities). This decision-making process works so well in real life that it has probably become second nature for most people, and they do it without thinking. People are not inconsistent!

Step 1 - Remove potential choices that could turn out badly (choices with a significantly lower worst-case scenario than the other choices).

Step 2 - Of the remaining choices, pick the one with the best best-case scenario.

1A: worst case = $1 million; best case = $1 million

1B: worst case = I get nothing; best case = $5 million

2A: worst case = I get nothing; best case = $1 million

2B: worst case = I get nothing; best case = $5 million
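Here is a small sketch of that two-step heuristic applied to the four original gambles (my own code; the cutoff for "significantly lower" is an arbitrary, hypothetical threshold):

```python
# Sketch of the two-step heuristic applied to the four original gambles
# (my own code; the cutoff for "significantly lower" is a hypothetical
# threshold, here $100,000).
M = 1_000_000
gambles = {
    "1A": [(1.00, 1 * M)],
    "1B": [(0.89, 1 * M), (0.01, 0), (0.10, 5 * M)],
    "2A": [(0.11, 1 * M), (0.89, 0)],
    "2B": [(0.10, 5 * M), (0.90, 0)],
}

def choose(options, significant=100_000):
    worst = {name: min(x for _, x in gambles[name]) for name in options}
    best = {name: max(x for _, x in gambles[name]) for name in options}
    # Step 1: drop options whose worst case is significantly below the best
    # available worst case.
    survivors = [name for name in options
                 if worst[name] >= max(worst.values()) - significant]
    # Step 2: of the survivors, pick the best best-case scenario.
    return max(survivors, key=best.get)

print(choose(["1A", "1B"]))   # -> 1A
print(choose(["2A", "2B"]))   # -> 2B
```

It picks 1A in the first pair and 2B in the second, matching the majority choices.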

This alone explains the decisions, but there is an added effect that enhances it: $1 DOES NOT equal 1 standard unit of happiness. The quality-of-life improvement from going from $1 million to $2 million is not the same as the improvement from going from no money to $1 million. I expect most people intuitively know this, but here's an article citing a study that supports it.

http://www.time.com/time/magazine/article/0,9171,2019628,00.html

Part of what makes this difficult is that money does not translate linearly to utilons. To put it a different way, the amount of work you'd be prepared to do to make a dollar depends very greatly on how many dollars you already have.

This is the reason most people would pick a 100% chance of a million dollars over a 1% chance of a billion.
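A minimal worked example (my own numbers, assuming log utility and a hypothetical existing wealth of $10,000):

```python
# Minimal worked example (my own numbers): log utility and a hypothetical
# existing wealth of $10,000.
import math

wealth = 10_000
u = math.log

sure_million   = u(wealth + 1_000_000)
billion_gamble = 0.99 * u(wealth) + 0.01 * u(wealth + 1_000_000_000)

print(f"sure $1M:       {sure_million:.2f}")    # ~13.83
print(f"1% shot at $1B: {billion_gamble:.2f}")  # ~9.33
```

The sure million wins by a huge margin even though the gamble's expected dollar value is ten times larger.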

The group who probably wouldn't do so are bankers - they'd pick the 1% chance of a billion, but only because they know the overall bet, as a bet, is worth $10 million as it stands, and they can sell the whole thing on as an investment to somebody else, and pocket the 10 million with 100% probability.

and pocket the 10 million with 100% probability.

Pocket some of the 10 million, anyway.