I don't want to generalize from one example, but I'm sharing my personal experience in the hope that somebody else will follow and we can collect at least some small amount of evidence. I have a Ph.D. in theoretical physics (meaning I'm at ease with simple math), but when I first encountered the Allais paradox my gut answer was 1A & 2B, even though I could immediately tell that something was wrong with this choice. I mean: I knew that my answer was inconsistent, but I still had to make a conscious effort to persuade myself. To be honest, it's still like this every time I read about the paradox: I know what the rational answer is, but the irrational one still makes me feel more comfortable. In conclusion, in my case there's definitely something beyond computational complexity at work.
A number of experiments over the years have shown that expected utility theory (EUT) fails to predict the actual observed behavior of people in decision situations; take, for example, the Allais paradox. Whether an average human being can be considered a rational agent has long been under debate, and critics of EUT point to the inconsistency between theory and observation and conclude that the theory is flawed. I will begin with the Allais paradox, but the aim of this discussion is actually to reach something much broader: asking whether distrust in one's own ability to reason should itself be included in a chain of reasoning.
From Wikipedia, the two experiments present these gambles:

Experiment 1 (E1), choose between:
1A: $1 million with certainty.
1B: an 89% chance of $1 million, a 10% chance of $5 million, and a 1% chance of nothing.

Experiment 2 (E2), choose between:
2A: an 11% chance of $1 million, otherwise nothing.
2B: a 10% chance of $5 million, otherwise nothing.
I would say that there is a difference between E1 and E2 that EUT does not take into account: in E1, understanding 1B is a more complex computational task than understanding 1A, while in E2 the two gambles 2A and 2B are roughly equal in complexity. There could therefore be a group of semi-rational people out there who have difficulty following the details of 1B and so assign some level of uncertainty to their own "calculations". 1A involves no calculation at all; they are sure to receive 1 000 000! That uncertainty then makes it rational to choose the alternative they are more comfortable with. In E2, by contrast, the comparison is simpler, almost a no-brainer.
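To make the inconsistency concrete, here is a minimal sketch in Python. The square-root utility is just a placeholder assumption; the algebra in the final comment holds for any utility function u:

```python
# Allais gambles as (probability, outcome) lists; outcomes in dollars.
E1 = {
    "1A": [(1.00, 1_000_000)],
    "1B": [(0.89, 1_000_000), (0.10, 5_000_000), (0.01, 0)],
}
E2 = {
    "2A": [(0.11, 1_000_000), (0.89, 0)],
    "2B": [(0.10, 5_000_000), (0.90, 0)],
}

def expected_utility(gamble, u):
    return sum(p * u(x) for p, x in gamble)

# Placeholder utility (an assumption; any increasing u gives the same algebra).
u = lambda x: x ** 0.5

for experiment in (E1, E2):
    for name, gamble in experiment.items():
        print(name, round(expected_utility(gamble, u), 1))

# Under EUT:  EU(1A) > EU(1B)  <=>  0.11*u(1M) > 0.10*u(5M) + 0.01*u(0)
#             EU(2A) > EU(2B)  <=>  0.11*u(1M) > 0.10*u(5M) + 0.01*u(0)
# (subtract the shared 0.89*u(1M), resp. 0.89*u(0), from both sides).
# The two conditions are identical, so an EUT agent must choose either
# 1A & 2A or 1B & 2B; the commonly observed 1A & 2B pair is inconsistent.
```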
Now, if by "rational agent" we mean any information-processing entity capable of making choices (human, AI, etc.), and if we consider more complex cases, it seems reasonable to assume that this uncertainty grows with the complexity of the computational task. At some point it should then become rational to make the "irrational" set of choices, once the agent's uncertainty about its own ability to make calculated choices is weighed in!
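Here is a toy sketch of what weighing in that introspective uncertainty might look like. The penalty form (score = expected utility minus lambda times complexity), the complexity proxy, and the value lam = 100 are all illustrative assumptions of mine, not an established model:

```python
# Toy complexity-penalized decision rule (assumed form:
# score = EU - lam * complexity; both the shape and lam are illustrative).

def complexity(gamble):
    # Crude proxy for computational effort: number of outcomes to reason about.
    return len(gamble)

def introspective_choice(options, u, lam):
    def score(gamble):
        eu = sum(p * u(x) for p, x in gamble)
        return eu - lam * complexity(gamble)
    return max(options, key=lambda name: score(options[name]))

u = lambda x: x ** 0.5  # same placeholder utility as above

E1 = {"1A": [(1.00, 1_000_000)],
      "1B": [(0.89, 1_000_000), (0.10, 5_000_000), (0.01, 0)]}
E2 = {"2A": [(0.11, 1_000_000), (0.89, 0)],
      "2B": [(0.10, 5_000_000), (0.90, 0)]}

for lam in (0, 100):
    print(f"lam={lam}:", introspective_choice(E1, u, lam),
          "and", introspective_choice(E2, u, lam))
# lam=0   -> 1B and 2B  (plain EUT: the consistent pair)
# lam=100 -> 1A and 2B  (the Allais pattern)
```

With lam = 0 this reduces to plain EUT and picks the consistent pair 1B & 2B (under the placeholder utility); with a large enough penalty the same agent picks exactly the "paradoxical" pair 1A & 2B. The reason is that only the E1 comparison is lopsided in complexity: 1A has nothing to compute, while 2A and 2B are penalized equally.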
Decision models usually account for external sources of uncertainty and risk when dealing with rational choice: expected utility, risk aversion, etc. My question is: shouldn't a rational agent also take into account an internal (introspective) analysis of its own reasoning when making choices? (Humans may well do so, and that would explain the Allais paradox as an effect of rational behavior.)
Basically: could decision models that include this kind of introspective analysis do better at 1. explaining human behavior, and 2. creating AIs?