This does not match how I observe my own brain to work. I see the guaranteed million versus the 1% risk of nothing and think, "Oh no, what if I lose? I'd sure feel bad about my choice then." Of course, thinking more deeply, I realize that the 10% chance of an extra $4 million outweighs that downside, but it is not as obvious to my brain even though I value it more. If I were less careful, less intelligent, or less introspective, I feel that I would have 'gone with my instinct' and chosen 1A. (It is probably a good thing I am slightly tired right now: this process happened more slowly than usual, so I think I got a better look at it happening.)
You see, the reason it is discussed as an "effect" or "paradox" is that even if your risk aversion ("oh no, what if I lose") is taken into account, it is strange to take 1A together with 2B. A risk-averse person might "correctly" choose 1A, but for that person to be consistent in their choices, they have to choose 2A as well. Not 1A and 2B together.
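That consistency claim can be checked numerically. The sketch below uses the standard Allais payoffs (an assumption on my part; they match the figures quoted above) and shows that for *any* assignment of utilities to the three outcomes, the expected-utility gap between 1A and 1B is algebraically identical to the gap between 2A and 2B, so an EUT agent who prefers 1A must also prefer 2A:

```python
import random

# Gambles as (probability, payoff-in-millions) lists.
# Standard Allais numbers -- an assumption here, not taken from this post.
G1A = [(1.00, 1)]
G1B = [(0.89, 1), (0.10, 5), (0.01, 0)]
G2A = [(0.11, 1), (0.89, 0)]
G2B = [(0.10, 5), (0.90, 0)]

def expected_utility(gamble, u):
    """Expected utility of a gamble under utility function u."""
    return sum(p * u(x) for p, x in gamble)

# For ANY utilities, both differences reduce to the same expression:
#   0.11*u(1) - 0.10*u(5) - 0.01*u(0)
# so preferring 1A over 1B entails preferring 2A over 2B.
for _ in range(1000):
    u0, u1, u5 = sorted(random.uniform(0.0, 100.0) for _ in range(3))
    u = {0: u0, 1: u1, 5: u5}.__getitem__
    d1 = expected_utility(G1A, u) - expected_utility(G1B, u)
    d2 = expected_utility(G2A, u) - expected_utility(G2B, u)
    assert abs(d1 - d2) < 1e-9
```

The random utility functions are only constrained to be increasing in the payoff; risk aversion (a concave u) changes nothing, because the two differences are the same polynomial in u's values.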
My suggestion is that the slight increase in complexity of 1B adds to your risk (external risk + internal risk) and therefore, within your given risk profile, makes 1A and 2B a consistent combination.
There are a number of experiments that, throughout the years, have shown that expected utility theory (EUT) fails to predict the actual observed behavior of people in decision situations. Take, for example, the Allais paradox. Whether an average human being can be considered a rational agent has been under debate for a long time, and critics of EUT point out the inconsistency between theory and observation and conclude that the theory is flawed. I will begin with the Allais paradox, but the aim of this discussion is actually to reach something much broader: asking whether distrust in one's own ability to reason should itself be included in a chain of reasoning.
From Wikipedia, the two experiments offer these choices:

Experiment 1 (E1): Gamble 1A pays $1 million with certainty; gamble 1B pays $1 million with 89% probability, $5 million with 10% probability, and nothing with 1% probability.

Experiment 2 (E2): Gamble 2A pays $1 million with 11% probability and nothing with 89% probability; gamble 2B pays $5 million with 10% probability and nothing with 90% probability.
I would say that there is a difference between E1 and E2 that EUT does not take into account: in E1, understanding 1B is a more complex computational task than understanding 1A, while in E2 the tasks of understanding 2A and 2B are more equal. There could therefore exist a bunch of semi-rational people out there who have difficulties understanding the details of 1B and therefore assign a certain level of uncertainty to their own "calculations". 1A involves no calculations; they are sure to receive $1,000,000! This uncertainty then makes it rational to choose the alternative they are more comfortable with. In E2, by contrast, the task is simpler, almost a no-brainer.
Now, if by "rational agent" we mean any information-processing entity capable of making choices (human, AI, etc.), and we consider more complex cases, it is reasonable to assume that this uncertainty grows with the complexity of the computational task. At some point, then, it should become rational to make the "irrational" set of choices, once the agent's uncertainty in its own ability to make calculated choices is weighed in!
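Here is a minimal toy version of that idea, entirely my own sketch: take the risk-neutral expected payoff of each gamble and subtract an internal-uncertainty penalty that grows with the gamble's complexity, crudely measured as the number of outcomes beyond the first. The penalty rate k is an assumed free parameter. For a suitable k, the "paradoxical" pair 1A + 2B comes out as the consistent choice:

```python
# Toy model: value = expected payoff - k * (complexity of the gamble).
# Complexity is crudely proxied by (number of outcomes - 1), so a sure
# thing like 1A carries no internal risk. Both k and the complexity
# measure are assumptions for illustration, not claims about real agents.

G1A = [(1.00, 1)]
G1B = [(0.89, 1), (0.10, 5), (0.01, 0)]
G2A = [(0.11, 1), (0.89, 0)]
G2B = [(0.10, 5), (0.90, 0)]

def value(gamble, k):
    """Risk-neutral expected payoff minus an internal-uncertainty penalty."""
    eu = sum(p * x for p, x in gamble)
    return eu - k * (len(gamble) - 1)

k = 0.2  # assumed penalty (in $ millions) per extra outcome to evaluate
choice_e1 = "1A" if value(G1A, k) > value(G1B, k) else "1B"
choice_e2 = "2A" if value(G2A, k) > value(G2B, k) else "2B"
print(choice_e1, choice_e2)  # -> 1A 2B
```

The mechanism is exactly the one suggested above: 1B carries two extra outcomes' worth of penalty against 1A's zero, which flips E1, while 2A and 2B have equal complexity, so E2 is still decided by expected value alone.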
Usually, decision models take into account external factors of uncertainty and risk when dealing with rational choice: expected utility, risk aversion, etc. My question is: shouldn't a rational agent also take into account an internal (introspective) analysis of its own reasoning when making choices? (Humans may well do so, and that would explain the Allais paradox as an effect of rational behavior.)
Basically: could decision models that include this kind of introspective analysis do better at 1. explaining human behavior, and 2. creating AIs?