Val comments on Rationality test: Vote for trump - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (60)
Please do.
The expression "irrational values" sounds like a category mistake to me.
You're right that "those values are irrational" is a category mistake, if we're being precise. But Houshalter has an important point...
Any time you violate the axioms of a coherent utility-maximization agent, e.g. falling for the Allais paradox, you can always use meta factors to argue why your revealed preferences actually were coherent.
Like, "Yes the money pump just took some of my money, but you haven't considered that the pump made a pleasing whirring sound which I enjoyed, which definitely outweighed the value of the money it pumped from me."
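The money pump being joked about here is easy to make concrete. A minimal sketch (the preference cycle and fee are illustrative assumptions, not anything from the thread): an agent with cyclic preferences A > B > C > A will pay a small fee for each "upgrade" and, after one full cycle, hold exactly what it started with, minus money.

```python
# Hypothetical cyclic preference relation: prefers[x] is the item the
# agent prefers over x, and it will pay `fee` to trade x for prefers[x].
prefers = {"C": "B", "B": "A", "A": "C"}  # A > B, B > C, C > A
fee = 1

holding, money = "C", 100
for _ in range(3):              # one full cycle of trades
    holding = prefers[holding]  # trade up to the preferred item
    money -= fee                # pay the pump for the privilege

print(holding, money)  # back to "C", but 3 units poorer
```

Unless, of course, the whirring sound was worth more than 3 units.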
While that may be a coherent response, we know that humans start out some distance from the ideal utility maximizer, and practicing the art of rationality adds value to their lives by moving them closer to that ideal than where they started.
A "rationality test" is a test that provides Bayesian evidence to distinguish people earlier vs. later on this path toward a more reflectively coherent utility function.
Having so grounded all the terms, I mostly agree with pwno and Houshalter.
And why should we be utility maximization agents?
Assume the following situation. You are very rich. In a dark alley you meet a poor old lady carrying a purse with some money in it, which is a lot from her perspective. Maybe it's all her savings, maybe she just got lucky once and received it as a gift or as alms. If you mug her, nobody will ever find out, and you get to keep the money. As a utility maximization agent, based on what you just wrote, you should do it.
Would you?
Only if your utility function gives negligible weight to her welfare. Having a utility function is not at all the same thing as being wholly selfish.
(Also, your scenario is unrealistic; you couldn't really be sure of not getting caught. If you're very rich, the probability of getting caught doesn't have to be very large to make this an expected loss even from a purely selfish point of view.)
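That last point can be checked with rough arithmetic. A back-of-the-envelope sketch (all numbers are illustrative assumptions, not from the comment): the purse is small, while getting caught would cost a rich person far more in fines, legal fees, and reputation, so even a tiny catch probability makes the mugging an expected loss.

```python
purse = 200                  # assumed value of the lady's savings
loss_if_caught = 1_000_000   # assumed cost to a very rich mugger
p_caught = 0.001             # even a 0.1% chance of being caught

# Expected value of the mugging from a purely selfish standpoint
expected_value = (1 - p_caught) * purse - p_caught * loss_if_caught
print(expected_value)  # negative: an expected loss even selfishly
```

Here the expected value is about -800, so the purely selfish utility maximizer walks on by.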
Have you read the LW sequences? Because, as gjm explained, your question reveals a simple and objective misunderstanding of what utility functions look like when they model realistic people's preferences.