Val comments on Rationality test: Vote for trump - Less Wrong

-18 Post author: pwno 16 June 2016 08:33AM




Comment author: Lumifer 18 June 2016 03:48:16AM 1 point [-]

"I would argue all those values are irrational."

Please do.

The expression "irrational values" sounds like a category mistake to me.

Comment author: Liron 22 June 2016 12:21:02PM 1 point [-]

You're right that "those values are irrational" is a category mistake, if we're being precise. But Houshalter has an important point...

Any time you violate the axioms of a coherent utility-maximization agent, e.g. falling for the Allais paradox, you can always use meta factors to argue why your revealed preferences actually were coherent.

Like, "Yes the money pump just took some of my money, but you haven't considered that the pump made a pleasing whirring sound which I enjoyed, which definitely outweighed the value of the money it pumped from me."
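The money pump being joked about here can be sketched concretely. This is an illustrative toy simulation, not anything from the original thread: an agent with cyclic (intransitive) preferences A > B > C > A will pay a small fee for each "upgrade", so a trader can cycle it back to where it started, minus money. All names and numbers are assumptions for illustration.

```python
def money_pump(cycles, fee=1):
    """Simulate a trader cycling an intransitive agent through A -> C -> B -> A.

    prefers[(x, y)] means the agent strictly prefers x over y.
    The cycle A > B > C > A lets the trader offer a strictly
    preferred item at every step, charging a fee each time.
    """
    prefers = {("A", "B"): True, ("B", "C"): True, ("C", "A"): True}
    money, holding = 100, "A"
    for offer in ["C", "B", "A"] * cycles:  # each offer beats what's held
        if prefers[(offer, holding)]:       # agent always takes the "upgrade"
            money -= fee                    # ...and pays the fee each time
            holding = offer
    return money

print(money_pump(10))  # agent ends holding "A" again, 30 units poorer
```

After ten full cycles the agent holds exactly what it started with, having paid for every step — which is the standard argument for why preferences that violate transitivity are exploitable.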

While that may be a coherent response, we know that humans are born somewhat farther from the ideal utility maximizer than they need to be, and practicing the art of rationality adds value to their lives by moving them somewhat closer to that ideal than where they started.

A "rationality test" is a test that provides Bayesian evidence to distinguish people earlier vs. later on this path toward a more reflectively coherent utility function.
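The phrase "provides Bayesian evidence" has a mechanical meaning that a one-line Bayes update shows. The priors and likelihoods below are purely illustrative assumptions, not numbers from the thread: suppose a reflectively coherent agent passes some test 90% of the time and an incoherent one passes 30% of the time.

```python
def posterior_coherent(prior, p_pass_if_coherent=0.9, p_pass_if_incoherent=0.3):
    """P(coherent | passed the test), via Bayes' rule."""
    p_pass = prior * p_pass_if_coherent + (1 - prior) * p_pass_if_incoherent
    return prior * p_pass_if_coherent / p_pass

print(posterior_coherent(0.5))  # a pass raises a 50% prior to 75%
```

So a pass doesn't prove coherence; it just shifts the odds, which is all "Bayesian evidence" claims.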

Having so grounded all the terms, I mostly agree with pwno and Houshalter.

Comment author: Val 23 June 2016 07:07:19PM 0 points [-]

And why should we be utility maximization agents?

Assume the following situation. You are very rich. You meet a poor old lady in a dark alley who carries a purse with some money in it, which is a lot from her perspective. Maybe it's all her savings; maybe she just got lucky once and received it as a gift or as alms. If you mug her, nobody will ever find out, and you get to keep that money. Would you do it? As a utility maximization agent, based on what you just wrote, you should.

Would you?

Comment author: gjm 23 June 2016 07:39:10PM -2 points [-]

"As a utility maximization agent, based on what you just wrote, you should."

Only if your utility function gives negligible weight to her welfare. Having a utility function is not at all the same thing as being wholly selfish.

(Also, your scenario is unrealistic; you couldn't really be sure of not getting caught. If you're very rich, the probability of getting caught doesn't have to be very large to make this an expected loss even from a purely selfish point of view.)
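gjm's parenthetical is just expected-value arithmetic, and it can be made explicit. The dollar figures and probability below are illustrative assumptions: say the purse holds $200 and getting caught costs a rich person $1,000,000 in legal fees and reputation.

```python
def expected_value(gain, loss_if_caught, p_caught):
    """Selfish expected value of the mugging: weighted gain minus weighted loss."""
    return (1 - p_caught) * gain - p_caught * loss_if_caught

# Any catch probability above gain / (gain + loss), here
# 200 / 1_000_200, i.e. about 0.02%, makes mugging a net loss.
print(expected_value(200, 1_000_000, 0.001))  # negative even at a 0.1% catch rate
```

So even a purely selfish utility maximizer declines here unless it is nearly certain of getting away with it, which is exactly why the "nobody will ever find out" stipulation is doing all the work in the scenario.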

Comment author: Liron 02 July 2016 09:00:18PM 0 points [-]

Have you read the LW sequences? Because, as gjm explained, your question reveals a simple and objective misunderstanding of what utility functions look like when they model realistic people's preferences.