Liron comments on Rationality test: Vote for Trump - Less Wrong

Post author: pwno 16 June 2016 08:33AM (-18 points)

Comment author: Houshalter 18 June 2016 03:14:33AM 0 points

I would argue all those values are irrational. Ticking a box that has no effect on the world, and that no one will ever know about, should not matter. And I don't think many people would claim that they value that, if they accepted that premise. I think people value voting because they don't accept that premise, and think there is some value in their vote.

Comment author: Lumifer 18 June 2016 03:48:16AM 1 point

I would argue all those values are irrational.

Please do.

The expression "irrational values" sounds like a category mistake to me.

Comment author: Liron 22 June 2016 12:21:02PM 1 point

You're right that "those values are irrational" is a category mistake, if we're being precise. But Houshalter has an important point...

Any time you violate the axioms that define a coherent utility-maximizing agent, e.g. by falling for the Allais paradox, you can always use meta factors to argue why your revealed preferences actually were coherent.
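
(For concreteness, a minimal sketch of the standard Allais setup; the payoff numbers below are the textbook ones, not anything from this thread. The point is that no assignment of utilities to the three outcomes reproduces the choice pattern most people report.)

```python
# Standard Allais gambles (textbook payoffs; the thread doesn't give numbers):
#   Gamble 1A: $1M for sure.      Gamble 1B: 89% $1M, 10% $5M, 1% $0.
#   Gamble 2A: 11% $1M, 89% $0.   Gamble 2B: 10% $5M, 90% $0.
# Most people report 1A > 1B and 2B > 2A. Check whether ANY utility
# assignment for ($0, $1M, $5M) is consistent with both choices at once.

def expected_utility(probs, utils):
    """Expected utility of a gamble; probs/utils are ordered ($0, $1M, $5M)."""
    return sum(p * u for p, u in zip(probs, utils))

# Normalize u($0) = 0 and u($1M) = 1, then sweep u($5M) over a fine grid.
consistent = []
for i in range(1, 100001):
    u5 = 1.0 + i / 1000.0                # any u($5M) greater than u($1M)
    utils = (0.0, 1.0, u5)
    prefers_1a = (expected_utility((0.00, 1.00, 0.00), utils)
                  > expected_utility((0.01, 0.89, 0.10), utils))
    prefers_2b = (expected_utility((0.90, 0.00, 0.10), utils)
                  > expected_utility((0.89, 0.11, 0.00), utils))
    if prefers_1a and prefers_2b:
        consistent.append(u5)

print(consistent)  # [] -- no assignment rationalizes both, hence the paradox
```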

Like, "Yes the money pump just took some of my money, but you haven't considered that the pump made a pleasing whirring sound which I enjoyed, which definitely outweighed the value of the money it pumped from me."

While that may be a coherent response, we know that humans start out some distance from the ideal utility maximizer, and practicing the art of rationality adds value to their lives by moving them somewhat closer to that ideal than where they started.

A "rationality test" is a test that provides Bayesian evidence to distinguish people earlier vs. later on this path toward a more reflectively coherent utility function.

Having so grounded all the terms, I mostly agree with pwno and Houshalter.

Comment author: Lumifer 22 June 2016 02:36:50PM 2 points

you can always use meta factors to argue why your revealed preferences actually were coherent.

Three observations. First, those aren't meta factors, those are just normal positive terms in the utility function that one formulation ignores and another one includes. Second, "you can always use" does not necessarily imply that the argument is wrong. Third, we are not arguing about coherency -- why would the claim that, say, I value the perception of myself as someone who votes for X at more than 10c be incoherent?
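
(Lumifer's first observation reduces to arithmetic once the extra term is written down; the numbers in this sketch are hypothetical:)

```python
# Two formulations of the same vote. The first counts only the material
# cost; the second adds the self-perception term Lumifer describes.
# Both numbers are hypothetical.
cost_of_voting = 0.10    # the 10c from the comment, in dollars
identity_value = 0.50    # hypothetical value of "I am someone who votes for X"

net_without_term = -cost_of_voting                 # -0.10: vote looks irrational
net_with_term = identity_value - cost_of_voting    # +0.40: vote is coherent
print(net_without_term, net_with_term)
```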

we know that humans start out some distance from the ideal utility maximizer, and practicing the art of rationality adds value to their lives by moving them somewhat closer to that ideal than where they started.

I disagree, both with the claim that getting closer to the ideal of a perfect utility maximizer necessarily adds value to people's lives, and with the interpretation of the art of rationality as the art of getting people to be more like that utility maximizer.

Besides, there is still the original point: even if you posit some entity as a perfect utility maximizer, what would its utility function include? Can you use the utility function to figure out which terms should go into the utility function? Colour me doubtful. In crude terms, how do you know what to maximize?

Comment author: Liron 22 June 2016 03:27:39PM 0 points

Well I guess I'll focus on what seems to be our most fundamental disagreement: my claim that getting value from studying rationality usually involves moving yourself closer to an ideal utility maximizer (not necessarily all the way there).

Reading the Allais Paradox post can make a reader notice their contradictory preferences, reflect on them, and subsequently be a little less contradictory, to their benefit. That seems like a good representative example of what studying rationality looks like and how it adds value.

Comment author: Lumifer 22 June 2016 05:14:30PM 2 points

to their benefit

You assert this as if it were an axiom. It doesn't look like one to me. Show me the benefit.

And I still don't understand why I would want to become an ideal utility maximizer.

Comment author: Liron 02 July 2016 08:54:00PM 0 points

For the sake of organization, I suggest discussing such things on the comment threads of Sequence posts.

Comment author: JEB_4_PREZ_2016 25 June 2016 03:50:56PM 0 points

And I still don't understand why I would want to become an ideal utility maximizer.

If you could flip a switch right now that made you an ideal utility maximizer, you wouldn't do it?

Comment author: Lumifer 26 June 2016 02:02:57AM 0 points

Who gets to define my utility function? I don't have one at the moment.

Comment author: entirelyuseless 25 June 2016 09:04:16PM 0 points

I would never flip a switch like that.

Comment author: Val 23 June 2016 07:07:19PM 0 points

And why should we be utility-maximizing agents?

Assume the following situation. You are very rich. You meet a poor old lady in a dark alley carrying a purse with some money in it, an amount that is a lot from her perspective. Maybe it's all her savings, maybe she just got lucky once and received it as a gift or as alms. If you mug her, nobody will ever find out and you get to keep that money. Would you do it? As a utility-maximizing agent, based on what you just wrote, you should.

Would you?

Comment author: gjm 23 June 2016 07:39:10PM -2 points

As a utility-maximizing agent, based on what you just wrote, you should.

Only if your utility function gives negligible weight to her welfare. Having a utility function is not at all the same thing as being wholly selfish.

(Also, your scenario is unrealistic; you couldn't really be sure of not getting caught. If you're very rich, the probability of getting caught doesn't have to be very large to make this an expected loss even from a purely selfish point of view.)
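
(A quick way to see gjm's parenthetical numerically; the purse value and the cost of being caught below are hypothetical:)

```python
# gjm's parenthetical, quantified. Hypothetical numbers: the purse holds
# $200; being caught costs a rich person $1,000,000 in legal fees and
# reputation. How small must the catch probability be to break even?
gain, loss = 200.0, 1_000_000.0

break_even_p = gain / (gain + loss)   # p where (1-p)*gain - p*loss = 0
print(break_even_p)                   # ~0.0002 -- caught 1 time in 5000

p = 0.001                             # even a 0.1% chance of being caught...
expected_value = (1 - p) * gain - p * loss
print(expected_value)                 # -800.2 -- ...makes mugging a net loss
```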

Comment author: Liron 02 July 2016 09:00:18PM 0 points

Have you read the LW Sequences? Because, as gjm explained, your question reveals a simple and objective misunderstanding of what utility functions look like when they model real people's preferences.