
paper-machine comments on How does real world expected utility maximization work? - Less Wrong Discussion

12 points · Post author: XiXiDu · 09 March 2012 11:20AM


Comments (48)


Comment author: [deleted] 09 March 2012 01:40:21PM · 7 points

How did Eliezer Yudkowsky compute that it would maximize his expected utility to visit New York?

Why would anyone do that? (In the sense your footnotes suggest this should be taken: quantifying over all possible worlds, trying to explicitly ground utility, etc.)
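Just to make concrete what "quantifying over all possible worlds" would even require, here is a toy sketch. Every number in it is invented for illustration; nobody in this thread claims real decisions look like this:

```python
def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs, one per possible world."""
    total_p = sum(p for p, _ in outcomes)
    assert abs(total_p - 1.0) < 1e-9, "probabilities over worlds must sum to 1"
    return sum(p * u for p, u in outcomes)

# Hypothetical worlds for "fly to New York": great trip, mediocre trip, disaster.
visit_ny = expected_utility([(0.6, 10.0), (0.3, 2.0), (0.1, -5.0)])

# Staying home is assumed to have one certain, modest outcome.
stay_home = expected_utility([(1.0, 3.0)])

choice = "visit" if visit_ny > stay_home else "stay"
print(visit_ny, stay_home, choice)  # 6.1 3.0 visit
```

The point of the thread is precisely that nobody has these probabilities or utilities to plug in; the sketch only shows what the explicit computation would look like if they did.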

We were human beings long before we started reading about rationality. I imagine EY looked at his schedule, his bank account, and the cost of a round-trip flight to New York, and said, "This might be cool, let's do it."

At the end of the day, everyone is still a human being. Everything adds up to normal, whether normality's perfectly optimized or not.

Comment author: Viliam_Bur 11 March 2012 01:26:47PM · 2 points

Yes, my model agrees with that. But then it would be fairer to speak about things as they really are. To say, "I thought about it for two minutes, it seemed cool and without obvious problems, so I decided to do it." You know, as an average mortal would.

Speaking in a manner that suggests decisions are made otherwise seems to me just as dishonest as when a theist says "I heard Jesus speaking to me," when in reality it was something like "I got this idea, it was without obvious problems, and it seemed like it could raise my status in my religious community."

Not pretending to be something that I am not -- isn't this a part of the rationalist creed?

If people are optimizing their expected utility functions, I want to believe they are optimizing their expected utility functions. If people are choosing on a heuristic and rationalizing later, I want to believe they are choosing on a heuristic and rationalizing later. Let me not become attached to status in a rationalist community.

Comment author: [deleted] 11 March 2012 05:56:11PM · 1 point

But then it would be fairer to speak about things as they really are.

I don't understand. Who is not speaking about things like they really are? EY doesn't even mention expected utility in his post. That was all a figment of someone's imagination.

If people are optimizing their expected utility functions, I want to believe they are optimizing their expected utility functions. If people are choosing on a heuristic and rationalizing later, I want to believe they are choosing on a heuristic and rationalizing later. Let me not become attached to status in a rationalist community.

No need to Gendlin. People aren't optimizing their utility functions, because they don't have conscious access to their utility functions.