Today's post, New Improved Lottery, was originally published on April 13, 2007. A summary (from the LW wiki):
If the opportunity to fantasize about winning were a rational justification for the lottery, we could design a "New Improved Lottery" where a single payment buys an epsilon chance of becoming a millionaire at any moment over the next five years. All your time could then be spent thinking about how you might become a millionaire at any moment.
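For concreteness, here is a minimal sketch of that mechanic in Python. The per-second win chance `EPSILON` is an illustrative assumption; the post only stipulates that the chance at any given moment is some tiny epsilon.

```python
# A minimal sketch of the "New Improved Lottery" mechanic described above.
# EPSILON is an illustrative assumption; the post only specifies that the
# per-moment win chance is some tiny epsilon.

SECONDS_PER_YEAR = 365 * 24 * 3600
HORIZON_SECONDS = 5 * SECONDS_PER_YEAR   # "over the next five years"
EPSILON = 1e-12                          # assumed per-second win chance

# Probability of winning at least once during the five-year window:
# the complement of losing every single second.
p_win = 1 - (1 - EPSILON) ** HORIZON_SECONDS
print(f"Chance of becoming a millionaire within 5 years: {p_win:.6%}")
```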
Discuss the post here (rather than in the comments of the original post).
This post is part of a series rerunning Eliezer Yudkowsky's old posts so those interested can (re-)read and discuss them. The previous post was Lotteries: A Waste of Hope, and you can use the sequence_reruns tag or RSS feed to follow the rest of the series.
Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it, posting the next day's sequence reruns post, summarizing forthcoming articles on the wiki, or creating exercises. Go here for more details, or to discuss the Sequence Reruns.
The point is that human values are complex. If you tell someone that donating money to the Society for Treating Rare Diseases in Cute Kittens is irrational because they would save more beings by helping to spawn a positive Singularity, you reduce the whole activity to "helping beings". But it is not just about helping beings; it is also about feeling good, and about helping certain beings in a certain way at a certain time.
The same can be said about lotteries. Playing the lottery is not something that can be optimized, because optimization is not what you care about: you just want to play the lottery and feel good about it.
What Eliezer Yudkowsky is promoting is, in essence, wireheading: replacing a complex activity with some optimized substitute that delivers the same payoff by different means. But this implicitly assumes that the activity is just a means to an end. Many human activities are not instrumental but terminal: we do what we do because we want to do it, not as an instrumental step that can be optimized away.
The whole idea of expected utility maximization is completely inhuman. Humans want to experience utility by giving in to their desires, not to optimize away their complex values in favor of increasing some abstract notion of expected reward.
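To make the target of this criticism concrete, expected-value reasoning about an ordinary lottery ticket looks like the sketch below. The ticket price, jackpot, and odds are illustrative assumptions, not figures from the post.

```python
# A minimal expected-value calculation for an ordinary lottery ticket,
# to make concrete the kind of computation being criticized above.
# All numbers are illustrative assumptions.

TICKET_PRICE = 1.00          # dollars
JACKPOT = 1_000_000.00       # dollars
P_WIN = 1 / 10_000_000       # assumed odds of hitting the jackpot

expected_value = P_WIN * JACKPOT - TICKET_PRICE
print(f"Expected value per ticket: ${expected_value:.2f}")  # about -$0.90
```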
Not really, IMO. You can model any agent in a utility maximization framework.
That's one of the results in this paper.
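Whatever the linked paper actually proves, the trivial version of this claim is easy to demonstrate: any deterministic choice behavior maximizes some utility function, because you can always define one that assigns 1 to whatever the agent in fact does. A minimal sketch, with the policy table as an illustrative assumption:

```python
# A minimal sketch of the trivial construction behind the reply above:
# any deterministic choice behavior can be rationalized by *some* utility
# function. The agent's policy is given here as a lookup table (an
# illustrative assumption); we build a utility function it maximizes.

policy = {                     # observed behavior: situation -> chosen action
    "rainy": "take umbrella",
    "sunny": "wear sunscreen",
}

def utility(situation: str, action: str) -> float:
    # Assign 1 to whatever the agent actually does, 0 to everything else.
    return 1.0 if policy.get(situation) == action else 0.0

# The observed policy now maximizes this utility in every situation.
actions = {"take umbrella", "wear sunscreen", "stay home"}
for s in policy:
    best = max(actions, key=lambda a: utility(s, a))
    assert best == policy[s]
```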