
Wei_Dai comments on Value Uncertainty and the Singleton Scenario - Less Wrong

Post author: Wei_Dai 24 January 2010 05:03AM




Comment author: Wei_Dai 25 January 2010 05:57:13PM 2 points

To answer 1, the reason a singleton government won't choose a random person and make him dictator is that it has a better option available. For example, if people's utilities are sublinear in negentropy, then by concavity it does better to give everyone an equal share of negentropy than to hand it all to one randomly chosen person. So why shouldn't I assume that in the singleton scenario my utility would be at least as large as if I had a random chance to be dictator?
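The concavity point can be checked with a quick numerical sketch. The square-root utility and the population size below are hypothetical choices, standing in for any utility sublinear in negentropy; by Jensen's inequality, an equal split then beats a lottery in which one random person gets everything.

```python
import math

# Hypothetical setup: N people share a fixed stock of negentropy R.
N = 1_000_000
R = 1.0

def u(x):
    # An illustrative utility sublinear (concave) in negentropy.
    return math.sqrt(x)

# Policy 1: pick a dictator uniformly at random; he gets all of R.
# Ex-ante expected utility per person: (1/N)*u(R) + ((N-1)/N)*u(0).
eu_lottery = u(R) / N

# Policy 2: give everyone an equal share R/N.
eu_equal = u(R / N)

# For concave u, Jensen's inequality guarantees eu_equal >= eu_lottery.
print(eu_lottery, eu_equal, eu_equal > eu_lottery)
```

With these numbers the lottery yields an expected utility of 10^-6 per person while the equal split yields 10^-3, illustrating why the singleton can do strictly better than randomizing the dictatorship.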

For 2, I don't think a typical egoist would have a constant discount factor for other people, and certainly not the kind of egoist described in Robin's The Rapacious Hardscrapple Frontier. He might be willing to value the entire rest of the universe combined at, say, a billion times his own life, but that's not nearly enough to make EU(B) > EU(A). An altruist would have a completely different kind of utility function, but I think it would still be the case that EU(A) > EU(B).

Comment author: rwallace 25 January 2010 07:24:15PM 2 points

Okay, so now the assumptions seem to be that a singleton government will give you exclusive personal title to a trillion galaxies, that we should otherwise behave as though the future universe were going to imitate a particular work of early 21st century dystopian science fiction, and that one discounts the value of other people compared to oneself by a factor of perhaps 10^23. I stand by my claim that the only effect of whipping out the calculator here is obfuscation; the real source of the bizarre conclusions is the bizarre set of assumptions.