
rwallace comments on Value Uncertainty and the Singleton Scenario - Less Wrong

Post author: Wei_Dai, 24 January 2010 05:03AM


Comment author: rwallace, 24 January 2010 06:19:46PM, 4 points

You say you have stacked assumptions in favor of project B, but then you make two quite bizarre assumptions:

  1. A singleton has a one in 5 billion chance of giving you control of the entire visible universe,

  2. In the non-singleton scenario, there is zero probability that any of the universe outside the solar system will have any utility.

To make these assumptions is essentially to prejudge the matter in favor of project A, for obvious reasons; but that says nothing about what the outcome would be given more plausible assumptions.

Comment author: Wei_Dai, 25 January 2010 03:02:10AM, 1 point

1. A singleton has a one in 5 billion chance of giving you control of the entire visible universe

I already addressed the rationale for this assumption. Why do you think this assumption favors project A?

2. In the non-singleton scenario, there is zero probability that any of the universe outside the solar system will have any utility.

It's hard to see how, in a non-singleton scenario, one might get more resources than a 1/5000 share of the solar system. Perhaps what other people do with their resources does matter somewhat to me, but in the expected utility computation I think it would count for very little next to the other large values at stake, so for simplicity I set it to 0. If you disagree, let me know what you think a more realistic assumption is, and I can redo the calculations.

Comment author: rwallace, 25 January 2010 02:30:37PM, 2 points
  1. As Tim Tyler pointed out, the fact that a singleton government physically could choose a random person and appoint him dictator of the universe is irrelevant; we know very well it isn't going to. This assumption favored project A because almost all your calculated utility derived from the hope of becoming dictator of the universe; when we accept this is not going to happen, all that fictional utility evaporates.

  2. To take the total utility of the rest of the universe as approximately zero for the purpose of this calculation would require that we value other people in general less than we value ourselves by a factor on the order of 10^32. Some discount factor is reasonable -- we do behave as though we value ourselves more highly than random other people. But if you agree that you wouldn't save your own life at the cost of letting a million other people die, then you agree the discount factor should not be as high as 10^6, let alone 10^32.

Comment author: Wei_Dai, 25 January 2010 05:57:13PM, 2 points

To answer 1: the reason a singleton government won't choose a random person and make him dictator is that it has an improvement on that policy. For example, if people's utilities are less than linear in negentropy, then it would do better to give everyone an equal share of negentropy. So why shouldn't I assume that in the singleton scenario my utility would be at least as large as if I had a random chance of becoming dictator?
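(A minimal sketch of the point about sublinear utilities, with purely illustrative numbers and an arbitrary stand-in utility function: for any concave utility over negentropy, a guaranteed equal share has higher expected utility than a 1-in-N lottery for the whole pot, by Jensen's inequality.)

```python
import math

# All numbers are hypothetical stand-ins, not figures from the post.
N = 5_000_000_000            # population sharing the outcome
TOTAL = 1.0                  # total negentropy, normalized to 1

u = math.sqrt                # stand-in concave utility; any concave u works

# Expected utility of the dictator lottery: 1-in-N chance of everything.
eu_lottery = (1 / N) * u(TOTAL) + (1 - 1 / N) * u(0.0)

# Utility of a guaranteed equal share of the total.
eu_equal_split = u(TOTAL / N)

print(eu_lottery < eu_equal_split)  # the equal split does better
```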

For 2, I don't think a typical egoist would have a constant discount factor for other people, and certainly not the kind described in Robin's The Rapacious Hardscrapple Frontier. He might be willing to value the entire rest of the universe combined at, say, a billion times his own life, but that's not nearly enough to make EU(B)>EU(A). An altruist would have a completely different kind of utility function, but I think it would still be the case that EU(A)>EU(B).
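(The magnitude argument above can be made concrete with stand-in numbers. None of the figures below are from the original post's calculation; the unit is "one's own non-singleton share," and "a billion times his own life" is approximated as a billion times that share, purely for illustration.)

```python
# All magnitudes are hypothetical stand-ins, not the post's figures.
POPULATION = 5e9             # people who might share the outcome
STARS = 1e22                 # rough order of magnitude for the number of
                             # solar systems in the visible universe

# Singleton scenario: an equal (or lottery-expected) share of everything.
eu_singleton = STARS / POPULATION                # ~2e12 solar systems

# Non-singleton scenario: a 1/5000 share of one solar system, plus the
# rest of the universe valued at a billion times that share.
own_share = 1 / 5000
eu_non_singleton = own_share + 1e9 * own_share   # ~2e5 solar systems

print(eu_singleton > eu_non_singleton)  # still larger, by roughly 1e7
```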

Comment author: rwallace, 25 January 2010 07:24:15PM, 2 points

Okay, so now the assumptions seem to be that a singleton government will give you exclusive personal title to a trillion galaxies, that we should otherwise behave as though the future universe were going to imitate a particular work of early 21st century dystopian science fiction, and that one discounts the value of other people compared to oneself by a factor of perhaps 10^23. I stand by my claim that the only effect of whipping out the calculator here is obfuscation; the real source of the bizarre conclusions is the bizarre set of assumptions.

Comment author: orthonormal, 24 January 2010 07:49:49PM, 0 points

I think you misunderstand Wei Dai's assumptions, but that may be his fault for adding too many irrelevant details to a simple problem.

Comment author: gjm, 25 January 2010 12:23:37AM, 0 points

Perhaps you would care to say more about how you think Wei_Dai's assumptions differ from what rwallace described?