Polymeron comments on Pascal's Mugging: Tiny Probabilities of Vast Utilities - Less Wrong

Post author: Eliezer_Yudkowsky 19 October 2007 11:37PM




You are viewing a single comment's thread.

Comment author: Will_Sawin 04 January 2011 05:56:44PM 2 points

If the injured parties are humans, I should be very skeptical of the assertion, because only a very small fraction, (1/3^^^3)*1/10^(something), of people have the power of life and death over 3^^^3 other people, whereas a much larger fraction, 1/10^(something smaller), hear the corresponding hoax.

That's the only answer that makes sense because it's the only answer that works on a scale of 3^^^3.

I think.

Comment author: Polymeron 04 January 2011 06:23:59PM 0 points

"If the injured parties are humans, I should be very skeptical of the assertion because a very small fraction, (1/3^^3)*1/10^(something)"

You don't know that. In fact, you only know it with a degree of uncertainty that, if I thought I had a lot on the line, I might not take lightly.

I'm trying to think up several avenues. One is that the higher the claimed utility, the lower the probability (somehow); another tries to use the implications that accepting the claim would have on other probabilities in order to cancel it out.

I'll post a new comment if I manage to come up with anything good.

Comment author: Will_Sawin 04 January 2011 10:10:53PM 2 points

I know because of anthropics. It is a logical impossibility for more than a 1/3^^^3 fraction of individuals to have that power. You and I cannot both have power over the same thing, so the total amount of power is bounded, hopefully by the same population count we use to calculate anthropics.
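To make the counting argument concrete (a minimal sketch of my own; k and N are illustrative symbols, not from the comment): if k agents each hold exclusive power of life and death over 3^^^3 distinct people out of a total population of N, the controlled sets must be disjoint, so

```latex
% Exclusive power means the k controlled sets, each of size 3^^^3,
% are pairwise disjoint within a population of N:
k \cdot 3\uparrow\uparrow\uparrow 3 \;\le\; N
\quad\Longrightarrow\quad
\frac{k}{N} \;\le\; \frac{1}{3\uparrow\uparrow\uparrow 3}
```

which bounds the fraction of possible power-holders exactly as claimed.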

Comment author: endoself 04 January 2011 10:41:14PM 2 points

Not in the least convenient possible world. What if someone told you that 3^^^3 copies of you were made before you must make your decision, and that their behaviour was highly correlated as it applies to UDT? What if the beings who would suffer had no consciousness, but would have moral worth as judged by you(r extrapolated self)? What if there was one being who was able to experience 3^^^3 times as much eudaimonia as everyone else? What if the self-indication assumption is right?

<troll> If you're going to engage in motivated cognition at least consider the least convenient possible world. </troll>

Comment author: Will_Sawin 05 January 2011 02:07:59AM 0 points
  1. Am I talking to Omega now, or just some random guy? I don't understand what is being discussed. Please elaborate.

  2. Then my expected utility would not be defined. There would be relatively simple worlds containing arbitrarily many copies of me. I honestly don't know what to do.

  3. Then my expected utility would not be defined. There would be relatively simple agents with arbitrarily sensitive utilities.

  4. Then I would certainly live in a world with infinitely many agents (or I would not live in any worlds with any probability), and the SIA would be meaningless.

My cognition is motivated by something else - by the desire to avoid infinities.

Comment author: endoself 05 January 2011 04:28:19AM 0 points

1) Sorry, I confused this with another problem; I meant some random guy.

2/3) Isn't how your decision process handles infinities rather important? Is there any theorem corresponding to the von Neumann–Morgenstern utility theorem, but without either version of axiom 3? I have been meaning to look into this, and depending on what I find I may do a top-level post about it. Have you heard of one? (A toy sketch of the lexicographic idea appears after this comment.)

edit: I found Fishburn, 1971, A Study of Lexicographic Expected Utility, Management Science. It's behind a paywall at http://www.jstor.org/pss/2629309. Can anyone find a non-paywall version or email it to me?

4) Yeah, my fourth one doesn't work. I really should have known better.

Sometimes, infinities must be made rigorous rather than eliminated. I feel that, in this case, it's worth a shot.
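A toy sketch of the lexicographic idea mentioned above (my own illustration, not Fishburn's actual construction; the tiers and numbers are invented): represent utilities as tuples whose earlier coordinates are "infinitely more important" tiers, so any chance of higher-tier value beats any sure amount of lower-tier value, without invoking real-valued infinities.

```python
def lex_expected(lottery):
    """lottery: list of (probability, utility_tuple) pairs.
    Returns the coordinatewise expectation; Python compares tuples
    lexicographically, which is exactly the ordering we want."""
    n = len(lottery[0][1])
    return tuple(sum(p * u[i] for p, u in lottery) for i in range(n))

sure_finite = [(1.0, (0.0, 100.0))]          # certain 100 lower-tier utils
tiny_chance_higher = [(0.001, (1.0, 0.0)),   # 0.1% chance of 1 higher-tier util
                      (0.999, (0.0, 0.0))]

# The higher tier dominates regardless of magnitudes:
# (0.0, 100.0) < (0.001, 0.0) under lexicographic comparison.
print(lex_expected(sure_finite) < lex_expected(tiny_chance_higher))  # True
```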

Comment author: Will_Sawin 05 January 2011 12:13:16PM 3 points

What worries me about infinities is, I suppose, the infinite Pascal's mugging: whenever there's a single infinite broken symmetry, nothing that happens in any finite world matters in determining the outcome.

This implies that all our thought should be devoted to infinite rather than finite worlds. And if all worlds are infinite, it looks like we need to do some form of SSA dealing with utility again.
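A minimal arithmetic sketch of that dominance (the probabilities and utilities here are invented):

```python
# One infinite term swamps every finite contribution to an expected-utility sum.
outcomes = [
    (0.999999999, 10.0 ** 100),  # a fantastically good finite world, near-certain
    (1e-30, float("inf")),       # a single infinite "broken symmetry", absurdly unlikely
]
expected = sum(p * u for p, u in outcomes)
print(expected)  # inf: the finite world contributes nothing to the comparison
```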

This is all very convenient and not very rigorous, I agree. I cannot see a better way, but I agree that we should look. I will use university library powers to read that article and send it to you, but not right now.

Comment author: endoself 05 January 2011 06:18:21PM 1 point

I don't see any way to avoid the infinite Pascal's mugging conclusion. I think that it is probably discouraged due to a history of association with bad arguments, and that the actual way to maximize the chance of infinite benefit will seem more acceptable.

"I will use university library powers to read that article and send it to you, but not right now."

Thank you.

Comment author: Will_Sawin 05 January 2011 08:15:07PM 1 point

Consider an infinite universe consisting of infinitely many copies of Smallworld, and another consisting of infinitely many copies of Bigworld.

It seems like the only reasonable way to compute expected utility is to apply SSA or pseudo-SSA within Bigworld and within Smallworld, thus computing the average utility in each infinite world, with an implied factor of omega.

Reasoning about infinite worlds that are made of several different, causally independent, finite components may produce an intuitively reasonable measure on finite worlds. But what about infinite worlds that are not composed in this manner? An infinite, causally connected chain? A series of larger and larger worlds, with no single average utility?

How can we consider them?

Comment author: endoself 05 January 2011 09:41:24PM 2 points

"It seems like the only reasonable way to compute expected utility is to apply SSA or pseudo-SSA within Bigworld and within Smallworld, thus computing the average utility in each infinite world, with an implied factor of omega."

Be careful about using an infinity that is not the limit of an infinite sequence; it might not be well defined.

"An infinite, causally connected chain?"

It depends on the specifics. This is a very underdefined structure.

"A series of larger and larger worlds, with no single average utility?"

A divergent expected utility would always be preferable to a convergent one. How to compare two divergent possible universes depends on the specifics of the divergence.
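A rough sketch of the "average utility as a limit" idea under discussion (my framing, with invented component utilities): the periodic world's running average converges to a per-copy value, while the series of larger and larger worlds has no limit at all.

```python
# Value an infinite world as the limit of average utility over ever-larger chunks.
def running_average(utility_of_kth, n):
    """Average utility of the first n components of an infinite world."""
    return sum(utility_of_kth(k) for k in range(1, n + 1)) / n

periodic = lambda k: 3.0 if k % 2 else 5.0  # copies of a two-person component: utils 3 and 5
growing = lambda k: float(k)                # the k-th world alone has utility k

for n in (10, 1000, 100000):
    print(n, running_average(periodic, n), running_average(growing, n))
# The periodic average settles near 4.0; the growing average is about n/2
# and never converges, so there is no single "average utility" to use.
```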

Comment author: [deleted] 06 January 2011 03:05:30PM 0 points

I've been thinking about Pascal's Mugging with regard to decision making and Friendly AI design, and wanted to sum up my current thoughts below.

1a: If you are Pascal's Mugged once, it greatly increases the chance of your being Pascal's Mugged again.

1b: If the first mugger threatens 3^^^3 people, the next mugger can simply threaten 3^^^^3 people, and the mugger after that 3^^^^^3 people (see the up-arrow sketch after this list).

1c: It seems like you would have to take that into account as well. You could simply say to the mugger, "I'm sorry, but I must keep my money, because the chance of there being a second mugger who threatens one Knuth up-arrow more people than you is sufficiently likely that I have to keep my money to protect those people against that threat, which is much more probable now that you have shown up."

1d: Even if the Pascal's Mugger threatens an infinite number of people with death, a second Pascal's Mugger might threaten an infinite number of people with a slow, painful death. I still have what appears to be a plausible reason not to give the money.

1e: Suppose the Pascal's Mugger skips ahead and simply threatens me with infinite disutility. A second Pascal's Mugger could then threaten me with an infinite disutility of a greater cardinality.

1f: Suppose the Pascal's Mugger threatens me with an infinite disutility of the greatest possible cardinality. A subsequent Pascal's Mugger could simply say, "You have made a mathematical error in processing the previous threats, and you will make a mathematical error in processing future threats. Any other past or future Pascal's Mugger's threat amounts to essentially 0 disutility compared to the disutility I am threatening you with, which is infinitely greater."
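For concreteness, here is a small sketch of Knuth's up-arrow notation (the standard recursive definition, not anything specific to this thread); each added arrow iterates the operation below it, which is what lets each successive mugger's number dwarf the last:

```python
# Knuth up-arrow: a (up-arrow^n) b, where n = 1 is plain exponentiation.
def up_arrow(a, n, b):
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return up_arrow(a, n - 1, up_arrow(a, n, b - 1))

print(up_arrow(3, 1, 3))  # 3^3  = 27
print(up_arrow(3, 2, 3))  # 3^^3 = 3^(3^3) = 7625597484987
# 3^^^3 = up_arrow(3, 3, 3) is a tower of 7,625,597,484,987 threes,
# far beyond any physical computation; hence the escalation in 1b.
```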

I think this gets into the Berry Paradox when considering threats. "A threat infinitely worse than the greatest possible threat statable in one minute" can be stated in less than one minute, so it seems as if it is possible for a Pascal's Mugger to make a threat which is infinite and incalculable.

I am still working through the implications of this but I wanted to put down what I had so far to make sure I could avoid errors.

Comment author: Will_Sawin 06 January 2011 04:01:12PM 0 points

Surely this will not work in the least convenient world?

Comment author: [deleted] 06 January 2011 10:17:12PM 0 points

That is a good point, but my reading of that topic is that it was the least convenient possible world. I honestly do not see how it is possible to word a greatest threat.

Once someone actually says out loud what any particular threat is, you always seem to be vulnerable to someone coming along and generating a threat which, taken in the context of threats you have already heard, seems greater than any previous threat.

I mean, I suppose that to make it more inconvenient for me, the Pascal's Mugger could add, "Oh, by the way: I'm going to KILL you afterward, regardless of your choice. You will find it impossible to consider another Pascal's Mugger coming along and asking you for your money."

"But what if the second Pascal's Mugger resurrects me? I mean sure, it seems oddly improbable that he would do that just to demand 5 dollars which I wouldn't have if I gave them to you if I was already dead, and frankly it seems odd to even consider resurrection at all, but it could happen with a non 0 chance!"

I mean, yes, the idea of someone resurrecting you to mug you does seem completely, totally ridiculous. But the entire idea behind Pascal's Mugging appears to be that we can't throw out those tiny, tiny, out-of-the-way chances if there is a large enough threat backing them up.

So let's think of another possible least convenient world: the Mugger is Omega or Nomega. He knows exactly what to say to convince me that, despite the fact that right now it seems logical that a greater threat could be made later, somehow this is the greatest threat I will ever face in my entire life, and the concept of a greater threat than this is literally inconceivable.

Except now the scenario requires me to believe that I can make a choice to give the Mugger $5, but NOT make a choice to retain my belief that a larger threat exists later.

That doesn't quite sound like a good formulation of an inconvenient world either. (I can make choices except when I can't?) I will keep trying to think of a more inconvenient world once I get home and will post it here if I think of one.