
Eliezer_Yudkowsky comments on Pascal's Mugging: Tiny Probabilities of Vast Utilities - Less Wrong

Post author: Eliezer_Yudkowsky, 19 October 2007 11:37PM (39 points)


Comment author: Eliezer_Yudkowsky 20 October 2007 02:02:01AM 8 points

Tom and Andrew, it seems very implausible that someone saying "I will kill 3^^^^3 people unless X" provides literally zero Bayesian evidence that they will kill 3^^^^3 people unless X.

> Though I guess it could plausibly be weak enough to take much of the force out of the problem.

Nothing could possibly be that weak.
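To see why, here is a minimal expected-utility sketch. Every number in it is an illustrative assumption, and 10^1000 stands in for 3^^^^3, which is far too large to write out:

```python
from fractions import Fraction

# Illustrative sketch: even an astronomically small credence in the
# mugger's threat can dominate an expected-utility calculation when
# the claimed stakes are vast enough. All numbers are assumptions.
p_threat = Fraction(1, 10**100)  # assumed (tiny) probability the threat is real
lives_at_stake = 10**1000        # stand-in for 3^^^^3, which cannot be written out
utility_per_life = 1             # one util per life, purely for illustration

expected_loss = p_threat * lives_at_stake * utility_per_life
cost_of_paying = 5               # the mugger's five dollars, in the same units

# The tiny probability still wins by hundreds of orders of magnitude.
print(expected_loss > cost_of_paying)  # True
```

However small you make `p_threat`, the mugger can always name a larger number, which is the heart of the problem.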

> Tom is right that the possibility that typing QWERTYUIOP will destroy the universe can be safely ignored; there is no evidence either way, so the probability equals the prior, and the Solomonoff prior that typing QWERTYUIOP will save the universe is, as far as we know, exactly the same.

Exactly the same? These are different scenarios. What happens if an AI actually calculates the prior probabilities, using a Solomonoff technique, without any a priori desire that things should exactly cancel out?
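One way to picture the worry: under a Solomonoff-style prior, each hypothesis is weighted roughly 2^-K for a shortest description of K bits, so the two scenarios cancel exactly only if their description lengths are exactly equal. A toy sketch, with the specific lengths invented purely for illustration:

```python
# Toy model of a Solomonoff-style prior: weight 2^-K for a hypothesis
# whose shortest description is K bits. The lengths below are invented.
def solomonoff_weight(description_length_bits: int) -> float:
    return 2.0 ** -description_length_bits

k_destroy = 1000  # assumed bits for "typing QWERTYUIOP destroys the universe"
k_save = 1002     # assumed bits for "typing QWERTYUIOP saves the universe"

# Unless the two lengths match exactly, the net expectation is nonzero.
net = solomonoff_weight(k_destroy) - solomonoff_weight(k_save)
print(net != 0.0)  # True: even a two-bit asymmetry leaves a residue
```

Nothing in the construction of the prior guarantees that the two program lengths come out identical, so an AI that actually ran the calculation would have no reason to expect an exact cancellation.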

Comment author: Strange7 19 August 2011 01:31:52AM 2 points

Why would an AI consider those two scenarios and no others? Seems more likely it would have to chew over every equivalently-complex hypothesis before coming to any actionable conclusion... at which point it stops being a worrisome, potentially world-destroying AI and becomes a brick, with a progress bar that won't visibly advance until after the last proton has decayed.

Comment author: Arandur 19 August 2011 10:16:56PM 1 point

... which doesn't solve the problem, but at least that AI won't be giving anyone... five dollars? Your point is valid, but it doesn't expand on anything.

Comment author: Strange7 19 August 2011 10:52:31PM -1 points

More generally, I mean that an AI capable of succumbing to this particular problem wouldn't be able to function in the real world well enough to cause damage.

Comment author: Arandur 20 August 2011 04:23:35AM -1 points

I'm not sure that was ever a question. :3

Comment author: ialdabaoth 11 October 2012 08:32:49AM 2 points

> Nothing could possibly be that weak.

Well, let's think about this mathematically.

In other articles, you have discussed the notion that, in an infinite universe, there exist, with probability 1, identical copies of me some 10^(10^29) meters away. You then (correctly, I think) demonstrate the absurdity of declaring that one of them in particular is 'really you' and another is a 'mere copy'.

When you say "3^^^^3 people", you are presenting me with two separate concepts:

  1. Individual entities which are each "people".

  2. A set {S} of these entities, of which there are 3^^^^3 members.

Now, at this point, I have to ask myself: "what is the probability that {S} exists?"

By which I mean, what is the probability that there are 3^^^^3 unique configurations, each of which qualifies as a self-aware, experiencing entity with moral weight, without reducing to an "effective simulation" of another entity already counted in {S}?

Versus: what is the probability that the total number of unique configurations, each qualifying as a self-aware, experiencing entity with moral weight, is less than 3^^^^3?
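For scale, here is a minimal sketch of Knuth's up-arrow notation (my code, not from the comment). Only the smallest cases can actually be evaluated; the point is how quickly these numbers leave anything computable behind:

```python
def up_arrow(a: int, arrows: int, b: int) -> int:
    """Knuth's up-arrow: a^b with one arrow; each extra arrow iterates the last."""
    if arrows == 1:
        return a ** b
    result = a
    for _ in range(b - 1):
        result = up_arrow(a, arrows - 1, result)
    return result

print(up_arrow(3, 1, 3))  # 3^3 = 27
print(up_arrow(3, 2, 3))  # 3^^3 = 3^(3^3) = 7625597484987
# 3^^^3 is already a power tower of 7,625,597,484,987 threes, and
# 3^^^^3 = 3^^^(3^^^3) is utterly beyond evaluation.
```

So the question of whether 3^^^^3 morally distinct configurations can even exist is a question about a number that dwarfs any count of physical states we know how to motivate.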

Because if we're going to juggle Bayesian probabilities here, at some point that has to get stuck in the pipe and smoked, too.