
Unknown comments on Something to Protect - Less Wrong

Post author: Eliezer_Yudkowsky 30 January 2008 05:52PM



Comment author: Unknown 31 January 2008 11:31:09AM 2 points

1. Save 400 lives, with certainty.
2. Save 500 lives, 90% probability; save no lives, 10% probability.

i.e.

1. Save 4 lives, with certainty.
2. Save 5 billion lives, 0.00000009% probability; save no lives, 99.99999991% probability.
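The expected-value arithmetic behind both dilemmas can be checked with a short script (a sketch; the helper name is my own):

```python
# Expected value = sum of probability * payoff over all outcomes.

def expected_value(outcomes):
    """outcomes: iterable of (probability, lives_saved) pairs."""
    return sum(p * lives for p, lives in outcomes)

# First dilemma: 400 certain vs. 500 at 90%.
ev_1a = expected_value([(1.0, 400)])            # 400.0
ev_1b = expected_value([(0.9, 500), (0.1, 0)])  # 450.0

# Second dilemma: 4 certain vs. 5 billion at 0.00000009% (= 9e-10).
ev_2a = expected_value([(1.0, 4)])                # 4.0
ev_2b = expected_value([(9e-10, 5_000_000_000)])  # ~4.5

print(ev_1a, ev_1b, ev_2a, ev_2b)
```

In both cases option 2 has the higher expected value (450 vs. 400, and roughly 4.5 vs. 4), which is exactly what makes the second case an uncomfortable test of the principle.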

Any takers for #2? I seem to remember Ben Jones saying he would choose #1 in a case similar to the second case.

Formerly, I think I would have chosen #2 in the first case and #1 in the second. But Eliezer has converted me. Now I choose #2 in both cases. But would he do that himself? Consider:

"Perhaps I am one of the 'sentimentally irrational,' but I would pick the 400 certain lives saved if it were a one-time choice, and the 500 @ 90% if it were an iterated choice I had to make over, and over again. In the long run, probabilities would take hold, and many more people would be saved. But for a single instance of an event never to be repeated? I'd save the 400 for certain." (Anon, above)

"If the probabilities of various scenarios considered did not exactly cancel out, the AI's action in the case of Pascal's Mugging would be overwhelmingly dominated by whatever tiny differentials existed in the various tiny probabilities under which 3^^^^3 units of expected utility were actually at stake.

You or I would probably wave off the whole matter with a laugh, planning according to the dominant mainline probability: Pascal's Mugger is just a philosopher out for a fast buck.

But a silicon chip does not look over the code fed to it, assess it for reasonableness, and correct it if not. An AI is not given its code like a human servant given instructions. An AI is its code. What if a philosopher tries Pascal's Mugging on the AI for a joke, and the tiny probabilities of 3^^^^3 lives being at stake, override everything else in the AI's calculations? What is the mere Earth at stake, compared to a tiny probability of 3^^^^3 lives?

How do I know to be worried by this line of reasoning? How do I know to rationalize reasons a Bayesian shouldn't work that way?" (Eliezer Yudkowsky, Pascal's Mugging)

Who sees the similarity? Eliezer no doubt thinks that Anon is biased toward certainty, but so is he: he simply has less of the bias.

So I hereby retract my argument against voting, Pascal's Mugging, and Pascal's Wager. In the particular Mugging we discussed, there may have been anthropic reasons to make it proportionally improbable. But without such reasons, it should be accepted.
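The dominance worry in the quoted passage can be illustrated numerically (all the specific numbers below are stand-ins of my own choosing; 3^^^^3 itself is far too large to represent directly). Unless the probability assigned to the mugger's claim shrinks at least as fast as the claimed stakes grow, the huge-stakes term takes over the expected-utility calculation:

```python
# Illustrative stand-in numbers for the Pascal's Mugging dominance argument.

mugger_stakes = 10**100   # stand-in for an astronomically large payoff
p_mugger = 10**-50        # a "tiny" probability that does NOT shrink with the stakes

mundane_stakes = 10**10   # everything else at stake in the mainline scenario
p_mundane = 0.999999      # the dominant mainline probability

ev_mugger = p_mugger * mugger_stakes     # ~1e50
ev_mundane = p_mundane * mundane_stakes  # ~1e10

# The tiny-probability, huge-stakes term dwarfs the mainline term.
print(ev_mugger > ev_mundane)
```

This is why the anthropic move matters: only if the probability is "proportionally improbable" (falling as fast as the stakes rise) does the product stay bounded.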

Comment author: Polymeron 04 May 2011 10:45:55AM 1 point

It's not a matter of bias toward certainty; accepting Pascal's Mugger's terms can be conclusively demonstrated to be a losing strategy. Remember, the purpose is to win. That would imply that "rationality" that complies with the Mugger is not rational after all, which means rethinking the whole thing.

Having said that, I haven't been able to formulate a response to Pascal's Mugging myself, so I might be wrong...

...Except that in the process of writing this just now, I think I might have! I need to think this through a little further.