Dmytry comments on Pascal's Mugging: Tiny Probabilities of Vast Utilities - Less Wrong

39 Post author: Eliezer_Yudkowsky 19 October 2007 11:37PM


Comment author: Dmytry 29 December 2011 09:27:38PM  1 point

Looks like strategic thinking to me. If you organize yourself to be prone to Pascal's mugging, you will get Pascal-mugged; thus it is irrational to organize yourself to be Pascal-muggable.

edit: It is as rational to introduce certain bounds on the application of one's own reasoning as it is to try to build reliable, non-crashing software, or to impose simple rule-of-thumb limits on the output of the software that controls the positioning of control rods in a nuclear reactor.

If you properly assign a tiny probability to a mistake in your reasoning, a mistake that may lead you to consider a number effectively generated by a random string (a lot of such numbers are extremely huge), and apply some meta-cognition to the appearance of such numbers, you'll find that extremely huge numbers are disproportionately represented among the products of reasoning errors.

With regard to the wager, here is my answer: if you see someone bend over backwards to make a nickel, it is probably not Warren Buffett you're seeing. Indeed, the probability that a person bending over backwards to make a nickel has $N falls off sharply as N increases. Here you see a being that is mugging you, and he allegedly has the power to simulate 3^^^^3 beings that he can mug, have sexual relations with, torture, whatever. The larger the claim, the less probable it is that the situation is honest.
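The point above can be sketched numerically (my own toy illustration, not from the comment): if the prior probability that a claimant actually commands N units of value falls off faster than N grows, the expected value of ever-larger claims shrinks instead of exploding. The 1/N^2 fall-off here is an assumption chosen for illustration.

```python
def expected_value(claimed_n: float) -> float:
    """Expected payoff of trusting a claim of size N, under an assumed prior."""
    prior = 1.0 / claimed_n**2   # assumed: credibility falls off as 1/N^2
    return prior * claimed_n     # payoff N if the claim were actually true

# Larger claims yield smaller, not larger, expected value under this prior.
for n in [10, 1_000, 10**9]:
    print(n, expected_value(n))
```

Whether the true prior really falls off this fast is exactly the part that is hard to formalize; the sketch only shows that *if* it does, the mugger's arithmetic stops working.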

It is, however, exceedingly difficult to formalize such an answer or to arrive at it in a formal fashion. And for me, there could exist other wagers that are beyond my capability to reason about correctly.

For this reason, as a matter of policy, I assume that each inference step carries some probability of error, an error that can result in the consideration of an extremely huge number, and I keep an upper cutoff on the numbers I'd use in my considerations as an optimization strategy; if a huge number of this sort appears, more verification steps are needed. In particular, this has a very high impact on my moral reasoning. Situations where you kill fewer people to save more people are extremely uncommon and difficult to set up, yet the appearance of such a situation can easily result from faulty reasoning.
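The policy described above can be sketched as a small decision rule (a minimal sketch in my own framing, not Dmytry's exact procedure; the cutoff and per-step error rate are assumed numbers): treat each inference step as having a small chance of error, and flag any conclusion whose magnitude exceeds a bound for further verification rather than acting on it directly.

```python
UTILITY_CUTOFF = 1e6     # assumed bound; beyond it, distrust the derivation
ERROR_PER_STEP = 1e-3    # assumed per-step chance of a reasoning error

def assess(utility: float, inference_steps: int) -> str:
    """Return 'act' or 'verify' for a conclusion with the given stakes."""
    p_sound = (1 - ERROR_PER_STEP) ** inference_steps
    if abs(utility) > UTILITY_CUTOFF:
        return "verify"  # huge number: more likely an artifact of error
    if p_sound < 0.99:
        return "verify"  # long derivation: recheck before acting
    return "act"

print(assess(42.0, 3))    # modest claim, short derivation -> act
print(assess(2187.0, 500))  # long derivation -> verify
print(assess(1e30, 3))    # Pascal's-mugging-sized claim -> verify
```

The design choice mirrors the comment: the cutoff is not a claim that huge utilities are impossible, only that a huge number reaching the output is more often a symptom of a reasoning error than of a genuine opportunity.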