khafra comments on On accepting an argument if you have limited computational power. - Less Wrong
When confronted with highly speculative claims so beloved by philosophers, string theorists and certain AI apologists, my battle cry is "testable predictions!". If one argues in favor of a model that predicts a tiny probability of a really big harm, they had better provide a testable justification of that model. In the case of Pascal's mugging, I have suggested a simple way to test whether the model should be taken seriously. Such a test would have to be constructed specifically for each individual model, of course. If all you say is "I can't prove anything, but if I'm right, it'll be really bad", I yawn and move on.
This is the normal response, even here at LW--I think there's a popular misperception that LW doctrine is to give the Pascal's Mugger money. The point of the exercise is to examine the thought processes behind that intuitive, obviously correct "no," when it appears, on the surface, to be the lower expected utility option. After all, we don't want to build an AI that can be victimized by Pascalian muggers.
One popular option is the one you picked: Simply ignore probabilities below a certain threshold, whatever the payoff. Another is to discount by the algorithmic complexity, or by the "measure," of the hostages. Yet another is to observe that, if 3^^^^3 people exist, a random person's (your) chances of being able to affect all the rest in a life-and-death way have to be scaled by 1/3^^^^3. Yet another is that, in a world where things like this happen, a dollar has near-infinite utility. Komponisto suggested that the Kolmogorov complexity of 3^^^^3 deaths, or units of disutility, is much higher than that of the number 3^^^^3; so any such problem is inherently broken.
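The leverage-penalty option above (scaling by 1/3^^^^3) can be sketched numerically. This is a toy illustration with assumed stand-in numbers, not anyone's actual decision procedure: a genuine 3^^^^3 is far too large for floating point, so a mere 10^100 stands in for it, and the prior on the mugger's honesty is invented for the example.

```python
# Toy sketch: why a naive expected-utility maximizer pays the mugger,
# and how a "leverage penalty" of 1/N cancels the huge payoff.
# All numbers are illustrative assumptions.

N = 10**100        # stand-in for 3^^^^3 (the real number is vastly larger)
p_mugger = 1e-30   # assumed naive prior that the mugger is telling the truth
cost = 5           # utility lost by handing over five dollars

# Naive calculation: tiny probability times huge payoff dominates the cost.
naive_eu = p_mugger * N - cost            # astronomically positive

# Leverage penalty: the prior that *you* are the one person positioned to
# affect N others is itself scaled by 1/N, so the N's cancel.
penalized_eu = (p_mugger / N) * N - cost  # ~ p_mugger - cost, negative

print(naive_eu > 0)      # True  -> the naive agent gets mugged
print(penalized_eu > 0)  # False -> the penalized agent declines
```

The cancellation is the whole point: however large N is made, the penalized expected utility stays pinned near p_mugger minus the cost, so inflating the threat buys the mugger nothing.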
Of course, if you're not planning to build an optimizing agent, your "yawn and move on" response is fine. That's what the problem is about, not signing up for cryonics or donating to SI or whatever (the proponents of the last two argue for relatively large probabilities of extremely large utilities).