Eugine_Nier comments on Metacontrarian Metaethics - Less Wrong
The problem with basing decisions on events with a probability of 1-in-3^^^^^3 is that you're neglecting to take into account all kinds of possibilities with much higher (though still tiny) probabilities.
For example, the chance that the Earth turns into your favorite fantasy novel, i.e., that the particles making up the Earth spontaneously rearrange themselves via quantum tunneling into a world closely resembling the world of the novel, and then the whole thing turns into a giant bowl of tapioca pudding a week later, is much, much higher than 1-in-3^^^^^3.
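To give a sense of just how extreme a 1-in-3^^^^^3 probability is, here is a minimal sketch of Knuth's up-arrow notation (the hyperoperation the "^^^" shorthand refers to). The function and its small test values are illustrative; anything beyond tiny inputs is far too large to compute.

```python
def up_arrow(a, n, b):
    """a followed by n up-arrows, then b (Knuth's up-arrow notation).

    One arrow is ordinary exponentiation; each extra arrow iterates
    the previous operation, which is why these numbers explode.
    """
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return up_arrow(a, n - 1, up_arrow(a, n, b - 1))

print(up_arrow(3, 1, 3))  # 3^3 = 27
print(up_arrow(2, 2, 3))  # 2^^3 = 2^(2^2) = 16
print(up_arrow(2, 3, 3))  # 2^^^3 = 2^^4 = 65536
```

Even 3^^3 (= 3^27, about 7.6 trillion) is already unwieldy; 3^^^^^3 is unimaginably beyond anything physical.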
Especially the probability that the means by which you learned of these probabilities is unreliable, which is probably not even very tiny. (How tiny is the probability that you, the reader of this comment, are actually dreaming right now?)
Actually, considering the possibility that you've misjudged the probability doesn't help with Pascal's Mugging scenarios, because P(scenario) ≥ P(scenario | judgment was correct) × P(judgment was correct). And while P(judgment was correct) may be small, it won't be astronomically small under ordinary circumstances, which is what it would take to resolve the mugging.
(My preferred resolution is to restrict the class of admissible utility-function/predictor pairs to those where probability shrinks faster than utility grows for any parameterizable statement, which is slightly less restrictive than requiring bounded utility functions.)
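A toy sketch of that condition, under an assumed formalization: for a family of statements parameterized by n, with prior P(n) and utility U(n), require that P(n)·U(n) shrink as n grows. The `admissible` helper and both priors below are hypothetical stand-ins, not anyone's actual proposal.

```python
def admissible(prior, utility, samples):
    """Crude check: does P(n) * U(n) strictly shrink across the samples?"""
    products = [prior(n) * utility(n) for n in samples]
    return all(b < a for a, b in zip(products, products[1:]))

# A magnitude-based prior shrinks fast enough for linear utility:
# P(n)*U(n) = 1/n -> 0.
magnitude_prior = lambda n: 1 / n**2
linear_utility = lambda n: n
print(admissible(magnitude_prior, linear_utility, [10, 100, 1000]))   # True

# But a description-length prior (~2^-digits, a crude stand-in for
# 2^-K(n)) does not: 10^k has a short description however large it is,
# so P(n)*U(n) grows without bound.
desc_length_prior = lambda n: 2.0 ** -len(str(n))
print(admissible(desc_length_prior, linear_utility, [10, 100, 1000]))  # False
```

The second case is exactly the Pascal's-mugging failure mode: succinctly describable numbers outrun any complexity penalty.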
BTW, you realize we're talking about torture vs. dust specks and not Pascal's mugging here?
I think he's just pointing out that all you have to do is change the scenario slightly and then my objection doesn't work.
Still, I'm a little curious about how someone's ability to state a large number succinctly makes a difference. I mean, suppose the biggest number the mugger knew how to say was 12, and they didn't know about multiplication, exponents, up arrow notation, etc. They just chose 12 because it was the biggest number they could think of or knew how to express (whether they were bluffing totally or were actually going to torture 3^^^3 people). Should I take a mugger more seriously just because they know how to communicate big numbers to me?
The point of stating the large number succinctly is that it overwhelms the small likelihood of the mugger's story being true, at least if you have something resembling a Solomonoff prior. Note also that the mugger isn't really necessary for the scenario; he's merely there to supply a hypothesis that you could have come up with on your own.
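A rough illustration of why succinctness matters under such a prior (the 8-bits-per-character proxy below is an assumption, a crude stand-in for Kolmogorov complexity K): the prior penalty scales with the length of the hypothesis's description, not with the magnitude of the numbers it mentions.

```python
def rough_prior(description: str) -> float:
    """Crude 2^-(bits) proxy for a Solomonoff-style prior:
    charge 8 bits per character of the description."""
    return 2.0 ** (-8 * len(description))

# Each extra up-arrow adds one character, so the prior shrinks only
# by a constant factor of 2^-8 = 1/256 per arrow, while the number
# being named grows hyper-exponentially.
for claim in ["3^3", "3^^3", "3^^^3"]:
    print(claim, rough_prior(claim))
```

That mismatch is the engine of the mugging: the stated utility can always be made to grow faster than the description-length penalty can suppress it.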
Good point. I guess the only way to counter these odd scenarios is to point out that everyone's utility function is different, and then the question is simply whether the responder wants to self-modify (or would be happier in the long run doing so) even after hearing some rationalist arguments to clarify their intuitions. The question of self-modification is a little hard to grasp, but at least it avoids all these far-fetched situations.
For the Pascal's mugging problem, I don't think that will help.
Isn't Pascal's mugging just this?
I'd just walk away. Why should I care? If I thought about it for so long that I had some lingering qualms, and I got mugged like that a lot, I'd self-modify just to enjoy the rest of my life more.
As an aside, I don't think people really care that much about other people dying unless they have some way to connect to it. Someone probably was murdered while you were reading this comment. Is it going to keep you up? On the other hand, people can cry all night about a video game character dying. It's all subjective.
There's a difference between mental distress and action-motivating desire. If I were asked to pay $5 to prevent someone from being murdered with near-certainty, I would. On the other hand, I would not pay $5 more for a video game where a character does not die, though I can't be sure of this self-simulation because I play video games rather infrequently. If I only had $5, I would definitely spend it on the former option.
I do not allow my mental distress to respond to the same things that motivate my actions; intuitively grasping the magnitude of existential risks is impossible and even thinking about a fraction of that tragedy could prevent action, such as by causing depression. However, existential risks still motivate my decisions.
It's still way too restrictive though, no? And are there ways you can Dutch book it with deals where probability grows faster (instead of the intuitively-very-common scenario where they always grow at the same rate)?