Eugine_Nier comments on Metacontrarian Metaethics - Less Wrong

2 Post author: Will_Newsome 20 May 2011 05:36AM


Comment author: Eugine_Nier 21 May 2011 03:36:19PM 3 points [-]

The problem with basing decisions on events with a probability of 1-in-3^^^^^3 is that you're neglecting to take into account all kinds of possibilities with much higher (though still tiny) probabilities.

For example, the chance that the Earth turns into your favorite fantasy novel (i.e., the particles making up the Earth spontaneously rearrange themselves, via quantum tunneling, into a world closely resembling the world of the novel), and that the whole thing then turns into a giant bowl of tapioca pudding a week later, is much, much higher than 1-in-3^^^^^3.

Comment author: Amanojack 22 May 2011 02:04:24PM *  0 points [-]

> The problem with basing decisions on events with a probability of 1-in-3^^^^^3 is that you're neglecting to take into account all kinds of possibilities with much higher (though still tiny) probabilities.

Especially the probability that the means by which you learned of these probabilities is unreliable, which is probably not even very tiny. (How tiny is the probability that you, the reader of this comment, are actually dreaming right now?)

Comment author: jimrandomh 22 May 2011 05:02:14PM 0 points [-]

Actually, considering the possibility that you've misjudged the probability doesn't help with Pascal's Mugging scenarios, because

P(X | judged that X has probability p) >= p * P(judgment was correct)

And while P(judgment was correct) may be small, it won't be astronomically small under ordinary circumstances, which is what it would take to resolve the mugging.

(My preferred resolution is to restrict the class of admissible utility-function/predictor pairs to those where probability shrinks faster than utility grows for any parameterizable statement, which is slightly less restrictive than requiring bounded utility functions.)

Comment author: Eugine_Nier 22 May 2011 05:41:53PM 1 point [-]

BTW, you realize we're talking about torture vs. dust specks and not Pascal's mugging here?

Comment author: Amanojack 22 May 2011 05:49:20PM 0 points [-]

I think he's just pointing out that all you have to do is change the scenario slightly and then my objection doesn't work.

Still, I'm a little curious about how someone's ability to state a large number succinctly makes a difference. I mean, suppose the biggest number the mugger knew how to say was 12, and they didn't know about multiplication, exponents, up arrow notation, etc. They just chose 12 because it was the biggest number they could think of or knew how to express (whether they were bluffing totally or were actually going to torture 3^^^3 people). Should I take a mugger more seriously just because they know how to communicate big numbers to me?

Comment author: Eugine_Nier 22 May 2011 06:43:46PM 1 point [-]

The point of stating the large number succinctly is that it overwhelms the small likelihood of the mugger's story being true, at least if you have something resembling a Solomonoff prior. Note also that the mugger isn't really necessary for the scenario; he's merely there to supply a hypothesis that you could have come up with on your own.
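For readers unfamiliar with the notation being discussed: Knuth's up-arrow notation packs unimaginably large numbers into very short descriptions, which is exactly why a Solomonoff-style complexity penalty (roughly 2^-description-length) can't keep pace with them. A minimal sketch of the notation (the function name is mine, not standard library code):

```python
def up_arrow(a, n, b):
    """Knuth's up-arrow: a followed by n arrows, then b.
    One arrow is plain exponentiation; each extra arrow iterates the previous level."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return up_arrow(a, n - 1, up_arrow(a, n, b - 1))

print(up_arrow(3, 1, 3))  # 3^3 = 27
print(up_arrow(3, 2, 3))  # 3^^3 = 3^(3^3) = 7625597484987
# 3^^^3 = 3^^(3^^3) is a tower of ~7.6 trillion 3s -- far beyond computation,
# yet the string "3^^^3" is only five characters long.
```

The mismatch between the five-character description and the magnitude it denotes is what lets the stated stakes outrun any simplicity-based prior.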

Comment author: Amanojack 22 May 2011 08:42:02PM 0 points [-]

Good point. I guess the only way to counter these odd scenarios is to point out that everyone's utility function is different, and then the question is simply whether the responder wants to self-modify (or would be happier in the long run doing so) even after hearing some rationalist arguments to clarify their intuitions. The question of self-modification is a little hard to grasp, but at least it avoids all these far-fetched situations.

Comment author: Eugine_Nier 22 May 2011 09:28:54PM 1 point [-]

For the Pascal's mugging problem, I don't think that will help.

Comment author: Amanojack 22 May 2011 09:48:09PM -2 points [-]

Isn't Pascal's mugging just this?

"Give me five dollars, or I'll use my magic powers from outside the Matrix to run a Turing machine that simulates and kills 3^^^^3 people."

I'd just walk away. Why should I care? If I thought about it for so long that I had some lingering qualms, and I got mugged like that a lot, I'd self-modify just to enjoy the rest of my life more.

As an aside, I don't think people really care that much about other people dying unless they have some way to connect to it. Someone probably was murdered while you were reading this comment. Is it going to keep you up? On the other hand, people can cry all night about a video game character dying. It's all subjective.

Comment author: endoself 22 May 2011 10:11:30PM 1 point [-]

> As an aside, I don't think people really care that much about other people dying unless they have some way to connect to it. Someone probably was murdered while you were reading this comment. Is it going to keep you up? On the other hand, people can cry all night about a video game character dying. It's all subjective.

There's a difference between mental distress and action-motivating desire. If I were asked to pay $5 to prevent someone from being murdered with near-certainty, I would. On the other hand, I would not pay $5 more for a video game where a character does not die, though I can't be sure of this self-simulation because I play video games rather infrequently. If I only had $5, I would definitely spend it on the former option.

I do not allow my mental distress to respond to the same things that motivate my actions; intuitively grasping the magnitude of existential risks is impossible and even thinking about a fraction of that tragedy could prevent action, such as by causing depression. However, existential risks still motivate my decisions.

Comment author: Will_Newsome 23 May 2011 09:51:31AM 0 points [-]

> (My preferred resolution is to restrict the class of admissible utility-function/predictor pairs to those where probability shrinks faster than utility grows for any parameterizable statement, which is slightly less restrictive than requiring bounded utility functions.)

It's still way too restrictive though, no? And are there ways you can Dutch book it with deals where probability grows faster (instead of the intuitively-very-common scenario where they always grow at the same rate)?