Now I tend not to follow this forum very much, so please excuse me if this has been suggested before. Still, I don't know that there's anyone else on this board who could actually carry out these threats.


If anyone accepts a Pascal's mugging style trade-off with full knowledge of the problem, then I will slowly torture to death 3^^^^3 sentient minds. Or a suitably higher number if they include a higher (plausible, from my external viewpoint) number. Rest assured I can at least match their raw computing power from where I am. Good luck.


EDIT: I'm told that Eliezer proposed a similar solution over here, although more eloquently than I have.


That's not a solution, it's just another mugging.

What's your point?

A solution to Pascal's mugging answers the question: what do we do when there is a very tiny (O(2^-x)) chance that some random thing has incredibly huge (O(3^^^3)) importance?
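
To make the shape of that question concrete, here is a minimal Python sketch of the naive expected-value calculation (the claim string, the bytes-as-bits complexity prior, and the payoff lower bound are all invented placeholders, not anything from this thread):

    # Naive expected utility of paying the mugger $5. A crude complexity
    # prior penalizes a claim by 2^-(bits needed to state it), but a claim
    # short enough to speak aloud can still name a payoff (3^^^^3) that
    # dwarfs any such penalty.
    claim = "I will grant you 3^^^^3 happy lives if you pay me $5"
    bits = 8 * len(claim)              # crude description length in bits
    prior = 2.0 ** -bits               # complexity-penalized probability

    # 3^^^^3 itself is not computable here, but it certainly exceeds
    # 2^(bits + 100), so a lower bound is enough to make the point:
    payoff_lower_bound = 2.0 ** (bits + 100)
    naive_ev = prior * payoff_lower_bound - 5
    print(naive_ev > 0)                # True: the naive EV says "pay up"

The penalty grows only exponentially in the length of the claim, while up-arrow notation buys far more than exponential growth per character, which is what makes the question hard.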

If anyone accepts a Pascal's mugging style trade-off with full knowledge of the problem, then I will slowly torture to death 3^^^^3 sentient minds. Or a suitably higher number if they include a higher (plausible, from my external viewpoint) number. Rest assured I can at least match their raw computing power from where I am.

I go for a walk and meet a mugger. He hears about your precommitment and says, 'Ah well, I know what reward I can offer you which both overcomes your expressed probability of 1/3^^^^3 that I will pay and also the threat of -3^^^^3: I will offer you not 3^^^^3 but 3^^^^^^3! If you simply multiply out the expected value and then subtract staticIP's threat, you will find this is a very lucrative deal for you!'

I conclude his logic is unimpeachable and the net expected value so vast I would be a fool to not give him $5, and promptly do so.
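
The outbidding move here can be put mechanically. A toy sketch (my own construction; it leans on the fact that numbers of the form 3^...^3 compare by arrow count alone once the counts differ):

    from dataclasses import dataclass

    @dataclass(order=True)
    class UpArrow:
        # 3^...^3 in Knuth notation; with the same base and height,
        # more arrows always wins.
        arrows: int

    threat = UpArrow(arrows=4)   # the precommitted 3^^^^3 torture
    reward = UpArrow(arrows=6)   # the mugger's new offer, 3^^^^^^3
    print(reward > threat)       # True: any fixed threat can be outbid

Whatever finite counter-threat is precommitted, the mugger wins by adding one more arrow, which is why the pledge does not close off the problem.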

They say a conservative is a liberal mugged by reality. What do you call a utilitarian mugged by unreality?

The fact that there are 37 comments on this post (and similar posts) is rapidly convincing me that the new -5 karma penalty for replying to downvoted things is actually a really good idea.

And the fact that many of those comments are on Top Comments Today shows that lots of people likely disagree with you.

No, it shows that a lot of people upvote comments disagreeing with this post. Downvoted threads usually get a lot of upvotes for commenters explaining what the OP is doing wrong. It helps signal to the OP that they are wrong and that the commenter's explanation is correct. The sad side effect, which Eliezer is so opposed to, is that these upvotes encourage feeding the trolls. Up until now, I was thoroughly unconvinced that this was a problem, but this thread and this one are convincing me otherwise, which was the point of my comment.

As a side point, the grandparent is on the top comments today list too... So if your criterion for "people agree with and support this comment" is "is on Top Comments Today" then you've run into a paradox.

Tangentially, some of those comments are at 0 karma, and are still on that list. This doesn't make much sense to me, but honestly I've never looked at that feature before, and probably will continue not to.

Tangentially, some of those comments are at 0 karma, and are still on that list. This doesn't make much sense to me, but honestly I've never looked at that feature before, and probably will continue not to.

I had only looked at the first couple screenfuls of the list when replying to you, and most comments were from this thread and upvoted quite a bit. Anyway, the thing does occasionally misplace comments (e.g. heavily downvoted comments mysteriously in the middle of it), though I think that when there are lots of zero-karma comments at the bottom there really aren't any more upvoted comments from the last 24 hours.

If people thought it was bad to feed the trolls, even when disagreeing with them, they'd downvote all replies to trolls.

I understand that a lot of issues are solved, like the existence of god and so on, but I for one still haven't gotten an appropriate explanation as to why my claim, which seems perfectly valid to me, is incorrect. That proposal is going to further hinder this kind of discussion and debate.

And as far as I can tell, I'm correct. It's honestly very concerning to me that a bunch of LessWrongers have failed to follow this line of reasoning to its natural conclusion. Maybe I'm just not using the correct community-specific shibboleths, but the only one who's actually followed through on the logic is gwern. I look forward to seeing his counter-reply to this.

I think you're getting at an important problem, and have taken a step toward the solution.

How do we deal with choice in the face of fundamentally arbitrary assertions?

One way, at least, is to see if there is another equivalently arbitrary assertion that would lead you to make the opposite choice.

This solution doesn’t work. Why? Because I pledge that if anyone fails to accept a “Pascal’s Mugging style trade-off with full knowledge of the problem, then I will slowly torture to death 3^^^^3 sentient minds”. I’ve just canceled out your pledge.

You could say all your allies take the same pledge as you, and you have more allies than me, but that’s getting too far into the practicalities of our lives and too far away from a general solution. A general solution can’t assume that the person considering whether to accept a Mugging will have heard either of our pledges, so the person would be unable to take those pledges into account for their decision.

I don’t know the actual solution to Pascal’s Mugging myself. I’ve pasted my outline-form notes on it so far into a reply to this comment, in case they’re useful.

You're not thinking big enough. If anyone ever accepts a Pascal's mugging again, my fuzzy celery God will execute a worse Pascal's mugging than any other in existence no matter what the original Pascal's mugging is.

P.S. You'll never find out what muggings fuzzy celery God executes because they're always unpredictable. This makes them impossible to disprove.

(My brother always hated having fights like this with me.)

If we have two gods, one claiming that if I do X, they'll mug me, and one claiming that if I don't do X they'll mug me, well I'm probably going to believe the god that isn't fuzzy and celery...

Well that's making the wrong choice, buddy. Other Gods are useless against fuzzy celery God because fuzzy celery God can transform itself at will into the Most Believable God. Don't think of fuzzy celery God as a piece of fuzzy celery. Fuzzy celery God is nothing like that. If an old wise man is the most compelling God-form for you, fuzzy celery God looks like an old wise man. If benevolent Gods are more credible to you, fuzzy celery God becomes benevolent. No matter what Pascal's mugging the person wants to accept, fuzzy celery God will always take on the appearance and traits of the most believable God that the person can conceive of.

This solution doesn’t work. Why? Because I pledge that if anyone fails to accept a “Pascal’s Mugging style trade-off with full knowledge of the problem, then I will slowly torture to death 3^^^^3 sentient minds”. I’ve just canceled out your pledge.

Your argument doesn't address the problem with Static_IP's post, and indeed it has exactly the same problem: it is not an argument/explanation/clarification, but instead it's one more mugging; see nyan_sandwich's comment. The problem is not that someone has put out a Pascal's mugging and now we have to pay up, unless the mugger is neutralized in some way. If it turns out that we in fact should pay up, the correct decision is easily performed.

The problem is that this situation is not understood. The theoretical model of expected utility plus some considerations about the prior suggest that the correct decision is to pay the mugger, yet other considerations suggest otherwise, and there are potential flaws with the original argument, which motivates a search for better understanding of the situation. Modifying the situation in a way that makes the problem go away doesn't solve the original problem; it instead shifts attention away from it.

My unfinished outline-form notes on solving Pascal’s Mugging:

  • Pascal’s Mugger (http://www.nickbostrom.com/papers/pascal.pdf) possible solutions
    • Off – Pascal’s estimate might be farther off than the offered benefit, and how does he know how far to compensate?
    • Counter – there is a (smaller) probability that the man will give you the same amount of Utility only if you refuse. (There is also a probability that he will give way more Utility if you refuse, but that is probably countered by the probability that he will give way more if you accept.) See the sketch after these notes.
    • Known – the gambit is known, so that makes it more likely that he is tricking you – but sadly, no effect, I think.
    • Impossible – [My Dad]’s suspect argument: there is absolutely zero probability of the mugger giving you what he promises. There is no way to both extend someone’s lifespan and make them happy during it.
      • He could just take you out of the Matrix into a place where any obstacles to lengthy happiness are removed. There's still a probability of that, right?
    • God – maybe level of probability involved is similar to that of God's existence, with infinite heaven affecting the decision
    • Long-term – maybe what we should do in a one-shot event is different from what we should do if we repeated that event many times.
    • Assumption – one of the stated assumptions, such as utilitarianism or risk-neutrality, is incorrect and should not actually be held.
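
To make the "Counter" bullet concrete, here is a rough numerical sketch (all numbers are invented placeholders, with 3^^^^3 stood in for by a merely large value): if the probability that the mugger pays off only on refusal matches the probability that he pays off on acceptance, the astronomical terms cancel and the certain $5 decides.

    from fractions import Fraction

    U = Fraction(10) ** 300        # stand-in for the astronomical payoff
    p = Fraction(1, 10 ** 250)     # chance of the payoff either way (invented)

    ev_accept = p * U - 5          # hand over the $5, maybe get U
    ev_refuse = p * U              # keep the $5, maybe get U anyway
    print(ev_refuse - ev_accept)   # 5: all that survives the cancellation

The weakness, as the parenthetical in the note concedes, is that the cancellation has to be nearly exact: any asymmetry between the two probabilities bigger than 5/U reintroduces the mugging.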

As an omnipotent god entity I pledge to counter any attempt at Pascal's muggings, as long as the mugger actually has the power to do what they say.

I’ve just canceled out your pledge.

Yep. You did, or you would have if you could actually carry through on your threats. I maintain that you can't. Now it's a question of which of our claims is more likely to be true. That's kind of the point here. When you're dealing with that small a probability, the calculation becomes useless and marred by noise.

If I'm correct, and I'm one of the very few entities capable of doing this who happen across your world anyway, then I can cancel out your claim and a bunch of future claims. If you're correct, then I can't. So the question is, how unlikely are my claims? How unlikely are yours? Are your claims significantly more likely (on the tiny scales we're working with) than mine?
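
The "marred by noise" point admits a toy simulation (my own construction, with invented numbers): when the quantity being compared is around 1e-100 but any honest estimate of it carries error on the order of 1e-90, the computed values reflect the error term rather than the probability.

    import random

    true_p = 1e-100                # the "real" probability (invented)
    noise = 1e-90                  # plausible estimation error (invented)
    for _ in range(5):
        estimate = true_p + random.gauss(0, noise)
        print(estimate)            # sign and size are set by the noise

Comparing two such estimates - mine against yours - is then a comparison of two noise terms, which is the sense in which the calculation becomes useless.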

But yes, now that I look at it more in depth (thank you for the links), it's obvious that this is a reiteration of the "counter" solution, but with actual specific and viable threats behind it.

If anyone accepts a Pascal's mugging style trade-off with full knowledge of the problem, then I will slowly torture to death 3^^^^3 sentient minds.

I realize Pascal's Muggings involve highly implausible threats, but I think people are focusing on the 3^^^^3 sentient minds, and I have more doubts about the surveillance problems. I can sort of imagine the feasibility of torturing 3^^^^3 minds (though I think the canonical form requires infinite torture rather than torturing to death) through some combination of bubbling off universes and automation, but I can't imagine any way of keeping track of whether anyone has accepted a Pascal's Mugging with full knowledge of the problem.

Now I tend not to follow this forum very much, so please excuse me if this has been suggested before.

It actually has. Within the last week or two.

Apparently my search-fu is weak. Would you care to link, or suggest search terms that would make finding it less arduous?

EDIT: found it, I think, over here. One of the obvious issues is that it's not a credible threat.

So far I haven't seen a counterargument over there that satisfies me. If there is anywhere they go into it more in depth, please do link it here.

Why not put all the search terms you tried during your "search-fu" session into a comment? That will make this less likely to happen in the future.

Because I doubt I can remember all of them. Also, I'm not entirely clear on why you put quotes around "search-fu". It's a pretty accepted term on the internet. Search-fu is the skill one is employing when searching for something.

A more reasonable question seems to me like asking how I arrived at the answer, not asking how I failed to arrive at the answer. I find it odd that you'd go that particular route, and would be very interested if you could expand on why you want to see all of my failed attempts instead of my one successful attempt.

Writing out all my failed attempts is more difficult for me (there were a few of them) and contains less usable information. I'm curious as to why you'd pick that instead of what seems to me like the more reasonable "post the search term that got you the answer".

would be very interested if you could expand on why you want to see all of my failed attempts instead of my one successful attempt

Because all of the other people who want to start threads on solutions to Pascal's muggings will be more likely to find your thread and go "Oh, someone already did this." That will save us from similar "solutions to Pascal's mugging" threads in the future. The more thoroughly your search terms failed, the better - that way Pascal's mugging solvers with low search-fu skill (no quotes this time) will be likely to find the existing solutions.

P.S. It's better to put them into complete sentences to avoid search engine penalties.

My fu cents:

Solutions to Pascal's Muggings. I Solved Pascal's Mugging. Would this Solve Pascal's Mugging? Possible Solution to Pascal's Muggings. Solve for Pascal's Muggings. Pascal's Mugging has been solved. Pascal's Mugging has a solution.

Ahh, makes sense. I actually found many different and interesting solutions to Pascal's mugging with my search terms, though. Just not this "counter" solution.

This thread already shows up pretty close to the top for searches of "pascals mugging solutions" that I've attempted. For that exact phrase it's number 3, and was before you posted this. I don't know that this particular solution needs to be more closely associated with the search terms than it already is.

Eliezer presented a counter solution similar to this on Overcoming Bias. (Search-fu!)

Click here for Flimple Utility!!!

Well done. Roryokane mentioned it up here, however.

I wonder what Eliezer would DO if he actually got $5 for flimple utility. I kinda want to try sending him five bucks as a psychology experiment.

Done. It's not really a sacrifice since I already donate to the SIAI. I will miss that 5 karma though.

Edit: Apparently you only get the -5 karma if you directly reply to a downvoted comment/post. Replying to reply is fine.

What? I don't understand. Did you give him $5 or 5 karma or what?

Yeah I gave him $5.

Did you say it was for flimple utility?

Okay, well, I guess he doesn't know.

I don't see why there has to be a solution to Pascal's Mugging. Is it so implausible to think that it's just a valid quirk in the system?

Does this mean you'll pay up if you get Pascal-mugged, because it's a "valid quirk in the system"? If so, I have an amazing offer for you.

If you won't pay up, why not?

Err, yes. Maybe it is. That's what I'm trying to find out...

Are you saying that I should take some action with the knowledge that it might just be a quirk in the system? Like not posting my hypothesis?

Are you saying that I should take some action with the knowledge that it might just be a quirk in the system? Like not posting my hypothesis?

Not at all. I was probably being unclear; I apologize.

Fair enough. Would you mind explaining your intent then?

If anyone accepts a Pascal's mugging style trade-off with full knowledge of the problem,

Well, it's very well known that Pascal himself accepted it, and I'm sure there are others. So, off you go and do whatever it is you wanted to do.

To be honest, your threat is a classic example of the genre - it's very, very unlikely that you are able to carry it out, but obviously the consequences if you could would be, er, quite bad. In this case my judgement of the probabilities is that we are completely justified in ignoring the threat.

In this case my judgement of the probabilities is that we are completely justified in ignoring the threat.

Do you consider my Pascal's mugging to be less likely than the general examples of the genre, or do you think that for all Pascal's muggings "we are completely justified in ignoring the threat"?

It surely depends on one's estimate of the numbers. It seems worthwhile doing something about possible asteroid impacts, for example.

The fact that we're not dying off right now proves this untrue.

I never threatened to harm you. Yes, on average, you're significantly more likely to be in the torture group than where you are now, but anthropic principle and all that.