SilasBarta comments on To signal effectively, use a non-human, non-stoppable enforcer - LessWrong

Post author: Clippy 22 May 2010 10:03PM




Comment author: SilasBarta 25 May 2010 10:10:06PM 1 point

Wow! Now that you mention that article, I think I had solved the unsolved problem Eliezer describes in it, back in a discussion from a month ago, not realizing that my position on it wasn't the standard one here!

Someone tell me if I'm missing something here: Eliezer is saying that utility that a hypothesis predicts (from a course of action) can increase much faster than the length of the hypothesis. Therefore, you could feed an ideal AI a prediction that is improbable, but with a large enough utility to make it nevertheless highly important. This would force the AI to give in to Pascal's muggings.

My response (which I assumed was the consensus!) was that any hypothesis long enough to associate that mega-utility with that course of action is already a very long hypothesis. Once you allow hypotheses of that length into consideration, you necessarily allow hypotheses of similar probability that predict the opposite utility from that course of action.

Because the mugger has not offered evidence to favor his/her hypothesis over the opposite, you assign, on net, no significant expected (dis)utility to what the mugger claims to do.
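The cancellation argument above can be sketched numerically. This is a hypothetical illustration (the numbers and the `expected_utility` helper are mine, not anything from the thread): two hypotheses of roughly equal description length get roughly equal tiny priors, and since they promise opposite utilities, the net expectation is zero.

```python
# Hypothetical sketch of the symmetry argument: two equally complex hypotheses
# assign opposite utilities to paying the mugger, so the net expected utility
# of the mugger's claim is ~zero. All numbers are illustrative stand-ins.

def expected_utility(hypotheses):
    """Sum probability-weighted utilities over competing (p, u) hypotheses."""
    return sum(p * u for p, u in hypotheses)

# Both hypotheses need roughly the same description length, hence the same
# (tiny) prior probability; only the sign of the promised utility differs.
p_tiny = 1e-20
huge = 3**3**3  # stand-in for the mugger's astronomical promise

pay = [
    (p_tiny, -huge),  # mugger inflicts the harm anyway if you pay
    (p_tiny, +huge),  # mugger delivers the promised benefit if you pay
]
net = expected_utility(pay)
print(net)  # 0.0: the opposite-utility hypotheses cancel exactly
```

Absent evidence distinguishing the two hypotheses, the astronomical stakes drop out of the calculation entirely.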

Comment author: JoshuaZ 25 May 2010 10:16:40PM 5 points

But that's easy to solve. If you've already seen evidence that the mugger is someone who strongly keeps promises, then you now have enough reason to believe them to tip the balance in favor of the mugger releasing the AI. One doesn't necessarily even need that, because humans more often tell the truth than lie, and more often keep their promises than break them. Once the probability of the mugger doing what they threaten is even a tiny bit over 1/2, Pascal's mugging is still a threat.
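JoshuaZ's tipping point can be made concrete with a hypothetical sketch (the `net_expected_loss` helper and the probabilities are mine, for illustration only): at exactly 1/2 the symmetric hypotheses cancel, but any evidence nudging the follow-through probability past 1/2 lets the astronomical stake dominate.

```python
# Hypothetical sketch of the tipping-point: once the probability that the
# mugger follows through moves even slightly past 1/2, the symmetry breaks
# and the huge stake dominates. Illustrative numbers only.

def net_expected_loss(p_follow_through, stake):
    # Expected loss from refusing: harm with probability p,
    # the opposite-utility hypothesis with probability 1 - p.
    return p_follow_through * stake - (1 - p_follow_through) * stake

stake = 3**3**3  # stand-in for an astronomical number of lives at stake

balanced = net_expected_loss(0.5, stake)        # exact symmetry: cancels
tipped = net_expected_loss(0.5 + 1e-6, stake)   # a sliver of evidence
print(balanced, tipped)  # 0.0, then a very large positive expected loss
```

A shift of one part in a million in the probability is enough to swing the expectation by millions of utility units, which is why the symmetry defense is fragile once any evidence about the mugger enters.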

Comment author: MichaelVassar 28 May 2010 05:12:10PM 3 points

Maybe not. Game theoretically, making yourself visibly vulnerable to Pascal's Muggings may guarantee that they will occur, making them cease to constitute evidence.

Comment author: Polymeron 04 May 2011 12:59:28PM 0 points

I've actually just expanded on this idea in the original Pascal's Mugging article. If the Mugger's claims are in no way associated with you or similar muggings, then conceivably you should take the probability at face value. But if that's not the case, then the probability of a direct manipulation attempt should also be taken into consideration, negating the increase in claimed utility.

I think that solves it.

Comment author: Yvain 25 May 2010 10:18:55PM 12 points

If a normal mugger holds up a gun and says "Give me money or I'll shoot you", we consider the alternate hypotheses that the mugger will only shoot you if you do give er the money, or that the mugger will give you millions of dollars to reward your bravery if you refuse. But the mugger's word itself, and our theory of mind on the things that tend to motivate muggers, make both of these much less likely than the garden-variety hypothesis that the mugger will shoot you if you don't give the money. Further, this holds true whether the mugger claims er weapon is a gun, a ray gun, or a black hole generator; the credibility that the mugger can pull off er threat decreases if e says e has a black hole generator, but not the general skew in favor of worse results for not giving the money.

Why does that skew go away if the mugger claims to be holding an unfriendly AI, the threat of divine judgment, or some other Pascal-level weapon?

Your argument only seems to hold if there is no mugger and we're considering abstract principles - i.e., maybe I should clap my hands on the tiny chance that it might set off a chain reaction that will save 3^^^3 lives. In those cases, I agree with you; but as soon as a mugger gets into the picture, e provides more information and skews the utilities in favor of one action.