Pavitra comments on Open Thread, August 2010-- part 2 - Less Wrong
No, see, that's different.
If you're dealing with a blackmailer that might be able to carry out their threats, then you investigate whether they can or not. The blackmailer themselves might assist you with this, since it's in their interest to show that their threat is credible.
Allow me to demonstrate: Give $100 to the EFF or I'll blow up the sun. Do you now assign a higher expected-value utility to giving $100 to the EFF, or to giving the same $100 instead to SIAI? If I blew up the moon as a warning shot, would that change your mind?
The result of such an investigation might raise or lower P(threat can be carried out). This doesn't change the shape of the question: can a blackmailer issue a threat with P(threat can be carried out) x U(threat is carried out) > H, for all H? Can it do so at a cost to itself that is bounded independently of H?
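The boundedness question can be illustrated with a toy construction (mine, not from the thread, with assumed priors): whether P x U can be pushed above any H depends on how fast the listener's prior discounts ever-larger claimed harms.

```python
# Toy illustration (assumed priors): expected disutility
# P(claim true) * U(harm) is unbounded iff the prior decays
# more slowly than 1/harm as the claimed harm grows.
def expected_disutility(claimed_harm, prior):
    return prior(claimed_harm) * claimed_harm

slow_prior = lambda h: 1.0 / (1.0 + h ** 0.5)   # decays like 1/sqrt(h)
fast_prior = lambda h: 1.0 / (1.0 + h ** 2)     # decays like 1/h^2

harms = [1e3, 1e6, 1e9, 1e12]
slow = [expected_disutility(h, slow_prior) for h in harms]  # grows without bound
fast = [expected_disutility(h, fast_prior) for h in harms]  # shrinks toward 0
```

With the slow-decaying prior the mugger can name a harm large enough to exceed any H; with the fast-decaying one the expected disutility of ever-larger claims is bounded.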
I refuse. According to economists, I have just revealed a preference:
P(Pavitra can blow up the sun) x U(Sun) < U($100)
Yes. Now I have revealed
P(Pavitra can blow up the sun | Pavitra has blown up the moon) x U(Sun) > U($100)
My point is that U($100) is partially dependent on P(Mallory can blow up the sun) x U(Sun), for all values of Mallory and Sun such that Mallory is demanding $100 not to blow up the sun. If P(M_1 can heliocide) is large enough to matter, there's a very good chance that P(M_2 can heliocide) is too. Credible threats do not occur in a vacuum.
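The dependence can be made concrete with a toy calculation (all numbers hypothetical): if a second mugger of similar credibility is anticipated, the $100 retains value as a future ransom, so handing it to the first mugger buys less protection than its face value suggests.

```python
# Toy model (hypothetical numbers). Keeping $100 has option value: the same
# cash could deflect the *next* credible heliocide threat instead.
u_sun = -1e6     # assumed disutility of losing the sun
p_m1 = 2e-4      # credibility of the current mugger (assumed)
p_m2 = 2e-4      # credibility of an anticipated second mugger (assumed)

# Suppose paying a mugger fully deflects that mugger's threat.
harm_avoided_by_paying_m1 = p_m1 * -u_sun   # expected utils gained by paying M1
option_value_of_keeping = p_m2 * -u_sun     # expected utils the cash is worth
                                            # as a ransom against M2
# With equally credible muggers the two uses of the $100 cancel, so the
# first threat alone does not settle what the money is worth.
```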
I don't understand your points, can you expand them?
In my inequalities, P and U denoted my subjective probabilities and utilities, in case that wasn't clear.
The fact that probability and utility are subjective was perfectly clear to me.
I don't know what else to say except to reiterate my original point, which I don't feel you're addressing:
It's not even clear to me that you disagree with me. I am proposing a formulation (not a solution!) of Pascal's mugging problem: if a mugger can issue threats of arbitrarily high expected disutility, then a priors-and-utilities AI is boned. (A little more precisely: then the mugger can extract an arbitrarily large amount of utils from the P-and-U AI.) Are you saying that this statement is false, or just that it leaves out an essential aspect of Pascal's mugging? Or something else?
I'm saying that this statement is false. The mugger needs also to somehow persuade the AI of the nonexistence of other muggers of similar credibility.
In the real world, muggers usually accomplish this by raising their own credibility beyond the "omg i can blow up the sun" level, such as by brandishing a weapon.
OK let me be a little more careful. The expected disutility the AI associates to a threat is
EU(threat) = P(threat will be carried out) x U(threat will be carried out) + P(threat will not be carried out) x U(threat will not be carried out)
I think that the existence of other muggers with bigger weapons, or just of other dangers and opportunities generally, is accounted for in the second summand.
Now does the formulation look OK to you?
That formulation seems to fail to distinguish (ransom paid)&(threat not carried out) from (ransom not paid)&(threat not carried out).
There are two courses of actions being considered: pay ransom or don't pay ransom.
That's completely unreadable. I need symbolic abbreviations.
Then:
(p.s.: We really need a preview feature.)
Why so much focus on future threats to the sun? Are you going to argue, by analogy with the prisoner's dilemma, that the iterated Pascal's mugging is easier to solve than the one-shot Pascal's mugging?
I thought that, either by definition or as a simplifying assumption, U(ransom paid & threat not carried out) = current utility - size of ransom, and that U(ransom not paid & threat not carried out) = current utility.
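Under those simplifying assumptions the two courses of action can be compared directly. A minimal sketch (all probabilities and the claimed harm are hypothetical):

```python
def eu(p_carried, u_carried, u_not_carried):
    # Expected utility as in the formulation above:
    # P(carried out) * U(carried out) + P(not carried out) * U(not carried out)
    return p_carried * u_carried + (1.0 - p_carried) * u_not_carried

current = 0.0    # current utility (normalized to zero)
ransom = 100.0
harm = -1e6      # claimed disutility of heliocide (assumed)

# Assumed: paying lowers P(threat carried out) from 1e-5 to 1e-8.
eu_pay    = eu(1e-8, harm - ransom, current - ransom)
eu_refuse = eu(1e-5, harm,          current)
# Paying wins only if the drop in P(carried out) times |harm| exceeds the
# ransom; with these numbers refusal comes out ahead.
```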