Sewing-Machine comments on Open Thread, August 2010-- part 2 - Less Wrong

3 Post author: NancyLebovitz 09 August 2010 11:18PM


Comment author: [deleted] 27 August 2010 08:15:46PM 1 point [-]

But what a prior-and-utility system means by "credible" is that the expected disutility is large. If a blackmailer can, at finite cost to itself, put our AI in a situation with arbitrarily high expected disutility, then our AI is boned.

Comment author: Pavitra 27 August 2010 08:25:51PM 0 points [-]

Ah, you're worried about a blackmailer that can actually follow up on its threat. I would point out that humans usually pay ransoms, so an AI that pays isn't making a decision any different from the one we would make in the same situation.

Or, the AI might anticipate the problem and self-modify in advance to never submit to threats.

Comment author: [deleted] 27 August 2010 08:37:38PM 0 points [-]

I'm worried about a blackmailer that can, with positive probability, follow up on that threat.

Yes, humans behave in the same way, at least according to economists. We pay ransoms when the probability of the threat being carried out, times the disutility that would result from the threat being carried out, exceeds the disutility of paying the ransom. The difference is that for human-scale threats, this expected disutility does seem to be bounded.
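
To make the comparison concrete, here is a minimal sketch in Python (the function name and the numbers are placeholders of my own, not anything standard):

```python
# A minimal sketch of the decision rule I have in mind: a pure
# prior-and-utility agent pays a ransom exactly when the expected
# disutility of the threat exceeds the disutility of paying.

def pays_ransom(p_carried_out: float,
                disutility_threat: float,
                disutility_ransom: float) -> bool:
    """True if the expected loss from refusing exceeds the cost of paying."""
    return p_carried_out * disutility_threat > disutility_ransom

# Human-scale example: a 1% credible threat of a 1,000,000-util loss
# versus a 5,000-util ransom.
print(pays_ransom(0.01, 1_000_000, 5_000))  # True: expected loss 10,000 > 5,000
```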

The AI might anticipate the problem and self-modify in advance to never submit to threats.

That could mean one of at least two things: either the AI starts to work according to the rules of a (hitherto not conceived?) non-prior-and-utility system, or the AI calibrates its prior and its utility function so that it doesn't submit to (some) threats. I think the question is whether something like the second idea can work.

Comment author: Pavitra 27 August 2010 08:51:16PM -1 points [-]

No, see, that's different.

If you're dealing with a blackmailer that might be able to carry out their threats, then you investigate whether they can or not. The blackmailer themselves might assist you with this, since it's in their interest to show that their threat is credible.

Allow me to demonstrate: Give $100 to the EFF or I'll blow up the sun. Do you now assign a higher expected utility to giving $100 to the EFF, or to giving that same $100 to SIAI? If I blew up the moon as a warning shot, would that change your mind?

Comment author: [deleted] 27 August 2010 09:06:44PM 0 points [-]

If you're dealing with a blackmailer that might be able to carry out their threats, then you investigate whether they can or not. The blackmailer themselves might assist you with this, since it's in their interest to show that their threat is credible.

The result of such an investigation might raise or lower P(threat can be carried out). This doesn't change the shape of the question: can a blackmailer issue a threat with P(threat can be carried out) x U(threat is carried out) > H for every H? And can it do so at a cost to itself that is bounded independently of H?
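
Here is a toy illustration of that shape (the credence p and the thresholds H are numbers I've picked arbitrarily): if the AI assigns any fixed positive probability p to the claim, the blackmailer clears any bar H just by claiming a large enough disutility D, at no extra cost to itself.

```python
# Toy illustration: with a fixed positive credence p, an arbitrarily large
# claimed disutility D makes the expected disutility p * D exceed any H.

def disutility_needed(p: float, H: float) -> float:
    """Threshold on the claimed disutility D above which p * D exceeds H."""
    return H / p

p = 1e-12                    # the AI's credence in the blackmailer's claim
for H in (1e3, 1e9, 1e30):   # ever-larger bars to clear
    D = disutility_needed(p, H)
    print(f"H = {H:.0e}: claim a disutility above {D:.0e} and p * D > H")
```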

Allow me to demonstrate: Give $100 to the EFF or I'll blow up the sun.

I refuse. According to economists, I have just revealed a preference:

P(Pavitra can blow up the sun) x U(Sun) < U($100)

If I blew up the moon as a warning shot, would that change your mind?

Yes. Now I have revealed:

P(Pavitra can blow up the sun | Pavitra has blown up the moon) x U(Sun) > U($100)

Comment author: Pavitra 27 August 2010 09:09:39PM *  1 point [-]

I have just revealed a preference:

P(Pavitra can blow up the sun) x U(Sun) < U($100)

My point is that U($100) is partially dependent on P(Mallory can blow up the sun) x U(Sun), for all values of Mallory and Sun such that Mallory is demanding $100 not to blow up the sun. If P(M_1 can heliocide) is large enough to matter, there's a very good chance that P(M_2 can heliocide) is too. Credible threats do not occur in a vacuum.
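
As a back-of-the-envelope sketch of what I mean (every number here is invented): if one sun-threatening Mallory is credible enough to be worth paying, then many comparable Mallories probably are too, and a finite budget cannot buy them all off, which is part of what keeping the $100 is worth.

```python
# Toy sketch: a finite budget against a population of equally credible Mallories.

n_comparable_muggers = 10_000   # assumed population of equally credible Mallories
budget = 1_000                  # assumed total dollars available for payouts
demand = 100                    # each Mallory's demand

fraction_payable = min(1.0, budget / (n_comparable_muggers * demand))
print(f"Fraction of equally credible threats that can actually be bought off: {fraction_payable:.2%}")
```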

Comment author: [deleted] 27 August 2010 09:13:46PM 0 points [-]

I don't understand your points; can you expand on them?

In my inequalities, P and U denoted my subjective probabilities and utilities, in case that wasn't clear.

Comment author: Pavitra 27 August 2010 09:33:16PM 0 points [-]

The fact that probability and utility are subjective was perfectly clear to me.

I don't know what else to say except to reiterate my original point, which I don't feel you're addressing:

Consider the proposition that, at some point in my life, someone will try to Pascal's-mug me and actually back their threats up. Even in that case, I would still expect to receive a much larger number of false threats over the course of my lifetime. If I hand over all my money to the first mugger without proper verification, I won't be able to pay up when the real threat comes around.
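
Here is the same point as a toy calculation (all the numbers are made up): if fake threats vastly outnumber real ones, "pay everyone without verifying" exhausts the funds long before the genuine threat shows up.

```python
# Toy calculation: many fake threats, finite savings, one genuine threat.

lifetime_threats = 200   # assumed number of Pascal-style threats received in a lifetime
p_genuine = 0.005        # assumed fraction that could actually be carried out
ransom = 100             # dollars demanded each time
savings = 10_000         # assumed total funds available for ransoms

cost_of_paying_everyone = lifetime_threats * ransom
expected_genuine = lifetime_threats * p_genuine
print(f"Paying every mugger costs ${cost_of_paying_everyone} out of ${savings} available,")
print(f"so the money runs out before the ~{expected_genuine:.0f} genuine threat(s) appear.")
```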

Comment author: [deleted] 27 August 2010 10:18:28PM 1 point [-]

It's not even clear to me that you disagree with me. I am proposing a formulation (not a solution!) of Pascal's mugging problem: if a mugger can issue threats of arbitrarily high expected disutility, then a priors-and-utilities AI is boned. (A little more precisely: then the mugger can extract an arbitrarily large amount of utils from the P-and-U AI.) Are you saying that this statement is false, or just that it leaves out an essential aspect of Pascal's mugging? Or something else?

Comment author: Pavitra 27 August 2010 11:03:54PM -2 points [-]

I'm saying that this statement is false. The mugger also needs to somehow persuade the AI of the nonexistence of other muggers of similar credibility.

In the real world, muggers usually accomplish this by raising their own credibility beyond the "omg i can blow up the sun" level, such as by brandishing a weapon.