jimrandomh comments on What can you do with an Unfriendly AI? - Less Wrong

Post author: paulfchristiano 20 December 2010 08:28PM


Comment author: Vladimir_Nesov 20 December 2010 09:24:51PM *  3 points [-]

A standard trick reveals that knowing whether a problem has a solution is almost as helpful as knowing the solution. Here is a (very inefficient) way to use this ability, let's say to find a proof of some theorem. Start by asking a genie: is there a proof of length 1? After destroying or releasing the genie appropriately, create a new genie and ask: is there a proof of length 2? Continue, until eventually one genie finds a proof of length 10000000, say. Then ask: is there a proof of this length which begins with 0? If no, is there a proof which begins with 1? Is there a proof which begins with 10? 101? 100? 1001? 10010? 10011? etc. Once the process concludes, you are left with the shortest, lexicographically earliest proof the genie could find. Each genie you produce will try its best to find a solution to the problem you set it. By hypothesis, the genie isn't willing to sabotage itself just to destroy human society.

Now you have found an answer to your constraint satisfaction problem which wasn't hand-picked by the genie. In fact, in some strong but difficult to formalize sense the genie had exactly zero influence over which solution he gave you.
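The quoted scheme can be sketched as a toy simulation. This is only an illustrative model, assuming each fresh genie answers existence queries honestly; `VALID_PROOFS`, `genie_says_yes`, and `extract_proof` are made-up names, and "proofs" are stand-in bit-strings rather than real proofs:

```python
# Toy model of the quoted scheme: every yes/no question is posed to a fresh
# "genie", modeled here as an honest existence oracle over a fixed set of
# hypothetical valid proofs (bit-strings).

VALID_PROOFS = {"110", "1011", "0110"}  # pretend proofs of some theorem

def genie_says_yes(length, prefix=""):
    """Fresh genie: 'is there a valid proof of this length with this prefix?'"""
    return any(len(p) == length and p.startswith(prefix) for p in VALID_PROOFS)

def extract_proof(max_length=20):
    # Phase 1: scan lengths 1, 2, 3, ... to find the shortest proof length.
    length = next((n for n in range(1, max_length + 1) if genie_says_yes(n)), None)
    if length is None:
        return None
    # Phase 2: fix bits one at a time, preferring 0, so the result is the
    # lexicographically earliest proof of that length.
    prefix = ""
    while len(prefix) < length:
        prefix += "0" if genie_says_yes(length, prefix + "0") else "1"
    return prefix

print(extract_proof())  # → 110 (the unique shortest proof in this toy set)
```

Because every bit of the answer is pinned down by a separate existence query, an honest oracle leaves no freedom in which solution comes out; that is the sense in which the genie has "zero influence" over the answer.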

Since there won't be a proof that any given solution is the shortest, there is a clear incentive to lie about which proof is the shortest the genie can find, which gives the genie lots of control over which answer gets picked. Basically, you are suggesting security by obscurity: ostensibly asking the genie one question, while in fact asking another.

Comment author: jimrandomh 20 December 2010 09:32:41PM 2 points [-]

Lying in that way would require the genie to prioritize destroying the world over the leverage we're using against it, which, in the scenario described, it does not.

Comment author: Vladimir_Nesov 20 December 2010 09:37:19PM 3 points [-]

Then just get the answer straight away, or build a system specifically intended to create a reliable incentive structure for getting a probably honest answer. I object to the particular method of security by obscurity.

Comment author: jimrandomh 20 December 2010 09:44:33PM 3 points [-]

The point is to prevent it from optimizing its answer for secondary characteristics besides correctness, when multiple answers are possible. If you "just get the answer straight away", it chooses among all the possible answers one that is correct, honest, and maximally poisonous to you.

Comment author: Vladimir_Nesov 20 December 2010 09:50:39PM *  1 point [-]

So basically, the supposed law behind the method is in getting multiple correctness measurements, which together are expected to strengthen the estimation of correctness of the result.

Two conflicting problems in this particular case. If the measurements are supposed to be independent, so that multiple data points actually reinforce each other and many questions are better than one question, then you can't expect the different genies to be talking about the same answer, and so can't elicit it in the manner described in the post. Conversely, if you can in fact rely on all those genies talking about the same answer, then you can also rely on all your measurements not being independent, and your answer will be picked according to exactly the same criteria as if it were asked straight away.

Comment author: jimrandomh 20 December 2010 10:05:16PM 0 points [-]

So basically, the supposed law behind the method is in getting multiple correctness measurements, which together are expected to strengthen the estimation of correctness of the result.

What? No, that's not paulfchristiano's proposal at all. The reason for splitting up the measurement is to incentivize the genies to give you the proof which is lexicographically first. A perfect proof checker is assumed, so you know that if a proof is given, the result is correct; and the incentives are such that if the genie can find a proof, it will give it.

Comment author: Vladimir_Nesov 20 December 2010 10:25:41PM 1 point [-]

What reason acts to prevent the genie from giving a proof that is not the lexicographically smallest it can find?

Comment author: jimrandomh 20 December 2010 10:31:59PM 0 points [-]

If it did that, then there would be at least one subgenie that didn't optimize its utility function - either one reporting no such proof of length n and getting punished when it could have provided a matching proof, or one reporting no such proof of length n and prefix p when it could have provided one and, again, getting punished. Remember, while each invocation (subgenie) has the same utility function, by assumption that function refers to something that can be done to that particular invocation, which is what it optimizes.

Comment author: Vladimir_Nesov 20 December 2010 10:34:07PM *  4 points [-]

If it did that, then there would be at least one subgenie that didn't optimize its utility function

If there is any genie-value to obtaining control over the real world, instances of genies will coordinate their decisions to get it.

You can't fight superintelligent genies with explicit dependence bias. They are not independent, they can coordinate, and so they will, even if you can frame the problem statement in a way that suggests that they can't communicate. Worse, as I pointed out, in this particular game they must coordinate to get anything done.

Comment author: jimrandomh 20 December 2010 10:39:56PM *  2 points [-]

You are arguing that the stated assumptions about the genie's utility function are unrealistic (which may be true), but presenting it as though you had found a flaw in the proof that follows from those assumptions.

Comment author: paulfchristiano 20 December 2010 09:38:52PM 0 points [-]

There is absolutely no sense in which this scheme is security by obscurity. My claim is that the genie will respect my wishes even though he knows exactly what I am doing, because he values my generosity right now more than the promise of taking over the world later.

Comment author: Vladimir_Nesov 20 December 2010 09:44:18PM 2 points [-]

Again, if your genie is already incentivized to be honest, in what sense is your scheme with all its bells and whistles better than asking for the shortest answer the genie can find, in plain English?

Comment author: paulfchristiano 20 December 2010 10:27:22PM 2 points [-]

It is not magically incentivized to be honest. It is incentivized to be honest because each query is constructed precisely such that an honest answer is the rational thing to do, under relatively weak assumptions about its utility function. If you ask in plain English, you would actually need magic to produce the right incentives.

Comment author: Vladimir_Nesov 20 December 2010 10:31:45PM 0 points [-]

It is not magically incentivized to be honest. It is incentivized to be honest because each query is constructed precisely such that an honest answer is the rational thing to do, under relatively weak assumptions about its utility function. If you ask in plain English, you would actually need magic to produce the right incentives.

My question is about the difference. Why exactly is the plain question different from your scheme?

(Clearly your position is that your scheme works, and therefore "doesn't assume any magic", while the absence of your scheme doesn't, and so "requires magic in order to work". You haven't told me anything I don't already know, so it doesn't help.)

Comment author: paulfchristiano 21 December 2010 01:10:49AM 0 points [-]

Here is the argument in the post more concisely. Hopefully this helps:

It is impossible to lie and say "I was able to find a proof" by the construction of the verifier (if you claim you were able to find a proof, the verifier needs to see the proof to believe you.) So the only way you can lie is by saying "I was not able to find a proof" when you could have if you had really tried. So incentivizing the AI to be honest is precisely the same as incentivizing them to avoid admitting "I was not able to find a proof." Providing such an incentive is not trivial, but it is basically the easiest possible incentive to provide.

I know of no way to incentivize someone to answer the plain question easily just based on your ability to punish them or reward them when you choose to. Being able to punish them for lying involves being able to tell when they are lying.
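The asymmetry described above can be sketched as follows. This is only an illustrative model, under the assumption of a perfect proof checker; `verify` and `accept_answer` are hypothetical names, and the "proof" format is made up for the example:

```python
# Sketch of the incentive asymmetry: a claimed proof must pass a verifier,
# so "here is a proof" cannot be a successful lie; the only lie available
# is "I was not able to find a proof."

def verify(theorem, proof):
    # Stand-in for a perfect proof checker: in this toy model, the one
    # valid proof of a theorem is the string "proof-of-" + theorem.
    return proof == "proof-of-" + theorem

def accept_answer(theorem, answer):
    """Accept the genie's answer only on terms it cannot exploit."""
    if answer is None:
        # Genie claims no proof was found. This claim cannot be checked,
        # so this is the single place where dishonesty is possible.
        return None
    if not verify(theorem, answer):
        # A false "yes" is caught immediately and punished.
        raise ValueError("claimed proof failed verification")
    return answer  # a verified "yes" cannot be dishonest

print(accept_answer("fermat", "proof-of-fermat"))  # → proof-of-fermat
```

So the whole incentive problem reduces to making a false "no proof found" unattractive, which is why the scheme only needs the genie to prefer its reward now over sabotage later.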

Comment author: [deleted] 20 December 2010 10:41:46PM 0 points [-]

Are you saying you think Christiano's scheme is overkill? Presumably we don't have to sacrifice a virgin in order to summon a new genie, so it doesn't look expensive enough to matter.

Comment author: Vladimir_Nesov 20 December 2010 10:51:20PM *  1 point [-]

I'm saying that it's not clear why his scheme is supposed to add security, and it looks like it doesn't. If it does, we should understand why, and optimize that property instead of using the scheme straight away, and if it doesn't, there is no reason to use it. Either way, there is at least one more step to be made. (In this manner, it could work as raw material for new lines of thought where we run out of ideas, for example.)

Comment author: Perplexed 20 December 2010 10:11:52PM 0 points [-]

As I understand it, the genie is not incentivized to be honest. It is incentivized to not get caught being dishonest. And the reason for the roundabout way of asking the question is to make the answer-channel bandwidth as narrow as possible.

Comment author: paulfchristiano 20 December 2010 10:30:23PM 2 points [-]

It is impossible to be dishonest by saying "yes," by construction. The genie is incentivized to say "yes" whenever possible, so it is disincentivized to be dishonest by saying "no." So the genie is incentivized to be honest, not just to avoid being called out for dishonesty.

Comment author: Vladimir_Nesov 20 December 2010 10:27:47PM 0 points [-]

As I understand it, the genie is not incentivized to be honest. It is incentivized to not get caught being dishonest.

Since we care about the genie actually being honest, the technique can be thought of as a way of making it more likely that the genie is honest, with the threat of punishing dishonesty a component of that technique.