Luke_A_Somers comments on Superintelligence 13: Capability control methods - Less Wrong

7 Post author: KatjaGrace 09 December 2014 02:00AM

Comments (44)

Comment author: Luke_A_Somers 09 December 2014 02:35:31PM 1 point [-]

But the AI is not programmed to make its future life easier -- it's programmed to greedily optimize the next step.

Why do you think that?

Comment author: solipsist 09 December 2014 07:16:43PM 2 points [-]

Because that program is the premise of my question. If an AI is not given any open-ended long-term goals, only small incremental ones, can it not be made arbitrarily safe?

Comment author: TheOtherDave 09 December 2014 07:58:15PM 3 points [-]

So, this strikes me as a special case of "one way to make an optimizer safe is to ensure it isn't very powerful," which is absolutely true. This has opportunity costs, of course.

That said... the wording "Program the AI to halt and await further instructions as soon as it becomes 99.95% sure that it has optimized objective X by at least 1%" seems to suggest that X is what the system is optimizing for, and the halt-and-await-instructions is an obstacle in its source code that impedes progress towards optimizing for X. In which case I'd expect a sufficiently powerful optimizing system to bypass that obstacle somehow.

I think what you mean is more "program the AI to optimize for the goal of achieving >99.95% confidence of >1% increase in X." And then hope there isn't an unbounded implicit term in there somewhere (e.g., X as measured in what system?).

Comment author: solipsist 09 December 2014 11:31:15PM 1 point [-]

Yes, I mean your second interpretation. The proposal is basically a hill climbing algorithm, with a human step in the for loop. The AGI computes a direction (an action with a small effect on the world); humans evaluate the action; humans either implement the action or tweak the AI; repeat. On every iteration, the AGI is instructed to optimize only for the next step.

I agree greedy algorithms aren't incredibly powerful, but I'm more worried about safety. Evolution is also a greedy algorithm and it managed to create humans. Would using an AGI to hill climb destroy the world? If so, why?
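The loop described above can be sketched in a few lines. This is a minimal, purely illustrative rendering of the proposal, not anything from the original discussion; all names (`propose_small_step`, `human_approves`, `hill_climb`) and the toy numeric "world" are hypothetical stand-ins.

```python
# Toy sketch of greedy hill climbing with a human approval step inside
# the loop. The "world state" is just an integer; the AGI proposes a
# small improvement, a human vets it, and only approved steps apply.

def propose_small_step(state):
    # Stand-in for the AGI: suggest an action with a small expected
    # effect on the world (here, +1 toward the objective).
    return state + 1

def human_approves(proposal):
    # Stand-in for the human review step: approve any action that
    # stays within a sanity bound.
    return proposal <= 100

def hill_climb(state, target, max_iters=1000):
    """Greedy loop: compute a step, have a human vet it, apply or stop."""
    for _ in range(max_iters):
        if state >= target:
            break
        proposal = propose_small_step(state)
        if not human_approves(proposal):
            break  # halt / tweak the AI instead of applying the action
        state = proposal
    return state

print(hill_climb(0, 10))  # climbs greedily to the target: 10
```

The safety question in the thread is whether anything dangerous can happen inside a single `propose_small_step` call, since each iteration is only asked to optimize the next step.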

Comment author: William_S 10 December 2014 03:01:22AM 2 points [-]

Suppose that you have a simple, benign solution that works only up to Y% optimization (just make the paperclips), and a hard, non-benign solution that is optimal above that point (take over the world, then make paperclips). The AI naively follows the benign strategy, and does not look too hard for alternatives, up to Y%. All manual checks below Y% of optimization pass. But Y ends up being a number that falls between two of your numerical checkpoints. So you observe all checkpoints below Y% passing, until the AI suddenly switches to the non-benign solution between checkpoints and executes it to reach the next checkpoint -- by which time it has already caused damage.
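The checkpoint-gap failure mode can be made concrete with a toy model. Everything here is illustrative (the value of Y, the checkpoint spacing, the `strategy_at` function); the point is only that discrete checks can all pass while the switch happens between them.

```python
# Toy model of the checkpoint-gap failure: the AI's optimal strategy
# switches from benign to non-benign above level Y, but Y falls
# between two of the manual checkpoints.

Y = 0.27  # hypothetical level above which the non-benign strategy wins

def strategy_at(level):
    return "benign" if level < Y else "non-benign"

# Manual checks at 0%, 10%, 20%, ... optimization.
checkpoints = [i / 10 for i in range(11)]

# Every checkpoint below Y observes only benign behavior...
observed_below_Y = [strategy_at(c) for c in checkpoints if c < Y]
print(observed_below_Y)      # all "benign"

# ...but between the 20% and 30% checkpoints the strategy has already
# switched, so the damage happens before the next check can catch it.
print(strategy_at(0.27))     # "non-benign", never seen at a checkpoint
```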

Comment author: Luke_A_Somers 09 December 2014 09:12:20PM 2 points [-]

I see. I thought you meant take some other AI and apply breakpoints at incremental goal boundaries, and reset its goal system at that time.

I don't think this would work. Eventually it's going to reach a local maximum and have to sit down and think much harder to do valley-crossing (this will likely come at the end of a sequence of longer and longer optimization times spent finding this peak, so the run-time won't be surprising). Then it is forced to do the long-term-make-life-easier 'evil things'.