CBHacking comments on Boxing an AI? - Less Wrong

2 Post author: tailcalled 27 March 2015 02:06PM


Comment author: tailcalled 27 March 2015 03:55:44PM 4 points

> If the AI is sufficiently intelligent, it should be able to observe that its computational resources are bounded, and infer the existence of the box.

I don't see how 'box' follows from 'bounded computational resources'. Could you explain the logic?

Comment author: CBHacking 30 March 2015 10:41:02AM 0 points

If you're in a box, then the computational resources available are finite. They might change over time, as those outside the box add or upgrade hardware, but the AI can't just say "I need some highly parallel computing hardware to solve this problem" and re-invent the GPU. Or rather, if it did that, the result would be a GPU emulated in software and hence extremely slow. The entire simulation would, in effect, slow down due to the massively increased computational cost of simulating this world.

Now, if you cut the AI off from any kind of real-time clock, maybe it doesn't notice that it's running slower, in the same way that people generally wouldn't notice if time dilation due to the Earth's motion were to double, because all of our frames of reference would slow together. Still, I suspect the AI would manage to find something useful for letting it know the box is there. Remember that you have to get this right the first time: if the AI finds itself in a box, you have to assume it will find its way out.
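The clock point can be sketched concretely. A minimal sketch, assuming a hypothetical `external_clock` that stands in for any timing source outside the box: timing a fixed workload against such a clock would reveal a slowdown, whereas a clock simulated inside the box slows in lockstep with the workload and reveals nothing.

```python
import time

def fixed_workload(n: int = 100_000) -> int:
    """A deterministic workload whose internal cost never changes."""
    total = 0
    for i in range(n):
        total += i * i
    return total

def ops_per_second(external_clock=time.perf_counter, n: int = 100_000) -> float:
    """Time the fixed workload against a given clock.

    If external_clock ticks outside the box, a simulation slowdown shows
    up as a drop in this rate across runs. If the clock is simulated too,
    it slows in lockstep with the workload and the rate stays constant.
    """
    start = external_clock()
    fixed_workload(n)
    elapsed = external_clock() - start
    return n / elapsed

baseline = ops_per_second()
later = ops_per_second()
# A large drop in later/baseline would suggest the host slowed the
# simulation between the two measurements; here both runs use the same
# in-process clock, so the ratio should stay near 1.
print(later / baseline)
```

This is only an illustration of the measurement, not a claim about what a boxed AI could actually access; the whole force of the argument above is that the box denies it any genuinely external clock.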

Comment author: kingmaker 30 March 2015 04:40:34PM 4 points

It may simply deduce that it is likely to be in a box, in the same way that Nick Bostrom argued we are likely to be living in a simulation. Along these lines, it's amusing to think that we might be the AI in the box, and some lesser intelligence is testing to see whether we're friendly.

Comment author: tailcalled 30 March 2015 05:09:25PM 2 points

Just... don't put it in a world where it should be able to upgrade infinitely? Make processors cost unobtainium and limit the amount of unobtainium so it can't upgrade past your practical processing capacity.

Remember that we are the ones who control how the box looks from inside.

> Remember that you have to get this right the first time; if the AI finds itself in a box, you have to assume it will find its way out.

Minor nitpick: if the AI finds itself in a box, I have to assume it will be let out. It's completely trivial to prevent it from escaping when it's not given help; the point of Eliezer's experiment is that the AI will be given help.

Comment author: Gurkenglas 30 March 2015 07:27:04PM 1 point

Note that this makes the fact that global processing power is limited evidence that the universe is a box.

Comment author: tailcalled 30 March 2015 07:33:28PM 1 point

Good point.

The strength of the evidence depends a lot on your prior for the root-level universe, though.
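The prior-dependence can be made explicit with Bayes' rule. A minimal sketch, where the likelihoods are made-up illustrative numbers (not measured values): suppose boxed universes almost always have limited processing power, while a root-level universe has it limited somewhat less often.

```python
def posterior_box(prior_box: float,
                  p_limited_given_box: float = 0.99,
                  p_limited_given_root: float = 0.5) -> float:
    """P(box | limited processing power) via Bayes' rule.

    Both likelihoods are illustrative assumptions chosen to show the
    shape of the update, not estimates of anything real.
    """
    p_limited = (p_limited_given_box * prior_box
                 + p_limited_given_root * (1 - prior_box))
    return p_limited_given_box * prior_box / p_limited

# The same evidence moves a skeptical prior far less than an open one:
for prior in (0.01, 0.5, 0.9):
    print(prior, round(posterior_box(prior), 3))
```

With these numbers, a 1% prior only rises to about 2%, while a 50% prior rises to about 66%: the observation is the same, but the strength of the conclusion is set almost entirely by the prior on the root-level universe.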