tailcalled comments on Boxing an AI? - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
If you're in a box, then the computational resources available to you are finite. They might change over time, as those outside the box add or upgrade hardware, but the AI can't just say "I need some highly parallel computing hardware to solve this problem" and re-invent the GPU; or rather, if it did, the result would be a GPU emulated in software and hence extremely slow. The entire simulation would, in effect, slow down under the massively increased computational cost of simulating that world.
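The slowdown argument can be made concrete with a toy sketch (my own illustration, not from the thread): any "processor" the boxed AI builds inside the simulation has to be interpreted instruction by instruction by the host, so each guest step costs at least one host dispatch step, and in practice far more.

```python
# Toy register machine interpreter: each in-simulation instruction costs
# the host at least one dispatch step, so guest hardware built inside the
# box can never be faster than the host emulating it.

def run_emulated(program, steps_counter):
    """Interpret a tiny instruction set: ('add', reg, val) / ('halt',)."""
    regs = {0: 0}
    host_steps = 0
    pc = 0
    while True:
        op = program[pc]
        host_steps += 1  # >= 1 host step per guest instruction
        if op[0] == 'add':
            _, r, v = op
            regs[r] = regs.get(r, 0) + v
        elif op[0] == 'halt':
            break
        pc += 1
    steps_counter.append(host_steps)
    return regs[0]

# Guest program: sum 1..100 on the emulated machine.
program = [('add', 0, n) for n in range(1, 101)] + [('halt',)]
counter = []
emulated = run_emulated(program, counter)
native = sum(range(1, 101))

assert emulated == native          # same answer as running natively...
# ...but the guest paid one host dispatch per instruction (counter[0]
# == 101 here), and a realistic emulator pays a much larger multiple.
```

A real software emulator of parallel hardware would multiply this overhead by orders of magnitude, which is the sense in which the whole simulated world slows down.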
Now, if you cut the AI off from any kind of real-time clock, maybe it doesn't notice that it's running slower, in the same way that people generally wouldn't notice if time dilation due to the Earth's motion were to double, because all of our frames of reference would slow together. Still, I suspect the AI would find something that lets it know the box is there. Remember that you have to get this right the first time: if the AI finds itself in a box, you have to assume it will find its way out.
Just... don't put it in a world where it should be able to upgrade infinitely? Make processors cost unobtainium and limit the amount of unobtainium so it can't upgrade past your practical processing capacity.
Remember that we are the ones who control how the box looks from inside.
Minor nitpick: if the AI finds itself in a box, I have to assume it will be let out. It's completely trivial to prevent it from escaping when not given help; the point in Eliezer's experiment is that the AI will be given help.
Note that this makes the fact that global processing power is limited evidence that the universe is a box.
Good point.
The strength of the evidence depends a lot on your prior for the root-level universe, though.