Boxing an AI is the idea that you can avoid the problem of an AI destroying the world by not giving it access to the world. For instance, you might give the AI access to the real world only through a chat terminal with a person, called the gatekeeper. This should, in theory, prevent the AI from doing anything destructive.
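As a rough illustration of what that restriction amounts to, here is a minimal sketch, assuming the AI is just some function from text to text (the names here are hypothetical, not anyone's actual design): the AI's only channel to the world is a string pipe moderated by the human gatekeeper.

```python
# Sketch of a "boxed" interface: the AI's only effect on the world is
# text shown to a human gatekeeper, who decides whether to act on it.
# `ai_respond` is a stand-in for whatever the AI system actually is.

def ai_respond(message: str) -> str:
    # Placeholder for the boxed AI; it never touches anything but strings.
    return "..."

def gatekeeper_session() -> None:
    while True:
        outgoing = input("Gatekeeper> ")   # gatekeeper types to the AI
        if outgoing == "/end":
            break                          # gatekeeper can always stop the session
        reply = ai_respond(outgoing)       # the AI sees only this string
        print("AI>", reply)                # gatekeeper reads the reply and chooses
                                           # whether to do anything with it
```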
Eliezer has pointed out a problem with boxing an AI: the AI might convince its gatekeeper to let it out. To demonstrate the point, he played the AI in a simulated version of the box experiment and talked his way out. Twice. That is somewhat unfortunate, because it means testing an AI is a bit trickier.
However, I had an idea: why tell the AI it's in a box at all? Why not hook it up to a sufficiently advanced game, set up the right reward channels, and see what happens? Once you get the basics working, you can add more instances of the AI and see if they cooperate. This lets us adjust their morality until the AIs act sensibly. Then the AIs can't escape from the box, because they don't know it's there.
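To make that concrete, here is a minimal sketch of the kind of harness I have in mind, assuming a gym-style simulated world and treating the AI instances as opaque policies (every class and name here is hypothetical): the agents only ever see observations and rewards from the toy world, with nothing pointing outside it.

```python
# Sketch of the proposed test harness: several AI instances act inside a
# simple simulated world, receiving rewards we control, with no channel
# to anything outside the simulation. All classes here are illustrative.

class ToyWorld:
    """A much simpler world than ours; hands out observations and rewards."""
    def reset(self):
        return {"agent_obs": [None, None]}      # one observation per agent

    def step(self, actions):
        obs = {"agent_obs": [None, None]}       # next observations
        rewards = [0.0, 0.0]                    # the reward channel we designed
        done = False
        return obs, rewards, done

class BoxedAgent:
    """Opaque stand-in for one AI instance; it sees only the toy world."""
    def act(self, observation):
        return None                             # pick an in-world action

    def learn(self, observation, reward):
        pass                                    # update from reward alone

def run_episode(world, agents, max_steps=1000):
    obs = world.reset()
    for _ in range(max_steps):
        actions = [a.act(o) for a, o in zip(agents, obs["agent_obs"])]
        obs, rewards, done = world.step(actions)
        for agent, o, r in zip(agents, obs["agent_obs"], rewards):
            agent.learn(o, r)                   # we watch from outside whether
        if done:                                # the instances cooperate
            break
```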
Yeah, but the AI can still use empiricism within its simulated world: it can notice inconsistencies and infer that the world is simulated. If it's smarter than us, and its world is probably less convincing than reality, I would not want to bet at strong odds against the AI figuring things out.
Boxing is potentially a useful component of real safety design, in the same way that seatbelts are a useful component of car design: it might save you, but it also has ways to fail.
The problem with AI safety proposals is that they usually take the form of "Instead of figuring out Friendliness, why don't we just do X?" where X is something handwavey that has some obvious ways to fail. The usual response here is to point out those obvious ways it can fail, hopefully so that the proposer notices they haven't obviated the need to solve the actual problem.
If you're just looking at ways to make the least-imperfect box you can, rather than claiming your box is perfect, I don't think I'm actually disagreeing with you here.
The idea isn't to make a box that looks like our world, because, as you pointed out, that would be pretty unconvincing. The idea is to put the AI in a world that is radically different from ours, only loosely similar at the macroscopic level, and much simpler.
The purpose isn't to make Friendliness unnecessary, but to test whether the basics of the AI work even when we aren't sure it's intelligent, and possibly, depending on how the AI is designed, to provide a space for testing Friendliness. Just turning the AI on and seeing what happens would obviously be dangerous, hence boxing.