
tailcalled comments on Boxing an AI? - Less Wrong Discussion

2 Post author: tailcalled 27 March 2015 02:06PM


Comments (39)


Comment author: tailcalled 27 March 2015 08:47:30PM *  3 points [-]

The idea isn't to make a box that looks like our world, because, as you pointed out, that would be pretty unconvincing. The idea is to make a world for the AI to inhabit that is radically different from and much simpler than ours, only loosely similar at the macroscopic level.

The purpose isn't to make friendliness unnecessary, but to test whether the basics of the AI work even when we aren't sure it's intelligent, and possibly, depending on how the AI is designed, to provide a space for testing friendliness. Just turning the AI on and seeing what happens would obviously be dangerous; hence boxing.

If you're just looking at ways to make the least-imperfect box you can, rather than claiming your box is perfect, I don't think I'm actually disagreeing with you here.

I'm claiming the box is perfect. You can't escape from a prison if you don't know it exists, and you can't figure out it exists if it's hidden in the laws of physics.

Comment author: Wes_W 27 March 2015 09:08:19PM 0 points [-]

and you can't figure out it exists if it's hidden in the laws of physics.

Respectfully, I think you're just shoving all your complexity under the rug here. Unless you have a concrete proposal on how to actually do this, just asserting that your box won't be figure-out-able is dodging the question.

Comment author: dxu 27 March 2015 10:11:38PM *  4 points [-]

At first glance, I was also skeptical of tailcalled's idea, but now I find I'm starting to warm up to it. Since you didn't ask for a practical proposal, just a concrete one, I give you this:

  1. Implement an AI in Conway's Game of Life.
  2. Don't interact with it in any way.
  3. Limit the computational power the box has, so that if the AI begins engaging in recursive self-improvement, it'll run more and more slowly from our perspective, so we'll have ample time to shut it off. (Of course, from the AI's perspective, time will run as quickly as it always does, since the whole world will slow down with it.)
  4. (optional) Create multiple human-level intelligences in the world (ignoring ethical constraints here), and see how the AI interacts with them. Run the simulation until you are reasonably certain (for a very stringent definition of "reasonably") from the AI's behavior that it is Friendly.
  5. Profit.
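Step 3 above is the mechanically interesting one, so here is a minimal sketch of what it could look like: a plain-Python Game of Life stepper with a cap on cell updates per second of wall-clock time. The throttling scheme and the `max_updates_per_sec` parameter are my own illustrative assumptions; the point is just that as the pattern grows, each generation takes longer in real time while the in-world clock is unaffected.

```python
import time
from collections import Counter

def step(live):
    """Advance one Game of Life generation. `live` is a set of (x, y) live cells."""
    # Count, for every cell adjacent to a live cell, how many live neighbors it has.
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Standard B3/S23 rule: birth on 3 neighbors, survival on 2 or 3.
    return {c for c, n in counts.items()
            if n == 3 or (n == 2 and c in live)}

def run_boxed(live, generations, max_updates_per_sec=1_000_000):
    """Run the simulation, sleeping so that bigger patterns run slower
    from our perspective -- the AI's subjective time is unaffected."""
    for _ in range(generations):
        cost = len(live)  # crude proxy for the work done this generation
        start = time.monotonic()
        live = step(live)
        budgeted = cost / max_updates_per_sec
        remaining = budgeted - (time.monotonic() - start)
        if remaining > 0:
            time.sleep(remaining)
    return live

# Sanity check with a glider, which shifts one cell diagonally every 4 generations.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
after = run_boxed(glider, 4)
print(after == {(x + 1, y + 1) for (x, y) in glider})  # True
```

A real version would of course enforce the budget at the interpreter/hardware level rather than by voluntary sleeping, but the slowdown-under-self-improvement property is the same.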
Comment author: kingmaker 30 March 2015 04:32:39PM *  3 points [-]

The problem with this is that even if you can determine with certainty that an AI is Friendly, there is no certainty that it will stay that way. There could be a series of errors as it goes about daily life, each acting as a mutation, gradually evolving the "Friendly" AI into a less friendly one.

Comment author: Wes_W 28 March 2015 02:06:28AM 2 points [-]

Hm. That does sound more workable than I had thought.

Comment author: tailcalled 27 March 2015 10:42:04PM 0 points [-]

I would probably only include it as part of a batch of tests and proofs. It would be pretty foolish to rely on a single method to verify something that will destroy the world if it fails.

Comment author: dxu 27 March 2015 10:43:50PM *  0 points [-]

Yes, I agree with you on that. (Step 5 was intended as a joke/reference.)

Comment author: tailcalled 27 March 2015 09:46:19PM 2 points [-]

Pick or design a game that captures some aspect of reality you care about in an AI. All games involve some element of learning, many involve planning, and some even involve varying degrees of programming.

As an example, I'll pick Factorio, a game that involves learning, planning and logistics. Wire the AI up to this game, with appropriate reward channels and so on. Now you can test how good the AI is at getting things done: producing goods, killing aliens (which isn't morally problematic, since the aliens don't behave like personlike, morally relevant beings) and generally learning about its universe.
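The "wire up the AI with reward channels" step could be sketched as a narrow environment interface. Everything here is a hypothetical stand-in, not an actual Factorio API: a toy "factory" where the agent can either build capacity or produce goods, and reward flows only through that one channel.

```python
class ToyFactoryEnv:
    """Illustrative stand-in for a production game: each step the agent
    either 'build's (capacity +1, no reward) or 'produce's (reward equal
    to current capacity)."""

    def __init__(self, horizon=20):
        self.horizon = horizon  # episode length in steps

    def reset(self):
        self.capacity = 0
        self.t = 0
        return self.capacity  # the observation is just the capacity

    def step(self, action):
        if action == "build":
            self.capacity += 1
            reward = 0
        else:  # "produce"
            reward = self.capacity
        self.t += 1
        done = self.t >= self.horizon
        return self.capacity, reward, done

def run_episode(env, policy):
    """Run one episode; the policy only ever sees observations and rewards."""
    obs, total, done = env.reset(), 0, False
    while not done:
        obs, reward, done = env.step(policy(obs))
        total += reward
    return total

# A fixed policy: build capacity for 10 steps, then produce for the rest.
score = run_episode(ToyFactoryEnv(), lambda obs: "build" if obs < 10 else "produce")
print(score)  # 10 capacity x 10 remaining steps = 100
```

The useful property for boxing purposes is that the agent's entire interface to the world is `reset`/`step`; how well it does at "getting stuff done" is then just the episode score.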

The step with morality depends on how the AI is designed. If it's designed to use heuristics to identify a group of entities as humans and help them, you might get away with throwing it into a procedurally generated RPG. If it uses more general, actually morally relevant criteria (such as intelligence, self-awareness, etc.), you might need a very different setup.

However, speculating at exactly what setup is needed for testing morality is probably very unproductive until we decide how we're actually going to implement morality.