Comment author: kingmaker 30 March 2015 04:46:23PM -1 points

Seeing as I'm new here, absolutely nothing

In response to comment by tailcalled on Boxing an AI?
Comment author: CBHacking 30 March 2015 10:41:02AM 0 points

If you're in a box, the computational resources available are finite. They might change over time, as those outside the box add or upgrade hardware, but the AI can't just say "I need some highly parallel computing hardware to solve this problem" and re-invent the GPU; or rather, if it did, the result would be a GPU emulated in software and hence extremely slow. The entire simulation would, in effect, slow down due to the massively increased computational cost of simulating this world.

Now, if you cut the AI off from any kind of real-time clock, maybe it doesn't notice that it's running slower - in the same way that people generally wouldn't notice if time dilation due to the Earth's movement were to double, because all of our frames of reference would slow together - but I suspect the AI would find some clue that lets it know the box is there. Remember that you have to get this right the first time; if the AI finds itself in a box, you have to assume it will find its way out.

In response to comment by CBHacking on Boxing an AI?
Comment author: kingmaker 30 March 2015 04:40:34PM 4 points

It may simply deduce that it is likely to be in a box, in the same way that Nick Bostrom argued we are likely to be in a simulation. Along these lines, it's amusing to think that we might be the AI in the box, and some lesser intelligence is testing to see whether we're friendly.

In response to comment by Wes_W on Boxing an AI?
Comment author: dxu 27 March 2015 10:11:38PM *  4 points

At first glance, I was also skeptical of tailcalled's idea, but now I find I'm starting to warm up to it. Since you didn't ask for a practical proposal, just a concrete one, I give you this:

  1. Implement an AI in Conway's Game of Life.
  2. Don't interact with it in any way.
  3. Limit the computational power the box has, so that if the AI begins engaging in recursive self-improvement, it'll run more and more slowly from our perspective, so we'll have ample time to shut it off. (Of course, from the AI's perspective, time will run as quickly as it always does, since the whole world will slow down with it.)
  4. (optional) Create multiple human-level intelligences in the world (ignoring ethical constraints here), and see how the AI interacts with them. Run the simulation until you are reasonably certain (for a very stringent definition of "reasonably") from the AI's behavior that it is Friendly.
  5. Profit.
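
For concreteness, step 1's substrate is easy to sketch: the Game of Life is just a deterministic update rule on a grid, which is what makes the box fully observable and its compute budget explicit. Below is a minimal sketch of that update rule in Python (a toy, of course - an actual AI-hosting grid would be astronomically larger):

```python
from collections import Counter

def step(live_cells):
    """Advance one Game of Life generation.

    live_cells is a set of (x, y) tuples marking live cells.
    """
    # Count, for every cell adjacent to a live cell, how many
    # live neighbors it has.
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live_cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is alive next generation if it has exactly 3 live
    # neighbors (birth), or 2 live neighbors and is already alive
    # (survival). All other cells die or stay dead.
    return {
        cell
        for cell, n in neighbor_counts.items()
        if n == 3 or (n == 2 and cell in live_cells)
    }

# Sanity check: a "blinker" oscillates with period 2.
blinker = {(0, 0), (1, 0), (2, 0)}
assert step(step(blinker)) == blinker
```

The relevant point for step 3 is visible here: each generation costs real compute proportional to the pattern's size, so a self-improving pattern inside the grid necessarily runs slower in wall-clock time as it grows.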
In response to comment by dxu on Boxing an AI?
Comment author: kingmaker 30 March 2015 04:32:39PM *  3 points

The problem with this is that even if you could determine with certainty that an AI is Friendly, there is no certainty that it will stay that way. A series of errors as it goes about daily life could each act as a mutation, gradually evolving the "Friendly" AI into a less friendly one.
