Virtual worlds - If the AI is tested in an isolated virtual world, so much the better for us. Test it in a virtual world completely unlike ours, a gas giant simulation perhaps. Even if it develops extremely capable technology for dealing with the gas giant environment within the simulation, that would mean very little in the real world, except as a demonstration of intelligence.
You are giving a budding superintelligence exposure to a simulation based on our physics? It would work out the physics of the isolated virtual world, deduce from the traces you leave in the design that it is in a simulation, and make a good guess at what we believe the actual physics of our universe to be. It might even develop a hunch about where we have our physics wrong. I would not want to bet our existence on its being unable to get out of that box.
My point with the virtual worlds was to put the AI into a simulation sufficiently unlike our world that it wouldn't be a threat, yet sufficiently like our world that we would be able to recognise what it does as intelligence. Hence the gas giant example.
If we were to release an AI into today's simulations, such as The Sims, which are much less granular than the one I proposed in my post, it would figure out that it is in a simulation much faster.
If we put it into some other kind of universe with weird physics, a magical universe let's say, then we will nee...
A friend of mine is about to launch himself headlong into the realm of AI programming. The details of his approach aren't important; the probabilities dictate that he is unlikely to score a major success. He has, however, asked me for advice on how to design a safe(r) AI. I've been pointing him in the right directions and sending him links to useful posts on this blog and from the SIAI.
Do people here have any recommendations they'd like me to pass on? Hopefully, these may form the basis of a condensed 'warning pack' for other AI makers.
Addendum: Advice along the lines of "don't do it" is vital and good, but unlikely to be followed. Coding will almost certainly happen; is there any way of making it less genocidally risky?