"I'm increasingly inclined to thing there should be some regulatory oversight, maybe at the national and international level just to make sure that we don't do something very foolish."
http://www.cnet.com/news/elon-musk-we-are-summoning-the-demon-with-artificial-intelligence/#ftag=CAD590a51e
One idea for a first pass could be: suppose you had a computer with 1000 times the computing power of the best supercomputer. Would running your algorithm on that machine be dangerous on its own?
For example, I think even with 1000x computing power the deep-learning-type approach would be OK; it would just give you really good image/voice/action recognizers. On the other hand, consider DeepMind's general game-playing program, which plays a variety of simple video games near-optimally, including exploiting bugs. A system like this at 1000x power, given decent models of parts of the world and robotics, may be hard to contain. So in summary, I would say a panel of experts rating the danger of the program running at 1000x computing power would be an OK first pass.
I know the architecture of DeepMind's system (it's reinforcement learning + deep learning, basically) and can guarantee you that 1000x computing power would have a hard time getting you to NES games, let alone anything dangerous.
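For anyone unfamiliar with that recipe, here is a minimal sketch of what "reinforcement learning + deep learning" means: Q-learning, where the value function is a trained approximator updated by gradient descent on the temporal-difference error. DeepMind's Atari agent uses a convolutional network plus tricks like experience replay; a linear approximator and a toy environment stand in here so the example stays self-contained. All names and sizes are illustrative, not DeepMind's actual code.

```python
import numpy as np

rng = np.random.default_rng(0)

N_STATES, N_ACTIONS = 8, 4      # toy sizes, purely illustrative
GAMMA, LR, EPSILON = 0.99, 0.01, 0.1

# Linear Q-function: Q(s, a) = W[a] . phi(s), with phi a one-hot state encoding.
# In DQN this role is played by a deep convolutional network over raw pixels.
W = rng.normal(scale=0.01, size=(N_ACTIONS, N_STATES))

def phi(state):
    v = np.zeros(N_STATES)
    v[state] = 1.0
    return v

def act(state):
    # Epsilon-greedy: mostly exploit current Q estimates, occasionally explore.
    if rng.random() < EPSILON:
        return int(rng.integers(N_ACTIONS))
    return int(np.argmax(W @ phi(state)))

def td_update(s, a, r, s_next, done):
    # One-step Q-learning target: r + gamma * max_a' Q(s', a').
    target = r if done else r + GAMMA * np.max(W @ phi(s_next))
    td_error = target - W[a] @ phi(s)
    W[a] += LR * td_error * phi(s)  # gradient step on the squared TD error

def env_step(s, a):
    # Hypothetical stub environment with random dynamics, just to run the loop.
    s_next = int(rng.integers(N_STATES))
    reward = float(rng.random() < 0.1)
    done = rng.random() < 0.05
    return s_next, reward, done

s = 0
for _ in range(1000):
    a = act(s)
    s_next, r, done = env_step(s, a)
    td_update(s, a, r, s_next, done)
    s = 0 if done else s_next
```

The point of the sketch: the learner only sees states, actions, and a scalar reward, and improves by nudging its value estimates toward observed returns. Scaling the compute makes the approximator bigger and the training longer; it doesn't by itself add world models or planning.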