JamesAndrix comments on Advice for AI makers - Less Wrong

7 Post author: Stuart_Armstrong 14 January 2010 11:32AM




Comment author: JamesAndrix 17 January 2010 12:45:37AM 4 points

A virtual world doesn't buy you any safety, even if the AI can't break out of the simulator.

If you manage to make AI, you've got a Really Powerful Optimization Process. If it worked out simulated physics and has access to its own source, it's probably smart enough to 'foom', even within the simulation. At which point you have a REALLY powerful optimizer, and no idea how to prove anything about its goal system. An untrustable genie.

Also, spending all those cycles on that kind of simulated world would be hugely inefficient.

Comment author: blogospheroid 17 January 2010 07:22:59PM 2 points

James, you can't blame me for responding to the question. Stuart has said that advice on giving up will not be accepted. The question is how to minimise the fallout if a lucky stroke moves this guy's AI forward and it fooms. Both of my suggestions were aimed at that.

Comment author: JamesAndrix 17 January 2010 08:26:56PM 2 points

You are quite right.