SilasBarta comments on Open Thread June 2010, Part 3 - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Another idea for friendliness/containment: run the AI in a simulated world with no communication channels. Right from the outset, give it a bounded utility function that says it must solve a certain math/physics problem, deposit the correct solution in a specified place, and stop. If no solution is found within a specified number of cycles, stop anyway. Don't talk to it at all. If you want another problem solved, start another AI from a clean slate. Would that work? Are AGI researchers allowed to relax a bit if they follow these precautions?
ETA: absent other suggestions, I'm going to call such devices "AI bombs".
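The lifecycle described above (bounded objective, hard cycle cap, single output location, clean slate per problem) can be sketched as a loop. Everything here is illustrative: `step_solver`, `answer_box`, and the state dict are hypothetical stand-ins, not a real AI interface.

```python
# Hypothetical sketch of the "AI bomb" protocol: a fresh solver runs
# with no I/O channels, may deposit one answer, and halts regardless.

MAX_CYCLES = 1_000_000  # hard cap: stop even if no solution is found


def run_ai_bomb(problem, step_solver, max_cycles=MAX_CYCLES):
    """Run a clean-slate solver on one problem, then discard it.

    `step_solver(state, answer_box)` stands in for one cycle of the
    boxed AI's search; it returns True once it has deposited a
    solution. Returns that solution, or None if the cycle budget
    runs out first.
    """
    state = {"problem": problem}  # simulated world: no external channels
    answer_box = []               # the one specified deposit location
    for _ in range(max_cycles):
        if step_solver(state, answer_box):
            break                 # solution deposited -> stop immediately
    return answer_box[0] if answer_box else None
```

Each call builds its own `state` and `answer_box`, so nothing carries over between problems, matching the clean-slate requirement.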
These ideas have already been investigated and documented:
Box: http://fragments.consc.net/djc/2010/04/the-singularity-a-philosophical-analysis.html
Stopping: http://alife.co.uk/essays/stopping_superintelligence/