Many people think you can solve the Friendly AI problem just by writing certain failsafe rules into the superintelligent machine's programming, like Asimov's Three Laws of Robotics. I thought the rebuttal to this was in "Basic AI Drives" or one of Yudkowsky's major articles, but after skimming them, I haven't found it. Where are the arguments concerning this suggestion?
It seems like an applause light to invoke international law as a solution to almost anything, and particularly to this problem. What aspect of rules hammered out through political compromise makes them less likely to have exploitable loopholes than any other system?
Fines? The misdoing we're worried about is a seizure of power. Fines presuppose that we retain power sufficient to punish an AI after its misdeeds, and they have nothing to do with programming it not to be harmful in the first place.
Somehow I don't think the solution to the problem of having powerful AIs that don't care about us (for better or worse) is to teach them Islamic law.