Will_Newsome comments on Open Thread June 2010, Part 3 - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
If you know how to build a uFAI (or a "probably somewhat reflective on its goal system but nowhere near provably Friendly" AI), build one and put it in an encrypted glass case. Ideally you would work out the AGI theory in your head, determine how long it would take to code the AGI after adjusting for the planning fallacy, and then be ready to start coding if doom is predictably going to occur. If doom isn't predictable, then the safety tradeoffs are larger. This can easily go wrong, obviously.