Vladimir_Nesov comments on Open Thread June 2010, Part 3 - Less Wrong

Post author: Kevin 14 June 2010 06:14AM

Comment author: Vladimir_Nesov 07 July 2011 08:03:38PM 0 points

My best guess about what's preferable to what is still this way, but I'm significantly less certain of its truth (there are analogies that make the answer come out differently, and the level of rigor in the above comment is not much better than that of those analogies). In any case, I don't see how we can actually use these considerations. (I'm working in a direction that should ideally make questions like this clearer in the future.)

Comment author: Will_Newsome 08 July 2011 12:13:14PM 0 points

In any case, I don't see how we can actually use these considerations.

If you know how to build a uFAI (or a "probably somewhat reflective on its goal system but nowhere near provably Friendly" AI), build one and put it in an encrypted glass case. Ideally you would work out the AGI theory in your head, determine how long it would take to code the AGI after adjusting for the planning fallacy, and then be ready to start coding only if doom is predictably going to occur. If doom isn't predictable, then the safety tradeoffs are larger. This can easily go wrong, obviously.