XiXiDu comments on Open Thread June 2010, Part 3 - Less Wrong

6 Post author: Kevin 14 June 2010 06:14AM


Comment author: XiXiDu 07 July 2011 07:32:52PM 0 points [-]

That is, if you have an option of trading Doom for UFAI, while forsaking only negligible probability of FAI, you should take it.

Fascinating! Do you still agree with what you wrote there? Are you still researching these issues, and do you plan on writing a progress report or an open problems post? Would you be willing to write a survey paper on decision-theoretic issues related to acausal trade?

Comment author: Vladimir_Nesov 07 July 2011 08:03:38PM *  0 points [-]

My best guess about what's preferable to what is still as stated, but I'm significantly less certain of its truth (there are analogies that make the answer come out differently, and the level of rigor in the above comment is not much better than in those analogies). In any case, I don't see how we can actually use these considerations. (I'm working in a direction that should ideally make questions like this clearer in the future.)

Comment author: Will_Newsome 08 July 2011 12:13:14PM *  0 points [-]

In any case, I don't see how we can actually use these considerations.

If you know how to build a uFAI (or a "probably somewhat reflective on its goal system but nowhere near provably Friendly" AI), build one and put it in an encrypted glass case. Ideally you would work out the AGI theory in your head, determine how long it would take to code the AGI after adjusting for the planning fallacy, then be ready to start coding if doom is predictably going to occur. If doom isn't predictable, the safety tradeoffs are larger. This can easily go wrong, obviously.