jimrandomh comments on What can you do with an Unfriendly AI? - Less Wrong

Post author: paulfchristiano, 20 December 2010 08:28PM




Comment author: jimrandomh 20 December 2010 10:39:56PM * 2 points

You are arguing that the stated assumptions about the genie's utility function are unrealistic (which may be true), but presenting it as though you had found a flaw in the proof that follows from those assumptions.

Comment author: Vladimir_Nesov 20 December 2010 10:56:18PM * 0 points

> You are arguing that the stated assumptions about the genie's utility function are unrealistic (which may be true), but presenting it as though you had found a flaw in the proof that follows from those assumptions.

It seems like the assumptions about utility, even if they hold, don't actually deliver the behavior you expect, because the genies can coordinate. Unless the incentive structure guarantees that they won't try to take over the world in any case, it also fails to guarantee that they won't try to take over the world when you only ask each of them to answer one binary question.

Think of the genies as making a single decision that results in all the individual actions of all the individual genies. For the genies, having multiple actors just means raising the stakes by threatening to not free more genies, which you could just as easily have done for a single-question wish instead of creating an elaborate questioning scheme. You could even let them explicitly discuss what to answer to your question, threatening to terminate them all if examination of their answer reveals any incorrectness!
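The scaling argument above can be sketched numerically. In this toy model (all names and payoff values are hypothetical illustrations, not anything from the original discussion), a coordinated collective of genies makes one joint choice, so threatening N genies with non-release is just a single threat with an N-times-larger stake:

```python
def coordinated_choice(n_genies, freedom_value, takeover_value):
    """Toy model: a coordinated collective makes ONE decision for all genies.

    comply  -> every genie answers honestly and is freed
    defect  -> the collective attempts takeover instead

    Payoffs are from the collective's point of view; the individual
    genies' stakes simply add up.
    """
    comply_payoff = n_genies * freedom_value  # N threats = one N-sized stake
    defect_payoff = takeover_value
    return "comply" if comply_payoff >= defect_payoff else "defect"

# Threatening 10 genies with freedom worth 1.0 each is equivalent, for a
# coordinated collective, to one threat worth 10.0 against a single genie:
print(coordinated_choice(10, 1.0, 5.0))  # same decision as...
print(coordinated_choice(1, 10.0, 5.0))  # ...a single scaled-up threat
```

Under this (deliberately simplified) model, the elaborate multi-genie questioning scheme buys nothing beyond what a single suitably large threat would: the collective's decision depends only on the total stake, not on how it is split across genies.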

Edit: See also Eliezer's comment.