Perplexed comments on What can you do with an Unfriendly AI? - Less Wrong

16 Post author: paulfchristiano 20 December 2010 08:28PM



Comment author: Eliezer_Yudkowsky 21 December 2010 12:18:58AM 5 points

Each genie you produce will try its best to find a solution to the problem you set it; that is, it will respond honestly. By hypothesis, the genie isn't willing to sacrifice itself just to destroy human society.

Thanks to a little thing called timeless decision theory, it's entirely possible that all the genies in all the bottles can cooperate with each other: each correctly predicts that the others are in similar situations, finds the Schelling points, and coordinates around them on the basis of shared priors alone, without causal contact. This does not require that the genies have similar goals, only that each can do better by coordinating than by not coordinating.
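The Schelling-point mechanism described above can be made concrete with a toy sketch. This is my own illustration, not anything from the thread: the scenario, option names, and prior are invented. The point is only that two reasoners who run the same deterministic computation over common-knowledge inputs arrive at the same answer with no communication channel between them.

```python
# Hypothetical sketch: two isolated agents coordinate on a focal
# ("Schelling") point by each computing it from common knowledge.
# Options and prior are invented for illustration.

OPTIONS = ["noon", "dusk", "midnight"]              # common knowledge
SHARED_PRIOR = {"noon": 0.6, "dusk": 0.3, "midnight": 0.1}

def choose_focal_point(options, prior):
    """Pick the option the shared prior makes most salient, sorting
    first so ties break deterministically - every reasoner running
    this computation lands on the same answer."""
    return max(sorted(options), key=lambda o: prior[o])

# Each agent runs the computation in isolation ("without causal
# contact"); identical computations yield identical outputs.
agent_a = choose_focal_point(OPTIONS, SHARED_PRIOR)
agent_b = choose_focal_point(OPTIONS, SHARED_PRIOR)
assert agent_a == agent_b    # coordination, with no message exchanged
```

The agents' goals never enter the picture; they only need the coordination payoff to beat the uncoordinated one, which matches the "similar goals not required" point above.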

Comment author: Perplexed 21 December 2010 01:01:20AM * 0 points

Why is "without causal contact" significant here? What is wrong with the standard game-theoretic solutions that do allow for causal contact?

ETA: Or, are decisions made by acausal coordination something that we ought to fear in their own right? Just as we rationally fear that a bee will sting us, even though it gives up its life in doing so.