Wei_Dai comments on Thomas C. Schelling's "Strategy of Conflict" - Less Wrong

81 Post author: cousin_it 28 July 2009 04:08PM




Comment author: Wei_Dai 30 July 2009 08:17:02PM  4 points

But sometimes, you'll find a better solution than if you only lived in a moment.

Yes, I see that your decision theory (is it the same as Eliezer's?) gives better solutions in the following circumstances:

  • dealing with Omega
  • dealing with copies of oneself
  • cooperating with a counterpart in another possible world

Do you think it gives better solutions in the case of AIs (who don't initially think they're copies of each other) trying to cooperate? If so, can you give a specific scenario and show how the solution is derived?
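To make the "copies of oneself" case above concrete, here is a minimal sketch (my illustration, not from the comment or either decision theory's formal statement) of why an agent that knows its opponent runs the same decision procedure cooperates in a one-shot Prisoner's Dilemma, while a purely causal chooser defects. The payoff values are the standard ones, chosen for illustration.

```python
# Standard one-shot Prisoner's Dilemma payoffs for the row player:
# (my_move, their_move) -> my payoff.
PAYOFF = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def causal_choice(them):
    # CDT-style reasoning: treat the opponent's move `them` as fixed.
    # Against either fixed move, D yields a strictly higher payoff,
    # so the causal chooser always defects.
    return max("CD", key=lambda me: PAYOFF[(me, them)])

def copy_aware_choice():
    # UDT/TDT-style reasoning for an exact copy: my copy's move is
    # whatever mine is, so I compare only the diagonal outcomes
    # PAYOFF[(m, m)] and pick the move with the higher mutual payoff.
    return max("CD", key=lambda m: PAYOFF[(m, m)])

print(causal_choice("C"))    # "D"
print(causal_choice("D"))    # "D"
print(copy_aware_choice())   # "C"
```

The sketch only covers exact copies; the question in the comment — AIs that do *not* initially believe they are copies of each other — is precisely the case this simple diagonal argument does not settle.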