Wei_Dai comments on Thomas C. Schelling's "Strategy of Conflict" - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Yes, I see that your decision theory (is it the same as Eliezer's?) gives better solutions in the circumstances you listed:
Do you think it gives better solutions in the case of AIs (who don't initially think they're copies of each other) trying to cooperate? If so, can you give a specific scenario and show how the solution is derived?