Yvain comments on Backward Reasoning Over Decision Trees - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (57)
See my response to steven0461 and my footnote. Yes, we will eventually be able to derive cooperation, but we will derive it by starting with selfish assumptions.
I don't think the math models motivation anyway. It's abstracted away, leaving each agent maximising *a* utility function. Nor is utility in the model (which is well defined) isomorphic to utility for a person making decisions in the real world (which is not). But our minds seem to learn things better when they are couched in terms of a story about people.
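To make that concrete, here is a minimal sketch (all names and payoffs hypothetical) of what the math actually models: each agent simply maximises a utility function by backward induction over a decision tree, with no "motivation" represented anywhere.

```python
def backward_induce(node, player):
    """Return (utilities, choice) for a two-player decision tree.

    utilities is a tuple (u_player0, u_player1); choice is the
    action the current player selects, or None at a leaf."""
    if "payoff" in node:                     # leaf: fixed utilities
        return node["payoff"], None
    # Evaluate each child, then pick the action maximising the
    # *current* player's own utility — selfish by construction.
    results = {a: backward_induce(child, 1 - player)[0]
               for a, child in node["moves"].items()}
    best = max(results, key=lambda a: results[a][player])
    return results[best], best

# Toy sequential game: player 0 moves, then player 1 responds.
tree = {"moves": {
    "cooperate": {"moves": {
        "cooperate": {"payoff": (3, 3)},
        "defect":    {"payoff": (0, 5)},
    }},
    "defect": {"moves": {
        "cooperate": {"payoff": (5, 0)},
        "defect":    {"payoff": (1, 1)},
    }},
}}

utilities, move = backward_induce(tree, player=0)
print(move, utilities)  # prints: defect (1, 1)
```

With these (assumed) prisoner's-dilemma-style payoffs, pure maximisation over the tree yields mutual defection, illustrating the point that the formalism starts from selfish assumptions: any cooperation would have to be derived on top of this, not read out of the model's "motivation".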
Hmm. One danger in this is assuming that your own internal story about what the equations mean is what they actually mean, so that you end up overconfident that a decision's results in the real world will match the story in your head.