jimmy comments on Decision theory: Why Pearl helps reduce “could” and “would”, but still leaves us with at least three alternatives - Less Wrong
The latter. I haven't thought about this enough to be comfortable knowing how similar his algorithm must be in order to cooperate, but if I ultimately decided to defect, it'd be because I thought it qualified as sufficiently different.
So you fully expect that in real life you might defect and yet see the other person cooperate (with the standard ethical disclaimers about how hard it is to make the PD a true one such that you actually prefer to see that outcome).
Yes, that's correct. I also currently see a significant probability of choosing to cooperate and finding out that the other guy defected on me. Should I take your response as evidence to reconsider? As I said before, I don't claim to have this all sorted out.
As to your disclaimer, it seems your impression is that it's much harder to make a true PD than mine suggests. If you think you can make the thing truly one-shot without reputational consequences (which may be the hard part, though it seems like you think it's the other part), then it's just a question of setting up the payoff table.
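For concreteness, "setting up the payoff table" just means choosing payoffs with the standard PD ordering. A minimal sketch (the numbers here are illustrative, not anything the commenters specified):

```python
# A one-shot Prisoner's Dilemma payoff table. The defining property is
# the ordering T > R > P > S (temptation > reward > punishment > sucker).
T, R, P, S = 5, 3, 1, 0  # illustrative values only

# payoffs[(my_move, their_move)] -> (my_payoff, their_payoff)
payoffs = {
    ("C", "C"): (R, R),
    ("C", "D"): (S, T),
    ("D", "C"): (T, S),
    ("D", "D"): (P, P),
}

def best_response(their_move):
    """My payoff-maximizing move given the other player's fixed move."""
    return max(("C", "D"), key=lambda m: payoffs[(m, their_move)][0])

# With this ordering, defection dominates: whatever the other player
# does, "D" pays me strictly more than "C".
assert best_response("C") == "D"
assert best_response("D") == "D"
```

This is why the disagreement above centers on whether the payoffs can actually be made to satisfy that ordering for real agents (no reputation effects, no caring about the other's outcome), not on the table itself.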
If you don't have personal connections to the other party, it seems that you don't care any more about him than about any of the other 6 billion people on earth. If you can meet those conditions, even a small contribution to fighting existential risks (funded by your prize money) should outweigh however much you care about him.