Chris: Sorry Allan, that you won't be able to reply. But you did raise the question before bowing out...
I didn't bow out, I just had a lot of comments made recently. :)
I don't like the idea that we should cooperate if it cooperates. No, we should defect if it cooperates. There are benefits and no costs to defecting.
But if there are reasons for the other to have habits that are formed by similar forces...
In light of what I just wrote, I don't see that it matters; but anyway, I wouldn't expect a paperclip maximizer to have habits so ingrained that it can't ever drop them. Even if it routinely has to make real trade-offs, it's presumably smart enough to see that - in a one-off interaction - there are no drawbacks to defecting.
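The "no drawbacks to defecting" point is just dominance in the one-shot game. A minimal sketch, using the standard illustrative payoffs (T=5, R=3, P=1, S=0; these numbers are an assumption, any payoffs with the usual ordering work the same way):

```python
# One-shot Prisoner's Dilemma; key is (my_move, their_move), value is my payoff.
PAYOFF = {
    ("C", "C"): 3,  # reward for mutual cooperation
    ("C", "D"): 0,  # sucker's payoff
    ("D", "C"): 5,  # temptation to defect
    ("D", "D"): 1,  # punishment for mutual defection
}

def best_reply(their_move):
    """My payoff-maximizing move, holding the other's move fixed."""
    return max("CD", key=lambda mine: PAYOFF[(mine, their_move)])

# Defecting dominates: it is the best reply whatever the other does,
# so in a genuine one-off there is no cost to defecting.
assert best_reply("C") == "D"  # 5 > 3
assert best_reply("D") == "D"  # 1 > 0
```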
Simpleton: No line of causality from one to the other is required.
Yeah, I get your argument now. I think you're probably right, in that extreme case.
If it's actually common knowledge that both players are "perfectly rational" then they must do whatever game theory says.
But if the paperclip maximizer knows that we're not perfectly rational (or falsely believes that we're not), it will try to achieve a better score than it could get if we were in fact perfectly rational. It will do this by cooperating, at least for a time.
I think the correct strategy gets profoundly complicated when one side believes the other side is not fully rational.
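The "cooperating, at least for a time" idea can be sketched in a finitely repeated game. Here the not-perfectly-rational side is modeled as a tit-for-tat habit (my assumption, a stand-in for any reactive, habit-driven opponent), with the same illustrative payoffs as the usual PD matrix:

```python
# Same illustrative payoffs as the standard one-shot matrix.
PAYOFF = {
    ("C", "C"): 3,  # mutual cooperation
    ("C", "D"): 0,  # sucker's payoff
    ("D", "C"): 5,  # temptation to defect
    ("D", "D"): 1,  # mutual defection
}

def score_vs_tit_for_tat(my_moves):
    """My total payoff against tit-for-tat: it cooperates first,
    then copies whatever I did last round."""
    total, prev_mine = 0, None
    for mine in my_moves:
        theirs = "C" if prev_mine is None else prev_mine
        total += PAYOFF[(mine, theirs)]
        prev_mine = mine
    return total

rounds = 10
always_defect = ["D"] * rounds
coop_then_defect = ["C"] * (rounds - 1) + ["D"]  # cooperate, defect at the end

# Against a reactive (non-game-theoretically-rational) opponent,
# cooperating for a time beats defecting from the start.
assert score_vs_tit_for_tat(coop_then_defect) > score_vs_tit_for_tat(always_defect)
```

Against a perfectly rational opponent the backward-induction answer snaps back to all-defect, which is exactly why the mixed case (one side doubting the other's rationality) gets complicated.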