Vulture comments on Dark Arts of Rationality - LessWrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Thanks for the feedback!
I disagree. The Prisoner's Dilemma does not specify that you are blind as to the nature of your opponent. "Visible source code" is a device to allow bots an analog of the many character analysis tools available when humans play against humans.
If you think you're playing against Omega, or if you use TDT and you think you're playing against someone else who uses TDT, then you should cooperate. I don't think an inability to reason about your opponent makes the game more "True".
This is one of the underlying insights, but another is "your monkey brain may be programmed to act optimally under strange parameters". Someone else linked a post by MBlume which makes a similar point (in, perhaps, a less aggravating manner).
It may be that you gain access to certain resources only when you believe things you epistemically shouldn't. In such cases, cultivating false beliefs (preferably compartmentalized) can be very useful.
I apologize for the aggravation. My aim was to be provocative and perhaps uncomfortable, but not aggravating.
The transparent version of the Prisoner's Dilemma, and the more complicated 'shared source code' version that shows up on LW, are generally considered variants of the basic PD.
In contrast to games where you can say things like "I cooperate if they cooperate, and I defect if they defect," in the basic game you simply say "I cooperate" or "I defect." You might know some things about them, and they might know some things about you, but there is no causal connection between your action and their action, as there would be if they were informed of your action, shown your source code, or able to perceive the future.
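To make the distinction concrete, here is a toy sketch (my own illustration, not from the thread) of the "shared source code" variant: each bot's strategy receives the opponent's source and may condition on it. The string labels stand in for real source code, and the bot names are invented for this example.

```python
def make_clique_bot():
    """A bot that cooperates only with exact copies of itself."""
    source = "clique_bot"  # stands in for the bot's actual source code
    def strategy(opponent_source):
        # Cooperate when the opponent is provably an identical copy;
        # defect against everyone else.
        return "C" if opponent_source == source else "D"
    return source, strategy

def make_defect_bot():
    """A bot that defects unconditionally."""
    source = "defect_bot"
    def strategy(opponent_source):
        return "D"
    return source, strategy

def play(bot_a, bot_b):
    """Each bot sees the other's source before choosing a move."""
    src_a, strat_a = bot_a
    src_b, strat_b = bot_b
    return strat_a(src_b), strat_b(src_a)
```

Two clique bots recognize each other and reach mutual cooperation, while a clique bot facing a defector defects as well; the visible source creates the logical (not causal) link between moves that the basic game lacks.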
"Aggravating" may have been too strong a word; "disappointed" might have been better, that I saw content I mostly agreed with presented in a way I mostly disagreed with, with the extra implication that the presentation was possibly more important than the content.
To me, a "vanilla" Prisoner's Dilemma involves actual human prisoners who may reason about their partners. I don't mean to imply that I think "standard" PD involves credible pre-commitments nor perfect knowledge of the opponent. While I agree that in standard PD there's no causal connection between actions, there can be logical connections between actions that make for interesting strategies (eg if you expect them to use TDT).
On this point, I'm inclined to think that we agree and are debating terminology.
That's even worse! :-)
I readily admit that my presentation is tailored to my personality, and I understand how others may find it grating.
That said, a secondary goal of this post was to instill doubt in concepts that look sacred (terminal goals, epistemic rationality) and encourage people to consider that even these may be sacrificed for instrumental gains.
It seems you already grasp the tradeoffs between epistemic and instrumental rationality, and that you can consistently reach mental states that are elusive to naively epistemically rational agents, having come to these conclusions by different means than I did. By my analysis, many others need a push before they will even consider "terminal goals" and "false beliefs" as strategic tools. This post caters more to them.
I'd be very interested to hear more about how you've achieved similar results with different techniques!