ygert comments on Dark Arts of Rationality - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (185)
Reading the ensuing disagreement, this seems like a good occasion to ask whether this is a policy suggestion, and if so what it is. I don't think So8res disagrees about any theorems over formalisms of game theory/PD (e.g. about dominant strategies), so the scope of the disagreement seems to be, at most, how one should use the phrase 'Prisoner's Dilemma'. There are more direct ways of arguing that point, e.g. pointing to ways in which applying the term ('Prisoner's Dilemma'), originally used for the formal PD, to situations like playing against various bots causes thought to systematically go astray or causes confusion.
Pretty much. Cashing out my disagreement as a policy recommendation: don't call a situation a true PD if that situation's feasible set doesn't include (C, D) & (D, C). Otherwise one might deduce that cooperation is the rational outcome for the one-shot, vanilla PD. It isn't, even if believing it is puts one in good company.
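The dominance claim above can be checked mechanically. This is a sketch with hypothetical payoff numbers (any payoffs satisfying T > R > P > S would do); the `best_response` helper is my own illustration, not anything from the thread:

```python
# Hypothetical payoffs for the row player in a one-shot PD,
# chosen so that T(5) > R(3) > P(1) > S(0).
payoffs = {
    ("C", "C"): 3,  # R: reward for mutual cooperation
    ("C", "D"): 0,  # S: sucker's payoff
    ("D", "C"): 5,  # T: temptation to defect
    ("D", "D"): 1,  # P: punishment for mutual defection
}

def best_response(opponent_move):
    """Return the row player's payoff-maximizing move
    against a fixed opponent move."""
    return max(["C", "D"], key=lambda m: payoffs[(m, opponent_move)])

# Defection is the best response to either opponent move,
# i.e. D strictly dominates C in the vanilla one-shot game.
print(best_response("C"))  # D
print(best_response("D"))  # D
```

Since D is the best response whatever the opponent does, defection is the dominant strategy in the one-shot vanilla PD, which is the point being made about not mislabeling other games as "true" PDs.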
As I understand it, Hofstadter's advocacy of cooperation was limited to games with some sense of source-code sharing. Basically, both agents were able to assume their co-players had an identical method of deciding on the optimal move, and that that method was optimal. That assumption allows a rather bizarre little proof that cooperation is the result said method arrives at.
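The "bizarre little proof" can be sketched as follows. This is an assumed formalization, not anything Hofstadter wrote: if both players are known to run the very same deterministic procedure, the feasible outcomes collapse to the diagonal {(C, C), (D, D)}, and the procedure should pick the better diagonal entry:

```python
# Payoffs on the diagonal only (R > P), since with an identical
# shared decision procedure, (C, D) and (D, C) are unreachable.
payoffs = {("C", "C"): 3, ("D", "D"): 1}

def shared_method():
    """The single decision procedure both players run.
    Whatever move it returns, the other player returns the same
    move, so it only needs to maximize over the diagonal."""
    return max(["C", "D"], key=lambda m: payoffs[(m, m)])

move = shared_method()
print((move, move))  # both players necessarily play the same move
```

Mutual cooperation pays more than mutual defection, so the shared method outputs C; but note this conclusion leans entirely on the assumption that the co-player runs an identical procedure.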
And think about it: how could a mathematician actually advocate cooperation in the pure, zero-knowledge, vanilla PD? That just doesn't make any sense as a model of an intelligent human being's opinions.
Agreed. But here is what I think Hofstadter was saying: the assumption that is used can be weaker than the assumption that the two players have an identical method. Rather, it just needs to be that they are both "smart". And this is almost as strong a result as one for the true zero-knowledge scenario, because most agents will do their best to be smart.
Why is he saying that "smart" agents will cooperate? Because they know that the other agent is the same as them in that respect. (In being smart, and also in knowing what being smart means.)
Now, there are some obvious holes in this, but it does hold a certain grain of truth, and is a fairly powerful result in any case. (TDT is, in a sense, a generalization of exactly this idea.)
Have you seen this explored in mathematical language? 'Cause it's all so weird that there's no way I can agree with Hofstadter to that extent. As yet, I don't really know what "smart" means.
Yeah, I agree, it is weird. And I think that Hofstadter is wrong: With such a vague definition of being "smart", his conjecture fails to hold. (This is what you were saying: It's rather vague and undefined.)
That said, TDT is an attempt to put a similar idea on firmer ground. In that sense, the TDT paper is the exploration in mathematical language of this idea that you are asking for. It isn't Hofstadterian superrationality, but it is inspired by it, and TDT puts these amorphous concepts that Hofstadter never bothered solidifying into a concrete form.