Let's not pat ourselves on the back too much. Voters here absolutely respond to social cues (albeit unusual ones from the perspective of the wider culture) and to local status; the vote record on a post is not a totally dispassionate estimate of its quality.
That said, pure social awkwardness might limit a post's potential upvotes, but it usually isn't enough to get a post downvoted: that takes obvious bias, factual error, egregiously bad English, a perception of bad faith, or -- exceptionally -- attracting the ire of a serial downvoter. The truly clueless may risk pattern-matching to "bad faith", but that's fairly rare; the rest are more or less orthogonal to social skills.
As I understand it, Hofstadter's advocacy of cooperation was limited to games with some sense of source-code sharing. Basically, both agents could assume their co-players had an identical method of deciding on the optimal move, and that that method was optimal. That assumption allows a rather bizarre little proof that cooperation is the move said method arrives at.
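The shape of that proof can be sketched in a few lines of code. This is a minimal illustration, not anything from Hofstadter directly: the payoff values are the standard PD numbers, and `shared_decision` is a hypothetical name for the one method both players are assumed to run. Since both players execute the same procedure, only the symmetric outcomes (C,C) and (D,D) are reachable, and the method simply picks the better diagonal.

```python
# Illustrative sketch (assumed standard PD payoffs, T > R > P > S).
# (my_move, their_move) -> my payoff
PAYOFF = {
    ("C", "C"): 3,
    ("C", "D"): 0,
    ("D", "C"): 5,
    ("D", "D"): 1,
}

def shared_decision():
    """The single decision method both players are assumed to run.
    Because the co-player runs this exact procedure, their move equals
    whatever it returns, so only symmetric outcomes need comparing."""
    diagonal = {move: PAYOFF[(move, move)] for move in ("C", "D")}
    return max(diagonal, key=diagonal.get)

a, b = shared_decision(), shared_decision()
print(a, b)  # -> C C: cooperation falls out of the symmetry assumption
```

The "bizarre" feel of the argument shows up clearly here: nothing in the code ever considers the off-diagonal outcomes, because the identical-method assumption rules them out by construction.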
And think about it: how could a mathematician actually advocate cooperation in a pure, zero-knowledge, vanilla PD? That just doesn't make any sense as a model of an intelligent human being's opinions.
Agreed. But here is what I think Hofstadter was saying: the assumption used can be weaker than the assumption that the two players have an identical method. Rather, it just needs to be that they are both "smart". And this is almost as strong a result as one for the true zero-knowledge scenario, because most agents will do their best to be smart.
Why is he saying that "smart" agents will cooperate? Because they know that the other agent is the same as them in that respect. (In being smart, and also in knowing what being smart means.)
Now, there are some obvious holes in this, but it does hold a certain grain of truth, and is a fairly powerful result in any case. (TDT is, in a sense, a generalization of exactly this idea.)