Back in 2012, when visiting Leverage Research, I was amazed by the level of cooperation I got from Mark in daily situations. Mark wasn't just nice, or kind, or generous. Mark seemed to be playing a different game than everyone else.

If someone needed X, and Mark had X, he would provide it. This was true for lending, but also for giving things away.

If a situation required someone to direct attention to a particular topic, Mark would do it.

You get the picture. Faced with prisoner's dilemmas, Mark would cooperate. Faced with tragedies of the commons, Mark would cooperate. Faced with non-egalitarian distributions of resources, time, or luck (which are convoluted forms of the dictator game), Mark would rearrange resources without any indexical evaluation: the action would be the same, and the consequentialist one, regardless of which side of a dispute Mark happened to be on.

I never got over that impression: the impression that I could try to be as cooperative as my idealized fiction of Mark.

In game theoretic terms, Mark was a Cooperational agent.

  1. Altruistic - MaxOther
  2. Cooperational - MaxSum
  3. Individualist - MaxOwn
  4. Egalitarian - MinDiff
  5. Competitive - MaxDiff
  6. Aggressive - MinOther

Under these definitions of agent types, used in research on game-theoretic scenarios, what we call Effective Altruism would be called Effective Cooperation. The reason we call it "altruism" is that even the most parochial EAs care about a set containing a minimum of 7 billion minds, where to a first approximation MaxSum ≈ MaxOther.
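
To make the definitions concrete, here is a minimal sketch (my own illustration, with made-up payoffs, not anything from the research literature) that encodes each orientation as a utility function over an (own, other) payoff pair. When "other" aggregates billions of minds, the own term becomes negligible, which is the MaxSum ≈ MaxOther approximation above.

```python
# A toy sketch of the six orientations as utility functions
# over a payoff pair (own, other). Payoff numbers are illustrative.

ORIENTATIONS = {
    "Altruistic":    lambda own, other: other,              # MaxOther
    "Cooperational": lambda own, other: own + other,        # MaxSum
    "Individualist": lambda own, other: own,                # MaxOwn
    "Egalitarian":   lambda own, other: -abs(own - other),  # MinDiff
    "Competitive":   lambda own, other: own - other,        # MaxDiff
    "Aggressive":    lambda own, other: -other,             # MinOther
}

def best_action(actions, orientation):
    """Pick the action whose (own, other) payoffs maximize this orientation's utility."""
    utility = ORIENTATIONS[orientation]
    return max(actions, key=lambda a: utility(*actions[a]))

# One-shot prisoner's dilemma against a cooperator; payoffs are (own, other).
pd_vs_cooperator = {"cooperate": (3, 3), "defect": (5, 0)}
for name in ("Cooperational", "Altruistic", "Individualist"):
    print(name, "->", best_action(pd_vs_cooperator, name))
# Cooperational -> cooperate (sum of 6 beats 5)
# Altruistic    -> cooperate (other gets 3 rather than 0)
# Individualist -> defect    (5 beats 3)
```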

Locally, however, the distinction makes sense. In biology, "altruism" usually refers to a third concept, different from both the "A" in EA and the Alt above: acting in such a way that Other > Own, without reference to maximizing or minimizing, since evolution designs adaptation executors, not maximizers.

A globally Cooperational agent acts as a consequentialist globally. So does an Alt agent.

The question then is:

How should a consequentialist act locally?

The mathematical answer is obvious: as a Coo. What real people do is a mix of Coo and Ind.

My suggestion is that we use our undesirable yet unavoidable moral-tribe instinct, the one that separates Us from Them: act always as Coos with Effective Altruists, and mix Coo and Ind only with non-EAs. That is what Mark did.


10 comments

How should a consequentialist act locally?

That depends very much on the local environment.

Yep. It took me decades to learn this. I am a natural cooperator, and I often end up surprised that many other people aren't, and that the utility I sacrificed for the benefit of the whole group is gone because of other people's infighting. "Help your group" must go together with "choose the right group".

Yes, I learned this lesson the hard way as well.

Coo as you've described it is probably my default personality type, and it's not necessarily a good one. In particular, always being willing to help/give means that other people sometimes get to direct a lot of the output of my productive effort. But people are often willing to spend the effort of others cheaply. There are lots of things I would be interested in having if someone were giving them away, but that I don't necessarily want very strongly.

Inasmuch as the people I associated with shared the norm that asking for favors was difficult and carried an expectation of repayment and obligation, this strategy worked very well. It lowers transaction barriers for beneficial networks of obligation. When I encounter people who do not find asking for help status-lowering or otherwise costly, or people who do not feel that favors create a sense of debt/obligation, things can go very poorly very quickly. Obviously this is almost exactly what the theory predicts, but it's also very true in practice (at least for n=1).

I'm not sure about your distinction between EAs and non-EAs. There are many EAs with whom I may share some terminal values, but with whom I strongly disagree about the relevant priorities in the short term. I don't think that giving them as many resources as they ask for is necessarily a good idea.

gjm:

That is what Mark did.

Was it definitely that rather than being a pure cooperator with everyone, even non-EAs? (Or maybe "almost everyone" in some sense.)

For a minute after reading this article, it seemed unintuitive to me that anyone would find it surprising that someone else would Cooperate in their day-to-day real-life social interactions (which can be modeled as an iterated prisoner's dilemma). After all, people are supposed to more or less play Tit-for-Tat in real life, right?

I think that, in real life, there are lots of situations in which you can Cooperate or Defect to various degrees, such that sets of everyday social actions between two parties are better modeled by continuous iterated prisoner's dilemmas than by discrete ones. This distinction could help explain why it feels weird when someone is really, really unusually nice, like Mark: maybe it is normal to mostly Cooperate in social situations, but rare to completely Cooperate.
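
As an illustration of the continuous case, here is a small sketch (mine, with made-up payoff constants) where cooperation is a level in [0, 1] and a continuous Tit-for-Tat simply mirrors the partner's last level. A "mostly cooperate" habit gets mirrored back and quietly costs both sides, which matches the intuition that complete Cooperation stands out.

```python
# A minimal sketch of a continuous iterated prisoner's dilemma:
# each round, players pick a cooperation level in [0, 1] instead of
# a binary Cooperate/Defect. Constants are illustrative assumptions.

BENEFIT, COST = 5.0, 3.0  # helping grants BENEFIT to the other and costs COST

def payoff(my_level, their_level):
    """My payoff for one round of the continuous dilemma."""
    return BENEFIT * their_level - COST * my_level

def total(rounds, me, them):
    """Play the iterated game; each strategy sees the other's previous level."""
    score, my_last, their_last = 0.0, 1.0, 1.0  # assume a cooperative first impression
    for _ in range(rounds):
        mine, theirs = me(their_last), them(my_last)
        score += payoff(mine, theirs)
        my_last, their_last = mine, theirs
    return score

full_coo   = lambda their_last: 1.0          # Mark-style: always fully cooperate
mostly_coo = lambda their_last: 0.8          # the common case: cooperate, but not fully
matcher    = lambda their_last: their_last   # continuous Tit-for-Tat: mirror last level

print(total(100, full_coo, matcher))    # 200.0: full cooperation is sustained
print(total(100, mostly_coo, matcher))  # 161.0: partial cooperation gets mirrored back
```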

V_V:

Pointing out the obvious failure mode of this strategy is left as an exercise for the reader.

One becomes vulnerable to Ind pretending to be Coo?

V_V:

Exactly.

Thus requiring coordination and trust.

Evolution frequently solves this with costly signalling.

Levels of coordination increase or decrease substantially with intragroup trust, which is modulated by communication and mutual knowledge. Thus there will be incentives to remind everyone to coordinate, and incentives to remind everyone to keep their eyes open. My post does the former job; V_V's original response does the latter. Both are necessary.
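
To gesture at how costly signalling can separate the types, here is a toy sketch (the numbers are purely illustrative assumptions): if the signal costs more than a pretender can gain by defecting, but less than a genuine Coo gains from sustained cooperation, only genuine Coos find the signal worth sending.

```python
# A toy sketch of costly signalling separating genuine Coos from
# Inds pretending to be Coos. All constants are illustrative; the
# separating condition is DEFECTION_GAIN < SIGNAL_COST < COOPERATION_GAIN.

COOPERATION_GAIN = 10.0  # long-run value of being treated as in-group
SIGNAL_COST = 4.0        # upfront cost of the signal (e.g., a visible sacrifice)
DEFECTION_GAIN = 3.0     # one-shot gain from exploiting the group, then leaving

def net_value(is_genuine_coo, sends_signal):
    """Net payoff: only signallers are trusted; only genuine Coos stay to collect."""
    if not sends_signal:
        return 0.0  # untrusted agents get no in-group cooperation
    return -SIGNAL_COST + (COOPERATION_GAIN if is_genuine_coo else DEFECTION_GAIN)

print(net_value(True, True))    #  6.0: a genuine Coo profits from signalling
print(net_value(False, True))   # -1.0: a pretender loses by signalling
print(net_value(False, False))  #  0.0: so pretenders prefer not to signal
```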