
Should EAs be Superrational Cooperators?

8 diegocaleiro 16 September 2014 09:41PM

Back in 2012, when visiting Leverage Research, I was amazed by the level of cooperation Mark showed in everyday situations. Mark wasn't just nice, or kind, or generous. Mark seemed to be playing a different game than everyone else.

If someone needed X, and Mark had X, he would provide it. This was true for lending, but also for giving things away.

If there was a situation in which someone needed to direct attention to a particular topic, Mark would do it.

You get the picture. Faced with prisoner's dilemmas, Mark would cooperate. Faced with a tragedy of the commons, Mark would cooperate. Faced with non-egalitarian distributions of resources, time, or luck (which are convoluted forms of the dictator game), Mark would rearrange resources without any indexical evaluation. The action would be the same, the consequentialist one, regardless of which side of a dispute Mark happened to be on.

I never got over that impression: the impression that I could try to be as cooperative as my idealized fiction of Mark.

In game-theoretic terms, Mark was a Cooperational agent:

  1. Altruistic - MaxOther
  2. Cooperational - MaxSum
  3. Individualist - MaxOwn
  4. Equalitarian - MinDiff
  5. Competitive - MaxDiff
  6. Aggressive - MinOther
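
As a minimal sketch (the payoff convention, function names, and example numbers below are my own illustrative assumptions, not from the post), each orientation can be written as an objective function over an outcome's payoff pair (own, other):

    # Sketch: the six orientations as objectives over a two-player outcome.
    OBJECTIVES = {
        "Altruistic":    lambda own, other: other,               # MaxOther
        "Cooperational": lambda own, other: own + other,         # MaxSum
        "Individualist": lambda own, other: own,                 # MaxOwn
        "Equalitarian":  lambda own, other: -abs(own - other),   # MinDiff
        "Competitive":   lambda own, other: own - other,         # MaxDiff
        "Aggressive":    lambda own, other: -other,              # MinOther
    }

    def best_action(orientation, outcomes):
        """Pick the (action, own, other) triple maximizing the orientation's objective."""
        score = OBJECTIVES[orientation]
        return max(outcomes, key=lambda o: score(o[1], o[2]))

    # One-shot prisoner's dilemma from the row player's view, assuming the
    # other player cooperates (illustrative payoffs):
    pd = [("cooperate", 3, 3), ("defect", 5, 0)]
    print(best_action("Cooperational", pd)[0])  # cooperate (3 + 3 > 5 + 0)
    print(best_action("Individualist", pd)[0])  # defect    (5 > 3)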

Under these definitions of agent types, used in research on game-theoretic scenarios, what we call Effective Altruism would be called Effective Cooperation. The reason we call it "altruism" is that even the most parochial EAs care about a set containing a minimum of 7 billion minds, where to a first approximation MaxSum ≈ MaxOther.

Locally, however, the distinction makes sense. In biology, "altruism" usually refers to a third concept, different from both the "A" in EA and from Alt: acting in such a way that Other > Own, without reference to maximizing or minimizing, since evolution designs adaptation executors, not maximizers.

A globally Cooperational agent acts as a consequentialist globally. So does an Alt agent.

The question then is,

How should a consequentialist act locally?

The mathematical answer is obvious: as a Coo. What real people actually do is a mix of Coo and Ind.

My suggestion is that we use our undesirable yet unavoidable tribal instinct for moral distinctions, the one that separates Us from Them, and act always as Coos with Effective Altruists, mixing Coo and Ind only with non-EAs. That is what Mark did.
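
A toy sketch of that policy, under the same conventions as the sketch above (the 0.5 mixing weight is an arbitrary assumption; the post doesn't specify one):

    # Sketch: pure MaxSum toward fellow EAs, a MaxSum/MaxOwn blend otherwise.
    def local_objective(own, other, counterpart_is_ea, ind_weight=0.5):
        coo = own + other   # Cooperational: MaxSum
        ind = own           # Individualist: MaxOwn
        if counterpart_is_ea:
            return coo
        return (1 - ind_weight) * coo + ind_weight * ind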

 

17 Rules to Make a Definition that Avoids the 37 Ways of Words Being Wrong

15 mathnerd314 22 February 2014 05:16AM

Eliezer's writing style of A->B, then A, then B, though generally clear, results in a large amount of redundancy.

In this post, I have attempted to cut in half the number of rules you need to remember. The bracketed numbers refer to the rules in the original post.

So, without further ado, a good definition for a word:

  1. can be shown to be wrong [37] and is not the final [13] authority [18, 19]
  2. has strong justifications [33] for the word's existence [32] and its particular definition [20], which leave no room for an argument [17, 22]
  3. agrees with conventional usage [4]
  4. explains what context the word depends on [36]
  5. limits its scope to avoid overlap with other meanings [25]
  6. does not assume that definitions are the best way of giving words semantics [12]
  7. directs a complex mental paintbrush [35] to paint detailed pictures of the thing you're trying to think about [23]
  8. is a brain inference aid [13] that refers to, and instructs one on how to find, a specific/unique [24] similarity cluster [21] that is apparent from empirical experience [28, 29, 30], the cluster's size being inversely proportional to the word's length [31]
  9. is not a binary category [9, 11] and cannot be used for deductive inference [27]
  10. requires observing only [14] a few [3] real-world [1] properties that can be easily [5] verified [2] and are less abstract [6] than the word being defined (in particular, the definition cannot be circular [16])
  11. is not just a list of random properties [10, 21]
  12. contains no negated properties [10, 33]
  13. specifies exhaustively all of the correct connotations of the word [25, 26]
  14. makes the properties of a random object satisfying the definition nearly independent [34]
  15. has examples [6] which satisfy the definition, including the original example(s) that motivated the definition being given [15] and typical/conventional examples [7]
  16. tells you which examples are more typical or less typical [9]
  17. captures enough characteristics of the examples to identify non-members [8]

And there you go: 17 rules; follow them all and you can't use words wrongly.

Prescriptive vs. descriptive and objective vs. subjective definitions

4 PhilGoetz 21 January 2014 11:21PM

Imagine you're writing a Field Guide to Boats, and you want to know what you should include in your field guide. Barges? Rafts? These things?

You want something like a dictionary definition of "boat": a descriptive definition, which includes anything people commonly think of as a boat, and an objective one, because you're writing only one book, not a separate version for each reader.

Now imagine you're stranded on an island, and you open a bottle, and a genie comes out and gives you one wish, and you say, "I wish for a boat!", and the genie says, "Well, what's a boat?" And you know, because you've read stories, that the genie will take your definition of "boat" and try to screw you over. You'd better not read out the dictionary definition, or the genie will give you a toy boat, or a boat with a hole in it, or a kayak too small for you to fit into. You need a prescriptive, subjective definition of a thing that will transport you over water.


You'll be who you care about

21 Stuart_Armstrong 20 September 2011 05:52PM

Eliezer wonders about the thread of conscious experience: "I don't think that, if I were really selfish, I could jump off a cliff knowing smugly that a different person would experience the consequence of hitting the ground."

Instead of wondering whether we should be selfish towards our future selves, let's reverse the question. Let's define our future selves as agents that we can strongly influence, and that we strongly care about. There are other aspects that round out our intuitive idea of future selves (such as having the same name and possessions, and a thread of conscious experience), but this seems the most fundamental one.

In future, this may help clarify issues of personal identity once copying is widespread:

These two future copies, Mr Jones, are they both 'you'? "Well yes, I care about both, and can influence them both."

Mr Jones Alpha, do you feel that Mr Jones Beta, the other current copy, is 'you'? "Well no, I only care a bit about him, and have little control over his actions."

Mr Evolutionary-Jones Alpha, do you feel that Mr Evolutionary-Jones Beta, the other current copy, is 'you'? "To some extent; I care strongly about him, but I only control his actions in an updateless way."

Mr Instant-Hedonist-Jones, how long have you lived? "Well, I don't care about myself in the past or in the future, beyond my current single conscious experience. So I'd say I've lived a few seconds, a minute at most. The other Instant-Hedonist-Joneses are strangers to me; do with them what you will. Though I can still influence them strongly, I suppose; tell you what, I'll sell my future self into slavery for a nice ice-cream. Delivered right now."
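
As a rough sketch of the definition at work, the dialogues above can be read as scoring each relationship on care and influence. The numeric scores and the 0.5 threshold below are invented for illustration, since the post gives only qualitative judgments:

    # Sketch: "future self" as a function of care and influence, per the
    # post's definition. Scores and threshold are illustrative inventions.
    def selfhood(care, influence, threshold=0.5):
        if care >= threshold and influence >= threshold:
            return "you"
        if care >= threshold or influence >= threshold:
            return "partly you"
        return "someone else"

    cases = {
        "Jones -> either future copy":           (0.9, 0.9),  # "yes, both"
        "Jones Alpha -> Jones Beta (current)":   (0.2, 0.1),  # "well, no"
        "Evo-Jones Alpha -> Evo-Jones Beta":     (0.9, 0.4),  # "to some extent"
        "Instant-Hedonist-Jones -> future self": (0.1, 0.9),  # cares only about now
    }
    for name, (care, influence) in cases.items():
        print(f"{name}: {selfhood(care, influence)}")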