
February 27 2011 Southern California Meetup

7 JenniferRM 24 February 2011 05:05AM

January 2011 Southern California Meetup

8 JenniferRM 18 January 2011 04:50AM

There will be a Southern California meetup this Sunday, January 23, 2011, starting at 4PM and running for three to five hours.  The meetup is happening at Marco's Trattoria.  The address is:

8200 Santa Monica Blvd
West Hollywood, CA 90046

If everyone shows up (including guests and high-end group estimates), we'll be at the limit of the space with 24 attendees.  Previous meetups had room for walk-ins and future meetups should as well, but this one is full.  If you didn't RSVP in time for this one but want an email reminder when the February meetup is scheduled, send me a PM with your contact info.


Blackmail, Nukes and the Prisoner's Dilemma

20 Stuart_Armstrong 10 March 2010 02:58PM

This example (and the whole method for modelling blackmail) is due to Eliezer. I have just recast them in my own words.

We join our friends, the Countess of Rectitude and Baron Chastity, in bed together. Having surmounted their recent difficulties (she paid him, by the way), they decide to relax with a good old game of prisoner's dilemma. The payoff matrix is as usual:

(Baron, Countess)       Countess: Cooperate   Countess: Defect
Baron: Cooperate        (3,3)                 (0,5)
Baron: Defect           (5,0)                 (1,1)

Were they both standard game theorists, they would both defect, and the payoff would be (1,1). But recall that the baron occupies an epistemic vantage over the countess. While the countess only gets to choose her own action, he can choose from among four more general tactics:

  1. (Countess C, Countess D)→(Baron D, Baron C)   "contrarian" : do the opposite of what she does
  2. (Countess C, Countess D)→(Baron C, Baron C)   "trusting soul" : always cooperate
  3. (Countess C, Countess D)→(Baron D, Baron D)   "bastard" : always defect
  4. (Countess C, Countess D)→(Baron C, Baron D)   "copycat" : do whatever she does

Recall that he counterfactually considers what the countess would do in each case, while assuming that the countess considers his decision a fixed fact about the universe. Were he to adopt the contrarian tactic, she would maximise her utility by defecting, giving a payoff of (0,5). Similarly, she would defect in both trusting soul and bastard, giving payoffs of (0,5) and (1,1) respectively. If he goes for copycat, on the other hand, she will cooperate, giving a payoff of (3,3).
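A minimal sketch in Python (the code and names are mine, purely illustrative; the payoffs come from the matrix above) makes the Baron's comparison mechanical: for each tactic, let the Countess best-respond as if the tactic were a fixed fact, then read off the resulting payoff.

    # Payoffs are (Baron, Countess), as in the matrix above.
    PAYOFF = {
        ('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
        ('D', 'C'): (5, 0), ('D', 'D'): (1, 1),
    }

    # A tactic maps the Countess's move to the Baron's move.
    TACTICS = {
        'contrarian':    {'C': 'D', 'D': 'C'},
        'trusting soul': {'C': 'C', 'D': 'C'},
        'bastard':       {'C': 'D', 'D': 'D'},
        'copycat':       {'C': 'C', 'D': 'D'},
    }

    for name, tactic in TACTICS.items():
        # The Countess treats the tactic as fixed and maximises her own
        # (second) payoff component.
        her_move = max('CD', key=lambda c: PAYOFF[(tactic[c], c)][1])
        print(name, PAYOFF[(tactic[her_move], her_move)])
    # contrarian (0, 5); trusting soul (0, 5); bastard (1, 1); copycat (3, 3)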

Thus when one player occupies a superior epistemic vantage over the other, they can do better than standard game theorists, and manage to both cooperate.

"Isn't it wonderful," gushed the Countess, pocketing her 3 utilitons and lighting a cigarette, "how we can do such marvellously unexpected things when your position is over mine?"


The Blackmail Equation

13 Stuart_Armstrong 10 March 2010 02:46PM

This is Eliezer's model of blackmail in decision theory, from the recent workshop at SIAI, filtered through my own understanding. Eliezer's help and advice were much appreciated; any errors herein are my own.

The mysterious stranger blackmailing the Countess of Rectitude over her extra-marital affair with Baron Chastity doesn't have to run a complicated algorithm. He simply has to credibly commit to a course of action Z:

"If you don't give me money, I will reveal your affair."

And then, generally, the Countess forks over the cash. Which means the blackmailer never does reveal the details of the affair, so that threat remains entirely counterfactual/hypothetical. Even if the blackmailer is Baron Chastity, and the revelation would be devastating for him as well, this makes no difference at all, as long as he can credibly commit to Z. In the world of perfect decision makers, there is no risk to doing so, because the Countess will hand over the money, so the Baron will not take the hit from the revelation.

Indeed, the baron could replace "I will reveal our affair" with Z="I will reveal our affair, then sell my children into slavery, kill my dogs, burn my palace, and donate my organs to medical science while boiling myself in burning tar" or even "I will reveal our affair, then turn on an unfriendly AI", and it would matter only insofar as it changed his ability to credibly pre-commit to Z. If the Baron can commit to counterfactually doing Z, then he never has to do Z (as the countess will pay him the hush money), so it doesn't matter how horrible the consequences of Z would be for himself.

To get some numbers in this model, assume the countess can either pay up or not do so, and the baron can reveal the affair or keep silent. The payoff matrix could look something like this:

(Baron, Countess)       Countess: Pay   Countess: Not pay
Baron: Reveal           (-90,-110)      (-100,-100)
Baron: Silent           (10,-10)        (0,0)
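The same style of sketch as before (the payoffs come from this matrix; the code itself is illustrative, not from the post) shows what the commitment buys the Baron: without it, silence dominates revealing, so the Countess keeps her money; with a credible commitment to Z, her best response flips to paying.

    # Payoffs are (Baron, Countess), as in the matrix above.
    PAYOFF = {
        ('reveal', 'pay'):     (-90, -110),
        ('reveal', 'not pay'): (-100, -100),
        ('silent', 'pay'):     (10, -10),
        ('silent', 'not pay'): (0, 0),
    }

    def her_best_response(baron_strategy):
        # The Countess treats the Baron's strategy as a fixed fact about
        # the universe and maximises her own payoff.
        return max(['pay', 'not pay'],
                   key=lambda c: PAYOFF[(baron_strategy(c), c)][1])

    # No commitment: silence dominates (10 > -90 and 0 > -100), so the
    # Baron stays silent whatever she does -- and she knows it.
    no_commit = lambda c: 'silent'
    c = her_best_response(no_commit)
    print(PAYOFF[(no_commit(c), c)])    # (0, 0): she doesn't pay

    # Committed to Z: reveal if and only if she doesn't pay.
    blackmail = lambda c: 'silent' if c == 'pay' else 'reveal'
    c = her_best_response(blackmail)
    print(PAYOFF[(blackmail(c), c)])    # (10, -10): she pays up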


Collective Apathy and the Internet

29 Eliezer_Yudkowsky 14 April 2009 12:02AM

Previously in series: Beware of Other-Optimizing
Followup to: Bystander Apathy

Yesterday I covered the bystander effect, aka bystander apathy: given a fixed problem situation, a group of bystanders is actually less likely to act than a single bystander.  The standard explanation for this result is in terms of pluralistic ignorance (if it's not clear whether the situation is an emergency, each person tries to look calm while darting their eyes at the other bystanders, and sees other people looking calm) and diffusion of responsibility (everyone hopes that someone else will be first to act; being part of a crowd diminishes the individual pressure to the point where no one acts).

Which may be a symptom of our hunter-gatherer coordination mechanisms being defeated by modern conditions.  You didn't usually form task-forces with strangers back in the ancestral environment; it was mostly people you knew.  And in fact, when all the subjects know each other, the bystander effect diminishes.

So I know this is an amazing and revolutionary observation, and I hope that I don't kill any readers outright from shock by saying this: but people seem to have a hard time reacting constructively to problems encountered over the Internet.

Perhaps because our innate coordination instincts are not tuned for:

  • Being part of a group of strangers.  (When all subjects know each other, the bystander effect diminishes.)
  • Being part of a group of unknown size, of strangers of unknown identity.
  • Not being in physical contact (or visual contact); not being able to exchange meaningful glances.
  • Not communicating in real time.
  • Not being much beholden to each other for other forms of help; not being codependent on the group you're in.
  • Being shielded from reputational damage, or the fear of reputational damage, by your own apparent anonymity; no one is visibly looking at you, before whom your reputation might suffer from inaction.
  • Being part of a large collective of other inactives; no one will single you out to blame.
  • Not hearing a voiced plea for help.

Bystander Apathy

25 Eliezer_Yudkowsky 13 April 2009 01:26AM

The bystander effect, also known as bystander apathy, is that larger groups are less likely to act in emergencies - not just individually, but collectively.  Put an experimental subject alone in a room and let smoke start coming up from under the door.  75% of the subjects will leave to report it.  Now put three subjects in the room - real subjects, none of whom know what's going on.  On only 38% of the occasions will anyone report the smoke.  Put the subject with two confederates who ignore the smoke, and they'll only report it 10% of the time - even staying in the room until it becomes hazy.  (Latane and Darley 1969.)
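(To see that this is a genuinely collective failure: if each of three subjects independently kept the solo 75% reporting rate, at least one of them would report the smoke 1 - 0.25^3 ≈ 98% of the time.  The observed 38% means the group itself is suppressing action.)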

On the standard model, the two primary drivers of bystander apathy are:

  • Diffusion of responsibility - everyone hopes that someone else will be first to step up and incur any costs of acting.  When no one does act, being part of a crowd provides an excuse and reduces the chance of being held personally responsible for the results.
  • Pluralistic ignorance - people try to appear calm while looking for cues, and see... that the others appear calm.

Cialdini (2001):

Very often an emergency is not obviously an emergency.  Is the man lying in the alley a heart-attack victim or a drunk sleeping one off?  ...  In times of such uncertainty, the natural tendency is to look around at the actions of others for clues.  We can learn from the way the other witnesses are reacting whether the event is or is not an emergency.  What is easy to forget, though, is that everybody else observing the event is likely to be looking for social evidence, too.  Because we all prefer to appear poised and unflustered among others, we are likely to search for that evidence placidly, with brief, camouflaged glances at those around us.  Therefore everyone is likely to see everyone else looking unruffled and failing to act.

Cialdini suggests that if you're ever in emergency need of help, you point to one single bystander and ask them for help - making it very clear to whom you're referring.  Remember that the total group, combined, may have less chance of helping than one individual.


Rationality: Common Interest of Many Causes

39 Eliezer_Yudkowsky 29 March 2009 10:49AM

Previously in series: Church vs. Taskforce

It is a not-so-hidden agenda of this site, Less Wrong, that there are many causes which benefit from the spread of rationality—because it takes a little more rationality than usual to see their case, as a supporter, or even just a supportive bystander.  Not just the obvious causes like atheism, but things like marijuana legalization—where you could wish that people were a bit more self-aware about their motives and the nature of signaling, and a bit more moved by inconvenient cold facts.  The Institute Which May Not Be Named was merely an unusually extreme case of this, wherein it got to the point that after years of bogging down I threw up my hands and explicitly recursed on the job of creating rationalists.

But of course, not all the rationalists I create will be interested in my own project—and that's fine.  You can't capture all the value you create, and trying can have poor side effects.

If the supporters of other causes are enlightened enough to think similarly...

Then all the causes which benefit from spreading rationality, can, perhaps, have something in the way of standardized material to which to point their supporters—a common task, centralized to save effort—and think of themselves as spreading a little rationality on the side.  They won't capture all the value they create.  And that's fine.  They'll capture some of the value others create.  Atheism has very little to do directly with marijuana legalization, but if both atheists and anti-Prohibitionists are willing to step back a bit and say a bit about the general, abstract principle of confronting a discomforting truth that interferes with a fine righteous tirade, then both atheism and marijuana legalization pick up some of the benefit from both efforts.

But this requires—I know I'm repeating myself here, but it's important—that you be willing not to capture all the value you create.  It requires that, in the course of talking about rationality, you maintain an ability to temporarily shut up about your own cause even though it is the best cause ever.  It requires that you don't regard those other causes, and they do not regard you, as competing for a limited supply of rationalists with a limited capacity for support; but, rather, creating more rationalists and increasing their capacity for support.  You only reap some of your own efforts, but you reap some of others' efforts as well.

If you and they don't agree on everything—especially priorities—you have to be willing to agree to shut up about the disagreement.  (Except possibly in specialized venues, out of the way of the mainstream discourse, where such disagreements are explicitly prosecuted.)


Altruist Coordination -- Central Station

5 MBlume 27 March 2009 10:24PM

Related to: Can Humanism Match Religion's Output?

I thought it would be helpful for us to have a central space to pool information about various organizations to which we might give our money and/or time.  Honestly, a wiki would be ideal, but it seems this should do nicely.

Comment to this post with the name of an organization, and a direct link to where we can donate to them.  Provide a summary of the group's goals, and their plans for reaching them.  If you can link to outside confirmation of the group's efficiency and effectiveness, please do so.

Respond to these comments adding information about the named group, whether to criticize or praise it.

Hopefully, with the voting system, we should be able to collect the most relevant information available reasonably quickly.

If you choose to contribute to a group, respond to that group's comment with a dollar amount, so that we can all see how much we have raised for each organization.

Feel free to replace "dollar amount" with "dollar amount/month" in the above, if you wish to make such a commitment.  Please do not do this unless you are (>95%) confident that said commitment will last at least a year.

If possible, mention this page, or this site, while donating.

Your Price for Joining

44 Eliezer_Yudkowsky 26 March 2009 07:16AM

Previously in series: Why Our Kind Can't Cooperate

In the Ultimatum Game, the first player chooses how to split $10 between themselves and the second player, and the second player decides whether to accept the split or reject it—in the latter case, both parties get nothing.  So far as conventional causal decision theory goes (two-box on Newcomb's Problem, defect in Prisoner's Dilemma), the second player should prefer any non-zero amount to nothing.  But if the first player expects this behavior—accept any non-zero offer—then they have no motive to offer more than a penny.  As I assume you all know by now, I am no fan of conventional causal decision theory.  Those of us who remain interested in cooperating on the Prisoner's Dilemma, either because it's iterated, or because we have a term in our utility function for fairness, or because we use an unconventional decision theory, may also not accept an offer of one penny.
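A minimal sketch (the code is mine, with the $10 stake in cents) of the proposer's best response under each responder policy: the proposer keeps whatever he doesn't offer, so his best response is the smallest offer the responder will accept.

    POT = 1000  # the $10 stake, in cents

    def best_offer(accepts):
        # The proposer keeps POT - offer, so the smallest accepted offer
        # is the proposer's best response.
        return min(o for o in range(1, POT + 1) if accepts(o))

    # Causal-decision-theory responder: any non-zero amount beats nothing.
    print(best_offer(lambda o: o > 0))           # 1 -- a single penny
    # Responder who rejects anything below 20% of the pot.
    print(best_offer(lambda o: o >= 0.2 * POT))  # 200 -- an offer of $2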

And in fact, most Ultimatum "deciders" offer an even split; and most Ultimatum "accepters" reject any offer less than 20%.  A 100 USD game played in Indonesia (average per capita income at the time: 670 USD) showed offers of 30 USD being turned down, although this equates to two weeks' wages.  We can probably also assume that the players in Indonesia were not thinking about the academic debate over Newcomblike problems—this is just the way people feel about Ultimatum Games, even ones played for real money.

There's an analogue of the Ultimatum Game in group coordination.  (Has it been studied?  I'd hope so...)  Let's say there's a common project—in fact, let's say that it's an altruistic common project, aimed at helping mugging victims in Canada, or something.  If you join this group project, you'll get more done than you could on your own, relative to your utility function.  So, obviously, you should join.

But wait!  The anti-mugging project keeps their funds invested in a money market fund!  That's ridiculous; it won't earn even as much interest as US Treasuries, let alone a dividend-paying index fund.

Clearly, this project is run by morons, and you shouldn't join until they change their malinvesting ways.

Now you might realize—if you stopped to think about it—that all things considered, you would still do better by working with the common anti-mugging project, than striking out on your own to fight crime.  But then—you might perhaps also realize—if you too easily assent to joining the group, why, what motive would they have to change their malinvesting ways?

Well...  Okay, look.  Possibly because we're out of the ancestral environment where everyone knows everyone else... and possibly because the nonconformist crowd tries to repudiate normal group-cohering forces like conformity and leader-worship...

...It seems to me that people in the atheist/libertarian/technophile/sf-fan/etcetera cluster often set their joining prices way way way too high.  Like a 50-way split Ultimatum game, where every one of 50 players demands at least 20% of the money.
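(The arithmetic makes the problem stark: fifty players each demanding at least 20% are demanding at least 1000% of the pot between them, so no division can satisfy everyone and the deal always falls through.)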


Tolerate Tolerance

49 Eliezer_Yudkowsky 21 March 2009 07:34AM

Followup to: Why Our Kind Can't Cooperate

One of the likely characteristics of someone who sets out to be a "rationalist" is a lower-than-usual tolerance for flaws in reasoning.  This doesn't strictly follow.  You could end up, say, rejecting your religion, just because you spotted more or deeper flaws in the reasoning, not because you were, by your nature, more annoyed at a flaw of fixed size.  But realistically speaking, a lot of us probably have our level of "annoyance at all these flaws we're spotting" set a bit higher than average.

That's why it's so important for us to tolerate others' tolerance if we want to get anything done together.

For me, the poster case of tolerance I need to tolerate is Ben Goertzel, who among other things runs an annual AI conference, and who has something nice to say about everyone.  Ben even complimented the ideas of M*nt*f*x, the most legendary of all AI crackpots.  (M*nt*f*x apparently started adding a link to Ben's compliment in his email signatures, presumably because it was the only compliment he'd ever gotten from a bona fide AI academic.)  (Please do not pronounce his True Name correctly or he will be summoned here.)

But I've come to understand that this is one of Ben's strengths—that he's nice to lots of people that others might ignore, including, say, me—and every now and then this pays off for him.

And if I subtract points off Ben's reputation for finding something nice to say about people and projects that I think are hopeless—even M*nt*f*x—then what I'm doing is insisting that Ben dislike everyone I dislike before I can work with him.

Is that a realistic standard?  Especially if different people are annoyed in different amounts by different things?

But it's hard to remember that when Ben is being nice to so many idiots.

Cooperation is unstable, in both game theory and evolutionary biology, without some kind of punishment for defection.  So it's one thing to subtract points off someone's reputation for mistakes they make themselves, directly.  But if you also look askance at someone for refusing to castigate a person or idea, then that is punishment of non-punishers, a far more dangerous idiom that can lock an equilibrium in place even if it's harmful to everyone involved.
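A toy payoff calculation (the numbers are my own assumptions for illustration, not from the post) shows how punishing non-punishers can lock in even a harmful norm: in a population of conformist-punishers, every deviation, including the mere refusal to punish, leaves you worse off.

    # Assumed illustrative costs; any values with PUNISHMENT > NORM_COST
    # and PUNISHMENT > PUNISH_COST behave the same way.
    NORM_COST   = 2   # cost of following the (harmful) norm
    PUNISHMENT  = 5   # cost of being punished by the group
    PUNISH_COST = 1   # cost of administering punishment yourself

    # Payoffs against a population of conformist-punishers, where there
    # are essentially no deviants left, so punishing costs you ~nothing:
    conform_and_punish = -NORM_COST               # -2
    conform_no_punish  = -NORM_COST - PUNISHMENT  # -7: non-punishers get punished
    ignore_norm        = -PUNISHMENT              # -5: direct defectors get punished

    # -2 beats both -7 and -5: no unilateral deviation pays, so the harmful
    # norm is an equilibrium even though everyone would prefer it gone.
    print(conform_and_punish, conform_no_punish, ignore_norm)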

