
Why Our Kind Can't Cooperate

124 Post author: Eliezer_Yudkowsky 20 March 2009 08:37AM

Previously in series: Rationality Verification

From when I was still forced to attend, I remember our synagogue's annual fundraising appeal.  It was a simple enough format, if I recall correctly.  The rabbi and the treasurer talked about the shul's expenses and how vital this annual fundraiser was, and then the synagogue's members called out their pledges from their seats.

Straightforward, yes?

Let me tell you about a different annual fundraising appeal.  One that I ran, in fact; during the early years of a nonprofit organization that may not be named.  One difference was that the appeal was conducted over the Internet.  And another difference was that the audience was largely drawn from the atheist/libertarian/technophile/sf-fan/early-adopter/programmer/etc crowd.  (To point in the rough direction of an empirical cluster in personspace.  If you understood the phrase "empirical cluster in personspace" then you know who I'm talking about.)

I crafted the fundraising appeal with care.  By my nature I'm too proud to ask other people for help; but I've gotten over around 60% of that reluctance over the years.  The nonprofit needed money and was growing too slowly, so I put some force and poetry into that year's annual appeal.  I sent it out to several mailing lists that covered most of our potential support base.

And almost immediately, people started posting to the mailing lists about why they weren't going to donate.  Some of them raised basic questions about the nonprofit's philosophy and mission.  Others talked about their brilliant ideas for all the other sources that the nonprofit could get funding from, instead of them.  (They didn't volunteer to contact any of those sources themselves, they just had ideas for how we could do it.)

Now you might say, "Well, maybe your mission and philosophy did have basic problems—you wouldn't want to censor that discussion, would you?"

Hold on to that thought.

Because people were donating.  We started getting donations right away, via Paypal.  We even got congratulatory notes saying how the appeal had finally gotten them to start moving.  A donation of $111.11 was accompanied by a message saying, "I decided to give **** a little bit more.  One more hundred, one more ten, one more single, one more dime, and one more penny.  All may not be for one, but this one is trying to be for all."

But none of those donors posted their agreement to the mailing list.  Not one.

So far as any of those donors knew, they were alone.  And when they tuned in the next day, they discovered not thanks, but arguments for why they shouldn't have donated.  The criticisms, the justifications for not donating—only those were displayed proudly in the open.

As though the treasurer had finished his annual appeal, and everyone not making a pledge had proudly stood up to call out justifications for refusing; while those making pledges whispered them quietly, so that no one could hear.

I know someone with a rationalist cause who goes around plaintively asking, "How come the Raelian flying-saucer cult can get tens of thousands of members [probably around 40,000] interested in complete nonsense, but we can't even get a thousand people working on this?"

The obvious wrong way to finish this thought is to say, "Let's do what the Raelians do!  Let's add some nonsense to this meme!"  For the benefit of those not immediately stopped by their ethical inhibitions, I will observe that there may be a hundred failed flying-saucer cults for every one that becomes famous.  And the Dark Side may require non-obvious skills, which you, yes you, do not have:  Not everyone can be a Sith Lord.  In particular, if you talk about your planned lies on the public Internet, you fail.  I'm no master criminal, but even I can tell certain people are not cut out to be crooks.

So it's probably not a good idea to cultivate a sense of violated entitlement at the thought that some other group, who you think ought to be inferior to you, has more money and followers.  That path leads to—pardon the expression—the Dark Side.

But it probably does make sense to start asking ourselves some pointed questions, if supposed "rationalists" can't manage to coordinate as well as a flying-saucer cult.

How do things work on the Dark Side?

The respected leader speaks, and there comes a chorus of pure agreement: if there are any who harbor inward doubts, they keep them to themselves.  So all the individual members of the audience see this atmosphere of pure agreement, and they feel more confident in the ideas presented—even if they, personally, harbored inward doubts, why, everyone else seems to agree with it.

("Pluralistic ignorance" is the standard label for this.)

If anyone is still unpersuaded after that, they leave the group (or in some places, are executed)—and the remainder are more in agreement, and reinforce each other with less interference.

(I call that "evaporative cooling of groups".)

The ideas themselves, not just the leader, generate unbounded enthusiasm and praise.  The halo effect is that perceptions of all positive qualities correlate—e.g. telling subjects about the benefits of a food preservative made them judge it as lower-risk, even though the quantities were logically uncorrelated.  This can create a positive feedback effect that makes an idea seem better and better and better, especially if criticism is perceived as traitorous or sinful.

(Which I term the "affective death spiral".)

So these are all examples of strong Dark Side forces that can bind groups together.

And presumably we would not go so far as to dirty our hands with such...

Therefore, as a group, the Light Side will always be divided and weak.  Atheists, libertarians, technophiles, nerds, science-fiction fans, scientists, or even non-fundamentalist religions, will never be capable of acting with the fanatic unity that animates radical Islam.  Technological advantage can only go so far; your tools can be copied or stolen, and used against you.  In the end the Light Side will always lose in any group conflict, and the future inevitably belongs to the Dark.

I think that a person's reaction to this prospect says a lot about their attitude towards "rationality".

Some "Clash of Civilizations" writers seem to accept that the Enlightenment is destined to lose out in the long run to radical Islam, and sigh, and shake their heads sadly.  I suppose they're trying to signal their cynical sophistication or something.

For myself, I always thought—call me loony—that a true rationalist ought to be effective in the real world.

So I have a problem with the idea that the Dark Side, thanks to their pluralistic ignorance and affective death spirals, will always win because they are better coordinated than us.

You would think, perhaps, that real rationalists ought to be more coordinated?  Surely all that unreason must have its disadvantages?  That mode can't be optimal, can it?

And if current "rationalist" groups cannot coordinate—if they can't support group projects so well as a single synagogue draws donations from its members—well, I leave it to you to finish that syllogism.

There's a saying I sometimes use:  "It is dangerous to be half a rationalist."

For example, I can think of ways to sabotage someone's intelligence by selectively teaching them certain methods of rationality.  Suppose you taught someone a long list of logical fallacies and cognitive biases, and trained them to spot those fallacies and biases in other people's arguments.  But you are careful to pick those fallacies and biases that are easiest to accuse others of, the most general ones that can easily be misapplied.  And you do not warn them to scrutinize arguments they agree with just as hard as they scrutinize incongruent arguments for flaws.  So they have acquired a great repertoire of flaws of which to accuse only arguments and arguers who they don't like.  This, I suspect, is one of the primary ways that smart people end up stupid.  (And note, by the way, that I have just given you another Fully General Counterargument against smart people whose arguments you don't like.)

Similarly, if you wanted to ensure that a group of "rationalists" never accomplished any task requiring more than one person, you could teach them only techniques of individual rationality, without mentioning anything about techniques of coordinated group rationality.

I'll write more later (tomorrow?) on how I think rationalists might be able to coordinate better.  But today I want to focus on what you might call the culture of disagreement, or even, the culture of objections, which is one of the two major forces preventing the atheist/libertarian/technophile crowd from coordinating.

Imagine that you're at a conference, and the speaker gives a 30-minute talk.  Afterward, people line up at the microphones for questions.  The first questioner objects to the use of a logarithmic scale in the graph on slide 14, and quotes Tufte's The Visual Display of Quantitative Information.  The second questioner disputes a claim made in slide 3.  The third questioner suggests an alternative hypothesis that seems to explain the same data...

Perfectly normal, right?  Now imagine that you're at a conference, and the speaker gives a 30-minute talk.  People line up at the microphone.

The first person says, "I agree with everything you said in your talk, and I think you're brilliant."  Then steps aside.

The second person says, "Slide 14 was beautiful, I learned a lot from it.  You're awesome."  Steps aside.

The third person—

Well, you'll never know what the third person at the microphone had to say, because by this time, you've fled screaming out of the room, propelled by a bone-deep terror as if Cthulhu had erupted from the podium, the fear of the impossibly unnatural phenomenon that has invaded your conference.

Yes, a group which can't tolerate disagreement is not rational.  But if you tolerate only disagreement—if you tolerate disagreement but not agreement—then you also are not rational.  You're only willing to hear some honest thoughts, but not others.  You are a dangerous half-a-rationalist.

We are as uncomfortable together as flying-saucer cult members are uncomfortable apart.  That can't be right either.  Reversed stupidity is not intelligence.

Let's say we have two groups of soldiers.  In group 1, the privates are ignorant of tactics and strategy; only the sergeants know anything about tactics and only the officers know anything about strategy.  In group 2, everyone at all levels knows all about tactics and strategy.

Should we expect group 1 to defeat group 2, because group 1 will follow orders, while everyone in group 2 comes up with better ideas than whatever orders they were given?

In this case I have to question how much group 2 really understands about military theory, because it is an elementary proposition that an uncoordinated mob gets slaughtered.

Doing worse with more knowledge means you are doing something very wrong.  You should always be able to at least implement the same strategy you would use if you were ignorant, and preferably do better.  You definitely should not do worse.  If you find yourself regretting your "rationality" then you should reconsider what is rational.

On the other hand, if you are only half-a-rationalist, you can easily do worse with more knowledge.  I recall a lovely experiment which showed that politically opinionated students with more knowledge of the issues reacted less to incongruent evidence, because they had more ammunition with which to counter-argue only incongruent evidence.

We would seem to be stuck in an awful valley of partial rationality where we end up more poorly coordinated than religious fundamentalists, able to put forth less effort than flying-saucer cultists.  True, what little effort we do manage to put forth may be better-targeted at helping people rather than the reverse—but that is not an acceptable excuse.

If I were setting forth to systematically train rationalists, there would be lessons on how to disagree and lessons on how to agree, lessons intended to make the trainee more comfortable with dissent, and lessons intended to make them more comfortable with conformity.  One day everyone shows up dressed differently, another day they all show up in uniform.  You've got to cover both sides, or you're only half a rationalist.

Can you imagine training prospective rationalists to wear a uniform and march in lockstep, and practice sessions where they agree with each other and applaud everything a speaker on a podium says?  It sounds like unspeakable horror, doesn't it, like the whole thing has admitted outright to being an evil cult?  But why is it not okay to practice that, while it is okay to practice disagreeing with everyone else in the crowd?  Are you never going to have to agree with the majority?

Our culture puts all the emphasis on heroic disagreement and heroic defiance, and none on heroic agreement or heroic group consensus.  We signal our superior intelligence and our membership in the nonconformist community by inventing clever objections to others' arguments.  Perhaps that is why the atheist/libertarian/technophile/sf-fan/Silicon-Valley/programmer/early-adopter crowd stays marginalized, losing battles with less nonconformist factions in larger society.  No, we're not losing because we're so superior, we're losing because our exclusively individualist traditions sabotage our ability to cooperate.

The other major component that I think sabotages group efforts in the atheist/libertarian/technophile/etcetera community, is being ashamed of strong feelings.  We still have the Spock archetype of rationality stuck in our heads, rationality as dispassion.  Or perhaps a related mistake, rationality as cynicism—trying to signal your superior world-weary sophistication by showing that you care less than others.  Being careful to ostentatiously, publicly look down on those so naive as to show they care strongly about anything.

Wouldn't it make you feel uncomfortable if the speaker at the podium said that he cared so strongly about, say, fighting aging, that he would willingly die for the cause?

But it is nowhere written in either probability theory or decision theory that a rationalist should not care.  I've looked over those equations and, really, it's not in there.

The best informal definition I've ever heard of rationality is "That which can be destroyed by the truth should be."  We should aspire to feel the emotions that fit the facts, not aspire to feel no emotion.  If an emotion can be destroyed by truth, we should relinquish it.  But if a cause is worth striving for, then let us by all means feel fully its importance.

Some things are worth dying for.  Yes, really!  And if we can't get comfortable with admitting it and hearing others say it, then we're going to have trouble caring enough—as well as coordinating enough—to put some effort into group projects.  You've got to teach both sides of it, "That which can be destroyed by the truth should be," and "That which the truth nourishes should thrive."

I've heard it argued that the taboo against emotional language in, say, science papers, is an important part of letting the facts fight it out without distraction.  That doesn't mean the taboo should apply everywhere.  I think that there are parts of life where we should learn to applaud strong emotional language, eloquence, and poetry.  When there's something that needs doing, poetic appeals help get it done, and, therefore, are themselves to be applauded.

We need to keep our efforts to expose counterproductive causes and unjustified appeals, from stomping on tasks that genuinely need doing.  You need both sides of it—the willingness to turn away from counterproductive causes, and the willingness to praise productive ones; the strength to be unswayed by ungrounded appeals, and the strength to be swayed by grounded ones.

I think the synagogue at their annual appeal had it right, really.  They weren't going down row by row and putting individuals on the spot, staring at them and saying, "How much will you donate, Mr. Schwartz?"  People simply announced their pledges—not with grand drama and pride, just simple announcements—and that encouraged others to do the same.  Those who had nothing to give, stayed silent; those who had objections, chose some later or earlier time to voice them.  That's probably about the way things should be in a sane human community—taking into account that people often have trouble getting as motivated as they wish they were, and can be helped by social encouragement to overcome this weakness of will.

But even if you disagree with that part, then let us say that both supporting and countersupporting opinions should have been publicly voiced.  Supporters being faced by an apparently solid wall of objections and disagreements—even if it resulted from their own uncomfortable self-censorship—is not group rationality.  It is the mere mirror image of what Dark Side groups do to keep their followers.  Reversed stupidity is not intelligence.

 

Part of the sequence The Craft and the Community

Next post: "Tolerate Tolerance"

Previous post: "3 Levels of Rationality Verification"

Comments (185)

Comment author: MBlume 20 March 2009 09:22:57AM *  3 points [-]

To point in the rough direction of an empirical cluster in personspace. If you understood the phrase "empirical cluster in personspace" then you know who I'm talking about.

If someone understands the phrase "empirical cluster in personspace," they probably are who you're talking about. =)

Comment author: Eliezer_Yudkowsky 20 March 2009 07:12:26PM 5 points [-]

That was what the first draft said, but I considered it for a few moments and realized that as eloquent statements go, it suffered the unfortunate flaw of not actually being true.

Comment author: prase 20 March 2009 09:34:12AM 2 points [-]

I have to agree completely.

Comment author: CannibalSmith 20 March 2009 12:01:38PM 0 points [-]

Me too!

Comment author: diegocaleiro 17 December 2010 04:37:12PM *  1 point [-]

I don't have to agree completely. But I choose to.

I also choose to link the donations page for the SIAI here.

http://singinst.org/donate

Yes, this felt great... my emotions seem to be in tune with my high-level goals.

Comment author: AnnaSalamon 20 March 2009 09:48:31AM *  16 points [-]

Many points that are both new and good. Like prase, and like a selection of other fine LW-ers with whom I hope to be agreeing soon, I think your post is awesome :)

One root of the agreement/disagreement asymmetry is perhaps that many of us aspiring rationalists are intellectual show-offs, and we want our points to show everyone how smart we are. Status feels zero-sum, as though one gains smart-points from poking holes in others' claims and loses smart-points from affirming others' good ideas. Maybe we should brainstorm some schemas for expressing agreement while adding intellectual content and showing our own smarts, like "I think your point on slide 14 is awesome. And I bet it can be extended to new context __", or "I love the analogy you made on page 5; now that I read it, I see how to take my own research farther..."

Related: maybe we feel self-conscious about speaking if we don't have anything "new" to add to the conversation, and we don't notice "I, too, agree" as something new. One approach here would be to voice, not just agreement, but the analysis that's going into each individual's agreement, e.g. "I agree; that sounds just like my own experience trying to get an atheists club started", or "I'm adopting these beliefs now, because I trust Eliezer's judgment here, but I have little confirming evidence of my own, so don't double-count my agreement as new evidence". Voicing the causal structure of our agreement would:

  • Give us practice seeing how others navigate evidence and Aumann-type issues;
  • Expose us to others' evidence;
  • Guard against information cascades (assuming honesty in those participating);
  • Let us affirm our identities as smart rationalists, while we express agreement. :)
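(An illustrative aside, not part of the original comment: the cascade-guarding bullet above can be made concrete with a toy sequential-decision model. When members see only one another's conclusions, an early run of agreement can lock in an error, because no later private signal can outvote the crowd; when members also disclose the evidence behind their agreement, the group's accuracy recovers. All parameters below are arbitrary choices for the sketch.)

```python
import random

def simulate(n_agents=20, p_correct=0.7, n_runs=2000, share_signals=False, seed=0):
    """Fraction of runs in which the last agent's action matches the truth.

    Each agent gets a private signal that is right with probability
    p_correct, then acts in sequence.  If share_signals is False, agents
    see only predecessors' *actions*; if True, they see everyone's raw
    *signals* -- the "voice the causal structure of your agreement" regime.
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_runs):
        truth = 1
        signals, actions = [], []
        for _ in range(n_agents):
            s = truth if rng.random() < p_correct else 1 - truth
            signals.append(s)
            # Evidence visible to this agent (own signal always included).
            evidence = list(signals) if share_signals else actions + [s]
            ones = sum(evidence)
            if ones * 2 > len(evidence):
                act = 1
            elif ones * 2 < len(evidence):
                act = 0
            else:
                act = s  # tie: fall back on the private signal
            actions.append(act)
        hits += actions[-1] == truth
    return hits / n_runs

# Conclusions-only groups cascade: once two early actions agree, later
# private signals can never outvote the visible crowd, so errors lock in.
cascade_acc = simulate(share_signals=False)
shared_acc = simulate(share_signals=True)
print(cascade_acc < shared_acc)  # sharing evidence improves group accuracy
```

With these (made-up) numbers, the shared-evidence regime is reliably more accurate than the conclusions-only regime, which is the content of the "guard against information cascades" bullet.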
Comment author: MBlume 20 March 2009 09:55:32AM 7 points [-]

Related: maybe we feel self-conscious about speaking if we don't have anything "new" to add to the conversation, and we don't notice "I, too, agree" as something new.

I've often wrestled with this myself, and hesitated to comment for just this reason.

Comment author: CannibalSmith 20 March 2009 11:58:54AM *  1 point [-]

Me too!

Comment author: MichaelGR 21 March 2009 12:42:48AM 1 point [-]

Me too.

Comment author: Davorak 05 February 2011 02:03:15AM 0 points [-]

I would encourage you to make this a front page post if you have the time. I think these thoughts and strategies are positive, rational and necessary group building skills for any long term group that fulfills rationalist goals. Or maybe it should be in the community guidelines (do these exist? I imagine the sequences as extended community guidelines) so most new members read them over.

Comment author: jacoblyles 20 March 2009 09:54:09AM *  6 points [-]

There is no guarantee of a benevolent world, Eliezer. There is no guarantee that what is true is also beneficial. There is no guarantee that what is beneficial for an individual is also beneficial for a group.

You conflate many things here. You conflate what is true with what is right and what is beneficial. You assume that these sets are identical, or at least largely overlapping. However, unless a galactic overlord designed the universe to please homo sapien rationalists, I don't see any compelling rational reason to believe this to be the case.

Irrational belief systems often thrive because they overcome the prisoner's dilemmas that individual rational action creates on a group level. Rational people cannot mimic this. The prisoner's dilemma and the tragedy of the commons are not new ideas. Telling people to act in the group interest because God said so is effective. It is easy to see how informing people of the costs of action, because truth is noble and people ought not be lied to, can be counter-effective.

Perhaps we should stop striving for the maximum rational society, and start pursuing the maximum rational society which is stable in the long term. That is, maybe we ought to set our goal to minimizing irrationality, recognizing that we will never eliminate it.

If we cannot purposely introduce a small bit of beneficial irrationality into our group, then fine: memetic evolution will weed us out and there is nothing we can do about it. People will march by the millions to the will of saints and emperors while rational causes wither on the vine. Not much will change.

Robin made an excellent post along similar lines, which captures half of what I want to say:

http://lesswrong.com/lw/j/the_costs_of_rationality/

I'll be writing up the rest of my thoughts soon.

Sorry, I can't find the motivation to jump on the non-critical bandwagon today. I had the idea about a week ago that there is no guarantee that truth = justice = prudence, and that is going to be the hobby-horse I ride until I get a good statement of my position out, or read one by someone else.

Comment author: conchis 20 March 2009 01:32:23PM 4 points [-]

"However, unless a galactic overlord designed the universe to please homo sapien rationalists, I don't see any compelling rational reason to believe this to be the case."

Except that we are free to adopt any version of rationality that wins. Rationality should be responsive to a given universe design, not the other way around.

"Irrational belief systems often thrive because they overcome the prisoner dilemmas that individual rational action creates on a group level. Rational people cannot mimic this."

Really? Most of the "individual rationality -> suboptimal outcomes" results assume that actors have no influence over the structure of the games they are playing. This doesn't reflect reality particularly well. We may not have infinite flexibility here, but changing the structure of the game is often quite feasible, and quite effective.

Comment author: Annoyance 20 March 2009 02:23:15PM -1 points [-]

"Except that we are free to adopt any version of rationality that wins."

There's only one kind of rationality.

Comment author: Nick_Novitski 20 March 2009 04:21:38PM 4 points [-]

I agree, but that one kind is able to determine an optimal response in any universe, except one where no observable event can ever be reliably statistically linked to any other, which seems like it could be a small subset, and not one we're likely to encounter.

Certainly, there are any number of world-states or day-to-day situations where a full rigorous/sceptical/rational and therefore lengthy investigation would be a sub-optimal response. Instinct works quickly, and if it works well enough, then it's the best response. But obviously, instinct cannot self-analyze and determine whether and in what cases it works "well enough," and therefore what factors contribute to it so working, etc. etc.

Passing the problem of a gun jamming to the Rationality-Function might return the response, "If the gun doesn't fire, 90% of the time, pulling the lever action will solve the problem. The other 10% of the time, the gun will blow up in your hand, leading to death. However, determining to reasonable certainty which type of problem you're experiencing, in the middle of a firefight, will lead to death 90% of the time. Therefore, train your Instinct-Function to pull the lever action 100% of the time, and rely on it rather than me when seconds count."
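(An editorial aside: the arithmetic in this hypothetical is worth spelling out, using only the probabilities the comment stipulates. A minimal sketch:)

```python
# Probabilities stipulated in the hypothetical above (not real data).
p_blows_up = 0.10               # jam type where racking the lever is fatal
p_shot_while_diagnosing = 0.90  # stopping to diagnose mid-firefight

# Policy 1: trained instinct -- always rack the lever immediately.
# You die only when the jam happens to be the explosive kind.
survive_instinct = 1 - p_blows_up

# Policy 2: deliberate -- diagnose the jam first, then act correctly.
# The diagnosis itself is what gets you killed.
survive_deliberate = 1 - p_shot_while_diagnosing

print(round(survive_instinct, 2), round(survive_deliberate, 2))  # → 0.9 0.1
```

So the "irrational" reflex is exactly the policy a full expected-survival calculation endorses, which is the comment's point: the rational move is to delegate to instinct in advance.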

Does this sound like what you mean by a "beneficial irrationality"?

Also: I propose that what seems truly beneficial, seems both true and beneficial, and what seems beneficial to the highest degree, seems right. To me, these assertions appear uncontroversial, but you seem to disagree. What about them bothers you, and when will we get to see your article?

Comment author: jacoblyles 20 March 2009 06:24:11PM *  -1 points [-]

"Does this sound like what you mean by a "beneficial irrationality"?"

No. That's not really what I meant at all. Take nationalism or religion, for example. I think both are based on some false beliefs. However, a belief in one or the other may make a person more willing to sacrifice his well-being for the good of his tribe. This may improve the average chances of survival and reproduction of an individual in the tribe. So members of irrational groups out-compete the rational ones.

In the post above Eliezer is basically lamenting that when people behave rationally, they refuse to act against their self-interest, and damn it, it's hurting the rational tribe. That's informative, and sort of my point.

There is some evidence that we have brain structures specialized for religious experience. One would think that these structures could only have evolved if they offered some reproductive benefit to animals becoming self-aware in the land of tooth and claw.

In the harsh world that prevailed up until just the last few centuries, religion provided people comfort. Happy people are less susceptible to disease, more ambitious, and generally more successful. Atheism has always been as true as it is today. However, I wouldn't recommend it to a 13th century peasant.

"I propose that what seems truly beneficial, seems both true and beneficial, and what seems beneficial to the highest degree, seems right."

This is not true a priori. That is my point. My challenge to you, Eliezer, and the other denizens of this site is simply: "prove it".

And I offer this challenge especially to Eliezer. Eliezer, I am calling you out. Justify your optimism in the prudence of truth.

Disprove the parable of Eve and the fruit of the tree of knowledge.

Comment author: conchis 20 March 2009 07:07:40PM *  0 points [-]

"Eliezer is basically lamenting that when people behave rationally, they refuse to act against their self-interest, and damn it, it's hurting the rational tribe. That's informative, and sort of my point."

So if that's Eliezer's point, and it's also your point, what is it that you actually disagree about?

I take Eliezer to be saying that sometimes rational individuals fail to co-operate, but that things needn't be so. In response, you seem to be asking him to prove that rational individuals must co-operate - when he already appears to have accepted that this isn't true.

Isn't the relevant issue whether it is possible for rational individuals to co-operate? Provided we don't make silly mistakes like equating rationality with self-interest, I don't see why not - but maybe this whole thread is evidence to the contrary. ;)

Comment author: jacoblyles 20 March 2009 07:14:52PM *  1 point [-]

My point isn't exactly clear for a few reasons. First, I was using this post opportunistically to explore a topic that has been on my mind for a while. Second, Eliezer makes statements that sometimes seem to support the "truth = moral good = prudent" assumption, and sometimes not.

He's provided me with links to some of his past writing, I've talked enough, it is time to read and reflect (after I finish a paper for finals).

Comment author: Eliezer_Yudkowsky 20 March 2009 07:08:21PM 1 point [-]

Reply here.

Comment author: pjeby 20 March 2009 07:08:55PM 5 points [-]

Disprove the parable of Eve and the fruit of the tree of knowledge.

I don't know 'bout no Eve and fruits, but I do know something about the "god-shaped hole". It doesn't actually require religion to fill, although it is commonly associated with religion and religious irrationalities. Essentially, religion is just one way to activate something known as a "core state" in NLP.

Core states are emotional states of peace, oneness, love (in the universal-compassion sense), "being", or just the sense that "everything is okay". You could think of them as pure "reward" or "satisfaction" states.

The absence of these states is a compulsive motivator. If someone displays a compulsive social behavior (like needing to correct others' mistakes, always blurting out unpleasant truths, being a compulsive nonconformist, etc.) it is (in my experience) almost always a direct result of being deprived of one of the core states as a child, and forming a coping response that seems to get them more of the core state, or something related to it.

Showing them how to access the core state directly, however, removes the compulsion altogether. Effectively, it's like wireheading: accessing the core state directly and internally drops the reward/compulsion link to the specific behavior, restoring choice in that area.

Most likely, this is because it's the unconditional presence of core states that's the evolutionary advantage you refer to. My guess would be that non-human animals experience these core states as a natural way of being, and that both our increased ability to anticipate negative futures, and our more-complex social requirements and conditions for interpersonal acceptance actually reduce the natural incidence of reaching core states.

Or, to put it more briefly: core states are supposed to be wireheaded, but in humans, a variety of mechanisms conspire to break the wireheading.... and religion is a crutch that reinstates it externally, by exploiting the compulsion mechanism.

Appropriately trained rationalists, on the other hand, can simply reinstate the wireheading internally, and get the benefits without "believing in" anything. (In fact, application of the process tends to surface and extinguish left-over religious ideas from childhood!)

Explaining the actual technique would require considerably more space than I have here, however; the briefest training I've done on the subject was over an hour in length, although the technique itself is simple enough to be done in a few minutes. A little googling will find you plenty on the subject, although it's extremely difficult to learn from the short checklist versions of the technique you're likely to find on the 'net.

The original book on the subject, Core Transformation, is somewhat better, but it also mixes in a lot of irrelevant stuff based on the outdated "parts" metaphor in NLP -- "parts" are just a way of keeping people detached from their responses, and that's really orthogonal to the primary purpose of the technique, which is really sort of a "stack trace" of active unconscious/emotional goals to uncover the system's root goal (and thereby access the core state of "pure utility" underneath).

In the harsh world that prevailed up until just the last few centuries, religion provided people comfort. Happy people are less susceptible to disease, more ambitious, and generally more successful. Atheism has always been as true as it is today. However, I wouldn't recommend it to a 13th century peasant.

Anyone who knows how to access their core states has the ability to call up mystical states of peace, bliss, and what-not, at any moment they actually need or want them. An external idea isn't necessary to provide comfort -- the necessary state already exists inside of you, or religion couldn't possibly activate it.

Comment author: Pierre-Andre 20 March 2009 04:26:06PM 3 points [-]

True, but that "one kind of rationality" might not be what you think it is. Conchis's point holds if you use "rationality" = "everything should always be taken into account, if possible" or something like that.

A "rational" solution to a problem should always take into account those "but in the real word it doesn't work like that...". Those are part of the problem, too.

For example, a political leader acting "rationally" will take into account the opinion of the population (even if they are "wrong" and/or give too much importance to X) if it can affect his results in the next election. The importance of this depends on his "goal" (a position of power? the well-being of the population?) and on the alternative if he is not elected (will my opponent's decisions do more harm?).

Comment author: pjeby 20 March 2009 04:24:49PM 5 points [-]

For example, we could establish a social norm that compulsive public disagreement is a shameful personal habit, and that you can't be even remotely considered "formidable" if you haven't gotten rid of the urge to seek status by pulling down others.

Comment author: Nebu 20 March 2009 06:33:19PM -1 points [-]

I disagree.

Comment author: jacoblyles 20 March 2009 05:53:49PM *  2 points [-]

"Except that we are free to adopt any version of rationality that wins. "

In that case, believing in truth is often non-rational.

Many people on this site have bemoaned the confusing dual meanings of "rational" (the economic utility maximizing definition and the epistemological believing in truth definition). Allow me to add my name to that list.

I believe I consistently used the "believing in truth" definition of rational in the parent post.

Comment author: conchis 20 March 2009 06:48:31PM *  4 points [-]

I agree that the multiple definitions are confusing, but I'm not sure that you consistently employ the "believing in truth" version in your post above.* It's not "believing in truth" that gets people into prisoners' dilemmas; it's trying to win.

*And if you did, I suspect you'd be responding to a point that Eliezer wasn't making, given that he's been pretty clear on his favored definition being the "winning" one. But I could easily be the one confused on that. ;)

"In that case, believing in truth is often non-rational."

Fair enough. Though I wonder whether, in most of the instances where that seems to be true, it's true for second-best reasons. (That is, if we were "better" in other (potentially modifiable) ways, the truth wouldn't be so harmful.)

Comment author: Nebu 20 March 2009 06:32:39PM 2 points [-]

However, unless a galactic overlord designed the universe to please homo sapien rationalists, I don't see any compelling rational reason to believe this to be the case.

Except that we are free to adopt any version of rationality that wins. Rationality should be responsive to a given universe design, not the other way around.

I don't think your argument applies to jacoblyles' argument. Jacoblyles claims that there is no reason for "rational" to equal "(morally/ethically) right", unless an intelligent designer designed the universe in line with our values.

So it's not about winning versus losing. It's that unless the rules of the game are set up just in a certain way, then winning may entail causing suffering to others (e.g. to our rivals).

Comment author: jacoblyles 20 March 2009 06:54:56PM *  2 points [-]

My writing in these comments has not been perfectly clear, but Nebu, you have nailed one point that I was trying to make: "there is no guarantee that morally good actions are beneficial".

Christian morality is interesting here. Christians admit up front that following their religion may lead to persecution and suffering. Their God was tortured and killed, after all. They don't claim that what is good will be pleasant, as the rationalists do. To that degree, the Christians seem more honest and open-minded. Perhaps this is just a function of Christianity being an old religion and having had the time to work out the philosophical kinks.

Of course, they make up for it by offering infinite bliss in the next life, which is cheating. But Christians do have a more honest view of this world in some ways.

Maybe we conflate true, good, and prudent because our "religion" is a hard sell otherwise. If we admitted that true and morally right things may be harmful, our pitch would become "Believe the truth, do what is good, and you may become miserable. There is no guarantee that our philosophy will help you in this life, and there is no next life". That's a hard sell. So we rationalists cheat by not examining this possibility.

There is some truth to the Christian criticism that Atheists are closed-minded and biased, too.

Comment author: Eliezer_Yudkowsky 20 March 2009 07:07:14PM 8 points [-]

I one-box on Newcomb's Problem, cooperate in the Prisoner's Dilemma against a similar decision system, and even if neither of these were the case: life is iterated and it is not hard to think of enforcement mechanisms, and human utility functions have terms in them for other humans. You conflate rationality with selfishness, assume rationalists cannot build group coordination mechanisms, and toss in a bit of group selection to boot. These and the referenced links complete my disagreement.
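The cooperation claim above can be sketched as a toy model (my illustration, not from the comment): two copies of the same decision rule face each other in a one-shot Prisoner's Dilemma, so their outputs necessarily match, leaving only the mutual-cooperation and mutual-defection outcomes on the table. The payoff values below are hypothetical, chosen only to satisfy the standard ordering T > R > P > S.

```python
# Hypothetical one-shot Prisoner's Dilemma payoffs (T > R > P > S).
# Keys are (my move, opponent's move); values are (my payoff, theirs).
PAYOFFS = {
    ("C", "C"): (3, 3),  # mutual cooperation (reward R)
    ("C", "D"): (0, 5),  # sucker's payoff S vs. temptation T
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),  # mutual defection (punishment P)
}

def decide(opponent_is_same_system: bool) -> str:
    """Cooperate when the opponent provably runs this same rule.

    If both players run this function, their outputs are guaranteed
    to be identical, so the only reachable outcomes are (C, C) and
    (D, D) -- and (C, C) pays more for both."""
    return "C" if opponent_is_same_system else "D"

# Two instances of the same decision system facing each other:
a = decide(opponent_is_same_system=True)
b = decide(opponent_is_same_system=True)
print(PAYOFFS[(a, b)])  # (3, 3): mutual cooperation
```

The point of the sketch is only the symmetry argument: against a known-identical decision system, "defect" cannot be exploited, so cooperation dominates.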

Comment author: jacoblyles 20 March 2009 07:11:19PM *  5 points [-]

Thanks for the links; your corpus of writing can be hard to keep up with. I don't mean this as a criticism; I just mean to say that you are prolific, which makes it hard on a reader, because you must strike a balance between reiterating old points and exploring new ideas. I appreciate the attention.

Also, did you ever reply to the Robin post I linked to above? Robin is a more capable defender of an idea than I am, so I would be intrigued to follow the dialog.

Comment author: Davorak 05 February 2011 02:28:17AM 0 points [-]

If you are rational enough and perceptive enough, and EY's writing is consistent enough, at some point you will not have to read everything EY writes to have a pretty good idea of what his views on a matter will be. I would bet a good sum of money that EY would prefer to have his readers gain this ability than read all of his writings.

Comment author: whpearson 20 March 2009 10:33:23AM 2 points [-]

"Those who had nothing to give, stayed silent; those who had objections, chose some later or earlier time to voice them. That's probably about the way things should be in a sane human community"

Personally, I think that you were speaking to the wrong crowd when trying to fundraise. Or perhaps I should say too wide a crowd. Like trying to fundraise for tokamak fusion on a mailing list where people are interested in fusion in general. People who don't believe that tokamaks will ever be stable/usable are duty-bound to try to convince the other people of that so they won't waste their money (also, it means less money in the pot for their own projects).

Geek cooperative projects can work, but generally only if there is a mathematical or empirical way to get everyone on the right page, or you have to filter the group you are trying to work with by philosophical position.

With regards to signaling agreement, I think part of the problem is that agreements tend to give little information. If everyone on a certain mailing list said "I agree, and here is how much money I am donating," I would consider it spam: too much bandwidth for not enough new information. Polls would probably be better, or the organiser of the fundraiser could give running updates (which I believe you did, IIRC).

Comment author: nazgulnarsil 20 March 2009 10:51:53AM -2 points [-]

I don't see how individualism can beat out collectivism as long as groups = more power. For individualism to work, each person would have to wield power equal to that of any group.

Comment author: Nick_Novitski 20 March 2009 04:40:30PM 1 point [-]

One view doesn't need to "beat out" the other; for each societal state, there's a corresponding equilibrium between individualistic- and group-think (or rather, group-think for varying sizes of groups) as each person weighs the costs and benefits of adherence for them. In a world of individuals, an organized and specialized group of any size "= more power." Witness sedentary farmers displacing hunter-gatherers. On the other hand, in a world of groups, a rogue individualistic prisoner's-dilemma-defector is king. Witness sociopaths in corporate structures, or the plots of far too many Star Trek episodes.

The balance of power can shift as Individualism becomes a better choice, due to its risks lessening and rewards increasing, whether due to culture, technology, or extensive debates on websites.

Comment author: JulianMorrison 20 March 2009 10:52:16AM 11 points [-]

To be honest, I suspect a lot of those folks, and I include myself here, were anti-collectivists first.

In my own mind, the emotive rule "I might follow, but I must never obey" is built over a long childhood war and an eventual hard-fought and somewhat Pyrrhic victory. I know it's reversed stupidity, but it's hard to let go.

What good rationalist techniques are there for changing such things?

Comment author: Emile 20 March 2009 11:53:00AM *  14 points [-]

Recognizing that "I might follow, but I must never obey" is an emotional rule is already a good first step, much better than trying to rationalize it.

I've recognized that same pattern in myself - a bad feeling in response to the idea of following / obeying even when it's an objectively good idea to do so. I imagined an "asshole with a time machine" who would follow me around, observe what I did (buy a ham sandwich for lunch, enter a book store...), go back in time a few seconds before my decision, and order me to do it.

Once I realized I was much angrier at this hypothetical asshole than was reasonable, I tried getting rid of that anger. I guess I succeeded (the idea doesn't bug me as much), but I don't know if it means I won't have any more psychological resistance to obeying. I am probably still pretty biased towards individualism / giving more value to my opinion just because it's my own, but I'd like to find ways to get rid of that.

Comment author: Annoyance 20 March 2009 02:22:12PM 3 points [-]

"What good rationalist techniques are there for changing such things?"

Carefully examining the potential reasons for going along with someone else. Emile's point below is a very good one.

'Obedience' implies that we must go along with what someone says we should do. It's much better to think (hopefully accurately) that we're choosing to do something which coincidentally is also what someone has suggested. We don't need to choose to obey in order to go along.

Carefully examining the justifications for actions is also important. If there are compelling reasons to do X, the fact that we've been "ordered" to do X is irrelevant, just as being ordered NOT to do X is.

Comment author: pjeby 20 March 2009 04:18:49PM 14 points [-]

Ask "what's bad about obeying?" Imagine a specific concrete instance of obeying, and then carefully observe your automatic, unconscious response. What bad thing do you expect is going to happen?

Most likely, you will get a response that says something about who you are as a person: your social image, like, "then I'll be weak". You can then ask how you learned that obeying makes someone weak... which may be an experience like your peers teasing you (or someone else) for obeying. You can then rationally examine that experience and determine whether you still think you have valid evidence for reaching that conclusion about obedience.

Please note, however, that you cannot kill an emotional decision like this without actually examining your own evidence for the proposition, as well as against it. The mere knowledge that your rule is irrational is not sufficient to modify it. You need to access (and re-assess) the actual memor(ies) the rule is based on.

Comment author: Davorak 05 February 2011 02:14:44AM 0 points [-]

Stating that you are not obeying, and that you are taking a particular course of action because it is a good idea, seems to work/help for some people.

Realize that the anti-collectivist pull is an exploitable weakness: it leaves you vulnerable to people who are perceptive and want to harm you. Some would say that you should just avoid making people want to harm you; however, a consequence is that you would have to avoid standing up to people who harm the world, people you care for, and sometimes yourself.

Comment deleted 20 March 2009 11:33:26AM [-]
Comment author: prase 20 March 2009 02:26:50PM 16 points [-]

In fact, agreement is a sort of spam - it consumes space and usually doesn't bring new thoughts. When I imagine a typical conference where the participants are constantly running out of time, visualising the 5-minute question interval consumed by praise for the speaker helps me a lot in rationalising why the disagreement culture is necessary. Not that it would be the real reason why I would flee screaming out of the room; I would probably do so even if time weren't a problem.

When I read the debates at e.g. daylightatheism.org I am often disgusted by how much agreement there is (and it is definitely not a Dark Side blog). So I think I am strongly immersed in the disagreement culture. But, all cultural prejudices aside, I will probably always find a discussion consisting of "you are brilliant" type statements extraordinarily boring.

Comment author: Nominull 20 March 2009 03:36:40PM 1 point [-]

I agree!

Comment author: pjeby 20 March 2009 04:07:31PM 11 points [-]

It doesn't have to bring new thoughts to serve a purpose. A chorus of agreement is an emotional amplifier.

Comment author: AndrewH 20 March 2009 09:02:25PM 4 points [-]

Not only that, it becomes a glue that binds people together, the more agreement the stronger the binding (and the more that get bound). At least that is the analogy that I use when I look at this; we (rationalists) have no glue, they (religions) have too much.

Comment author: MichaelGR 21 March 2009 12:42:12AM 2 points [-]

I think you are focusing too much on discussions.

There are other activities where success can depend heavily on <em>not acting alone</em>, and it is in those types of activities (such as fundraising, seizing political power, reforming institutions, etc) that rationalist-types are disadvantaged by their lack of coordination.

Comment author: Court_Merrigan 21 March 2009 02:35:20AM -1 points [-]

You didn't read Eliezer's post very carefully, did you? You need more practice in agreement and conformity. There are a limited number of "right" answers out there. It's alright to agree on them, when they are found.

Comment author: Davorak 05 February 2011 01:54:59AM 2 points [-]

Agreement does not need to be contentless and therefore spam. It can fill in holes in the argument, take a different perspective (helping a different segment of the reading population), add specific details to the argument that were glossed over, and much more.

I will probably always find a discussion consisting of "you are brilliant" type statements extraordinarily boring.

It sounds like you have a problem with lack of content more than you do with agreement. I am sure you would find contentless disagreement just as boring.

Comment author: prase 06 February 2011 03:27:19PM 2 points [-]

Agreements are a lot more often contentless, as a rule. When disagreeing, people feel motivated to include some reasons, and even if they don't, the one who was disagreed with feels motivated to ask for the reasons. But in principle you are right that my objections don't primarily aim at agreement.

Comment author: ciphergoth 20 March 2009 12:32:24PM *  32 points [-]

In this community, agreeing with a poster such as yourself signals me as sycophantic and weak-minded; disagreement signals my independence and courage. There's also a sense that "there are leaders and followers in this world, and obviously just getting behind the program is no task for so great a mind as mine".

However, that's not the only reason I might hesitate to post my agreement; I might prefer only to post when I have something to add, which would more usually be disagreement. Since I don't only vote up things I agree with, perhaps I should start hacking on the feature that allows you to say "6 members marked their broad agreement with this point (click for list of members)".

Comment deleted 20 March 2009 01:15:43PM [-]
Comment author: Court_Merrigan 21 March 2009 02:32:19AM 5 points [-]

That would be a great feature, I think. Ditto on broad disagreements.

Comment author: Emile 20 March 2009 01:31:05PM 7 points [-]

In this community, agreeing with a poster such as yourself signals me as sycophantic and weak-minded; disagreement signals my independence and courage. There's also a sense that "there are leaders and followers in this world, and obviously just getting behind the program is no task for so great a mind as mine".

Does it really signal that to other readers, or is that just in your mind? If you see someone posting an agreement, do you really judge him as a weak-minded sycophant?

Comment author: Annoyance 20 March 2009 02:18:07PM 5 points [-]

"If you see someone posting an agreement, do you really judge him as a weak-minded sycophant?"

It depends greatly on what they're agreeing with, and what they've said and done before.

Comment author: Nebu 20 March 2009 05:53:52PM 17 points [-]

If they post just an "Amazing post, as usual, Eliezer" without further informative contribution, then I too get this mild sense of "sucking up" going on.

Actually, this whole blog (as well as Overcoming Bias) does have this subtle aura of "Eliezer is the rationality God that we should all worship". I don't blame EY for this; more probably, people are just naturally (evolutionarily?) inclined to religious behaviour, and if you hang around LW and OB, then you might project towards the person who acts like the alpha-male of the pack. In fact, it might not even need to have any religious undertones to it. It could just be "alpha-male mammalian evolution society" stuff.

Eliezer is a very smart person. Certainly much smarter than me. But so is Robin Hanson (I won't get into which one is "smarter", as they are both at least two levels above me), and I feel he is often -- "under-appreciated" perhaps is the closest word? -- perhaps because he doesn't post as often, but perhaps also because people tend to "me too" Eliezer a lot more often than they "me too" Robin (but again, this might be because EY posts much more frequently than RH).

Comment author: pjeby 20 March 2009 06:21:29PM 20 points [-]

It's simpler than that: 1) Eliezer expresses certainty more often than Robin, and 2) he self-discloses to a greater degree. The combination of the two induces a tendency toward identification and aspiration. (The evolutionary reasons for this are left as an exercise for the reader.)

Please note that this isn't a denigration -- I do exactly the same things in my own writing, and I also identify with and admire Eliezer. Just knowing what causes it doesn't make the effect go away.

(To a certain extent, it's just audience-selection -- expressing your opinions and personality clearly will make people who agree/like what they hear become followers, those who disagree/dislike become trolls, and those who don't care one way or the other just go away altogether. NOT expressing these things clearly, on the other hand, produces less emotion either way. I love the information I get from Robin's posts, but they don't cause me to feel the same degree of personal connection to their author.)

Comment author: Gray 24 March 2011 04:43:50AM 3 points [-]

This is a good point, but I think there's a ready solution to that. Agreement and disagreement, by themselves, are rather superficial. Arguments, on the other hand, rationalists have more respect for. When you agree with someone, it seems that you don't have the burden to formulate an argument because, implicitly, you're referring to the first person's argument. But when you disagree with someone, you do have the burden of formulating a counterargument. So I think this is why rationalists tend to have more respect for disagreement than agreement, because disagreement requires an argument, whereas agreement doesn't need to.

But on reflection, this arrangement is fallacious. Why shouldn't agreement also require an argument? I think it may seem to add to the strength of an argument if multiple people agree that it is sound, but I don't think it does in reality. If multiple people develop the same argument independently, then the argument might be somewhat stronger; but clearly this isn't the kind of agreement we're talking about here. If I make an argument, you read my argument, and then you agree that my argument is sound, you haven't developed the same argument independently. Worse, I've just biased you towards my argument.

The better alternative is, when you agree with an argument, there should be the burden of devising a different argument that argues for the same conclusion. Of course, citing evidence also counts as an "argument". In this manner, a community of rationalists can increase the strength of a conclusion through induction; the more arguments there are for a conclusion, the stronger that conclusion is, and the better it can be relied upon.
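The induction idea above, that independently derived arguments for a conclusion strengthen it, can be made concrete with a toy Bayesian sketch (my illustration, not from the comment): if the lines of evidence are genuinely independent, each one multiplies the odds of the conclusion by its own likelihood ratio. All the numbers below are made up for illustration.

```python
from math import prod

def posterior_odds(prior_odds: float, likelihood_ratios: list) -> float:
    """Posterior odds after independent pieces of evidence:
    multiply the prior odds by each likelihood ratio in turn."""
    return prior_odds * prod(likelihood_ratios)

# Prior odds of 1:4 against the conclusion; three independent
# arguments, each only modestly favoring it (likelihood ratio 3:1).
odds = posterior_odds(0.25, [3.0, 3.0, 3.0])
print(odds)  # 6.75 -> probability 6.75 / 7.75, roughly 0.87
```

The sketch also shows why Gray's caveat matters: if the three "arguments" are really restatements of one argument, they share one likelihood ratio among them, and multiplying it in three times overcounts the evidence.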

Comment author: CuSithBell 24 March 2011 04:52:38AM 1 point [-]

In that case you're "writing the last line first", and I suspect it might not reduce bias. Personally, I often try to come up with arguments against positions I hold or am considering, which sometimes work and sometimes do not. Of course, this isn't foolproof either, but it might be less problematic.

Comment author: Gray 24 March 2011 04:59:06AM *  1 point [-]

Sorry, I'm not exactly sure what "writing the last line first" means. I'm guessing you're referring to the syllogism, and you take my proposal to mean arguing backwards from the conclusion to produce another argument for the same conclusion. Is this correct?

Comment author: CuSithBell 24 March 2011 05:33:28AM 0 points [-]

I'm referring to this notion of knowing what you want to conclude, and then fitting the argument to that specification. My intuition, at least, is that it would be more useful to focus on weaknesses of your newly adopted position - and if it's right, you're bound to end up with new arguments in favor of it anyway.

I agree, though, that agreement should not be taken as license to avoid engaging with a position.

I suppose I should note, given the origin of these comments, that I recommend these things only in a context of collaboration - and if we're talking about a concrete suggestion for action or the like rather than an airy matter of logic, the rules are somewhat different.

Comment author: [deleted] 24 March 2011 05:31:28AM 3 points [-]

In real life this is common, and the results are not always bad. It's incredibly common in mathematics. For example, Fermat's Last Theorem was a "last line" for a long time, until someone finally filled in the argument. It may also be worth mentioning that the experimental method is also "last line first". That is, at the start you state the hypothesis that you're about to test, and then you test the hypothesis - which test, depending on the result, may amount to an argument from evidence for the hypothesis.

Another case in point, this time from history: Darwin and natural selection. At some point in his research, natural selection occurred to him. It wasn't, at that point, something that he had very strong evidence for, which is why he spent a lot of time gathering evidence and building argument for it. So there's another "last line first" which turned out pretty well in the end.

Comment author: CuSithBell 24 March 2011 05:38:16AM 0 points [-]

I think the thing which is jumping out as strange to me is doing this after you've been convinced, seemingly to enhance your credence. Still, this is a good point.

Comment author: [deleted] 24 March 2011 05:50:13AM 4 points [-]

The danger that Eliezer warns against is absolutely real. So what's special about math? In the case of math, I think that there is something special, and that is that it's really, really hard to make a bogus argument in math and pass it by somebody who's paying attention. In the case of experimental science, the experiment is deliberately constructed to take the result out of the hands of the experimenter. At least it should be. The experimenter only controls certain variables.

So why is there ever a danger? The problem seems to arise with the mode of argument that involves "the preponderance of evidence". That kind of argument is totally exposed to cherry-picking, allowing the cherry-picker to create whatever preponderance he wants. It is, unfortunately, maybe the most common argument that you'll find in the world.

Comment author: JGWeissman 24 March 2011 06:02:26AM 1 point [-]

It may also be worth mentioning that the experimental method is also "last line first". That is, at the start you state the hypothesis that you're about to test, and then you test the hypothesis - which test, depending on the result, may amount to an argument from evidence for the hypothesis.

No. When you state the hypothesis, it means that, depending on the evidence you are about to gather, your bottom line will be that the hypothesis is true or that the hypothesis is false (or that you can't tell if the hypothesis is true or false). Writing the Bottom Line First would be deciding in advance to conclude that the hypothesis is true.

Depending on where the hypothesis came from, the experimental method may be Privileging the Hypothesis, which the social process of science compensates for by requiring lots of evidence.

Comment author: [deleted] 24 March 2011 06:21:24AM *  3 points [-]

Deciding in advance to conclude that the hypothesis is true is not a danger if the way you decide to do that is by some means that in reality won't let you do that if the hypothesis is false. Keep in mind: you can decide to do something and still be unable to do it.

Suppose I believe that a hypothesis is true. I believe it so strongly that I believe a well-designed experiment will prove that it is true. So I decide in advance to conclude that the hypothesis is true by doing what I am positive in advance will prove the hypothesis, which is to run a well-designed experiment which will convince the doubters. So I do that, and suppose that the experiment supports my hypothesis. The fact that my intention was to prove the hypothesis doesn't invalidate the result of the experiment. The experiment is by its own good design protected from my intentions.

A well-designed experiment will yield truth whatever the intentions of the experimenter. What makes an experiment good isn't good intentions on the part of the experimenter. That's the whole point of the experiment: we can't trust the experimenter, and so the experiment by design renders the experimenter powerless. (Of course, we can increase our confidence even further by replicating the experiment.)

Now let's change both the intention and the method. Suppose you don't know whether a hypothesis is true and decide to discover whether it is true by examining the evidence. The method you choose is "preponderance of evidence". It is quite possible for you, completely erroneously and unintentionally, to in effect cherry-pick evidence for the hypothesis you were trying to test. People make procedural mistakes like this all the time without intending to. For example, you see one bit of evidence, and make note of the fact that this particular bit of evidence makes the hypothesis appear to be true. But now, uh oh! You're subject to confirmation bias! That means that you will automatically, without meaning to, start to pay attention to confirming evidence and ignore disconfirming evidence. And you didn't mean to!

Depending on where the hypothesis came from, the experimental method may be Privileging the Hypothesis

Absolutely, but privileging the hypothesis is a danger whether or not you have decided in advance to conclude the hypothesis. Look at Eliezer's own description:

Then, one of the detectives says, "Well... we have no idea who did it... no particular evidence singling out any of the million people in this city... but let's consider the hypothesis that this murder was committed by Mortimer Q. Snodgrass, who lives at 128 Ordinary Ln.  It could have been him, after all."

This detective has, importantly, not decided in advance to conclude that Snodgrass is the murderer.

Comment author: CuSithBell 24 March 2011 05:44:50AM 1 point [-]

Y'know, you may be right. I also suspect this is something that depends to a significant extent on the type of proposition under consideration.

Comment author: mark_spottswood 20 March 2009 02:01:11PM 6 points [-]

Good points.

This may be why very smart folks often find themselves unable to commit to an actual view on disputed topics, despite being better informed than most of those who do take sides. When attending to informed debates, we hear a chorus of disagreement, but very little overt agreement. And we are wired to conduct a head count of proponents and opponents before deciding whether an idea is credible. Someone who can see the flaws in the popular arguments, and who sees lots of unpopular expert ideas but few ideas that informed people agree on, may give up looking for the right answer.

The problem is that smart people don't give much credit to informed expressions of agreement when parceling out status. The heroic falsifier, or the proposer of the great new idea, gets all the glory.

Comment author: Annoyance 20 March 2009 02:17:15PM 5 points [-]

As the old joke says: What do you mean 'we', white man?

The real reason ostensibly smart people can't seem to cooperate is that most of them have no experience with reaching actual conclusions. We train people to make whatever position they espouse look good, not to choose positions well.

Comment author: Nick_Novitski 20 March 2009 03:20:29PM 2 points [-]

What makes a position well-chosen or more likely to assist in reaching actual conclusions?

Comment author: Annoyance 20 March 2009 03:28:57PM 2 points [-]

The logical structure of the best argument supporting it, the quality of the evidence in that argument, and the extensiveness of that evidence.

Instead of those things, most of us pay attention to rhetoric and status.

Take a look at high school speech and debate organizations, and the things they stress. What development of skills and techniques do their debates encourage?

Comment author: Technologos 22 March 2009 10:01:14PM 4 points [-]

A good point, and a serious problem. When I was in high school debate (Lincoln-Douglas), I hated the degree to which the competition was really about jargon and citation of overwhelming but irrelevant "evidence." I think the tipping point was when somebody claimed that teaching religion in public schools would lead to an environmental catastrophe (and even more, it was purely an argument from authority).

At one point, I ran a case that relied on no empirical evidence whatsoever (however abhorrent that may sound here): it was a quasi-Aristotelian argument that if you accept the value in the first premise--I believe it was "knowledge"--then the remainder followed. The whole case was perhaps three minutes long, half the allowed time, and formatted to make the series of premises and conclusions very obvious.

Best I could tell, there was only one weak link in the argument that was easily debatable. I correctly guessed that the people I was debating were more used to listing "evidence" than arguing logic, and most people had absolutely no idea how to handle even clearly stated premises and conclusions.

I was arguing against the position I actually hold, which is why there was still a flaw in the argument, but it won the majority of the debates nonetheless. Sad, more than anything.

Comment author: diegocaleiro 17 December 2010 04:54:37PM 0 points [-]

This "best argument" idea disregards the danger of one argument against an army: http://lesswrong.com/lw/ik/one_argument_against_an_army/

Comment author: Daniel_Burfoot 20 March 2009 03:08:26PM *  4 points [-]

I think there's an interesting moral of the anecdote, but I'm not sure it's the one you expressed.

My conclusion is: rationalists who desire to discard the burdensome yoke of their cultural traditions, linked inextricably as they are to religion, will have to learn an entirely new set of cultural traditions from scratch. For example, they will need to learn a new mechanism design that allows them to cooperate in donating money to a cause that is accepted as being worthwhile (I think the "ask for money and then wait for people to call out contributions" scheme is damned brilliant).

Comment author: pjeby 20 March 2009 04:12:19PM 2 points [-]

Here's an even better one, under the right circumstances:

"Would everyone please stand up for a moment? Thank you. Now, please remain standing if you believe that our organization is doing important things for the good of the world. Terrific, terrific. Okay, please continue to stand if you're going to make a pledge of at least $X. Fantastic! Now, please continue to stand if you're going to make a pledge of at least $X*2..."

Of course, it won't work very well on a room full of non-conformists... you might have trouble getting them to stand in the first place, especially if they know what's coming.

Comment author: Eliezer_Yudkowsky 20 March 2009 07:10:21PM 5 points [-]

That only works once, if that much. People don't like feeling forced and manipulated.

Comment author: pjeby 20 March 2009 07:31:04PM 3 points [-]

"Right circumstances" includes support for your cause and rapport with your audience, such that most of them don't feel manipulated. The one time I saw that method used, the speaker already had the audience in the palm of his hand, such that they felt they'd already gotten their money's worth just from having listened to him. The stand-up/opt-out trick was just to push an already-high expected conversion rate higher.

(An example of how good a rapport he had: early in the presentation, he asked that people please promise to not even attempt to give him any money that day... and several people laughed and shouted "No!")

Of course, I suppose if you're that good, the trick is moot. On the other hand, the public approach your synagogue used is equally manipulative... it just builds the conformity pressure more slowly, instead of all at once.

Comment author: SoullessAutomaton 20 March 2009 03:11:31PM 10 points [-]

I'm going to agree with the people saying that agreement often has little to no useful information content (the irony is acknowledged). Note, for instance, that content-free "Me too!" posts have been socially contraindicated on the internet since time immemorial, and content-free disagreement is also generally verboten. This also explains the conference example, I expect. Significantly, if this is actually the root of the issue, we don't want to fight it. Informational content is a good thing. However, we may need to find ways to counteract the negative effects.

Personally, having been somewhat aware of this phenomenon, when I've agreed with what someone said I sometimes try to contribute something positive; a possible elaboration on one of their points, a clarification of an off-hand example if it's something I know well, an attempt to extend their argument to other territory, &c.

In cases like the fundraising one, where the problem is more individual misperception of group trends, we probably want something like an anonymous poll--i.e., "Eliezer needs your help to fund his new organization to encourage artistic expression from rationalists. Would you donate money to this cause?", with a poll and a link to a donation page. I would expect you'd actually get a slightly higher percentage voting "yes" than actually donating, though I don't know if that would be a problem. You'd still get the same 90% negative responses, but people would also see that maybe 60% said they would donate.

Comment author: JulianMorrison 20 March 2009 03:46:22PM 8 points [-]

"A slightly higher percentage"? More like: no correlation.

I recall that McDonalds were badly burned by "would you X". Would people buy salads? oh god yes, they'd love an opportunity to eat out and stick to their diets. Did they buy salads, once McDonalds had added them? Nope.

Similarly, I recall that in the last US election the Ron Paul Blimp campaign was able to get a lot more charitable pledges than real-world money, and pretty quickly died from underfunding.

Comment author: Annoyance 20 March 2009 03:53:32PM 4 points [-]

Yes, excellent point that should be underlined for the readers here.

People's metaknowledge is very poor. Their knowledge about themselves, especially so.

Comment author: Nebu 20 March 2009 06:37:10PM 3 points [-]

I recall that McDonalds were badly burned by "would you X". Would people buy salads? oh god yes, they'd love an opportunity to eat out and stick to their diets. Did they buy salads, once McDonalds had added them? Nope.

Someone[1] must be buying those salads, as McDonalds is keeping them on the market, and given that food spoils, it doesn't make financial sense for them to keep offering a product which doesn't sell.

1: I've actually tried the McDonalds salad 3 times. The first time, it was very (and surprisingly) good. The other two times it was mediocre.

Comment author: CarlShulman 21 March 2009 06:21:52AM 3 points [-]

You can keep small stocks of an item, and it can have positive effects beyond direct revenues, e.g. if families with one dieting or vegetarian member don't avoid McDonald's because that person can eat a salad.

Comment author: ciphergoth 21 March 2009 08:06:21AM 4 points [-]

I think the positive effect is that they can say that they sell salads, people can convince themselves they intend to buy the salad, and so on.

Comment author: Matt_Simpson 21 March 2009 08:09:56AM 1 point [-]

I think the positive effect...

Or rather, another positive effect. These explanations aren't mutually exclusive.

That being said, nice insight.

Comment author: homunq 08 May 2009 09:59:25PM 3 points [-]

I saw a study recently that said that the mere presence of a salad on the menu increases people's consumption. I deeply doubt that fast food chains were surprised by that result.

From the nature of the study, it's not even about convincing themselves they intend to buy a salad. By merely seriously having considered the option, they give themselves virtue points which offset the vice of more consumption.

Comment author: SoullessAutomaton 20 March 2009 08:35:16PM 3 points [-]

"A slightly higher percentage"? More like: no correlation.

You make an excellent point, I was not really thinking clearly there.

However, I will note that my intent was not that it should produce an accurate prediction of donations, but to better gauge public opinion on the idea to counteract the tendency to agree silently but disagree loudly.

Comment author: Demosthenes 20 March 2009 04:22:36PM *  10 points [-]

I've worked for a number of non-profits, and in analyzing our direct mailings, we would get a better response from a mailing that included one of two things:

  1. A single testimonial mentioning the amount that some person gave
  2. Some sort of comment about the group average (listeners are making pledges of $150 this season)

This is one of the reasons that some types of nonprofits choose to create levels of giving; my guess is that this games these common level-of-giving norms by creating artificial standards of participation. Note: you can base your levels on actual evidence and not just round numbers! (Plus inflation, right?)

We also generally found that people respond well to the idea of a matching donation (which is rational since your gift is now worth more).

I do believe that anonymous fundraising removes information about community participation that is very valuable to potential donors. Part of making a donation is responding to the signal that you are not the only one sending a check to a hopeless office somewhere.

Anonymous polls might be a good idea, but especially among rational types, you might want the individual testimony: you get to see some of the reasoning!

I think the synagogue in the story picked up on these ideas and used them effectively. But the nice thing about raising money through direct mailing and the internet is that you can run experiments!

Comment author: SoullessAutomaton 20 March 2009 08:48:07PM 2 points [-]

I do believe that anonymous fund raising removes information about community participation that is very valuable to potential donors. Part of making a donation is responding the signal that you are not the only one sending a check to a hopeless office somewhere.

The reason I specified anonymity was to reduce the likelihood of a social stigma attached to not donating. The idea of pressuring people into an otherwise voluntary gesture of support makes me very uncomfortable.

However, I may be overcautious on that aspect, and I defer to your greater experience with fundraising. Do you have any other empirical observations about response to fundraising efforts? You could consider submitting an article on the subject, either as it relates to instrumental rationality, or for the benefit of anyone else who might want to organize a rationality-related non-profit.

Comment author: Demosthenes 21 March 2009 04:17:23AM 3 points [-]

I think your caution is warranted; the fact that you can see the other people in the synagogue who don't stand up could be very hurtful to the nonparticipants. Highlighting individual donors or small groups is a good way to show public support without giving away too much information about your membership's participation as a whole.

If you are interested in more rigorous studies (we did ours in excel), you might want to try Dean Karlan's "Does Price Matter in Charitable Giving? Evidence from a Large-Scale Natural Field Experiment " <http://karlan.yale.edu/p/MatchingGrant.pdf>

I will try to dig up some other papers online.

Comment author: TheOtherDrJones 02 December 2010 03:39:30AM 0 points [-]

Amongst a group of people who know and interact with each other regularly, such as a synagogue, who has the means to donate money and who does not would be an extremely obvious piece of information to the members of that group.

There are actually two actions available to members here: they either do not donate, or they donate a certain amount. To the members of the group, the amount donated is as much of an information channel as the choice to donate or not to donate. Those who donate a lot and are rich may cause offence by donating less than expected; those who donate a little when there is no expectation may gain esteem.

You are proposing a situation in which an individual donates less than expected by such a magnitude that it seriously affects people's esteem for them. This is possible, although given social pressures unlikely. It can occur at all because the magnitude of the donation combined with the wealth of the individual and the support for the cause are all easily calculable. Magnitude of donation is known, wealth is implied by clothes, status symbols or frank discussions about income, and support for the place of worship is expected to be high.

In a group of rational people donating to support a cause they have the option of donating, not donating and voicing support or criticism. You have established a reasonable grounds for why people do not arbitrarily voice support, and for why people voice criticism. But let's look at the amount donated and imagine it were being done publicly, is there a state where people can be hurt by donation or non-donation?

Even if the amount donated and a reasonable guess at the wealth of the individual are available, the amount donated can still vary with the level of support the person feels for the cause. There is no level of donation that is 'incorrect', just as there is no arbitrary 'correct' level of support. Therefore the situation is most unlikely to cause social harm to the individual donating, or to those who do not donate, as there is a rational reason for any level of donation.

Comment author: Cassandra 20 March 2009 03:24:10PM 2 points [-]

I have been thinking about this subject for a while because I saw the same type of culture of disagreement prevent a group I was a member of from doing anything worthwhile. The problem is very interesting to me because I come from the opposite side of the spectrum being heavily collectivist. I take pleasure in conforming to a group opinion and being a follower but I also have nurtured a growing rationalist position for the last few years. So despite my love of being a follower I often find myself aspiring to a leadership position in order to weld my favored groups into a cohesive whole rather than an un-unified mob. The only solution I have been able to come up with so far is forming a core of beliefs and values which the group can accept without criticism, even if some of the members disagree with some of the parts. This is of course very hard to do.

Comment author: Annoyance 20 March 2009 03:40:46PM 1 point [-]

People are also unwilling to express agreement because they know, and fear, group consensus and the pressure to fit in. Those usually lead to groupspeak and groupthink.

One of the primary messages of the local Powers That Be is that other people's evaluations should be a factor in your own - that other people's conclusions should be considered as evidence when you try to reach your own - and that's incompatible with effective rationality, as well as with the techniques needed to prevent a self-reinforcing mob consensus.

Comment author: Johnicholas 20 March 2009 03:41:41PM 2 points [-]

To some extent, this was discussed in "The Starfish and the Spider", which is about "leaderless groups". The book praises the power of decentralized, individualistic cultures (that you describe as "Light Side"). However, it admits that they're slower and less-well coordinated than hierarchical organizations (like the military, or some corporations).

You've outlined some of the benefits (recruitment, coordinated action) of encouraging public agreement and identifying with the group. You've also outlined some of the dangers (pluralistic ignorance, etc.).

Possibly the appropriate answer is to create multiple groups, so that each can be a check against the others turning into cults. Possibly even a fractal of groups and subgroups.

Comment author: JamesAndrix 20 March 2009 04:01:47PM 7 points [-]

On 'What Do We Mean By "Rationality"?' when you said "If that seems like a perfectly good definition, you can stop reading here; otherwise continue." - I took your word for it and stopped reading. But apparently comments aren't enabled there.

You have significantly altered my views on morality (Views which I put a GREAT deal of mental and emotional effort into.) I suspect I am not alone in this.

I think there's a fine line between tolerating the appearance of a fanboy culture, and becoming a fanboy culture. The next rationalist pop star might not be up to the challenge.

And for that matter, how many times would you want to risk being subjected to agreement without succumbing? It's not wireheading, but people do get addicted.

Comment author: AnnaSalamon 20 March 2009 07:21:36PM *  9 points [-]

I think there's a fine line between tolerating the appearance of a fanboy culture, and becoming a fanboy culture. The next rationalist pop star might not be up to the challenge.

Agreement and disagreement look more like skills that we can develop (and can improve at both of) than ends of a continuum (where moving toward one means moving away from the other).

I mean, we can reduce the apparent and actual extent to which we're an Eliezer fan-club or echo chamber, and improve our armor against the emotional and social pressures that "we all think the Great Leader is perfect" tends to form. And we can also, simultaneously, improve our ability to endorse good ideas even when someone else already said that idea, and to actually coordinate to get stuff done in groups.

Comment author: Alicorn 20 March 2009 04:50:42PM *  5 points [-]

I'm not sure if this was at work in your fundraiser, but I know I tend to see exhortations from others that I give to charitable causes/nonprofits as attempts at guilt tripping. (I react the same way when I'm instructed to vote, or brush my teeth twice a day, or anything else that sounds less like new information and more like a self-righteous command.) For this reason, I try to keep quiet when I'm tempted to encourage others to give to my pet charity/donate blood/whatever, for fear that I'll inspire the opposite reaction and hurt my goal. I don't always succeed, but that's an explanation other than a culture of disagreement for why some people might not have contributed to the discussion from a pro-giving position.

Comment author: Skylar626 20 March 2009 06:10:29PM 3 points [-]

Isn't the secret power of Rationality that it can stand up to review? Religious cults are able to demand extreme loyalty because their people are not presented with alternatives and are not able to question the view they are handed. One of our strengths seems to be discernment and argumentation, which naturally leads to fractious in-fighting. What would we call "withholding criticism for the Greater Good"?

Comment author: pjeby 20 March 2009 07:20:04PM 3 points [-]

The difference is simply in the critic's motivation: are they trying to improve the situation, or just trying to avoid the expected outcome of agreement? E.g., are you criticizing charities because you want them to do better, or because you don't want to shell out the money AND don't want to admit it? (I'm unashamedly in the "I don't want to send money to Africa and I don't care if I have a logical reason for it" camp, and so have no need to make up a bunch of reasons it's bad.)

If the critic were really interested in improvement, they'd be suggesting improvements or better yet, DOING something about improvement.

Comment author: jimrandomh 20 March 2009 06:30:13PM 12 points [-]

In hindsight, the problem with your fundraiser was obvious. There were two communications channels: one private channel for people who contributed, and one channel for everyone else. Very few people will post a second message after they've already posted one, so the existence of the private channel prevented contributors from posting on the mailing list. Removing all the contributors from the public channel left only nay-sayers and an environment that favored further nay-saying. The fix would be to merge the two channels: publish the messages received from contributors, unless they request otherwise.

Comment author: Yvain 20 March 2009 07:36:19PM *  5 points [-]

Wait a second, now we're using Jews trying to run a synagogue as an example of a group who cooperate and don't always disagree with each other for the sake of disagreeing? Your synagogue must have been very different from mine. You never heard the old "Ten Jews, ten opinions - or twenty if they're Reform" joke? Or the desert island joke?

I also agree with everyone. In particular, I agree with Cameron and Prase that it's tough to just say "I agree". I agree with ciphergoth that I worry that I'm sucking up to you too much. I agree with Anna Salamon that we tend to be intellectual show-offs. I agree with Julian that many of us probably started off with a contrarian streak and then became rationalists. I agree with Jacob Lyles that there's a strong game theory element here - I lose big if rationalists don't cooperate, I win a little if we all cooperate under Eliezer's benevolent leadership, but to a certain way of thinking I win even more if we all cooperate under my benevolent leadership and there's no universally convincing proof that cooperating under someone else is always the highest utility option. And I agree with practically everything in the main post.

One thing I don't agree with: being ashamed of strong feelings isn't a specifically rationalist problem. It's a broader problem with upper/middle class society. Possibly more on this later.

Comment author: Eliezer_Yudkowsky 20 March 2009 07:43:38PM 2 points [-]

I've never been dragged to any other religious institution, so I wouldn't have any other example to use. I expect these forces are much stronger at Jesus Camp or the Raelians. But yes, even Jewish institutions still coordinate better than atheist ones.

Comment author: KevinC 22 November 2010 10:35:53AM 2 points [-]

Granting that the jokes you refer to are generally accurate, wouldn't that make the synagogue a better example for a rationalist Cat Herd than some other religious organization where people "think" in lockstep with the Dear Leader? The synagogue would represent an example of a group of people who manage to cooperate effectively even with a high level of dissensus (neologism for the opposite of consensus). Which, as I understand it, is the goal Eliezer is aiming for in this post.

Comment author: Davorak 05 February 2011 02:39:49AM 0 points [-]

And you win the most when the group is so rational that almost anyone could serve as the benevolent leader.

Comment author: wedrifid 05 February 2011 10:22:04AM 1 point [-]

And you win the most when the group is so rational that almost anyone could serve as the benevolent leader.

The group trait required is not rationality - it is other traits that also share positive affect.

Comment author: Davorak 06 February 2011 01:15:01AM 0 points [-]

I was not asserting that rationality is all you need to make the most efficient group, if that is what you were getting at.

I think we agree that, starting with groups A and B both with the same skills, if group A is more rational it will also be the more effective group.

My argument was that as the ability of the group to act rationally increases, the utility difference between being a member and being the leader decreases, as the group becomes better at judging the leader's value.

Comment author: PhilGoetz 20 March 2009 07:54:18PM *  16 points [-]

Two observations:

  • In American culture, when you give money to a charity, you aren't supposed to tell people. Christian doctrine frowns heavily on that, and we are all partly indoctrinated with that doctrine. That's why no one sent their "yes" response to the list.

  • You just wrote a post with 22 web links, and 19 of them were to your own writings. I think that says more about why we can't cooperate than anything else in the post.

Comment author: Technologos 21 March 2009 04:02:38AM 19 points [-]

Far from being a negative aspect of the post, the self-linking is a key element of Eliezer's effort to build a common vocabulary for rationalists. I've personally found them extremely helpful for reminding myself of the context of the words, when I've forgotten. They're basically footnotes.

How can we cooperate if we don't even speak the same language?

Comment author: PhilGoetz 21 March 2009 08:36:02PM 6 points [-]

It's better to have those links than not to have them. It's a bit as if Eliezer were writing a large, hypertext book that we are writing footnotes in. But the lack of links to the writings of other people shows a lack of engagement and a self-preoccupation that smart people tend to have. Too often, when we ask others for co-operation, we really mean "get behind my ideas and my agenda".

Cooperation involves compromise. It involves participating in the critique of those ideas. It requires, as a prerequisite, believing that others are smart enough to look at the same evidence and see things that you missed. In a forum like this, actual interest in cooperation is evidenced by writing relatively short posts and then responding at length to many of the comments, rather than by writing extremely long posts and then making a few short responses to comments.

Comment author: Eliezer_Yudkowsky 21 March 2009 09:06:10PM 13 points [-]

I link to myself because I know what I have written.

Comment author: Nominull 21 March 2009 09:16:11PM 12 points [-]

Maybe you should read something written by somebody else sometime.

Comment author: Davorak 05 February 2011 02:20:56AM 0 points [-]

This is an unhelpful comment and did not contribute to the conversation, and I interpret it as an attack. Instead of attacking, why not engage EY on why he thinks it is so important to link to what he has written rather than to what other people have written?

Any time I get the urge to use a "witty" one-liner, I instead ask for the person's reasoning, perspective, and the logic that led them to their conclusion.

Comment author: Perplexed 05 February 2011 02:50:54AM 7 points [-]

This is an unhelpful comment and did not contribute to the conversation.

I disagree. It is a very appropriate response to Eliezer's flip dismissal of Goetz's quite sincere (and to my mind, good) suggestions.

Eliezer is, of course, very well-read for a man of his age, but he is actually a bit parochial given the breadth of his ambitions and the authoritative, didactic writing style. His credibility, his communication ability, his fundraising, and even his ideas could probably benefit if he made a conscious effort to make his writing a bit more scholarly.

I understand that Eliezer is both very busy and very prolific, but I thought that his excuse (that he cited himself so much only for reasons of convenience (or laziness)) was much too dismissive of Phil's arguments - in large part because I think his excuse is quite likely the truth.

Comment author: Davorak 05 February 2011 03:15:44AM *  -1 points [-]

With only a sentence and without back and forth conversation do you have the ability to pull out flippant intent from:

I link to myself because I know what I have written.

I do not know EY, so I cannot assign myself a high probability of doing so. In truth I subconsciously assigned a high probability that Nominull was in the same boat as me; in other words, I jumped to conclusions. Do you assign yourself a high probability of determining EY's intent from the above? If so, please share how.

I can imagine EY's statement being made with helpful intent (I could have made that statement with helpful intent); responding to it as if it were made with unhelpful intent, without evidence, does not seem rational/helpful to me.

Comment author: Perplexed 05 February 2011 05:19:39AM *  3 points [-]

I think you are attaching too much importance to inferring the intent (flippant vs helpful) of Eliezer's one-line response to several dozen lines of discussion, and attaching too little importance to assessing the tone. In any case, the dictionary definition of flippant:

frivolously disrespectful, shallow, or lacking in seriousness; characterized by levity

seems to be about tone, rather than intent. Eliezer's comment qualifies as flippant. Nominull's response was also flippant by this definition. This matching tone strikes me as appropriate - which is exactly what I said.

At the point where Eliezer made his comment, he was being mildly criticized. His flippant comment, which I think was exactly truthful, carried the subtext that he was not particularly interested in discussing those criticisms at that time. He is totally within his rights sending that message. The criticism was mild, and formulating a serious and thoughtful response to the criticism is not something he was required to do. He could have just ignored it. He chose not to.

Sometimes clever, conversation-stopping responses don't stop conversations. Particularly when they are a little bit rude. Eliezer got a clever and rude response back. And for almost two years, everyone was satisfied with that ending.

Comment author: Davorak 05 February 2011 06:55:15AM -2 points [-]

You can replace intent with tone and I would stand by that point. I could make the same remark without being disrespectful, shallow, or lacking in seriousness, and without levity.

Sometimes clever, conversation-stopping responses don't stop conversations. Particularly when they are a little bit rude. Eliezer got a clever and rude response back.

By your description, Eliezer makes a true but rude remark and receives a rude response back, and this is "appropriate." I do not see how a rude response to what is believed to be a rude comment is productive; it does not bring any logic or new data to the table.

Comment author: Davorak 05 February 2011 07:03:05AM *  1 point [-]

Eliezer got a clever and rude response back. And for almost two years, everyone was satisfied with that ending.

I think there is a high probability that lack of further comments is just due to the propensity not to post in old conversations.

I figured if the sequences and in-post links are to be taken seriously, then the comments should be too. Old comments should not be treated as if they were preserved in carbonite, but as living arguments.

Comment author: Nominull 05 February 2011 04:54:33AM 8 points [-]

First let me say that I do not think that attacks are by their very nature impermissible, and if you do, how dare you put "witty" in scare quotes? That's just flat-out unkind.

Anyway, it's a little hard for me to defend my comments of two years ago against attack, because I no longer remember what prompted me to make them. I will do my best to reconstruct my mental state leading up to the comment I made.

I don't think I was necessarily on PhilGoetz's side when I read his comment. I think I agreed, and still agree, with Technologos. But when I read the Wise Master's response to it, it didn't sit right with me. It read like an attempt to fight back against attack with anything that came to hand, rather than an attempt to seek truth. Surely, I must have felt, if the Wise Master were thinking clearly, he would see that unfamiliarity with the works of others is not an excuse, but in fact the entire problem. I feel that I wanted to communicate this insight. I chose the form that I did probably because it was the first one that came to mind. I hang out on some pretty rough and tumble internet forums, described by one disgruntled former poster as "geek bevis[sic] and buthead[sic] humour[sic]". Sharp, witty-without-the-scare-quotes one-liners are built into my muscle memory at this point, and I view a well-executed burn as having aesthetic value in and of itself. I dunno, there is something to be said for short, elegant responses to provoke thought, rather than long plodding walls of text.

Anyway, that's my reasoning, perspective, and logic. I hope you found this enlightening.

Comment author: Davorak 05 February 2011 06:36:35AM 0 points [-]

"Witty" was describing my own remark - as in, the remarks I hold back on may not actually be witty. I was not trying to reference your remark, though in retrospect it does seem easy to infer that, so I apologize for communicating sloppily.

Attacks that do not forward the conversation are not useful. If the attacker does not expose the logic and data behind their attack, then the person being attacked has no logic or data to pick apart and respond to, and no reason to believe that the attacker is earnest in seeking the truth.

Comment author: wedrifid 05 February 2011 07:27:30AM *  3 points [-]

"witty" was describing my own remark, as in: the remarks I hold back may not actually be witty. I was not trying to reference your remark, though in retrospect I see it was easy to infer that, so I apologize for communicating sloppily.

Your attack against Nominull was, in fact, stronger and less ambiguous than Nominull's.

Attacks that do not forward the conversation are not useful. If the attacker does not expose the logic and data

The logic behind the point was actually quite obvious, which is not to say I would have presented it in this context. As Perplexed points out, sometimes there are benefits to taking the effort to ensure that you do know what other people have written. (Incidentally, I upvoted both Eliezer and Phil, and left Nominull alone.)

Nominull's comment, discourteous or not, furthered the actual conversation while yours did not (and nor did mine). So that can't be the deciding factor in why your kind of attack is different from Nominull's kind. I think the difference in perception is that you are responding to provocation, which many people perceive as a whole different category - but that can depend on which side you empathise with.

Comment author: Davorak 05 February 2011 08:12:36AM -3 points [-]

Your attack against Nominull was, in fact, stronger and less ambiguous than Nominull's.

You use the terms "stronger" and "less ambiguous" when I made no claim of being weaker or more ambiguous. If you are implying that I was untruthful in the remark you first quoted, that is a misinterpretation on your part.

The logic behind the point was actually quite obvious, which is not to say I would have presented it in this context.

The logic behind why Nominull values EY linking and quoting philosophical works is not obvious to me. Nor is it obvious what Nominull's mental model is of why EY had not been linking and quoting philosophical works (as of the 2009 comment). Without making that mental model clear and pointing out supporting evidence, I do not see how the comment is useful.

As Perplexed points out, sometimes there are benefits to taking the effort that you do know what other people have written.

I do not see anyone in this conversation denying that there are benefits to this. I cannot tell if you have a deeper point.

I think the difference in perception is that you are responding to provocation

That does not fit how I view my response. It seems to me that the conversation could have taken a much different and more productive route right after EY's comment, and Nominull's comment discouraged that. I offered the alternative of engaging EY on why he thinks it is so important to link to what he has written rather than to what other people have written, which I thought would lead to a more productive conversation. I want to encourage productive conversation if I am going to be a member of the Less Wrong community.

Comment author: wedrifid 05 February 2011 07:05:28AM 0 points [-]

21 March 2009 09:16:11PM

It is long past time for chastisement, if it was ever required.

Comment author: Davorak 05 February 2011 07:19:47AM *  1 point [-]

I respond to a similar comment here.

It is not about chastisement, it is about the people, like me, who come and read it later.

Comment author: wedrifid 05 February 2011 07:39:57AM 2 points [-]

You seem to be remarkably willing to assert how your comments should be interpreted with respect to intent, meaning and social implications. Yet you do not seem to have paid Nominull that same courtesy.

Comment author: Davorak 05 February 2011 08:28:45AM -3 points [-]

Well, I know what my intent is and I know what I want my social implications to be, so it makes sense that I try to communicate them. I accept that Nominull hangs out on "some pretty rough and tumble internet forums" and did not have unproductive intentions. I have not claimed that Nominull had unproductive intentions.

An example of impoliteness is needed if you want to continue this conversation.

Comment author: dfranke 20 March 2009 08:33:05PM 19 points [-]

The nice thing about karma/voting sites like this one is that they provide an efficient and socially acceptable mechanism for signaling agreement: just hit the upmod button. Nobody wants to read or listen to page after page of "me too"; forcing people to tolerate this would be bad enough to negate the advantage of making agreement visible. Voting accomplishes the same visibility without the irritating side-effects.

Comment author: Nebu 20 March 2009 08:48:11PM 4 points [-]

There's a bit of noise, as I sometimes vote up someone I disagree with if they raise an interesting point, and I very, very rarely vote someone down just because I disagree with them.

This "bit of noise" becomes significant on sites with a small number of subscribers, as a +/-2 vote is a "big deal".

Comment author: dfranke 20 March 2009 08:58:57PM 4 points [-]

I think that's a feature, not a bug. What an upvote expresses is nearer to "you should listen to this guy" than to "I agree with this guy", but I think the former is more useful information.

Comment author: diegocaleiro 17 December 2010 05:07:30PM 4 points [-]

There should be an emotional display of how many upvotes a post got.

Numbers are, well, too numbery for that.

Either a smiley with an ever-growing smile,

or a balloon that grows bigger and bigger (for posts that really get way too upvoted, the balloon could explode into colorful bright carnival paper, or candy, or Brad Pitt, or Russian Redheads...)

OK, balloon or smiley, who is with me?

Comment author: notriddle 30 April 2013 01:24:37AM 1 point [-]

I like the idea, but it seems kind of gimmicky. (Thinking of LW's comments section, it would be hard to give another icon the kind of prominence we want without making it too big.) How about a green/red bar, like the one on YouTube?

Comment author: Psy-Kosh 20 March 2009 08:54:31PM 0 points [-]

Hrm, overall makes sense. But now, HOW do you suggest we actually do that sort of thing here, in an online forum, in the general case, without it translating into a whole bunch of people effectively going "me too"?

I do remember when, for a certain unnamed organization, you started the "donate today and tomorrow" drive (or whatever you called it, something to that effect); I did post to a certain mailing list both the thoughts that led me to donate and what I was thinking in response to that sort of appeal, etc.

Comment author: Vladimir_Golovin 20 March 2009 09:02:37PM *  18 points [-]

Heh, it seems like this post has primed me for agreement, and I upvoted a lot more comments than I usually do. And it looks like many others did this as well -- look at the upvote counts! I was reading and voting with Kibitzer on, and was surprised to see the numbers.

(Have I just lowered my status by signaling that I'm susceptible to priming?)

Comment author: pjeby 20 March 2009 09:37:37PM 11 points [-]

Nah, you've raised it, by signaling that you're honest. At least, that's how it would work among true rationalists (as opposed to anti-irrationalists). ;-)

Comment author: jschulter 19 January 2011 02:16:56AM 0 points [-]

They surprised me too. (I actually felt the urge to use an unnecessary exclamation point there, the priming's made me so enthusiastic...)

And I think that the status gained from the fact that you noticed being primed probably outweighed any lost due to our being told it happened. Though now that we're noticing it, we need to decide which frequency of upvoting we should be using so we can avoid the effect.

Comment author: rhollerith 20 March 2009 09:44:00PM *  1 point [-]

Those who suspect me of advocating my unconventional moral position to signal my edgy innovativeness or my nonconformity should consider that I have held the position since 1992, but only since 2007 have I posted about it or discussed it with anyone but a handful of friends.

Comment author: Cyan 20 March 2009 09:58:14PM 0 points [-]

Unfortunately, they can't consider that you have held the position since 1992 -- all they can consider is that you claim to have done so. You could get your handful of friends to testify, I suppose...

Comment author: rhollerith 21 March 2009 05:41:33AM *  1 point [-]

Cyan points out, correctly, that all the reader can consider is that I claim to have held a certain position since 1992. But that is useful information for evaluating my claim that I am not just signaling because a person is less likely to have deceived himself about having held a position than about his motivations for a sequence of speech acts! And I can add a second piece of useful information in the form of the following archived email. Of course I could be lying when I say that I found the following message on my hard drive, but participants in this conversation willing to lie outright are (much) less frequent than participants who have somehow managed to deceive themselves about whether they really held a certain position since 1992, who in turn are less frequent than participants who have somehow managed to deceive themselves about their real motivation for advocating a certain position.

1995 Jul 4 16:20

Subject: Re: July 15th

Russell Brand writes:

Will you be able to join us at my house to hear John David Garcia talk about the mechanisms for thought, creativity and quantum mechanics?

I certainly would like to join you. Garcian ethics has become an important part of my philosophy, and I want to meet people who assign a similar importance to the ethical principles outlined in Creative Transformation.

Comment author: Cyan 22 March 2009 07:06:43AM 0 points [-]

I don't disagree with the above post -- I just wanted to make a pedantic distinction between claims and facts in evidence. (Also, my choice of the pronoun "they" rather than "we" was deliberate.)

Comment author: Court_Merrigan 21 March 2009 02:33:32AM -1 points [-]

I don't believe you.

Comment author: rhollerith 21 March 2009 05:31:09AM 0 points [-]

Don't believe that my advocacy of the moral position is not just signaling, or don't believe that I've held the moral position since 1992?

Comment author: Court_Merrigan 21 March 2009 05:50:23AM *  0 points [-]

I don't know how long you've held the position, or much care - I don't think it's relevant. But it is signaling, I think, for 2 reasons:

  • Your public concern with saying it's not signaling is just a way of signaling;
  • Claiming a certain timespan of belief is just an old locker room way of saying "I got here first." Which surely is signaling.

This is the sort of thing that causes unnecessary splintering in groups. I have a very visceral reaction to this sort of signaling (which I would label preening, actually). Perhaps I should examine that.

Comment author: AnnaSalamon 21 March 2009 02:41:07AM 2 points [-]

I believe rhollerith. I met him the other week and talked in some detail; he strikes me as someone who's actually trying. Also, he shared the intellectual roots of his moral position, and the roots make sense as part of a life-story that involves being strongly influenced by John David Garcia's apparently similar moral system some time ago.

Hollerith doesn't mean he was applying his moral position to AI design since '92, he means that since '92, he's been following out a possible theory of value that doesn't assign intrinsic value to human life, to human happiness, or to similar subjective states. I'm not sure why people are stating their disbelief.

Comment deleted 21 March 2009 05:30:46AM *  [-]
Comment author: Eliezer_Yudkowsky 21 March 2009 05:33:49AM 0 points [-]

Off-topic until May, all.

Comment author: roland 21 March 2009 02:59:08AM 1 point [-]

Way to go Eliezer, you have my full support! And another great posting, btw!

Comment author: AndrewKemendo 21 March 2009 03:12:08AM 7 points [-]

I personally see public disagreements as a way to refine the intent of the person under the spotlight rather than a social display of individualism. When I disagree with someone it is not for the sake of disagreeing but rather to refine what I may think is a good idea that has a few weak points. I do this to those I respect and agree with because I hope that others will do this to me.

I think the broader question here is not whether we should encourage widespread agreement in order to create cohesion, but rather whether we can ensure that the tenets we collectively agree on are correct conclusions. That is, in my mind, the main difference between rationalists and what I would call tribalists: in general, the majority agree on tenets which have serious rational flaws, or they simply do not contest said tenets. Otherwise, if we do follow the leader, then if there are true flaws in that particular modus, we will never discover them.

I agree that it is hard to start a movement based on this - however, I see this as a positive attribute. Just as the (flawed) idea of representative democracy was supposed to slow government to a crawl, the rationalist mindset slows groupthink and confirmation bias to a near halt. It is nevertheless a strong movement, however slow.

Comment author: MBlume 21 March 2009 04:00:36AM 21 points [-]

I must admit, I think I do find myself going into Vulcan mode when posting on LW. I find myself censoring very simple social cues -- expressions of gratitude, agreement, emotion -- because I imagine them being taken for noise. I think I'm going to make an effort to snap myself out of this.

Comment author: Vladimir_Golovin 21 March 2009 07:40:49AM *  11 points [-]

Same here. It's very natural for me to thank people when they say or do something awesome, to encourage promising newbies, and to express my agreement when I do agree, but I got the impression that such things are generally frowned upon here, so I found myself suppressing them.

Actually, I didn't mind that much -- the power of the ideas discussed here far outweighs these social inconveniences, and I can easily live with that. But personally, I would prefer to be able to express my agreement and gratitude without spending too many calories worrying about my tribal status.

(Of course we'll need to keep the signal/noise ratio in check, but I'll post my ideas on that in a separate comment).

Comment author: [deleted] 21 March 2009 09:56:24AM -1 points [-]

I think that there are parts of life where we should learn to applaud strong emotional language, eloquence, and poetry. When there's something that needs doing, poetic appeals help get it done, and, therefore, are themselves to be applauded.

That may be, but I generally find YOUR poetic appeals to make me throw up in my mouth. I read my mother your bit about how amazing it was that love was born out of the cruelty of natural selection, and even she thought it was sappy.

Comment author: MBlume 21 March 2009 10:06:42AM 6 points [-]

I read my mother your bit about how amazing it was that love was born out of the cruelty of natural selection, and even she thought it was sappy.

I, on the other hand, nearly started sobbing, so I guess it takes all kinds.

Comment author: Corey_Newsome 18 February 2010 03:07:30AM 0 points [-]

Source?

Comment author: MBlume 18 February 2010 04:18:08AM 0 points [-]
Comment author: patrissimo 21 March 2009 05:55:42PM 4 points [-]

You're awesome, Eli. I love the mix of rationality and emotion here. Emotion is a powerful tool for motivating people. We of the Light Side are rightfully uncomfortable with its power to manipulate, but that doesn't mean we have to abandon it completely.

I recently suggested a rationality "cult" where the group affirmation and belonging exercise is to circle up and have each person in turn say something they disagree with about the tenets of the group. Then everyone cheers and applauds, giving positive feedback. But now I see that this is going too far towards disagreement - better would be for each person to state one area of agreement and one of disagreement with the cult's principles, or today's sermon or exercises, and then be applauded.

Comment author: JoshuaFox 21 March 2009 07:25:19PM *  2 points [-]

Then clearly your fund-raising drive would have benefited from a mechanism for publicizing and externalizing support.

Charitable organizations commonly use a variety of such methods. The example you gave is just one. If correctly designed, the mechanisms do not cause support to be swamped by criticism, and they can operate without suppressing any free thought or speech.

E.g. publishing (with their agreement) the names of donors, the amounts, and endorsements; using that information to solicit from other donors; getting endorsements from respected people; appointing wealthy donors to use their own donations as an example when leading solicitation drives among other wealthy donors etc.

The situation does not seem as dire as you suggest.

And you'd better bet that synagogue fund-raising drives get all the gripes that you received, and more!

Comment author: Loren 21 March 2009 11:07:23PM 0 points [-]

Rather than ourselves making the drastic cultural changes that Eli talks about, perhaps it would be more efficient to piggyback on to another movement which is further down that path of culture change, so long as that movement isn't irrational. See this URL:

http://www.thankgodforevolution.com/node/1711

Check out the rest of the web site if you have time, or better yet, buy and read the book the web site is promoting. As you can see from the URL above, cooperation is an important value in the group.

I have been observing the spiritual practices promoted by this web site for just a few weeks, and already it's been giving me tremendous personal benefit. My relationship with my wife and kids is better, I have more enthusiasm for life when I get up in the morning, I no longer find doing chores so onerous, it's much easier for me to refrain from my vices, and I just generally feel more satisfied with the way things are. That's quite a bit for just a few weeks, and I sense the benefits are going to continue to grow with time so long as I adhere to the spiritual practices.

Even though I support Eli's non-profit (that can't be named), I have a very strong urge to give ten times as much money to the group that makes such an immediate and real difference in my life.

The really cool thing, though, is that the group is completely compatible with what Eli is trying to do, and should be able to help the cause rather than hinder it, unless we dismiss the group out of hand because their culture is more like a religion than a group of rationalists.

If you think the material on the web site URL I posted above is in any way irrational, please let me know about it. I'd like to hear what you're thinking.

Comment author: Eliezer_Yudkowsky 21 March 2009 11:46:47PM 1 point [-]

This isn't a comment, this is an attempted post in which you say in more detail what's going on over there and which "practices" you're talking about. It then gets voted up or voted down. In any case, don't try to do this sort of thing in one comment.

...though I see you don't have enough karma yet to post; but that's exactly what we've got the system for, eh?

Comment author: byrnema 24 March 2009 05:02:11AM *  -1 points [-]

I'm a beginner who thinks meta-discussions are fun...

Eliezer is asking about whether we should tolerate tolerance. Let's suppose -- for the sake of argument -- that we do not tolerate tolerance. If X is intolerable, then the tolerance of X is intolerable.

So if Y tolerates X, then Y is intolerable. And so on.

Thus, if we accept that we cannot tolerate tolerance, then we also cannot tolerate toleration of tolerance, and we cannot tolerate toleration of toleration of tolerance.

I would think of tolerance as a relationship between X and Y in which Y acquires the intolerability of X.
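Byrnema's recursion can be read as a fixed-point computation; here is a minimal sketch of that reading (the function name and data layout are my own illustration, not from the comment):

```python
def intolerable_closure(x, tolerates):
    """Return everything rendered intolerable by X, under the rule:
    whoever tolerates something intolerable is itself intolerable."""
    intolerable = {x}  # X is intolerable by assumption
    changed = True
    while changed:  # iterate until a fixed point is reached
        changed = False
        for agent, tolerated in tolerates.items():
            if agent not in intolerable and tolerated & intolerable:
                intolerable.add(agent)  # agent tolerates an intolerable thing
                changed = True
    return intolerable

# Y tolerates X, Z tolerates Y: intolerability propagates to both.
print(sorted(intolerable_closure("X", {"Y": {"X"}, "Z": {"Y"}})))  # → ['X', 'Y', 'Z']
```

In other words, "Y acquires the intolerability of X" is a transitive closure: intolerability spreads along every chain of toleration, however long.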

Comment author: thomblake 24 March 2009 02:42:14PM *  2 points [-]

But none of those donors posted their agreement to the mailing list. Not one.

Couldn't you just ask contributors for the right to make their donations public?

Comment author: whynot 18 September 2009 04:39:52PM 1 point [-]

The Christian and other ethics often demand that the left hand not know what the right hand is doing. However, you can certainly indicate the sum of donations so far without violating anyone's privacy.

The commitment of those who do donate may be more inspiring than the excuses of those who do not.

Comment author: Davorak 05 February 2011 02:37:42AM 1 point [-]

An automated reply system could make a post with the donated amount and a unique anonymous user name. That way, people reading the counterarguments would see donations appearing between posts.

Comment author: RickJS 08 September 2009 06:18:43PM *  9 points [-]

BRAVO, Eliezer! Huzzah! It's about time!

I don't know if you have succeeded in becoming a full rationalist, but I know I haven't! I keep being surprised / appalled / amused at my own behavior. Intelligence is way overrated! Rationalism is my goal, but I'm built on evolved wet ware that is often in control. Sometimes my conscious, chooses-to-be-rationalist mind is found to be in the kiddy seat with the toy steering wheel.

I haven't been publicly talking about my contributions to the Singularity Institute and others fighting to save us from ourselves. Part of that originates in my father's attitude that it is improper to brag.

I now publicly announce that I have donated at least $11,000 to the Singularity Institute and its projects over the last year. I spend ~25 hours per week on saving humanity from Homo Sapiens.

I say that to invite others to JOIN IN. Give humanity a BIG term in your utility function. Extinction is Forever. Extinction is for ... us?

Thank you, Eliezer! Once again, you've shown me a blind spot, a bias, an area where I can now be less wrong than I was.

With respect and high regard,
Rick Schwall, Ph.D.
Saving Humanity from Homo Sapiens™ :-|

Comment author: Psy-Kosh 08 September 2009 06:45:02PM 2 points [-]

Cool!

Just am curious.. What do you do for 25 hours a week to save humanity from itself?

Comment author: RickJS 09 September 2009 04:56:38PM *  3 points [-]

Mostly, I study. I also go to a few conferences (I'll be at the Singularity Summit) and listen. I even occasionally speak on key issues (IMO), such as (please try thinking WITH these before attacking them. Try agreeing for at least a while.):

  • "There is no safety in assuring we have a power switch on a super-intelligence. That would be power at a whole new level. That's pretty much Absolute Power and would bring out the innate corruption / corruptibility / self-interest in just about anybody."
  • "We need Somebody to take the dangerous toys (arsenals) away."
  • "Just what is Humanity up to that requires 6 Billion individuals?"

<strikeout> All of that is IN MY OPINION. </strikeout> <-- OK, the comments to this post showed me the error of my ways. I'm leaving this here because comments refer to it.

Edited 07/14/2010 because I've learned since 2009-09 that I said a lot of nonsense.

Comment author: thomblake 09 September 2009 05:33:51PM 4 points [-]

IN MY OPINION

I'm not sure what this was supposed to add, especially with emphasis. Whose opinion would we think it is?

Comment author: RickJS 11 September 2009 12:01:38AM 0 points [-]

I've been told that my writing sounds preachy or even religious-fanatical. I do write a lot of propositions without saying "In my opinion" in front of each one. I do have a standard boilerplate that I aim to put at the beginning of each missive:

First, please read this caveat: Please do not accept anything I say as True.

Ever.

I do write a lot of propositions, without saying, "In My Opinion" before each one. It can sound preachy, like I think I've got the Absolute Truth, Without Error. I don't completely trust anything I have to say, and I suggest you don't, either.

Second, I invite you to listen (read) in an unusual way. "Consider it": think WITH this idea for a while. There will be plenty of time to refute it later. I find that, if I START with, "That's so wrong!", I really weaken my ability to "pan for the gold".

If you have a reaction (e.g. "That's WRONG!"), please gently save it aside for later. For just a while, please try on the concept, test drive it, use the idea in your life. Perhaps you'll see something even beyond what I offered.

There will be plenty of time to criticize, attack, and destroy it AFTER you've "panned for the gold". You won't be missing an opportunity.

Third, I want you to "get" what I offered. When you "get it", you have it. You can pick it up and use it, and you can put it down. You don't need to believe it or understand it to do that. Anything you BELIEVE is "glued to your hand"; you can't put it down.

-=-= END Boilerplate

In that post, I got lazy and just threw in the tag line at the end. My mistake. I apologize. I won't do that again.

With respect and high regard,
Rick Schwall
Saving Humanity from Homo Sapiens (playing the game to win, but not claiming I am the star of the team)

Comment author: Vladimir_Nesov 11 September 2009 08:34:32AM *  1 point [-]

This only makes it worse, because you can't excuse a signal. (See rationalization, signals are shallow).

Also: just because you believe you are not fanatical, doesn't mean you are not. People can be caught in affective death spirals even around correct beliefs.

Comment author: RickJS 11 September 2009 07:11:08PM 1 point [-]

Vladimir_Nesov wrote on 11 September 2009 08:34:32AM:

This only makes it worse, because you can't excuse a signal.

This only makes what worse? Does it make me sound more fanatical?

Please say more about "you can't excuse a signal". Did you mean I can't reverse the first impression the signal inspired in somebody's mind? Or something else?

Also: just because you believe you are not fanatical, doesn't mean you are not. People can be caught in affective death spirals even around correct beliefs.

OK, I'll start with a prior of 10% that I am fanatical and/or caught in an affective death spiral.

What do you recommend I do about my preachy style?

I appreciate your writings on LessWrong. I'm learning a lot.

Thank you for your time and attention.

With respect and high regard,
Rick Schwall, Ph.D.
Saving Humanity from Homo Sapiens (seizing responsibility, even if I NEVER get on the field)

Comment author: Wei_Dai 11 September 2009 07:25:08PM 6 points [-]

What do you recommend I do about my preachy style?

I suggest trying to determine your true confidence on each statement you write, and use the appropriate language to convey the amount of uncertainty you have about its truth.

If you receive feedback that indicates that your confidence (or apparent confidence) is calibrated too high or too low, then adjust your calibration. Don't just issue a blanket disclaimer like "All of that is IN MY OPINION."

Comment author: RickJS 19 September 2009 11:01:55PM 3 points [-]

OK.

Actually, I'm going to restrain myself to just clarifying questions while I try to learn the assumed, shared, no-need-to-mention-it body of knowledge you fellows share.

Thanks.

Comment author: Jack 09 September 2009 05:54:25PM *  3 points [-]

I can't help but think that those activities aren't going to do much to save humanity. I don't want to send you into an existential crisis or anything but maybe you should tune down your job description. "Saving Humanity from Homo Sapiens™" is maybe acceptable for Superman. It might be affably egotistical for someone who does preventive counter-terrorism re: experimental bioweapons. "Saving Humanity from Homo Sapiens one academic conference at a time" doesn't really do it for me.

Plus wishing for all people to be under the rule of a god-like totalitarian sounds to me like the best way to destroy humanity.

Comment author: RickJS 11 September 2009 06:06:05PM *  -1 points [-]

Jack wrote on 09 September 2009 05:54:25PM :

I can't help but think that those activities aren't going to do much to save humanity.

I hear that. I wasn't clear. I apologise.

I DON'T KNOW what I can do to turn humanity's course. And, I decline to be one more person who uses that as an excuse to go back to the television set. Those activities are part of my search for a place where I can make a difference.

"Saving Humanity from Homo Sapiens™" is maybe acceptable for Superman.

... but not acceptable from a mere man who cares, eh?

(Oh, all right, I admit, the ™ was tongue-in-cheek!)

Skip down to END BOILERPLATE if and only if you've read version v44m

First, please read this caveat: Please do not accept anything I say as True.

Ever.

I do write a lot of propositions, without saying, "In My Opinion" before each one. It can sound preachy, like I think I've got the Absolute Truth, Without Error. I don't completely trust anything I have to say, and I suggest you don't, either.

Second, I invite you to listen (read) in an unusual way. "Consider it": think WITH this idea for a while. There will be plenty of time to refute it later. I find that, if I START with, "That's so wrong!", I really weaken my ability to "pan for the gold".

If you have a reaction (e.g. "That's WRONG!"), please gently save it aside for later. For just a while, please try on the concept, test drive it, use the idea in your life. Perhaps you'll see something even beyond what I offered.

There will be plenty of time to criticize, attack, and destroy it AFTER you've "panned for the gold". You won't be missing an opportunity.

Third, I want you to "get" what I offered. When you "get it", you have it. You can pick it up and use it, and you can put it down. You don't need to believe it or understand it to do that. Anything you BELIEVE is "glued to your hand"; you can't put it down.

-=-= END BOILERPLATE version 44m

I think we may have different connotations. I'm going to reluctantly use an analogy, but it's just a temporary crutch. Please drop it as soon as you get how I'm using the word 'saving'.

If I said, "I'm playing football," I wouldn't be implying that I'm a one-man team, or that I'm the star, or that the team always loses when I'm not there. Rigorously, it only means that I'm playing football.

However, it is possible to play football for the camaraderie, or the exercise, or to look good, or to avoid losing. A person can play football to win. Regardless of the position played. It's about attitude, commitment, and responsibility SEIZED rather than reluctantly accepted.

I DECLARE that I am saving humanity from Homo Sapiens. That's a declaration, a promise, not a description subject to True / probability / False. I'm playing to win.

Maybe I'll never be allowed to get on the field. I remember the movie Rudy, about Dan Ruettiger. THAT is what it is to be playing football in the face of being a little guy. That points toward what it is to be Saving Humanity from Homo Sapiens in the face of no evidence and no agreement.

You could give me a low probability of ever making a difference. But before you do, ask yourself, "What will this cause?"

It occurs to me that this little sub-thread beginning with "Mostly, I study." illustrates what Eliezer was pointing out in "Why Our Kind Can't Cooperate".

  • "Some things are worth dying for. Yes, really! And if we can't get comfortable with admitting it and hearing others say it, then we're going to have trouble caring enough - as well as coordinating enough - to put some effort into group projects. You've got to teach both sides of it, "That which can be destroyed by the truth should be," and "That which the truth nourishes should thrive." "

You, too, can be Saving Humanity from Homo Sapiens. You start by saying so.

The clock is ticking.

With respect and high regard,
Rick Schwall, Ph.D.
Saving Humanity from Homo Sapiens (seizing responsibility, even if I NEVER get on the field)

Comment author: RickJS 11 September 2009 06:32:45PM *  -1 points [-]

Jack wrote on 09 September 2009 05:54:25PM:

Plus wishing for all people to be under the rule of a god-like totalitarian sounds to me like the best way to destroy humanity.

I don't wish for it. That part was inside parentheses with a question mark. I merely suspect it MAY be needed.

Please explain to me how the destruction follows from the rule of a god-like totalitarian.

Thank you for your time and attention.

With respect and high regard,
Rick Schwall, Ph.D.
Saving Humanity from Homo Sapiens (seizing responsibility, even if I NEVER get on the field)

Comment author: Jack 12 September 2009 06:47:46PM -1 points [-]

Maybe some Homo Sapiens would survive; humanity wouldn't. Are the human animals in 1984 "people"? After Winston Smith dies, is there any humanity left?

I can envision a time when less freedom and more authority is necessary for our survival. But a god-like totalitarian pretty much comes out where extinction does in my utility function.

Comment author: RickJS 19 September 2009 11:41:34PM *  1 point [-]

Oh. My mistake. When you wrote, "Plus wishing for all people to be under the rule of a god-like totalitarian sounds to me like the best way to destroy humanity.", I read:

  • [Totalitarian rule... ] ... [is] ... the best way to destroy humanity, (as in cause and effect.)
  • OR maybe you meant: wishing ... [is] ... the best way to destroy humanity

It just never occurred to me you meant, "a god-like totalitarian pretty much comes out where extinction does in my utility function".

Are you willing to consider that totalitarian rule by a machine might be a whole new thing, and quite unlike totalitarian rule by people?

Comment author: pdf23ds 23 September 2009 10:02:24AM 0 points [-]

IIRC, Winston Smith doesn't die; by the end, his spirit is completely broken and he's practically a living ghost, but alive.

Comment author: coolcortex 18 September 2009 10:37:53AM 1 point [-]

Eliezer, I applaud your post. Bravo. I agree.

I'm new to this site and I was compelled to sign up immediately.

There's not much to add here, but that I hope people appreciate the significance of not shutting off all emotions, much like you argue in this post.

Comment author: TheOtherDave 21 November 2010 09:31:24PM 18 points [-]

Two thoughts.

  1. In any relationship where I have influence, I expect to get more of what I model.

For example, in a community where I have influence, I expect demonstrating explicit support to push community norms towards explicit support, and demonstrating criticism to push norms towards criticism.

This creates the admittedly frustrating situation where, if a community is too critical and insufficiently supportive, it is counterproductive for me to criticize that. That just models criticism, which gets me more criticism; the more compelling and powerful my criticism, the more criticism I'll get in return.

If a community is too critical and insufficiently supportive, I do better to model agreement as visibly and as consistently as I can, and to avoid modeling criticism. For example, to criticize people privately and support them publicly.

  2. In any relationship where I have influence, I expect to get more of what I reward.

If a community is too critical and insufficiently supportive, I do well to be actively on the lookout for others' supportive contributions and to reward them (for example: by praising them, by calling other people's attention to them, and/or by paying attention to them myself). I similarly do well to withhold those rewards from critical contributions.

Comment author: Vaniver 21 November 2010 09:35:51PM 4 points [-]

Voted up. (Explicit support and rewards, ahoy!)

Comment author: timtyler 02 January 2011 05:15:24PM *  0 points [-]

organizing atheists has been compared to herding cats, because they tend to think independently and will not conform to authority - The God Delusion

Maybe - but they seem to work together well enough - if you pay them.

Comment author: shokwave 02 January 2011 05:25:16PM 3 points [-]

Whereas theists will pay tithes to be ordered around.

Comment author: timtyler 02 January 2011 05:48:46PM 0 points [-]

They war with other theists as well. Cooperation benefits from a shared mission.

Comment author: BenLowell 28 June 2011 11:18:27PM 1 point [-]

It makes me happy that the traits you list as what rationalists are usually thought to be (disagreeable, unemotional, cynical, loners) are unfamiliar. The rationalists I have grown up with over the past few years of reading this site are both optimistic and caring, along with many other qualities.

Comment author: novalis 26 September 2011 04:45:38PM *  4 points [-]

"[A] survey of 186 societies found, belief in a moralising God is indeed correlated with measures of group cohesion and size." - God as Cosmic CCTV, Dan Jones

Comment author: MoreOn 01 January 2012 07:51:39PM 11 points [-]

“If I agree, why should I bother saying it? Doesn’t my silence signal agreement enough?”

That’s been my non-verbal reasoning for years now! Not just here: everywhere. People have been telling me, with various degrees of success, that I never even speak except to argue. To those who have been successful in getting through to me, I would respond with, “Maybe it sounds like I’m arguing, but you’re WRONG. I’m not arguing!”

Until I read this post, I wasn’t even aware that I was doing it. Yikes!

Comment author: Omegaile 22 January 2012 08:32:40PM 4 points [-]

“If I agree, why should I bother saying it? Doesn’t my silence signal agreement enough?”

The fact is that there is a strong motive to disagree: either I change my opinion, or you do.

On the other hand, the motives for agreeing are much more subtle: there is an ego boost, and I can influence other people to conform. Unless I am a very influential person, these two reasons are important in the aggregate, but not much individually.

Which leads us to think: there is a similar problem with elections, and it is why economists don't vote.

Anyway, there is a nice analogy with physics: the electromagnetic force is much stronger than the gravitational force, but at large scales gravity is much more influential. (Which is kind of obvious, and made me wonder why no one pointed this out on this post before.)

Comment author: MagnetoHydroDynamics 23 January 2012 08:08:46PM 1 point [-]

This is very interesting; I have usually refrained from replying because I could not think of anything to say that wasn't trivial. I will take care to voice agreement in the future where applicable.

Comment author: amitpamin 18 June 2012 10:31:18PM 1 point [-]

Wow. I don't identify as a cynic or a Spock, but of the many articles I have read on Less Wrong since I discovered it yesterday, this one is perhaps the most perspective-changing.

Comment author: Epiphany 03 September 2012 07:36:19PM 0 points [-]

An alternate explanation: I've noticed a trend where rationalists seem more likely to criticize ideas in general. Perhaps a key experience that needs to happen before some people choose to undergo the rigors of becoming a rationalist is a "waking up" after some trauma that makes them err on the side of being paranoid. I have observed that most people without a "wake up" trauma prefer to simply retain optimism bias and tend to conserve thinking resources for other uses. Someone who thinks as much as you do probably does not feel a need to conserve thinking resources, and probably finds this concept ridiculous, but for most people, stamina for how much thinking they can do in a day is a factor - sad, but true. So, a trauma might be needed to make skepticism appeal to people. It may be that rational thought is often implemented as a defense mechanism and this leads them to create strong habits of doing rational thought in ways that tear ideas down without doing a comparable amount of practice in confirming ideas.

In my opinion, I think the solution to this would be to assist them in reaching a point of satiation when it comes to being great at tearing ideas down. If it's a self-defense mechanism, no amount of brilliant rational appeals will make them give it up. Even if one starts by explaining the risks of tearing ideas down too much, that's only confusing to the self-defense system, people won't know what to do with the cognitive dissonance that causes, so they're likely to reject it. If they feel secure because of a high level of ability with tearing ideas down, they'll probably be more open to seeing the limitations to that and doing more practice with methods of confirming ideas.

Comment author: TraderJoe 21 November 2012 07:58:28PM 2 points [-]

On the other hand, if you are only half-a-rationalist, you can easily do worse with more knowledge. I recall a lovely experiment which showed that politically opinionated students with more knowledge of the issues reacted less to incongruent evidence, because they had more ammunition with which to counter-argue only incongruent evidence.

What exactly is the problem with this? The more knowledge I have, the smaller a weighting I place on any new piece of data.

Comment author: 395b78 16 December 2012 06:02:51PM *  3 points [-]

I completely agree with this post. It's heartwarmingly and mindnumbingly agreeable; I would like to praise it and applaud it forever and ever. On a more serious note, it personally feels like you're not contributing anything to the conversation if you're just agreeing. For example, if I read 100 posts here, I don't feel compelled to add a comment saying just "I agree" to each of them, because it feels like it doesn't add to the substance of the issue. So I'm doing exactly what the post predicts.

I have really read a hundred or so posts, and I think the majority of them are brilliant. To be honest, of the posts by Eliezer in particular that I have read, I don't think any were really bad. I think they're great. I'm not even stretching it very far when I say that they've changed my outlook on life.

Personally, I truly hope that whoever comes up with the first functional AIs has concern for the future of humanity, takes the time and trouble to ponder moral issues, and is responsible about it in general. In fact, I believe the world would be a little better place if more of our leaders and political decision makers demonstrated similar interests: if, for example, they could sit down every now and then and contemplate the meaning of altruism or caring for one another, or would stop by and read a post on this website.

So this seems like the perfect post to just agree with, and to add the following suggestion to the conversation: if it feels wrong to simply agree with something, even if you really do agree, try to find a way to voice that agreement while also making a contribution, additional detail, or insight. :)

Awesome posts!

Comment author: dspeyer 10 January 2013 08:33:15AM 0 points [-]

I wonder if one person can have a big effect on this sort of thing.

For example, I've known charity organizers to publish the number of donors and the total money donated every few days. Even without identifying donors, that does a lot to make people feel less alone.