Less Wrong is a community blog devoted to refining the art of human rationality.

Thoughts on the Singularity Institute (SI)

252 HoldenKarnofsky 11 May 2012 04:31AM

This post presents thoughts on the Singularity Institute from Holden Karnofsky, Co-Executive Director of GiveWell. Note: Luke Muehlhauser, the Executive Director of the Singularity Institute, reviewed a draft of this post, and commented: "I do generally agree that your complaints are either correct (especially re: past organizational competence) or incorrect but not addressed by SI in clear argumentative writing (this includes the part on 'tool' AI). I am working to address both categories of issues." I take Luke's comment to be a significant mark in SI's favor, because it indicates an explicit recognition of the problems I raise, and thus increases my estimate of the likelihood that SI will work to address them.

September 2012 update: responses have been posted by Luke and Eliezer (and I have responded in the comments of their posts). I have also added acknowledgements.

The Singularity Institute (SI) is a charity that GiveWell has been repeatedly asked to evaluate. In the past, SI has been outside our scope (as we were focused on specific areas such as international aid). With GiveWell Labs we are open to any giving opportunity, no matter what form and what sector, but we still do not currently plan to recommend SI; given the amount of interest some of our audience has expressed, I feel it is important to explain why. Our views, of course, remain open to change. (Note: I am posting this only to Less Wrong, not to the GiveWell Blog, because I believe that everyone who would be interested in this post will see it here.)

I am currently the GiveWell staff member who has put the most time and effort into engaging with and evaluating SI. Other GiveWell staff currently agree with my bottom-line view that we should not recommend SI, but this does not mean they have engaged with each of my specific arguments. Therefore, while the lack of recommendation of SI is something that GiveWell stands behind, the specific arguments in this post should be attributed only to me, not to GiveWell.

Summary of my views

  • The argument advanced by SI for why the work it's doing is beneficial and important seems both wrong and poorly argued to me. My sense at the moment is that the arguments SI is making would, if accepted, increase rather than decrease the risk of an AI-related catastrophe. More
  • SI has, or has had, multiple properties that I associate with ineffective organizations, and I do not see any specific evidence that its personnel/organization are well-suited to the tasks it has set for itself. More
  • A common argument for giving to SI is that "even an infinitesimal chance that it is right" would be sufficient given the stakes. I have written previously about why I reject this reasoning; in addition, prominent SI representatives seem to reject this particular argument as well (i.e., they believe that one should support SI only if one believes it is a strong organization making strong arguments). More
  • My sense is that at this point, given SI's current financial state, withholding funds from SI is likely better for its mission than donating to it. (I would not take this view to the furthest extreme; the argument that SI should have some funding seems stronger to me than the argument that it should have as much as it currently has.)
  • I find existential risk reduction to be a fairly promising area for philanthropy, and plan to investigate it further. More
  • There are many things that could happen that would cause me to revise my view on SI. However, I do not plan to respond to all comment responses to this post. (Given the volume of responses we may receive, I may not be able to even read all the comments on this post.) I do not believe these two statements are inconsistent, and I lay out paths for getting me to change my mind that are likely to work better than posting comments. (Of course I encourage people to post comments; I'm just noting in advance that this action, alone, doesn't guarantee that I will consider your argument.) More

Intent of this post

I did not write this post with the purpose of "hurting" SI. Rather, I wrote it in the hopes that one of these three things (or some combination) will happen:

  1. New arguments are raised that cause me to change my mind and recognize SI as an outstanding giving opportunity. If this happens I will likely attempt to raise more money for SI (most likely by discussing it with other GiveWell staff and collectively considering a GiveWell Labs recommendation).
  2. SI concedes that my objections are valid and increases its determination to address them. A few years from now, SI is a better organization and more effective in its mission.
  3. SI can't or won't make changes, and SI's supporters feel my objections are valid, so SI loses some support, freeing up resources for other approaches to doing good.

Which one of these occurs will hopefully be driven primarily by the merits of the different arguments raised. Because of this, I think that whatever happens as a result of my post will be positive for SI's mission, whether or not it is positive for SI as an organization. I believe that most of SI's supporters and advocates care more about the former than about the latter, and that this attitude is far too rare in the nonprofit world.

Generalizing From One Example

245 Yvain 28 April 2009 10:00PM

Related to: The Psychological Unity of Humankind, Instrumental vs. Epistemic: A Bardic Perspective

"Everyone generalizes from one example. At least, I do."

   -- Vlad Taltos (Issola, Steven Brust)

My old professor, David Berman, liked to talk about what he called the "typical mind fallacy", which he illustrated through the following example:

There was a debate, in the late 1800s, about whether "imagination" was simply a turn of phrase or a real phenomenon. That is, can people actually create images in their minds which they see vividly, or do they simply say "I saw it in my mind" as a metaphor for considering what it looked like?

Upon hearing this, my response was "How the stars was this actually a real debate? Of course we have mental imagery. Anyone who doesn't think we have mental imagery is either such a fanatical Behaviorist that she doubts the evidence of her own senses, or simply insane." Unfortunately, the professor was able to parade a long list of famous people who denied mental imagery, including some leading scientists of the era. And this was all before Behaviorism even existed.

The debate was resolved by Francis Galton, a fascinating man who among other achievements invented eugenics, the "wisdom of crowds", and standard deviation. Galton gave people some very detailed surveys, and found that some people did have mental imagery and others didn't. The ones who did had simply assumed everyone did, and the ones who didn't had simply assumed everyone didn't, to the point of coming up with absurd justifications for why they were lying or misunderstanding the question. There was a wide spectrum of imaging ability, from about five percent of people with perfect eidetic imagery[1] to three percent of people completely unable to form mental images[2].

Dr. Berman dubbed this the Typical Mind Fallacy: the human tendency to believe that one's own mental structure can be generalized to apply to everyone else's.

Diseased thinking: dissolving questions about disease

224 Yvain 30 May 2010 09:16PM

Related to: Disguised Queries, Words as Hidden Inferences, Dissolving the Question, Eight Short Studies on Excuses

Today's therapeutic ethos, which celebrates curing and disparages judging, expresses the liberal disposition to assume that crime and other problematic behaviors reflect social or biological causation. While this absolves the individual of responsibility, it also strips the individual of personhood and moral dignity.

             -- George Will, townhall.com

Sandy is a morbidly obese woman looking for advice.

Her husband has no sympathy for her, and tells her she obviously needs to stop eating like a pig, and would it kill her to go to the gym once in a while?

Her doctor tells her that obesity is primarily genetic, and recommends the diet pill orlistat and a consultation with a surgeon about gastric bypass.

Her sister tells her that obesity is a perfectly valid lifestyle choice, and that fat-ism, equivalent to racism, is society's way of keeping her down.

When she tells each of her friends about the opinions of the others, things really start to heat up.

Her husband accuses her doctor and sister of absolving her of personal responsibility with feel-good platitudes that in the end will only prevent her from getting the willpower she needs to start a real diet.

Her doctor accuses her husband of ignorance of the real causes of obesity and of the most effective treatments, and accuses her sister of legitimizing a dangerous health risk that could end with Sandy in hospital or even dead.

Her sister accuses her husband of being a jerk, and her doctor of trying to medicalize her behavior in order to turn it into a "condition" that will keep her on pills for life and make lots of money for Big Pharma.

Sandy is fictional, but similar conversations happen every day, not only about obesity but about a host of other marginal conditions that some consider character flaws, others diseases, and still others normal variation in the human condition. Attention deficit disorder, internet addiction, social anxiety disorder (as one skeptic said, didn't we used to call this "shyness"?), alcoholism, chronic fatigue, oppositional defiant disorder ("didn't we used to call this being a teenager?"), compulsive gambling, homosexuality, Asperger's syndrome, antisocial personality, even depression have all been placed in two or more of these categories by different people.

Sandy's sister may have a point, but this post will concentrate on the debate between her husband and her doctor, with the understanding that the same techniques will apply to evaluating her sister's opinion. The disagreement between Sandy's husband and doctor centers around the idea of "disease". If obesity, depression, alcoholism, and the like are diseases, most people default to the doctor's point of view; if they are not diseases, they tend to agree with the husband.

The debate over such marginal conditions is in many ways a debate over whether or not they are "real" diseases. The usual surface level arguments trotted out in favor of or against the proposition are generally inconclusive, but this post will apply a host of techniques previously discussed on Less Wrong to illuminate the issue.

Reason as memetic immune disorder

207 PhilGoetz 19 September 2009 09:05PM

A prophet is without dishonor in his hometown

I'm reading the book "The Year of Living Biblically," by A.J. Jacobs.  He tried to follow all of the commandments in the Bible (Old and New Testaments) for one year.  He quickly found that

  • a lot of the rules in the Bible are impossible, illegal, or embarrassing to follow nowadays, such as wearing tassels, tying your money to yourself, stoning adulterers, not eating fruit from a tree less than 5 years old, and not touching anything that a menstruating woman has touched; and
  • this didn't seem to bother more than a handful of the one-third to one-half of Americans who claim the Bible is the word of God.

You may have noticed that people who convert to religion after the age of 20 or so are generally more zealous than people who grew up with the same religion.  People who grow up with a religion learn how to cope with its more inconvenient parts by partitioning them off, rationalizing them away, or forgetting about them.  Religious communities actually protect their members from religion in one sense - they develop an unspoken consensus on which parts of their religion members can legitimately ignore.  New converts sometimes try to actually do what their religion tells them to do.

I remember many times growing up when missionaries described the crazy things their new converts in remote areas did on reading the Bible for the first time - they refused to be taught by female missionaries; they insisted on following Old Testament commandments; they decided that everyone in the village had to confess all of their sins against everyone else in the village; they prayed to God and assumed He would do what they asked; they believed the Christian God would cure their diseases.  We would always laugh a little at the naivete of these new converts; I could barely hear the tiny voice in my head saying but they're just believing that the Bible means what it says...

How do we explain the blindness of people to a religion they grew up with?

Eight Short Studies On Excuses

197 Yvain 20 April 2010 11:01PM

The Clumsy Game-Player

You and a partner are playing an Iterated Prisoner's Dilemma. Both of you have publicly pre-committed to the tit-for-tat strategy. By iteration 5, you're going happily along, raking in the bonuses of cooperation, when your partner unexpectedly presses the "defect" button.

"Uh, sorry," says your partner. "My finger slipped."

"I still have to punish you just in case," you say. "I'm going to defect next turn, and we'll see how you like it."

"Well," said your partner, "knowing that, I guess I'll defect next turn too, and we'll both lose out. But hey, it was just a slipped finger. By not trusting me, you're costing us both the benefits of one turn of cooperation."

"True", you respond "but if I don't do it, you'll feel free to defect whenever you feel like it, using the 'finger slipped' excuse."

"How about this?" proposes your partner. "I promise to take extra care that my finger won't slip again. You promise that if my finger does slip again, you will punish me terribly, defecting for a bunch of turns. That way, we trust each other again, and we can still get the benefits of cooperation next turn."

You don't believe that your partner's finger really slipped, not for an instant. But the plan still seems like a good one. You accept the deal, and you continue cooperating until the experimenter ends the game.

After the game, you wonder what went wrong, and whether you could have played better. You decide that there was no better way to deal with your partner's "finger-slip" - after all, the plan you enacted gave you maximum possible utility under the circumstances. But you wish that you'd pre-committed, at the beginning, to saying "and I will punish finger slips equally to deliberate defections, so make sure you're careful."
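The game in this vignette is easy to make concrete. Below is a minimal sketch (mine, not from the post; the payoff numbers are conventional Prisoner's Dilemma values chosen by assumption) of why a single "slipped finger" is so expensive for two tit-for-tat players: each retaliation gets echoed back, and the pair can squander the cooperation bonus for many turns afterward.

```python
import random

# Conventional Prisoner's Dilemma payoffs (an assumption; the story gives none).
# Each entry maps (player_a_move, player_b_move) to (a_score, b_score).
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat_with_slips(rounds=200, slip_prob=0.02, seed=1):
    """Both players run tit-for-tat, but an intended cooperation 'slips'
    into a defection with probability slip_prob. One slip sets off a chain
    of alternating retaliation -- the failure mode the dialogue's deal
    (forgive this slip, punish repeats harshly) is meant to patch."""
    rng = random.Random(seed)
    last_a, last_b = "C", "C"                  # moves actually played last round
    score_a = score_b = 0
    for _ in range(rounds):
        intend_a, intend_b = last_b, last_a    # tit-for-tat: echo the opponent
        act_a = "D" if intend_a == "C" and rng.random() < slip_prob else intend_a
        act_b = "D" if intend_b == "C" and rng.random() < slip_prob else intend_b
        pa, pb = PAYOFF[(act_a, act_b)]
        score_a += pa
        score_b += pb
        last_a, last_b = act_a, act_b
    return score_a, score_b

print(tit_for_tat_with_slips(slip_prob=0.0))   # no slips: full cooperation payoff
print(tit_for_tat_with_slips(slip_prob=0.02))  # slips trigger retaliation chains
```

With no slips the pair banks the full cooperation payoff every round; even a 2% slip rate costs them a large fraction of it, which is roughly why a repair deal like the one in the dialogue is worth accepting.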

The Neglected Virtue of Scholarship

171 lukeprog 05 January 2011 07:22AM

Eliezer Yudkowsky identifies scholarship as one of the Twelve Virtues of Rationality:

Study many sciences and absorb their power as your own. Each field that you consume makes you larger... It is especially important to eat math and science which impinges upon rationality: Evolutionary psychology, heuristics and biases, social psychology, probability theory, decision theory. But these cannot be the only fields you study...

I think he's right, and I think scholarship doesn't get enough praise - even on Less Wrong, where it is regularly encouraged.

First, consider the evangelical atheist community to which I belong. There is a tendency for lay atheists to write "refutations" of theism without first doing a modicum of research on the current state of the arguments. This can get atheists into trouble when they go toe-to-toe with a theist who did do his homework. I'll share two examples:

  • In a debate with theist Bill Craig, agnostic Bart Ehrman paraphrased David Hume's argument that we can't demonstrate the occurrence of a miracle in the past. Craig responded with a PowerPoint slide showing Bayes' Theorem, and explained that Ehrman was only considering prior probabilities, when of course he needed to consider the relevant conditional probabilities as well. Ehrman failed to respond to this, and looked as though he had never seen Bayes' Theorem before. Had Ehrman practiced the virtue of scholarship on this issue, he might have noticed that much of the scholarly work on Hume's argument in the past two decades has involved Bayes' Theorem. He might also have discovered that the correct response to Craig's use of Bayes' Theorem can be found in pages 298-341 of J.H. Sobel’s Logic and Theism. (A toy version of the Bayesian point at issue is sketched just after this list.)

  • In another debate with Bill Craig, atheist Christopher Hitchens gave this objection: "Who designed the Designer? Don’t you run the risk… of asking 'Well, where does that come from? And where does that come from?' and running into an infinite regress?" But this is an elementary misunderstanding in philosophy of science. Why? Because every successful scientific explanation faces the exact same problem. It’s called the “why regress” because no matter what explanation is given of something, you can always still ask “Why?” Craig pointed this out and handily won that part of the debate. Had Hitchens had a passing understanding of science or explanation, he could have avoided looking foolish, and also spent more time on substantive objections to theism. (One can give a "Who made God?" objection to theism that has some meat, but that's not the one Hitchens gave. Hitchens' objection concerned an infinite regress of explanations, which is just as much a feature of science as it is of theism.)
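The Bayesian point in the Ehrman example can be made concrete. Here is a minimal worked version (mine, not Craig's or Sobel's; every number below is an illustrative assumption): a tiny prior for a miracle does not by itself settle the question, because the posterior also depends on how much likelier the testimony would be if the miracle had actually occurred.

```python
def posterior(prior, p_e_given_h, p_e_given_not_h):
    """Bayes' Theorem: P(H|E) = P(E|H) * P(H) / P(E),
    with P(E) expanded by the law of total probability."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# Illustrative assumptions: a one-in-a-million prior for the miracle claim,
# and testimony a thousand times likelier if the miracle really happened.
print(posterior(prior=1e-6, p_e_given_h=1e-1, p_e_given_not_h=1e-4))
# ~0.001: still improbable, but a thousandfold update over the prior --
# which is why reasoning from the prior alone is incomplete.
```

Nothing here says the posterior ends up high; it says the shape of the calculation matters, which is exactly the scholarship point.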

The lesson I take from these and a hundred other examples is to employ the rationality virtue of scholarship. Stand on the shoulders of giants. We don't each need to cut our own path into a subject right from the point of near-total ignorance. That's silly. Just catch the bus on the road of knowledge paved by hundreds of diligent workers before you, and get off somewhere near where the road finally fades into fresh jungle. Study enough to have a view of the current state of the debate so you don't waste your time on paths that have already dead-ended, or on arguments that have already been refuted. Catch up before you speak up.

This is why, in more than 1000 posts on my own blog, I've said almost nothing that is original. Most of my posts instead summarize what other experts have said, in an effort to bring myself and my readers up to the level of the current debate on a subject before we try to make new contributions to it.

The Less Wrong community is a particularly smart and well-read bunch, but of course it doesn't always embrace the virtue of scholarship.

Consider the field of formal epistemology, an entire branch of philosophy devoted to (1) mathematically formalizing concepts related to induction, belief, choice, and action, and (2) arguing about the foundations of probability, statistics, game theory, decision theory, and algorithmic learning theory. These are central discussion topics at Less Wrong, and yet my own experience suggests that most Less Wrong readers have never heard of the entire field, let alone read any works by formal epistemologists, such as In Defense of Objective Bayesianism by Jon Williamson or Bayesian Epistemology by Luc Bovens and Stephan Hartmann.

Dying Outside

168 HalFinney 05 October 2009 02:45AM

A man goes in to see his doctor, and after some tests, the doctor says, "I'm sorry, but you have a fatal disease."

Man: "That's terrible! How long have I got?"

Doctor: "Ten."

Man: "Ten? What kind of answer is that? Ten months? Ten years? Ten what?"

The doctor looks at his watch. "Nine."

Recently I received some bad medical news (although not as bad as in the joke). Unfortunately I have been diagnosed with a fatal disease, Amyotrophic Lateral Sclerosis or ALS, sometimes called Lou Gehrig's disease. ALS causes nerve damage, progressive muscle weakness and paralysis, and ultimately death. Patients lose the ability to talk, walk, move, eventually even to breathe, which is usually the end of life. This process generally takes about 2 to 5 years.

There are however two bright spots in this picture. The first is that ALS normally does not affect higher brain functions. I will retain my abilities to think and reason as usual. Even as my body is dying outside, I will remain alive inside.

The second relates to survival. Although ALS is generally described as a fatal disease, this is not quite true. It is only mostly fatal. When breathing begins to fail, ALS patients must make a choice. They have the option either to go onto invasive mechanical respiration, which involves a tracheotomy and a breathing machine, or to die in comfort. I was very surprised to learn that over 90% of ALS patients choose to die. And even among those who choose life, for the great majority this is an emergency decision made in the hospital during a medical respiratory crisis. In a few cases the patient will have made his wishes known in advance, but most of the time the procedure is done as part of the medical management of the situation, and then the ALS patient either lives with it or asks to have the machine disconnected so he can die. Probably fewer than 1% of ALS patients arrange to go onto ventilation when they are still in relatively good health, even though this provides the best odds for a successful transition.

Attention control is critical for changing/increasing/altering motivation

167 kalla724 11 April 2012 12:48AM

I’ve just been reading Luke’s “Crash Course in the Neuroscience of Human Motivation.” It is a useful text, although there are a few technical errors and a few bits of outdated information (see [1], updated information about one particular quibble in [2] and [3]).

There is one significant missing piece, however, which is of critical importance for our subject matter here on LW: the effect of attention on plasticity, including the plasticity of motivation. Since I don’t see any other texts addressing it directly (certainly not from a neuroscientific perspective), let’s cover the main idea here.

Summary for impatient readers: focus of attention physically determines which synapses in your brain get stronger, and which areas of your cortex physically grow in size. The implications of this provide direct guidance for altering behaviors and motivational patterns, and the mechanism is already exploited extensively for exactly that purpose: many of the benefits of the Cognitive-Behavioral Therapy approach, for instance, rely on it.

Cached Selves

166 AnnaSalamon 22 March 2009 07:34PM

by Anna Salamon and Steve Rayhawk (joint authorship)

Related to: Beware identity

A few days ago, Yvain introduced us to priming, the effect where, in Yvain’s words, "any random thing that happens to you can hijack your judgment and personality for the next few minutes."

Today, I’d like to discuss a related effect from the social psychology and marketing literatures: “commitment and consistency effects”, whereby any random thing you say or do in the absence of obvious outside pressure can hijack your self-concept for the medium- to long-term future.

To sum up the principle briefly: your brain builds you up a self-image. You are the kind of person who says, and does... whatever it is your brain remembers you saying and doing.  So if you say you believe X... especially if no one’s holding a gun to your head, and it looks superficially as though you endorsed X “by choice”... you’re liable to “go on” believing X afterwards.  Even if you said X because you were lying, or because a salesperson tricked you into it, or because your neurons and the wind just happened to push in that direction at that moment.

For example, if I hang out with a bunch of Green Sky-ers, and I make small remarks that accord with the Green Sky position so that they’ll like me, I’m liable to end up a Green Sky-er myself.  If my friends ask me what I think of their poetry, or their rationality, or of how they look in that dress, and I choose my words slightly on the positive side, I’m liable to end up with a falsely positive view of my friends.  If I get promoted, and I start telling my employees that of course rule-following is for the best (because I want them to follow my rules), I’m liable to start believing in rule-following in general.

All familiar phenomena, right?  You probably already discount other peoples’ views of their friends, and you probably already know that other people mostly stay stuck in their own bad initial ideas.  But if you’re like me, you might not have looked carefully into the mechanisms behind these phenomena.  And so you might not realize how much arbitrary influence consistency and commitment is having on your own beliefs, or how you can reduce that influence.  (Commitment and consistency isn’t the only mechanism behind the above phenomena; but it is a mechanism, and it’s one that’s more likely to persist even after you decide to value truth.)

Schelling fences on slippery slopes

164 Yvain 16 March 2012 11:44PM

Slippery slopes are themselves a slippery concept. Imagine trying to explain them to an alien:

"Well, we right-thinking people are quite sure that the Holocaust happened, so banning Holocaust denial would shut up some crackpots and improve the discourse. But it's one step on the road to things like banning unpopular political positions or religions, and we right-thinking people oppose that, so we won't ban Holocaust denial."

And the alien might well respond: "But you could just ban Holocaust denial, but not ban unpopular political positions or religions. Then you right-thinking people get the thing you want, but not the thing you don't want."

This post is about some of the replies you might give the alien.

Abandoning the Power of Choice

This is the boring one without any philosophical insight that gets mentioned only for completeness' sake. In this reply, giving up a certain point risks losing the ability to decide whether or not to give up other points.

For example, if people gave up the right to privacy and allowed the government to monitor all phone calls, online communications, and public places, then if someone launched a military coup, it would be very difficult to resist them because there would be no way to secretly organize a rebellion. This is also brought up in arguments about gun control a lot.

I'm not sure this is properly thought of as a slippery slope argument at all. It seems to be a more straightforward "Don't give up useful tools for fighting tyranny" argument.

The Legend of Murder-Gandhi

Previously on Less Wrong's The Adventures of Murder-Gandhi: Gandhi is offered a pill that will turn him into an unstoppable murderer. He refuses to take it, because in his current incarnation as a pacifist, he doesn't want others to die, and he knows that would be a consequence of taking the pill. Even if we offered him $1 million to take the pill, his abhorrence of violence would lead him to refuse.

But suppose we offered Gandhi $1 million to take a different pill: one which would decrease his reluctance to murder by 1%. This sounds like a pretty good deal. Even a person with 1% less reluctance to murder than Gandhi is still pretty pacifist and not likely to go killing anybody. And he could donate the money to his favorite charity and perhaps save some lives. Gandhi accepts the offer.

Now we iterate the process: every time Gandhi takes the 1%-more-likely-to-murder-pill, we offer him another $1 million to take the same pill again.

Maybe original Gandhi, upon sober contemplation, would decide to accept $5 million to become 5% less reluctant to murder. Maybe 95% of his original pacifism is the lowest level at which he can be absolutely sure that he will still pursue his pacifist ideals.

Unfortunately, original Gandhi isn't the one making the choice of whether or not to take the 6th pill. 95%-Gandhi is. And 95%-Gandhi doesn't care quite as much about pacifism as original Gandhi did. He still doesn't want to become a murderer, but it wouldn't be a disaster if he were just 90% as reluctant as original Gandhi, that stuck-up goody-goody.

What if there were a general principle that each Gandhi was comfortable with Gandhis 5% more murderous than himself, but no more? Original Gandhi would take the first five pills, hoping to stop at 95%, but then 95%-Gandhi would take five more, hoping to get down to 90%, and so on until he's rampaging through the streets of Delhi, killing everything in sight.
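That runaway dynamic is mechanical enough to simulate. Here is a minimal sketch (mine, not from the post; the parameters are arbitrary) of why the slide never halts on its own, and of how a credible precommitment "fence" (anticipating the Schelling point discussed below) changes the outcome:

```python
def take_pills(start=100, step=1, tolerance=5, fence=None):
    """Pacifism starts at `start` and each pill knocks off `step` points.
    The Gandhi *currently* deciding accepts any pill that leaves him within
    `tolerance` points of himself -- and a single pill always does, so the
    slide never stops by itself. A `fence` models a credible precommitment:
    no pills past that line, no matter which Gandhi is choosing."""
    pacifism, pills = start, 0
    while pacifism - step >= 0:
        if step > tolerance:                      # one pill would exceed his
            break                                 # comfort zone (never true: 1 <= 5)
        if fence is not None and pacifism - step < fence:
            break                                 # the mighty oath holds
        pacifism -= step
        pills += 1
    return pills, pacifism

print(take_pills())          # (100, 0): rampaging through the streets of Delhi
print(take_pills(fence=95))  # (5, 95): the fence stops the slide at $5 million
```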

Now we're tempted to say Gandhi shouldn't even take the first pill. But this also seems odd. Are we really saying Gandhi shouldn't take what's basically a free million dollars to turn himself into 99%-Gandhi, who might well be nearly indistinguishable in his actions from the original?

Maybe Gandhi's best option is to "fence off" an area of the slippery slope by establishing a Schelling point - an arbitrary point that takes on special value as a dividing line. If he can hold himself to the precommitment, he can maximize his winnings. For example, original Gandhi could swear a mighty oath to take only five pills - or if he didn't trust even his own legendary virtue, he could give all his most valuable possessions to a friend and tell the friend to destroy them if he took more than five pills. This would commit his future self to stick to the 95% boundary (even though that future self is itching to try the same precommitment strategy to stick to its own 90% boundary).

Real slippery slopes will resemble this example if, each time we change the rules, we also end up changing our opinion about how the rules should be changed. For example, I think the Catholic Church may be working off a theory of "If we give up this traditional practice, people will lose respect for tradition and want to give up even more traditional practices, and so on."

Slippery Hyperbolic Discounting

One evening, I start playing Sid Meier's Civilization (IV, if you're wondering - V is terrible). I have work tomorrow, so I want to stop and go to sleep by midnight.

At midnight, I consider my alternatives. For the moment, I feel an urge to keep playing Civilization. But I know I'll be miserable tomorrow if I haven't gotten enough sleep. Being a hyperbolic discounter, I value the next ten minutes a lot, but after that the curve becomes pretty flat and maybe I don't value 12:20 much more than I value the next morning at work. Ten minutes' sleep here or there doesn't make any difference. So I say: "I will play Civilization for ten minutes - 'just one more turn' - and then I will go to bed."

Time passes. It is now 12:10. Still being a hyperbolic discounter, I value the next ten minutes a lot, and subsequent times much less. And so I say: I will play until 12:20, ten minutes' sleep here or there not making much difference, and then sleep.

And so on until my empire bestrides the globe and the rising sun peeps through my windows.

This is pretty much the same process described above with Murder-Gandhi except that here the role of the value-changing pill is played by time and my own tendency to discount hyperbolically.

The solution is the same. If I consider the problem early in the evening, I can precommit to midnight as a nice round number that makes a good Schelling point. Then, when deciding whether or not to play after midnight, I can treat my decision not as "Midnight or 12:10" - because 12:10 will always win that particular race - but as "Midnight or abandoning the only credible Schelling point and probably playing all night", which will be sufficient to scare me into turning off the computer.
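Hyperbolic discounting is the one piece of machinery here with a standard formula: the present value of a payoff of size A felt D units of time away is roughly V = A / (1 + kD). The sketch below (all magnitudes are my illustrative assumptions, not the post's) shows both the preference reversal and why the reframed comparison works:

```python
def v(amount, delay_hours, k=1.0):
    """Hyperbolic discounting: present value of `amount` felt `delay_hours` away."""
    return amount / (1 + k * delay_hours)

# Illustrative magnitudes (assumptions; the post gives no numbers):
PLAY = 10             # fun from the next ten minutes of Civilization
TEN_MIN_SLEEP = 2     # morning cost of losing only ten minutes of sleep
ALL_NIGHT = 120       # morning cost of playing until sunrise
MORNING = 8           # hours from midnight to the workday

# Planning at 10 p.m., with everything still hours away, stopping looks right:
print(v(PLAY, 2.0), "<", v(ALL_NIGHT, MORNING + 2))   # 3.3 < 10.9

# At midnight, framed as "midnight or 12:10", 12:10 always wins that race:
print(v(PLAY, 0.0), ">", v(TEN_MIN_SLEEP, MORNING))   # 10.0 > 0.2

# Framed instead as "midnight or abandoning the Schelling point and playing
# all night", the immediate ten minutes no longer wins:
print(v(PLAY, 0.0), "<", v(ALL_NIGHT, MORNING))       # 10.0 < 13.3
```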

(If I consider the problem at 12:01, I may be able to precommit to 12:10 if I am especially good at precommitments, but it's not a very natural Schelling point and it might be easier to say something like "as soon as I finish this turn" or "as soon as I discover this technology".)

Coalitions of Resistance

Suppose you are a Zoroastrian, along with 1% of the population. In fact, along with Zoroastrianism your country has fifty other small religions, each with 1% of the population. 49% of your countrymen are atheist, and hate religion with a passion.

You hear that the government is considering banning the Taoists, who comprise 1% of the population. You've never liked the Taoists, vile doubters of the light of Ahura Mazda that they are, so you go along with this. When you hear the government wants to ban the Sikhs and Jains, you take the same tack.

But now you are in the unfortunate situation described by Martin Niemöller:

First they came for the socialists, and I did not speak out, because I was not a socialist.
Then they came for the trade unionists, and I did not speak out, because I was not a trade unionist.
Then they came for the Jews, and I did not speak out, because I was not a Jew.
Then they came for me, but we had already abandoned the only defensible Schelling point.

With the banned Taoists, Sikhs, and Jains no longer invested in the outcome, the 49% atheist population has enough clout to ban Zoroastrianism and anyone else they want to ban. The better strategy would have been to have all fifty-one small religions form a coalition to defend one another's right to exist. In this toy model, they could have done so in an ecumenical congress, or some other literal strategy meeting.

But in the real world, there aren't fifty-one well-delineated religions. There are billions of people, each with their own set of opinions to defend. It would be impractical for everyone to physically coordinate, so they have to rely on Schelling points.

In the original example with the alien, I cheated by using the phrase "right-thinking people". In reality, figuring out who qualifies to join the Right-Thinking People Club is half the battle, and everyone's likely to have a different opinion on it. So far, the practical solution to the coordination problem, the "only defensible Schelling point", has been to just have everyone agree to defend everyone else without worrying whether they're right-thinking or not, and this is easier than trying to coordinate room for exceptions like Holocaust deniers. Give up on the Holocaust deniers, and no one else can be sure what other Schelling point you've committed to, if any...

...unless they can. In parts of Europe, they've banned Holocaust denial for years and everyone's been totally okay with it. There are also a host of other well-respected exceptions to free speech, like shouting "fire" in a crowded theater. Presumably, these exemptions are protected by tradition, so that they have become new Schelling points there, or else are so obvious that everyone except Holocaust deniers is willing to allow a special Holocaust denial exception without worrying it will impact their own case.

Summary

Slippery slopes legitimately exist wherever a policy not only affects the world directly, but affects people's willingness or ability to oppose future policies. Slippery slopes can sometimes be avoided by establishing a "Schelling fence" - a Schelling point that the various interest groups involved - or yourself across different values and times - make a credible precommitment to defend.
