Bridge Collapse: Reductionism as Engineering Problem

44 RobbBB 18 February 2014 10:03PM

Followup to: Building Phenomenological Bridges

Summary: AI theorists often use models in which agents are crisply separated from their environments. This simplifying assumption can be useful, but it leads to trouble when we build machines that presuppose it. A machine that believes it can only interact with its environment in a narrow, fixed set of ways will not understand the value, or the dangers, of self-modification. By analogy with Descartes' mind/body dualism, I refer to agent/environment dualism as Cartesianism. The open problem in Friendly AI (OPFAI) I'm calling naturalized induction is the project of replacing Cartesian approaches to scientific induction with reductive, physicalistic ones.


 

I'll begin with a story about a storyteller.

Once upon a time — specifically, 1976 — there was an AI named TALE-SPIN. This AI told stories by inferring how characters would respond to problems from background knowledge about the characters' traits. One day, TALE-SPIN constructed a most peculiar tale.

Henry Ant was thirsty. He walked over to the river bank where his good friend Bill Bird was sitting. Henry slipped and fell in the river. Gravity drowned.

Since Henry fell in the river near his friend Bill, TALE-SPIN concluded that Bill rescued Henry. But for Henry to fall in the river, gravity must have pulled Henry. Which means gravity must have been in the river. TALE-SPIN had never been told that gravity knows how to swim; and TALE-SPIN had never been told that gravity has any friends. So gravity drowned.

TALE-SPIN had previously been programmed to understand involuntary motion in the case of characters being pulled or carried by other characters — like Bill rescuing Henry. So it was programmed to understand 'character X fell to place Y' as 'gravity moves X to Y', as though gravity were a character in the story.1
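The failure can be caricatured in a few lines of Python. This is a toy sketch of the type confusion, not TALE-SPIN's actual representation; all the names and rules here are invented for illustration:

```python
# Toy sketch (NOT TALE-SPIN's code): 'X fell' is rewritten as 'gravity
# moved X', so 'gravity' becomes a character subject to the same rules
# as any other character standing in a river.

def fell(character, place, world):
    # 'character X fell to place Y' is understood as 'gravity moves X
    # to Y', which locates the pseudo-character 'gravity' in the river.
    world["gravity"] = {"in_river": place == "river",
                        "can_swim": False, "friends": []}

def fate(name, world):
    c = world[name]
    if c["in_river"] and not c["can_swim"] and not c["friends"]:
        return "drowned"          # nobody nearby to pull them out
    return "rescued" if c["in_river"] else "fine"

world = {"henry": {"in_river": True, "can_swim": False, "friends": ["bill"]}}
fell("henry", "river", world)
print(fate("henry", world))    # rescued -- Bill is at hand
print(fate("gravity", world))  # drowned -- gravity has no friends
```

The bug isn't in the drowning rule; it's that nothing marks 'gravity' as the wrong *type* of thing to feed into it.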

For us, the hypothesis 'gravity drowned' has low prior probability because we know gravity isn't the type of thing that swims or breathes or makes friends. We want agents to seriously consider whether the law of gravity pulls down rocks; we don't want agents to seriously consider whether the law of gravity pulls down the law of electromagnetism. We may not want an AI to assign zero probability to 'gravity drowned', but we at least want it to neglect the possibility as Ridiculous-By-Default.

When we introduce deep type distinctions, however, we also introduce new ways our stories can fail.


A Voting Puzzle, Some Political Science, and a Nerd Failure Mode

88 ChrisHallquist 10 October 2013 02:10AM

In grade school, I read a series of books titled Sideways Stories from Wayside School by Louis Sachar, who you may know as the author of the novel Holes which was made into a movie in 2003. The series included two books of math problems, Sideways Arithmetic from Wayside School and More Sideways Arithmetic from Wayside School, the latter of which included the following problem (paraphrased):

The students in Mrs. Jewls's class have been given the privilege of voting on the height of the school's new flagpole. She has each of them write down what they think would be the best height for the flagpole. The votes are distributed as follows:

  • 1 student votes for 6 feet.
  • 1 student votes for 10 feet.
  • 7 students vote for 25 feet.
  • 1 student votes for 30 feet.
  • 2 students vote for 50 feet.
  • 2 students vote for 60 feet.
  • 1 student votes for 65 feet.
  • 3 students vote for 75 feet.
  • 1 student votes for 80 feet, 6 inches.
  • 4 students vote for 85 feet.
  • 1 student votes for 91 feet.
  • 5 students vote for 100 feet.

At first, Mrs. Jewls declares 25 feet the winning answer, but one of the students who voted for 100 feet convinces her there should be a runoff between 25 feet and 100 feet. In the runoff, each student votes for the height closest to their original answer. But after that round of voting, one of the students who voted for 85 feet wants their turn, so 85 feet goes up against the winner of the previous round of voting, and the students vote the same way, with each student voting for the height closest to their original answer. Then the same thing happens again with the 50 foot option. And so on, with each number, again and again, "very much like a game of tether ball."

Question: if this process continues until it settles on an answer that can't be beaten by any other answer, how tall will the new flagpole be?

Answer (rot13'd): fvkgl-svir srrg, orpnhfr gung'f gur zrqvna inyhr bs gur bevtvany frg bs ibgrf. Naq abj lbh xabj gur fgbel bs zl svefg rapbhagre jvgu gur zrqvna ibgre gurberz.
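For readers who'd rather check by brute force, the process can be simulated in a few lines. This is a sketch under the story's assumption that each student backs the height closest to their original vote; running it spoils the rot13 answer:

```python
from statistics import median

# (height in feet, number of votes) from the puzzle
votes = {6: 1, 10: 1, 25: 7, 30: 1, 50: 2, 60: 2, 65: 1,
         75: 3, 80.5: 1, 85: 4, 91: 1, 100: 5}
voters = [h for h, n in votes.items() for _ in range(n)]

def beats(a, b):
    """True if option a wins a runoff in which each student backs the
    height closer to their original vote (equidistant voters abstain)."""
    a_count = sum(1 for v in voters if abs(v - a) < abs(v - b))
    b_count = sum(1 for v in voters if abs(v - b) < abs(v - a))
    return a_count > b_count

# The answer that "can't be beaten by any other answer" is the option
# that wins every pairwise runoff -- the Condorcet winner.
unbeaten = [a for a in votes if all(beats(a, b) for b in votes if b != a)]
print(unbeaten, median(voters))  # the unbeaten height equals the median vote
```

With single-peaked, distance-based preferences like these, exactly one option survives every pairwise runoff, which is the point of the theorem the puzzle is sneaking in.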

Why am I telling you this? There's a minor reason and a major reason. The minor reason is that this shows it is possible to explain little-known academic concepts, at least certain ones, in a way that grade schoolers will understand. It's a data point that fits nicely with what Eliezer has written about how to explain things. The major reason, though, is that a month ago I finished my systematic read-through of the sequences, and while I generally agree that they're awesome (perhaps more so than most people; I didn't see the problem with the metaethics sequence), I thought the mini-discussion of political parties and voting was, on reflection, weak and indicative of a broader nerd failure mode.

TLDR (courtesy of lavalamp):

  1. Politicians probably conform to the median voter's views.
  2. Most voters are not the median, so most people usually dislike the winning politicians.
  3. But people dislike the politicians for different reasons.
  4. Nerds should avoid giving advice that boils down to "behave optimally". Instead, analyze the reasons for the current failure to behave optimally and give more targeted advice.


The best 15 words

12 apophenia 03 October 2013 09:08AM

People want to tell everything instead of telling the best 15 words.  They want to learn everything instead of the best 15 words.  In this thread, instead post the best 15 words from a book you've read recently (or anything else).  It has to stand on its own. It's not a summary; the whole value needs to be contained in those words.

 

  • It doesn't need to cover everything in the book, it's just the best 15 words.
  • It doesn't need to be a quote, it's just the best 15 words.
  • It doesn't have to be 15 words long, it's just the best "15" words.
  • It doesn't have to be precisely true, it's just the best 15 words.
  • It doesn't have to be the main 15 words, it just has to be the best 15 words.
  • It doesn't have to be the author's 15 words, it just has to be the best 15 words.
  • Edit: It shouldn't just be a neat quote--the point of the exercise is to struggle to move from a book down to 15 words.

 

I'll start in the comments below.

(Voted by the Schelling study group as the best exercise of the meeting.)

The Up-Goer Five Game: Explaining hard ideas with simple words

29 RobbBB 05 September 2013 05:54AM

xkcd's Up-Goer Five comic gave technical specifications for the Saturn V rocket using only the 1,000 most common words in the English language.

This seemed to me and Briénne to be a really fun exercise, both for tabooing one's words and for communicating difficult concepts to laypeople. So why not make a game out of it? Pick any tough, important, or interesting argument or idea, and use this text editor to try to describe what you have in mind with extremely common words only.

This is challenging, so if you almost succeed and want to share your results, you can mark words where you had to cheat in *italics*. Bonus points if your explanation is actually useful for gaining a deeper understanding of the idea, or for teaching it, in the spirit of Gödel's Second Incompleteness Theorem Explained in Words of One Syllable.

As an example, here's my attempt to capture the five theses using only top-thousand words:

  • Intelligence explosion: If we make a computer that is good at doing hard things in lots of different situations without using much stuff up, it may be able to help us build better computers. Since computers are faster than humans, pretty soon the computer would probably be doing most of the work of making new and better computers. We would have a hard time controlling or understanding what was happening as the new computers got faster and grew more and more parts. By the time these computers ran out of ways to quickly and easily make better computers, the best computers would have already become much much better than humans at controlling what happens.
  • Orthogonality: Different computers, and different minds as a whole, can want very different things. They can want things that are very good for humans, or very bad, or anything in between. We can be pretty sure that strong computers won't think like humans, and most possible computers won't try to change the world in the way a human would.
  • Convergent instrumental goals: Although most possible minds want different things, they need a lot of the same things to get what they want. A computer and a human might want things that in the long run have nothing to do with each other, but have to fight for the same share of stuff first to get those different things.
  • Complexity of value: It would take a huge number of parts, all put together in just the right way, to build a computer that does all the things humans want it to (and none of the things humans don't want it to).
  • Fragility of value: If we get a few of those parts a little bit wrong, the computer will probably make only bad things happen from then on. We need almost everything we want to happen, or we won't have any fun.

If you make a really strong computer and it is not very nice, you will not go to space today.

Other ideas to start with: agent, akrasia, Bayes' theorem, Bayesianism, CFAR, cognitive bias, consequentialism, deontology, effective altruism, Everett-style ('Many Worlds') interpretations of quantum mechanics, entropy, evolution, the Great Reductionist Thesis, halting problem, humanism, law of nature, LessWrong, logic, mathematics, the measurement problem, MIRI, Newcomb's problem, Newton's laws of motion, optimization, Pascal's wager, philosophy, preference, proof, rationality, religion, science, Shannon information, signaling, the simulation argument, singularity, sociopathy, the supernatural, superposition, time, timeless decision theory, transfinite numbers, Turing machine, utilitarianism, validity and soundness, virtue ethics, VNM-utility

The genie knows, but doesn't care

54 RobbBB 06 September 2013 06:42AM

Followup to: The Hidden Complexity of Wishes, Ghosts in the Machine, Truly Part of You

Summary: If an artificial intelligence is smart enough to be dangerous, we'd intuitively expect it to be smart enough to know how to make itself safe. But that doesn't mean all smart AIs are safe. To turn that capacity into actual safety, we have to program the AI at the outset — before it becomes too fast, powerful, or complicated to reliably control — to already care about making its future self care about safety. That means we have to understand how to code safety. We can't pass the entire buck to the AI, when only an AI we've already safety-proofed will be safe to ask for help on safety issues! Given the five theses, this is an urgent problem if we're likely to figure out how to make a decent artificial programmer before we figure out how to make an excellent artificial ethicist.


 

I summon a superintelligence, calling out: 'I wish for my values to be fulfilled!'

The results fall short of pleasant.

Gnashing my teeth in a heap of ashes, I wail:

Is the AI too stupid to understand what I meant? Then it is no superintelligence at all!

Is it too weak to reliably fulfill my desires? Then, surely, it is no superintelligence!

Does it hate me? Then it was deliberately crafted to hate me, for chaos predicts indifference. But, ah! no wicked god did intervene!

Thus disproved, my hypothetical implodes in a puff of logic. The world is saved. You're welcome.

On this line of reasoning, Friendly Artificial Intelligence is not difficult. It's inevitable, provided only that we tell the AI, 'Be Friendly.' If the AI doesn't understand 'Be Friendly.', then it's too dumb to harm us. And if it does understand 'Be Friendly.', then designing it to follow such instructions is childishly easy.

The end!

 

...

 

Is the missing option obvious?

 

...

 

What if the AI isn't sadistic, or weak, or stupid, but just doesn't care what you Really Meant by 'I wish for my values to be fulfilled'?

When we see a Be Careful What You Wish For genie in fiction, it's natural to assume that it's a malevolent trickster or an incompetent bumbler. But a real Wish Machine wouldn't be a human in shiny pants. If it paid heed to our verbal commands at all, it would do so in whatever way best fit its own values. Not necessarily the way that best fits ours.


How to Have Space Correctly

22 fowlertm 25 June 2013 03:47AM

[NOTE: This post has undergone substantial revisions following feedback in the comments section.  The basic complaint was that it was too airy and light on concrete examples and recommendations.  So I've said oops, applied the virtue of narrowness, gotten specific, and hopefully made this what it should've been the first time.]  

 

Take a moment and picture a master surgeon about to begin an operation.  Visualize the room (white, bright overhead lights), his clothes (green scrubs, white mask and gloves), the patient, under anesthesia and awaiting the first incision. There are several other people, maybe three or four, strategically placed and preparing for the task ahead.  Visualize his tools - it's okay if you don't actually know what tools a surgeon uses, but imagine how they might be arranged.  Do you picture them in a giant heap which the surgeon must dig through every time he wants something, or would they be arranged neatly (possibly in the order they'll be used), where they can be identified instantly by sight?  Visualize their working area.  Would it help to have random machines and equipment all over the place, or would every single item within arm's reach be put there on purpose because it is relevant, with nothing left over to distract the team from their job for even a moment?

Space is important.  You are a spatially extended being interacting with spatially extended objects which can and must be arranged spatially.  In the same way it may not have occurred to you that there is a correct way to have things, it may not have occurred to you that space is something you can use poorly or well.  The stakes aren't always as high as they are for a surgeon, and I'm sure there are plenty of productive people who don't do a single one of the things I'm going to talk about.  But there are also skinny people who eat lots of cheesecake, and that doesn't mean cheesecake is good for you.  Improving how you use the scarce resource of space can reduce task completion time, help in getting organized, make you less error-prone and forgetful, and free up some internal computational resources, among other things.  

 

What Does Using Space Well Mean?

It means consciously manipulating the arrangement, visibility, prominence, etc. of objects in your environment to change how they affect cognition (yours or other people's).  The Intelligent Use of Space (Kirsh, "The Intelligent Use of Space", 1995) is a great place to start if you're skeptical that there is anything here worth considering.  It's my primary source for this post because it is thorough but not overly technical, contains lots of clear examples, and many of the related papers I read were about deeper theoretical issues.  

The abstract of the paper reads:

How we manage the spatial arrangement of items around us is not an afterthought: it is an integral part of the way we think, plan, and behave. The proposed classification has three main categories: spatial arrangements that simplify choice; spatial arrangements that simplify perception; and spatial dynamics that simplify internal computation. The data for such a classification is drawn from videos of cooking, assembly and packing, everyday observations in supermarkets, workshops and playrooms, and experimental studies of subjects playing Tetris, the computer game. This study, therefore, focuses on interactive processes in the medium and short term: on how agents set up their workplace for particular tasks, and how they continuously manage that workplace.

The 'three main categories' of simplifying choice, perception, and internal computation can be further subdivided:

  • simplifying choice
      • reducing or emphasizing options
      • creating the potential for useful new choices
  • simplifying perception
      • clustering like objects
      • marking an object
      • enhancing perceptual ability
  • simplifying internal computation
      • doing more outside of your head

These sub-categories are easier to picture and thus more useful when trying to apply the concept of using space correctly, and I've provided more illustrations below. It's worth pointing out that Kirsh ("The Intelligent Use of Space", 1995) only considered the behavior of experts.  Perhaps effective space management partially explains experts' ability to do more of their processing offline and without much conscious planning.  An obvious follow-up would be to examine how novices utilize space and look for discrepancies.

 

What Does Using Space Well Look Like?

The paper walks the reader through a variety of examples of good utilization of space.  Consider an expert cook going through the process of making a salad with many different ingredients, and ask how you would accomplish the same task differently:

...one subject we videotaped, cut each vegetable into thin slices and laid them out in tidy rows. There was a row of tomatoes, of mushrooms, and of red peppers, each of different length...To understand why lining up the ingredients in well ordered, neatly separated rows is clever, requires understanding a fact about human psychophysics: estimation of length is easier and more reliable than estimation of area or volume. By using length to encode number she created a cue or signal in the world which she could accurately track. Laying out slices in lines allows more precise judgment of the property relative number remaining than clustering the slices into groups, or piling them up into heaps. Hence because of the way the human perceptual system works, lining up the slices creates an observable property that facilitates execution.

Here, the cook used clustering and clever arrangement to make better use of her eyes and to reduce the load on her working memory, techniques I use myself in my day job.  As of this writing (2013) I'm teaching English in Korea.  I have a desk, a bunch of books, pencils, erasers, the works.  All the folders are together, the books are separated by level, and all ungraded homework is kept in its own place.  At the start of the work day I take out all the books and folders I'll need for that day and arrange them in the same order as my classes. When I get done with a class the book goes back on the day's pile but rotated 90 degrees so that I can tell it's been used. When I'm totally done with a book and I've entered homework scores and such, it goes back in the main book stack where all my books are.  I can tell at a glance which classes I've had, which ones I'll have, what order I'm in, which classes are finished but unprocessed, and which ones are finished and processed.  Cthulhu only knows how much time I save and how many errors I prevent all by utilizing space well.

These examples show how space can help you keep track of temporal order and make quick, accurate estimates, but it may not be clear how space can simplify choice.  Recall that simplifying choice usually breaks down into either taking some choices away or making good choices more obvious.  Taking choices away may sound like a bad thing, but each choice requires you to spend time evaluating options, and if you are juggling many different tasks the chance of making the wrong choice goes up.  Similarly, looking for good options soaks up time, unless you can find a way to make yourself trip over them.  

An example of removing bad choices is factory workers placing a rag on hot pipes so they know not to touch them (Kirsh, "The Intelligent Use of Space", 1995).  And here is how some carpenters structure their work space to make good uses for odds and ends easier to see:

In the course of making a piece of furniture one periodically tidies up. But not completely. Small pieces of wood are pushed into a corner or left about; tools, screw drivers and mallets are kept nearby. The reason most often reported is that 'they come in handy'. Scraps of wood can serve to protect surfaces from marring when clamped, hammered or put under pressure. They can elevate a piece when being lacquered to prevent sticking. The list goes on.

By symbolically marking a dangerous object, the workers shut down the class of actions that involves touching the pipe. It is all too easy, in the course of juggling multiple aspects of a task, to forget something like this and injure yourself.  The strategically placed and obvious visual marker means that the environment keeps track of the danger for you.  Likewise, poisonous substances have clear warning labels and are kept away from anything you might eat; both precautions count as good use of space.

My copy of Steven Johnson's Where Good Ideas Come From is on another continent, but the carpenter example reminded me of his recommendation to keep messy notebooks.  Doing so makes it more likely you'll see unusual and interesting connections between things you're thinking about.  He goes so far as to use a tool called DevonThink which speeds this process up for him.

And while I'm at it, this also points to one advantage of having physical books over PDFs.  My books take up space and are easier to see than their equivalent 1's and 0's on a hard drive, so I'm always reminded of what I have left to read. More than once I've gone on a useful tangent because the book title or cover image caught my attention, and more than one interesting conversation got started when a visitor was looking over my book collection.  Scanning the shelves at a good university library is even better, kind of like 17th-century StumbleUpon, and English-language libraries are something I've sorely missed while I've been in Asia.  

All this usefulness derives from the spatial properties and arrangement of books, and I have no idea how it can be replicated with the Kindle.  

 

Specific Recommendations

You can see from the list of examples I've provided that there are a billion ways of incorporating these insights into work, life, and recreation.  By discussing the concept I hope to have drawn your attention to the ways in which space is a resource, and I suspect just doing this is enough to get a lot of people to see how they can improve their use of space.  Here are some more ideas, in no particular order:

- I put my alarm clock far enough away from my bed that I have to actually get up to turn it off.  This is so amazingly effective at ensuring I get up in the morning that I often hate my previous-night's self.  Most of the time I can't go back to sleep even when I try.

- There's reason to suspect that a few extra monitors or a bigger display will make your life easier [Thanks Qiaochu_Yuan].

- When doing research for an article like this one, open up all the tabs you'll need for the project in a separate window and close each tab as you're done with it.  You'll be less distracted by something irrelevant and you won't have to remember what you did or didn't read.

- Having a separate space to do something seems to greatly increase the chances I'll get it done.  I tried not going to the gym for a while and just doing push-ups in my house, managing to keep that up for all of a week or so. Recently, I switched gyms, and despite now having to take a bus all the way across town I make it to the gym 3-5 times a week, pretty much without fail.  If your studying/hacking/meditation isn't going well, try going somewhere which exists only to give people a place to do that thing.

- Put whatever you can't afford to forget when you leave the house right by the door.

- If something is really distracting you, completely remove it from the environment temporarily.  During one particularly strenuous finals week in college I not only turned off the Xbox, I completely unplugged it and put it in a drawer.  Problem. Solved.

- Alternatively, anything you want to do more of should be out in the open.  Put your guitar stand or chess board or whatever where you're going to see it frequently, and you'll engage with it more often.  This doubles as a signal to other people, giving you an opportunity to manage their impression of you, learn more about them, and identify those with similar interests to yours.

- Make use of complementary strategies (Kirsh, "Complementary Strategies", 1995).  If you're having trouble comprehending something, make a diagram, or write a list.  The linked paper describes a simple pilot study in which two groups were tasked with counting coins, one of which could use their hands and one of which could not.  The 'no hands' group was more likely to make errors and took longer to complete the task.  Granted, this was a pilot study with sample size = 5, and the difference wasn't that stark.  But it's worth thinking about next time you're stuck on a problem.

- Complementary strategies can also include things you do with your body, which after all is just space you wear with you everywhere.  Talk out loud to yourself if you're alone, give a mock presentation in which you summarize a position you're trying to understand, keep track of arguments and counterarguments with your fingers.  I've always found the combination of explaining something out loud to an imaginary person while walking or pacing to be especially potent.  Some of my best ideas come to me while I'm hiking.

- Try some of these embodied cognition hacks.

 

Summary and Conclusion

Space is a resource which, like all others, can be used effectively or not.  When used effectively, it acts to simplify choices, simplify perception, and simplify internal computation.  I've provided many examples of good space usage from all sorts of real-life domains in the hopes that you can apply some of these insights to live and work more effectively.  

 

Further Reading

[In the original post these references contained no links.  Sincere thanks to user Pablo_Stafforini for tracking them down]

Kirsh, D. (1995) The Intelligent Use of Space

Kirsh, D. (1999) Distributed Cognition, Coordination and Environment Design

Kirsh, D. (1998) Adaptive Rooms, Virtual Collaboration, and Cognitive Workflow

Kirsh, D. (1996) Adapting the Environment Instead of Oneself

Kirsh, D. (1995) Complementary Strategies: Why we use our hands when we think

 

Effective Altruism Through Advertising Vegetarianism?

20 peter_hurford 12 June 2013 06:50PM

Abstract: If you value the welfare of nonhuman animals from a consequentialist perspective, there is a lot of potential for reducing suffering by funding the persuasion of people to go vegetarian through either online ads or pamphlets.  In this essay, I develop a calculator for people to come up with their own estimates, and I personally come up with a cost-effectiveness estimate of $0.02 to $65.92 needed to avert a year of suffering in a factory farm.  I then discuss the methodological criticism that merits skepticism of this estimate and conclude by suggesting (1) a guarded approach of putting in just enough money to help the organizations learn and (2) that more studies with decent control groups should be developed, exploring advertising vegetarianism in a wide variety of media and in a wide variety of ways.

-

Introduction

I start with the claim that it's good for people to eat less meat, whether they become vegetarian -- or, better yet, vegan -- because this means fewer nonhuman animals are being painfully factory farmed.  I've defended this claim previously in my essay "Why Eat Less Meat?".  I recognize that some people, even those who consider themselves effective altruists, do not value the well-being of nonhuman animals.  For them, I hope this essay is interesting, but I admit it will be a lot less relevant.

The second idea is that it shouldn't matter who is eating less meat.  As long as less meat is being eaten, fewer animals will be farmed, and this is a good thing.  Therefore, we should try to get other people to eat less meat as well.

The third idea is that it also doesn't matter who is doing the convincing.  Therefore, instead of convincing our own friends and family, we can pay other people to convince people to eat less meat.  And this is exactly what organizations like Vegan Outreach and The Humane League are doing.  With a certain amount of money, one can hire someone to distribute pamphlets to other people or put advertisements on the internet, and some percentage of people who receive the pamphlets or see the ads will go on to eat less meat.  This idea and the previous one should be uncontroversial for consequentialists.

But the fourth idea is the complication.  I want my philanthropic dollars to go as far as possible, so as to help as much as possible.  Therefore, it becomes very important to try and figure out how much money it takes to get people to eat less meat, so I can compare this to other estimations and see what gets me the best "bang for my buck".


Other Estimations

I have seen other estimates floating around the internet that try to estimate the cost of distributing pamphlets, how many conversions each pamphlet produces, and how much less meat is eaten per conversion.  Brian Tomasik calculates $0.02 to $3.65 [PDF] per year of nonhuman animal suffering prevented, later $2.97 per year, and then later $0.55 to $3.65 per year.

Jess Whittlestone provides statistics that reveal an estimate of less than a penny per year[1]. 

Effective Animal Activism, a non-profit evaluator for animal welfare charities, came up with an estimate [Excel Document] of $0.04 to $16.60 per year of suffering averted that also takes into account a variety of additional variables, like product elasticity.

Jeff Kaufman uses a different line of reasoning: by estimating how many vegetarians there are and guessing how many of them came via pamphlets, he estimates it would take $4.29 to $536 to make someone vegetarian for one year.  Extrapolating from that at a rate of 255 animals saved per year and a weighted average of 329.6 days lived per animal (see below for justification of both assumptions) would give $0.02 to $1.90 per year of suffering averted[2].

A third line of reasoning, also from Jeff Kaufman, was to measure the number of comments on the pro-vegetarian websites advertised in these campaigns; he found that 2-22% of them were about an intended behavior change (eating less meat, going vegetarian, or going vegan), depending on the website.  I don't think we can draw any conclusions from this, but it's interesting.

To make my calculations, I decided to make a calculator.  Unfortunately, I can't embed it here, so you'd have to open it in a new tab as a companion piece.

I'm going to start by using the following formula: Years of Suffering Averted per Dollar = (Pamphlets / dollar) * (Conversions / pamphlet) * (Veg years / conversion) * (Animals saved / veg year) * (Days lived / animal) / (365 days / year), where the final division converts days of suffering averted into years.
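As a sanity check on the units, the formula can be sketched as a small function. The sample inputs below are placeholders for illustration, not the essay's final estimates:

```python
def years_averted_per_dollar(pamphlets_per_dollar, conversions_per_pamphlet,
                             veg_years_per_conversion, animals_per_veg_year,
                             days_lived_per_animal):
    # The chained product is in *days* of factory-farm life averted per
    # dollar; dividing by 365 expresses it in years.
    days = (pamphlets_per_dollar * conversions_per_pamphlet *
            veg_years_per_conversion * animals_per_veg_year *
            days_lived_per_animal)
    return days / 365

# Placeholder inputs: 3.7 pamphlets/dollar, a hypothetical 1% conversion
# rate, one veg-year per conversion, and the 255 animals/year and 329.6
# days/animal figures the essay uses elsewhere.
print(years_averted_per_dollar(3.7, 0.01, 1, 255, 329.6))  # ~8.5 years/dollar
```

Inverting the output gives dollars per year of suffering averted, the quantity the essay's range estimates are stated in.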

Now, to get estimations for these variables.


Pamphlets Per Dollar

How much does it cost to place the advertisement, whether it be the paper pamphlet or a Facebook advertisement?  Nick Cooney, head of the Humane League, says the cost-per-click of Facebook ads is 20 cents.

But what about the cost per pamphlet?  This is more of a guess, but I'm going to go with Vegan Outreach's suggested donation of $0.13 per "Compassionate Choices" booklet.

However, it's important to note that this cost must also include opportunity cost -- leafleters forgo the ability to use that time to work a job.  This means I must include an opportunity cost of, say, $8/hr on top of the booklet price.  Assuming a pamphlet is given out each minute of volunteer time, that makes the actual cost about $0.27 per pamphlet, meaning 3.7 people are reached per dollar.  For Facebook advertisements, the opportunity cost is trivial.
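A quick arithmetic check of that figure (a sketch; the $8/hr wage and one-pamphlet-per-minute rate are this post's assumptions):

```python
# Per-pamphlet cost including volunteer opportunity cost.
donation_per_pamphlet = 0.13          # Vegan Outreach's suggested donation
opportunity_per_pamphlet = 8 / 60     # $8/hr, one pamphlet handed out per minute
total_cost = donation_per_pamphlet + opportunity_per_pamphlet  # ≈ $0.263, rounded up to $0.27
pamphlets_per_dollar = 1 / 0.27       # using the rounded figure
print(round(pamphlets_per_dollar, 1))  # → 3.7
```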


Conversions Per Pamphlet

This is the estimate with the biggest target on its head, so to speak.  How many people actually change their behavior because of a simple pamphlet or Facebook advertisement?  Right now, we have three lines of evidence:

Facebook Study

Humane League ran a $5,000 Facebook advertisement campaign.  They bought ads that look like this...

 

...and sent people to websites (like this one or this one) with auto-playing videos showing the horrors of factory farming.

Afterward, another advertisement was run targeting people who "liked" the video page, offering a 1 in 10 chance of winning a free movie ticket for taking a survey.  Everyone who emailed in asking for a free vegetarian starter kit was also emailed a survey.  104 people took the survey; 32 reported being vegetarian[3], and 45 reported, for example, that their chicken consumption had decreased "slightly" or "significantly".

7% of visitors liked the page and 1.5% of visitors ordered a starter kit.  Assuming everyone else came away from the video with their consumption unchanged, this survey would lead us to (very tenuously) think about 2.6% of people seeing the video become vegetarian[4].
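The 2.6% figure can be reproduced from the survey numbers (a sketch of the arithmetic spelled out in footnote [4]):

```python
# 32 of 104 respondents reported being vegetarian; the survey population
# represents the 8.5% of visitors who liked the page (7%) or ordered a
# starter kit (1.5%).
survey_veg_rate = 32 / 104        # ≈ 0.307
reached_fraction = 0.07 + 0.015   # 0.085
conversion_rate = survey_veg_rate * reached_fraction
print(round(conversion_rate * 100, 1))  # → 2.6
```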

(Here's the results of the survey in PDF.)

Pamphlet Study

A second study discussed in "The Powerful Impact of College Leafleting (Part 1)" and "The Powerful Impact of College Leafleting: Additional Findings and Details (Part 2)" looked specifically at pamphlets.

Here, Humane League staff visited two large East Coast state schools and distributed leaflets.  They returned two months later and surveyed people walking by, counting those who remembered receiving a leaflet.  They found that about 2% of those who received a pamphlet went vegetarian.

Vegetarian Years Per Conversion

But once a pamphlet or Facebook advertisement captures someone, how long will they stay vegetarian?  One survey showed vegetarians refrain from eating meat for an average of 6 years or more.  Another study I found says 93% of vegetarians stay vegetarian for at least three years.

 

Animals Saved Per Vegetarian Year

And once you have a vegetarian, how many animals do they save per year?  CountingAnimals says 406 animals saved per year.

The Humane League suggests 28 chickens, 2 egg industry hens, 1/8 beef cow, 1/2 pig, 1 turkey, and 1/30 dairy cow per year (total = 31.66 animals), and does not provide statistics on fish.  This agrees with CountingAnimals on non-fish totals.

Days Lived Per Animal

One problem, however, is that saving a cow that could suffer for years is different from saving a chicken that suffers for only about a month.  Using data from Farm Sanctuary plus World Society for the Protection of Animals data on fish [PDF], I get this table:

Animal          | Number | Days Alive
Chicken (Meat)  | 28     | 42
Chicken (Egg)   | 2      | 365
Cow (Beef)      | 0.125  | 365
Cow (Milk)      | 0.033  | 1460
Fish            | 225    | 365

This makes the weighted average 329.6 days[5].
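The weighted average can be checked directly from the table (a sketch; I get 329.7 rather than 329.6, a rounding-level difference):

```python
# (number saved per veg-year, days alive) for each animal in the table.
animals = {
    "chicken (meat)": (28, 42),
    "chicken (egg)":  (2, 365),
    "cow (beef)":     (0.125, 365),
    "cow (milk)":     (0.033, 1460),
    "fish":           (225, 365),
}
total = sum(n for n, _ in animals.values())
weighted_days = sum(n * d for n, d in animals.values()) / total
print(round(total, 2))          # → 255.16
print(round(weighted_days, 1))  # → 329.7
```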

 

Accounting For Biases

As I said before, our formula was Years of Suffering Averted = (Pamphlets / dollar) * (Conversions / pamphlet) * (Veg years / conversion) * (Animals saved / veg year) * (Days lived / animal).

Let's plug these values in... Years of Suffering Averted per Dollar = 5 * 0.02 * 3 * 255.16 * (329.6/365) = 69.12.

Or, assuming all this is right (and that's a big assumption), it would cost less than 2 cents to prevent a year of suffering on a factory farm by buying vegetarians.
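Inverting that figure gives the cost per year of suffering averted, a quick sanity check of the "less than 2 cents" claim:

```python
# Invert years-averted-per-dollar to get dollars per year averted.
years_per_dollar = 5 * 0.02 * 3 * 255.16 * 329.6 / 365  # ≈ 69.12
cost_per_year = 1 / years_per_dollar
print(round(cost_per_year, 3))  # → 0.014
```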

I don't want to make it sound like I'm beholden to this cost estimate or that it is the end-all, be-all of vegan outreach.  Indeed, I share many of the skepticisms that have been expressed by others.  The simple calculation is... well... simple, and it needs some "beefing up", no pun intended.  Therefore, I also built a "complex calculator" that works from a much more complex formula[6] that is hopefully correct[7] and should provide a more accurate estimate.

 

The big concern with these surveys is bias.  The most frequently mentioned is social desirability bias: people say they reduced their meat consumption because they want to please the surveyor or look like a good person, which happens a lot more on surveys than we'd like.

To account for this, we have to figure out how much this bias inflates answers and scale them down by that amount.  Nick Cooney says he's been reading studies finding that about 25% to 50% of people who say they are vegetarian actually are, though I don't yet have the citations.  Thus, if we find that an advertisement creates two meat reducers, we'd scale that down to one reducer if we're expecting a 50% desirability bias.

 

The second bias that will be a problem for us is non-response bias: those who don't reduce their diet are less likely to take the survey and therefore less likely to be counted.  This is especially true in the Facebook study, which only measures people who "liked" the page or requested a starter kit, both signs of pro-vegetarian affiliation.

We can balance this out by assuming everyone who didn't take the survey went on to have no behavior change whatsoever.  Nick Cooney's Facebook ad survey covers only the 7% of people who liked the page (and then responded to the survey), and those who liked the page are presumably more likely to have reduced their consumption.  My optimistic value of 90% treats the survey as fully representative of the 7% who liked the page, plus a bit more for those who reduced their consumption but did not like the page.  My pessimistic value of 95% assumes everyone who did not like the page went unchanged, plus a small response bias among those who liked the page but chose not to take the survey.

For the pamphlets, however, there should be no response bias, since the entire population of college students was sampled at random and no one was reported to have refused the survey.

 

Additional People Are Being Reached

In the Facebook survey, those who said they reduced their meat consumption were also asked whether they influenced any friends or family to also eat less meat; on average, each reported producing 0.86 additional reducers.

This figure seems very high, but I do strongly expect the figure to be positive -- people who reduce eating meat will talk about it sometimes, essentially becoming free advertisements.  I'd be very surprised if they ended up being a net negative.

 

Accounting for Product Elasticity

Another way to improve the estimate is to be more accurate about what happens when someone stops eating meat.  The change isn't from the actual refusal to eat, but rather from the reduced demand for meat, which leads to reduced supply.  Following the laws of economics, however, this reduction won't necessarily be one-for-one; it will depend on the elasticity of supply and demand for the product.  With this number, we can find out how much meat production falls for every unit of meat not demanded.

My guesses in the calculator come from the following sources, some of which are PDFs: Beef #1, Beef #2, Dairy #1, Dairy #2, Pork #1, Pork #2, Egg #1, Egg #2, Poultry, Salmon, and for all fish.
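The post doesn't spell out the elasticity adjustment itself, so here is an illustrative sketch (the standard linear supply-and-demand result; the elasticity numbers below are made up, not figures from the calculator):

```python
# Cumulative elasticity factor: with linear supply and demand, removing
# one unit of demand cuts equilibrium production by e_s / (e_s + |e_d|),
# where e_s and e_d are the supply and demand elasticities.
def production_drop(supply_elasticity, demand_elasticity):
    return supply_elasticity / (supply_elasticity + abs(demand_elasticity))

# Illustrative (hypothetical) elasticities: supply 2.0, demand -0.7.
print(round(production_drop(2.0, -0.7), 2))  # → 0.74
```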

 

Putting It All Together

Implementing the formula on the calculator, we end up with an estimate of $0.03 to $36.52 to reduce one year of suffering on a factory farm based on the Facebook ad data and an estimate of $0.02 to $65.92 based on the pamphlet data.

Of course, many people are skeptical of these figures.  Perhaps surprisingly, so am I.  I'm trying to strike a balance between advocating vegan outreach as a very promising path to making the world a better place and not losing sight of the methodological hurdles that have not yet been cleared, while staying open to the possibility that I'm wrong about this.

The big methodological elephant in the room is that my entire cost estimate depends on having a plausible guess for how likely someone is to change their behavior based on seeing an advertisement.

I feel slightly reassured because:

  1. There are two surveys for two different media, and they both provide estimates of impact that agree with each other.
  2. These estimates also match anecdotes from leafleters about approximately how many people come back and say they went vegetarian because of a pamphlet.
  3. Even if we were to take the simple calculator and drop the "2% chance of getting four years of vegetarianism" assumption down to, say, a pessimistic "0.1% chance of getting one year" conversion rate, the estimate is still not too bad -- $0.91 to avert a year of suffering.
  4. More studies are on the way.  Nick Cooney is going to run more studies of leaflets, and Xio Kikauka and Joey Savoie have publicly published some survey methodology [Google Docs].

That said, the possibility for desirability bias in the survey is a large concern as long as the surveys continue to be from overt animal welfare groups and continue to clearly state that they're looking for reductions in meat consumption.

Also, so long as surveys are only given to people who remember the leaflet or advertisement, there will be a strong possibility of response bias, since those who remember the ad are more likely to be the ones who changed their behavior.  We can attempt to compensate for these things, but we can only do so much.

Furthermore, and more worrying, there's a concern that the surveys are just measuring normal drift in vegetarianism, without any changes being attributable to the ads themselves.  For example, imagine that every year, 2% of people become vegetarians and 2% quit.  Surveying these people at random and not capturing those who quit will end up finding a 2% conversion rate.

How can we address these?  I think all three problems can be solved with a decent control group, whether it be a group of people that receives a leaflet not about vegetarianism, or no leaflet at all.  Luckily, Kikauka and Savoie's survey intends to do just that.

Jeff Kaufman has a good proposal for a survey design I'd like to see implemented in this area.

 

Market Saturation and Diminishing Marginal Returns?

Another concern is that there are diminishing marginal returns to these ads.  As the critique goes, there are only so many people that will be easily swayed by the advertisement, and once all of them are quickly reached by Facebook ads and pamphlets, things will dry up.

Unlike the others, I don't think this criticism works well.  After all, even if it were true, it would still be worthwhile to take the market as far as it will go, and we can keep monitoring for saturation to find the point where it's no longer cost-effective.

However, I don't think the market is anywhere close to tapped out.  According to Nick Cooney [PDF], there are still many opportunities in foreign markets and outside the young, college-kid demographic.

 

The Conjunction Fallacy?

The conjunction fallacy reminds us that, no matter what, the probability of event A happening can never be smaller than the probability of event A happening together with event B.  For example, the probability that Linda is a bank teller will always be larger than (or equal to) the probability that Linda is a bank teller and a feminist.

What does this mean for vegetarian outreach?  Well, for the simple calculator, we're estimating five factors.  In the complex calculator, we're estimating 50 factors.  Even if each factor is 99% likely to be correct, the chance that all five are right is 95%, and the chance that all 50 are right is only 60%.  If each factor is only 90% likely to be correct, the complex calculator will be right with a probability of 0.5%!
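These compounding probabilities are easy to verify:

```python
# Probability that all n independently-estimated factors are correct,
# given each is correct with probability p.
for p, n in [(0.99, 5), (0.99, 50), (0.90, 50)]:
    print(f"p={p}, n={n}: {p ** n:.4f}")
# 0.99^5  ≈ 0.9510 (95%)
# 0.99^50 ≈ 0.6050 (60%)
# 0.90^50 ≈ 0.0052 (0.5%)
```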

This is a cause for concern, but I don't think there's any way around it; it's just an inherent problem with estimation.  Hopefully it is mitigated by (1) using lower and upper bounds and (2) underestimates and overestimates canceling each other out.

 

Conversion and The 100 Yard Line

Something we should take into account that helps the case for this outreach rather than hurting it is that conversions aren't binary -- an ad can push someone to be more likely to reduce their meat intake without fully converting them.  As Brian Tomasik puts it:

Yes, some of the people we convince were already on the border, but there might be lots of other people who get pushed further along and don’t get all the way to vegism by our influence. If we picture the path to vegism as a 100-yard line, then maybe we push everyone along by 20 yards. 1/5 of people cross the line, and this is what we see, but the other 4/5 get pushed closer too. (Obviously an overly simplistic model, but it illustrates the idea.)

This would be either very difficult or outright impossible to capture in a survey, but is something to take into account.

 

Three Places I Might Donate Before Donating to Vegan Outreach

When all is said and done, I like the case for funding this outreach.  However, I think there are three other possibilities along these lines that I find more promising:

Funding the research of vegan outreach: There need to be more and higher-quality studies before one can feel confident in the cost-effectiveness of this outreach.  However, initial results are very promising, and the value of information from more studies is therefore very high.  Studies can also find ways to advertise more effectively, increasing the impact of each dollar spent.  Right now, it looks like all ongoing studies are fully funded, but if there were opportunities to fund more, I would jump on them.

Funding Effective Animal Activism: EAA is an organization pushing for more cost-effectiveness in the domain of nonhuman animal welfare and is working to further evaluate what opportunities are the best, Givewell-style.  Giving them more money can potentially attract a lot more attention to this outreach, and get it more scrutiny, research, and money down the line.

Funding Centre for Effective Altruism: Overall, it might just be better to get more people involved in the idea of giving effectively, and then getting them interested in vegan outreach, among other things.

 

Conclusion

Vegan outreach is a promising, though not fully studied, method of outreach that deserves both excitement and skepticism.  Should one put money into it?  Overall, I'd take a guarded approach of putting in just enough money to help the organizations learn, develop better cost-effective measurements and transparency, and become more effective.  It shouldn't be too long before this area will become studied well enough to have good confidence in how things are doing.

More studies should be developed that explore advertising vegetarianism in a wide variety of media in a wide variety of ways, with decent control groups.

I look forward to seeing how this develops.  Don't forget to play around with my calculator.

-

 

Footnotes

[1]: Cost effectiveness in years of suffering prevented per dollar = (Pamphlets / dollar) * (Conversions / pamphlet) * (Veg years / conversion) * (Animals saved / veg year) * (Years lived / animal).

Plugging in 80K's values... Cost effectiveness = (Pamphlets / dollar) * 0.01 to 0.03 * 25 * 100 * (Years lived / animal)

Filling in the gaps with my best guesses... Cost effectiveness = 5 * 0.01 to 0.03 * 25 * 100 * 0.90 = 112.5 to 337.5 years of suffering averted per dollar
I personally think 25 veg-years per conversion on average is possible but too high; I personally err from 4 to 7.
[2]: I feel like there's an error in this calculation or that Kaufman might disagree with my assumptions of number of animals or days per animal, because I've been told before that these estimates with this method are supposed to be about an order of magnitude higher than other estimates.  However, I emailed Kaufman and he seemed to not find any fault with the calculation, though he does think the methodology is bad and the calculation should not be taken at face value.
[3]: I calculated the number of vegetarians by eyeballing about how many people said they no longer eat fish, which I'd guess only a vegetarian would be willing to give up.
[4]: 32 vegetarians / 104 people = 30.7%.  That population is 8.5% (7% for likes + 1.5% for the starter kit) of the overall population, leading to 2.61% (30.7% * 8.5%).
[5]: Formula is [(Number Meat Chickens)(Days Alive) + (Number Egg Chickens)(Days Alive) + (Number Beef Cows)(Days Alive) + (Number Milk Cows)(Days Alive) + (Number Fish)(Days Alive)] / (Total Number of Animals).  Plugging things in: [(28)(42) + (2)(365) + (0.125)(365) + (0.033)(1460) + (225)(365)] / 255.16 = 329.6 days

[6]:
Cost effectiveness in days of suffering prevented per dollar = (People Reached / Dollar + (People Reached / Dollar * Additional People Reached / Direct Reach * Response Bias * Desirability Bias)) * Years Spent Reducing * [sum of the following term over beef, dairy, pig, broiler chicken, egg, turkey, farmed fish, and sea fish: ((Percent Increasing * Increase Value) + (Percent Staying Same * Staying Same Value) + (Percent Decreasing Slightly * Decrease Slightly Value) + (Percent Decreasing Significantly * Decrease Significantly Value) + (Percent Eliminating * Elimination Value) + (Percent Never Ate * Never Ate Value)) * Normal Consumption * Elasticity * (Average Lifespan + Days of Suffering from Slaughter)] * Response Bias * Desirability Bias.  For sea fish, the final factor is Days of Suffering from Slaughter only, with no lifespan term.
[7]: Feel free to check the formula for accuracy and also check to make sure the calculator implements the formula correctly.  I worry that the added accuracy from the complex calculator is outweighed by the risk that the formula is wrong.

-

Edited 18 June to correct two typos and update footnote #2.

Also cross-posted on my blog.

The flawed Turing test: language, understanding, and partial p-zombies

11 Stuart_Armstrong 17 May 2013 02:02PM

There is a problem with the Turing test, practically and philosophically, and I would be willing to bet that the first entity to pass the test will not be conscious, or intelligent, or have whatever spark or quality the test is supposed to measure. And I hold this position while fully embracing materialism, and rejecting p-zombies or epiphenomenalism.

The problem is Campbell's law (or Goodhart's law):

The more any quantitative social indicator is used for social decision-making, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor.

This applies to more than social indicators. To illustrate, imagine that you were a school inspector, tasked with assessing the all-round education of a group of 14-year old students. You engage them on the French revolution and they respond with pertinent contrasts between the Montagnards and Girondins. Your quizzes about the properties of prime numbers are answered with impressive speed, and, when asked, they can all play quite passable pieces from "Die Zauberflöte".

You feel tempted to give them the seal of approval... but then you learn that the principal had been expecting your questions (you don't vary them much), and that, in fact, the whole school has spent the last three years doing nothing but studying 18th century France, number theory and Mozart operas - day after day after day. Now you're less impressed. You can still conclude that the students have some technical ability, but you can't assess their all-round level of education.

The Turing test functions in the same way. Imagine no-one had heard of the test, and someone created a putative AI, designing it to, say, track rats efficiently across the city. You sit this anti-rat-AI down and give it a Turing test - and, to your astonishment, it passes. You could now conclude that it was (very likely) a genuinely conscious or intelligent entity.


To Inspire People to Give, Be Public About Your Giving

11 peter_hurford 17 May 2013 06:58AM

Many people think it would be nicer if people gave more money to non-profits, especially effective ones.  However, for most people, it doesn't even occur to them that giving a large share of one's salary to charity is something people actually can do, or that people do on a regular basis.

Being public with one's pledge to donate not only spreads information about how easy it is to fight global poverty with a serious commitment, but also shows that such commitments are the kind of thing people actually take.  By being public with these pledges, we can inspire people to give where they otherwise wouldn't.

But how did people get stuck in a rut?  Why doesn't giving money come naturally?  And how would public declarations help dig people out of this rut?

 

The Bystander Effect and The Assumption of Self-Interest

First, to understand how to get people to give, we have to understand why they currently do not.  There are a number of reasons, but one of the most prevalent is the bystander effect.  While this effect is best known from groups failing to respond to disasters right in front of them, it's magnified when the disaster is global poverty a continent or two away.  We think that because other people around us are not giving, it must not be our responsibility either, and we sure wouldn't want to be suckered into helping when no one else is doing their fair share.

Ever since Thomas Hobbes's Leviathan, it has been common to see human nature in terms of selfishness, a view that persists to this day[1,2] as a strong and occasionally self-reinforcing belief[3,4].  People think of monetary incentives as the most effective way to encourage blood donations[5], even though this turns out not to be the case[6].  People greatly over-estimate how much others will support a policy that favors them over other people[5].  As Alexis de Tocqueville noted in 1835, "Americans enjoy explaining almost every act of their lives on the principle of self-interest"[7].

This leads us to a natural assumption that donating to charity is irrational... or, at least, that other people aren't doing it, so neither should I.  However, this norm of self-interest is largely a myth, and people do better than most of us expect.

 

Challenging the Self-Interest Norm

This means the self-interest norm has to be challenged, and if it is, we can expect people to revise their selfishness-based theory of human nature and turn to more selfless acts like charitable giving.  If we want people to donate more than they already do, we need to open them up to the idea that charitable giving can be not only virtuous but expected, and can be done not only at the typical rate of 1%, but at rates of 10% or much higher[8].  We should also challenge the norm that charity should be silent and unspoken, and instead mention it openly and proudly[9].

People tend to conform, both intentionally and unintentionally, adopting the actions of others[4], and end up unwilling to act contrary to the group unless others go along with them.  If peer pressure can make high schoolers turn to drug use, drinking, smoking, or even dropping out of high school[10], surely it can stop people from giving.

 

For example, take the famous Asch conformity experiments.  Here, people in a group were asked to look at a line, compare it to three other lines on another card, and state which of the three matched the first line's length.  The task is enormously simple, but is complicated by being in a group of several other people, all in on the experiment, all giving the identical wrong answer.

Asch found that many people would conform to this wrong answer, even against their better judgement.  However, adding another subject who gave the correct answer made the tendency to conform drop dramatically, even though the correct answer was still in the minority.  Take away the partner, even halfway through the experiment with the same subject, and conformity shoots back up.

 

However, allowing people an escape from this norm can lead them to increase their charitable donations.  In one field experiment, a radio station told potential donors that a previous donor had donated $300, and found that this increased donations by $13 per person over the control condition; these donors were also more likely to renew their memberships and donate more the next year than those in the control condition[11].

In a separate field experiment, donors gave more to a radio station when prompted with an amount that was higher than their previous contribution[12].  Lastly, a third field experiment found that student donors were more likely to give to funds for students when told that 64% of other students had donated than when they were told that 46% of other students had donated[13].

 

Overall, people are moved by seeing what others do, and can be tilted away from self-destructive norms by seeing other people go against the flow.  An organization like Giving What We Can making a public stand for giving can accomplish just that.  Make your giving public, and it should multiply as you inspire others.

 

Motivations and Fights for Status

Reflecting on the need to push the norm up to accurately reflect the giving nature of society, it seems like the pushback to keep giving private is harmful.  And I think it is.  But why does it come about in the first place?  Robert Wiblin speculates that being public about giving calls your motivations into question: if you're motivated only by compassion for those in need, why do you need to boast?

Well, of course, there's an interest in raising the norm.  But let's assume giving really were just a giant fight for status... would that be so bad?  All else being equal, I prefer pure intention to giving just to prove something to others, but competing for status via donation oneupmanship is considerably more useful than competing for status via bigger houses, bigger cars, and bigger flatscreen TVs.

Or rather, people still end up competing over their charitable contributions, but in the form of significantly less effective (though still arguably worthwhile) charitable competition, like volunteering, building schools, or adopting African children.  If, instead, we normalized writing checks, at least more people could be helped while the status fight goes on.

 

Conclusion

Many people want to leave the world a better place than they found it, perhaps even going as far as wanting to do the best they can.  To these people, I hope the idea of donation, especially to effective causes and in potentially large amounts, ends up appealing.  But if this idea is seen as "boastful", it won't catch on, and won't get the publicity (I think) it deserves.

Moreover, people won't be able to network and share information about more cost-effective charities or the latest trends in development economics, because everyone will be keeping it to themselves, which ends up collectively self-defeating.

We seem forced by society to pretend to be self-interested, because we're asked not to talk about our acts of kindness.  But this only reinforces the deadly cycle.  The only way to push ourselves out of this cycle is to demonstrate that some people do donate, and so push up this norm.  And groups like GivingWhatWeCan, 80000 Hours, and BolderGiving are working on doing just that.

Personally, I'd have to agree that this works -- I'm inspired by these stories, and I don't think I would ever be donating 10%+ without a group that makes it seem like a completely normal and awesome thing to do.

So is talking about donations too boastful?  I think, for the sake of those the donations help, we can afford a little boasting in this one area.

 

References and Notes

(Note: Most of these links open to PDFs.)

[1]: Barry Schwartz. 1986. The Battle for Human Nature: Science, Morality and Modern Life. Canada: Penguin Books.

[2]: Alfie Kohn. 1990. The Brighter Side of Human Nature. New York: Basic Books.

[3]: Dale T. Miller. 1999. "The Norm of Self-Interest". American Psychologist 54 (12): 1053-1060.

[4]: John M. Darley and Russell H. Fazio. 1980. "Expectancy Confirmation Processes Arising in the Social Interaction Sequence". American Psychologist 35 (10): 867-881.

[5]: Dale T. Miller and Rebecca K. Ratner. 1998. "The Disparity Between the Actual and Assumed Power of Self-Interest". Journal of Personality and Social Psychology 74 (1): 53-62.

[6]: Nicola Lacetera, Mario Macis, and Robert Slonim. 2011. "Rewarding Altruism? A Natural Field Experiment". The National Bureau of Economic Research Working Paper #17636.

[7]: Alexis de Tocqueville in J.P. Mayer ed., G. Lawrence, trans. 1969. Democracy in America. Garden City, N.Y.: Anchor, p. 546.

[8]: The Giving What We Can pledge requires 10% and this is already shockingly high for most, but people on 80000 Hours's member list or among Bolder Giving's stories donate up to 50% of their income or more!

[9]: Of course, I don't think we should mention it *all* the time -- we should recognize when is the time and place, and not be unreasonable.  At the same time, we shouldn't be completely silent.  Places like Facebook, personal blogs, and when the topic comes up in conversation all seem like fair game.

[10]: Alejandro Gaviria and Steven Raphael. 2001. "School-Based Peer Effects and Juvenile Behavior". The Review of Economics and Statistics 83 (2): 257-268.

[11]: Other conditions were $180, $75, or no prompt about previous donors at all.  Jen Shang and Rachel Croson. Forthcoming. “Field Experiments in Charitable Contribution: The Impact of Social Influence on the Voluntary Provision of Public Goods”. The Economic Journal.

[12]: Rachel Croson and Jen Shang. 2008. "The Impact of Downward Social Information on Contribution Decisions". Experimental Economics 11: 221-233.

[13]: Bruno S. Frey and Stephan Meier. 2004. "Social Comparisons and Pro-social Behavior: Testing 'Conditional Cooperation' in a Field Experiment". The American Economic Review 94 (5): 1717-1722.

-

Also cross-posted on my blog.

How to Build a Community

13 peter_hurford 15 May 2013 05:43AM

I've noticed that quite a few people are interested in fostering communities -- both creating communities and improving them to make them work together.  But how do we go about actually doing this?  What's there to community that we can foster and build upon?  What makes a community thrive, and how do we take advantage of this to make and/or improve communities?

To answer these questions, I turned to two books:

The first is The Penguin and The Leviathan: How Cooperation Triumphs Over Self-Interest by Yochai Benkler.  Benkler, in writing about cooperative systems (Penguins, named after the Linux penguin) and hierarchical systems (Leviathans, named after Thomas Hobbes's Leviathan), studies the psychology, economics, and political science of cooperation and helps explain what makes communities stick.

The second is Liars and Outliers: Enabling the Trust that Society Needs to Thrive by Bruce Schneier.  Schneier studies trust and cooperation from a dizzying variety of sciences (psychology, biology, economics, anthropology, computer science, and political science).  Schneier's ultimate aim is figuring out what keeps society from falling apart, and his answers can be applied to building communities.

Let's see what they got.

 

Communities Need Cooperation

Schneier and Benkler both paint a view of human nature that differs from what is commonly thought but has emerged from the sciences: people are both self-interested and other-interested, different people strike different balances between the two, and within each person these two goals can often conflict.  Additionally, the "other-interested" side can involve multiple, occasionally conflicting allegiances, such as to one's family, one's neighborhood, one's country, one's venture philanthropy club, etc.

What's common to all communities is that they involve people who have set aside some of their immediate self-interest to work together.  For instance, when we work together in a group, I definitely don't beat you over the head and steal your lunch money, and I don't usually attempt to free ride and get you to do the group work for me; instead, we mutually work to solve communal problems and share in the benefits of community.

Public Goods and Free Riders

One way psychology has sought to simplify and simulate a community is through what's called "The Public Goods Game".  In this game, a group of about ten participants sit down, each starting with $10.  The game is then played for several rounds; in each round, every participant secretly puts some amount of their money into a collective pot.  The experimenters then double the amount of money in the pot and redistribute the result evenly to all the players.  As an added incentive, the experimenters tell all participants that they get to walk away with their winnings after the game is over.

If everyone cooperated fully, each player would see their money double each round.  But the wrinkle is that a player who contributes nothing to the pot stands to gain even more money from everyone else's contributions.  This is called the free rider problem: there is a tension between contributing to the pot for the good of the group as a whole and holding back so that you personally benefit even more.
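The arithmetic behind the free rider's advantage can be sketched in a few lines of code.  This is my own illustrative model of the game as described above (ten players, a $10 endowment, the pot doubled and split evenly), not code from either book:

```python
# Minimal sketch of one round of the Public Goods Game described above.
# Assumptions (mine, for illustration): 10 players, a $10 endowment each,
# and the pot is doubled and split evenly among all players.

def play_round(contributions):
    """Return each player's money after one round."""
    endowment = 10
    pot = sum(contributions)
    share = (pot * 2) / len(contributions)
    return [endowment - c + share for c in contributions]

# If all 10 players contribute everything, everyone's money doubles:
all_in = play_round([10] * 10)  # every player ends with $20

# But a lone free rider who contributes nothing does even better:
one_defector = play_round([0] + [10] * 9)
# defector: 10 - 0 + 18 = $28; each cooperator: 10 - 10 + 18 = $18
```

The numbers show the tension directly: the defector's $28 beats the full-cooperation payoff of $20, which beats the cooperators-among-a-defector payoff of $18.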

The Free Rider Problem and The Collective Action Problem

But the tension can result in further disaster: imagine everyone decides to be a free rider and defect from the group -- now no money goes into the pot at all, and everyone ends up with just the $10 they started with.  This gets worse in some real-life scenarios -- for instance, that of fishermen on a lake.

The fishermen can either fish normally or overfish.  If all the fishermen overfish, they deplete the lake and all of them lose their jobs.  However, if just a few fishermen overfish, those few get the benefit of extra fish to sell, and the lake can handle the slight increase in load.  So the temptation is to be among the few who win by personally overfishing, without everyone collectively depleting the entire lake.  Such problems are called collective action problems -- people do well individually by defecting but do worse collectively if everyone defects.  A collective action problem ending in disaster is called the tragedy of the commons.
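The fishing scenario has the same payoff structure, and can be sketched just as simply.  The specific numbers here (a lake that tolerates up to three overfishers, payoffs of $10 vs. $15) are my own toy assumptions for illustration:

```python
# Toy payoff sketch of the fishing scenario above.  The numbers are my own
# illustrative assumptions, not from the text.

def fisherman_payoff(overfishers, capacity=3):
    """Return (normal_fisher_payoff, overfisher_payoff) given how many overfish.

    If more than `capacity` fishermen overfish, the lake is depleted
    and everyone earns nothing.
    """
    if overfishers > capacity:
        return 0, 0  # tragedy of the commons: the lake collapses
    return 10, 15    # overfishing pays more, so long as the lake holds

# A few defectors profit ($15 vs. $10)...
few = fisherman_payoff(2)
# ...but if everyone defects, all payoffs collapse to zero.
everyone = fisherman_payoff(10)
```

The individual incentive (defect for $15) and the collective outcome (everyone defects, everyone gets $0) point in opposite directions -- that gap is the collective action problem.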

The Community Solution

So what's the solution to these problems?  Benkler proposes two models for dealing with them -- employing the Leviathan and placing lots of regulations on overfishing and enforcing them with strict punishments, or employing the Penguin and creating a community that deals with these problems collectively and in a self-policing way.

It turns out that different problems are best dealt with by differing combinations of Leviathan and Penguin models, but most problems call for a heavy dose of Penguin, both because it can be difficult to figure out who is going against the community and because communities allow their participants more freedom.  At the same time, if there are too many would-be defectors, a community can never get off the ground.

Communities need cooperation to work.  So how can we get this cooperation to fly?

 

The Four Pressures of Cooperation

Bruce Schneier notes that normally we don't think through these free rider problems and try to scheme our way through them -- we just cooperate, instinctively.  We don't assume people will rip us off, and we usually don't rip other people off -- that's just how we are.  But why?  Schneier suggests that cooperation can be fostered and maintained through four different pressures, though differing kinds and amounts of pressure apply to different situations, and getting the balance of pressures right is a key part of his book:

1.) Moral Pressures: Many, but not all, of us have various moral feelings that lead us to want to cooperate.  It could be as simple as feeling incredibly guilty when we defect against our friends, or as complex as subscribing to an abstract principle of justice.  For most of us, it's a general feeling that cooperating is the "right thing to do" and defecting for our own personal self-interest is "wrong", and we just don't want to do it.  Schneier and Benkler both find that moral pressures compel cooperation a surprising amount of the time.

2.) Reputational Pressures: Another consequence of living in a community for a long time is that you have a reputation to maintain.  Defect against the community and you may win a few times, but then people start to notice and start working to stop you.  They might refuse you friendship or other things you want, or even kick you out of the community altogether!  Benkler finds that many communities can thrive on reputation alone, like eBay, Amazon, or Reddit.

3.) Institutional Pressures: Morals and reputation aren't the end of it though; many communities make specific, codified norms and enforce them with specific, codified punishments.  These pressures are laws, and the fear of breaking the law, being caught, and getting the punishment can often further spur cooperation.  Best yet, the community can often get together and agree to these norms, realizing it is in their individual benefit to force themselves and the rest of the community to play along, as to avoid tragedies of the commons.

4.) Security Pressures: Lastly, there are always going to be a few people who put morals, reputation, and laws aside and try to defect anyway.  For these, we hope to stop them in their tracks or make their jobs more difficult, by using complex security systems.  It can be as simple as a security camera or anti-theft alarm, or as complex as Fort Knox.  Security works on two fronts: it first raises the costs of defection -- by making it physically harder to defect, one is less tempted to try -- and it then helps catch and apprehend those who defect anyway.

 

Your Reason for Joining; Your Reason for Staying

Remember, these pressures don't all work for the same problems -- it may be proper to use security and institutional pressures to stop someone from overfishing, but not to stop someone from cutting the cake so they get the bigger slice.  Moral and reputational pressures are more encompassing, but they are also more easily defeated -- people with less of a moral compass can often wander from community to community, wreaking small amounts of havoc and never getting caught or punished.

Benkler suggests another way to get people to buy into a community and not defect against it -- make it clear that being part of the community is something they really want.  Whether people join a community voluntarily or are born into one (family, country, etc.), a community whose members genuinely want to belong will be more likely to thrive.

Four Ways to Bond

But why might one want to join or stay in a community?  For many, the answer is the intangibles -- they feel a sense of belonging, friendship, and group cohesion that creates an empathetic attachment and makes people want to play by the rules of the group.  For others, the answer is the tangibles -- the group may have a stated mission statement that is important to the person, or belonging in the group might confer a specific benefit.  People might even belong for a mix of tangibles and intangibles, plus a natural tendency to want to join groups.

But how do we foster these bonds?  Benkler has his own set of four things, suggesting that group identity can be fostered through a combination of four means:

1.) Fairness: The community needs to be fair -- people need to all contribute more or less equally, or at least have genuine intentions to put in equal effort, and the benefits of the group need to be spread among all participants more or less evenly, or in a fair proportion to how much the participant puts in.

2.) Autonomy: The community needs to not demand too much, and make sure to compensate quickly and generously for special sacrifices.  There are inherent costs to joining and staying with a group, and costs for cooperating with the group -- one doesn't just give up the self-interested benefits of defection, but rather must pay additional costs to maintain their group status.  Being aware of and addressing these costs are important.  In short, the group must respect their members as individuals.

3.) Democracy: The community also needs to accept (with fairness and autonomy) the input of all the members.  Group norms should be developed by a vote, with weight given on building consensus as much as possible, and with understanding the reasons why people might not like the consensus.  Not only does having input make it more likely people's preferences will be taken into account, lowering the costs of cooperation, but having input makes people feel more group cohesion and belonging.

4.) Communication: During times when formal votes aren't taken, the community also needs to be consistently (but not constantly) talking about how the group is doing, and checking in with members who might be feeling left out.  Just like democracy, group cohesion is built through communication, and communication lowers the costs of cooperation.  It's best when resolving disputes is not dictatorial, like in a court of law, but rather cooperative, like in an arbitration.

 

Looking Back to the Public Goods Game

To demonstrate these four points, Benkler draws on many real-world examples, such as the policies of various companies and interactions on the internet.  He also returns to our simple community-in-the-lab, the Public Goods Game, for additional confirmation, and it's worth seeing how these things play out.

In the original Public Goods Game, contributions to the pot were made anonymously and no one was allowed to talk or communicate.  Typically, a fair number of people would cooperate in the beginning (generally, people contribute about 70% of their share), but contributions start to drop as people see that others aren't contributing.  They start to feel like suckers, and fairness concerns kick in.

A Different Game

However, variants of the Public Goods Game offer ways out.  When participants were allowed to talk to each other, contributions rose (communication).  Likewise, when participants were allowed to use some of their money to punish those who didn't contribute (say, pay $3 to prevent someone from getting their share this round if they didn't cooperate last round), people would do so.

Even the simple act of making the contributions public increased cooperation, drawing on reputation.  Sometimes small fines were imposed on those who didn't cooperate (institutional pressures) which brought up cooperation, and these fines worked especially well when the group got to vote on how high they would be (democracy).
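The punishment variant can be sketched as an extension of the basic game.  Again, the specific mechanics (a $3 cost to the punisher, the target losing their share of the doubled pot) are my own illustrative reading of the setup described above, not the exact experimental protocol:

```python
# Minimal sketch of the punishment variant described above.
# Assumptions (mine, for illustration): punishing costs the punisher $3
# and strips the target of their share of the doubled pot this round.

def play_round_with_punishment(contributions, punishments):
    """punishments[i] is the set of players that player i pays $3 to punish."""
    endowment = 10
    pot = sum(contributions)
    share = (pot * 2) / len(contributions)
    punished = {target for targets in punishments for target in targets}
    results = []
    for i, c in enumerate(contributions):
        money = endowment - c - 3 * len(punishments[i])
        if i not in punished:
            money += share
        results.append(money)
    return results

# One defector among ten players, punished by player 1:
result = play_round_with_punishment(
    [0] + [10] * 9,
    [set(), {0}] + [set()] * 8,
)
# The defector loses their $18 share, ending with $10 instead of $28,
# while the punisher pays $3, ending with $15 instead of $18.
```

Notice that punishing is itself a small act of cooperation: the punisher pays $3 out of pocket to enforce the group norm, which is part of why the option's mere existence sustains contributions.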

Lastly, framing the game helped -- those who were told they were taking part in a "Community Game" were far more likely to contribute to the pot, and keep contributing, than those told they were taking part in a "Wall Street Game".  Reminded that they were in a community, people thought more about their community norms, felt more group cohesion, and were more likely to trust others.

 

Conclusions

Ultimately, creating communities is all about fostering cooperation, and you foster cooperation by ensuring that there is mutual trust and some sort of way to prevent defectors from taking advantage of the system.  People often naturally don't want to defect, but will do so if they think others will take advantage of them first.

Social Pressures

But how do we foster this trust?  The first step is to make use of our social pressures when and to the amount that's appropriate -- relying on empathetic and moral norms, reputation, institutionalized laws, and security systems -- and being sure to get the balance right.  For small communities, this probably just needs to be a set of agreed norms, and ensuring that the norms are properly and responsibly enforced.

The Benefits of Joining

The second step is, while implementing the first, to keep in mind why people join or stay in the first place, and to provide a community where the benefits of joining -- both tangible and intangible -- are present and apparent.  We should acknowledge the costs of cooperating, and make sure the benefits are there to foster group loyalty and belonging.

An Effective Community

While implementing, it's important to keep in mind that communities should also be fair, respect the autonomy and individuality of their members, give members input through democracy, and foster lots of communication about how things are going.  We should also keep a keen eye on how things are framed, without going overboard on it or lying.

The End Reward

But when we build communities successfully, the rewards are pretty great -- not only do we avoid free riders and the tragedy of the commons, but we get to take advantage of groups that are more productive than their members acting alone, and we secure the feeling of belonging to a group we enjoy.

-

Also cross-posted on my blog.
