Note: I'm a former research analyst at GiveWell. This post describes the evolution of my thinking about robustness of cost-effectiveness estimates in philanthropy. All views expressed here are my own.
Up until 2012, I believed that detailed explicit cost-effectiveness estimates are very important in the context of philanthropy. My position was reflected in a comment that I made in 2011:
The problem with using unquantified heuristics and intuitions is that the “true” expected values of philanthropic efforts plausibly differ by many orders of magnitude, and unquantified heuristics and intuitions are frequently insensitive to this. The last order of magnitude is the only one that matters; all others are negligible by comparison. So if at all possible, one should do one’s best to pin down the philanthropic efforts with the “true” expected value per dollar of the highest (positive) order of magnitude. It seems to me as though any feasible strategy for attacking this problem involves explicit computation.
During my time at GiveWell, my position on this matter shifted. I still believe that there are instances in which rough cost-effectiveness estimates can be useful for determining good philanthropic foci. But I’ve shifted toward the position that effective altruists should spend much more time on qualitative analysis than on quantitative analysis in determining how they can maximize their positive social impact.
In this post I’ll focus on one reason for my shift: explicit cost-effectiveness estimates are generally much less robust than I had previously thought.
Related: What Do We Mean By "Rationality?"
Epistemic rationality and instrumental rationality are both useful. However, some things may benefit one form of rationality yet detract from another. These tradeoffs are often not obvious, but can have serious consequences.
For instance, take the example of learning debate skills. While involved in debate in high school, I learned how to argue a position quite convincingly, muster strong supporting evidence, prepare rebuttals for counterarguments, prepare deflections for counterarguments that are difficult to rebut, and so on.
I also learned how to do so regardless of what side of a topic I was assigned to.
My debate experience has made me a more convincing and more charismatic person, improved my public speaking skills, and bolstered my ability to win arguments. Instrumentally speaking, this can be a very useful skillset. Epistemically speaking, this sort of preparation is very dangerous, and I later had to unlearn many of these thought patterns in order to become better at finding the truth.
For example, when writing research papers, the motivated cognition used to hunt for evidence that bolsters a debate position is often counterproductive. Similarly, when discussing the best move for my business, the ability to argue convincingly for a position regardless of whether it is right is outright dangerous, and lessons learned from debate may actually decrease the odds of making the correct decision -- if I'm wrong but convincing and my colleagues are right but unconvincing, we could very well end up going down the wrong path!
Epistemic and instrumental goals may also conflict in other ways. For instance, Kelly (2003) points out that, from an epistemic rationality perspective, learning movie spoilers is desirable, since they will improve your model of the world. Nevertheless, many people consider spoilers to be instrumentally negative, since they prefer the tension of not knowing what will happen while they watch a movie.
Bostrom (2011) describes many more situations where having a more accurate model of the world can be hazardous to various instrumental objectives. For instance, knowing where the best parties are held on campus can be a very useful piece of knowledge to have in many contexts, but can become a distracting temptation when you're writing your thesis. Knowing that one of your best friends has just died can be very relevant to your model of the world, but can also cause you to become dangerously depressed. Knowing that Stalin's wife didn't die from appendicitis can be useful for understanding certain motivations, but can be extraordinarily dangerous to know if the secret police come calling.
Thus, epistemic and instrumental rationality can in some cases come into conflict. Some instrumental skillsets might be better off neglected for reasons of epistemic hygiene; similarly, some epistemic ventures might yield information that it would be instrumentally better not to know. When developing rationality practices and honing one's skills, we should take care to acknowledge these tradeoffs and plan accordingly.
Kelly, T. (2003). Epistemic Rationality as Instrumental Rationality: A Critique. Philosophy and Phenomenological Research, 66(3), pp. 612-640.
Bostrom, N. (2011). Information Hazards: A Typology of Harms from Knowledge. Review of Contemporary Philosophy, 10, pp. 44-79.
New meetups (or meetups with a hiatus of more than a year) are happening in:
- First Bristol meetup: 25 May 2013 03:00PM
- Tel Aviv, Israel Meetup - Goal Clarification with special guest Cat from CFAR: 23 May 2013 07:00PM
Other irregularly scheduled Less Wrong meetups are taking place in:
- Atlanta Lesswrong's May Meetup: The Rationality of Social Relationships, Friendship, Love, and Family.: 17 May 2013 07:00PM
- Bielefeld Meetup May 22nd: 22 May 2013 07:00PM
- Berlin Social Meetup: 15 June 2013 05:00PM
- Bratislava lesswrong meetup III: 20 May 2013 06:30PM
- Brussels meetup: 18 May 2013 01:00PM
- Durham/RTLW HPMoR discussion, ch. 65-68: 18 May 2013 12:30PM
- London Meetup: 26th May: 26 May 2013 02:00PM
- [Moscow] Belief cleaning: 26 May 2013 04:00PM
- Paris Meetup: Sunday, May 26.: 26 May 2013 02:00PM
The remaining meetups take place in cities with regular scheduling, but involve a change in time or location, special meeting content, or simply a helpful reminder about the meetup:
- Austin, TX: 18 May 2013 01:30PM
- Seattle-Vancouver Kilomeetup: 18 May 2013 11:54AM
- Vienna meetup #3: 18 May 2013 04:00PM
Locations with regularly scheduled meetups: Austin, Berkeley, Cambridge, MA, Cambridge UK, Madison WI, Melbourne, Mountain View, New York, Ohio, Portland, Salt Lake City, Seattle, Toronto, Vienna, Waterloo, and West Los Angeles. There's also a 24/7 online study hall for coworking LWers.
There is a problem with the Turing test, practically and philosophically, and I would be willing to bet that the first entity to pass the test will not be conscious, or intelligent, or have whatever spark or quality the test is supposed to measure. And I hold this position while fully embracing materialism, and rejecting p-zombies or epiphenomenalism.
"The more any quantitative social indicator is used for social decision-making, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor." (Donald T. Campbell)
This applies to more than social indicators. To illustrate, imagine that you were a school inspector, tasked with assessing the all-round education of a group of 14-year-old students. You engage them on the French Revolution and they respond with pertinent contrasts between the Montagnards and Girondins. Your quizzes about the properties of prime numbers are answered with impressive speed, and, when asked, they can all play quite passable pieces from "Die Zauberflöte".
You feel tempted to give them the seal of approval... but then you learn that the principal had been expecting your questions (you don't vary them much), and that, in fact, the whole school has spent the last three years doing nothing but studying 18th century France, number theory and Mozart operas - day after day after day. Now you're less impressed. You can still conclude that the students have some technical ability, but you can't assess their all-round level of education.
The Turing test functions in the same way. Imagine no-one had heard of the test, and someone created a putative AI, designing it to, say, track rats efficiently across the city. You sit this anti-rat-AI down and give it a Turing test - and, to your astonishment, it passes. You could now conclude that it was (very likely) a genuinely conscious or intelligent entity.
Many people think it would be nicer if people were to give more money to non-profits, especially effective ones. However, for most people, it doesn't even occur to them that giving a large share of their salary to charity is something people actually can do, or something people are doing on a regular basis.
Being public with one's pledge to donate spreads not only information about how easy it is to fight global poverty with a serious commitment, but also the message that such commitments are the kind of thing people actually make. By being public with these pledges, we can inspire people to give where they otherwise wouldn't.
But how did people get stuck in a rut? Why doesn't giving money come naturally? And how would public declarations help dig people out of this rut?
The Bystander Effect and The Assumption of Self-Interest
First, to understand how to get people to give, we have to understand why they currently do not. There are a number of reasons, but one of the most prevalent is what's called the bystander effect. While this effect is best known from groups failing to respond to disasters right in front of their faces, it's magnified when the disaster is global poverty a continent or two away. We think that because other people around us are not giving, it must not be our responsibility either, and we sure wouldn't want to be suckered into helping when no one else is doing their fair share.
Ever since Thomas Hobbes's Leviathan, seeing human nature in terms of selfishness has been common, and this view persists to this day[1,2] as a strong and occasionally self-reinforcing belief[3,4]. People think of monetary incentives as the most effective way to encourage blood donations, even when this turns out not to be the case. People greatly overestimate the degree to which others will support a policy that favors them over other people. As Alexis de Tocqueville noted in 1835, "Americans enjoy explaining almost every act of their lives on the principle of self-interest".
This leads us to a natural assumption that donating to charity is irrational... or, at least, that other people aren't doing it, so neither should I. However, this norm of self-interest is largely a myth, and people behave more generously than most expect.
Challenging the Self-Interest Norm
This means the self-interest norm has to be challenged, and if it is, we can expect people to revise their selfishness-based theory of human nature and turn to more selfless acts like charitable giving. If we're interested in getting people to donate more than they already do, we need to open people up to the idea that charitable giving can be not only virtuous but expected, and can be done not only at the typical rate of 1%, but at rates of 10% or much higher. We should also challenge the norm that charity should be silent and unspoken, and instead mention it openly and proudly.
People tend to conform, both intentionally and unintentionally, adopting the actions of others, and end up unwilling to adopt contrary actions unless other people go along with them. If peer pressure can push high schoolers toward drugs, alcohol, and cigarettes, or even lead them to drop out of high school, surely it can stop people from giving.
For example, take the famous Asch Conformity Experiments. Here, people were placed in a group, asked to look at a line and compare its length to three other lines on another card, and then asked to state which line matched the length of the first line. The task is enormously simple, but it is complicated by being in a group of several other people, all of them in on the experiment, and all of them giving the identical wrong answer.
Asch found that many people would conform to this wrong answer, even against their better judgement. However, by adding another subject to this experiment who would give the correct answer, the tendency to conform would drop dramatically, even though the correct answer is still in the minority. Take away the partner, even halfway through the experiment with the same subject, and conformity shoots back up.
However, giving people an escape from this norm can lead them to increase their charitable donations. In one field experiment, a radio station told potential donors that a previous donor had donated $300, and this increased donations by $13 per person over the control condition; these donors were also more likely to renew their memberships and donate more the next year compared to those in the control condition.
In a separate field experiment, donors gave more to a radio station when prompted with an amount that was higher than their previous contribution. Lastly, a third field experiment found that student donors were more likely to give to funds for students when told that 64% of other students had donated than when told that 46% had.
Overall, people are moved by seeing what others do, and can be tilted away from self-destructive norms by seeing other people go against the flow. An organization like Giving What We Can making a public stand for giving can accomplish just that. Make your giving public, and it should multiply as you inspire others.
Motivations and Fights for Status
Reflecting on the need to push the norm up so that it accurately reflects how much society actually gives, the pushback toward keeping giving private seems harmful. And I think it is. But why does it come about in the first place? Robert Wiblin speculates that being public about giving calls your motivations into question. If you're only motivated by compassion for those in need, why do you need to boast?
Well, of course, there's an interest in raising the norm. But let's assume that giving really were just a giant fight for status... would that be so bad? All else being equal, I prefer pure intentions to giving merely to prove something to others, but competing for status via donation one-upmanship is considerably more useful than competing for status via bigger houses, bigger cars, and bigger flatscreen TVs.
Or rather, people already compete over their charitable contributions, but it takes the form of significantly less effective (though still arguably worthwhile) charitable competition, like volunteering, building schools, or adopting African children. If, instead, we normalized writing checks, at least more people could be helped while the status fight goes on.
Many people want to leave the world a better place than they found it, perhaps even going as far as wanting to do the best they can. To these people, I hope the idea of donation, especially to effective causes and in potentially large amounts, ends up appealing. But if this idea is seen as "boastful", it won't catch on, and won't get the publicity (I think) it deserves.
Moreover, people won't be able to network together and share information about more cost-effective charities or the latest trends in development economics, because everyone will be keeping it to themselves -- which is collectively self-defeating.
We seem forced by society to pretend to be self-interested, because we're asked not to talk about our acts of kindness. But this only reinforces the deadly cycle. The only way to push ourselves out of this cycle is to demonstrate that some people do donate, and to push up this norm. And groups like Giving What We Can, 80,000 Hours, and Bolder Giving are working on doing just that.
Personally, I'd have to agree that this works -- I'm inspired by these stories, and I don't think I would ever be donating 10%+ without a group that makes it seem like a completely normal and awesome thing to do.
So is talking about donations too boastful? I think, for the sake of those the donations help, we can afford a little boasting in this one area.
References and Notes
(Note: Most of these links open to PDFs.)
[1]: Barry Schwartz. 1986. The Battle for Human Nature: Science, Morality and Modern Life. Canada: Penguin Books.
[2]: Alfie Kohn. 1990. The Brighter Side of Human Nature. New York: Basic Books.
[3]: Dale T. Miller. 1999. "The Norm of Self-Interest". American Psychologist 54 (12): 1053-1060.
[4]: John M. Darley and Russell H. Fazio. 1980. "Expectancy Confirmation Processes Arising in the Social Interaction Sequence". American Psychologist 35 (10): 867-881.
[5]: Dale T. Miller and Rebecca K. Ratner. 1998. "The Disparity Between the Actual and Assumed Power of Self-Interest". Journal of Personality and Social Psychology 74 (1): 53-62.
[6]: Nicola Lacetera, Mario Macis, and Robert Slonim. 2011. "Rewarding Altruism? A Natural Field Experiment". The National Bureau of Economic Research Working Paper #17636.
[7]: Alexis de Tocqueville in J.P. Mayer ed., G. Lawrence, trans. 1969. Democracy in America. Garden City, N.Y.: Anchor, p. 546.
[8]: The Giving What We Can pledge requires 10%, and this is already shockingly high for most, but people on 80,000 Hours's member list or among Bolder Giving's stories donate up to 50% of their income or more!
[9]: Of course, I don't think we should mention it *all* the time -- we should recognize the time and place, and not be unreasonable. At the same time, we shouldn't be completely silent. Places like Facebook, personal blogs, and moments when the topic comes up in conversation all seem like fair game.
[10]: Alejandro Gaviria and Steven Raphael. 2001. "School-Based Peer Effects and Juvenile Behavior". The Review of Economics and Statistics 83 (2): 257-268.
[11]: Other conditions were $180, $75, or no prompt about previous donors at all. Jen Shang and Rachel Croson. Forthcoming. "Field Experiments in Charitable Contribution: The Impact of Social Influence on the Voluntary Provision of Public Goods". The Economic Journal.
[12]: Rachel Croson and Jen Shang. 2008. "The Impact of Downward Social Information on Contribution Decisions". Experimental Economics 11: 221-233.
[13]: Bruno S. Frey and Stephan Meier. 2004. "Social Comparisons and Pro-social Behavior: Testing 'Conditional Cooperation' in a Field Experiment". The American Economic Review 94 (5): 1717-1722.
-Also cross-posted on my blog.
A Munchkin is the sort of person who, faced with a role-playing game, reads through the rulebooks over and over until he finds a way to combine three innocuous-seeming magical items into a cycle of infinite wish spells. Or who, in real life, composes a surprisingly effective diet out of drinking a quarter-cup of extra-light olive oil at least one hour before and after tasting anything else. Or combines liquid nitrogen and antifreeze and life-insurance policies into a ridiculously cheap method of defeating the invincible specter of unavoidable Death. Or figures out how to build the real-life version of the cycle of infinite wish spells.
It seems that many here might have outlandish ideas for ways of improving our lives. For instance, a recent post advocated installing really bright lights as a way to boost alertness and productivity. We should not adopt such hacks into our dogma until we're pretty sure they work; however, one way of knowing whether a crazy idea works is to try implementing it, and you may have more ideas than you're planning to implement.
So: please post all such lifehack ideas! Even if you haven't tried them, even if they seem unlikely to work. Post them separately, unless some other way would be more appropriate. If you've tried some idea and it hasn't worked, it would be useful to post that too.
I've noticed that quite a few people are interested in fostering communities -- both creating communities and improving them to make them work together. But how do we go about actually doing this? What's there to community that we can foster and build upon? What makes a community thrive, and how do we take advantage of this to make and/or improve communities?
To answer these questions, I turned to two books:
The first is The Penguin and The Leviathan: How Cooperation Triumphs Over Self-Interest by Yochai Benkler. Benkler, in writing about cooperative systems (Penguins, named after the Linux penguin) and hierarchical systems (Leviathans, named after Thomas Hobbes's Leviathan), studies the psychology, economics, and political science of cooperation and helps explain what makes communities stick.
The second is Liars and Outliers: Enabling the Trust that Society Needs to Thrive by Bruce Schneier. Schneier studies trust and cooperation from a dizzying variety of sciences (psychology, biology, economics, anthropology, computer science, and political science). Schneier's ultimate game is figuring out what is preventing society from falling apart, and that can be applied to building communities.
Let's see what they've got.
Communities Need Cooperation
Schneier and Benkler both paint a view of human nature that differs from what is commonly thought but has emerged from the sciences: people are both self-interested and other-interested, different people strike different balances between the two, and within each person these two goals can often conflict. Additionally, the "other-interested" side can encompass multiple, occasionally conflicting allegiances -- to one's family, to one's neighborhood, to one's country, to one's venture philanthropy club, and so on.
What's common to all communities is that they involve people who have set aside some of their immediate self-interest to work together. For instance, when we work together in a group, I definitely don't beat you over the head and steal your lunch money, and I don't usually attempt to free ride and get you to do the group work for me; instead, we mutually work to solve communal problems and share in the benefits of community.
Public Goods and Free Riders
An example of how psychology has sought to simplify and simulate a community is what's called "The Public Goods Game". In this game, a group of about ten participants is sat down, and each is given $10 to start with. The game is then played for several rounds, and in each round every participant puts a secret amount of their money into a collective pot. The experimenters then look at the pot, double the amount of money inside it, and redistribute the result evenly to all the players. As an added incentive, the experimenters inform all participants that they get to walk away with their winnings after the game is over.
If everyone cooperated fully, each player would see their money double each round. But the wrinkle is that players who don't contribute to the pot at all stand to gain even more money from the results of everyone else's contributions. This is called the free rider problem: there is a tension between contributing to the pot for the good of yourself and the group as a whole, and refraining from contributing so that you benefit even more.
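To make the payoffs concrete, here is a minimal sketch of one round of the game (my own illustration, not from either book; the group size of ten, the $10 endowment, and the doubling multiplier come from the description above):

```python
# One round of the Public Goods Game: the pot is doubled and split
# evenly, and each player keeps whatever they didn't contribute.

def play_round(contributions, endowment=10, multiplier=2):
    pot = sum(contributions)
    share = pot * multiplier / len(contributions)
    return [endowment - c + share for c in contributions]

# Everyone cooperates fully: each player's $10 becomes $20.
print(play_round([10] * 10))        # [20.0, 20.0, ..., 20.0]

# One free rider among nine full contributors: the defector ends
# with $28 while each cooperator gets only $18.
print(play_round([0] + [10] * 9))   # [28.0, 18.0, ..., 18.0]
```

The defector does strictly better than the cooperators, which is exactly the tension described next.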
The Free Rider Problem and The Collective Action Problem
But the tension can result in further disaster, for imagine everyone decides to be a free rider and defect from the group -- now no money goes into the pot at all, and everyone ends up with just the $10 they started with. This gets worse when we imagine some real-life scenarios -- for instance, that of fishermen on a lake.
The fishermen can either choose to fish normally or overfish. If all the fishermen overfish, they stand to deplete the lake, and all the fishermen lose their jobs. However, if just a few fishermen overfish, they get the benefit of added fish to sell, and the lake can handle the slight increase in load. So the tension is that each fisherman wins most by personally overfishing, so long as the lake is not collectively depleted. Such problems are called collective action problems -- people do well individually by defecting, but do worse collectively if everyone defects. A collective action problem ending in disaster is called the tragedy of the commons.
The Community Solution
So what's the solution to these problems? Benkler proposes two models for dealing with them -- employing the Leviathan and placing lots of regulations on overfishing and enforcing them with strict punishments, or employing the Penguin and creating a community that deals with these problems collectively and in a self-policing way.
It turns out that different problems are best dealt with by differing combinations of Leviathan and Penguin models, but most problems need a large dose of the community model, both because it can be difficult to figure out who is going against the community and because communities allow their participants more freedom. At the same time, if there are too many would-be defectors, a community can never get off the ground.
Communities need cooperation to work. So how can we get this cooperation to fly?
The Four Pressures of Cooperation
Bruce Schneier notes that normally we don't think through these free rider problems and try to scheme our way through them -- we just cooperate, instinctively. We don't assume people will rip us off, and we usually don't rip other people off -- that's just how we are. But why? Schneier suggests that cooperation can be fostered and maintained through four different pressures, though differing kinds and amounts of pressure apply to different situations, and getting the balance of pressures right is a key part of his book:
1.) Moral Pressures: Many, but not all, of us have various moral feelings that lead us to want to cooperate. It can be as simple as feeling incredibly guilty when we defect against our friends, or as complex as subscribing to an abstract principle of justice. For most of us, it's a general feeling that cooperating is the "right thing to do" and defecting for our own personal self-interest is "wrong", and we just don't want to do it. Schneier and Benkler both find that moral pressures compel cooperation a surprising amount of the time.
2.) Reputational Pressures: Another aspect of living in a community for a long time is that you have a reputation to maintain. Defect against the community and you may win a few times, but then people start to notice and start working to stop you. They might refuse you friendship or other things you want, or even kick you out of the community altogether! Benkler finds that many communities can thrive on reputation alone, like eBay, Amazon, or Reddit.
3.) Institutional Pressures: Morals and reputation aren't the end of it, though; many communities make specific, codified norms and enforce them with specific, codified punishments. These pressures are laws, and the fear of breaking the law, being caught, and receiving the punishment can often further spur cooperation. Best of all, the community can often get together and agree to these norms, realizing it is in their individual interest to force themselves and the rest of the community to play along, so as to avoid tragedies of the commons.
4.) Security Pressures: Lastly, there will always be a few people who put morals, reputation, and laws aside and try to defect anyway. For these, we hope to stop them in their tracks or make their jobs more difficult by using security systems. It can be as simple as a security camera or an anti-theft alarm, or as complex as Fort Knox. Security works in two ways: it first attempts to raise the costs of defection -- by making it physically harder to defect, one is less tempted to do so -- and it then attempts to better catch and apprehend those who still try.
Your Reason for Joining; Your Reason for Staying
Remember, these pressures don't all work for the same problems -- it may be proper to use security and institutional pressures to stop someone from overfishing, but not to stop someone from intentionally cutting the cake so they get the bigger slice. Moral and reputational pressures are more encompassing, but they are also more easily defeated -- people with less of a moral compass can often wander from community to community, wreaking small amounts of havoc and never getting caught or punished.
Benkler suggests another way to get people to buy into a community and not defect against it -- make it clear that being part of the community is something they really want. Whether people join a community voluntarily or are forced into one (family, country, etc.), a community whose members actually want to be there will be more likely to thrive.
Four Ways to Bond
But why might one want to join or stay in a community? For many, the answer is the intangibles -- they feel a sense of belonging, friendship, and group cohesion that creates an empathetic attachment and makes people want to play by the rules of the group. For others, the answer is the tangibles -- the group may have a stated mission statement that is important to the person, or belonging in the group might confer a specific benefit. People might even belong for a mix of tangibles and intangibles, plus a natural tendency to want to join groups.
But how do we foster these bonds? Benkler suggests that group identity can be fostered through a combination of four means:
1.) Fairness: The community needs to be fair -- people need to all contribute more or less equally, or at least have genuine intentions to put in equal effort, and the benefits of the group need to be spread among all participants more or less evenly, or in a fair proportion to how much the participant puts in.
2.) Autonomy: The community needs to not demand too much, and make sure to compensate quickly and generously for special sacrifices. There are inherent costs to joining and staying with a group, and costs for cooperating with the group -- one doesn't just give up the self-interested benefits of defection, but rather must pay additional costs to maintain their group status. Being aware of and addressing these costs are important. In short, the group must respect their members as individuals.
3.) Democracy: The community also needs to accept (with fairness and autonomy) the input of all the members. Group norms should be developed by vote, with weight given to building consensus as much as possible, and to understanding the reasons why people might not like the consensus. Not only does having input make it more likely that people's preferences will be taken into account, lowering the costs of cooperation, but having input also makes people feel more group cohesion and belonging.
4.) Communication: During times when formal votes aren't taken, the community also needs to be consistently (but not constantly) talking about how the group is doing, and checking in with members who might be feeling left out. Just like democracy, group cohesion is built through communication, and communication lowers the costs of cooperation. It's best when resolving disputes is not dictatorial, like in a court of law, but rather cooperative, like in an arbitration.
Looking Back to the Public Goods Game
To demonstrate these four points, Benkler draws on many real-world examples, such as the policies of various companies and interactions on the internet. He also returns to our simple community-in-the-lab, the Public Goods Game, for additional confirmation, and it's worth seeing how these things play out.
In the original Public Goods Game, contributions to the pot were made anonymously and no one was allowed to talk or communicate. Typically, a fair number of people would cooperate in the beginning (generally, people contribute about 70% of their share), but contributions start to drop as people see that others aren't contributing. They start to feel like suckers, and the fairness pressure kicks in.
A Different Game
However, variants of the Public Goods Game offer ways out. When participants were allowed to talk to each other, contributions rose (communication). Likewise, when participants were allowed to use some of their money to punish those who didn't contribute (say, pay $3 to prevent someone from getting their share this round if they didn't cooperate last round), people would do so, and cooperation rose.
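Continuing the earlier sketch, here is what that punishment option might look like (again my own illustration; the $3 cost and the lose-your-share rule are the ones quoted above):

```python
# Punishment variant of the Public Goods Game: a cooperator pays $3
# to deny a past defector their share of the pot for one round.

def play_round(contributions, endowment=10, multiplier=2):
    pot = sum(contributions)
    share = pot * multiplier / len(contributions)
    return [endowment - c + share for c in contributions], share

# One free rider (player 0) among nine cooperators.
payoffs, share = play_round([0] + [10] * 9)

payoffs[1] -= 3      # player 1 pays the cost of punishing
payoffs[0] -= share  # the defector loses this round's share

print(payoffs)  # defector: $10, punisher: $15, other cooperators: $18
```

Punishing is costly to the punisher, but it flips the incentives: the defector now ends the round worse off than everyone else.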
Even the simple act of making the contributions public increased cooperation, drawing on reputation. Sometimes small fines were imposed on those who didn't cooperate (institutional pressures), which brought up cooperation, and these fines worked especially well when the group got to vote on how high they would be (democracy).
Lastly, framing the game helped -- those who were told they were taking part in a "Community Game" were far more likely to contribute to the pot, and to keep contributing, than those told they were taking part in a "Wall Street Game". Reminded that they were in a community, people thought more about community norms, felt more group cohesion, and were more likely to trust others.
Ultimately, creating communities is all about fostering cooperation, and you foster cooperation by ensuring that there is mutual trust and some sort of way to prevent defectors from taking advantage of the system. People often naturally don't want to defect, but will do so if they think others will take advantage of them first.
But how do we foster this trust? The first step is to make use of our social pressures when and to the amount that's appropriate -- relying on empathetic and moral norms, reputation, institutionalized laws, and security systems -- and being sure to get the balance right. For small communities, this probably just needs to be a set of agreed norms, and ensuring that the norms are properly and responsibly enforced.
The Benefits of Joining
The second step is, while implementing the first, to keep in mind why people are joining or staying in the first place, and to make sure to provide a community where the benefits of joining -- both the tangibles and the intangibles -- are present and apparent. We should acknowledge the costs of cooperating, and make sure the benefits are there to foster group loyalty and belonging.
An Effective Community
While implementing all this, it's important to keep in mind that communities should also be fair, respect the autonomy and individuality of their members, give members input through democracy, and foster lots of communication about how things are going. We should also keep a keen eye on how things are framed, without going overboard on it or lying.
The End Reward
But when we get communities right, the rewards are pretty great -- not only do we avoid free riders and the tragedy of the commons, but we ourselves get to take advantage of communities that are more productive than their individuals alone, and to secure the feeling of belonging to a group we enjoy.
-Also cross-posted on my blog.
Until recently, I hadn't paid much attention to Pomodoro, though I'd heard of it for a few years. "Uncle Bob" Martin seemed to like it, and he's usually worth paying attention to in such matters. However, it mostly seemed to me like a way of organizing a variety of tasks and avoiding procrastination, and I've never had much trouble with that.
However, after the January CFAR workshop suggested it in passing, I decided to give it a try; and I realized I'd had it all wrong. Pomodoros aren't (for me) a means of avoiding procrastination or dividing time among projects. They're a way of blasting through Ugh fields.
The Pomodoro technique is really simple compared to more involved systems like Getting Things Done (GTD). Here it is:
- Set a timer for 25 minutes
- Work on one thing for that 25 minutes, nothing else. No email, no phone calls, no snack breaks, no Twitter, no IM, etc.
- Take a five minute break
- Pick a new project, or the same project, if you prefer.
That's pretty much it. You can buy a book or a special timer for this; but there's really nothing else to it. It takes longer to explain the name than the technique. (When Francesco Cirillo invented this technique in the 1980s, he was using an Italian kitchen timer shaped like a tomato. Pomodoro is Italian for tomato.)
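In fact, the technique is simple enough to fit in a few lines of code. Here's a minimal sketch of a command-line timer (my own toy version, not part of the official technique; the 25/5 durations are the defaults described above):

```python
import time

# Minimal Pomodoro timer: 25 minutes of focused work, then a
# 5-minute break, repeated until you hit Ctrl-C.

WORK_MINUTES = 25
BREAK_MINUTES = 5

def countdown(minutes, label):
    print(f"{label} for {minutes} minutes, starting now.")
    time.sleep(minutes * 60)   # block until the interval is up
    print(f"{label} over.\a")  # \a rings the terminal bell

if __name__ == "__main__":
    try:
        while True:
            countdown(WORK_MINUTES, "Work")
            countdown(BREAK_MINUTES, "Break")
    except KeyboardInterrupt:
        print("\nDone for now.")
```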
I got interested in Pomodoro when I realized I could use it to clean my office/desk/apartment. David Allen's GTD system appealed to me, but I could never maintain it, and the 2+ days it needed to get all the way to a clean desk was always a big hurdle to vault. However, spending 25 minutes at a time, followed by a break and another project seemed a lot more manageable.
I tried it, and it worked. My desk stack quickly shrank -- not to empty, but at least to a point where an accidental elbow swing no longer launched avalanches of paper onto the floor as I typed.
So I decided to try Pomodoro on my upcoming book. The publisher was using a new authoring system and template that I was unfamiliar with. There were a dozen little details to figure out about the new system--how to check out files in git, how to create a section break, whether to use hard or soft wrapping, etc.--and I just worked through them one by one. 25 minutes later I'd knocked them all out, and was familiar enough with the new system to begin writing in earnest. I didn't know everything about the software, but I knew enough that it was no longer aversive. Next I spent 25 minutes on a chapter that was challenging me, and Pomodoro got me to the point where I was in the flow.
That's when I realized that Pomodoro is not a system for organizing time or avoiding procrastination (at least not for me). What it is, is an incredibly effective way to break through tasks that look too hard: code you're not familiar with, an office that's too cluttered, a chapter you don't know how to begin.
The key is that a Pomodoro forces you to focus on the unfamiliar, difficult, aversive task for 25 minutes. 25 minutes of focused attention without distractions from other, easier tasks is enough to figure out many complex situations or at least get far enough along that the next step is obvious.
Here's another example. I had a task to design a GWT widget and plug it into an existing application, and I had never done any work with GWT. Every time I looked at the frontend application code, it seemed like a big mess of confused, convoluted, dependency-injected, late-bound, spooky-action-at-a-distance spaghetti. Now, doubtless there wasn't anything fundamentally more difficult about this code than the server-side code I had been writing; and if my career had taken just a slightly different path over the last six years, frontend GWT code might be my bread and butter. But my career didn't take that path, and this code was a big Ugh field for me. So I set the Pomodoro timer on my smartphone and started working. Did I finish? No, but I got started, made progress, and proved to myself that GWT wasn't all that challenging after all. The widget is still difficult enough, and GWT complex enough, that I may need several more Pomodoros to finish the job, but I got way further and learned more in 25 minutes of intense focus than I would have in a day or even a week without it.
I don't use the Pomodoro technique exclusively. Once I get going on a project or a chapter, I don't need the help; and five minute breaks once I'm in the flow just distract me. So some days I just do 1 or 2 or 0 Pomodoros, whatever it takes to get me rolling again and past the blocker.
I also don't know if this works for genuinely difficult problems. For instance, I don't know if it will help with a difficult mathematical proof I've been struggling with for months (though I intend to find out). But for subjects that I know I can do, but can't quite figure out how to do, or where to start, the power of focusing 25 minutes of real attention on just that one problem is astonishing.
Short form: Pascal's Muggle
tl;dr: If you assign superexponentially infinitesimal probability to claims of large impacts, then apparently you should ignore the possibility of a large impact even after seeing huge amounts of evidence. If a poorly-dressed street person offers to save 10^(10^100) lives (a googolplex lives) for $5 using their Matrix Lord powers, and you claim to assign this scenario less than 10^-(10^100) probability, then apparently you should continue to believe absolutely that their offer is bogus even after they snap their fingers and cause a giant silhouette of themselves to appear in the sky. For the same reason, any evidence you encounter showing that the human species could create a sufficiently large number of descendants - no matter how normal the corresponding laws of physics appear to be, or how well-designed the experiments which told you about them - must be rejected out of hand. There is a possible reply to this objection using Robin Hanson's anthropic adjustment against the probability of large impacts, and in this case you will treat a Pascal's Mugger as having decision-theoretic importance exactly proportional to the Bayesian strength of evidence they present you, without quantitative dependence on the number of lives they claim to save. This however corresponds to an odd mental state which some, such as myself, would find unsatisfactory. In the end, however, I cannot see any better candidate for a prior than having a leverage penalty plus a complexity penalty on the prior probability of scenarios.
In late 2007 I coined the term "Pascal's Mugging" to describe a problem which seemed to me to arise when combining conventional decision theory and conventional epistemology in the obvious way. On conventional epistemology, the prior probability of hypotheses diminishes exponentially with their complexity; if it would take 20 bits to specify a hypothesis, then its prior probability receives a 2^-20 penalty factor and it will require evidence with a likelihood ratio of 1,048,576:1 - evidence which we are 1,048,576 times more likely to see if the theory is true, than if it is false - to make us assign it around 50-50 credibility. (This isn't as hard as it sounds. Flip a coin 20 times and note down the exact sequence of heads and tails. You now believe in a state of affairs you would have assigned a million-to-one probability beforehand - namely, that the coin would produce the exact sequence HTHHHHTHTTH... or whatever - after experiencing sensory data which are more than a million times more probable if that fact is true than if it is false.) The problem is that although this kind of prior probability penalty may seem very strict at first, it's easy to construct physical scenarios that grow in size vastly faster than they grow in complexity.
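As a quick check of that arithmetic, in odds form (nothing new here, just the paragraph's numbers restated):

```latex
% A prior probability of 2^{-20} corresponds to prior odds of
% 2^{-20} : (1 - 2^{-20}), i.e. roughly 2^{-20} : 1. Multiplying by a
% likelihood ratio of 2^{20} : 1 recovers approximately even odds:
\[
  \underbrace{\frac{2^{-20}}{1 - 2^{-20}}}_{\text{prior odds}}
  \times
  \underbrace{\frac{2^{20}}{1}}_{\text{likelihood ratio}}
  \;\approx\; 1 : 1
\]
```

which is the "around 50-50 credibility" described above.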
I originally illustrated this using Pascal's Mugger: A poorly dressed street person says "I'm actually a Matrix Lord running this world as a computer simulation, along with many others - the universe above this one has laws of physics which allow me easy access to vast amounts of computing power. Just for fun, I'll make you an offer - you give me five dollars, and I'll use my Matrix Lord powers to save 3↑↑↑↑3 people inside my simulations from dying and let them live long and happy lives" where ↑ is Knuth's up-arrow notation. This was originally posted in 2007, when I was a bit more naive about what kind of mathematical notation you can throw into a random blog post without creating a stumbling block. (E.g.: On several occasions now, I've seen someone on the Internet approximate the number of dust specks from this scenario as being a "billion", since any incomprehensibly large number equals a billion.) Let's try an easier (and way smaller) number instead, and suppose that Pascal's Mugger offers to save a googolplex lives, where a googol is 10^100 (a 1 followed by a hundred zeroes) and a googolplex is 10 to the googol power, so 10^(10^100) lives saved if you pay Pascal's Mugger five dollars, if the offer is honest.