A collection of Stubs.
In light of SDR's comment yesterday, instead of writing a new post today I compiled my list of ideas I wanted to write about - partly to lay them out there and see if any stand out as better than the rest, and partly so that maybe they will be a little more out in the wild than if I hold them until I get around to them. I realise there is not a thesis in this post, but I figured it would be better to write one of these than to write each idea up in its own post with the potential to be good or bad.
Original post: http://bearlamp.com.au/many-draft-concepts/
I create ideas at a rate of about 3 a day, without trying to. I write at a rate of about 1.5 a day, which leaves me perpetually behind. Even if I write about the best ideas I can think of, some good ones might never be covered. This is an effort to draft out a good stack of them, so that maybe it can help me avoid having to write them all out, by better defining which ones are good and which ones are a bit more useless.
With that in mind, in no particular order - a list of unwritten posts:
From my old table of contents
Goals of your lesswrong group – As a guided/workthrough exercise in deciding why the group exists and what it should do. Help people work out what they want out of it (do people know)? setting goals, doing something particularly interesting or routine, having fun, changing your mind, being activists in the world around you. Whatever the reasons you care about, work them out and move towards them. Nothing particularly groundbreaking in the process here. Sit down with the group with pens and paper, maybe run a resolve cycle, maybe talk about ideas and settle on a few, then decide how to carry them out. Relevant links: Sydney meetup, group resources (estimate 2hrs to write)
Goals interrogation + Goal levels – Goal interrogation is about asking "is this thing I want to do actually a goal of mine?" and "is my current plan the best way to achieve it?". Goal levels are something out of Sydney Lesswrong that help you have mutual long-term goals and supporting short-term goals. There are 3 main levels: Dream, Year, and Daily (or approximately). You want dream goals like going to the moon, yearly goals like getting another year further in your degree, and daily goals like studying today - each contributing to the level above. Any time you are feeling lost you can look at the guide you set out for yourself and use it to direct you. (3hrs)
How to human – A zero to human guide. A guide for basic functionality of a humanoid system. Something of a conglomeration of Maslow, mental health, so you feel like shit and systems thinking. Am I conscious? Am I breathing? Am I bleeding or injured (major or minor)? Am I falling or otherwise in danger and about to cause the earlier questions to return false? Do I know where I am? Am I safe? Do I need to relieve myself (or other bodily functions, i.e. itchy)? Have I had enough water? Sleep? Food? Is my mind altered (alcohol or other drugs)? Am I stuck with sensory input I can't control (noise, smells, things touching me)? Am I too hot or too cold? Is my environment too hot or too cold? Or unstable? Am I with people or alone? Is this okay? Am I clean (showered, teeth, other personal cleaning rituals)? Have I had some sunlight and fresh air in the past few days? Have I had too much sunlight or wind in the past few days? Do I feel stressed? Okay? Happy? Worried? Suspicious? Scared? Was I doing something? What am I doing? Do I want to be doing something else? Am I being watched (is that okay?)? Have I interacted with humans in the past 24 hours? Have I had alone time in the past 24 hours? Do I have any existing conditions I can run a check on - i.e. depression? Are my valuables secure? Are the people I care about safe? (4hrs)
List of common strategies for getting shit done – things like scheduling/allocating time, pomodoros, committing to things externally, complice, beeminder, other trackers. (4hrs)
List of superpowers and kryptonites – answers to the questions "what are my superpowers?" and "what are my kryptonites?". Knowledge is power; working with your powers and working out how to avoid your kryptonites is a method to improve yourself. What are you really good at, and what do you absolutely suck at and would be better delegating to other people? The more you know about yourself, the more you can do the right thing by your powers or weaknesses and save yourself trouble.
List of effective behaviours – small life-improving habits that add together to make awesomeness from nothing. And how to pick them up. Short list: toothbrush in the shower, scales in front of the fridge, healthy food in the most accessible position in the fridge, make the unhealthy stuff a little more inaccessible, keep some clocks fast - i.e. the clock in your car (so you get there early), prepare for expected barriers ahead of time (i.e. packing the gym bag and leaving it at the door), and more.
Stress prevention checklist – feeling off? You want to have already outsourced the hard work for “things I should check on about myself” to your past self. Make it easier for future you. Especially in the times that you might be vulnerable. Generate a list of things that you want to check are working correctly. i.e. did I drink today? Did I do my regular exercise? Did I take my medication? Have I run late today? Do I have my work under control?
Make it easier for future you. Especially in the times that you might be vulnerable. – as its own post in curtailing bad habits that you can expect to happen when you are compromised. inspired by candy-bar moments and turning them into carrot-moments or other more productive things. This applies beyond diet, and might involve turning TV-hour into book-hour (for other tasks you want to do instead of tasks you automatically do)
A P=NP approach to learning – Sometimes you have to learn things the long way; but sometimes there is a shortcut - where you could say, "I wish someone had just taken me on the easy path early on". It's not a perfect idea, but start looking for the shortcuts where you might be saying "I wish someone had told me sooner". Of course the answer is, "but I probably wouldn't have listened anyway", which is something that can be worked on as well. (2hrs)
Rationalists guide to dating – Attraction. Relationships. Doing things with a known preference. Don’t like unintelligent people? Don’t try to date them. Think first; then act - and iteratively experiment; an exercise in thinking hard about things before trying trial-and-error on the world. Think about places where you might meet the kinds of people you want to meet, then use strategies that go there instead of strategies that flop in the general direction of progress. (half written)
Training inherent powers (weights, temperatures, smells, estimation powers) – practice makes perfect right? Imagine if you knew the temperature always, the weight of things by lifting them, the composition of foods by tasting them, the distance between things without measuring. How can we train these, how can we improve. Probably not inherently useful to life, but fun to train your system 1! (2hrs)
Strike to the heart of the question. The strongest one; not the one you want to defeat – Steelman not Strawman. Don’t ask “how do I win at the question”; ask, “am I giving the best answer to the best question I can give”. More poetic than anything else - this post would enumerate the feelings of victory and what not to feel victorious about, as well as trying to feel what it's like to be on the other side of the discussion to yourself, frustratingly trying to get a point across while a point is being flung at yourself. (2hrs)
How to approach a new problem – similar to the “How to solve X” post. But considerations for working backwards from a wicked problem, as well as trying “The least bad solution I know of”, Murphy-jitsu, and known solutions to similar problems. Step 0. I notice I am approaching a problem.
Turning stimming into a flourish – For autists: making something presentable out of a flaw.
How to manage time – estimating the length of future tasks (and more), covered in the notch system and "do tasks in a different order", but presented on its own.
Spices – Adventures in sensory experience land. I ran an event of spice-smelling/guessing for a group of 30 people. I wrote several documents in the process about spices and how to run the event. I want to publish these. As an exercise - it's a fun game of guess-the-spice.
Wing it VS Plan – All of the what, why, who, and what you should do of the two. Some people seem to be the kind of person who is always just winging it. In contrast, some people make ridiculously complicated plans that work. Most of us are probably somewhere in the middle. I suggest that the more of a planner you can be the better because you can always fall back on winging it, and you probably will. But if you don't have a plan and are already winging it - you can't fall back on the other option. This concept came to me while playing ingress, which encourages you to plan your actions before you make them.
On-stage bias – The changes we make when we go onto a stage include extra makeup to adjust for the bright lights, and speaking louder to adjust for the audience being far away. When we consider the rest of our lives, maybe we want to appear specifically X (e.g. confident, friendly), so we should change ourselves to suit the natural skews in how we present based on the "stage" we are appearing on. Appear as the person you want to appear as, not the person you naturally appear as.
Creating a workspace – considerations when thinking about a “place” of work, including desk, screen, surrounding distractions, and basically any factors that come into it. Similar to how the very long list of sleep maintenance suggestions covers environmental factors in your sleep environment but for a workspace.
Posts added to the list since then
Doing a cost/benefit analysis - This is something we rely on when enumerating the options and choices ahead of us, but something I have never explicitly looked into. Some costs that can get overlooked include: time, money, energy, emotions, space, clutter, distraction/attention, memory, side effects, and probably more. I'd like to see a How to X guide for CBA. (wikipedia)
Extinction learning at home - A cross between intermittent reward (the worst kind of addiction) and what we know about extinguishing it, then applying that to "convincing" yourself to extinguish bad habits by experiential learning. Uses the CFAR internal Double Crux technique: precommit yourself to a challenge, for example - "If I scroll through 20 facebook posts in a row and they are all not worth my time, I will be convinced that I should spend less time on facebook because it's not worth my time". Adjust 20 to whatever position your double crux believes to be true, then run a test and iterate. You have to genuinely agree with the premise before running the test. This can work for a number of committed habits which you want to extinguish. (new idea as at the writing of this post)
How to write a dating ad - A suggestion to include information that is easy to ask questions about (this is hard). For example; don't write, "I like camping", write "I like hiking overnight with my dog", giving away details in a way that makes them worth inquiring about. The same reason applies to why writing "I'm a great guy" is really not going to get people to believe you, as opposed to demonstrating the claim. (show, don't tell)
How to give yourself aversions - an investigation into aversive actions and potentially how to avoid collecting them when you have a better understanding of how they happen. (I have not done the research and will need to do that before publishing the post)
How to give someone else an aversion - similar to above, we know we can work differently to other people, and at the intersection of that is a misunderstanding that can leave people uncomfortable.
Lists - Creating lists is a great thing, currently in draft - some considerations about what lists are, what they do, what they are used for, what they can be used for, where they come in handy, and the suggestion that you should use lists more. (also some digital list-keeping solutions)
Choice to remember the details - this stems from choosing to remember names, a point in the conversation where people sometimes tune out. As a mindfulness concept you can choose to remember the details. (short article, not exactly sure why I wanted to write about this)
What is a problem - On the path of problem solving, understanding what a problem is will help you to understand how to attack it. Nothing more complicated than this picture to explain it: the barrier is a problem. This doesn't seem important on its own, but as a foundation for thinking about problems it's good to have sitting around somewhere.
How to/not attend a meetup - for anyone who has never been to a meetup, and anyone who wants the good tips on etiquette for being the new guy in a room of friends. First meetup: shut up and listen, try not to be too much of an impact on the existing meetup group or you might misunderstand the culture.
Noticing the world, Repercussions and taking advantage of them - There are regularly world events that I notice. Things like the olympics, Pokemon go coming out, the (recent) spaceX rocket failure. I try to notice when big events happen and try to think about how to take advantage of the event or the repercussions caused by that event. Motivated to think not only about all the olympians (and the fuss leading up to the olympics), but all the people at home who signed up to a gym because of the publicity of the competitive sport. If only I could get in on the profit of gym signups...
Least-good but only solution I know of - So you know of a solution, but it's rubbish. Or probably is. Also you have no better solutions. Treat this solution as the best solution you have (because it is) and start implementing it; as you do that, keep looking for other solutions. But at least you have a solution to work with!
Self-management thoughts - When you ask yourself, "am I making progress?", "do I want to be in this conversation?" and other self management thoughts. And an investigation into them - it's a CFAR technique but their writing on the topic is brief. (needs research)
instrumental supply-hoarding behaviour - A discussion about the benefits of hoarding supplies for future use. Covering also - what supplies are not a good idea to store, and what supplies are. Maybe this will be useful for people who store things for later days, and hopefully help to consolidate and add some purposefulness to their process.
list of sub groups that I have tried - Before running my local lesswrong group I partook in a great deal of other groups. This was meant as a list with comments on each group.
If you have nothing to do – make better tools for use when real work comes along - This was probably going to be a poetic style motivation post about exactly what the title suggests. Be Prepared.
what other people are good at (as support) - When reaching out for support, some people will be good at things that other people are not. For example - emotional support, time to spend on each other, ideas for solving your problems. Different people might be better or worse than others. Thinking about this can make your strategies towards solving your problems a bit easier to manage. Knowing what works and what does not work, or what you can reliably expect when you reach out for support from some people - is going to supercharge your fulfilment of those needs.
Focusing - An already written guide to Eugene Gendlin's focusing technique, which needs polishing before publishing. The short form: treat your system 1 as a very powerful machine that understands your problems and their solutions more than you do; use your system 2 to ask it questions and see what it returns.
Rewrite: how to become a 1000 year old vampire - I got as far as breaking down this post and got stuck at draft form before rewriting. Might take another stab at it soon.
Should you tell people your goals? - This thread in a post. In summary: It depends on the environment, the wrong environment is actually demotivational, the right environment is extra motivational.
Meta: this took around 4 hours to write up. Which is ridiculously longer than usual. I noticed a substantial number of breaks being taken - not sure if that relates to the difficulty of creating so many summaries or just me today. Still. This experiment might help my future writing focus/direction so I figured I would try it out. If you see an idea of particularly high value I will be happy to try to cover it in more detail.
Towards cause prioritisation estimates for child abuse
Closest community background reading: http://www.givewell.org/labs/causes/criminal-justice-reform
Scale
Prevalence
Back of the envelope estimate of the number of people abused, excluding those who are emotionally abused and neglected (because those stats aren't on the wikipedia page for child abuse):
>Despite these limitations, international studies show that a quarter of all adults report experiencing physical abuse as children, and that 1 in 5 women and 1 in 13 men report experiencing childhood sexual abuse. Emotional abuse and neglect are also common childhood experiences ("Child maltreatment: Fact sheet No. 150". World Health Organization. December 2014)
If all those physically abused are the same as those sexually abused (the most conservative estimate) then 0.25 of all people are abused as children. If they are completely separate populations then ((1/5 + 1/13)/2) + (1/4) = 0.39 (~0.4) of all people are abused as children. So, roughly 0.25-0.4 of all people are abused.
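The arithmetic behind those bounds can be sketched in a few lines (a minimal sketch, using only the WHO figures quoted above):

```python
# Back-of-envelope bounds on childhood abuse prevalence,
# using the WHO figures quoted above.
phys = 1 / 4                # adults reporting physical abuse as children
sex_women = 1 / 5           # women reporting childhood sexual abuse
sex_men = 1 / 13            # men reporting childhood sexual abuse
sex = (sex_women + sex_men) / 2   # rough population-wide rate

# Most conservative: everyone sexually abused was also physically abused,
# so the overall prevalence is just the larger of the two rates.
lower = max(phys, sex)      # 0.25
# Other extreme: the two populations are completely disjoint.
upper = phys + sex          # ~0.39

print(round(lower, 2), round(upper, 2))  # 0.25 0.39
```

The conservative overlap assumption works out to 0.25, consistent with the "¼ of all people" figure used below.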
>A long-term study of adults retrospectively reporting adverse childhood experiences including verbal, physical and sexual abuse, as well as other forms of childhood trauma found 25.9% of adults reported verbal abuse as children, 14.8% reported physical abuse, and 12.2% reported sexual abuse
More likely, ¼ of all people are abused as children in some way or another.
Harm (qualitatively)
>reduction in lifespan of 7 to 15 years (Kolassa, Iris – Tatjana. "Biological memory of childhood maltreatment – current knowledge and recommendations for future research" (PDF). Ulmer Volltextserver – Institutional Repository der Universität Ulm. Retrieved 30 March 2014.)
>more likely to suffer from physical ailments such as allergies, arthritis, asthma, bronchitis, high blood pressure, and ulcers (Dolezal, T.; McCollum, D.; Callahan, M. (2009). Hidden Costs in Health Care: The Economic Impact of Violence and Abuse. Academy on Violence and Abuse.)
>emotional abuse has been linked to increased depression, anxiety, and difficulties in interpersonal relationships ("Reactive attachment disorder")
>One long-term study found that up to 80% of abused people had at least one psychiatric disorder at age 21, with problems including depression, anxiety, eating disorders, and suicide attempts.[95] One Canadian hospital found that between 36% and 76% of women mental health outpatients had been abused, as had 58% of women and 23% of men schizophrenic inpatients.[96] A recent study has discovered that a crucial structure in the brain's reward circuits is compromised by childhood abuse and neglect, and predicts depressive symptoms later in life.[9]
Exponential growth, externalities or diminishment of the problem
>90 percent of maltreating adults were maltreated as children (Starr RH, Wolfe DA (1991). The Effects of Child Abuse and Neglect (pp. 1–33). New York: The Guilford Press. ISBN 978-0-89862-759-6)
>children who experience child abuse and/or neglect are 59% more likely to be arrested as juveniles, 28% more likely to be arrested as adults, and 30% more likely to commit violent crime ("Child Abuse Statistics". Childhelp. Retrieved 5 March 2015.)
> A study by Dante Cicchetti found that 80% of abused and maltreated infants exhibited symptoms of disorganized attachment. When some of these children become parents, especially if they suffer from posttraumatic stress disorder (PTSD), dissociative symptoms, and other sequelae of child
Shut up, stop dumping quotes and give me the QALYs
>The combined strata-level effects of maltreatment on Short Form–6D utility was a reduction of 0.028 per year (95% confidence interval=0.022, 0.034; P<.001). (www.ncbi.nlm.nih.gov/pmc/articles/PMC2377283/)
0.028 per year * world population (~7.4 billion) * 0.25 = 51,800,000 QALYs per year
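Spelled out (the ~7.4 billion world population is the figure implied by the 51,800,000 total; treat it as an assumption on my part):

```python
# Scaling the quoted SF-6D utility loss to the world population.
utility_loss = 0.028   # utility reduction per year per maltreated person
world_pop = 7.4e9      # approximate world population (assumed)
prevalence = 0.25      # ~1/4 of people abused as children (see above)

qalys_lost_per_year = utility_loss * world_pop * prevalence
print(f"{qalys_lost_per_year:,.0f}")  # 51,800,000
```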
Neglectedness
>In the U.S. in 2013, of the 294,000 reported child abuse cases only 81,124 received any sort of counseling or therapy. ("National Statistics on Child Abuse". National Children's Alliance. Archived from the original on 2 May 2014.)
It's likely to be more neglected in low and middle income countries.
Tractability
>Most acts of physical violence against children are undertaken with the intent to punish.[106] In the United States, interviews with parents reveal that as many as two thirds of documented instances of physical abuse begin as acts of corporal punishment meant to correct a child's behavior, while a large-scale Canadian study found that three quarters of substantiated cases of physical abuse of children have occurred within the context of physical punishment.[107] Other studies have shown that children and infants who are spanked by parents are several times more likely to be severely assaulted by their parents or suffer an injury requiring medical attention. Studies indicate that such abusive treatment often involves parents attributing conflict to their child's willfulness or rejection, as well as "coercive family dynamics and conditioned emotional responses".[16] Factors involved in the escalation of ordinary physical punishment by parents into confirmed child abuse may be the punishing parent's inability to control their anger or judge their own strength, and the parent being unaware of the child's physical vulnerabilities.[15]
>Some professionals argue that cultural norms that sanction physical punishment are one of the causes of child abuse, and have undertaken campaigns to redefine such norms.[108][109][110]
>Into the 21st century many countries have taken steps to eradicate domestic violence, such as criminalization of violence against women and other abuses. Organizations have been formed which provide assistance and protection of domestic abuse victims, laws and criminal remedies, and domestic violence courts (https://en.wikipedia.org/wiki/Management_of_domestic_violence)
What can we do about it?
Given that "three quarters of substantiated cases of physical abuse of children have occurred within the context of physical punishment" (see tractability section), and assuming that a ban on corporal punishment of children could be enforced with just 10% compliance worldwide, we could save a minimum of 10% * ¾ * 51,800,000 QALYs per year = 3,885,000 QALYs per year.
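As a sanity check on that figure:

```python
# QALYs potentially averted per year by a partially-enforced ban on
# corporal punishment, using the figures from this post.
qalys_lost = 51_800_000   # annual QALY loss estimated above
share = 3 / 4             # abuse cases arising from physical punishment
compliance = 0.10         # assumed worldwide compliance with a ban

averted = compliance * share * qalys_lost
print(f"{averted:,.0f}")  # 3,885,000
```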
Now how cost effective would it be? What could we use as a reference class for how much resources would need to be invested to outlaw and enforce bans on corporal punishment of children? I don’t have the subject matter experience to say, so if anybody can help me out here please do. If you can also estimate how much money would be saved from everything from healthcare costs to criminal justice aversion costs, please chime in.
Instead, let’s compare with one Open Philanthropy Project funded area, [clearing the organ donation waitlist](http://www.givewell.org/labs/causes/organ-transplantation). They’ve simply funded trying to figure out the solution, whereas some steps are more obvious for child abuse. They decided to go ahead on that based on estimates of merely thousands of QALYs. It should be overwhelmingly evident that averting child abuse probably dominates the organ donation waitlist problem.
Faced with such aberrant findings, I think it’s appropriate to hand this over to the community for input before collaboratively investigating this area. Could averting child abuse be the most important cause? If it is at least an important cause, what does its neglectedness from the cause prioritisation community thus far say about the methods by which potential important causes are identified?
Call for information, examples, case studies and analysis: votes and shareholder resolutions v.s. divestment for social and environmental outcomes
Typology: since not elsewhere disambiguated, divestment will be considered a form of shareholder activism in this article.
The aim of this call for information is to identify under what conditions shareholder activism or divestment is more appropriate. Shareholder activism refers to the actions and activities around proposing and rallying support for a resolution at a company AGM, such as reinstatement or impeachment of a director, or a specific action like renouncing a strategic direction (like investment in coal). In contrast, divestment refers to the withdrawal of an investment in a company by shareholders, such as a tobacco or fossil fuel company. By identifying the important variables that determine which strategy is most appropriate, activists and shareholders will be able to choose strategies that maximise social and environmental outcomes, while companies will be able to maximise shareholder value.
Very little published academic literature exists on the consequences of divestment, or on the social and environmental consequences of shareholder activism beyond its impact on the financial performance of the firm and conventional metrics of shareholder value.
Controversy (1)
One item of non-academic literature, a manifesto on a socially responsible investing blog (http://www.socialfunds.com/media/index.cgi/activism.htm), weighs the option of divestment against shareholder activism by suggesting that divestment is appropriate as a last resort - when considerable support has been rallied, and the firm is interested in its long-term financial sustainability and so responds - whereas voting on shareholder resolutions is appropriate when groups of investors are interested in having an impact. It’s unclear how these contexts are distinguished. DC Divest, a divestment activist group (dcdivest.org/faq/#Wouldn’t shareholder activism have more impact than divestment?), contends in their manifesto that shareholder activism is better suited to changing one aspect of a company's operation, whereas divestment is appropriate when rejecting a basic business model. This answer too is inadequate as a decision model, since one company can operate multiple simultaneous business models, own several businesses, and one element of its operation may not be easily distinguished from the whole system - the business. They also identify non-responsiveness of companies to shareholder action as a plausible reason to side with divestment.
Controversy (2)
Some have claimed that even resolutions that are voted down have an impact. It’s unclear how to enumerate that impact and others. The enumeration of impacts is itself controversial, and of course methodologically challenging.
Research Question(s)
Population: In publicly listed companies
Exposure: is shareholder activism in the form of proxy voting, submitting shareholder resolutions and rallying support for shareholder resolution
Comparator: compared to shareholder activism in the form of divestment
Outcome: associated with outcomes - shareholder resolutions (votes and resolutions) and/or indicators or eventuation of financial (non)sustainability (divestment) and/or media attention (both)
Link: Thoughts on the basic income pilot, with hedgehogs
I have resisted the urge of promoting my blog for many months, but this is literally (per my analysis) for the best cause.
We have also raised a decent amount of money so far, so at least some people were convinced by the arguments and didn't stop at the cute hedgehog pictures.
Altruistic parenting
I just read this article about the felicific calculus of parenthood.
The average happiness worldwide is 5.1 on a one-to-ten scale; Americans are at 7.1. Arbitrarily deciding that one year of a 10 life is equivalent to two years of a 5 life, the cost per QALY of having a child for total utilitarians is $5500.
However, NICE’s threshold for cost effectiveness of a health intervention is about $30,000 (20,000 pounds) per QALY. Therefore, for total utilitarians, having a child may be considered a cost-effective intervention, although not an optimal intervention.
...surrogacy is an underexplored way to do good. Rather than costing money, the first-time surrogate earns thirty thousand dollars, which can grow to forty thousand dollars for experienced surrogates– and it still creates 109 QALYs that otherwise would not exist. These children are likely to grow up in wealthy families who really, really want to have them, and are thus likely to be even happier than this analysis suggests.
In the comments section, the following grabbed my attention.
Estimates for the size of a sustainable human population appear to mostly range between 2 billion and 10 billion, and the meta-analysis here (http://bioscience.oxfordjournals.org/content/54/3/195) suggests that the best point estimate is around 7.7 billion. Meanwhile most estimates of population growth over the next hundred years suggest the total population will reach 10-11 billion. It seems likely that at some point in the next couple hundred years, the population will decrease substantially due to a Malthusian catastrophe. This transition is likely to cause a great deal of suffering. Surely even a total utilitarian would agree that it would be better for the necessary drop in population to be as small as possible.
And even if the population never rises above sustainable carrying capacity, it’s not obvious that total utilitarians should see a larger population as preferable. The drop in happiness due to increased competition for resources could outweigh the benefit of an additional person existing and having experiences.
Then, I read this article. Here are the highlights:
Bryan Caplan’s excellent book Selfish Reasons to Have More Kids[7] reviews the evidence from 40 years of adoption and twin studies with a frankly liberating result: barring actual deprivation or trauma, children are largely who they are going to be as a result of their genetic makeup. In long-term measures of well-being, education and employment, parental influence exerts a temporary effect which disappears when we are no longer living with our parents. So costly added extras (music lessons, coaching and tutoring, private school fees) are probably not going to change your child’s life in the long term. (However, data on the antenatal environment suggests benefit to taking iodine, but avoiding ice-storms and licorice during pregnancy.[8]) Sharing time together and finding common interests can build a good relationship and help a child develop without major costs.
In addition to straightforward financial outlay, parenthood comes with costs of time and opportunity. Loss of flexibility and leisure mean you won’t be able to take all opportunities (like taking on extra work to make more money or advance your career). Late notice travel is unlikely to be possible. You will probably be sleep deprived for a large part of the first year or more of your child’s life, and this may impact on your work performance. The work of parenting will take time, though some of it may be outsourced at the cost of increased financial outlay.
So, this baby is going to cost you about £2000 a year and take a variable but large amount of your time, which will equate in the end to another chunk of money. For parents taking parental leave or working less than full time to provide childcare, there may be delay to career progression as well as income. Does this represent an unacceptably large sum of money and time to be compatible with the goal of maximising our impacts for the good?
In the light of this reality, the rationalist suggestion I have encountered – that one guard against a desire to become a parent by pre-emptively being sterilised before the desire has arisen – seems a recipe for psychological disaster.
Finally we may ask whether parenthood – and the resulting person created – will benefit the wider world. This is a harder good to calculate or rely upon. The inheritance of specific character traits is difficult to predict. It’s certainly not guaranteed that your offspring will embrace all of your values throughout their lifetime. The burden of onerous parental expectations is extensively documented, and it would appear foolish to have children on the expectation they will be altruistic in the same way you are. However, your child is likely to resemble you in many important respects. By adulthood, the heritability of IQ is between 0.7 and 0.8,[13] and there is evidence from twin studies of significant heritability of complex traits like empathy.[14] This would give them a high probability of adding significant net good to the world.
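As a rough illustration of the kind of inference those heritability figures support, here is a minimal sketch of the standard regression-to-the-mean estimate. Everything here beyond the 0.7–0.8 range is an assumption: the 0.75 value is just the midpoint of that range, and the parental scores are invented for the example.

```python
def expected_child_iq(parent_a, parent_b, heritability=0.75, pop_mean=100):
    """Regression-to-the-mean estimate: the child's expected score is the
    population mean plus the midparent deviation, shrunk by heritability."""
    midparent = (parent_a + parent_b) / 2
    return pop_mean + heritability * (midparent - pop_mean)

# Two hypothetical parents one standard deviation above the mean (IQ 115):
print(expected_child_iq(115, 115))  # -> 111.25
```

The point of the sketch is only that resemblance is probabilistic and partial: children of above-average parents are expected to be above average, but closer to the population mean than their parents are.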
That's rather confronting:
* a '5' on a scale of happiness ain't that bad
* don't stress too much when raising your biological kids, you can't do that much
* they're probably not worth having anyway
Just kidding. But, the evidence is quite fascinating.
[Link] Review of "Doing Good Better"
The book is by William MacAskill, founder of 80000 Hours and Giving What We Can. Excerpt:
Effective altruism takes up the spirit of Singer’s argument but shields us from the full blast of its conclusion; moral indictment is transformed into an empowering investment opportunity...
Either effective altruism, like utilitarianism, demands that we do the most good possible, or it asks merely that we try to make things better. The first thought is genuinely radical, requiring us to overhaul our daily lives in ways unimaginable to most...The second thought – that we try to make things better – is shared by every plausible moral system and every decent person. If effective altruism is simply in the business of getting us to be more effective when we try to help others, then it’s hard to object to it. But in that case it’s also hard to see what it’s offering in the way of fresh moral insight, still less how it could be the last social movement we’ll ever need.
GiveDirectly, SCI and health outcomes
**What GiveDirectly says:**
>This study documented large, positive, and sustainable impacts across a wide range of outcomes including assets, earnings, food security, **mental health**, and domestic violence. It found no evidence of impacts on alcohol or tobacco use, crime, or inflation. It also examined a number of design questions such as how to size transfers and whether to give them to men or women.
Source: [GiveDirectly](https://www.givedirectly.org/research-at-give-directly.html)
**What the evidence says:**
*GiveDirectly*
>Overall, GiveDirectly increased households’ assets, consumption, and food security. The program also improved psychological well-being, especially among households with female recipients and households that received the large transfer. GiveDirectly had no impact on health or education measures.
>Psychological impacts: GiveDirectly households reported a 0.2 standard deviation increase (0.35 sd for large transfer recipients) on an index measuring psychological well-being. This improvement was largely driven by increases in happiness and life satisfaction, and reductions in stress and depression. There were no differences in self-reported measures between monthly-transfer and lump-sum recipients, but cortisol levels were significantly higher for monthly-transfer recipients. A potential explanation is that the monthly-transfer recipients seemed to have difficulty saving or investing the transfer, which may have led to increased stress.
Source: [Innovations for Poverty Action](http://www.poverty-action.org/project/0522)
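For readers less used to effects quoted in standard deviations, here is a minimal sketch of what a standardised effect size means in raw terms. The 10-point index scale below is an invented assumption for illustration, not a figure from the study.

```python
def raw_improvement(effect_size_sd, index_sd):
    """Convert a standardised effect size (in standard deviations)
    back into raw points on the underlying index."""
    return effect_size_sd * index_sd

# Hypothetical well-being index with a standard deviation of 10 points:
print(round(raw_improvement(0.2, 10), 2))   # -> 2.0 points (ordinary recipients)
print(round(raw_improvement(0.35, 10), 2))  # -> 3.5 points (large-transfer recipients)
```

The same 0.2 sd effect would mean different raw changes on differently-scaled indices, which is exactly why studies report in standard deviation units.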
*SCI*
>There is a very strong case that mass deworming is effective in reducing infections. The evidence on the connection to positive quality-of-life impacts is less clear, but there is a fairly strong possibility that deworming is highly beneficial.
>There is strong evidence that administration of the drugs reduces worm loads, but weaker evidence on the causal relationship between reducing worm loads and improved life outcomes.
>Evidence for the impact of deworming on short-term general health is thin, especially for soil-transmitted helminth (STH)-only deworming. Most of the potential effects are relatively small, the evidence is mixed, and different approaches have varied effects. We would guess that deworming populations with schistosomiasis and STH (combination deworming) does have some small impacts on general health, but do not believe it has a large impact on health in most cases. We are uncertain that STH-only deworming affects general health.
>In our view, the most compelling case for deworming as a cost-effective intervention comes not from its subtle impacts on general health (which appear relatively minor and uncertain) nor from its potential reduction in severe symptoms of disease effects (which we believe to be rare), but from the possibility that deworming children has a subtle, lasting impact on their development, and thus on their ability to be productive and successful throughout life.
>Community deworming before a child’s first birthday brings about a 0.2-standard-deviation improvement in performance on Raven’s Matrices, a decade after the intervention. Estimated effects on vocabulary measures are similar in magnitude, but not always as significant; effects on memory are not statistically distinguishable from zero. A summary measure, the first principal component of all six cognitive measurements, also shows a roughly 0.2-standard-deviation effect. These effects are equivalent to between 0.5 and 0.8 additional grades in school … The effect of community deworming spillovers on height, height-for-age, and stunting all appear statistically …
Source: [GiveWell](http://www.givewell.org/international/top-charities/schistosomiasis-control-initiative).
GiveWell goes on to argue that this leads to improvements in income. In turn, I would expect that to lead to increases in assets and consumption, with consequences similar to the direct cash transfers in the case of GiveDirectly.
Deworming a movement
Over the last few days I've been reviewing the evidence for EA charity recommendations. Based on my personal experience alone, the community seems to be comprehensively inept: poor at marketing, extremely insular, methodologically unsophisticated, but meticulous, transparent and well-intentioned. I currently hold the belief that EA movement building does more harm than good, and that it requires significant rebranding and shifts in its informal leadership, or needs to die out, before it damages the reputation of the rationalist community and our capacity to cooperate with communities that share mutual interests.
It's one thing to be ineffective and know it. It's another thing to be ineffective and not know it. It's yet another thing to be ineffective, not know it, yet champion effectiveness and make a claim to moral superiority.
In case you missed the memo: deworming is controversial, GiveWell doesn't engage with the meat of the debate, and my investigations of the EA community's spaces suggest that this is not at all widely known. I've even briefly posted about it elsewhere on LessWrong to see if there was unspoken knowledge about it, but it seems not. Given that it's the hot topic in mainstream development studies and related academic communities, I'm aghast at how unresponsive 'we' are.
What's actionable for us here? If you're looking for a high-reliability effective altruism prospect, do not donate to SCI or Evidence Action. And by extension, do not donate to EA organisations that donate to these groups, including GiveWell. I am assuming you will use those funds more wisely instead, say by buying healthier food for yourself.
For those who don't want to review the links for the more comprehensive analyses from Cochrane and GiveWell, here is one summary of the debate recommended in the Cochrane article:
Last month there was another battle in an ongoing dispute between economists and epidemiologists over the merits of mass deworming. In brief, economists claim there is clear evidence that cheap deworming interventions have large effects on welfare via increased education and ultimately job opportunities. It’s a best buy development intervention. Epidemiologists claim that although worms are widespread and can cause illnesses sometimes, the evidence of important links to health is weak and knock-on effects of deworming to education seem implausible. As stated by Garner “the belief that deworming will impact substantially on economic development seems delusional when you look at the results of reliable controlled trials.”
Aside: Framing this debate as one between economists and epidemiologists captures some of the dynamic of what has unfortunately been called the “worm wars” but it is a caricature. The dispute is not just between economists and epidemiologists. For an earlier round of this see this discussion here, involving health scientists on both sides. Note also that the WHO advocates deworming campaigns.
So. Deworming: good for educational outcomes or not?
On their side, epidemiologists point to 45 studies that are jointly analyzed in Cochrane reports. Among these they see few high quality studies on school attendance in particular, with a recent report concluding that they “do not know if there is an effect on school attendance (very low quality evidence).” Indeed they also see surprisingly few health benefits. One randomized controlled trial included one million Indian students and found little evidence of impact on health outcomes. That trial was much bigger than all other trials combined; such results raise questions for them about the possibility of strong downstream effects. Economists question the relevance of this result and other studies in the Cochrane review.
On their side, the chief weapon in the economists’ arsenal has for some time been a paper from 2004 on a study of deworming in West Kenya by Ted Miguel and Michael Kremer, two leading development economists that have had an enormous impact on the quality of research in their field. In this paper, Miguel and Kremer (henceforth MK) claimed to show strong effects of deworming on school attendance not just for kids in treated schools but also for the kids in untreated schools nearby. More recently a set of new papers focusing on longer term impacts, some building on this study, have been added to this arsenal. In addition, on their side, economists have a few things that do not depend on the evidence at all: determination, sway, and the moral high ground. After all, who could be against deworming kids?
Additional criticisms of GiveWell charities: http://lesswrong.com/lw/mo0/open_thread_aug_24_aug_30/cp8h
The kind of work I think EAs should be focussing on: http://lesswrong.com/lw/mld/genosets/cnys AND
http://lesswrong.com/r/discussion/lw/mk2/lets_pour_some_chlorine_into_the_mosquito_gene/
The problem with MIRI: http://lesswrong.com/lw/cr7/proposal_for_open_problems_in_friendly_ai/cm2j
Effective Altruism from XYZ perspective
In this thread, I would like to invite people to summarise their attitude to Effective Altruism and their justification for that attitude, while identifying the framework or perspective they're using.
Initially I prepared an article for a discussion post (that got rather long) and I realised it was from a starkly utilitarian value system with capitalistic economic assumptions. I'm interested in exploring the possibility that I'm unjustly mindkilling EA.
I've posted my write-up as a comment to this thread so it doesn't get more air time than anyone else's summary, and all can benefit equally from the contrasting views.
I encourage anyone who participates to write up their summary and identify their perspective BEFORE they read the others, so that the contrast can be most plain.
Effective Altruism vs Missionaries? Advice Requested from a Newly-Built Crowdfunding Platform.
Hi, I'm developing a next-generation crowdfunding platform for non-profit fundraising. From what we have seen, it is an effective tool; more about it below. I'm working with two other cofounders, both of whom are evangelical Christians. We get along well in general, except that I strongly believe in effective altruism and they do not.
We will launch a second pilot fundraising campaign in 2-3 weeks. The organization my co-founders have arranged for us to fundraise for is a "church planting" missionary organization. This is so opposed to my belief in effective altruism that I feel uncomfortable using our effective tool to funnel donors' dollars in THIS of all directions. This is not the reason I got involved in this project.
My argument with them is that we should charge more to ineffective nonprofits such as colleges, religious, or political organizations, and use that extra to subsidize the campaign and money-processing costs of the effective non-profits. I think this is logically consistent with earning to give. But I am being outvoted two-to-one by people who believe saving lives and saving souls are nearly equally important.
So I have two requests:
1. If anyone has advice on how to navigate this (including any especially well written articles that would appeal to evangelical Christians, or experience negotiating with start-up cofounders), please share it.
2. If anyone has personal connections with effective or effective-ish non-profits, I would much prefer to fundraise for them than my co-founder's church connections. Caveat: the org must have US non-profit legal status.
About the platform: the gist of our concept is that we're using a lot of psychology, biases and altruism research to nudge more people towards giving, and also towards sustained involvement with the nonprofit in question. We're using some of the tricks that made the ice bucket challenge so successful (but with added accountability to ensure that visible involvement actually leads to monetary donations). Users can pledge money contingent on their friends' involvement, which motivates people in the same way that matching donations motivate people. Giving is very visible, and people are more likely to give if they see friends giving. Friends are making the request for funding, which creates a sense of personal connection. Each person's mini-campaign has an involvement goal and a time limit (3 friends in 3 days) to create a sense of urgency. The money your friends donate visibly increases your impact, so it also feels like getting money from nothing - a $20 pledge can become hundreds of dollars. We nudge people towards automated smaller monthly recurring gifts. We try to minimize the number of barriers to making a donation (fewer steps, fewer fields).
4 days left in Giving What We Can's 2015 fundraiser - £34k to go

We at Giving What We Can have been running a fundraiser to raise £150,000 by the end of June, so that we can make our budget through the end of 2015. We are really keen to keep the team focussed on their job of growing the movement behind effective giving, and ensure they aren't distracted worrying about fundraising and paying the bills.
With 4 days to go, we are now short just £34,000!
We also still have £6,000 worth of matching funds available for those who haven't given more than £1,000 to GWWC before and donate £1,000-£5,000 before next Tuesday! (For those who are asking, 2 of the matchers I think wouldn't have given otherwise and 2 I would guess would have.)
If you've been one of those holding out to see if we would easily reach the goal, now's the time to pitch in to ensure Giving What We Can can continue to achieve its vision of making effective giving the societal default and move millions more to GiveWell-recommended and other high impact organisations.
So please give now or email me for our bank details: robert [dot] wiblin [at] centreforeffectivealtruism [dot] org.
If you want to learn more, please see this more complete explanation for why we might be the highest impact place you can donate. This fundraiser has also been discussed on LessWrong before, as well as the Effective Altruist forum.
Thanks so much!
Lesswrong, Effective Altruism Forum and Slate Star Codex: Harm Reduction
Cross Posted at the EA Forum
At Event Horizon (a Rationalist/Effective Altruist house in Berkeley) my roommates yesterday were worried about Slate Star Codex. Their worries also apply to the Effective Altruism Forum, so I'll extend them.
The Problem:
Lesswrong was for many years the gravitational center for young rationalists worldwide, and it permits posting by new users, so good new ideas had a strong incentive to emerge.
With the rise of Slate Star Codex, the incentive for new users to post content on Lesswrong went down. Posting at Slate Star Codex is not open, so potentially great bloggers are not incentivized to come up with their ideas, but only to comment on the ones there.
The Effective Altruism forum doesn't have that particular problem. It is however more constrained in terms of what can be posted there. It is after all supposed to be about Effective Altruism.
We thus have three different strong attractors for the large community of people who enjoy reading blog posts online and are nearby in idea space.
Possible Solutions:
(EDIT: By possible solutions I merely mean to say "these are some bad solutions I came up with in 5 minutes, and the reason I'm posting them here is because if I post bad solutions, other people will be incentivized to post better solutions")
If Slate Star Codex became an open blog like Lesswrong, more people would consider transitioning from passive lurkers to actual posters.
If the Effective Altruism Forum got as many readers as Lesswrong, there could be two gravity centers at the same time.
If the moderation and self selection of Main was changed into something that attracts those who have been on LW for a long time, and discussion was changed to something like Newcomers discussion, LW could go back to being the main space, with a two tier system (maybe one modulated by karma as well).
The Past:
In the past there was Overcoming Bias, and Lesswrong in part became a stronger attractor because it was more open. Eventually lesswrongers migrated from Main to Discussion, and from there to Slate Star Codex, 80k blog, Effective Altruism forum, back to Overcoming Bias, and Wait But Why.
It is possible that Lesswrong had simply exhausted its capacity.
It is possible that a new higher tier league was needed to keep post quality high.
A Suggestion:
I suggest two things should be preserved:
Interesting content being created by those with more experience and knowledge who have interacted in this memespace for longer (part of why Slate Star Codex is powerful), and
The opportunity (and total absence of trivial inconveniences) for new people to try creating their own new posts.
If these two properties are kept, there is a lot of value to be gained by everyone.
The Status Quo:
I feel like we are living in a very suboptimal blogosphere. On LW, Discussion is more read than Main, which means what is being promoted to Main is not attractive to the people who are actually reading Lesswrong. The top tier quality for actually read posting is dominated by one individual (a great one, but still), disincentivizing high quality posts by other high quality people. The EA Forum has high quality posts that go unread because it isn't the center of attention.
Taking Effective Altruism Seriously
Epistemic status: 90% confident.
Inspiration: Arjun Narayan, Tyler Cowen.
The noblest charity is to prevent a man from accepting charity, and the best alms are to show and enable a man to dispense with alms.
Background
Effective Altruism (EA) is "a philosophy and social movement that applies evidence and reason to determine the most effective ways to improve the world." Along with the related organisation GiveWell, it often focuses on getting the most "bang for your buck" in charitable donations. Unfortunately, despite their stated aims, their actual charitable recommendations are generally wasteful, such as cash transfers to poor Africans. This leads to the obvious question - how can we do better?
Doing better
One of the positive aspects of EA theory is its attempt to widen the scope of altruism beyond the traditional - for instance, to take into account catastrophic risks and the far future. However, altruism often produces a far-mode bias where intentions matter above results. This can be a particular problem for EA - for example, it is very hard to get evidence about how we are affecting the far future. An effective method needs to rely on a tight feedback loop between action and results, so that continual updates are possible. At the extreme, Far Mode operates in a manner where no updating on results takes place at all. However, it is also important that those results are of sufficient magnitude to justify the effort. EA has mostly fallen into the latter trap - achieving measurable results, but ones of no great consequence.
The population of sub-Saharan Africa is around 950 million people, and growing. They have been a prime target of aid for generations, but it remains the poorest region of the world. Providing cash transfers to them mostly just raises consumption, rather than substantially raising productivity. A truly altruistic program would enable the people in these countries to generate their own wealth so that they no longer needed charity - unconditional transfers, by contrast, are an idea so lazy even Bob Geldof could stumble on it. The only novel thing about the GiveWell program is that the transfers are in cash.
Unfortunately, no-one knows how to turn poor African countries into productive Western ones, short of colonization. The problem is emphatically not a shortage of capital, but rather low productivity, and the absence of effective institutions in which that capital can be deployed. Sadly, these conditions and institutions cannot simply be transplanted into those countries.
A greater charity
However, there do exist countries with high productivity, and effective institutions in which that capital can be deployed. That capital then raises world productivity. As F.A. Harper wrote:
Savings invested in privately owned economic tools of production amount to... the greatest economic charity of all.
That is because those tools increase the productivity of labour, and so raise output. The pie has grown. Moreover, the person who invests their portion of the pie into new capital is particularly altruistic, both because they are not taking a share themselves, and because they are making a particularly large contribution to future pies.
In the same way that using steel to build tanks means (on the margin) fewer cars and vice-versa, using craftsmen to build a new home means (on the margin) fewer factories and vice-versa. Investment in capital is foregone consumption. Moreover, you do not need to personally build those economic tools; rather, you can part-finance a range of those tools by investing in the stock market, or other financial mechanisms.
Now, it's true that little of that capital will be deployed in sub-Saharan Africa at present, due to the institutional problems already mentioned. Investing in these countries will likely lead to your capital being stolen or becoming unproductive - the same trap that prevents locals from advancing equally prevents foreign investors from doing so. However, if sub-Saharan Africa ever does fix its culture and institutions, then the availability of that capital will then serve to rapidly raise productivity and then living standards, much as is taking place in China. Moreover, by making the rest of the world richer, this increases the level of aid other countries could provide to sub-Saharan Africa in future, should this ever be judged desirable. It also serves to improve the emigration prospects of individuals within these countries.
Feedback
Another great benefit of capital investment is the sharp feedback mechanism. The market economy in general, and financial markets in particular, serve to redistribute capital from ineffective to effective ventures, and from ineffective to effective investors. As a result, it is no longer necessary to make direct (and expensive) measurements of standards of living in sub-Saharan Africa; as long as your investment fund is gaining in value, you can rest safe in the knowledge that its growth is contributing, in a small way, to future prosperity.
Commitment mechanisms
However, if investment in capital is foregone consumption, then consumption is foregone investment. If I invest in the stock market today (altruistic), then in ten years' time spend my profits on a bigger house (selfish), then some of the good is undone. So the true altruist will not merely create capital, he will make sure that capital will never get spent down. One good way of doing that would be to donate to an institution likely to hold onto its capital in perpetuity, and likely to grow that capital over time. Perhaps the best example of such an institution would be a richly-endowed private university, such as Harvard, which has existed for almost 400 years and is said to have an endowment of $32 billion.
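The perpetuity argument above rests on compounding. A minimal sketch, assuming a hypothetical 5% real return (an assumption for illustration, not a claim about Harvard's actual returns):

```python
def endowment_value(principal, real_return, years):
    """Value of capital left invested and compounding,
    rather than being spent down along the way."""
    return principal * (1 + real_return) ** years

# $1m left compounding at a 5% real return for 50 years
# grows to roughly 11.5x the original sum:
print(round(endowment_value(1_000_000, 0.05, 50)))
```

On this (admittedly idealised) model, the difference between capital held in perpetuity and capital consumed within a generation is not marginal but an order of magnitude.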
John Paulson recently gave Harvard $400 million. Unfortunately, this meant he came in for a torrent of criticism from people claiming he should have given the money to poor Africans, etc. I hope to see Effective Altruists defending him, as he has clearly followed through on their concepts in the finest way.
Further thoughts and alternatives
- Some people say that we are currently going through a "savings glut" in which capital is less productive than previously thought. In this case, it may be that Effective Altruists should focus on funding (and becoming!) successful entrepreneurs in different spaces.
- I am sympathetic to the Thielian critique that innovation is being steadily stifled by hostile forces. I view the past 50 years, and the foreseeable future, as a race between technology and regulation, which technology is by no means certain to win. It may be that Effective Altruists should focus on political activity, to defend and expand economic liberty where it exists - this is currently the focus of my altruism.
- However, government is not the enemy; rather, the enemy is the cultural beliefs and conditions that create a demand for the destruction of economic liberty. To the extent this critique holds, it may be that Effective Altruists should focus on promoting a pro-innovation and pro-liberty mindset; for example, through movies and novels.
Conclusion
Giving What We Can needs your help!
As you probably know, Giving What We Can exists to move donations to the charities that can most effectively help others. Our members take a pledge to give 10% of their incomes for the rest of their lives to the most impactful charities. Along with other extensive resources for donors such as GiveWell and OpenPhil, we produce and communicate, in an accessible way, research to help members determine where their money will do the most good. We also impress upon members and the general public the vast differences between the best charities and the rest.
Many LessWrongers are members or supporters, including of course the author of Slate Star Codex. We also recently changed our pledge so that people could give to whichever cause they felt best helped others, such as existential risk reduction or life extension, depending on their views. Many new members now choose to do this.
What you might not know is that 2014 was a fantastic year for us - our rate of membership growth more than tripled! Amazingly, our 1066 members have now pledged over $422 million, and already given over $2 million to our top rated charities. We've accomplished this on a total budget of just $400,000 since we were founded. This new rapid growth is thanks to the many lessons we have learned by trial and error, and the hard work of our team of staff and volunteers.
To make it to the end of the year we need to raise just another £110,000. Most charities have a budget in the millions or tens of millions of pounds and we do what we do with a fraction of that.
We want to raise the money as quickly as possible, so that our staff can stop focusing on fundraising (which takes up a considerable amount of energy), and get back to the job of growing our membership.
Some of our supporters are willing to sweeten the deal as well: if you haven't given us more than £1,000 before, then they'll match 1:1 a gift between £1,000 and £5,000.
You can give now or email me (robert dot wiblin at centreforeffectivealtruism dot org) for our bank details. Info on tax deductible giving from the USA and non-UK Europe are also available on our website.
What we are doing this year
The second half of this year is looking like it will be very exciting for us. Four books about effective altruism are being released this year, including one by our own trustee William MacAskill, which will be heavily promoted in the US and UK. The Effective Altruism Summit is also turning into 'EA Global' with events at Google Headquarters in San Francisco, Oxford University and Melbourne, headlined by Elon Musk.
Tens, if not hundreds of thousands of people will be finding out about our philosophy of effective giving for the first time.
To do these opportunities justice Giving What We Can needs to expand its staff to support its rapidly growing membership and local chapters, and ensure we properly follow up with all prospective members. We want to take people who are starting to think about how they can best make the world a better place, and encourage them to make a serious long-term commitment to effective giving, and help them discover where their money can do the most good.
Looking back at our experience over the last five years, we estimate that each $1 given to Giving What We Can has already moved $6, and will likely end up moving between $60 and $100 to the most effective charities in the world. (These are time-discounted, counterfactual donations, only to charities we regard very highly. Check out this report for more details.)
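As a sanity check on leverage figures like these, here is a toy model of discounted, counterfactually-adjusted money moved per dollar of budget. Every number in it is invented for illustration - the budget, donation stream, 5% discount rate and 80% counterfactual share are assumptions, not figures from the report.

```python
def money_moved_multiplier(budget, yearly_donations, discount_rate=0.05,
                           counterfactual_share=0.8):
    """Dollars moved per dollar of organisational budget: each year's
    donations are scaled by the share that wouldn't have happened anyway,
    then discounted back to the present."""
    total = sum(
        amount * counterfactual_share / (1 + discount_rate) ** year
        for year, amount in enumerate(yearly_donations)
    )
    return total / budget

# Hypothetical: a $400k budget moves $1m/year of pledges for 5 years.
print(round(money_moved_multiplier(400_000, [1_000_000] * 5), 1))  # -> 9.1
```

Plugging in different assumptions shows how sensitive such multipliers are to the counterfactual share and to how long pledged giving is assumed to persist.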
This represents a great return on investment, and I would be very sad if we couldn't take these opportunities just because we lacked the necessary funding.
Our marginal hire
If we don't raise this money we will not have the resources to keep on our current Director of Communications. He has invaluable experience as a Communications Director for several high-profile Australian politicians, which has given him skills in web-development, public relations, graphic design, public speaking and social media. Amongst the things he has already achieved in his three months here are: automating the book-keeping on our Trust (saving huge amounts of time and minimising errors), greatly improving our published materials including our fundraising prospectus, and writing a press release and planning a media push to capitalise on our reaching 1,000 members and Peter Singer’s book release in the UK.
His wide variety of skills means that there are a large number of projects he would be capable of doing which would increase our member growth, and we are keen for him to test a number of these. His first project would be to optimise our website to make the most of the increased attention effective altruism will be generating over the summer and turn that into people actually donating 10% of their incomes to the most effective causes. In the past we have had trouble finding someone with such a broad set of crucial skills. Combined with how swiftly and well he has integrated into our team, it would be a massive loss to have to let him go and later down the line need to try to recruit a replacement.
As I wrote earlier you can give now or email me (robert dot wiblin at centreforeffectivealtruism dot org) for bank details or personalised advice on how to give best. If you need tax deductibility in another country check these pages on the USA and non-UK Europe.
I'm happy to take questions here or by email!
What degree of cousins are you and I? Estimates of Consanguinity to promote feelings of kinship and empathy
Epistemic status: Wild guesswork based on half-understood studies from way outside my field. More food for thought than trustworthy information.
tl;dr: Estimates of familial relatedness between people should help promote empathy, so here's how to make them - and might this be useful for Effective Altruism?
The why
I don't know how it is for you, but for me, knowing I'm related to someone makes a specific emotional difference. Scenario: I'm at a big family-and-friends get-together, I meet a guy, we get along. (For clarity, let's assume no sexual tension.) And then we're told we're third cousins via some weird aunt. From the moment I'm told, I feel different towards him. Firm, forthcoming, obliging. Some kind of basic kinship emotion, I guess, noticeable when it shifts on these rare occasions but basically going on, deep down in System 1, every time that emailing a remote uncle feels different from emailing a similarly remote associate.
Meanwhile, my System 2 has heard that all humans are at least 50th degree cousins and likes to point out everyone I've ever had sex with was a cousin of some degree. That similarly remote associate where I don't have that kinship feeling - he's a relative too, just a more distant one. And when I notice that, I get a bit of that kinship feeling too...
With me so far? Here's my thesis: the two human feelings of kinship and empathy are closely connected, and to make one of them more salient is to increase the salience of the other.
I don't think this has been tested properly. A. J. Jacobs, who is running a huge family reunion event in New York this summer, said "some ambitious psychology professor needs to conduct a study about whether we deliver lower electrical shocks to people if we know we’re related" and I think he's exactly right.
Has anybody here not heard of circles of empathy? They're a concept invented by the very cool 19th century rationalist William Edward Hartpole Lecky in his "History of European Morals From Augustus to Charlemagne". Peter Singer summarizes it as follows:
Lecky wrote of human concern as an expanding circle which begins with the individual, then embraces the family and ‘soon the circle... includes first a class, then a nation, then a coalition of nations, then all humanity, and finally, its influence is felt in the dealings of man [sic] with the animal world’.
There's more to read about this in Peter Singer's "The Expanding Circle" or Steven Pinker's "The Better Angels of Our Nature", but what strikes me about it is contained in that single sentence: The expansion that is described tracks actual genetic relatedness, or Consanguinity. The list goes down a gradient of (expected) genetic relatedness. This makes the size of the circle of empathy seem to depend on a threshold of how related you need to be to someone in order to care about them.
(Note that Lecky published his "History of European Morals" - with this inclusion of concern about animals - in 1869, i.e. only ten years after the publication of "On the Origin of Species". There was some animal rights legislation before Darwin, but animal rights as a movement only arose after we knew animals to be our relatives.)
On the other hand, those who would promote empathy have always relied on familial vocabulary, chiefly "brother" and "sister", to refer to people who evidently weren't actual brothers or sisters. Martin Luther King, Jesus, the Buddha, Mandela, Gandhi, they all do this. So maybe it works a bit. Maybe it helps trigger that emotional kinship response and that somehow helps people get along.
Now to see how these emotional responses would arise, we could discuss reciprocal altruism and gene-centered Darwinism and whatnot, but "The Selfish Gene" is required reading anyway and I assume you've done your homework. I'd like to instead go to the second part of my thesis, the one about increasing salience.
Recognizing you're related to somebody does something. (Especially if you have an incest fetish, of course.) I propose that whatever it does increases empathy. And empathy might not be a categorically good thing, but it comes pretty close, at least until you extend it to all food groups. So maybe we could increase empathy among people by pointing out their relatedness. And maybe we can do this more vividly, more strikingly than by simply saying "we're all descended from apes, so we're all related, duh" or by boring the non-nerd majority to death with talk of human genetic clustering and fixation indexes.
So I'd like to revisit that "brothers and sisters" thing from MLK and those other guys. Maybe they shouldn't have used figurative language. Maybe a more lasting feeling of kinship can be created by literal language: By telling people how related they are. Detailed ancestry information is being collected at various Wiki-like sites, but even assuming they'll grow and become less US-centric, they don't go back very far (except around very famous people) and what came before remains guesswork. So let's do some Fermi-ish estimates.
The how
The drop-dead amazing Nature article Modelling the recent common ancestry of all living humans is way too careful and scientific to put an exact number on how long ago the last common ancestor lived, unfortunately. But the mean date their simulations come up with is 1415 BC, which is approximately 120 generations ago, so let's say really remote people like the Karitiana tribe are, at most, something like 125th-degree cousins of all of us. So that's a useful upper bound for the degree of cousinhood between any two arbitrary humans, such as you and me.
The lower bound could be something like 3 - if you and I were that closely related, we'd share a great-great-grandparent and could probably ascertain rather than guess that. With fairly extensive genealogy, the lower bound might go up to around 5 - the level where we'd need to look at 64 ancestors for each of us, people who lived in the middle of the 19th century and failed to use Facebook. We'd find it hard to ascertain whether your great-great-great-great-grandmother Mary was identical to mine.
There are a lot of special cases where the lower bound can be higher. If both people involved know that their families more than 3 generations back were deep-rooted peasant folk from two distinct populations, the history books might tell them how many centuries further back are very unlikely to contain a common ancestor. (This will of course be much rarer among descendants of immigrants, like Americans, than it is for citizens of older or more rural countries.) If they're of different ethnicities, castes or classes that wouldn't normally date each other 80 years ago, the lower bound should probably go up a few more generations. If both people involved are Icelanders, they can just look up their last common ancestor in the comprehensive Icelandic family tree. But let's assume you and I don't have any of these special cases, and we're stuck with a lower bound of 3. Now between that and 125, how do we narrow it down?
Turns out the authors of that gorgeous Nature paper don't hand out access to their simulations to random dudes who just email them. So let's see how far we get the hard way.
In a completely random mating model (where people do not tend to mate with people who happen to live near them, i.e. who happen to be descendants of the same people), your number of ancestors doubles with every generation you go back, in a sort of ancestor tree that grows backwards. We're looking for the point where the two ancestor trees first meet. If we assume generations have homogeneous lengths (which implies further simplifying assumptions, like moms and dads being the same age) and further assume only people from within the same generation have kids with each other, cousins of the Nth degree have a common ancestor N+1 generations ago, and each has 2^(N+1) ancestors belonging to that generation.
This means that for you and me to be, say, 15th-degree cousins, our two sets of 2^(15+1) = 65536 ancestors have to have one person in common, some 480 years ago, assuming 30 years as mean parenthood age. Of course we each probably have fewer than 65536 unique ancestors due to... um... "reticulations".
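The toy model above can be sketched in a few lines. This is just the simplified arithmetic from the text (the 30-year generation length and the 2^(N+1) formula are its stated simplifying assumptions, not real demography):

```python
# Toy random-mating model from the text: Nth-degree cousins share an
# ancestor N+1 generations back, and each person has 2^(N+1) ancestors
# in that shared generation.

GENERATION_YEARS = 30  # mean parenthood age assumed in the text

def ancestors_at(degree):
    """Ancestors each person has in the generation of the shared
    ancestor, for cousins of the given degree."""
    return 2 ** (degree + 1)

def years_back(degree):
    """How long ago that shared generation lived."""
    return (degree + 1) * GENERATION_YEARS

# 15th-degree cousins: 65536 ancestors each, about 480 years ago.
print(ancestors_at(15), years_back(15))
```

Running it for degree 15 reproduces the 65536 ancestors and 480 years from the paragraph above; pushing the degree past ~30 makes the ancestor count exceed any historical world population, which is why the model breaks down and "reticulations" take over.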
But empirically, it seems that "a pair of modern Europeans living in neighboring populations share around 2-12 genetic common ancestors from the last 1,500 years", and even individuals from opposite ends of Europe will normally have common ancestors if you search back 3,000 years (source). That isn't what you get from the simplistic model above - the numbers of ancestors it calculates already exceed the world population fewer than 32 generations (about 800 years) ago. The empirical genetic data from this paper would indicate that the median first common ancestor between me and anybody in central Europe probably lived around 1,200 years (or 40 generations) ago, and that any two people anywhere in Europe are probably at most 100th-degree cousins.
Around 600 years ago is a good time to look at, because that's shortly before intercontinental travel started to intricately connect all regions of the world, including genetically. If most of your 600-years-ago ancestors lived outside Europe, you and I might still be less than 25th-degree cousins - maybe you have some ancestor who left for Europe 300 years ago, leaving siblings behind (your ancestors) and having kids in Europe (mine). Or vice versa. But that kind of thing is unlikely, and since we're doing rough estimates I suggest we round that probability down to zero.
In genetic studies, no other continent is anywhere near as well studied as Europe, so I guess we'll just have to roll with it and assume that other places are about the same as this paper found, and that the nice exponential drop-off with geographic distance that holds in Europe also holds elsewhere. America and Australia, as continents of immigrants, continue to be special cases. But for two people with families from, say, West Africa, I'd be comfortable assuming that if they're from roughly the same large region (say around the Bight of Benin) they're probably something like 40th-degree cousins, and if not, they're still something like 100th-degree cousins at least.
It gets only slightly more complicated if the set of ancestors you know - say your four grandparents - are a mix of descendants from different regions or continents. Just add the number of generations between you and them to your expected degree of cousinhood to everybody from that region or continent.
Needless to say these are all wild guesses. I'm basically hoping someone more qualified than me will see this and be horrified enough to go do the job properly.
Now I'm not an American, but statistically you probably are, and you might be more interested in knowing how closely you're related to other Americans - your boss, your sexual partners, or Mel Gibson. The bad news is that as a member of a nation of relatively recent immigrants, and particularly if your ancestors didn't all come from different continents, you have a harder time estimating most recent common ancestors than most other people on Earth. The good news, however, is that the data collected at the large ancestry sites - ancestry.com, FamilySearch.org, Geni.com and WikiTree.com - are all growing fastest in the US-centric parts of their "world trees".
For cousinhood between people whose ancestors seem to have lived on entirely separate continents as far as anyone knows, I think we can only fall back on our upper bound of 125 degrees of cousinhood. Things get fuzzy that far back: the world population was much smaller, and the population of those who have descendants living today is smaller still. Shared ancestry within any particular generation remains unlikely, but over the centuries and millennia, between trade (particularly in slaves), the various empires and the mass rapes of warfare, genes did get mixed around. Again, see that spectacular Nature paper if you still haven't.
Side note: The most recent common ancestor of two arbitrarily chosen people on different continents is likely to be someone who had kids on different continents. So it is probably a very rich person, a sailor or a soldier, i.e. a male. In general, the number of unique males in anybody's ancestor tree will likely be much smaller than the number of unique females. I expect the difference will be sharper in most recent common ancestors of humans from different continents, because women have shorter fertility windows inside which to travel intercontinentally and don't seem to have moved nearly as much as men except as slaves.
The point of all this is simple. Now you can look at somebody and figure she's not only your cousin - you even have a guess as to what degree of cousin she is. I like to do that when I'm angry with people, because for me, it makes a distinct emotional difference. Maybe try it and see whether it works for you too.
Relation to the care allocation problem
I suspect this cousinhood thing could be a fairly principled solution to the problem of how to allocate caring between humans and animals, which Yvain/Scott laid out in a recent SSC post. Why not go by actual (known or estimated) blood relations, and privilege closer relatives over more distant ones?
Our last common ancestor with chimps lived something like 5 to 6 million years ago, so our ancestor trees merge about 250,000 (human) generations ago, making chimps something like quarter-million-degree cousins of all of us. Generations get a lot shorter further back, so our last common ancestor with cattle and dogs, about 92 million years ago, may be 30 million generations ago. Birds are much more distant still - our last common ancestor with them lived around 310 million years ago - and so forth. (Richard Dawkins' The Ancestor's Tale has much more on this.) For me, this maps rather nicely onto my intuitive prejudices about how much I should care about which creatures. It fails to capture my caring for plants far more than I care for bacteria, but EA has nothing to improve on in that department.
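Put as a table, the estimates above give a crude "cousinhood ladder" (all figures are the rough generation counts quoted in the text; at these scales the cousin degree is essentially the same number):

```python
# Rough generations back to the last common ancestor, using the
# approximate figures from the text - not real phylogenetic data.
generations_to_lca = {
    "any living human": 125,        # upper bound from the Nature simulations
    "chimpanzee": 250_000,          # ~5-6 million years of human-length generations
    "cattle and dogs": 30_000_000,  # ~92 Myr; generations shorten further back
}

for relative, gens in generations_to_lca.items():
    print(f"{relative}: last common ancestor ~{gens:,} generations back")
```

The three-orders-of-magnitude jumps between rungs are what would make a "closeness multiplier" based on consanguinity produce very different caring weights for humans, chimps and cattle.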
If EA has to have impartiality in the sense that your neighbor can't be more important to you than a tribesman in Mongolia, this isn't EA. Quoth Yvain:
allowing starving Third World people into the circle of concern totally pushes out most First World charities like art museums and school music programs and holiday food drives. This is a scary discovery and most people shy away from it. Effective altruists are the people who are selected for not having shied away from it.
So anybody trying to grow EA might want to make that step easier. Maybe a "closeness multiplier" on units of caring works better than a series of unprincipled exceptions, and still gets across the idea that units of caring are to be distributed between everybody (or everybody's QALYs), if unevenly. And then to become more impartial would be to have that multiplier approach 1.
And if that were the case, my personal preference for how to design that multiplier would be that it shouldn't rely on arbitrary constructs like citizenships. Maybe if EAs want to find a principled solution to the care allocation problem, consanguinity should be one of the options.
Log-normal Lamentations
[Morose. Also very roughly drafted.]
Normally, things are distributed normally. Human talents may turn out to be one of these things. Some people are lucky enough to find themselves on the right side of these distributions – smarter than average, better at school, more conscientious, whatever. To them go many spoils – probably more so now than at any time before, thanks to the information economy.
There’s a common story told about a hotshot student at school whose ego crashes to earth when they go to university and find themselves among a group all as special as they thought they were. The reality might be worse: many of the groups the smart or studious segregate into (physics professors, Harvard undergraduates, doctors) have threshold (or near-threshold) effects: only those with straight A’s, only those with IQs > X, etc. need apply. This introduces a positive skew to the population: most (and the median) are below the average, which is pulled up by a long tail of the (even more) exceptional. Instead of comforting ourselves by looking at the entire population, to which we compare favorably, most of us will look around our peer group, find ourselves in the middle, and have to look a long way up to the best. 1
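The thresholding effect is easy to simulate. A minimal sketch (the "IQ-like" parameters and the cutoff of 130 are arbitrary illustration, not a claim about any real admission process):

```python
# Select everyone above a cutoff from a normal population: the
# survivors form a positively skewed group whose mean sits above
# its median - so most members are below their group's average.
import random
import statistics

random.seed(0)
population = [random.gauss(100, 15) for _ in range(100_000)]
elite = [x for x in population if x > 130]  # threshold admission

mean = statistics.mean(elite)
median = statistics.median(elite)
print(f"group mean {mean:.1f} vs group median {median:.1f}")
# In a right-tail-selected group, the mean exceeds the median.
```

So even though every member cleared the same high bar, more than half of them sit below their own group's average: the "hotshot student" story, in three lines of statistics.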
Yet part of growing up is recognizing there will inevitably be people better than you are – the more able may be able to buy their egos time, but no more. But that needn’t be so bad: in several fields (such as medicine) it can be genuinely hard to judge ‘betterness’, and so harder to find exemplars to illuminate your relative mediocrity. Often there are a variety of dimensions to being ‘better’ at something: although I don’t need to try too hard to find doctors who are better at some aspect of medicine than I am (more knowledgeable, kinder, more skilled in communication etc.), it is mercifully rare to find doctors who are better than me in all respects. And often the tails are thin: if you’re around 1 standard deviation above the mean, people many times further from the average than you are will still be extraordinarily rare, even if you had a good yardstick by which to compare them to yourself.
Look at our thick-tailed works, ye average, and despair! 2
One nice thing about the EA community is that they tend to be an exceptionally able bunch: I remember being in an ‘intern house’ that housed the guy who came top in philosophy at Cambridge, the guy who came top in philosophy at Yale, and the guy who came top in philosophy at Princeton – and although that isn’t a standard sample, we seem to be drawn disproportionately not only from those who went to elite universities, but those who did extremely well at elite universities. 3 This sets the bar very high.
Many of the ‘high impact’ activities these high achieving people go into (or aspire to go into) are more extreme than normal(ly distributed): log-normal commonly, but it may often be Pareto. The distribution of income or outcomes from entrepreneurial ventures (and therefore upper-bounds on what can be ‘earned to give’), the distribution of papers or citations in academia, the impact of direct projects, and (more tenuously) degree of connectivity or importance in social networks or movements would all be examples: a few superstars and ‘big winners’, but orders of magnitude smaller returns for the rest.
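A quick simulation shows why these distributions feel so discouraging compared to a normal one. The parameters here are purely illustrative (a log-normal with sigma = 2, which is in the rough ballpark of income-like distributions, not data about any actual EA field):

```python
# Draw "impact" from a heavy-tailed log-normal and see how much of
# the total the top 1% capture - the "few superstars, orders of
# magnitude smaller returns for the rest" pattern.
import random

random.seed(1)
impacts = sorted((random.lognormvariate(0, 2) for _ in range(10_000)),
                 reverse=True)

top_1pct_share = sum(impacts[:100]) / sum(impacts)
print(f"top 1% hold {top_1pct_share:.0%} of the total")
```

Under these assumptions the top hundredth of the population holds a large double-digit share of all the impact, while the median participant's contribution is a sliver of the mean - exactly the gap between the "average" earner-to-give and the superstar described above.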
Insofar as I have an ‘EA career path’, mine is earning to give: if I were trying to feel good about the good I was doing, my first port of call would be my donations. In sum, I’ve given quite a lot to charity – ~£15,000 and counting – which I’m proud of. Yet I’m no banker (or algo-trader) – those who are really good (or lucky, or both) can leave university with higher starting salaries than my peak expected salary, and so can give away more than ten times what I will be able to. I know several of these people, and the running tally of each of their donations is often around ten times my own. If they or others become even more successful in finance, or very rich starting a company, there might be several more orders of magnitude between their giving and mine. My contributions may be little more than a rounding error to their work.
A shattered visage
Earning to give is kinder to the relatively minor players than other ‘fields’ of EA activity, as even though Bob’s or Ellie’s donations are far larger, they do not overdetermine my own: that their donations dewormed 1000x children does not make the 1x I dewormed any less valuable. It is unclear whether this applies to other ‘fields': Suppose I became a researcher working on a malaria vaccine, but this vaccine is discovered by Sally the super scientist and her research group across the world. Suppose also that Sally’s discovery was independent of my own work. Although it might have been ex ante extremely valuable for me to work on malaria, its value is vitiated when Sally makes her breakthrough, in the same way a lottery ticket loses value after the draw.
So there are a few ways an Effective Altruist mindset can depress our egos:
- It is generally a very able and high achieving group of people, setting the ‘average’ pretty high.
- ‘Effective Altruist’ fields tend to be heavy-tailed, so that being merely ‘average’ (for EAs!) in something like earning to give means having a much smaller impact when compared to one of the (relatively common) superstars.
- (Our keenness for quantification makes us particularly inclined towards and able to make these sorts of comparative judgements, ditto the penchant for taking things to be commensurate).
- Many of these fields have ‘lottery-like’ characteristics where ex ante and ex post value diverge greatly. ‘Taking a shot’ at being an academic or entrepreneur or politician or leading journalist may be a good bet ex ante for an EA because the upside is so high even if their chances of success remain low (albeit better than the standard reference class). But if the median outcome is failure, the majority who will fail might find the fact it was a good idea ex ante of scant consolation – rewards (and most of the world generally) run ex post facto.
What remains besides
I haven’t found a ready ‘solution’ for these problems, and I’d guess there isn’t one to be found. We should be sceptical of ideological panaceas that can do no wrong and everything right, and EA is no exception: we should expect it to have some costs, and perhaps this is one of them. If so, better to accept it rather than defend the implausibly defensible.
In the same way I could console myself, on confronting a generally better doctor: “Sure, they are better at A, and B, and C, … and Y, but I’m better at Z!”, one could do the same with regard to the axes of one’s ‘EA work’. “Sure, Ellie the entrepreneur has given hundreds of times more money to charity, but what’s she like at self-flagellating blog posts, huh?” There’s an incentive to diversify as (combinatorically) it will be rarer to find someone who strictly dominates you, and although we want to compare across diverse fields, doing so remains difficult. Pablo Stafforini has asked elsewhere whether EAs should be ‘specialising’ more instead of spreading their energies over disparate fields: perhaps this makes that less surprising. 4
Insofar as people’s self-esteem is tied up with their work as EAs (and, hey, shouldn’t it be, in part?), there perhaps is a balance to be struck between soberly and frankly discussing the outcomes and merits of our actions, and being gentle to avoid hurting our peers by talking down their work. Yes, we would all want to know if what we were doing was near useless (or even net negative), but this should be broken with care. 5
‘Suck it up’ may be the best strategy. These problems become more acute the more we care about our ‘status’ in the EA community; the pleasure we derive from not only doing good, but doing more good than our peers; and our desire to be seen as successful. Good though it is for these desires to be sublimated to better ends (far preferable all else equal that rivals choose charitable donations rather than Veblen goods to be the arena of their competition), it would be even better to guard against these desires in the first place. Primarily, worry about how to do the most good. 6
Notes:
- As further bad news, there may be progression of ‘tiers’ which are progressively more selective, somewhat akin to stacked band-pass filters: even if you were the best maths student at your school, then the best at university, you may still find yourself plonked around median in a positive-skewed population of maths professors – and if you were an exceptional maths professor, you might find yourself plonked around median in the population of fields medalists. And so on (especially – see infra – if the underlying distribution is something scale-free). ↩
- I wonder how much this post is a monument to the grasping vaingloriousness of my character… ↩
- Pace: academic performance is not the only (nor the best) measure of ability. But it is a measure, and a fairly germane one for the fairly young population ‘in’ EA. ↩
- Although there are other more benign possibilities, given diminishing marginal returns and the lack of people available. As a further aside, I’m wary of arguments/discussions that note bias or self-serving explanations that lie parallel to an opposing point of view (“We should expect people to be more opposed to my controversial idea than they should be due to status quo and social desirability biases”, etc.) First because there are generally so many candidate biases available they end up pointing in most directions; second because it is unclear whether knowing about or noting biases makes one less biased; and third because generally more progress can be made on object level disagreement than on trying to evaluate the strength and relevance of particular biases. ↩
- Another thing I am wary of is Crocker’s rules: the idea that you unilaterally declare: ‘don’t worry about being polite with me, just tell it to me straight! I won’t be offended’. Naturally, one should try and separate one’s sense of offense from whatever information was there – it would be a shame to reject a correct diagnosis of our problems because of how it was said. Yet that is very different from trying to eschew this ‘social formatting’ altogether: people (myself included) generally find it easier to respond well when people are polite, and I suspect this even applies to those eager to make Crocker’s Rules-esque declarations. We might (especially if we’re involved in the ‘rationality’ movement) want to overcome petty irrationalities like incorrectly updating on feedback because of an affront to our status or self esteem. Yet although petty, they are surprisingly difficult to budge (if I cloned you 1000 times and ‘told it straight’ to half, yet made an effort to be polite with the other half, do you think one group would update better?) and part of acknowledging our biases should be an acknowledgement that it is sometimes better to placate them rather than overcome them. ↩
- cf. Max Ehrmann, who put it well:
… If you compare yourself with others, you may become vain or bitter, for always there will be greater and lesser persons than yourself.
Enjoy your achievements as well as your plans. Keep interested in your own career, however humble… ↩
Effective effective altruism: Get $400 off your next charity donation
For those of you unfamiliar with churning, it's the practice of signing up for a rewards credit card, spending enough with your everyday purchases to get the (usually significant) reward, and then cancelling it. Many of these are cards with annual fees (which are commonly waived, or which the one-time reward will more than pay for). For a nominal amount of work, you can churn cards for significant bonuses.
Ordinarily I wouldn't come close to spending enough money to qualify for many of these rewards, but I recently made the Giving What We Can pledge. I now have a steady stream of predictable expenses, and conveniently, GiveWell allows donations via almost any credit card. I've started using new rewards cards to pay for these expenses each time, resulting in free flights (this is how I'm paying to fly to NYC this summer), Amazon gift cards, or sometimes just straight cash.
Since the first of the year (total expenses $4000, including some personal expenses) I've churned $700 worth of bonuses (from a Delta American Express Gold and a Capital One Venture card). This money can be redonated, saved, spent, or whatever.
Disclaimers:
1. I hope it goes without saying that you should pay off your balance in full each month, just like you should with any other card.
2. This has some negative impact on your credit, in the short term.
3. It should be noted that credit card companies take a cut of your transactions (I think around 3%), so if you're trying to hit a target of X% to charity, you would need to donate X/0.97 - e.g. 10.31% to net 10% - to account for that fee. The reward should more than cover it.
4. Read more about this, including the pros and cons, from multiple sources before you try it. It's not something that should be done lightly, but does synergize very nicely with charity donations.
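The gross-up arithmetic in point 3 is a one-liner (the flat 3% fee is the text's own guess, not a verified rate):

```python
# Donation rate needed so the charity still nets the target
# percentage after an assumed flat transaction fee.
FEE = 0.03  # the ~3% cut guessed at in disclaimer 3

def grossed_up(target_pct):
    """Percentage of income to donate so target_pct arrives after fees."""
    return target_pct / (1 - FEE)

print(round(grossed_up(10), 2))  # -> 10.31
```

So a 10%-of-income pledge becomes a 10.31% donation rate, which the churned rewards should comfortably cover.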
Effective Sustainability - results from a meetup discussion
Related-to Focus Areas of Effective Altruism
These are some small tidbits from our LW-like meetup in Hamburg. The focus was on sustainability rather than altruism, as that was more in the spirit of our group. EA was mentioned, but no comparison was made. Well-informed effective altruists will probably find little new in this writeup.
So we discussed effective sustainability. To this end we were primed to think rationally by my 11-year-old, who moderated a session on mind-mapping 'reason' (with contributions from the children). Then we set out to objectively compare concrete everyday things by their sustainability - and to work out how to do this.
Is it better to drink fruit juice or wine? Wine or water? Wine vs. nothing (i.e. to forgo something)? Or wine vs. paper towels? (The latter intentionally different.)
The idea was to arrive at simple rules of thumb for evaluating the sustainability of something. But we discovered that even simple comparisons are not that simple, and intuition can lead us astray (surprise!). One example was that apparently tote bags are not clearly better than plastic bags in terms of sustainability. Even the simple comparison of tap water vs. wine, which seems like a trivial case, is non-trivial when you consider where the water comes from and how it is extracted from the ground (we still think that water is better, but we are not as sure as before).
We discussed some ways to measure sustainability (in brackets: what we reduced each to):
- fresh water use -> energy
- packaging material used -> energy, permanent resources
- transport -> energy
- energy -> CO_2, permanent resources
- CO_2 production
- permanent consumption of resources
Life-Cycle Assessment (German: Ökobilanz) was mentioned in this context, but it was unclear what that meant precisely. Only afterwards did we discover that it's a blanket term for exactly this question (with lots of established measurement methods, though it is unclear how to simplify them for everyday use).
We didn't try to break this down further - a practical everyday approach doesn't allow for that, and the time spent on analysing and comparing options is itself equivalent to resources possibly not spent efficiently.
One unanswered question was how much time to invest in comparing alternatives. Too little comparison means taking the next-best option, which is what most people apparently do and which apparently doesn't lead to overall sustainable behavior. But too much analysis of simple decisions is not an option either.
The idea was still to arrive at actionable criteria. The first approximation we settled on was:
1) Forego consumption.
A no-brainer really, but maybe even that has to be stated. Instead of comparing options that are hard to compare, try to avoid consumption where you can. Water instead of wine or fruit juice or lemonade. This saves lots of cognitive resources.
Shortly after, we agreed on the second approximation:
2) Spend more time on optimizing the resources you consume in large amounts.
The example at hand was wine (which we consume only a few times a year) versus toilet paper... No need to feel remorse over the packaging of a one-time present.
Note that we mostly excluded personal well-being, happiness and hedons from our considerations. We were aware that our goals affect our choices and that hedons have to be factored into any real strategy, but we left this additional complication out of our analysis - at least for this time.
We did discuss signalling effects, mostly in the context of how effectively resources can be saved by convincing others to act sustainably. One important aspect for the parents was passing on the idea and acting as a role model (with the caveat that children need a simplified model to grasp the concept). It was also mentioned humorously that one approach to minimizing personal resource consumption is suicide - and, transitively, convincing others of the same, the ultimate 'solution' being no humans on the planet at all (a solution my 8-year-old son - a friend of nature - arrived at too). This is apparently the problem when utilons/hedons are excluded.
For a short time we considered whether outreach comes for free (can be done in addition to abstinence) and should be no-brainer number 3. But we then realized that, at least right now and for us, most abstinence comes at a price. It was quoted that buying sustainable products is about 20% more expensive than normal products. Forgoing e.g. a car reduces job options. Some jobs involve supporting less sustainable large-scale action. Having less money means fewer options to act sustainably. Time is convertible to money, and so on.
At this point the key insight mentioned was that it could be much more efficient, from a sustainability point of view, to e.g. buy CO_2 certificates than to buy organic products. Except that the CO_2 certificate market is currently oversupplied. But there seem to be organisations which promise to achieve effective CO_2 reduction in developing countries (e.g. solar cooking) at a much higher rate than can be achieved here. Thus the third rule was
3) Spend money on sustainable organisations instead of on everyday products that only give you a good feeling.
Shop for Charity: how to earn proven charities 5% of your Amazon spending in commission
If you shop on Amazon in the countries listed below, you can earn a substantial commission for charity by doing so via the links below. This is a cost-free way to do a lot of good, so I'd encourage you to do so! You can bookmark one of the direct links to Amazon below and then use that bookmark every time you shop.
The commission will be at least 5%, varying by product category. This is substantially better than the AmazonSmile scheme available in the US, which only gives 0.5% of the money you spend to charity. It works through Amazon's 'Associates Program', which pays this commission for referring purchasers to them, out of the unaltered purchase price (details here). It doesn't cost the purchaser anything. The money goes to Associates Program accounts owned by the EA non-profit Charity Science, and always gets regranted to GiveWell-recommended charities unless explicitly earmarked otherwise. For ease of administration and to get tax-deductibility, commission will get regranted to the Schistosomiasis Control Initiative until further notice.
Direct links to Amazon for your bookmarks
If you'd like to shop for charity, please bookmark the appropriate link below now:
From now through November 28: Black Friday Deals Week
Amazon's biggest cut price sale is this week. The links below take you to currently available deals:
Please share these links
I'll add other links on the main 'Shop for Charity' page later. I'd love to hear suggestions for good commission schemes in other countries. If you'd like to share these links with friends and family, please point them to this post or even better this project's main page.
Happy shopping!
'Shop for Charity' is a Charity Science project

The Centre for Effective Altruism is hiring to fill five roles in research, operations and outreach
The Centre for Effective Altruism, the group behind 80,000 Hours, Giving What We Can, the Global Priorities Project, Effective Altruism Outreach, and to a lesser extent The Life You Can Save and Animal Charity Evaluators, is looking to grow its team with a number of new roles:
- Giving What We Can: Director of Research
- Giving What We Can: Communications Manager
- 80,000 Hours: Head of Research
- Central CEA: Chief Operating Officer
- Global Priorities Project: Research Fellow (accepting expressions of interest at this point)
- We are also looking for 'graduate volunteers' for Giving What We Can in 2015, particularly over the summer
We are so keen to find great people that if you introduce us to someone new who we end up hiring, we will pay you $1,000 for the favour! If you know anyone awesome who would be a good fit for us please let me know: robert [dot] wiblin [at] centreforeffectivealtruism [dot] org. They can also book a short meeting with me directly.
We may be able to sponsor outstanding applicants from the USA.
Applications close Friday 5th December 2014.
Why is CEA an excellent place to work?
First and foremost, “making the world a better place” is our bottom line and central aim. We work on the projects we do because we think they’re the best way for us to make a contribution. But there’s more.
The specifics of what we are looking for depend on the role and details can be found in the job descriptions. In general, we're looking for people who have many of the following traits:
- Self-motivated, hard-working, and independent;
- Able to deal with pressure and unfamiliar problems;
- Have a strong desire for personal development;
- Able to quickly master complex, abstract ideas, and solve problems;
- Able to communicate clearly and persuasively in writing and in person;
- Comfortable working in a team and quick to get on with new people;
- Able to lead a team and manage a complex project;
- Keen to work with a young team in a startup environment;
- Deeply interested in making the world a better place in an effective way, using evidence and research;
- A good understanding of the aims of the Centre for Effective Altruism and its constituent organisations.
I hope to work at CEA in the future. What should I do now?
Of course this will depend on the role, but generally good ideas include:
- Study hard, including gaining useful knowledge and skills outside of the classroom.
- Degrees we have found provide useful training include: philosophy, statistics, economics, mathematics and physics. However, we are hoping to hire people from a more diverse range of academic and practical backgrounds in the future. In particular, we hope to find new members of the team who have worked in operations, or creative industries.
- Write regularly and consider starting a blog.
- Manage student and workplace clubs or societies.
- Work on exciting projects in your spare time.
- Found a start-up business or non-profit, or join someone else early in the life of a new project.
- Gain impressive professional experience in established organisations, such as those working in consulting, government, politics, advocacy, law, think-tanks, movement building, journalism, etc.
- Get experience promoting effective altruist ideas online, or to people you already know.
- Use 80,000 Hours' research to do a detailed analysis of your own future career plans.
A website standard that is affordable to the poorest demographics in developing countries?
Fact: the Internet is excruciatingly slow in many developing countries, especially outside of the big cities.
Fact: today's websites are designed in such a way that they become practically impossible to navigate with connections on the order of, say, 512 kbps. RAM below 4 GB and a 7-year-old CPU are also a guarantee of a terrible experience.
Fact: operating systems are usually designed in such an obsolescence-inducing way as well.
Fact: the Internet is a massive source of free-flowing information and a medium of fast, cheap communication and networking.
Conclusion: lots of humans in the developing world are missing out on the benefits of a technology that could be amazingly empowering and enlightening.
I just came across this: what would the internet 2.0 have looked like in the 1980s. This threw me back to my first forays in Linux's command shell and how enamoured I became with its responsiveness and customizability. Back then my laptop had very little battery life, and very few classrooms had power outlets, but by switching to pure command mode I could spend the entire day at school taking notes (in LaTeX) without running out. But I switched back to the GUI environment as soon as I got the chance, because navigating the internet on the likes of Lynx is a pain in the neck.
As it turns out, I'm currently going through a course on energy distribution in isolated rural areas in developing countries. It's quite a fascinating topic, because of the very tight resource margins, the dramatic impact of societal considerations, and the need to tailor the technology to the existing natural renewable resources. And yet, there's actually a profit to be made investing in these projects; if managed properly, it's win-win.
And I was thinking that, after bringing them electricity and drinkable water, it might make sense to apply a similar cost-optimizing, shoestring-budget mentality to the Internet. We already have mobile apps and mobile web standards which are built with the mindset of "let's make this smartphone's battery last as long as possible".
Even then, (well-to-do, smartphone-buying) third-worlders are somewhat neglected: Samsung and the like have special lines of cheap Android smartphones for Africa and the Middle East. I used to own one; "this cool app that you want to try out is not available for use on this system" was a misery I had to get used to.
It doesn't seem to be much of a stretch to do the same thing for outdated desktops. I've been in cybercafés in North Africa that still employ IBM Aptiva machines, mechanical keyboard and all—with a Linux operating system, though. Heck, I've seen town "pubs", way up in the hills, where the NES was still a big deal among the kids, not to mention old arcades—Guile's theme goes everywhere.
The logical thing to do would be to adapt a system that's less CPU intensive, mostly by toning down the graphics. A bare-bones, low-bandwidth internet that would let kids worldwide read Wikipedia, or classic literature, and even write fiction (by them, for them), that would let nationwide groups tweet to each other in real time, that would let people discuss projects and thoughts, converse and play, and do all of those amazing things you can do on the Internet, on a very, very tight budget, with very, very limited means. The Internet is supposed to make knowledge and information free and universal. But there's an entry-level cost that most humans can't afford. I think we need to bridge that. What do you guys think?
Effective Writing
Granted, writing is not very effective. But some of us just love writing...
Earning to Give Writing: Which are the places that pay 1 USD or more per word?
Mind-Changing Writing: What books need to be written that can actually help people effectively change the world?
Clarification Writing: What needs to be written because it is only through writing that these ideas will emerge in the first place?
Writing About Efficacy: Maybe nothing else needs to be written on this.
What should we be writing about if we have already been, for very long, training the craft? What has not yet been written, what is the new thing?
The world surely won't save itself through writing, but it surely won't write itself either.
High school students and effective altruism
The cluster of ideas underlying effective altruism is an important part of my worldview, and I believe it would be valuable for many people to be broadly familiar with these ideas. As I mentioned in an earlier LessWrong post, I was pleasantly surprised that many advisees for Cognito Mentoring (including some who are still in high school) were familiar with and interested in effective altruism. Further, our page on effective altruism learning resources has been one of our more viewed pages in recent times, with people spending about eight minutes on average on the page according to Google Analytics.
In this post, I consider the two questions:
- Are people in high school ready to understand the ideas of effective altruism?
- Are there benefits from exposing people to effective altruist ideas when they are still in high school?
1. Are people in high school ready to understand the ideas of effective altruism?
I think that the typical LessWrong reader would have been able to grasp key ideas of effective altruism (such as room for more funding and earning to give) back in ninth or tenth grade from the existing standard expositions. Roughly, I expect that people who are 2 or more standard deviations above the mean in IQ can understand the ideas when they begin high school, and those who are 1.5 standard deviations above the mean in IQ can understand the ideas by the time they end high school. Certainly, some aspects of the discussion, such as the one charity argument, benefit from knowledge of calculus. Both the one charity argument and the closely related concept of room for more funding are linked with the idea of marginalism in economics. But it's not a dealbreaker: people can understand the argument better with calculus or economics, but they can understand it reasonably well even without. And it might also work in reverse: seeing these applications before studying the formal mathematics or economics may make people more interested in mastering the mathematics or economics.
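The marginalism point above can be made concrete with a toy sketch (my own made-up numbers, not from the post): a charity's "room for more funding" is the range where an extra dollar still buys a lot of impact, because the impact of each additional dollar shrinks as the budget grows.

```python
# Toy illustration of diminishing marginal returns - the numbers and
# the curve shape are invented for illustration, not a real charity model.

def marginal_impact(budget, scale=1_000_000):
    """Impact of one extra dollar at a given budget level,
    normalized so the very first dollar has impact 1.0."""
    return scale / (scale + budget)

# The first dollar to an underfunded charity does far more than a
# dollar given once the charity is already well funded:
print(marginal_impact(0))           # 1.0
print(marginal_impact(9_000_000))   # 0.1
```

On a curve like this, "room for more funding" is simply the budget range where the marginal impact still exceeds what the same dollar would buy elsewhere - the calculus version of the argument, without the calculus.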
Of course, just because people can understand effective altruist ideas if they really want to, doesn't mean they will do so. It may be necessary to simplify the explanations and improve the exposition so as to make it more attractive to younger people. An alternative route would be to sneak the explanations into things young people are already engaging with. This could be an academic curriculum or a story. Harry Potter and the Methods of Rationality is arguably an example of the latter, though it is focused more on rationality than on effective altruism.
However, I'm highly uncertain of my guesstimates, partly because I'm not very actively in touch with a representative cross-section of typical, or even of intellectually gifted, high school students. The subset of people I know is generally mediated by several levels of selection bias. I'm therefore quite eager to hear thoughts, particularly from people who are themselves high school students or have tried to discuss effective altruist ideas with high school students.
2. Are there benefits from exposing people to effective altruist ideas when they are still in high school?
Effective altruism as it was originally conceived has been highly focused on the question of where to donate money for the most impact (this is the focus of organizations such as GiveWell and Giving What We Can). This makes it of less direct relevance to people still in high school, because they don't have much disposable income. But there are arguably other benefits. Some examples:
- In recent times, there has been more discussion in the effective altruist community about smart career choice. This seems to have begun with discussion of earning to give. 80,000 Hours has played an important role in shaping the conversation on altruistic career choice. Since people start thinking about careers while in high school, effective altruism is potentially relevant. (This page compiles some links to discussions of altruistic career choice -- we'll be adding more to that as we learn more).
- Lifestyle choices and habits can have an effect on the world both directly (for instance, being vegetarian, or recycling) and indirectly (good habits promote better earning or higher savings that can then be redirected to altruistic causes, or people can become more productive and generate more social value through their jobs). For the lifestyle choices that have a direct effect, it's never too early to start. For instance, if being vegetarian is the right thing, one might as well switch as a teenager. For the indirect effects, starting earlier gives one more lead time to develop skills and habits. If frugal living habits and greater stamina at work promote earning to give, then these habits may be better to set while still a teenager than when one is 25. The Effective Altruists Facebook page includes discussions of many questions of this sort in addition to discussions about where to donate.
- A number of people in high school and college are attracted to activities that ostensibly generate social value. Learning effective altruist ideas may make students more skeptical of many such activities and approach the decision of whether to participate in them more critically. For instance, a stalwart of effective altruism may not see much point (from the social value perspective) in going on a school-sponsored trip to lay bricks for a schoolhouse in Africa. The person may still engage in it as a fun activity, but will not have illusions about it being an activity of high social value. Similarly, people may be more skeptical of the social value of activities that involve volunteering in one's community for tasks where they are easily replaceable by others.
- The effective altruist movement could itself benefit from a greater diversity of people contributing and participating. High school students may have insights that adults overlook.
Did I miss other points? Counterpoints? Do you have relevant experience that can shed light on the discussion? I'm eager to hear thoughts.
Some ideas in the post were based on discussion with my Cognito Mentoring collaborator Jonah Sinick.
UPDATE: The post provoked some discussion in a thread on the Effective Altruists Facebook group.
Jobs and internships available at the Centre for Effective Altruism: new 'EA outreach' roles added
I recently posted on LessWrong main about the jobs and internships currently available at the Centre for Effective Altruism. (As I mentioned, effective altruism in general and CEA in particular have been discussed many times on LessWrong, so these opportunities might be of interest to some readers!) We're just starting a new project to do effective altruism outreach and 'marketing' (much like what Peter Hurford discusses in this post), so have added some new roles in this to the recruitment round; there's a full description of them here. If you're interested, apply by 5pm GMT on February 28th, and if you know anyone who might be, do pass it along!
Private currency to generate funds for effective altruism
In the last few years we have seen two interesting revolutionary ideas on how to change the monetary system. The first is Bitcoin: the most well-known peer-to-peer currency. It has been wildly debated recently and I won't go into the detail of allegations of use in criminal activities etc (for one thing, I don't know much about it). My interest is rather in the money creation part. The people who run the Bitcoin software are rewarded for their work with new Bitcoins - a process called mining. Now the pace at which new Bitcoins are mined is limited, which means that Bitcoin creation is a zero-sum game: the more one miner contributes to the Bitcoin software, the less Bitcoins other miners get. Unsurprisingly, this has led to an arms race: miners spend nearly as much on running the software as they get back in form of new Bitcoins.
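The zero-sum character of mining can be sketched in a few lines (a simplified model with invented numbers, not the actual Bitcoin protocol): the block reward per period is fixed, and each miner's expected share is proportional to their fraction of total hash power, so adding hash power never creates new coins - it only redistributes them, which is what fuels the arms race.

```python
# Simplified model of fixed-issuance mining. The 25-coin reward and the
# hash-power figures are illustrative; difficulty adjustment, fees, and
# hardware costs are all ignored.

def expected_reward(my_hash, others_hash, block_reward=25.0):
    """A miner's expected share of a fixed block reward is
    proportional to their fraction of total hash power."""
    return block_reward * my_hash / (my_hash + others_hash)

# Total issuance is fixed, so doubling my hash power raises my share
# only by shrinking everyone else's - less than doubling my reward:
print(expected_reward(10, 90))   # 2.5
print(expected_reward(20, 90))   # ~4.55
```

Because every miner faces this same incentive, hash power (and hence electricity spending) gets bid up until costs roughly equal rewards - which is why, as noted above, almost no seigniorage is left over.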
The second idea is the Chicago Plan, which was debated already in the 30's, after the great crash of 1929, but which recently was resurrected by Michael Kumhof (senior economist at IMF, of all places). The central idea of the Chicago Plan is to abolish fractional reserve banking - the system by which private banks in effect create money out of thin air. Instead of lending out most of the depositors' money, banks would effectively have to let them stay in the bank.
Instead, money would be created by the central bank/government, a process that would generate a massive seigniorage for the government. According to Kumhof, it would also have other beneficial effects, such as killing off the "boom-and-bust" cycles which he thinks fractional reserve banking is mostly responsible for, and diminishing the wasteful parts of the financial sector.
Kumhof's ideas have not been well received. Overall, it is remarkable how little reform there has been of the financial and monetary system given that the world had a major financial meltdown in 2008 (and was close to an even greater one, in my understanding). Governments won't challenge the financial system radically in the near future, that's for sure.
Instead, radical reforms can only come from private hands. Let us now compare the two ideas. In the Bitcoin system, money is created by private hands but in wasteful ways, which effectively means that there is very little seigniorage. Under the Chicago Plan, money is created by the government in much more efficient ways, which leads to a large seigniorage. Now my idea is to take the best part of both of these ideas: let a private player - more exactly, an altruistic organization such as CEA - produce the money centrally, Chicago Plan-like, and let the seigniorage be used for altruistic purposes. (Of course, there would be some costs of running the system, but if the system were sufficiently large, these would be negligible in relation to the seigniorage.)
If the altruistic organization that did this had a sufficiently good reputation, chances are greater that people would trust the system. Of course, it would try to stop the currency from being used for money laundering, the drug trade, etc.
Generally, people would be suspicious of private currencies where the central authority collected a seigniorage, but if this seigniorage were used for charitable and other altruistic purposes (and people really trusted that that would be the case), this would, I hope, be less of a problem.
What do you think? I'd be happy to get comments from people who know more about the Bitcoin system, since I don't really know it (though I find it interesting). Perhaps there is some info concerning Bitcoins that tells against this proposal; if so, I'd be interested in that.
In Praise of Tribes that Pretend to Try: Counter-"Critique of Effective Altruism"
We have finally created the first "inside view" critique of EA.
The critique's main worry would please Hofstadter by being self-referential: being the first, and having taken too long to emerge, it indicates that EAs (effective altruists) are pretending to try instead of actually trying, or else they'd have self-criticized already.
Here I will try to clash head-on with what seems to be the most important point of that critique. This will be the only point I'll address, for the sake of brevity, mnemonics and force of argument. This is a meta-contrarian apostasy in its purpose. I'm not sure it is a view I hold, any more than it is a view I think has to be out there in the open, being thought about and criticized. I am mostly indebted to this comment by Viliam_Bur, which was marinating in my mind while I read Ben Kuhn's apostasy.
Original Version Abstract
Effective altruism is, to my knowledge, the first time that a substantially useful set of ethics and frameworks to analyze one’s effect on the world has gained a broad enough appeal to resemble a social movement. (I’d say these principles are something like altruism, maximization, egalitarianism, and consequentialism; together they imply many improvements over the social default for trying to do good in the world—earning to give as opposed to doing direct charity work, working in the developing world rather than locally, using evidence and feedback to analyze effectiveness, etc.) Unfortunately, as a movement effective altruism is failing to use these principles to acquire correct nontrivial beliefs about how to improve the world.
By way of clarification, consider a distinction between two senses of the word “trying” I used above. Let’s call them “actually trying” and “pretending to try”. Pretending to try to improve the world is something like responding to social pressure to improve the world by querying your brain for a thing which improves the world, taking the first search result and rolling with it. For example, for a while I thought that I would try to improve the world by developing computerized methods of checking informally-written proofs, thus allowing more scalable teaching of higher math, democratizing education, etc. Coincidentally, computer programming and higher math happened to be the two things that I was best at. This is pretending to try. Actually trying is looking at the things that improve the world, figuring out which one maximizes utility, and then doing that thing. For instance, I now run an effective altruist student organization at Harvard because I realized that even though I’m a comparatively bad leader and don’t enjoy it very much, it’s still very high-impact if I work hard enough at it. This isn’t to say that I’m actually trying yet, but I’ve gotten closer.
Using this distinction between pretending and actually trying, I would summarize a lot of effective altruism as “pretending to actually try”. As a social group, effective altruists have successfully noticed the pretending/actually-trying distinction. But they seem to have stopped there, assuming that knowing the difference between fake trying and actually trying translates into ability to actually try. Empirically, it most certainly doesn’t. A lot of effective altruists still end up satisficing—finding actions that are on their face acceptable under core EA standards and then picking those which seem appealing because of other essentially random factors. This is more likely to converge on good actions than what society does by default, because the principles are better than society’s default principles. Nevertheless, it fails to make much progress over what is directly obvious from the core EA principles. As a result, although “doing effective altruism” feels like truth-seeking, it often ends up being just a more credible way to pretend to try.
Counterargument: Tribes have internal structure, and so should the EA movement.
This includes a free reconstruction, containing nearly the whole original, of what I took to be important in Viliam's comment.
Feeling-oriented, and outcome-oriented communities
People probably need two kinds of communities -- let's call them the "feelings-oriented community" and the "outcome-oriented community". For many people this division has been "home" and "work" over the centuries, but that has some misleading connotations. A very popular medieval alternative was "church" and "work". Organized large-scale societies have many alternatives that fill these roles to greater or lesser degrees. Indigenous tribes have the three realms separated: "work" has a time and a place, and likewise rituals, late-afternoon discussions, chants, etc. fulfill the purpose of "church".
A "feelings-oriented community" is a community of people who meet because they enjoy being together and feel safe with each other. The examples are a functional family, a church group, friends meeting in a pub, etc... One of the important properties of feeling oriented communities, that according to Dennett has not yet sunk in the naturalist community is that nothing is a precondition for belonging to the group which feels, or the sacredness taking place. You could spend the rest of your life going to church without becoming a priest, listening to the tribal leaders and shamans talk without saying a word. There are no pre-requisites to become your parent's son, or your sister's brother every time you enter the house.
An "outcome-oriented community" is a community that has an explicit goal, and people genuinely contribute to making that goal happen. The examples are a business company, an NGO, a Toastmasters meetup, an intentional household etc... To become a member of an outcome-oriented community, you have to show that you are willing and able to bring about the goal (either for itself, or in exchange of something valuable). There is some tolerance if you stop doing things well, either by ignorance or, say, bad health. But the tolerance is finite and the group can frown upon, punish, or even expel those who are not clearly helping the goal.
What are communities good for? What is good for communities?
The important part (to define what kind of group something is) is what really happens inside the members' heads, not what they pretend to do. For example, you could have an NGO with twelve members, where two of them want to have the work done, but the remaining ten only come to socialize. Of course, even those ten will verbally support the explicit goals of the organization, but they will be much more relaxed about timing, care less about verifying the outcomes, etc. For them, the explicit goals are merely a source of identity and a pretext to meet people professing similar values; for them, the community is the real goal. If they had a magic button which would instantly solve the problem, making the organization obviously obsolete, they wouldn't push it. The people who are serious about the goal would love to see it completed as soon as possible, so they can move to some other goals. (I have seen a similar tension in a few organizations, and the usual solution seems to be the serious members forming an "organization within an organization", keeping the other ones around them for social and other purposes.)
As an evolutionary just-so story, we have a tribe composed of many different people, and within the tribe we have a hunters' group, containing the best hunters. Members of the tribe are required to follow the norms of the tribe. Hunters must be efficient in their jobs. But hunters don't become a separate tribe... they go hunting for a while, and then return to their original tribe. Tribe membership is for life, or at least for a long time; it provides safety and fulfills the emotional needs. Each hunting expedition is a short-term event; it requires skills and determination. If a hunter breaks his legs, he can no longer be a hunter; but he still remains a member of his tribe. The hunter has now descended from feeling-and-work status to feeling-only status. This is part of the expected cycles - a woman may stop working while having a child, a teenager may decide work is evil and stop working, an existentialist may pause for a year to reflect on the value of life itself in different ways - but throughout, they are not cast away from the reassuring arms of the feelings-oriented community.
A healthy double layered movement
Viliam and I think a healthy way of living should be modeled like this, on two layers: a larger tribe based on shared values (rationality and altruism), and within this tribe a few working groups, both long-term (MIRI, CFAR) and short-term (organizers of the next meetup). Of course it could be a few overlapping tribes (the rationalists, the altruists), but the important thing is that you keep your social network even if you stop participating in some specific project -- otherwise we get either cultish pressure (you have to remain hard-working on our project even if you no longer feel so great about it, or you lose your whole social network) or inefficiency (people remain formally members of the project, but lately barely any work gets done, and the more active ones are warned not to rock the boat). Joining or leaving a project should not be motivated or punished socially.
This is the crux of Viliam's argument and of my disagreement with Ben's critique: the effective altruist community has grown large enough that it can easily afford to have two kinds of communities inside it: the feelings-oriented EAs, whom Ben accuses (unfairly, in my opinion) of pretending to try to be effective altruists, and the outcome-oriented EAs, who are really trying to be effective altruists.
Now that is not how he put it in his critique. He used the fact that the critique had not been written as sufficiently strong indication that the whole movement, a monolithic, single entity, had failed in its task of being introspective enough about its failure modes. This is unfair on two accounts: someone had to be the first, and the movement seems young enough that this is not a problem; and it is false that the entire movement is a single monolithic entity making wrong and right decisions in a void. The truth is that there are many people in the EA community at different stages of life, and of involvement with the movement. We should account for that and make room for newcomers as well as for ancient sages. EA is not one single entity that made one huge mistake. It is a couple thousand people, whose subgroups are working hard on several distinct things, frequently without communicating, and whose supergoal is reborn every day with the pushes and drifts going on inside the community.
Intentional Agents, communities or individuals, are not monolithic
Most importantly, if you consider the argument above that Effective Altruism can't be criticized on account of being one single entity, because factually it isn't, then I wish you to bring this intuition pump one step further: each one of us is also not one single monolithic agent. We have good and bad days, and we are made of lots of tiny little agents within, whose goals and purposes are only our own when enough of them coalesce so that our overall behavior goes in a certain direction. Just like you can't criticize EA as a whole for something that its subsets haven't done (the fancy philosopher's word for this is the mereological fallacy), likewise you can't claim about a particular individual that he, as a whole, pretends to try, because you've seen him have one or two lazy days, or because he is still addicted to a particular video game. Don't forget the demandingness objection to utilitarianism: if you ask a smoker to stop smoking because it is irrational to smoke, and he believes you, he may end up abandoning rationalism just because a small subset of him was addicted to smoking and he just couldn't live with that much inconsistency in his self-view. Likewise, if being a utilitarian is infinitely demanding, you lose the utilitarians to "what the hell" effects.
The same goes for effective altruists. Ben's post makes the case for really effective altruism too demanding. Not even internally are we truly a monolithic entity or a utility-function optimizer, regardless of how much we may wish we were. My favoured reading of the current state of the Effective Altruist people is not that they are pretending to really try, but that most people are finding, for themselves, which aspects of their personalities they are willing to bend for altruism, and which they are not. I don't expect, and don't think anyone should expect, that any single individual becomes a perfect altruist. There are parts of us that just won't let go of some things they crave and praise. We don't want to lose the entire community if one individual is not effective enough, and we don't want to lose one individual if a part of him, or a time-slice, is not satisfying the canonical expectation of the outcome-oriented community.
Rationalists have already accepted a layered structure
We need to accept, as EAs, what LessWrong as a blog has accepted: there will always be a group that is passive and feeling-oriented, and a group that is outcome-oriented, even if the subject matter of Effective Altruism is outcomes.
For a less sensitive example, consider an average job: you may think of your colleagues as your friends, but if you leave the job, how many of them will you keep regular contact with? In contrast, a regular church just asks you to come to Sunday prayers, gives you some keywords and a few relatively simple rules. If this level of participation is ideal for you, welcome, brother or sister! And if you want more, feel free to join some higher-commitment group within the church. You choose the level of your participation, and you can change it during your life. For a non-religious example, in a dance group you could just go and dance, or choose to do the new year's presentation, or choose to recruit new dancers, all the way up to being the dance organizer and coordinator.
The current rationalist community has solved this problem to some extent. Your level of participation can range from being a lurker at LW all the way up: from meetup organizer to CFAR creator to writing the next HPMOR or its analogue.
Viliam ends his comment by saying: It would be great to have a LW village, where some people would work on effective altruism, others would work on building artificial intelligence, yet others would develop a rationality curriculum, and some would be too busy with their personal issues to do any of this now... but everyone would know that this is a village where good and sane people live, where cool things happen, and whichever of these good and real goals I will choose to prioritize, it's still a community where I belong. Actually, it would be great to have a village where 5% or 10% of people would be the LW community. Connotatively, it's not about being away from other people, but about being with my people.
The challenge, in my view, from now on is not how to make effective altruists stop pretending, but how to surround effective altruists with welcoming arms even when the subset of them that is active at a given moment is not doing the right things. How can we make EA a loving and caring community of people who help each other, so that people feel taken care of enough that they actually have the attentional and emotional resources necessary to really go out there and do the impossible?
Here are some examples of this layered system working in non-religious, non-tribal settings: LessWrong has a karma system to distinguish different functions within the community. It also has meetups, a Study Hall, and strong relations with CFAR and MIRI.
Leverage Research, as a community/house, has active hard-core members, new hires, people in training, and friends/partners of people there, with very different outcomes expected from each.
Transhumanists have people who only self-identify, people who attend events, people who write for H+ magazine, a board of directors, and it goes all the way up to Nick Bostrom, who spends 70 hours a week working on academic content in related topics.
The solution is not just introspection, but the maintenance of a welcoming environment at every layer of effectiveness
The Effective Altruist community does not need to become introspectively even more focused on effectiveness, at least not right now. What it needs is a designed hierarchical structure which allows it to let everyone in, and lets everyone transition smoothly between different levels of commitment.
Most people will transition upward, since understanding more makes you more interested, more effective, and so on, in an upward spiral. But people also need to be able to slide down for a bit. To meet their relatives for Thanksgiving, to play Go with their work friends, to dance, to pretend they don't care about animals. To do their thing: the internal thing which has not converted to EA like the rest of them has. This is not only okay, not only tolerable; it is essential for the movement's survival.
But then how can those who are at their very best, healthy, strong, smart, and at the edge of the movement push it forward?
Here is an obvious place not to do it: Open groups on Facebook.
Open Facebook groups are not the place to move it forward. Some people who are recognized as being at the forefront of the movement, like Toby, Will, Holden, Beckstead, Wise and others, should create an "advancing Effective Altruism" group on Facebook; there and then we will have a place where no blood is shed on the hands of either the feeling-oriented or the outcome-oriented group by having to decrease the signal-to-noise ratio within either.
Now, once we create this hierarchy within the movement (not only the groups, but the mental hierarchy, and the feeling that it is fine to be in a feeling-oriented moment, or to have feeling-oriented experiences), we will also want to increase the chance that people move up the hierarchical ladder. As many as possible, as soon as possible; after all, the higher up you are, by definition, the more likely you are to be generating good outcomes. We have already started doing so. The EA Self-Help (secret) group on Facebook serves this very purpose: it helps altruists when they are feeling down, unproductive, sad, or anything else, and we will hear you and embrace you even if you are not being particularly effective or altruistic when you get there. It is the legacy of our deceased friend Jonatas to all of us; because of him, we now have some understanding that people need love and companionship especially when they are down, or we may lose all of their future good moments. The monolithic-individual fallacy is a very pricey one to pay for. Let us not learn the hard way by losing another member.
Conclusions
I have argued here that the main problem indicated in Ben's writing, that effective altruists are pretending to really try, should not be viewed in this light. Instead, I argued that the very survival of the Effective Altruist movement may rely on finding a welcoming space for what Viliam_Bur has called the feeling-oriented community, without which many people would leave the movement, experiencing it as too demanding during their bad times, or when it strongly conflicts with a particular subset of themselves they consider important. I advocate instead for hierarchically separate communities within the movement, allowing those at any particular level of commitment to grow stronger and win.
The first three initial measures I suggest for this re-design of the community are:
1) Making all effective altruists aware that the EA Self-Help group exists for anyone who, for any reason, wants help from the community, even for non-EA-related affairs.
2) Creating a closed Facebook group containing only those who are advancing the discussion at its best, for instance those who have written long posts about it on their own blogs, or obvious major figures.
3) Creating a Study Hall equivalent for EAs, to increase their feeling of belonging to a large tribe of goal-sharing people, where they can lurk even when they have nothing to say, and just do a few pomodoros.
This is my first long piece of writing on Effective Altruism, my first attempt at an apostasy, and my first explicit attempt to be meta-contrarian. I hope I have helped shed some light on the discussion, and that my critique can be taken by all, especially Ben, as oriented toward the same large-scale goal that is shared by effective altruists around the world. The outlook of effective altruism is still being designed every day by all of us, and I hope my critique can be used, along with Ben's and others', to build a movement that is not only stronger in its individuals' emotions, as I have advocated here, but furthermore a psychologically healthy and functional group, a whole that understands the role of its parts and subdivides accordingly.
Democracy and rationality
Note: This is a draft; so far, about the first half is complete. I'm posting it to Discussion for now; when it's finished, I'll move it to Main. In the mean time, I'd appreciate comments, including suggestions on style and/or format. In particular, if you think I should(n't) try to post this as a sequence of separate sections, let me know.
Summary: You want to find the truth? You want to win? You're gonna have to learn the right way to vote. Plurality voting sucks; better voting systems are built from the blocks of approval, medians (Bucklin cutoffs), delegation, and pairwise opposition. I'm working to promote these systems and I want your help.
Contents: 1. Overblown¹ rhetorical setup ... 2. Condorcet's ideals and Arrow's problem ... 3. Further issues for politics ... 4. Rating versus ranking; a solution? ... 5. Delegation and SODA ... 6. Criteria and pathologies ... 7. Representation, Proportional representation, and Sortition ... 8. What I'm doing about it and what you can ... 9. Conclusions and future directions ... 10. Appendix: voting systems table ... 11. Footnotes
1.
This is a website focused on becoming more rational. But that can't just mean getting a black belt in individual epistemic rationality. In a situation where you're not the one making the decision, that black belt is just a recipe for frustration.
Of course, there's also plenty of content here about how to interact rationally; how to argue for truth, including both hacking yourself to give in when you're wrong and hacking others to give in when they are. You can learn plenty here about Aumann's Agreement Theorem on how two rational Bayesians should never knowingly disagree.
But "two rational Bayesians" isn't a whole lot better as a model for society than "one rational Bayesian". Aspiring to be rational is well and good, but the Socratic ideal of a world tied together by two-person dialogue alone is as unrealistic as the sociopath's ideal of a world where their own voice rules alone. Society needs structures for more than two people to interact. And just as we need techniques for checking irrationality in one- and two-person contexts, we need them, perhaps all the more, in multi-person contexts.
Most of the basic individual and dialogical rationality techniques carry over. Things like noticing when you are confused, or making your opponent's arguments into a steel man, are still perfectly applicable. But there's also a new set of issues when n>2: the issues of democracy and voting. For a group of aspiring rationalists to come to a working consensus, of course they need to begin by evaluating and discussing the evidence, but eventually it will be time to cut off the discussion and just vote. When they do so, they should understand the strengths and pitfalls of voting in general and of their chosen voting method in particular.
And voting's not just useful for an aspiring rationalist community. As it happens, it's an important part of how governments are run. Discussing politics may be a mind-killer in many contexts, but there are an awful lot of domains where politics is a part of the road to winning.² Understanding voting processes a little bit can help you navigate that road; understanding them deeply opens the possibility of improving that road and thus winning more often.
2. Collective rationality: Condorcet's ideals and Arrow's problem
Imagine it's 1785, and you're a member of the French Academy of Sciences. You're rubbing elbows with most of the giants of science and mathematics of your day: Coulomb, Fourier, Lalande, Lagrange, Laplace, Lavoisier, Monge; even the odd foreign notable like Franklin with his ideas to unify electrostatics and electric flow.

One day, they'll put your names in front of lots of cameras (even though that foreign yokel Franklin will be in more pictures)
And this academy, with many of the smartest people in the world, has votes on stuff. Who will be our next president; who should edit and schedule our publications; etc. You're sure that if you all could just find the right way to do the voting, you'd get the right answer. In fact, you can easily prove that, or something like it: if a group is deciding between one right and one wrong option, and each member is independently more than 50% likely to get it right, then as the group size grows the chance of a majority vote choosing the right option goes to 1.
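The jury-theorem claim above can be checked directly. A quick sketch in Python (the 55% per-voter accuracy is just an illustrative choice):

```python
# Sketch of the jury theorem: if each voter is independently correct with
# probability p > 0.5, the chance that a majority is correct rises toward 1.
from math import comb

def majority_correct(p, n):
    """Probability that more than half of n independent voters are correct."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

for n in (11, 101, 1001):
    print(n, round(majority_correct(0.55, n), 4))
# The probability climbs toward certainty as the group grows.
```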
But somehow, there's still annoying politics getting in the way. Some people seem to win the elections simply because everyone expects them to win. So last year, the academy decided on a new election system to use, proposed by your rival, Charles de Borda, in which candidates get different points for being a voter's first, second, or third choice, and the one with the most points wins. But you're convinced that this new system will lead to the opposite problem: people who win the election precisely because nobody expected them to win, by getting the points that voters strategically don't want to give to a strong rival. But when people point that possibility out to Borda, he only huffs that "my system is meant for honest men!"
So with your proof of the above intuitive, useful result about two-way elections, you try to figure out how to reduce an n-way election to the two-candidate case. Clearly, you can show that Borda's system will frequently give the wrong results from that perspective. But frustratingly, you find that there could sometimes be no right answer; that there will be no candidate who would beat all the others in one-on-one races. A crack has opened up; could it be that the collective decisions of intelligent individual rational agents could be irrational?
Of course, the "you" in this story is the Marquis de Condorcet, and the year 1785 is when he published his Essai sur l’application de l’analyse à la probabilité des décisions rendues à la pluralité des voix, a work devoted to the question of how to achieve collective rationality. The theorem referenced above is Condorcet's Jury Theorem, which seems to offer hope that democracy can point the way from individually-imperfect rationality towards an ever-more-perfect collective rationality. Just as Aumann's Agreement Theorem shows that two rational agents should always move towards consensus, the Condorcet Jury Theorem apparently shows that if you have enough rational agents, the resulting consensus will be correct.
But as I said, Condorcet also opened a crack in that hope: the possibility that collective preferences will be cyclical. If the assumptions of the jury theorem don't hold — if each voter doesn't have a >50% chance of being right on a randomly-selected question, OR if the correctness of two randomly-selected voters is not independent and uncorrelated — then individually-sensible choices can lead to collectively-ridiculous ones.
What do I mean by "collectively-ridiculous"? Let's imagine that the Rationalist Marching Band is choosing the colors for their summer, winter, and spring uniforms, and that they all agree that the only goal is to have as much as possible of the best possible colors. The summer-style uniforms come in red or blue, and they vote and pick blue; the winter-style ones come in blue or green, and they pick green; and the spring ones come in green or red, and they pick red.
Obviously, this makes us doubt their collective rationality. If, as they all agree they should, they had a consistent favorite color, they should have chosen that color both times that it was available, rather than choosing three different colors in the three cases. Theoretically, the salesperson could use such a fact to pump money out of them; for instance, offering to let them "trade up" their spring uniform from red to blue, then to green, then back to red, charging them a small fee each time; if they voted consistently as above, they would agree to each trade (though of course in reality human voters would probably catch on to the trick pretty soon, so the abstract ideal of an unending circular money pump wouldn't work).
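The marching band's cycle can be made concrete. A minimal sketch, with three invented voter blocs whose head-to-head majorities chase each other in a circle:

```python
# Three ranked ballots (most to least preferred); pairwise majorities cycle.
from itertools import combinations

ballots = [
    ("red", "blue", "green"),
    ("blue", "green", "red"),
    ("green", "red", "blue"),
]

def pairwise_winner(a, b, ballots):
    """Return whichever of a, b a majority of ballots ranks higher."""
    a_wins = sum(1 for ballot in ballots if ballot.index(a) < ballot.index(b))
    return a if a_wins > len(ballots) / 2 else b

for a, b in combinations(("red", "blue", "green"), 2):
    print(f"{a} vs {b}: majority prefers {pairwise_winner(a, b, ballots)}")
# red beats blue, blue beats green, green beats red: no consistent favorite.
```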
This is the kind of irrationality that Condorcet showed was possible in collective decisionmaking. He also realized that there was a related issue with logical inconsistencies. If you were to take a vote on 3 logically related propositions — say, "Should we have a Minister of Silly Walks, to be appointed by the Chancellor of the Excalibur", "Should we have a Minister of Silly Walks, but not appointed by the Chancellor of the Excalibur", and "Should we in fact have a Minister of Silly Walks at all", where the third cannot be true unless one of the first two is — then you could easily get majority votes for inconsistent results — in this case, no, no, and yes, respectively. Obviously, there are many ways to fix the problem in this simple case — probably many less-wrong'ers would suggest some Bayesian tricks related to logical networks and treating votes as evidence⁸ — but it's a tough problem in general even today, especially when the logical relationships can be complex, and Condorcet was quite right to be worried about its implications for collective rationality.³
And that's not the only tough problem he correctly foresaw. Over a century and a half later and an ocean away, in the 1950s, Kenneth Arrow showed that it was impossible for a preferential voting system to avoid the problem of a "Condorcet cycle" of preferences. Arrow's theorem shows that any voting system which consistently gives the same winner (or, in ties, winners) for the same voter preferences; which does not make one voter the effective dictator; which is sure to elect a candidate if all voters prefer them; and which will switch the results for two candidates if you switch their names on all the votes... must exhibit, in at least some situation, the pathology that befell the Rationalist Marching Band above, or in other words, must fail "independence of irrelevant alternatives".
Arrow's theorem is far from obvious a priori, but its proof is not hard to understand intuitively using Condorcet's insight. Say that there are three candidates, X, Y, and Z, with roughly equal bases of support, and that they form a Condorcet cycle, because in two-way races X would beat Y with help from Z supporters, Y would beat Z with help from X supporters, and Z would beat X with help from Y supporters. So whoever wins in the three-way race — say, X — just remove the one who would have lost to them — Y in this case — and that "irrelevant" change will change the winner to the third — Z in this case.
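The intuition above can be run directly. A sketch with invented bloc sizes: plurality elects X among three cycling candidates, but dropping Y, who would have lost to X anyway, flips the winner to Z:

```python
# Demonstration of a failure of independence of irrelevant alternatives.
from collections import Counter

# (ranking, number of voters): invented numbers forming a Condorcet cycle.
blocs = [
    (("X", "Y", "Z"), 35),
    (("Y", "Z", "X"), 33),
    (("Z", "X", "Y"), 32),
]

def plurality_winner(blocs, candidates):
    tally = Counter()
    for ranking, n in blocs:
        # Each bloc votes for its highest-ranked candidate still in the race.
        tally[next(c for c in ranking if c in candidates)] += n
    return tally.most_common(1)[0][0]

print(plurality_winner(blocs, {"X", "Y", "Z"}))  # X wins the three-way race
print(plurality_winner(blocs, {"X", "Z"}))       # drop Y, and Z wins instead
```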
Summary of above: Collective rationality is harder than individual or two-way rationality. Condorcet saw the problem and tried to solve it, but Arrow saw that Condorcet had been doomed to fail.
3. Further issues for politics
So Condorcet's ideals of better rationality through voting appear to be in ruins. But at least we can hope that voting is a good way to do politics, right?
Not so fast. Arrow's theorem quickly led to further disturbing results. Alan Gibbard (and also Mark Satterthwaite) showed that there is no voting system which doesn't encourage strategic voting. That is, if you view a voting system as a class of games where the finite players and finite available strategies are fixed, no player is effectively a dictator, and the only thing that varies are the payoffs for each player from each outcome, there is no voting system where you can derive your best strategic vote purely by looking "honestly" at your own preferences; there is always the possibility of situations where you have to second-guess what others will do.
Amartya Sen piled on with another depressing extension of Arrow's logic. He showed that there is no possible way of aggregating individual choices into a collective choice that satisfies two simple criteria. First, it shouldn't choose Pareto-dominated outcomes: if everyone prefers situation XYZ to ABC, then ABC shouldn't be chosen. Second, it should be "minimally liberal"; that is, there are at least two people who each get to freely make their own decision on at least one specific issue each, no matter what, so for instance I always get to decide between X and A (in Gibbard's⁴ example, colors for my house), and you always get to decide between Y and B (colors for your own house). The problem is that if you nosily care more about my house's color, the decision that should have been mine, and I nosily care about yours, more than we each care about our own, then the Pareto-dominant situation is the one where we don't decide our own houses; and that nosiness could, in theory, arise for any specific choice that, a priori, someone might have labelled as our Inalienable Right. It's not such a surprising result when you think about it that way, but it does clearly show that unswerving ideals of Democracy and Liberty will never truly be compatible.
Meanwhile, "public choice" theorists⁵ like Duncan Black, James Buchanan, etc. were busy undermining the idea of democratic government from another direction: the motivations of the politicians and bureaucrats who are supposed to keep it running. They showed that various incentives, including the strange voting scenarios explored by Condorcet and Arrow, would tend to open a gap between the motives of the people and those of the government, and that strategic voting and agenda-setting within a legislature would tend to extend the impact of that gap. Where Gibbard and Sen had proved general results, these theorists worked from specific examples. And in one aspect, at least, their analysis is devastatingly unanswerable: the near-ubiquitous "democratic" system of plurality voting, also known as first-past-the-post or vote-for-one or biggest-minority-wins, is terrible in both theory and practice.
So, by the 1980s, things looked pretty depressing for the theory of democracy. Politics, the theory went, was doomed forever to be worse than a sausage factory: disgusting on the inside and distasteful even from outside.
Should an ethical rationalist just give up on politics, then? Of course not. As long as the results it produces are important, it's worth trying to optimize. And as soon as you take the engineer's attitude of optimizing, instead of dogmatically searching for perfection or uselessly whining about the problems, the results above don't seem nearly as bad.
From this engineer's perspective, public choice theory serves as an unsurprising warning that tradeoffs are necessary, but more usefully, as a map of where those tradeoffs can go particularly wrong. In particular, its clearest lesson, in all-caps bold with a blink tag, that PLURALITY IS BAD, can be seen as a hopeful suggestion that other voting systems may be better. Meanwhile, the logic of both Sen's and Gibbard's theorems is built on Arrow's earlier result. So if we could find a way around Arrow, it might help resolve the whole issue.
Summary of above: Democracy is the worst political system... (...except for all the others?) But perhaps it doesn't have to be quite so bad as it is today.
4. Rating versus ranking
So finding a way around Arrow's theorem could be key to this whole matter. As a mathematical theorem, of course, the logic is bulletproof. But it does make one crucial assumption: that the only inputs to a voting system are rankings, that is, voters' ordinal preference orders for the candidates. No distinctions can be made using ratings or grades; that is, as long as you prefer X to Y to Z, the strength of those preferences can't matter. Whether you put Y almost up near X or way down next to Z, the result must be the same.
Relax that assumption, and it's easy to create a voting system which meets Arrow's criteria. It's called Score voting⁶, and it just means rating each candidate with a number from some fixed interval (abstractly speaking, a real number; but in practice, usually an integer); the scores are added up and the highest total or average wins. (Unless there are missing values, of course, total and average amount to the same thing.) You've probably used it yourself on Yelp, IMDB, or similar sites. And it clearly passes all of Arrow's criteria. Non-dictatorship? Check. Unanimity? Check. Symmetry over switching candidate names? Check. Independence of irrelevant alternatives? In the mathematical sense — that is, as long as the scores for other candidates are unchanged — check.
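A score-voting tally is only a few lines; the ballots below are invented for illustration, with ratings on a 0-5 scale:

```python
# Minimal score-voting count: sum each candidate's ratings, highest total wins.
ballots = [
    {"X": 5, "Y": 4, "Z": 0},
    {"X": 0, "Y": 3, "Z": 5},
    {"X": 2, "Y": 5, "Z": 1},
]

def score_winner(ballots):
    totals = {}
    for ballot in ballots:
        for candidate, rating in ballot.items():
            totals[candidate] = totals.get(candidate, 0) + rating
    # With no missing ratings, highest total and highest average coincide.
    return max(totals, key=totals.get)

print(score_winner(ballots))  # Y, with totals X=7, Y=12, Z=6
```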
So score voting is an ideal system? Well, it's certainly a far sight better than plurality. But let's check it against Sen and against Gibbard.
Sen's theorem was based on a logic similar to Arrow. However, while Arrow's theorem deals with broad outcomes like which candidate wins, Sen's deals with finely-grained outcomes like (in the example we discussed) how each separate house should be painted. Extending the cardinal numerical logic of score voting to such finely-grained outcomes, we find we've simply reinvented markets. While markets can be great things and often work well in practice, Sen's result still holds in this case; if everything is on the market, then there is no decision which is always yours to make. But since, in practice, as long as you aren't destitute, you tend to be able to make the decisions you care the most about, Sen's theorem seems to have lost its bite in this context.
What about Gibbard's theorem on strategy? Here, things are not so easy. Yes, Gibbard, like Sen, parallels Arrow. But while Arrow deals with what's written on the ballot, Gibbard deals with what's in the voter's head. In particular, if a voter prefers X to Y by even the tiniest margin, Gibbard assumes (not unreasonably) that they may be willing to vote however they need to, if by doing so they can ensure X wins instead of Y. Thus, the internal preferences Gibbard treats are, effectively, just ordinal rankings; and the cardinal trick by which score voting avoided Arrovian problems no longer works.
How does score voting deal with strategic issues in practice? The answer to that has two sides. On the one hand, score never requires voters to be actually dishonest. Unlike the situation in a ranked system such as plurality, where we all know that the strategic vote may be to dishonestly ignore your true favorite and vote for a "lesser evil" among the two frontrunners, in score voting you never need to vote a less-preferred option above a more-preferred option. At worst, all you have to do is exaggerate some distinctions and minimize others, so that you might end up giving equal votes to less- and more-preferred options.
Did I say "at worst"? I meant, "almost always". Voting strategy only matters to the result when, aside from your vote, two or more candidates are within one vote of being tied for first. Except in unrealistic, perfectly-balanced conditions, as the number of voters rises, the probability that anyone but the two a priori frontrunner candidates is in on this tie falls to zero.⁷ Thus, in score voting, the optimal strategy is nearly always to vote your preferred frontrunner and all candidates above at the maximum, and your less-preferred frontrunner and all candidates below at the minimum. In other words, strategic score voting is basically equivalent to approval voting, where you give each candidate a 1 or 0 and the highest total wins.
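The strategic collapse described above can be sketched as a simple threshold rule. This is only one reasonable formalization (max-rating everything you like at least as much as your preferred frontrunner), with invented names and utilities:

```python
# Strategic score ballot: push every rating to an extreme, yielding an
# approval-style ballot of minimums and maximums.
def strategic_score_ballot(utilities, frontrunners, max_score=5):
    preferred = max(frontrunners, key=utilities.get)
    threshold = utilities[preferred]
    return {c: (max_score if u >= threshold else 0)
            for c, u in utilities.items()}

# An honest voter who likes Y best but sees X and Z as the frontrunners:
print(strategic_score_ballot({"X": 4, "Y": 5, "Z": 1}, frontrunners=("X", "Z")))
# Only 0s and 5s remain: X and Y at 5, Z at 0.
```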
In one sense, score voting reducing to approval is OK. Approval voting is not a bad system at all. For instance, if there's a known majority Condorcet winner — a candidate who could beat any other by a majority in a one-on-one race — and voters are strategic — they anticipate the unique strong Nash equilibrium, the situation where no group of voters could improve the outcome for all its members by changing their votes, whenever such a unique equilibrium exists — then the Condorcet winner will win under approval. That's a lot of words to say that approval will get the "democratic" results you'd expect in most cases.
But in another sense, it's a problem. If one side of an issue is more inclined to be strategic than the other side, the more-strategic faction could win even if it's a minority. That clashes with many people's ideals of democracy; and worse, it encourages mind-killing political attitudes, where arguments are used as soldiers rather than as ways to seek the truth.
But score and approval voting are not the only systems which escape Arrow's theorem through the trapdoor of ratings. If score voting, using the average of voter ratings, too-strongly encourages voters to strategically seek extreme ratings, then why not use the median rating instead? We know that medians are less sensitive to outliers than averages. And indeed, median-based systems are more resistant to one-sided strategy than average-based ones, giving better hope for reasonable discussion to prosper. That is to say, in a simple model, a minority would need twice as much strategic coordination under median as under average, in order to overcome a majority; and there's good reason to believe that, because of natural factional separation, reality is even more favorable to median systems than that model.
There are several different median systems available. In the US during the 1910-1925 Progressive Era, early versions collectively called "Bucklin voting" were used briefly in over a dozen cities. These reforms, based on counting all top preferences, then adding lower preferences one level at a time until some candidate(s) reach a majority, were all rolled back soon after, principally by party machines upset at upstart challenges or victories. The possibility of multiple, simultaneous majorities is a principal reason for the variety of Bucklin/Median systems. Modern proposals of median systems include Majority Approval Voting, Majority Judgment, and Graduated Majority Judgment, which would probably give the same winners almost all of the time. An important detail is that most median system ballots use verbal or letter grades rather than numeric scores. This is justifiable because the median is preserved under any monotonic transformation, and studies suggest that it would help discourage strategic voting.
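A median count is equally short. The sketch below uses invented letter-grade ballots and a made-up grade scale; real median systems such as Majority Judgment add careful tie-breaking rules on top of this:

```python
# Minimal median-grade count: each candidate's median grade decides the winner.
import statistics

GRADES = {"F": 0, "D": 1, "C": 2, "B": 3, "A": 4}

ballots = [
    {"X": "A", "Y": "B", "Z": "F"},
    {"X": "F", "Y": "B", "Z": "A"},
    {"X": "C", "Y": "A", "Z": "D"},
]

def median_winner(ballots):
    candidates = ballots[0].keys()
    medians = {c: statistics.median(GRADES[b[c]] for b in ballots)
               for c in candidates}
    return max(medians, key=medians.get)

print(median_winner(ballots))  # Y: median grade B beats X's C and Z's D
```

Because the median is preserved under any monotonic transformation, it makes no difference whether voters see letters or numbers, which is the justification mentioned above for verbal ballots.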
Serious attention to rated systems like approval, score, and median systems barely began in the 1980s, and didn't really pick up until 2000. Meanwhile, the increased amateur interest in voting systems in this period — perhaps partially attributable to the anomalous 2000 US presidential election, or to more-recent anomalies in the UK, Canada, and Australia — has led to new discoveries in ranked systems as well. Though such systems are still clearly subject to Arrow's theorem, new "improved Condorcet" methods, which use certain tricks to count a voter's equal preference between two candidates on either side of the ledger depending on the strategic needs, seem to offer promise that Arrovian pathologies can be kept to a minimum.

With this embarrassment of riches of systems to choose from, how should we evaluate which is best? Well, at least one thing is a clear consensus: plurality is a horrible system. Beyond that, things are more controversial; there are dozens of possible objective criteria one could formulate, and any system's inventor and/or supporters can usually formulate some criterion by which it shines.
Ideally, we'd like to measure the utility of each voting system in the real world. Since that's impossible — it would take not just a statistically-significant sample of large-scale real-world elections for each system, but also some way to measure the true internal utility of a result in situations where voters are inevitably strategically motivated to lie about that utility — we must do the next best thing, and measure it in a computer, with simulated voters whose utilities are assigned measurable values. Unfortunately, that requires assumptions about how those utilities are distributed, how voter turnout is decided, and how and whether voters strategize. At best, those assumptions can be varied, to see if findings are robust.
In 2000, Warren Smith performed such simulations for a number of voting systems. He found that score voting had, very robustly, one of the top expected social utilities (or, as he termed it, the lowest Bayesian regret). Close on its heels were a median system and approval voting. Unfortunately, though he explored a wide parameter space in terms of voter utility models and the inherent strategic inclination of the voters, his simulations did not include voters who were more inclined to be strategic when strategy was more effective. His strategic assumptions were also unfavorable to ranked systems, and slightly unrealistic in other ways. Still, though certain of his numbers must be taken with a grain of salt, some of his results were large and robust enough to be trusted. For instance, he found that plurality voting and instant runoff voting were clearly inferior to rated systems; and that approval voting, even at its worst, captured more than half of the benefit over plurality that any other system offered.
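For readers who want to play with the idea, here is a minimal Bayesian-regret-style simulation in the spirit of (but far cruder than) Smith's: honest voters only, uniform random utilities, comparing plurality against normalized score voting. All parameters here are arbitrary choices for illustration:

```python
import random

def simulate(n_voters=99, n_cands=4, trials=2000, seed=0):
    """Toy Bayesian-regret comparison with honest voters and uniform utilities.
    Regret = total utility of the best candidate minus that of the winner."""
    rng = random.Random(seed)
    regret = {"plurality": 0.0, "score": 0.0}
    for _ in range(trials):
        # utils[v][c] is voter v's utility for candidate c
        utils = [[rng.random() for _ in range(n_cands)] for _ in range(n_voters)]
        totals = [sum(u[c] for u in utils) for c in range(n_cands)]
        best = max(totals)
        # Plurality: each voter votes for their single favorite.
        votes = [0] * n_cands
        for u in utils:
            votes[u.index(max(u))] += 1
        regret["plurality"] += best - totals[votes.index(max(votes))]
        # Score: each voter normalizes their utilities to [0, 1] and we sum.
        scores = [0.0] * n_cands
        for u in utils:
            lo, hi = min(u), max(u)
            span = (hi - lo) or 1.0
            for c in range(n_cands):
                scores[c] += (u[c] - lo) / span
        regret["score"] += best - totals[scores.index(max(scores))]
    return {k: v / trials for k, v in regret.items()}

print(simulate())  # score voting should show noticeably lower average regret
```

This omits everything that makes the problem hard — strategy, turnout, realistic utility correlations — but even so it reproduces the headline result: plurality leaves a lot of utility on the table.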
Summary of above: Rated systems, such as approval voting, score voting, and Majority Approval Voting, can avoid the problems of Arrow's theorem. Though they are certainly not immune to issues of strategic voting, they are a clear step up from plurality. Starting with this section, the opinions are my own; the two prior sections were based on general expert views on the topic.
5. Delegation and SODA
Rated systems are not the only way to try to beat the problems of Arrow and Gibbard (/Satterthwaite).
Summary of above:
6. Criteria and pathologies
do.
Summary of above:
7. Representation, proportionality, and sortition
do.
Summary of above:
8. What I'm doing about it and what you can
do.
Summary of above:
9. Conclusions and future directions
do.
Summary of above:
10. Appendix: voting systems table
Compliance of selected systems (table)
The following table shows which of the above criteria are met by several single-winner systems. Note: contains some errors; I'll carefully vet this when I'm finished with the writing. Still generally reliable though.
| System | Majority/ MMC | Condorcet/ Majority Condorcet | Cond. loser | Monotone | Consistency/ Participation | Reversal symmetry | IIA | Cloneproof | Polytime/ Resolvable | Summable | Equal rankings allowed | Later prefs allowed | Later-no-harm/ Later-no-help | FBC: No favorite betrayal |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Approval[nb 1] | Ambiguous | No/Strategic yes[nb 2] | No | Yes | Yes[nb 2] | Yes | Ambiguous | Ambig.[nb 3] | Yes | O(N) | Yes | No | [nb 4] | Yes | |
| Borda count | No | No | Yes | Yes | Yes | Yes | No | No (teaming) | Yes | O(N) | No | Yes | No | No | |
| Copeland | Yes | Yes | Yes | Yes | No | Yes | No (but ISDA) | No (crowding) | Yes/No | O(N2) | Yes | Yes | No | No | |
| IRV (AV) | Yes | No | Yes | No | No | No | No | Yes | Yes | O(N!)[nb 5] | No | Yes | Yes | No | |
| Kemeny-Young | Yes | Yes | Yes | Yes | No | Yes | No (but ISDA) | No (teaming) | No/Yes | O(N2)[nb 6] | Yes | Yes | No | No | |
| Majority Judgment[nb 7] | Yes[nb 8] | No/Strategic yes[nb 2] | No[nb 9] | Yes | No[nb 10] | No[nb 11] | Yes | Yes | Yes | O(N)[nb 12] | Yes | Yes | No[nb 13] | Yes | |
| Minimax | Yes/No | Yes[nb 14] | No | Yes | No | No | No | No (spoilers) | Yes | O(N2) | Some variants | Yes | No[nb 14] | No | |
| Plurality | Yes/No | No | No | Yes | Yes | No | No | No (spoilers) | Yes | O(N) | No | No | [nb 4] | No | |
| Range voting[nb 1] | No | No/Strategic yes[nb 2] | No | Yes | Yes[nb 2] | Yes | Yes[nb 15] | Ambig.[nb 3] | Yes | O(N) | Yes | Yes | No | Yes | |
| Ranked pairs | Yes | Yes | Yes | Yes | No | Yes | No (but ISDA) | Yes | Yes | O(N2) | Yes | Yes | No | No | |
| Runoff voting | Yes/No | No | Yes | No | No | No | No | No (spoilers) | Yes | O(N)[nb 16] | No | No[nb 17] | Yes[nb 18] | No | |
| Schulze | Yes | Yes | Yes | Yes | No | Yes | No (but ISDA) | Yes | Yes | O(N2) | Yes | Yes | No | No | |
| SODA voting[nb 19] | Yes | Strategic yes/yes | Yes | Ambiguous[nb 20] | Yes/Up to 4 cand. [nb 21] | Yes[nb 22] | Up to 4 candidates[nb 21] | Up to 4 cand. (then crowds) [nb 21] | Yes[nb 23] | O(N) | Yes | Limited[nb 24] | Yes | Yes | |
| Random winner/ arbitrary winner[nb 25] | No | No | No | NA | No | Yes | Yes | NA | Yes/No | O(1) | No | No | Yes | |
| Random ballot[nb 26] | No | No | No | Yes | Yes | Yes | Yes | Yes | Yes/No | O(N) | No | No | Yes | ||
"Yes/No", in a column which covers two related criteria, signifies that the given system passes the first criterion and not the second one.
- ^ a b These criteria assume that all voters vote their true preference order. This is problematic for Approval and Range, where various votes are consistent with the same order. See approval voting for compliance under various voter models.
- ^ a b c d e In Approval, Range, and Majority Judgment, if all voters have perfect information about each other's true preferences and use rational strategy, any Majority Condorcet or Majority winner will be strategically forced – that is, win in the unique Strong Nash equilibrium. In particular if every voter knows that "A or B are the two most-likely to win" and places their "approval threshold" between the two, then the Condorcet winner, if one exists and is in the set {A,B}, will always win. These systems also satisfy the majority criterion in the weaker sense that any majority can force their candidate to win, if it so desires. (However, as the Condorcet criterion is incompatible with the participation criterion and the consistency criterion, these systems cannot satisfy these criteria in this Nash-equilibrium sense. Laslier, J.-F. (2006) "Strategic approval voting in a large electorate,"IDEP Working Papers No. 405 (Marseille, France: Institut D'Economie Publique).)
- ^ a b The original independence of clones criterion applied only to ranked voting methods. (T. Nicolaus Tideman, "Independence of clones as a criterion for voting rules", Social Choice and Welfare Vol. 4, No. 3 (1987), pp. 185–206.) There is some disagreement about how to extend it to unranked methods, and this disagreement affects whether approval and range voting are considered independent of clones. If the definition of "clones" is that "every voter scores them within ±ε in the limit ε→0+", then range voting is immune to clones.
- ^ a b Approval and Plurality do not allow later preferences. Technically speaking, this means that they pass the technical definition of the LNH criteria - if later preferences or ratings are impossible, then such preferences can not help or harm. However, from the perspective of a voter, these systems do not pass these criteria. Approval, in particular, encourages the voter to give the same ballot rating to a candidate who, in another voting system, would get a later rating or ranking. Thus, for approval, the practically meaningful criterion would be not "later-no-harm" but "same-no-harm" - something neither approval nor any other system satisfies.
- ^ The number of piles that can be summed from various precincts is floor((e-1) N!) - 1.
- ^ Each prospective Kemeny-Young ordering has score equal to the sum of the pairwise entries that agree with it, and so the best ordering can be found using the pairwise matrix.
- ^ Bucklin voting, with skipped and equal-rankings allowed, meets the same criteria as Majority Judgment; in fact, Majority Judgment may be considered a form of Bucklin voting. Without allowing equal rankings, Bucklin's criteria compliance is worse; in particular, it fails Independence of Irrelevant Alternatives, which for a ranked method like this variant is incompatible with the Majority Criterion.
- ^ Majority judgment passes the rated majority criterion (a candidate rated solo-top by a majority must win). It does not pass the ranked majority criterion, which is incompatible with Independence of Irrelevant Alternatives.
- ^ Majority judgment passes the "majority condorcet loser" criterion; that is, a candidate who loses to all others by a majority cannot win. However, if some of the losses are not by a majority (including equal-rankings), the Condorcet loser can, theoretically, win in MJ, although such scenarios are rare.
- ^ Balinski and Laraki, Majority Judgment's inventors, point out that it meets a weaker criterion they call "grade consistency": if two electorates give the same rating for a candidate, then so will the combined electorate. Majority Judgment explicitly requires that ratings be expressed in a "common language", that is, that each rating have an absolute meaning. They claim that this is what makes "grade consistency" significant. Balinski M. and R. Laraki (2007) «A theory of measuring, electing and ranking». Proceedings of the National Academy of Sciences USA, vol. 104, no. 21, 8720-8725.
- ^ Majority judgment can actually pass or fail reversal symmetry depending on the rounding method used to find the median when there are even numbers of voters. For instance, in a two-candidate, two-voter race, if the ratings are converted to numbers and the two central ratings are averaged, then MJ meets reversal symmetry; but if the lower one is taken, it does not, because a candidate with ["fair","fair"] would beat a candidate with ["good","poor"] with or without reversal. However, for rounding methods which do not meet reversal symmetry, the chances of breaking it are on the order of the inverse of the number of voters; this is comparable with the probability of an exact tie in a two-candidate race, and when there's a tie, any method can break reversal symmetry.
- ^ Majority Judgment is summable at order KN, where K, the number of ranking categories, is set beforehand.
- ^ Majority judgment meets a related, weaker criterion: ranking an additional candidate below the median grade (rather than your own grade) of your favorite candidate, cannot harm your favorite.
- ^ a b A variant of Minimax that counts only pairwise opposition, not opposition minus support, fails the Condorcet criterion and meets later-no-harm.
- ^ Range satisfies the mathematical definition of IIA, that is, if each voter scores each candidate independently of which other candidates are in the race. However, since a given range score has no agreed-upon meaning, it is thought that most voters would either "normalize" or exaggerate their vote so that at least one candidate receives the top possible rating and at least one the bottom. In this case, Range would not be independent of irrelevant alternatives. Balinski M. and R. Laraki (2007) «A theory of measuring, electing and ranking». Proceedings of the National Academy of Sciences USA, vol. 104, no. 21, 8720-8725.
- ^ Once for each round.
- ^ Later preferences are only possible between the two candidates who make it to the second round.
- ^ That is, second-round votes cannot harm candidates already eliminated.
- ^ Unless otherwise noted, for SODA's compliances:
- Delegated votes are considered to be equivalent to voting the candidate's predeclared preferences.
- Ballots only are considered (In other words, voters are assumed not to have preferences that cannot be expressed by a delegated or approval vote.)
- Since at the time of assigning approvals on delegated votes there is always enough information to find an optimum strategy, candidates are assumed to use such a strategy.
- ^ For up to 4 candidates, SODA is monotonic. For more than 4 candidates, it is monotonic for adding an approval, for changing from an approval to a delegation ballot, and for changes in a candidate's preferences. However, if changes in a voter's preferences are executed as changes from a delegation to an approval ballot, such changes are not necessarily monotonic with more than 4 candidates.
- ^ a b c For up to 4 candidates, SODA meets the Participation, IIA, and Cloneproof criteria. It can fail these criteria in certain rare cases with more than 4 candidates. This is considered here as a qualified success for the Consistency and Participation criteria, which do not intrinsically have to do with numerous candidates, and as a qualified failure for the IIA and Cloneproof criteria, which do.
- ^ SODA voting passes reversal symmetry for all scenarios that are reversible under SODA; that is, if each delegated ballot has a unique last choice. In other situations, it is not clear what it would mean to reverse the ballots, but there is always some possible interpretation under which SODA would pass the criterion.
- ^ SODA voting is always polytime computable. There are some cases where the optimal strategy for a candidate assigning delegated votes may not be polytime computable; however, such cases are entirely implausible for a real-world election.
- ^ Later preferences are only possible through delegation, that is, if they agree with the predeclared preferences of the favorite.
- ^ Random winner: Uniformly randomly chosen candidate is winner. Arbitrary winner: some external entity, not a voter, chooses the winner. These systems are not, properly speaking, voting systems at all, but are included to show that even a horrible system can still pass some of the criteria.
- ^ Random ballot: Uniformly random-chosen ballot determines winner. This and closely related systems are of mathematical interest because they are the only possible systems which are truly strategy-free, that is, your best vote will never depend on anything about the other voters. They also satisfy both consistency and IIA, which is impossible for a deterministic ranked system. However, this system is not generally considered as a serious proposal for a practical method.
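As an aside, the Kemeny-Young note above — each prospective ordering's score is the sum of the pairwise-matrix entries that agree with it — can be made concrete with a brute-force sketch. This is fine for toy examples, though finding the best ordering is computationally hard in general:

```python
from itertools import permutations

def kemeny_best_order(ballots, candidates):
    """Score each possible ordering by summing the pairwise-matrix entries
    that agree with it; return the highest-scoring ordering (brute force)."""
    # pairwise[a][b] = number of ballots ranking a above b
    pairwise = {a: {b: 0 for b in candidates if b != a} for a in candidates}
    for ballot in ballots:
        for i, a in enumerate(ballot):
            for b in ballot[i + 1:]:
                pairwise[a][b] += 1

    def score(order):
        return sum(pairwise[a][b]
                   for i, a in enumerate(order) for b in order[i + 1:])

    return max(permutations(candidates), key=score)

# Toy example: 3 voters rank A>B>C, 2 voters rank B>C>A.
ballots = [("A", "B", "C")] * 3 + [("B", "C", "A")] * 2
print(kemeny_best_order(ballots, ("A", "B", "C")))
```

Because the pairwise matrix alone determines every ordering's score, precincts only need to transmit that matrix — which is the O(N²) summability claim in the table.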
11. Footnotes
¹ When I call my introduction "overblown", I mean that I reserve the right to make broad generalizations there, without getting distracted by caveats. If you don't like this style, feel free to skip to section 2.
² Of course, the original "politics is a mind killer" sequence was perfectly clear about this: "Politics is an important domain to which we should individually apply our rationality—but it's a terrible domain in which to learn rationality, or discuss rationality, unless all the discussants are already rational." The focus here is on the first part of that quote, because I think Less Wrong as a whole has moved too far in the direction of avoiding politics as not a domain for rationalists.
³ Bayes developed his theorem decades before Condorcet's Essai, but Condorcet probably didn't know of it, as it wasn't popularized by Laplace until about 30 years later, after Condorcet was dead.
⁴ Yes, this happens to be the same Alan Gibbard from the previous paragraph.
⁵ Confusingly, "public choice" refers to a school of thought, while "social choice" is the name for the broader domain of study. Stop reading this footnote now if you don't want to hear mind-killing partisan identification. "Public choice" theorists are generally seen as politically conservative in the solutions they suggest. It seems to me that the broader "social choice" has avoided taking on a partisan connotation in this sense.
⁶ Score voting is also called "range voting" by some. It is not a particularly new idea — for instance, the "loudest cheer wins" rule of ancient Sparta, and even aspects of honeybees' process for choosing new hives, can be seen as score voting — but it was first analyzed theoretically around 2000. Approval voting, which can be seen as a form of score voting where the scores are restricted to 0 and 1, had entered theory only about two decades earlier, though it too has a history of practical use back to antiquity.
⁷ OK, fine, this is a simplification. As a voter, you have imperfect information about the true level of support and propensity to vote in the superpopulation of eligible voters, so in reality the chance of a decisive tie involving candidates other than your two expected frontrunners is non-zero. Still, in most cases, it's utterly negligible.
⁸ This article will focus more on the literature on multi-player strategic voting (competing boundedly-instrumentally-rational agents) than on multi-player Aumann (cooperating boundedly-epistemically-rational agents). If you're interested in the latter, here are some starting points: Scott Aaronson's work is, as far as I know, the state of the art on 2-player Aumann, but its framework assumes that the players have a sophisticated ability to empathize and reason about each others' internal knowledge, and the problems with this that Aaronson plausibly handwaves away in the 2-player case are probably less tractable in the multi-player one. Dalkiran et al deal with an Aumann-like problem over a social network; they find that attempts to "jump ahead" to a final consensus value instead of simply dumbly approaching it asymptotically can lead to failure to converge. And Kanoria et al have perhaps the most interesting result from the perspective of this article; they use the convergence of agents using a naive voting-based algorithm to give a nice upper bound on the difficulty of full Bayesian reasoning itself. None of these papers explicitly considers the problem of coming to consensus on more than one logically-related question at once, though Aaronson's work at least would clearly be easy to extend in that direction, and I think such extensions would be unsurprisingly Bayesian.
Proxy Donating as Spam Filter
One thing that sometimes makes me hesitate to donate to a cause is that, unless you're donating in person and using cash, you're inevitably signing up for a gigantic stream of junk mail, not just from the organization you gave money to, but also from other, often totally unrelated charities. I haven't noticed many of these charities offering a privacy policy that lets you avoid this, but I haven't paid close attention because, frankly, I wouldn't have much confidence in such a privacy policy even if I saw one in some literature.
I wonder if there are donations to be gained in guaranteeing this sort of privacy by going through a third party. Charities could include the usual pre-addressed envelope in their mailings, only instead of their own address it would go to an organization called Givepal. The envelope would include the charity's id, and donors would be instructed to make their checks out to Givepal, who would then distribute the money to the specified charity, keeping the transaction anonymous. Givepal could survive by taking a cut of the donations if necessary, or could itself operate as a non-profit.
Part of a THINK Meetup Group? We Want to Hear From You!
Hello everybody, I've recently started as a volunteer for The High Impact NetworK (THINK), an effective altruism group with local chapters mostly in the US and UK. We're aiming to get more people, especially students, interested in effective altruism and to enlarge the network in a big way. Edit: Thank you to BenLowell for suggesting that I include a description!
One of the first things I'm trying to do is make THINK more personal and accessible. The modules provide a pretty good outline of the topics discussed, but they're highly structured and don't capture the feel of a live meeting very well. We'd like to advertise to potential members that there's a lot of value in attending a physical meetup beyond just learning the material already presented in the modules. We'd like it to feel more like a club and less like a classroom.
So if any of you are members of a THINK meetup group, I would love to hear your stories or see pictures and videos of your meetup groups in action. I'm hoping to convey the discussions and debates that go on after the more instructive module part is over. You don't have to reveal your name or face, but if you don't mind having your picture on the website, I would really appreciate seeing a name and face. If you choose to submit an anecdote, try focusing on something surprising or interesting that happened at a meetup, an experience you were unlikely to get elsewhere.
Please send your pictures/videos/anecdotes to ajeyac@berkeley.edu, and I'll forward them to THINK leader Mark Lee. I'll do my best to try to set up a section on the THINK website and put this up, but it may take a while, and if we get more submissions than we anticipated, they may not all show up.
Thank you!
The principle of ‘altruistic arbitrage’
Cross-posted from http://www.robertwiblin.com
There is a principle in finance that obvious and guaranteed ways to make a lot of money, so-called ‘arbitrages’, should not exist. It has a simple rationale. If market prices made it possible to trade assets around and in the process make a guaranteed profit, people would do it, in so doing shifting some prices up and others down. They would only stop making these trades once the prices had adjusted and the opportunity to make money had disappeared. While opportunities to make ‘free money’ appear all the time, they are quickly noticed and the behaviour of traders eliminates them. The logic of selfishness and competition means the only remaining ways to make big money should involve risk-taking, luck and hard work. This is the ‘no arbitrage’ principle.
Should a similar principle exist for selfless as well as selfish finance? When a guaranteed opportunity to do a lot of good for the world appears, philanthropists should notice and pounce on it, and only stop shifting resources into that activity once the opportunity has been exhausted. This wouldn’t work as quickly as the elimination of arbitrage on financial markets of course. Rather it would look more like entrepreneurs searching for and exploiting opportunities to open new and profitable businesses. Still, in general competition to do good should make it challenging for an altruistic start-up or budding young philanthropist to beat existing charities at their own game.
There is a very important difference though. Most investors are looking to make money, and so for them a dollar is a dollar, whatever business activity it comes from. Competition between investors makes opportunities to get those dollars hard to find. The same is not true of altruists, who have very diverse preferences about who is most deserving of help and how we should help them; a ‘util’ from one charitable activity is not the same as a ‘util’ from another. This suggests that unlike in finance, we may be able to find ‘altruistic arbitrages’, that is to say ‘opportunities to do a lot of good for the world that others have left unexploited.’
The rule is simple: target groups you care about that other people mostly don’t, and take advantage of strategies other people are biased against using. That rule is the root of a lot of advice offered to thoughtful givers and consequentialist-oriented folks. An obvious example is that you shouldn’t look to help poor people in rich countries. There are already a lot of government and private dollars chasing opportunities to assist them, so the low hanging fruit has all been used up and then some. The better value opportunities are going to be in poor, unromantic places you have never heard of, where fewer competing philanthropist dollars are directed. Similarly, you should think about taking high-risk, high-return strategies. Most do-gooders are searching for guaranteed and respectable opportunities to do a bit of good, rather than peculiar long-shot opportunities to do a lot of good. If you only care about the ‘expected’ return to your charity, then you can do more by taking advantage of the quirky, improbable bets neglected by others.
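The expected-value point can be made concrete with entirely made-up numbers:

```python
# Made-up numbers for illustration only: a guaranteed modest-impact option
# versus a long shot that usually fails but occasionally does enormous good.
safe = {"p_success": 1.0, "good_done": 100}
long_shot = {"p_success": 0.01, "good_done": 50_000}

ev_safe = safe["p_success"] * safe["good_done"]                 # = 100
ev_long_shot = long_shot["p_success"] * long_shot["good_done"]  # = 500

# A risk-neutral altruist prefers the long shot despite its 99% failure rate.
print(ev_safe, ev_long_shot)
```

The asymmetry with personal finance is that an individual can sensibly be risk-averse about their own money, but a small donor's contribution to the world's total altruistic portfolio is so tiny that risk-neutrality is the natural stance.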
Who do I personally care about more than others? For me the main candidates are animals, especially wild ones, and people who don’t yet exist and may never exist – interest groups that go largely ignored by the majority of humanity. What are the risky strategies I can employ to help these groups? Working on future technologies most people think are farcical naturally jumps to mind but I’m sure there are others and would love to hear them.
This principle is the main reason I am skeptical of mainstream political activism as a way to improve the world. If you are part of a significant worldwide movement, it’s unlikely that you’re working in a neglected area and exploiting how your altruistic preferences are distinct from those of others.
What other conclusions can we draw thinking about philanthropy in this way?