
[Link] Putanumonit - Discarding empathy to save the world

7 Jacobian 06 October 2016 07:03AM

Towards cause prioritisation estimates for child abuse

-6 Clarity 21 June 2016 12:30AM

 

 

 

Closest community background reading: http://www.givewell.org/labs/causes/criminal-justice-reform

 

Scale

 

prevalence

 

Back-of-the-envelope estimate of the number of people abused as children, excluding those who are emotionally abused or neglected (because those statistics aren't on the Wikipedia page for child abuse):

 

>Despite these limitations, international studies show that a quarter of all adults report experiencing physical abuse as children, and that 1 in 5 women and 1 in 13 men report experiencing childhood sexual abuse. Emotional abuse and neglect are also common childhood experiences ("Child maltreatment: Fact sheet No. 150". World Health Organization. December 2014)

 

If everyone who is sexually abused is also physically abused (the most conservative assumption), the overall rate is simply the larger single figure, 1/4 = 0.25. If they are completely separate populations, then ((1/5 + 1/13)/2) + (1/4) = 0.39 (~0.4) of all people are abused as children. So, roughly 0.25-0.4 of all people are abused.
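These bounds can be reproduced with a quick back-of-the-envelope script (the fully overlapping case reduces to the larger single figure, 1/4; the disjoint case is the sum):

```python
# Back-of-the-envelope prevalence bounds for childhood abuse,
# using the WHO figures quoted above.
physical = 1 / 4               # physical abuse as children
sexual = (1 / 5 + 1 / 13) / 2  # sexual abuse, averaged across sexes

# Lower bound: the sexually abused are a subset of the physically abused.
lower = max(physical, sexual)
# Upper bound: the two populations are completely separate.
upper = physical + sexual

print(f"{lower:.2f} to {upper:.2f}")  # 0.25 to 0.39
```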

 

>A long-term study of adults retrospectively reporting adverse childhood experiences including verbal, physical and sexual abuse, as well as other forms of childhood trauma found 25.9% of adults reported verbal abuse as children, 14.8% reported physical abuse, and 12.2% reported sexual abuse

 

More likely, ¼ of all people are abused as children in some way or another.

 

Harm (qualitatively)

 

>reduction in lifespan of 7 to 15 years (Kolassa, Iris – Tatjana. "Biological memory of childhood maltreatment – current knowledge and recommendations for future research" (PDF). Ulmer Volltextserver – Institutional Repository der Universität Ulm. Retrieved 30 March 2014.)

 

>more likely to suffer from physical ailments such as allergies, arthritis, asthma, bronchitis, high blood pressure, and ulcers (Dolezal, T.; McCollum, D.; Callahan, M. (2009). Hidden Costs in Health Care: The Economic Impact of Violence and Abuse. Academy on Violence and Abuse.)

 

>emotional abuse has been linked to increased depression, anxiety, and difficulties in interpersonal relationships ("Reactive attachment disorder")

 

>One long-term study found that up to 80% of abused people had at least one psychiatric disorder at age 21, with problems including depression, anxiety, eating disorders, and suicide attempts.[95] One Canadian hospital found that between 36% and 76% of women mental health outpatients had been abused, as had 58% of women and 23% of men schizophrenic inpatients.[96] A recent study has discovered that a crucial structure in the brain's reward circuits is compromised by childhood abuse and neglect, and predicts depressive symptoms later in life.[9]


Exponential growth, externalities or diminishment of the problem


>90 percent of maltreating adults were maltreated as children (Starr RH, Wolfe DA (1991). The Effects of Child Abuse and Neglect (pp. 1–33). New York: The Guilford Press. ISBN 978-0-89862-759-6)

 

>children who experience child abuse and/or neglect are 59% more likely to be arrested as juveniles, 28% more likely to be arrested as adults, and 30% more likely to commit violent crime ("Child Abuse Statistics". Childhelp. Retrieved 5 March 2015.)

 

> A study by Dante Cicchetti found that 80% of abused and maltreated infants exhibited symptoms of disorganized attachment. When some of these children become parents, especially if they suffer from posttraumatic stress disorder (PTSD), dissociative symptoms, and other sequelae of child

 

Shut up, stop dumping quotes and give me the QALYs

 

>The combined strata-level effects of maltreatment on Short Form–6D utility was a reduction of 0.028 per year (95% confidence interval=0.022, 0.034; P<.001). (www.ncbi.nlm.nih.gov/pmc/articles/PMC2377283/)

 

0.028 per year * world population (~7.4 billion) * 0.25 = 51,800,000 QALYs per year
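As a sketch (the ~7.4 billion world population is the figure implied by the 51.8 million result, and the 0.25 prevalence comes from the Scale section):

```python
# Annual QALY loss from child maltreatment, back-of-envelope.
utility_loss = 0.028  # SF-6D utility reduction per year (study quoted above)
world_pop = 7.4e9     # assumed world population, circa 2016
prevalence = 0.25     # fraction of people abused as children

annual_qaly_loss = utility_loss * world_pop * prevalence
print(f"{annual_qaly_loss:,.0f} QALYs per year")  # 51,800,000 QALYs per year
```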

 

Neglectedness

 

>In the U.S. in 2013, of the 294,000 reported child abuse cases only 81,124 received any sort of counseling or therapy. ("National Statistics on Child Abuse". National Children's Alliance. Archived from the original on 2 May 2014.)


It's likely to be more neglected in low and middle income countries.

 

Tractability

 

>Most acts of physical violence against children are undertaken with the intent to punish.[106] In the United States, interviews with parents reveal that as many as two thirds of documented instances of physical abuse begin as acts of corporal punishment meant to correct a child's behavior, while a large-scale Canadian study found that three quarters of substantiated cases of physical abuse of children have occurred within the context of physical punishment.[107] Other studies have shown that children and infants who are spanked by parents are several times more likely to be severely assaulted by their parents or suffer an injury requiring medical attention. Studies indicate that such abusive treatment often involves parents attributing conflict to their child's willfulness or rejection, as well as "coercive family dynamics and conditioned emotional responses".[16] Factors involved in the escalation of ordinary physical punishment by parents into confirmed child abuse may be the punishing parent's inability to control their anger or judge their own strength, and the parent being unaware of the child's physical vulnerabilities.[15]

 

>Some professionals argue that cultural norms that sanction physical punishment are one of the causes of child abuse, and have undertaken campaigns to redefine such norms.[108][109][110]

 

>Into the 21st century many countries have taken steps to eradicate domestic violence, such as criminalization of violence against women and other abuses. Organizations have been formed which provide assistance and protection of domestic abuse victims, laws and criminal remedies, and domestic violence courts (https://en.wikipedia.org/wiki/Management_of_domestic_violence)

 

 

What can we do about it?


Given that "three quarters of substantiated cases of physical abuse of children have occurred within the context of physical punishment" (see the Tractability section), and assuming that a ban on corporal punishment of children could be enforced with just 10% compliance worldwide, we could save a minimum of 10% * ¾ * 51,800,000 QALYs per year = 3,885,000 QALYs per year.
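The same estimate as a sketch (note the 10% compliance figure is an assumption, not data):

```python
# QALYs saved per year by a partially enforced ban on corporal punishment.
annual_qaly_burden = 51_800_000  # annual QALY loss, from the Scale section
corporal_share = 3 / 4           # share of abuse arising as corporal punishment
ban_compliance = 0.10            # assumed worldwide compliance with a ban

qalys_saved = ban_compliance * corporal_share * annual_qaly_burden
print(f"{qalys_saved:,.0f} QALYs per year")  # 3,885,000 QALYs per year
```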


Now, how cost-effective would it be? What could we use as a reference class for how many resources would need to be invested to outlaw and enforce bans on corporal punishment of children? I don't have the subject-matter experience to say, so if anybody can help me out here, please do. If you can also estimate how much money would be saved on everything from healthcare costs to averted criminal-justice costs, please chime in.


Instead, let’s compare with one Open Philanthropy Project funded area, [clearing the organ donation waitlist](http://www.givewell.org/labs/causes/organ-transplantation). There they have simply funded attempts to figure out a solution, whereas for child abuse some of the steps are more obvious. They decided to go ahead based on estimates of merely thousands of QALYs. It should be overwhelmingly evident that averting child abuse probably dominates the organ donation waitlist problem.


Faced with such striking findings, I think it’s appropriate to hand this over to the community for input before collaboratively investigating this area. Could averting child abuse be the most important cause? If it is at least an important cause, what does its neglect by the cause prioritisation community thus far say about the methods by which potentially important causes are identified?

Call for information, examples, case studies and analysis: votes and shareholder resolutions vs. divestment for social and environmental outcomes

-1 Clarity 05 May 2016 12:08AM

Typology: since it is not disambiguated elsewhere, divestment will be considered a form of shareholder activism in this article.


The aim of this call for information is to identify the conditions under which shareholder activism or divestment is the more appropriate strategy. Shareholder activism refers to the actions and activities around proposing and rallying support for a resolution at a company AGM, such as the reinstatement or impeachment of a director, or a specific action like renouncing a strategic direction (such as investment in coal). In contrast, divestment refers to the withdrawal of an investment in a company by shareholders, such as a tobacco or fossil fuel company. By identifying the important variables that determine which strategy is most appropriate, activists and shareholders will be able to choose strategies that maximise social and environmental outcomes, while companies will be able to maximise shareholder value.


Very little published academic literature exists on the consequences of divestment, or on the social and environmental consequences of shareholder activism beyond its impact on the financial performance of the firm and conventional metrics of shareholder value.


Controversy (1)


One item of non-academic literature, a manifesto on a socially responsible investing blog (http://www.socialfunds.com/media/index.cgi/activism.htm), weighs divestment against shareholder activism by suggesting that divestment is appropriate as a last resort, once considerable support has been rallied and the firm is interested in its long-term financial sustainability and responsive, whereas voting on shareholder resolutions is appropriate when groups of investors are interested in having an impact. It’s unclear how these contexts are distinguished. DCDivest, a divestment activist group (dcdivest.org/faq/#Wouldn’t shareholder activism have more impact than divestment?), contends in its manifesto that shareholder activism is better suited to changing one aspect of a company's operations, whereas divestment is appropriate when rejecting a basic business model. This answer too is inadequate as a decision model, since a company can operate multiple simultaneous business models, own several businesses, and one element of its operations may not be easily distinguishable from the whole system, the business. They also identify non-responsiveness of companies to shareholder action as a plausible reason to side with divestment.


Controversy (2)


Some have claimed that even resolutions that are voted down have an impact. It’s unclear how to enumerate that impact and others. The enumeration of impacts is itself controversial and, of course, methodologically challenging.


Research Question(s)


Population: In publicly listed companies

Exposure: is shareholder activism in the form of proxy voting, submitting shareholder resolutions and rallying support for shareholder resolution

Comparator: compared to shareholder activism in the form of divestment

Outcome: associated outcomes - shareholder resolutions (votes and resolutions), and/or indicators or eventuation of financial (non-)sustainability (divestment), and/or media attention (both)



Potential EA application:

Activists could nudge corporations to do the rest of their activism for them. To illustrate: Telstra, PayPal, UPS, Disney, Coca-Cola, Apple and plenty of other corporations have objected to specific pieces of legislation and commanded political change in different instances, independently and in unison, in different places, as described [here](http://www.onlineopinion.com.au/view.asp?article=18183). This could be a way to use a controlling share of influence in an organisation to leverage the whole organisation's lobbying power and magnify impact.

Link: Thoughts on the basic income pilot, with hedgehogs

3 Jacobian 04 May 2016 05:47PM

I have resisted the urge of promoting my blog for many months, but this is literally (per my analysis) for the best cause.

We have also raised a decent amount of money so far, so at least some people were convinced by the arguments and didn't stop at the cute hedgehog pictures.

Altruistic parenting

1 Clarity 08 February 2016 11:57AM

I just read this article about the felicific calculus of parenthood.

 The average happiness worldwide is 5.1 on a one-to-ten scale; Americans are at 7.1. Arbitrarily deciding that one year of a 10 life is equivalent to two years of a 5 life, the cost per QALY of having a child for total utilitarians is $5,500.

However, NICE’s threshold for cost effectiveness of a health intervention is about $30,000 (20,000 pounds) per QALY. Therefore, for total utilitarians, having a child may be considered a cost-effective intervention, although not an optimal intervention.

...surrogacy is an underexplored way to do good. Rather than costing money, the first-time surrogate earns thirty thousand dollars, which can grow to forty thousand dollars for experienced surrogates– and it still creates 109 QALYs that otherwise would not exist. These children are likely to grow up in wealthy families who really, really want to have them, and are thus likely to be even happier than this analysis suggests.

 In the comments section, the following grabbed my attention.

Estimates for the size of a sustainable human population appear to mostly range between 2 billion and 10 billion, and the meta-analysis here (http://bioscience.oxfordjournals.org/content/54/3/195) suggests that the best point estimate is around 7.7 billion. Meanwhile most estimates of population growth over the next hundred years suggest the total population will reach 10-11 billion. It seems likely that at some point in the next couple hundred years, the population will decrease substantially due to a Malthusian catastrophe. This transition is likely to cause a great deal of suffering. Surely even a total utilitarian would agree that it would be better for the necessary drop in population to be as small as possible.

And even if the population never rises above sustainable carrying capacity, it’s not obvious that total utilitarians should see a larger population as preferable. The drop in happiness due to increased competition for resources could outweigh the benefit of an additional person existing and having experiences.

Then, I read this article. Here are the highlights:

 

Bryan Caplan’s excellent book Selfish Reasons to Have More Kids[7] reviews the evidence from 40 years of adoption and twin studies with a frankly liberating result: barring actual deprivation or trauma, children are largely who they are going to be as a result of their genetic makeup. In long-term measures of well-being, education and employment, parental influence exerts a temporary effect which disappears when we are no longer living with our parents. So costly added extras (music lessons, coaching and tutoring, private school fees) are probably not going to change your child’s life in the long term. (However, data on the antenatal environment suggests benefit from taking iodine, and from avoiding ice-storms and licorice during pregnancy.[8]) Sharing time together and finding common interests can build a good relationship and help a child develop without major costs.

In addition to straightforward financial outlay, parenthood comes with costs of time and opportunity. Loss of flexibility and leisure mean you won’t be able to take all opportunities (like taking on extra work to make more money or advance your career). Late notice travel is unlikely to be possible. You will probably be sleep deprived for a large part of the first year or more of your child’s life, and this may impact on your work performance. The work of parenting will take time, though some of it may be outsourced at the cost of increased financial outlay.

So, this baby is going to cost you about £2000 a year and take a variable but large amount of your time, which will equate in the end to another chunk of money. For parents taking parental leave or working less than full time to provide childcare, there may be delay to career progression as well as income.  Does this represent an unacceptably large sum of money and time to be compatible with the goal of maximising our impacts for the good?

In the light of this reality, the rationalist suggestion I have encountered – that one guard against a desire to become a parent by pre-emptively being sterilised before the desire has arisen – seems a recipe for psychological disaster.

 Finally we may ask whether parenthood – and the resulting person created – will benefit the wider world? This is a harder good to calculate or rely upon. The inheritance of specific character traits is difficult to predict. It’s certainly not guaranteed that your offspring will embrace all of your values throughout their lifetime. The burden of onerous parental expectations is extensively documented, and it would appear foolish to have children on the expectation they will be altruistic in the same way you are. However, your child is likely to resemble you in many important respects. By adulthood, the heritability of IQ is between 0.7 and 0.8,[13] and there is evidence from twin studies of significant heritability of complex traits like empathy.[14] This would give them a high probability of adding significant net good to the world.

 

That's rather confronting:

* a '5' on a scale of happiness ain't that bad

* don't stress too much when raising your biological kids, you can't do that much

* they're probably not worth having anyway

Just kidding. But, the evidence is quite fascinating.

[Link] Review of "Doing Good Better"

0 fortyeridania 26 September 2015 07:58AM

The article is here.

The book is by William MacAskill, founder of 80000 Hours and Giving What We Can. Excerpt:

Effective altruism takes up the spirit of Singer’s argument but shields us from the full blast of its conclusion; moral indictment is transformed into an empowering investment opportunity...

Either effective altruism, like utilitarianism, demands that we do the most good possible, or it asks merely that we try to make things better. The first thought is genuinely radical, requiring us to overhaul our daily lives in ways unimaginable to most...The second thought – that we try to make things better – is shared by every plausible moral system and every decent person. If effective altruism is simply in the business of getting us to be more effective when we try to help others, then it’s hard to object to it. But in that case it’s also hard to see what it’s offering in the way of fresh moral insight, still less how it could be the last social movement we’ll ever need.

GiveDirectly, SCI and health outcomes

-5 Clarity 20 September 2015 09:15AM

**What GiveDirectly says:**


>This study documented large, positive, and sustainable impacts across a wide range of outcomes including assets, earnings, food security, **    mental health**, and domestic violence. It found no evidence of impacts on alcohol or tobacco use, crime, or inflation. It also examined a number of design questions such as how to size transfers and whether to give them to men or women.

Source: [GiveDirectly](https://www.givedirectly.org/research-at-give-directly.html)

**What the evidence says:**

*GiveDirectly*


>Overall, GiveDirectly increased households’ assets, consumption, and food security. The program also improved psychological well-being, especially among households with female recipients and households that received the large transfer. GiveDirectly had no impact on health or education measures.


>Psychological impacts: GiveDirectly households reported a 0.2 standard deviation increase (0.35 sd for large transfer recipients) on an index measuring psychological well-being. This improvement was largely driven by increases in happiness and life satisfaction, and reductions in stress and depression. There were no differences in self-reported measures between monthly-transfer and lump-sum recipients, but cortisol levels were significantly higher for monthly-transfer recipients. A potential explanation being that the monthly-transfer recipients seemed to have difficulty saving or investing the transfer, which may have led to increased stress.


Source: [Innovations for Poverty Action](http://www.poverty-action.org/project/0522)

*SCI*



>There is a very strong case that mass deworming is effective in reducing infections. The evidence on the connection to positive quality-of-life impacts is less clear, but there is a fairly strong possibility that deworming is highly beneficial.


>There is strong evidence that administration of the drugs reduces worm loads, but weaker evidence on the causal relationship between reducing worm loads and improved life outcomes.


>Evidence for the impact of deworming on short-term general health is thin, especially for soil-transmitted helminth (STH)-only deworming. Most of the potential effects are relatively small, the evidence is mixed, and different approaches have varied effects. We would guess that deworming populations with schistosomiasis and STH (combination deworming) does have some small impacts on general health, but do not believe it has a large impact on health in most cases. We are uncertain that STH-only deworming affects general health.



>In our view, the most compelling case for deworming as a cost-effective intervention comes not from its subtle impacts on general health (which appear relatively minor and uncertain) nor from its potential reduction in severe symptoms of disease effects (which we believe to be rare), but from the possibility that deworming children has a subtle, lasting impact on their development, and thus on their ability to be productive and successful throughout life.



>Community deworming before a child’s first birthday brings about a 0.2-standard-deviation improvement in performance on Raven’s Matrices, a decade after the intervention. Estimated effects on vocabulary measures are similar in magnitude, but not always as significant; effects on memory are not statistically distinguishable from zero. A summary measure, the first principal component of all six cognitive measurements, also shows a roughly 0.2-standard-deviation effect. These effects are equivalent to between 0.5 and 0.8 additional grades in school … The effect of community deworming spillovers on height, height-for-age, and stunting all appear statistically


Source: [GiveWell](http://www.givewell.org/international/top-charities/schistosomiasis-control-initiative).



GiveWell goes on to argue that this leads to improvements in income. In turn, I would expect those income gains to translate into increases in assets and consumption, with consequences similar to direct cash transfers as in the case of GiveDirectly.

Deworming a movement

-6 Clarity 30 August 2015 09:25AM

Over the last few days I've been reviewing the evidence for EA charity recommendations. Based on my personal experience alone, the community seems to be comprehensively inept, poor at marketing, extremely insular, and methodologically unsophisticated, but meticulous, transparent and well-intentioned. I currently hold the belief that EA movement building does more harm than good, and that it requires significant rebranding and shifts in its informal leadership, or to die out before it damages the reputation of the rationalist community and our capacity to cooperate with communities that share mutual interests.

It's one thing to be ineffective and know it. It's another thing to be ineffective and not know it. It's yet another thing to be ineffective, not know it, yet champion effectiveness and make a claim to moral superiority.

In case you missed the memo: deworming is controversial, GiveWell doesn't engage with the meat of the debate, and my investigations of the EA community's spaces suggest that this is not at all widely known. I've even briefly posted about it elsewhere on LessWrong to see if there was unspoken knowledge about it, but it seems not. Given that it's the hot topic in mainstream development studies and related academic communities, I'm aghast at how unresponsive 'we' are.

What's actionable for us here? If you're looking for a high-reliability effective altruism prospect, do not donate to SCI or Evidence Action. And by extension, do not donate to EA organisations that donate to these groups, including GiveWell. I am assuming you will use those funds more wisely instead, say by buying healthier food for yourself.

For those who don't want to review the links for more comprehensive analyses from Cochrane and GiveWell, here is one summary of the debate recommended in the Cochrane article:

Last month there was another battle in an ongoing dispute between economists and epidemiologists over the merits of mass deworming. In brief, economists claim there is clear evidence that cheap deworming interventions have large effects on welfare via increased education and ultimately job opportunities. It’s a best buy development intervention. Epidemiologists claim that although worms are widespread and can cause illnesses sometimes, the evidence of important links to health is weak and knock-on effects of deworming to education seem implausible. As stated by Garner “the belief that deworming will impact substantially on economic development seems delusional when you look at the results of reliable controlled trials.”

Aside: Framing this debate as one between economists and epidemiologists captures some of the dynamic of what has unfortunately been called the “worm wars” but it is a caricature. The dispute is not just between economists and epidemiologists. For an earlier round of this see this discussion here, involving health scientists on both sides. Note also that the WHO advocates deworming campaigns.

So. Deworming: good for educational outcomes or not?

On their side, epidemiologists point to 45 studies that are jointly analyzed in Cochrane reports. Among these they see few high quality studies on school attendance in particular, with a recent report concluding that they “do not know if there is an effect on school attendance (very low quality evidence).” Indeed they also see surprisingly few health benefits. One randomized control trial included one million Indian students and found little evidence of impact on health outcomes. Much bigger than all other trials combined; such results raise questions for them about the possibility of strong downstream effects. Economists question the relevance of this result and other studies in the Cochrane review.

On their side, the chief weapon in the economists’ arsenal has for some time been a paper from 2004 on a study of deworming in West Kenya by Ted Miguel and Michael Kremer, two leading development economists that have had an enormous impact on the quality of research in their field. In this paper, Miguel and Kremer (henceforth MK) claimed to show strong effects of deworming on school attendance not just for kids in treated schools but also for the kids in untreated schools nearby. More recently a set of new papers focusing on longer term impacts, some building on this study, have been added to this arsenal. In addition, on their side, economists have a few things that do not depend on the evidence at all: determination, sway, and the moral high ground. After all, who could be against deworming kids?

 


 

Additional criticisms of GiveWell charities: http://lesswrong.com/lw/mo0/open_thread_aug_24_aug_30/cp8h

The kind of work I think EAs should be focussing on: http://lesswrong.com/lw/mld/genosets/cnys AND

http://lesswrong.com/r/discussion/lw/mk2/lets_pour_some_chlorine_into_the_mosquito_gene/

The problem with MIRI: http://lesswrong.com/lw/cr7/proposal_for_open_problems_in_friendly_ai/cm2j

 

 

Effective Altruism from XYZ perspective

4 Clarity 08 July 2015 04:34AM

In this thread, I would like to invite people to summarise their attitude to Effective Altruism and their justification for that attitude, while identifying the framework or perspective they're using.

Initially I prepared an article for a discussion post (that got rather long) and I realised it was from a starkly utilitarian value system with capitalistic economic assumptions. I'm interested in exploring the possibility that I'm unjustly mindkilling EA.

I've posted my write-up as a comment to this thread so it doesn't get more air time than anyone else's summary, and all can benefit equally from the contrasting views.

I encourage anyone who participates to write up their summary and identify their perspective BEFORE they read the others, so that the contrast can be most plain.

Effective Altruism vs Missionaries? Advice Requested from a Newly-Built Crowdfunding Platform.

3 lululu 30 June 2015 05:39PM

Hi, I'm developing a next-generation crowdfunding platform for non-profit fundraising. From what we have seen, it is an effective tool; more about it below. I'm working with two other cofounders, both of whom are evangelical Christians. We get along well in general, except that I strongly believe in effective altruism and they do not.

We will launch a second pilot fundraising campaign in 2-3 weeks. The organization my co-founders have arranged for us to fundraise for is a "church planting" missionary organization. This is so opposed to my belief in effective altruism that I feel uncomfortable using our effective tool to funnel donors' dollars in THIS of all directions. This is not the reason I got involved in this project.

My argument with them is that we should charge more to ineffective nonprofits such as colleges, religious, or political organizations, and use that extra to subsidize the campaign and money-processing costs of the effective non-profits. I think this is logically consistent with earning to give. But I am being outvoted two-to-one by people who believe saving lives and saving souls are nearly equally important.

So I have two requests:

1. If anyone has advice on how to navigate this (including any especially well-written articles that would appeal to evangelical Christians, or experience negotiating with start-up cofounders), please share it.

2. If anyone has personal connections with effective or effective-ish non-profits, I would much prefer to fundraise for them than my co-founder's church connections. Caveat: the org must have US non-profit legal status. 

About the platform: the gist of our concept is that we're using a lot of research on psychology, biases and altruism to nudge more people towards giving, and also nudge them towards sustained involvement with the nonprofit in question. We're using some of the tricks that made the ice bucket challenge so successful (but with added accountability to ensure that visible involvement actually leads to monetary donations). Users can pledge money contingent on their friends' involvement, which motivates people in the same way that matching donations do. Giving is very visible, and people are more likely to give if they see friends giving. Friends are making the request for funding, which creates a sense of personal connection. Each person's mini-campaign has an involvement goal and a time limit (3 friends in 3 days) to create a sense of urgency. The money your friends donate visibly increases your impact, so it also feels like getting money from nothing - a $20 pledge can become hundreds of dollars. We nudge people towards automated smaller monthly recurring gifts. We try to minimize the number of barriers to making a donation (fewer steps, fewer fields).

 

4 days left in Giving What We Can's 2015 fundraiser - £34k to go

5 RobertWiblin 27 June 2015 02:16AM

We at Giving What We Can have been running a fundraiser to raise £150,000 by the end of June, so that we can make our budget through the end of 2015. We are really keen to keep the team focussed on their job of growing the movement behind effective giving, and ensure they aren't distracted worrying about fundraising and paying the bills.

With 4 days to go, we are now short just £34,000!

We also still have £6,000 worth of matching funds available for those who haven't given more than £1,000 to GWWC before and donate £1,000-£5,000 before next Tuesday! (For those who are asking, 2 of the matchers I think wouldn't have given otherwise and 2 I would guess would have.)

If you've been one of those holding out to see if we would easily reach the goal, now's the time to pitch in to ensure Giving What We Can can continue to achieve its vision of making effective giving the societal default and move millions more to GiveWell-recommended and other high impact organisations.

So please give now or email me for our bank details: robert [dot] wiblin [at] centreforeffectivealtruism [dot] org.

If you want to learn more, please see this more complete explanation for why we might be the highest impact place you can donate. This fundraiser has also been discussed on LessWrong before, as well as the Effective Altruist forum.

Thanks so much!


Lesswrong, Effective Altruism Forum and Slate Star Codex: Harm Reduction

13 diegocaleiro 08 June 2015 04:37PM

Cross Posted at the EA Forum

At Event Horizon (a Rationalist/Effective Altruist house in Berkeley) my roommates yesterday were worried about Slate Star Codex. Their worries also apply to the Effective Altruism Forum, so I'll extend them. 

The Problem:

Lesswrong was for many years the gravitational center for young rationalists worldwide, and it permits posting by new users, so good new ideas had a strong incentive to emerge.

With the rise of Slate Star Codex, the incentive for new users to post content on Lesswrong went down. Posting at Slate Star Codex is not open, so potentially great bloggers are not incentivized to come up with their ideas, but only to comment on the ones there. 

The Effective Altruism forum doesn't have that particular problem. It is however more constrained in terms of what can be posted there. It is after all supposed to be about Effective Altruism. 

We thus have three different strong attractors for the large community of people who enjoy reading blog posts online and are nearby in idea space. 

Possible Solutions: 

(EDIT: By possible solutions I merely mean to say "these are some bad solutions I came up with in 5 minutes, and the reason I'm posting them here is that if I post bad solutions, other people will be incentivized to post better solutions.")

If Slate Star Codex became an open blog like Lesswrong, more people would consider transitioning from passive lurkers to actual posters. 

If the Effective Altruism Forum got as many readers as Lesswrong, there could be two gravity centers at the same time. 

If the moderation and self selection of Main was changed into something that attracts those who have been on LW for a long time, and discussion was changed to something like Newcomers discussion, LW could go back to being the main space, with a two tier system (maybe one modulated by karma as well). 

The Past:

In the past there was Overcoming Bias, and Lesswrong in part became a stronger attractor because it was more open. Eventually lesswrongers migrated from Main to Discussion, and from there to Slate Star Codex, 80k blog, Effective Altruism forum, back to Overcoming Bias, and Wait But Why. 

It is possible that Lesswrong had simply exhausted its capacity. 

It is possible that a new higher tier league was needed to keep post quality high.

A Suggestion: 

I suggest two things should be preserved:

Interesting content being created by those with more experience and knowledge who have interacted in this memespace for longer (part of why Slate Star Codex is powerful), and 

The opportunity (and total absence of trivial inconveniences) for new people to try creating their own new posts. 

If these two properties are kept, there is a lot of value to be gained by everyone. 

The Status Quo: 

I feel like we are living in a very suboptimal blogosphere. On LW, Discussion is more read than Main, which means what is being promoted to Main is not attractive to the people who are actually reading Lesswrong. The top tier quality for actually read posting is dominated by one individual (a great one, but still), disincentivizing high quality posts by other high quality people. The EA Forum has high quality posts that go unread because it isn't the center of attention. 

 

Taking Effective Altruism Seriously

2 Salemicus 07 June 2015 06:59AM

Epistemic status: 90% confident.

Inspiration: Arjun Narayan, Tyler Cowen.

The noblest charity is to prevent a man from accepting charity, and the best alms are to show and enable a man to dispense with alms.

Moses Maimonides.

Background

Effective Altruism (EA) is "a philosophy and social movement that applies evidence and reason to determine the most effective ways to improve the world." Along with the related organisation GiveWell, it often focuses on getting the most "bang for your buck" in charitable donations. Unfortunately, despite their stated aims, their actual charitable recommendations are generally wasteful, such as cash transfers to poor Africans. This leads to the obvious question - how can we do better?

Doing better

One of the positive aspects of EA theory is its attempt to widen the scope of altruism beyond the traditional - for instance, to take into account catastrophic risks and the far future. However, altruism often produces a far-mode bias where intentions matter above results. This can be a particular problem for EA - for example, it is very hard to get evidence about how we are affecting the far future. An effective method needs to rely on a tight feedback loop between action and results, so that continual updates are possible. At the extreme, Far Mode operates in a manner where no updating on results takes place at all. However, it is also important that those results are of sufficient magnitude to justify the effort. EA has mostly fallen into the latter trap - achieving measurable results, but ones of no great consequence.

The population of sub-Saharan Africa is around 950 million people, and growing. They have been a prime target of aid for generations, but it remains the poorest region of the world. Providing cash transfers to them mostly merely raises consumption, rather than substantially raising productivity. A truly altruistic program would enable the people in these countries to generate their own wealth so that they no longer needed charity - unconditional transfers, by contrast, are an idea so lazy even Bob Geldof could stumble on it. The only novel thing about the GiveWell program is that the transfers are in cash.

Unfortunately, no-one knows how to turn poor African countries into productive Western ones, short of colonization. The problem is emphatically not a shortage of capital, but rather low productivity, and the absence of effective institutions in which that capital can be deployed. Sadly, these conditions and institutions cannot simply be transplanted into those countries.

A greater charity

However, there do exist countries with high productivity, and effective institutions in which that capital can be deployed. That capital then raises world productivity. As F.A. Harper wrote:

Savings invested in privately owned economic tools of production amount to... the greatest economic charity of all.

That is because those tools increase the productivity of labour, and so raise output. The pie has grown. Moreover, the person who invests their portion of the pie into new capital is particularly altruistic, both because they are not taking a share themselves, and because they are making a particularly large contribution to future pies.

In the same way that using steel to build tanks means (on the margin) fewer cars and vice-versa, using craftsmen to build a new home means (on the margin) fewer factories and vice-versa. Investment in capital is foregone consumption. Moreover, you do not need to personally build those economic tools; rather, you can part-finance a range of those tools by investing in the stock market, or other financial mechanisms.

Now, it's true that little of that capital will be deployed in sub-Saharan Africa at present, due to the institutional problems already mentioned. Investing in these countries will likely lead to your capital being stolen or becoming unproductive - the same trap that prevents locals from advancing equally prevents foreign investors from doing so. However, if sub-Saharan Africa ever does fix its culture and institutions, then the availability of that capital will then serve to rapidly raise productivity and then living standards, much as is taking place in China. Moreover, by making the rest of the world richer, this increases the level of aid other countries could provide to sub-Saharan Africa in future, should this ever be judged desirable. It also serves to improve the emigration prospects of individuals within these countries.

Feedback

Another great benefit of capital investment is the sharp feedback mechanism. The market economy in general, and financial markets in particular, serve to redistribute capital from ineffective to effective ventures, and from ineffective to effective investors. As a result, it is no longer necessary to make direct (and expensive) measurements of standards of living in sub-Saharan Africa; as long as your investment fund is gaining in value, you can rest safe in the knowledge that its growth is contributing, in a small way, to future prosperity.

Commitment mechanisms

However, if investment in capital is foregone consumption, then consumption is foregone investment. If I invest in the stock market today (altruistic), then in ten years' time spend my profits on a bigger house (selfish), then some of the good is undone. So the true altruist will not merely create capital, he will make sure that capital will never get spent down. One good way of doing that would be to donate to an institution likely to hold onto its capital in perpetuity, and likely to grow that capital over time. Perhaps the best example of such an institution would be a richly-endowed private university, such as Harvard, which has existed for almost 400 years and is said to have an endowment of $32 billion.

John Paulson recently gave Harvard $400 million. Unfortunately, this meant he came in for a torrent of criticism from people claiming he should have given the money to poor Africans, etc. I hope to see Effective Altruists defending him, as he has clearly followed through on their concepts in the finest way.

Further thoughts and alternatives

 

  • Some people say that we are currently going through a "savings glut" in which capital is less productive than previously thought. In this case, it may be that Effective Altruists should focus on funding (and becoming!) successful entrepreneurs in different spaces.
  • I am sympathetic to the Thielian critique that innovation is being steadily stifled by hostile forces. I view the past 50 years, and the foreseeable future, as a race between technology and regulation, which technology is by no means certain to win. It may be that Effective Altruists should focus on political activity, to defend and expand economic liberty where it exists - this is currently the focus of my altruism.
  • However, government is not the enemy; rather, the enemy is the cultural beliefs and conditions that create a demand for the destruction of economic liberty. To the extent this critique holds, it may be that Effective Altruists should focus on promoting a pro-innovation and pro-liberty mindset; for example, through movies and novels.

Conclusion


Effective altruists should be applauded for trying to bring evidence and reason to a subject that is plagued by far-mode thinking. But taking their ideas seriously quickly leads to a much more radical approach.

 

Giving What We Can needs your help!

23 RobertWiblin 29 May 2015 04:30PM

As you probably know, Giving What We Can exists to move donations to the charities that can most effectively help others. Our members take a pledge to give 10% of their incomes for the rest of their life to the most impactful charities. Along with other extensive resources for donors such as GiveWell and OpenPhil, we produce and communicate, in an accessible way, research to help members determine where their money will do the most good. We also impress upon members and the general public the vast differences between the best charities and the rest.

Many LessWrongers are members or supporters, including of course the author of Slate Star Codex. We also recently changed our pledge so that people could give to whichever cause they felt best helped others, such as existential risk reduction or life extension, depending on their views. Many new members now choose to do this.

What you might not know is that 2014 was a fantastic year for us - our rate of membership growth more than tripled! Amazingly, our 1066 members have now pledged over $422 million, and already given over $2 million to our top rated charities. We've accomplished this on a total budget of just $400,000 since we were founded. This new rapid growth is thanks to the many lessons we have learned by trial and error, and the hard work of our team of staff and volunteers.

To make it to the end of the year we need to raise just another £110,000. Most charities have a budget in the millions or tens of millions of pounds and we do what we do with a fraction of that.

We want to raise the money as quickly as possible, so that our staff can stop focusing on fundraising (which takes up a considerable amount of energy), and get back to the job of growing our membership.

Some of our supporters are willing to sweeten the deal as well: if you haven't given us more than £1,000 before, then they'll match 1:1 a gift between £1,000 and £5,000.

You can give now or email me (robert dot wiblin at centreforeffectivealtruism dot org) for our bank details. Info on tax deductible giving from the USA and non-UK Europe are also available on our website.

What we are doing this year

The second half of this year is looking like it will be a very exciting one for us. Four books about effective altruism are being released this year, including one by our own trustee William MacAskill, which will be heavily promoted in the US and UK. The Effective Altruism Summit is also turning into 'EA Global' with events at Google Headquarters in San Francisco, Oxford University and Melbourne, headlined by Elon Musk.

Tens, if not hundreds of thousands of people will be finding out about our philosophy of effective giving for the first time.

To do these opportunities justice Giving What We Can needs to expand its staff to support its rapidly growing membership and local chapters, and ensure we properly follow up with all prospective members. We want to take people who are starting to think about how they can best make the world a better place, and encourage them to make a serious long-term commitment to effective giving, and help them discover where their money can do the most good.

Looking back at our experience over the last five years, we estimate that each $1 given to Giving What We Can has already moved $6, and will likely end up moving between $60 and $100 to the most effective charities in the world. (These are time-discounted, counterfactual donations, only to charities we regard very highly. Check out this report for more details.)

This represents a great return on investment, and I would be very sad if we couldn't take these opportunities just because we lacked the necessary funding.

Our marginal hire

If we don't raise this money we will not have the resources to keep on our current Director of Communications. He has invaluable experience as a Communications Director for several high-profile Australian politicians, which has given him skills in web-development, public relations, graphic design, public speaking and social media. Amongst the things he has already achieved in his three months here are: automating the book-keeping on our Trust (saving huge amounts of time and minimising errors), greatly improving our published materials including our fundraising prospectus, and writing a press release and planning a media push to capitalise on our reaching 1,000 members and Peter Singer’s book release in the UK.

His wide variety of skills means that there are a large number of projects he would be capable of doing which would increase our member growth, and we are keen for him to test a number of these. His first project would be to optimise our website to make the most of the increased attention effective altruism will be generating over the summer and turn that into people actually donating 10% of their incomes to the most effective causes. In the past we have had trouble finding someone with such a broad set of crucial skills. Combined with how swiftly and well he has integrated into our team, it would be a massive loss to have to let him go and later down the line need to try to recruit a replacement.

As I wrote earlier you can give now or email me (robert dot wiblin at centreforeffectivealtruism dot org) for bank details or personalised advice on how to give best. If you need tax deductibility in another country check these pages on the USA and non-UK Europe.

I'm happy to take questions here or by email!

What degree of cousins are you and I? Estimates of Consanguinity to promote feelings of kinship and empathy

1 chaosmage 20 May 2015 05:10PM

Epistemic status: Wild guesswork based on half-understood studies from way outside my field. More food for thought than trustworthy information.

tl/dr: Estimates of familial relatedness between people should help promote empathy, so here's how to make them - and might this be useful for Effective Altruism?

The why

I don't know how it is for you, but for me, knowing I'm related to someone makes a specific emotional difference. Scenario: I'm at a big family-and-friends get-together, I meet a guy, we get along. (For clarity, let's assume no sexual tension.) And then we're told we're third cousins via some weird aunt. From the moment I'm told, I feel different towards him. Firm, forthcoming, obliging. Some kind of basic kinship emotion, I guess, noticeable when it shifts on these rare occasions but basically going on, deep down in System 1, every time that emailing a remote uncle feels different from emailing a similarly remote associate.

Meanwhile, my System 2 has heard that all humans are at least 50th degree cousins and likes to point out everyone I've ever had sex with was a cousin of some degree. That similarly remote associate where I don't have that kinship feeling - he's a relative too, just a more distant one. And when I notice that, I get a bit of that kinship feeling too...

With me so far? Here's my thesis: the two human feelings of kinship and empathy are closely connected, and to make one of them more salient is to increase the salience of the other.

I don't think this has been tested properly. A. J. Jacobs, who is running a huge family reunion event in New York this summer, said "some ambitious psychology professor needs to conduct a study about whether we deliver lower electrical shocks to people if we know we’re related" and I think he's exactly right.

Has anybody here not heard of circles of empathy? They're a concept invented by the very cool 19th century rationalist William Edward Hartpole Lecky in his "History of European Morals From Augustus to Charlemagne". Peter Singer summarizes it as follows:

Lecky wrote of human concern as an expanding circle which begins with the individual, then embraces the family and ‘soon the circle... includes first a class, then a nation, then a coalition of nations, then all humanity, and finally, its influence is felt in the dealings of man [sic] with the animal world’.

There's more to read about this in Peter Singer's "The Expanding Circle" or Steven Pinker's "The Better Angels of Our Nature", but what strikes me about it is contained in that single sentence: The expansion that is described tracks actual genetic relatedness, or Consanguinity. The list goes down a gradient of (expected) genetic relatedness. This makes the size of the circle of empathy seem to depend on a threshold of how related you need to be to someone in order to care about them.

(Note that Lecky published his "History of European Morals" - with this inclusion of concern about animals - in 1869, i.e. only ten years after the publication of "On the Origin of Species". There was some animal rights legislation before Darwin, but animal rights as a movement only arose after we knew animals to be our relatives.)

On the other hand, those who would promote empathy have always relied on familial vocabulary, chiefly "brother" and "sister", to refer to people who evidently weren't actual brothers or sisters. Martin Luther King, Jesus, the Buddha, Mandela, Gandhi, they all do this. So maybe it works a bit. Maybe it helps trigger that emotional kinship response and that somehow helps people get along.

Now to see how these emotional responses would arise, we could discuss reciprocal altruism and gene-centered Darwinism and whatnot, but "The Selfish Gene" is required reading anyway and I assume you've done your homework. I'd like to instead go to the second part of my thesis, the one about increasing salience.

Recognizing you're related to somebody does something. (Especially if you have an incest fetish, of course.) I propose that whatever it does increases empathy. And empathy might not be a categorically good thing, but it comes pretty close, at least until you extend it to all food groups. So maybe we could increase empathy among people by pointing out their relatedness. And maybe we can do this more vividly, more strikingly than by simply saying "we're all descended from apes, so we're all related, duh" or by boring the non-nerd majority to death with talk of human genetic clustering and fixation indexes.

So I'd like to revisit that "brothers and sisters" thing from MLK and those other guys. Maybe they shouldn't have used figurative language. Maybe a more lasting feeling of kinship can be created by literal language: By telling people how related they are. Detailed ancestry information is being collected at various Wiki-like sites, but even assuming they'll grow and become less US-centric, they don't go back very far (except around very famous people) and what came before remains guesswork. So let's do some Fermi-ish estimates.

The how

The drop dead amazing Nature Article Modelling the recent common ancestry of all living humans is way too careful and scientific to put an exact number on how long ago the last common ancestor lived, unfortunately. But the mean date their simulations come up with is 1415 BC, which will be approximately 120 generations ago, so let's say really remote people like the Karitiana tribe are, at most, something like 125th degree cousins of all of us. So that's a useful upper bound for the degree of cousinhood between any two arbitrary humans, such as you and me.

The lower bound could be something like 3 - if you and I were that closely related, we'd share a great-great-grandparent and could probably ascertain rather than guess that. With fairly extensive genealogy, the lower bound might go up to around 5 - which is the level where you need to look at 64 ancestors for each of us who lived in the middle of the 19th century and failed to use Facebook. We'd find it hard to ascertain whether your great-great-great-great-grandmother Mary was identical to mine.

There are a lot of special cases where the lower bound can be higher. If both people involved know their families more than 3 generations back were deep-rooted peasant folks from two distinct populations, the history books might tell them how many centuries further back are very unlikely to contain a common ancestor. (This will of course be much rarer among descendants of immigrants, like Americans, than it is for citizens of older or more rural countries.) If they're of different ethnicities, castes or classes that wouldn't normally date each other 80 years ago, the lower bound should probably go up a few more generations. If both people involved are Icelanders, they can just look up their last common ancestor in the comprehensive Icelandic family tree. But let's assume you and I don't have any of these special cases, and we're stuck with a lower bound of 3. Now between that and 125, how do we narrow it down?

Turns out the authors of that gorgeous Nature paper don't hand out access to their simulations to random dudes who just email them. So lets see how far we get on the hard way.

In a completely random mating model (where people do not tend to mate with people who happen to live near them, i.e. happen to be descendants of the same people), your number of ancestors doubles with every generation you go back, in a sort of ancestor tree that grows backwards. We're looking for the point where the two ancestor trees first meet. If we assume generations have homogeneous lengths (which implies further simplifying assumptions like moms and dads are the same age) and further assume only people from within the same generation have kids with each other, cousins of the Nth degree have a common ancestor N+1 generations ago, and each has 2^(N+1) ancestors belonging to that generation.

This means that for you and me to be, say, 15th degree cousins, our two sets of 2^(15+1) = 65536 ancestors have to have one person in common, some 480 years ago, assuming 30 years as mean parenthood age. Of course we each probably have less than 65536 unique ancestors due to... um... "reticulations".
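The doubling model above can be sketched in a few lines of code. This is purely illustrative - the function names and the 30-year generation length are assumptions carried over from the estimate above, and real pedigrees "reticulate" long before these numbers are reached:

```python
# Sketch of the simplistic random-mating model: Nth degree cousins share
# a common ancestor N+1 generations back, and each has 2^(N+1) ancestors
# in that shared generation. Real ancestor trees collapse well below this.

YEARS_PER_GENERATION = 30  # mean parenthood age assumed above

def ancestors_at(degree):
    """Ancestors each cousin has in the shared generation."""
    return 2 ** (degree + 1)

def years_back(degree):
    """Approximate years since the shared common ancestor."""
    return (degree + 1) * YEARS_PER_GENERATION

# 15th degree cousins: 2^16 = 65536 ancestors each, roughly 480 years ago.
print(ancestors_at(15), years_back(15))
```

Running it for degrees past 30 gives billions of ancestors, more than any historical world population, which is exactly where the model breaks down.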

But empirically, it seems that "a pair of modern Europeans living in neighboring populations share around 2–12 genetic common ancestors from the last 1,500 years" and even individuals from opposite ends of Europe will normally have common ancestors if you search back 3000 years (source). That isn't what you get from the simplistic model above - the numbers of ancestors it calculates exceed the world population less than 32 generations (about 800 years) ago. The empirical genetic data from this paper would indicate that it is likely the median first common ancestor between me and anybody in central Europe is somewhere like 1200 years (or 40 generations) ago and any two people anywhere in Europe would probably be at most 100th degree cousins.

Around 600 years ago is a good time to look at, because that's shortly before intercontinental travel started to intricately connect all regions of the world, including genetically. If most of your 600-years-ago ancestors lived outside Europe, you and I might still be <25 degrees cousins - maybe you have some ancestor who left for Europe 300 years ago, leaving siblings behind (your ancestors) and having kids in Europe (mine). Or vice versa. But that kind of thing is unlikely and since we're doing rough estimates I suggest we round that probability down to zero.

In genetic studies, no other continent is anywhere near as well-studied as Europe, so I guess we'll just have to roll with it and assume that other places are about the same as this paper found and the nice exponential drop-off with geographic distance that's the case in Europe is also the case elsewhere. America and Australia as continents of immigrants continue to be special cases. But for two people with families from, say, West Africa, I'd be comfortable assuming that if they're from roughly the same large region (say around the Bight of Benin) they're probably something like 40th degree cousins and if not, they're still something like 100th degree cousins at least.

It gets only slightly more complicated if the set of ancestors you know - say your four grandparents - are a mix of descendants from different regions or continents. Just add the number of generations between you and them to your expected degree of cousinhood to everybody from that region or continent.

Needless to say these are all wild guesses. I'm basically hoping someone more qualified than me will see this and be horrified enough to go do the job properly.

Now I'm not an American but statistically you probably are, and you might be more interested in knowing how closely you're related to other Americans - your boss, your sexual partners, or Mel Gibson. The bad news is that as a member of a nation of relatively recent immigrants, and particularly if your ancestors didn't all come from different continents, you have a harder time estimating most recent common ancestors with people than most other people on Earth. The good news, however, is that the data collected at the large ancestry sites ancestry.com, FamilySearch.org, Geni.com and WikiTree.com are all growing fastest in the US-centric part of their "world trees".

For cousinhood between people whose ancestors seem to have lived on entirely separate continents as far as anyone knows, I think we can only fall back on our upper bound of 125 degrees of cousinhood. Things get fuzzy so far back, the world population was much smaller, and the population of those who have descendants living today is smaller still. Shared ancestry within any particular generation remains unlikely, but over the centuries and millennia, between trade (particularly in slaves), the various empires and the mass rapes of warfare, genes did get mixed around. Again, see that spectacular Nature paper if you still haven't.

Side note: The most recent common ancestor of two arbitrarily chosen people on different continents is likely to be someone who had kids on different continents. So it is probably a very rich person, a sailor or a soldier, i.e. a male. In general, the number of unique males in anybody's ancestor tree will likely be much smaller than the number of unique females. I expect the difference will be sharper in most recent common ancestors of humans from different continents, because women have shorter fertility windows inside which to travel intercontinentally and don't seem to have moved nearly as much as men except as slaves.

The point of all this is simple. Now you can look at somebody and figure she's not only your cousin, you even have a guess as to the degree of cousin she is. I like to do that when I'm angry with people, because for me, it makes a distinct emotional difference. Maybe try if that works for you too.

Relation to the care allocation problem

I suspect this cousinhood thing could be a fairly principled solution to the problem of how to allocate caring between humans and animals, which Yvain/Scott laid out in a recent SSC post. Why not go by actual (known or estimated) blood relations, and privilege closer relatives over more distant ones?

Our last common ancestor with chimps was something like 5 to 6 million years ago, so our ancestor trees merge about 250000 (human) generations ago, making chimps something like quarter-million-degree cousins of all of us. Generations get a lot shorter further back, so our last common ancestor with cattle and dogs, about 92 million years ago, may be 30 million generations ago. Birds would be much more distant, our last common ancestor with them was around 310 million years ago, and so forth. (Richard Dawkins' The Ancestor's Tale has much more on this.) For me, this maps rather nicely onto my intuitive prejudices as to how much I should care about which creatures. It fails to capture my caring far more for plants than for bacteria, but EA has nothing to improve on in that department.

If EA has to have impartiality in the sense that your neighbor can't be more important to you than a tribesman in Mongolia, then this proposal isn't EA. Quoth Yvain:

allowing starving Third World people into the circle of concern totally pushes out most First World charities like art museums and school music programs and holiday food drives. This is a scary discovery and most people shy away from it. Effective altruists are the people who are selected for not having shied away from it.

So anybody trying to grow EA might want to make that step easier. Maybe a "closeness multiplier" on units of caring works better than a series of unprincipled exceptions, and still gets across the idea that units of caring are to be distributed between everybody (or everybody's QALYs), if unevenly. And then to become more impartial would be to have that multiplier approach 1.

And if that were the case, my personal preference for how to design that multiplier would be that it shouldn't rely on arbitrary constructs like citizenships. Maybe if EAs want to find a principled solution to the care allocation problem, consanguinity should be one of the options.

Log-normal Lamentations

12 Thrasymachus 19 May 2015 09:12PM

[Morose. Also very roughly drafted.]

Normally, things are distributed normally. Human talents may turn out to be one of these things. Some people are lucky enough to find themselves on the right side of these distributions – smarter than average, better at school, more conscientious, whatever. To them go many spoils – probably more so now than at any time before, thanks to the information economy.

There’s a common story told about a hotshot student at school whose ego crashes to earth when they go to university and find themselves among a group all as special as they thought they were. The reality might be worse: many of the groups the smart or studious segregate into (physics professors, Harvard undergraduates, doctors) have threshold (or near-threshold) effects: only those with straight A’s, only those with IQs > X, etc. need apply. This introduces a positive skew into the population: most members (and the median) fall below the group average, which is pulled up by a long tail of the (even more) exceptional. Instead of comforting ourselves by looking at the entire population, with which we compare favorably, most of us will look around our peer group, find ourselves in the middle, and have to look a long way up to the best. 1
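A tiny simulation (with arbitrary, IQ-flavoured numbers of my own choosing) shows how selecting above a threshold produces exactly this positive skew, with the median member below the group average:

```python
import numpy as np

rng = np.random.default_rng(0)
population = rng.normal(loc=100, scale=15, size=1_000_000)  # normally distributed talent

elite = population[population > 130]  # admit only those above a cutoff

# The selected group is positively skewed: the mean exceeds the median,
# and most members sit below their own group's average.
print(np.mean(elite) > np.median(elite))   # True
print(np.mean(elite < np.mean(elite)))     # ~0.6: most are "below average" here
```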

[figure: a normal distribution]

Yet part of growing up is recognizing there will inevitably be people better than you are – the more able may be able to buy their egos time, but no more. But that needn’t be so bad: in several fields (such as medicine) it can be genuinely hard to judge ‘betterness’, and so harder to find exemplars to illuminate your relative mediocrity. Often there are a variety of dimensions to being ‘better’ at something: although I don’t need to try too hard to find doctors who are better at some aspect of medicine than I (more knowledgeable, kinder, more skilled in communication etc.) it is mercifully rare to find doctors who are better than me in all respects. And often the tails are thin: if you’re around 1 standard deviation above the mean, people many times further from the average than you are will still be extraordinarily rare, even if you had a good stick to compare them to yourself.

Look at our thick-tailed works, ye average, and despair! 2

One nice thing about the EA community is that they tend to be an exceptionally able bunch: I remember being in an ‘intern house’ that housed the guy who came top in philosophy at Cambridge, the guy who came top in philosophy at Yale, and the guy who came top in philosophy at Princeton – and although that isn’t a standard sample, we seem to be drawn disproportionately not only from those who went to elite universities, but those who did extremely well at elite universities. 3 This sets the bar very high.

Many of the ‘high impact’ activities these high achieving people go into (or aspire to go into) are more extreme than normal(ly distributed): log-normal commonly, but it may often be Pareto. The distribution of income or outcomes from entrepreneurial ventures (and therefore upper-bounds on what can be ‘earned to give’), the distribution of papers or citations in academia, the impact of direct projects, and (more tenuously) degree of connectivity or importance in social networks or movements would all be examples: a few superstars and ‘big winners’, but orders of magnitude smaller returns for the rest.
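To illustrate (with arbitrary parameters, not data about any real field), a log-normal draw makes the 'few superstars, modest median' pattern concrete:

```python
import numpy as np

rng = np.random.default_rng(1)
impact = rng.lognormal(mean=0.0, sigma=1.5, size=100_000)  # hypothetical impact scores

median = np.median(impact)
top_1pct_share = np.sort(impact)[-1_000:].sum() / impact.sum()

print(np.mean(impact) > 2 * median)   # True: the mean dwarfs the median
print(top_1pct_share)                 # roughly a fifth of all impact from the top 1%
```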

Insofar as I have an ‘EA career path’, mine is earning to give: if I were trying to feel good about the good I was doing, my first port of call would be my donations. In sum, I’ve given quite a lot to charity – ~£15,000 and counting – which I’m proud of. Yet I’m no banker (or algo-trader) – those who are really good (or lucky, or both) can end up out of university with higher starting salaries than my peak expected salary, and so can give away more than ten times more than I will be able to. I know several of these people, and the running tally of each of their donations is often around ten times my own. If they or others become even more successful in finance, or very rich starting a company, there might be several more orders of magnitude between their giving and mine. My contributions may be little more than a rounding error to their work.

A shattered visage

Earning to give is kinder to the relatively minor players than other ‘fields’ of EA activity, as even though Bob’s or Ellie’s donations are far larger, they do not overdetermine my own: that their donations dewormed 1000x children does not make the 1x I dewormed any less valuable. It is unclear whether this applies to other ‘fields’: Suppose I became a researcher working on a malaria vaccine, but this vaccine is discovered by Sally the super scientist and her research group across the world. Suppose also that Sally’s discovery was independent of my own work. Although it might have been ex ante extremely valuable for me to work on malaria, its value is vitiated when Sally makes her breakthrough, in the same way a lottery ticket loses value after the draw.

So there are a few ways an Effective Altruist mindset can depress our egos:

  1. It is generally a very able and high achieving group of people, setting the ‘average’ pretty high.
  2. ‘Effective Altruist’ fields tend to be heavy-tailed, so that being merely ‘average’ (for EAs!) in something like earning to give means having a much smaller impact when compared to one of the (relatively common) superstars.
  3. (Our keenness for quantification makes us particularly inclined towards and able to make these sorts of comparative judgements, ditto the penchant for taking things to be commensurate).
  4. Many of these fields have ‘lottery-like’ characteristics where ex ante and ex post value diverge greatly. ‘Taking a shot’ at being an academic or entrepreneur or politician or leading journalist may be a good bet ex ante for an EA because the upside is so high even if their chances of success remain low (albeit better than the standard reference class). But if the median outcome is failure, the majority who will fail might find the fact it was a good idea ex ante of scant consolation – rewards (and most of the world generally) run ex post facto.

What remains besides

I haven’t found a ready ‘solution’ for these problems, and I’d guess there isn’t one to be found. We should be sceptical of ideological panaceas that can do no wrong and everything right, and EA is no exception: we should expect it to have some costs, and perhaps this is one of them. If so, better to accept it rather than defend the implausibly defensible.

In the same way I could console myself, on confronting a generally better doctor: “Sure, they are better at A, and B, and C, … and Y, but I’m better at Z!”, one could do the same with regard to the axes of one’s ‘EA work’. “Sure, Ellie the entrepreneur has given hundreds of times more money to charity, but what’s she like at self-flagellating blog posts, huh?” There’s an incentive to diversify as (combinatorially) it will be rarer to find someone who strictly dominates you, and although we want to compare across diverse fields, doing so remains difficult. Pablo Stafforini has mentioned elsewhere whether EAs should be ‘specialising’ more instead of spreading their energies over disparate fields: perhaps this makes that less surprising. 4

Insofar as people’s self-esteem is tied up with their work as EAs (and, hey, shouldn’t it be, in part?), there is perhaps a balance to be struck between soberly and frankly discussing the outcomes and merits of our actions, and being gentle to avoid hurting our peers by talking down their work. Yes, we would all want to know if what we were doing was near useless (or even net negative), but such news should be broken with care. 5

‘Suck it up’ may be the best strategy. These problems become more acute the more we care about our ‘status’ in the EA community; the pleasure we derive from not only doing good, but doing more good than our peers; and our desire to be seen as successful. Good though it is for these desires to be sublimated to better ends (far preferable all else equal that rivals choose charitable donations rather than Veblen goods to be the arena of their competition), it would be even better to guard against these desires in the first place. Primarily, worry about how to do the most good. 6

Notes:

  1. As further bad news, there may be progression of ‘tiers’ which are progressively more selective, somewhat akin to stacked band-pass filters: even if you were the best maths student at your school, then the best at university, you may still find yourself plonked around median in a positive-skewed population of maths professors – and if you were an exceptional maths professor, you might find yourself plonked around median in the population of Fields Medalists. And so on (especially – see infra – if the underlying distribution is something scale-free).
  2. I wonder how much this post is a monument to the grasping vaingloriousness of my character…
  3. Pace: academic performance is not the only (nor the best) measure of ability. But it is a measure, and a fairly germane one for the fairly young population ‘in’ EA.
  4. Although there are other more benign possibilities, given diminishing marginal returns and the lack of people available. As a further aside, I’m wary of arguments/discussions that note bias or self-serving explanations that lie parallel to an opposing point of view (“We should expect people to be more opposed to my controversial idea than they should be due to status quo and social desirability biases”, etc.) First because there are generally so many candidate biases available they end up pointing in most directions; second because it is unclear whether knowing about or noting biases makes one less biased; and third because generally more progress can be made on object level disagreement than on trying to evaluate the strength and relevance of particular biases.
  5. Another thing I am wary of is Crocker’s rules: the idea that you unilaterally declare: ‘don’t worry about being polite with me, just tell it to me straight! I won’t be offended’. Naturally, one should try and separate one’s sense of offense from whatever information was there – it would be a shame to reject a correct diagnosis of our problems because of how it was said. Yet that is very different from trying to eschew this ‘social formatting’ altogether: people (myself included) generally find it easier to respond well when people are polite, and I suspect this even applies to those eager to make Crocker’s Rules-esque declarations. We might (especially if we’re involved in the ‘rationality’ movement) want to overcome petty irrationalities like incorrectly updating on feedback because of an affront to our status or self esteem. Yet although petty, they are surprisingly difficult to budge (if I cloned you 1000 times and ‘told it straight’ to half, yet made an effort to be polite with the other half, do you think one group would update better?) and part of acknowledging our biases should be an acknowledgement that it is sometimes better to placate them rather than overcome them.
  6. cf. Max Ehrmann, who put it well:

    … If you compare yourself with others, you may become vain or bitter, for always there will be greater and lesser persons than yourself.

    Enjoy your achievements as well as your plans. Keep interested in your own career, however humble…

LW survey: Effective Altruists and donations

18 gwern 14 May 2015 12:44AM

Analysis of 2013-2014 LessWrong survey results on how much more self-identified EAers donate

http://www.gwern.net/EA%20donations

Effective effective altruism: Get $400 off your next charity donation

9 Baisius 17 April 2015 05:45AM

For those of you unfamiliar with churning, it's the practice of signing up for a rewards credit card, spending enough on your everyday purchases to earn the (usually significant) reward, and then cancelling the card. Many of these cards carry annual fees (which are commonly waived, and/or covered by the one-time reward). For a nominal amount of work, you can churn cards for significant bonuses.

Ordinarily I wouldn't come close to spending enough money to qualify for many of these rewards, but I recently made the Giving What You Can pledge. I now have a steady stream of predictable expenses, and conveniently, GiveWell allows donations via most any credit card. I've started using new rewards cards to pay for these expenses each time, resulting in free flights (this is how I'm paying to fly to NYC this summer), Amazon gift cards, or sometimes just straight cash.

Since the first of the year (total expenses $4000, including some personal expenses) I've churned $700 worth of bonuses (from a Delta American Express Gold and a Capital One Venture Card). This money can be redonated, saved, spent, or whatever.

Disclaimers:

1. I hope it goes without saying that you should pay off your balance in full each month, just like you should with any other card.

2. This has some negative impact on your credit, in the short term.

3. It should be noted that credit card companies make at least some money (I think 3%) off of your transactions, so if you're trying to hit a target of X% to charity, you would need to donate X/0.97, or 10.31% for 10% to account for that 3%. The reward should more than cover it.

4. Read more about this, including the pros and cons, from multiple sources before you try it. It's not something that should be done lightly, but does synergize very nicely with charity donations.
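The gross-up arithmetic in disclaimer 3 can be sketched as follows (assuming, as stated there, a flat 3% transaction fee):

```python
FEE = 0.03  # assumed card-network cut of each transaction

def gross_donation_pct(target_pct):
    """Percentage of income to charge so the charity nets target_pct after the fee."""
    return target_pct / (1 - FEE)

print(round(gross_donation_pct(10), 2))  # 10.31 -- donate 10.31% so 10% arrives
```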

Effective Sustainability - results from a meetup discussion

9 Gunnar_Zarncke 29 March 2015 10:15PM

Related-to Focus Areas of Effective Altruism

These are some small tidbits from our LW-like Meetup in Hamburg. The focus was on sustainability rather than on altruism, as that was more in the spirit of our group. EA was mentioned but no comparison was made. Well-informed effective altruists will probably find little new in this writeup.

So we discussed effective sustainability. To this end we were primed to think rationally by my 11-year-old, who moderated a session on mind-mapping 'reason' (with contributions from the children). Then we set out to objectively compare concrete everyday things by their sustainability. And how to do this.

Is it better to drink fruit juice or wine? Or wine or water? Or wine vs. nothing (i.e. to forego something)? Or wine vs. paper towels? (the latter intentionally different)

The idea was to arrive at simple rules of thumb to evaluate the sustainability of something. But we discovered that even simple comparisons are not that simple and intuition can run afoul (surprise!). One example was that apparently tote bags are not clearly better than plastic bags in terms of sustainability. But even the simple comparison of tap water vs. wine, which seems like a trivial subset case, is non-trivial when you consider where the water comes from and how it is extracted from the ground (we still think that water is better, but we are not as sure as before).

We discussed some ways to measure sustainability (in brackets: what we reduced each to):

  • fresh water use -> energy
  • packaging material used -> energy, permanent resources
  • transport -> energy
  • energy -> CO_2, permanent resources
  • CO_2 production
  • permanent consumption of resources

Life-Cycle Assessment (German: Ökobilanz) was mentioned in this context, but it was unclear what that meant precisely. Only afterwards did we discover that it's a blanket term for exactly this question (with many established measurements, though it is unclear how to simplify them for everyday use).

We didn't try to break this down further - a practical everyday approach doesn't allow for it, and the time spent on analysing and comparing options is itself a resource that might not be spent efficiently.

One unanswered question was how much time to invest in comparing alternatives. Too little comparison means taking the next-best option, which is what most people apparently do and which apparently doesn't lead to sustainable behavior overall. But too much analysis of simple decisions is not an option either.

The idea was still to arrive at actionable criteria. The first approximation we settled on was

1) Forego consumption. 

A no-brainer really, but maybe even that has to be stated. Instead of comparing options that are hard to compare, try to avoid consumption where you can. Water instead of wine or fruit juice or lemonade. This saves lots of cognitive resources.

Shortly after we agreed on the second approximation:

2) Spend more time on optimizing resources you consume large amounts of.

The example at hand was wine (which we consume only a few times a year) versus toilet paper... No need to feel remorse over the packaging of a one-time present.

Note that we mostly excluded personal well-being, happiness and hedons from our consideration. We were aware that our goals affect our choices and that hedons have to be factored into any real strategy, but we left this additional complication out of our analysis - at least for this time.

We did discuss signalling effects, mostly in the context of how effectively resources can be saved by convincing others to act sustainably. One important aspect for the parents was to pass on the idea and to act as a role model (with the caveat that children need a simplified model to grasp the concept). It was also mentioned humorously that one approach to minimizing personal resource consumption is suicide, and transitively to convince others to do the same - the ultimate solution being to have no humans on the planet (a solution my 8-year-old son - a friend of nature - arrived at too). This is apparently the problem when utilons/hedons are excluded.

For a short time we considered whether outreach comes for free (can be done in addition to abstinence) and should be the no-brainer number 3. But it was then realized that, at least right now and for us, most abstinence comes at a price. It was quoted that buying sustainable products is about 20% more expensive than buying normal products. Forgoing e.g. a car comes at the cost of reduced job options. Some jobs involve supporting less sustainable large-scale action. Having less money means fewer options to act sustainably. Time is convertible to money, and so on.

At this point the key insight mentioned was that it could be much more efficient, from a sustainability point of view, to e.g. buy CO_2 certificates than to buy organic products - except that the CO_2 certificate market is currently oversupplied. But there seem to be organisations which promise to achieve effective CO_2 reduction in developing countries (e.g. solar cooking) at a much higher rate than can be achieved here. Thus the third rule was

3) Spend money on sustainable organisations instead of on everyday products that only give you a good feeling.

And with this the meetup concluded. We will likely continue this.

A note for parents: Meetups with children can be productive (in the sense of results like the above). We were 7 adults and 7 children (aged 3 to 11). The children mostly entertained themselves and no parent had to leave the discussion for long. And the 11-year-old played a significant role in the meetup itself.

Impartial ethics and personal decisions

9 Emile 08 March 2015 12:14PM

Some moral questions I’ve seen discussed here:

  • A trolley is about to run over five people, and the only way to prevent that is to push a fat bystander in front of the trolley to stop it. Should I?
  • Is it better to allow 3^^^3 people to get a dust speck in their eye, or one man to be tortured for 50 years?
  • Who should I save, if I have to pick between one very talented artist, and five random nobodies?
  • Do I identify as a utilitarian? a consequentialist? a deontologist? a virtue ethicist?

Yet I spend time and money on my children and parents, that may be “better” spent elsewhere under many moral systems. And if I cared as much about my parents and children as I do about random strangers, many people would see me as somewhat of a monster.

In other words, “commonsense moral judgements” finds it normal to care differently about different groups; in roughly decreasing order:

  • immediate family
  • friends, pets, distant family
  • neighbors, acquaintances, coworkers
  • fellow citizens
  • foreigners
  • sometimes, animals
  • (possibly, plants...)
… and sometimes, we’re even perceived as having a *duty* to care more about one group than another (if someone saved three strangers instead of two of his children, how would he be seen?).

In consequentialist / utilitarian discussions, a recurring question is “who counts as agents worthy of moral concern” (humans? sentient beings? intelligent beings? those who feel pain? how about unborn beings?), which covers the latter part of the spectrum. However I have seen little discussion of the earlier part of the spectrum (friends and family vs. strangers), and it seems to be the one on which our intuitions agree most reliably - which is why I think it deserves more of our attention (and having clear ideas about it might help with the rest).

Let’s consider two rough categories of decisions:

  • impersonal decisions: what should government policy be? By what standard should we judge moral systems? On which cause is charity money best spent? Who should I hire?
  • personal decisions: where should I go on holidays this summer? Should I lend money to an unreliable friend? Should I take a part-time job so I can take care of my children and/or parents better? How much of my money should I devote to charity? In which country should I live?

Impartial utilitarianism and consequentialism (like the questions at the head of this post) make sense for impersonal decisions (including when an individual is acting in a role that requires impartiality - a ruler, a hiring manager, a judge), but clash with our usual intuitions for personal decisions. Is this because under those moral systems we should apply the same impartial standards to our personal decisions, or because those systems are only meant for discussing impersonal decisions, and personal decisions require additional standards?

I don’t really know, and because of that, I don’t know whether or not I count as a consequentialist (not that I mind much apart from confusion during the yearly survey; not knowing my values would be a problem, but not knowing which label I should stick on them? eh, who cares).

I also have similar ambivalence about Effective Altruism:

  • If it means that I should care as much about poor people in third-world countries as I do about my family and friends, then it’s a bit hard to swallow.
  • However, if it means that, assuming one is going to spend money to help people, one had better make sure that money helps them in the most effective way possible - then that’s much easier to accept.

Scott’s “give ten percent” seems like a good compromise on the first point.

So what do you think? How does “caring for your friends and family” fit in a consequentialist/utilitarian framework?

Other places this has been discussed:

  • This was a big debate in ancient China, between the Confucians who considered it normal to have “care with distinctions” (愛有差等), whereas Mozi preached “universal love” (兼愛) in opposition to that, claiming that care with distinctions was a source of conflict and injustice.
  • “Impartiality” is a big debate in philosophy - the question of whether partiality is acceptable or even required.
  • The philosophical debate between “egoism and altruism” seems like it should cover this, but it feels a bit like a false dichotomy to me (it’s not even clear whether “care only for one’s friends and family” counts as altruism or egoism)
  • “Special obligations” (towards friends and family, those one made a promise to) is a common objection to impartial, impersonal moral theories
  • The Ethics of Care seem to cover some of what I’m talking about.
  • A middle part of the spectrum - fellow citizens versus foreigners - is discussed under Cosmopolitanism.
  • Peter Singer’s “expanding circle of concern” presents moral progress as caring for a wider and wider group of people (counterpoint: Gwern's Narrowing Circle) (I haven't read it, so can't say much)

Other related points:

  • The use of “care” here hides an important distinction between “how one feels” (My dog dying makes me feel worse than hearing about a schoolbus in China falling off a cliff) and “how one is motivated to act” (I would sacrifice my dog to save a schoolbus in China from falling off a cliff). Yet I think we have the gradations on both criteria.
  • Hanson’s “far mode vs. near mode” seems pretty relevant here.

Shop for Charity: how to earn proven charities 5% of your Amazon spending in commission

12 tog 24 November 2014 08:29AM

If you shop on Amazon in the countries listed below, you can earn a substantial commission for charity by doing so via the links below. This is a cost-free way to do a lot of good, so I'd encourage you to do so! You can bookmark one of the direct links to Amazon below and then use that bookmark every time you shop.

The commission will be at least 5%, varying by product category. This is substantially better than the AmazonSmile scheme available in the US, which only gives 0.5% of the money you spend to charity. It works through Amazon's 'Associates Program', which pays this commission for referring purchasers to them, out of the unaltered purchase price (details here). It doesn't cost the purchaser anything. The money goes to Associates Program accounts owned by the EA non-profit Charity Science; money in these accounts always gets regranted to GiveWell-recommended charities unless explicitly earmarked otherwise. For ease of administration and to get tax-deductibility, commission will be regranted to the Schistosomiasis Control Initiative until further notice.

Direct links to Amazon for your bookmarks

If you'd like to shop for charity, please bookmark the appropriate link below now:


From now through November 28: Black Friday Deals Week

Amazon's biggest cut price sale is this week. The links below take you to currently available deals:

Please share these links

I'll add other links on the main 'Shop for Charity' page later. I'd love to hear suggestions for good commission schemes in other countries. If you'd like to share these links with friends and family, please point them to this post or even better this project's main page.

Happy shopping!

'Shop for Charity' is a Charity Science project

The Centre for Effective Altruism is hiring to fill five roles in research, operations and outreach

10 RobertWiblin 19 November 2014 10:41PM

The Centre for Effective Altruism, the group behind 80,000 Hours, Giving What We Can, the Global Priorities Project, Effective Altruism Outreach, and to a lesser extent The Life You Can Save and Animal Charity Evaluators, is looking to grow its team with a number of new roles:

We are so keen to find great people that if you introduce us to someone new who we end up hiring, we will pay you $1,000 for the favour! If you know anyone awesome who would be a good fit for us please let me know: robert [dot] wiblin [at] centreforeffectivealtruism [dot] org. They can also book a short meeting with me directly.

We may be able to sponsor outstanding applicants from the USA.

Applications close Friday 5th December 2014.

Why is CEA an excellent place to work? 

First and foremost, “making the world a better place” is our bottom line and central aim. We work on the projects we do because we think they’re the best way for us to make a contribution. But there’s more.

What are we looking for?

The specifics of what we are looking for depend on the role and details can be found in the job descriptions. In general, we're looking for people who have many of the following traits:

  • Self-motivated, hard-working, and independent;
  • Able to deal with pressure and unfamiliar problems;
  • Have a strong desire for personal development;
  • Able to quickly master complex, abstract ideas, and solve problems;
  • Able to communicate clearly and persuasively in writing and in person;
  • Comfortable working in a team and quick to get on with new people;
  • Able to lead a team and manage a complex project;
  • Keen to work with a young team in a startup environment;
  • Deeply interested in making the world a better place in an effective way, using evidence and research;
  • A good understanding of the aims of the Centre for Effective Altruism and its constituent organisations.

I hope to work at CEA in the future. What should I do now?

Of course this will depend on the role, but generally good ideas include:

  • Study hard, including gaining useful knowledge and skills outside of the classroom.
  • Degrees we have found provide useful training include: philosophy, statistics, economics, mathematics and physics. However, we are hoping to hire people from a more diverse range of academic and practical backgrounds in the future. In particular, we hope to find new members of the team who have worked in operations, or creative industries.
  • Write regularly and consider starting a blog.
  • Manage student and workplace clubs or societies.
  • Work on exciting projects in your spare time.
  • Found a start-up business or non-profit, or join someone else early in the life of a new project.
  • Gain impressive professional experience in established organisations, such as those working in consulting, government, politics, advocacy, law, think-tanks, movement building, journalism, etc.
  • Get experience promoting effective altruist ideas online, or to people you already know.
  • Use 80,000 Hours' research to do a detailed analysis of your own future career plans.

A website standard that is affordable to the poorest demographics in developing countries?

10 Ritalin 01 November 2014 01:43PM

Fact: the Internet is excruciatingly slow in many developing countries, especially outside of the big cities.

Fact: today's websites are designed in such a way that they become practically impossible to navigate over connections on the order of, say, 512 kbps. RAM below 4 GB and a 7-year-old CPU are also a guarantee of a terrible experience.

Fact: operating systems are usually designed in such an obsolescence-inducing way as well.

Fact: the Internet is a massive source of free-flowing information and a medium of fast, cheap communication and networking.

Conclusion: lots of humans in the developing world are missing out on the benefits of a technology that could be amazingly empowering and enlightening.

I just came across this: what would the internet 2.0 have looked like in the 1980s. This threw me back to my first forays in Linux's command shell and how enamoured I became with its responsiveness and customizability. Back then my laptop had very little battery life, and very few classrooms had plugs, but by switching to pure command mode I could spend the entire day at school taking notes (in LaTeX) without running out. But I switched back to the GUI environment as soon as I got the chance, because navigating the internet on the likes of Lynx is a pain in the neck.

As it turns out, I'm currently going through a course on energy distribution in isolated rural areas in developing countries. It's quite a fascinating topic, because of the very tight resource margins, the dramatic impact of societal considerations, and the need to tailor the technology to the existing natural renewable resources. And yet, there's actually a profit to be made investing in these projects; if managed properly, it's win-win.

And I was thinking that, after bringing them electricity and drinkable water, it might make sense to apply a similar cost-optimizing, shoestring-budget mentality to the Internet. We already have mobile apps and mobile web standards which are built with the mindset of "let's make this smartphone's battery last as long as possible".

Even then, (well-to-do, smartphone-buying) third-worlders are somewhat neglected: Samsung and the like have special chains of cheap Android smartphones for Africa and the Middle East. I used to own one; "this cool app that you want to try out is not available for use on this system" was a misery I had to get used to.

It doesn't seem to be much of a stretch to do the same thing for outdated desktops. I've been in cybercafés in North Africa that still employ IBM Aptiva machines, mechanical keyboard and all—with a Linux operating system, though. Heck, I've seen town "pubs", way up in the hills, where the NES was still a big deal among the kids, not to mention old arcades—Guile's theme goes everywhere.

The logical thing to do would be to adapt a system that's less CPU-intensive, mostly by toning down the graphics. A bare-bones, low-bandwidth internet that would let kids worldwide read Wikipedia, or classic literature, and even write fiction (by them, for them), that would let nationwide groups tweet to each other in real time, that would let people discuss projects and thoughts, converse and play, and do all of those amazing things you can do on the Internet, on a very, very tight budget, with very, very limited means. The Internet is supposed to make knowledge and information free and universal. But there's an entry-level cost that most humans can't afford. I think we need to bridge that. What do you guys think?

 

 

Donation Discussion - alternatives to the Against Malaria Foundation

4 ancientcampus 28 October 2014 03:00AM

About a year and a half ago, I made a donation to the Against Malaria Foundation. This was during jkaufman's generous matching offer.

That was 20 months ago, and my money is still in the "underwriting" phase - funding projects that are, as of yet, just plans and no nets.

Now, the AMF has given a reasonable explanation for why it is taking longer than expected:

"A provisional, large distribution in a province of the [Democratic Republic of the Congo] will not proceed as the distribution agent was unable to agree to the process requested by AMF during the timeframe needed by our co-funding partner."

So they've hit a snag, the earlier project fell through, and they are only now allocating my money to a new project. Don't get me wrong, I am very glad they are telling me where my money is going, and especially glad it didn't just end up in someone's pocket instead. With that said, though, I still must come to this conclusion:

The AMF seems to have more money than they can use, right now.

So, LW, I have the following questions:

  1. Is this a problem? Should one give their funds to another charity for the time being?
  2. Regardless of your answer to the above, are there any recommendations for other transparent, efficient charities? [other than MIRI]

Should EA's be Superrational cooperators?

8 diegocaleiro 16 September 2014 09:41PM

Back in 2012 when visiting Leverage Research, I was amazed by the level of cooperation in daily situations I got from Mark. Mark wasn't just nice, or kind, or generous. Mark seemed to be playing a different game than everyone else.

If someone needed X, and Mark had X, he would provide X to them. This was true for lending, but also for giving away.

If there was a situation in which someone needed to direct attention to a particular topic, Mark would do it.

You get the picture. Faced with prisoner dilemmas, Mark would cooperate. Faced with tragedy of the commons, Mark would cooperate. Faced with non-egalitarian distributions of resources, time or luck (which are convoluted forms of the dictator game), Mark would rearrange resources without any indexical evaluation. The action would be the same, and the consequentialist one, regardless of which side of a dispute was the Mark side.

I never got over that impression. The impression that I could try to be as cooperative as my idealized fiction of Mark was.

In game theoretic terms, Mark was a Cooperational agent.

  1. Altruistic - MaxOther
  2. Cooperational - MaxSum
  3. Individualist - MaxOwn
  4. Equalitarian - MinDiff
  5. Competitive - MaxDiff
  6. Aggressive - MinOther

Under these definitions of agent types, drawn from research on game-theoretic scenarios, what we call Effective Altruism would be called Effective Cooperation. The reason why we call it "altruism" is that even the most parochial EAs care about a set containing a minimum of 7 billion minds, where to a first approximation MaxSum ≈ MaxOther.
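To make the taxonomy concrete, here is a minimal sketch of the six agent types as objective functions over a two-player outcome (own payoff, other's payoff). The names follow the list above; the dictator-game payoffs are invented purely for illustration.

```python
# Each agent type, expressed as the quantity it maximizes.
objectives = {
    "Altruistic":    lambda own, other: other,              # MaxOther
    "Cooperational": lambda own, other: own + other,        # MaxSum
    "Individualist": lambda own, other: own,                # MaxOwn
    "Equalitarian":  lambda own, other: -abs(own - other),  # MinDiff
    "Competitive":   lambda own, other: own - other,        # MaxDiff
    "Aggressive":    lambda own, other: -other,             # MinOther
}

# In a dictator-game split of 10 units, each type picks a different share.
# (The Cooperational agent is indifferent here, since every split sums to 10.)
outcomes = [(kept, 10 - kept) for kept in range(11)]
choices = {name: max(outcomes, key=lambda o: f(*o))
           for name, f in objectives.items()}
```

In a fixed-pie game like this, MaxSum is indifferent among splits; the MaxSum ≈ MaxOther approximation in the text only bites when the agent's actions can change the total.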

Locally, however, the distinction makes sense. In biology, "altruism" usually refers to a third concept, different from both the "A" in EA and from Alt: it means acting in such a way that Other > Own, without reference to maximizing or minimizing, since evolution designs adaptation executors, not maximizers.

A globally Cooperational agent acts as a consequentialist globally. So does an Alt agent.

The question then is,

How should a consequentialist act locally?

The mathematical response is obviously as a Coo. What real people do is a mix of Coo and Ind.

My suggestion is that we use our undesirable yet unavoidable moral tribe distinction instinct, the one that separates Us from Them, and act always as Coos with Effective Altruists and mix Coo and Ind only with non EAs. That is what Mark did.

 

Effective Writing

6 diegocaleiro 18 July 2014 08:45PM

Granted, writing is not very effective. But some of us just love writing...

Earning to Give Writing: Which publications pay 1 USD or more per word?

Mind Changing Writing: What books need to be written that could actually help people effectively change the world?

Clarification Writing: What needs to be written because it is only through writing that these ideas will emerge in the first place?

Writing About Efficacy: Maybe nothing else needs to be written on this.

What should we be writing about if we have already been, for very long, training the craft? What has not yet been written, what is the new thing?

The world surely won't save itself through writing, but it surely won't write itself either.

 

The effect of effectiveness information on charitable giving

15 Unnamed 15 April 2014 04:43PM

A new working paper by economists Dean Karlan and Daniel Wood, The Effect of Effectiveness: Donor Response to Aid Effectiveness in a Direct Mail Fundraising Experiment.

The Abstract:

We test how donors respond to new information about a charity’s effectiveness. Freedom from Hunger implemented a test of its direct marketing solicitations, varying letters by whether they include a discussion of their program’s impact as measured by scientific research. The base script, used for both treatment and control, included a standard qualitative story about an individual beneficiary. Adding scientific impact information has no effect on whether someone donates, or how much, in the full sample. However, we find that amongst recent prior donors (those we posit more likely to open the mail and thus notice the treatment), large prior donors increase the likelihood of giving in response to information on aid effectiveness, whereas small prior donors decrease their giving. We motivate the analysis and experiment with a theoretical model that highlights two predictions. First, larger gift amounts, holding education and income constant, is a proxy for altruism giving (as it is associated with giving more to fewer charities) versus warm glow giving (giving less to more charities). Second, those motivated by altruism will respond positively to appeals based on evidence, whereas those motivated by warm glow may respond negatively to appeals based on evidence as it turns off the emotional trigger for giving, or highlights uncertainty in aid effectiveness.

In the experimental condition (for one of the two waves of mailings), the donors received a mailing with this information about the charity's effectiveness:

In order to know that our programs work for people like Rita, we look for more than anecdotal evidence. That is why we have coordinated with independent researchers [at Yale University] to conduct scientifically rigorous impact studies of our programs. In Peru they found that women who were offered our Credit with Education program had 16% higher profits in their businesses than those who were not, and they increased profits in bad months by 27%! This is particularly important because it means our program helped women generate more stable incomes throughout the year.

These independent researchers used a randomized evaluation, the methodology routinely used in medicine, to measure the impact of our programs on things like business growth, children's health, investment in education, and women's empowerment.

In the control condition, the mailing instead included this paragraph:

Many people would have met Rita and decided she was too poor to repay a loan. Five hungry children and a small plot of mango trees don’t count as collateral. But Freedom from Hunger knows that women like Rita are ready to end hunger in their own families and in their communities.

High school students and effective altruism

9 VipulNaik 14 March 2014 07:04PM

The cluster of ideas underlying effective altruism is an important part of my worldview, and I believe it would be valuable for many people to be broadly familiar with these ideas. As I mentioned in an earlier LessWrong post, I was pleasantly surprised that many advisees for Cognito Mentoring (including some who are still in high school) were familiar with and interested in effective altruism. Further, our page on effective altruism learning resources has been one of our more viewed pages in recent times, with people spending about eight minutes on average on the page according to Google Analytics.

In this post, I consider the two questions:

  1. Are people in high school ready to understand the ideas of effective altruism?
  2. Are there benefits from exposing people to effective altruist ideas when they are still in high school?

1. Are people in high school ready to understand the ideas of effective altruism?

I think that the typical LessWrong reader would have been able to grasp key ideas of effective altruism (such as room for more funding and earning to give) back in ninth or tenth grade from the existing standard expositions. Roughly, I expect that people who are 2 or more standard deviations above the mean in IQ can understand the ideas when they begin high school, and those who are 1.5 standard deviations above the mean in IQ can understand the ideas by the time they end high school. Certainly, some aspects of the discussion, such as the one charity argument, benefit from knowledge of calculus. Both the one charity argument and the closely related concept of room for more funding are linked with the idea of marginalism in economics. But it's not a dealbreaker: people can understand the argument better with calculus or economics, but they can understand it reasonably well even without. And it might also work in reverse: seeing these applications before studying the formal mathematics or economics may make people more interested in mastering the mathematics or economics.

Of course, just because people can understand effective altruist ideas if they really want to, doesn't mean they will do so. It may be necessary to simplify the explanations and improve the exposition so as to make it more attractive to younger people. An alternative route would be to sneak the explanations into things young people are already engaging with. This could be an academic curriculum or a story. Harry Potter and the Methods of Rationality is arguably an example of the latter, though it is focused more on rationality than on effective altruism.

However, I'm highly uncertain of my guesstimates, partly because I'm not very actively in touch with a representative cross-section of typical, or even of intellectually gifted, high school students. The subset of people I know is generally mediated by several levels of selection bias. I'm therefore quite eager to hear thoughts, particularly from people who are themselves high school students or have tried to discuss effective altruist ideas with high school students.

2. Are there benefits from exposing people to effective altruist ideas when they are still in high school?

Effective altruism as it was originally conceived has been highly focused on the question of where to donate money for the most impact (this is the focus of organizations such as GiveWell and Giving What We Can). This makes it of less direct relevance to people still in high school, because they don't have much disposable income. But there are arguably other benefits. Some examples:

  • In recent times, there has been more discussion in the effective altruist community about smart career choice. This seems to have begun with discussion of earning to give. 80,000 Hours has played an important role in shaping the conversation on altruistic career choice. Since people start thinking about careers while in high school, effective altruism is potentially relevant. (This page compiles some links to discussions of altruistic career choice -- we'll be adding more to that as we learn more).
  • Lifestyle choices and habits can have an effect on the world both directly (for instance, being vegetarian, or recycling) and indirectly (good habits promote better earning or higher savings that can then be redirected to altruistic causes, or people can become more productive and generate more social value through their jobs). For the lifestyle choices that have a direct effect, it's never too early to start.  For instance, if being vegetarian is the right thing, one might as well switch as a teenager. For the indirect effects, starting earlier gives one more lead time to develop skills and habits. If frugal living habits and greater stamina at work promote earning to give, then these habits may be better to set while still a teenager than when one is 25. The Effective Altruists Facebook page includes discussions of many questions of this sort in addition to discussions about where to donate.
  • A number of people in high school and college are attracted to activities that ostensibly generate social value. Learning effective altruist ideas may make students more skeptical of many such activities and approach the decision of whether to participate in them more critically. For instance, a stalwart of effective altruism may not see much point (from the social value perspective) in going on a school-sponsored trip to lay bricks for a schoolhouse in Africa. The person may still engage in it as a fun activity, but will not have illusions about it being an activity of high social value. Similarly, people may be more skeptical of the social value of activities that involve volunteering in one's community for tasks where they are easily replaceable by others.
  • The effective altruist movement could itself benefit from a greater diversity of people contributing and participating. High school students may have insights that adults overlook.

Did I miss other points? Counterpoints? Do you have relevant experience that can shed light on the discussion? I'm eager to hear thoughts.

Some ideas in the post were based on discussion with my Cognito Mentoring collaborator Jonah Sinick.

UPDATE: The post provoked some discussion in a thread on the Effective Altruists Facebook group.

On not diversifying charity

1 DanielLC 14 March 2014 05:14AM

A common belief within the Effective Altruism movement is that you should not diversify charity donations when your donation is small compared to the size of the charity. This is counter-intuitive, and most people disagree with it. A Mathematical Explanation of Why Charity Donations Shouldn't Be Diversified has already been written, but it uses a simplistic model. Perhaps you're uncertain about which charity is best; perhaps charities are not continuous, let alone differentiable, and any donation is worthless unless it gives the charity enough money to finally afford another project; perhaps your utility function is nonlinear; and to top it all off, rather than accepting the standard idea of expected utility, you may be risk-averse.

Standard Explanation:

If you are too lazy to follow the link, or you just want to see me rehash the same argument, here's a summary.

The utility of a donation is differentiable. That is to say, if donating one dollar gives you one utilon, donating another dollar will give you close to one utilon. Not exactly the same, but close. This means that, for small donations, it can be approximated as a linear function. In this case, the best way to donate is to find the charity that has the highest slope, and donate everything you can to it. Since the amount you donate is small compared to the size of the charity, a first-order approximation will be fairly accurate. The amount of good you do with that strategy is close to what you predicted it would do, which is more than you'd predict of any other strategy, which is close to what you'd predict for them, so even if this strategy is sub-optimal, it's at least very close.

Corrections to Account for Reality:

Uncertainty:

Uncertainty is simple enough. Just replace utility with expected utility. Everything will still be continuous, and the reasoning works pretty much the same.

Nonlinear Utility Function:

If your utility function is nonlinear, this is fine as long as it's differentiable. Perhaps saving a million lives isn't a million times better than saving one, but saving the millionth life is about as good as the one after that, right? Maybe each additional person counts for a little less, but it's not like the first million all matter the same, but you don't care about additional people after that.

In this case, the effect of the charity is differentiable with respect to the donation, and the utility is differentiable with respect to the effect of the charity, so the utility is differentiable with respect to the donation.

Risk-Aversion:

If you're risk-averse, it gets a little more complicated.

In this case, you don't use expected utility. You use something else, which I will call meta-utility. Perhaps it's expected utility minus the standard deviation of utility. Perhaps it's expected utility, but largely ignoring extreme tails. What it is is a function from a random variable representing all the possibilities of what could happen to the reals. Strictly speaking, you only need an ordering, but that's not good enough here, since it needs to be differentiable.

Differentiability is more confusing in this case. It depends on the metric you're using. The way we'll be using it here is that having a sufficiently small probability of a given change, or a given probability of a sufficiently small change, counts as a small change. For example, if you only care about the median utility, this isn't differentiable. If I flip a coin, and you win a million dollars if it lands on heads, then you will count that as worth a million dollars if the coin is slightly weighted towards heads, and nothing if it's slightly weighted towards tails, no matter how close it is to being fair. But that's not realistic. You can't track probabilities that precisely. You might care less about the tails, so that only things in the 40%-60% range matter much, but you're going to pick something continuous. In fact, I think we can safely say that you're going to pick something differentiable. If I add a 0.1% chance of saving a life given some condition, it will make about the same difference as adding another 0.1% chance given the same condition. If you're risk-averse, you'd care more about a 0.1% chance of saving a life if it takes effect during the worst-case scenario than during the best case, but you'd still care about as much about a 0.1% chance of saving a life during the worst case as about upgrading it to saving two lives in that case.

Once you accept that it's continuous, the same reasoning follows as with expected utility. A continuous function of a continuous function is continuous, so the meta-utility of a donation with respect to the amount donated is continuous.

To make the reasoning more clear, here's an example:

Charity A saves one life per grand. Charity B saves 0.9 lives per grand. Charity A has ten million dollars, and Charity B has five million. One or more of these charities may be fraudulent, and not actually doing any good. You have $100, and you can decide where to donate it.

The naive view is to split the $100, since you don't want to risk spending it on something fraudulent. That makes sense if you care about how many lives you save, but not if you care about how many people die. They sound like they're the same thing, but they're not.

If you donate everything to Charity A, it has $10,000,100 and Charity B has $5,000,000. If you donate half and half, Charity A has $10,000,050 and Charity B has $5,000,050. It's a little more diversified. Not much more, but you're only donating $100. Maybe the diversification outweighs the good, maybe not. But if you decide that it is diversifying enough to matter more, why not donate everything to Charity B? That way, Charity A has $10,000,000, and Charity B has $5,000,100. If you were controlling all the money, you'd probably move a million or so from Charity A to Charity B, until it's well and truly diversified. Or maybe it's already pretty close to the ideal and you'd just move a few grand. You'd definitely move more than $100. There's no way it's that close to the optimum. But you only control the $100, so you just do as much as you can with that to make it more diversified, and send it all to Charity B. Maybe it turns out that Charity B is a fraud, but all is not lost, because other people donated ten million dollars to Charity A, and lots of lives were saved, just not by you.

Discontinuity:

The final problem to look at is that the effects of donations aren't continuous. The place I've seen this come up the most is in discussions of vegetarianism. If you don't eat meat, it's not going to make enough difference to keep the stores from ordering another crate of meat, which means exactly the same number of animals are slaughtered.

Unless, of course, you were the straw that broke the camel's back, and you did keep a store from ordering a crate of meat, and you made a huge difference.

There are times when you might be able to figure that out beforehand. If you're deciding whether or not to vote, and you're not in a battleground state, you know you're not going to cast the deciding vote, because you have a fair idea of who will win and by how much. But you have no idea at what point a store will order another crate of meat, or when a charity will be able to send another crate of mosquito nets to Africa, or something like that. If you make a graph of the number of crates a charity sends by percentile, you'll get a step function, where there's a certain chance of sending 500 crates, a certain chance of sending 501, etc. You're just shifting the whole thing to the left by epsilon, so it's a little more likely each shipment will be made. What actually happens isn't continuous with respect to your donation, but you're uncertain, and taking what happens as a random variable, it is continuous.
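A quick simulation of that point, with hypothetical numbers: the number of crates actually sent is a step function of total funding, but averaging over your uncertainty about the charity's current balance makes the expected number of crates rise continuously, at roughly one crate per crate's worth of dollars donated.

```python
import random

random.seed(0)

CRATE_COST = 5000  # hypothetical cost of one crate of mosquito nets

def crates_sent(total_funds):
    # What actually happens: a step function of total funding.
    return total_funds // CRATE_COST

def expected_crates(donation, trials=100_000):
    # We don't know the charity's exact balance, so model it as uniform
    # over one crate's worth of funding above some base. Averaging over
    # that uncertainty smooths the step into (approximately) a line.
    total = 0
    for _ in range(trials):
        current = 1_000_000 + random.uniform(0, CRATE_COST)
        total += crates_sent(current + donation)
    return total / trials

# Each extra dollar adds about 1/CRATE_COST expected crates:
gain_100 = expected_crates(100) - expected_crates(0)
```

Under this model a $100 donation raises the expected number of crates by about 100/5000 = 0.02, even though any particular $100 almost certainly triggers no extra crate at all.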

A few other notes:

Small Charities:

In the case of a sufficiently small charity or large donation, the argument is invalid. It's not that it takes more finesse like those other things I listed. The conclusion is false. If you're paying a good portion of the budget, and the marginal effects change significantly due to your donations, you should probably donate to more than one charity even if you're not risk-averse and your utility function is linear.

I would expect that the next best charity you manage to find would be worse by more than a few percent, so I really doubt it would be worth diversifying unless you personally are responsible for more than a third of the donations.

An example of this is keeping money for yourself. The hundredth dollar you spend on yourself has about ten times the effect the thousandth does, and the entire budget is donated by you. The only time you shouldn't diversify is if the marginal benefit of the last dollar is still higher than what you could get donating to charity.

Another example is avoiding animal products. Avoiding steak is much more cost-effective than avoiding milk, but once you've stopped eating meat, you're stuck with things like avoiding milk.

Timeless Decision Theory:

If other people are going to make similar decisions to yours, your effective donation is larger, so the caveats about small charities apply. That being said, I don't think this is really much of an issue.

If everyone is choosing independently, even if most choices correlate, the end result will be that the charities get just enough funding that some people donate to some and others donate to others. If this happens, chances are that it would be worthwhile for a few people to actually split their investments, but it won't make a big difference. They might as well just donate it all to one.

I think this will only become a problem if you're just donating to the top charity on GiveWell, regardless of how closely they rated second place, or you're just donating based purely on theory, and you have no idea if that charity is capable of using more money.

Jobs and internships available at the Centre for Effective Altruism: new 'EA outreach' roles added

5 tog 21 February 2014 11:50AM

I recently posted on LessWrong main about the jobs and internships currently available at the Centre for Effective Altruism. (As I mentioned, effective altruism in general and CEA in particular have been discussed many times on LessWrong, so these opportunities might be of interest to some readers!) We're just starting a new project to do effective altruism outreach and 'marketing' (much like what Peter Hurford discusses in this post), so have added some new roles in this to the recruitment round; there's a full description of them here. If you're interested, apply by 5pm GMT on February 28th, and if you know anyone who might be, do pass it along!

Private currency to generate funds for effective altruism

1 Stefan_Schubert 14 February 2014 12:00AM

In the last few years we have seen two interesting revolutionary ideas on how to change the monetary system. The first is Bitcoin: the most well-known peer-to-peer currency. It has been widely debated recently and I won't go into the details of allegations of use in criminal activities etc. (for one thing, I don't know much about it). My interest is rather in the money creation part. The people who run the Bitcoin software are rewarded for their work with new Bitcoins - a process called mining. Now the pace at which new Bitcoins are mined is limited, which means that Bitcoin creation is a zero-sum game: the more one miner contributes to the Bitcoin software, the fewer Bitcoins other miners get. Unsurprisingly, this has led to an arms race: miners spend nearly as much on running the software as they get back in the form of new Bitcoins.

The second idea is the Chicago Plan, which was debated already in the '30s, after the great crash of 1929, but which was recently resurrected by Michael Kumhof (senior economist at the IMF, of all places). The central idea of the Chicago Plan is to abolish fractional reserve banking - the system by which private banks in effect create money out of thin air. Instead of lending out most of the depositors' money, banks would effectively have to let the deposits stay in the bank.

Instead, money would be created by the central bank/government, a process that would generate a massive seignorage for the government. According to Kumhof, it would also have other beneficial effects, such as killing off the "boom-and-bust" cycles which he thinks fractional reserve banking is mostly responsible for, and diminishing the wasteful parts of the financial sector.

Kumhof's ideas have not been well received. Overall, it is remarkable how little reform there has been of the financial and monetary system given that the world had a major financial meltdown in 2008 (and was close to an even greater one, in my understanding). Governments won't challenge the financial system radically in the near future, that's for sure.

Instead radical reforms can only come from private hands. Let us now compare the two ideas. In the Bitcoin system money is created by private hands, but in wasteful ways, which effectively means that there is very little seignorage. Under the Chicago plan, money is created by the government in much more efficient ways, which leads to a large seignorage. Now my idea is to take the best part of both of these ideas: let a private player - more exactly, an altruistic organization such as CEA - produce the money centrally, Chicago plan-like, and let the seignorage be used for altruistic purposes. (Of course, there would be some costs of running the system, but if the system was sufficiently large, these would be negligible in relation to the seignorage.)

If the altruistic organization that did this had a sufficiently good reputation, chances are greater that people would trust the system. Of course, it would try to stop the currency from being used for money laundering, the drug trade, etc.

Generally, people would be suspicious of private currencies where the central authority collected a seignorage, but if this seignorage was used for charitable and other altruistic purposes (and people really trusted that that would be the case), this would, I hope, be less of a problem.

What do you think? I'd be happy to get comments from people who know more about the Bitcoin system, since I don't really know it (though I find it interesting). Perhaps there is some info concerning Bitcoins that tells against this proposal; if so, I'd be interested in that.

In Praise of Tribes that Pretend to Try: Counter-"Critique of Effective Altruism"

15 diegocaleiro 02 December 2013 07:18AM
Disclaimer: I endorse the EA movement and direct an EA/Transhumanist organization, www.IERFH.org

We finally have created the first "inside view" critique of EA.

The critique's main worry would please Hofstadter by being self-referential: being the first, and having taken too long to emerge, it indicates that EAs (Effective Altruists) are pretending to try instead of actually trying, or else they'd have self-criticized already.

Here I will try to clash head-on with what seems to be the most important point of that critique. This will be the only point I'll address, for the sake of brevity, mnemonics and force of argument. This is a meta-contrarian apostasy, in its purpose. I'm not sure it is a view I hold, any more than a view I think has to be out there in the open, being thought of and criticized. I am mostly indebted to this comment by Viliam_Bur, which was marinating in my mind while I read Ben Kuhn's apostasy.


Original Version Abstract

Effective altruism is, to my knowledge, the first time that a substantially useful set of ethics and frameworks to analyze one’s effect on the world has gained a broad enough appeal to resemble a social movement. (I’d say these principles are something like altruism, maximization, egalitarianism, and consequentialism; together they imply many improvements over the social default for trying to do good in the world—earning to give as opposed to doing direct charity work, working in the developing world rather than locally, using evidence and feedback to analyze effectiveness, etc.) Unfortunately, as a movement effective altruism is failing to use these principles to acquire correct nontrivial beliefs about how to improve the world.

By way of clarification, consider a distinction between two senses of the word “trying” I used above. Let’s call them “actually trying” and “pretending to try”. Pretending to try to improve the world is something like responding to social pressure to improve the world by querying your brain for a thing which improves the world, taking the first search result and rolling with it. For example, for a while I thought that I would try to improve the world by developing computerized methods of checking informally-written proofs, thus allowing more scalable teaching of higher math, democratizing education, etc. Coincidentally, computer programming and higher math happened to be the two things that I was best at. This is pretending to try. Actually trying is looking at the things that improve the world, figuring out which one maximizes utility, and then doing that thing. For instance, I now run an effective altruist student organization at Harvard because I realized that even though I’m a comparatively bad leader and don’t enjoy it very much, it’s still very high-impact if I work hard enough at it. This isn’t to say that I’m actually trying yet, but I’ve gotten closer.

Using this distinction between pretending and actually trying, I would summarize a lot of effective altruism as “pretending to actually try”. As a social group, effective altruists have successfully noticed the pretending/actually-trying distinction. But they seem to have stopped there, assuming that knowing the difference between fake trying and actually trying translates into ability to actually try. Empirically, it most certainly doesn’t. A lot of effective altruists still end up satisficing—finding actions that are on their face acceptable under core EA standards and then picking those which seem appealing because of other essentially random factors. This is more likely to converge on good actions than what society does by default, because the principles are better than society’s default principles. Nevertheless, it fails to make much progress over what is directly obvious from the core EA principles. As a result, although “doing effective altruism” feels like truth-seeking, it often ends up being just a more credible way to pretend to try.

Counterargument: Tribes have internal structure, and so should the EA movement.


This includes a free reconstruction, containing nearly the whole original, of what I took to be important in Viliam's comment.

Feeling-oriented, and outcome-oriented communities

People probably need two kinds of communities -- let's call them "feelings-oriented community" and "outcome-oriented community". For many people this division has been "home" and "work" over the centuries, though that has some misleading connotations. A very popular medieval alternative was "church" and "work". Organized large-scale societies have many alternatives that fill these roles, to greater or lesser degrees. Indigenous tribes keep the realms separated: "work" has a time and a place, and likewise rituals, late-afternoon discussions, chants, etc. fulfill the purpose of "church".

A "feelings-oriented community" is a community of people who meet because they enjoy being together and feel safe with each other. The examples are a functional family, a church group, friends meeting in a pub, etc... One of the important properties of feeling oriented communities, that according to Dennett has not yet sunk in the naturalist community is that nothing is a precondition for belonging to the group which feels, or the sacredness taking place. You could spend the rest of your life going to church without becoming a priest, listening to the tribal leaders and shamans talk without saying a word. There are no pre-requisites to become your parent's son, or your sister's brother every time you enter the house.

An "outcome-oriented community" is a community that has an explicit goal, and people genuinely contribute to making that goal happen. The examples are a business company, an NGO, a Toastmasters meetup, an intentional household etc... To become a member of an outcome-oriented community, you have to show that you are willing and able to bring about the goal (either for itself, or in exchange of something valuable). There is some tolerance if you stop doing things well, either by ignorance or, say, bad health. But the tolerance is finite and the group can frown upon, punish, or even expel those who are not clearly helping the goal.  

What are communities good for? What is good for communities?

The important part (to define what kind of group something is) is what really happens inside the members' heads, not what they pretend to do. For example, you could have an NGO with twelve members, where two of them want to have the work done, but the remaining ten only come to socialize. Of course, even those ten will verbally support the explicit goals of the organization, but they will be much more relaxed about timing, care less about verifying the outcomes, etc. For them, the explicit goals are merely a source of identity and a pretext to meet people professing similar values; for them, the community is the real goal. If they had a magic button which would instantly solve the problem, making the organization obviously obsolete, they wouldn't push it. The people who are serious about the goal would love to see it completed as soon as possible, so they can move to some other goals. (I have seen a similar tension in a few organizations, and the usual solution seems to be the serious members forming an "organization within an organization", keeping the other ones around them for social and other purposes.)

As an evolutionary just-so story, we have a tribe composed of many different people, and within the tribe we have a hunters group, containing the best hunters. Members of the tribe are required to follow the norms of the tribe. Hunters must be efficient in their jobs. But hunters don't become a separate tribe... they go hunting for a while, and then return back to their original tribe. The tribe membership is for life, or at least for a long time; it provides safety and fulfills the emotional needs. Each hunting expedition is a short-term event; it requires skills and determination. If a hunter breaks his legs, he can no longer be a hunter; but he still remains a member of his tribe. The hunter has descended from feeling-and-work status to feeling status only, and this is part of the expected cycles: a woman may stop working while having a child, a teenager may decide work is evil and stop working, an existentialist may pause for a year to reflect on the value of life itself in different ways; but throughout, they are not cast away from the reassuring arms of the feelings-oriented community.

A healthy double layered movement

Viliam and I think a healthy way of living should be modeled on two layers: a larger tribe based on shared values (rationality and altruism), and within this tribe a few working groups, both long-term (MIRI, CFAR) and short-term (the organizers of the next meetup). Of course it could be a few overlapping tribes (the rationalists, the altruists), but the important thing is that you keep your social network even if you stop participating in some specific project -- otherwise we get either cultish pressure (you have to remain hard-working on our project even if you no longer feel so great about it, or you lose your whole social network) or inefficiency (people remain formally members of the project, but lately barely any work gets done, and the more active ones are warned not to rock the boat). Joining or leaving a project should not be rewarded or punished socially.

This is the crux of Viliam's argument and of my disagreement with Ben's critique: the Effective Altruist community has grown large enough that it can easily afford to have two kinds of communities inside it. The feelings-oriented EAs, whom Ben calls (unfairly, in my opinion) EAs who are pretending to try, and the outcome-oriented EAs, who are really trying to be effective altruists.

Now, that is not how he put it in his critique. He used the fact that the critique had not been written before as sufficiently strong indication that the whole movement, as a monolithic, single entity, had failed its task of being introspective enough about its failure modes. This is unfair on two accounts: someone had to be the first, and the movement seems young enough that this is not a problem; and it is false that the entire movement is a single monolithic entity making right and wrong decisions in a void. The truth is that there are many people in the EA community, in different stages of life and of involvement with the movement. We should account for that and make room for newcomers as well as for ancient sages. EA is not one single entity that made one huge mistake. It is a couple thousand people, whose subgroups are working hard on several distinct things, frequently without communicating, and whose supergoal is reborn every day with the pushes and drifts going on inside the community.

Intentional Agents, communities or individuals, are not monolithic

Most importantly, if you accept the argument above that Effective Altruism can't be criticized as one single entity, because factually it isn't one, then I ask you to take this intuition pump one step further: each one of us is also not a single monolithic agent. We have good and bad days, and we are made of lots of tiny agents within, whose goals and purposes are only our own when enough of them coalesce so that our overall behavior goes in a certain direction. Just as you can't criticize EA as a whole for something its subsets haven't done (the fancy philosophers' word for this is the mereological fallacy), you can't claim that a particular individual, as a whole, is pretending to try because you've seen him have one or two lazy days, or because he is still addicted to a particular video game. Don't forget the demandingness objection to utilitarianism: if you ask a smoker to stop smoking because it is irrational to smoke, and he believes you, he may end up abandoning rationalism just because a small subset of him was addicted to smoking and he couldn't live with that much inconsistency in his self-image. Likewise, if being a utilitarian is infinitely demanding, you lose the utilitarians to "what the hell" effects.

The same goes for effective altruists. Ben's post makes the case for really effective altruism too demanding. Not even inside are we truly a monolithic entity, or a utility-function optimizer, regardless of how much we may wish we were. My favoured reading of the current state of the effective altruist community is not that people are pretending to really try, but that most are finding, for themselves, which aspects of their personalities they are willing to bend for altruism, and which they are not. I don't expect, and don't think anyone should expect, that any single individual becomes a perfect altruist. There are parts of us that just won't let go of some thing they crave and praise. We don't want to lose the entire community if one individual is not effective enough, and we don't want to lose one individual if a part of him, or a time-slice, is not satisfying the canonical expectation of the outcome-oriented community.

Rationalists already accepted a layered structure

We need to accept, as EAs, what Lesswrong as a blog has accepted: there will always be a group that is passive and feelings-oriented, and a group that is outcome-oriented, even if the subject matter of Effective Altruism is outcomes.

For a less sensitive example, consider an average job: you may think of your colleagues as your friends, but if you leave the job, how many of them will you keep in regular contact with? In contrast, a regular church just asks you to come to Sunday prayers and gives you some keywords and a few relatively simple rules. If this level of participation is ideal for you, welcome, brother or sister! And if you want more, feel free to join some higher-commitment group within the church. You choose the level of your participation, and you can change it during your life. For a non-religious example, in a dance group you could just go and dance, or choose to do the new year's presentation, or choose to recruit new dancers, all the way up to being the dance organizer and coordinator.

The current rationalist community has solved this problem to some extent. Your level of participation can range from being a lurker at LW all the way up, from meetup organizer to CFAR creator to writing the next HPMOR or its analogue.

Viliam ends his comment by saying: It would be great to have a LW village, where some people would work on effective altruism, others would work on building artificial intelligence, yet others would develop a rationality curriculum, and some would be too busy with their personal issues to do any of this now... but everyone would know that this is a village where good and sane people live, where cool things happen, and whichever of these good and real goals I will choose to prioritize, it's still a community where I belong. Actually, it would be great to have a village where 5% or 10% of people would be the LW community. Connotatively, it's not about being away from other people, but about being with my people.

The challenge, in my view, is now not how to make effective altruists stop pretending, but how to surround effective altruists with welcoming arms even when the subset of them that is active at that moment is not doing the right things. How can we make EA a loving and caring community of people who help each other, so that people feel taken care of enough that they actually have the attentional and emotional resources necessary to really go there and do the impossible?

Here are some examples of this layered system working in non-religious, non-tribal settings: Lesswrong has a karma system to distinguish different functions within the community. It also has meetups, it has a Study Hall, and it has strong relations with CFAR and MIRI.

Leverage Research, as a community/house, has active hard-core members, new hires, people in training, and friends and partners of people there, with very different outcomes expected from each.

Transhumanists have people who only self-identify, people who attend events, people who write for H+ magazine, a board of directors, and it goes all the way up to Nick Bostrom, who spends 70 hours a week working on academic content in related topics.

The solution is not just introspection, but the maintenance of a welcoming environment at every layer of effectiveness

The Effective Altruist community does not need to get introspectively even more focused on effectiveness - at least not right now - what it needs is a designed hierarchical structure which allows it to let everyone in, and let everyone transition smoothly between different levels of commitment.

Most people will transition upward, since understanding more makes you more interested, more effective, etc., in an upward spiral. But people also need to be able to slide down for a bit: to meet their relatives for Thanksgiving, to play Go with their work friends, to dance, to pretend they don't care about animals. To do their thing, the internal thing which has not converted to EA like the rest of them has. This is not only okay, not only tolerable; it is essential for the movement's survival.

But then how can those who are at their very best, healthy, strong, smart, and at the edge of the movement push it forward?

Here is an obvious place not to do it: Open groups on Facebook.

An open Facebook group is not the place to move it forward. Some people who are recognized as being at the forefront of the movement, like Toby, Will, Holden, Beckstead, Wise and others, should create an "advancing Effective Altruism" group on Facebook, and that will be a place where neither the feelings-oriented nor the outcome-oriented group has to suffer a lowered signal-to-noise ratio.

Now, once we create this hierarchy within the movement (not only the groups, but the mental hierarchy, and the feeling that it is fine to be in a feelings-oriented moment, or to have feelings-oriented experiences), we will also want to increase the chance that people move up the hierarchical ladder: as many as possible, as soon as possible; after all, the higher up you are, by definition, the more likely you are to be generating good outcomes. We have already started doing so. The EA Self-Help (secret) group on Facebook serves this very purpose: it helps altruists when they are feeling down, unproductive, sad, or anything at all, and we will hear you and embrace you even if you are not being particularly effective and altruistic when you get there. It is the legacy of our deceased friend Jonatas to all of us; because of him, we now have some understanding that people need love and companionship especially when they are down, or we may lose all of their future good moments. The monolithic-individual fallacy is a very pricey one to pay for. Let us not learn the hard way by losing another member.

Conclusions


I have argued here that the main problem indicated in Ben's writing, that effective altruists are pretending to really try, is not to be viewed in this light. Instead, I argued that the very survival of the Effective Altruist movement may rely on finding a welcoming space for something that Viliam_Bur has called the feelings-oriented community, without which many people would leave the movement, experiencing it as too demanding during their bad times, or when it strongly conflicted with a particular subset of themselves they consider important. Instead I advocate hierarchically separate communities within the movement, allowing those at any particular level of commitment to grow stronger and win.


The first three initial measures I suggest for this re-design of the community are:

1) Making all effective altruists aware that the EA self-help group exists for anyone who, for any reason, wants help from the community, even for non EA related affairs.

2) Creating a Closed Facebook group with only those who are advancing the discussion at its best, for instance those who wrote long posts in their own blogs about it, or obvious major figures.

3) Creating a Study Hall equivalent for EA’s to increase their feeling of belonging to a large tribe of goal-sharing people, where they can lurk even when they have nothing to say, and just do a few pomodoros.

 

This is my first long piece of writing on Effective Altruism, my first attempt at an apostasy, and my first explicit attempt to be meta-contrarian. I hope I have helped shed some light on the discussion, and that my critique can be taken by all, especially Ben, as oriented toward the same large-scale goal shared by effective altruists around the world. The outlook of effective altruism is still being designed every day by all of us, and I hope my critique can be used, along with Ben's and others', to build not only a movement that is stronger in its individuals' emotions, as I have advocated here, but furthermore a psychologically healthy and functional group, a whole that understands the role of its parts and subdivides accordingly.

Buying Debt as Effective Altruism?

10 aarongertler 13 November 2013 06:09AM

http://www.theguardian.com/world/2013/nov/12/occupy-wall-street-activists-15m-personal-debt

A collection of Occupy activists recently bought over $14,000,000 in personal debt for $400,000.

Normally, debt-buying companies do this with the intention of collecting the money from the debtors--Occupy did not, and I was struck by the lopsidedness of the figures.

A number I see often in the high-impact philanthropy world is $2300 to save a life (with plenty of caveats). At Occupy's rates, that would buy roughly $80,000 in debt--enough to get two or three families out of a hole that would otherwise render them bankrupt.
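The arithmetic behind that $80,000 figure is worth making explicit. A minimal sketch, using the post's own round numbers (actual debt prices vary widely by debt type):

```python
# Back-of-the-envelope leverage calculation, using the post's figures.
debt_bought = 14_000_000    # face value of the debt Occupy bought ($)
price_paid = 400_000        # what they actually paid for it ($)
cost_per_life = 2300        # oft-quoted cost to save a life ($)

leverage = debt_bought / price_paid        # dollars of face value per dollar spent
debt_forgiven = cost_per_life * leverage   # debt wiped out for one "life's worth"

print(f"{leverage:.0f}x leverage")                   # 35x leverage
print(f"${debt_forgiven:,.0f} forgiven per $2,300")  # $80,500 forgiven per $2,300
```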

By itself, this isn't enough to be better than mosquito nets or deworming. But the thing about personal debt is that, thanks to interest payments and stress, it prevents people with high earning potential (compared to an average African) from making decisions that would be optimal were they debt-free--like finishing college, or buying a used car so they can take a higher-paying job.

My idea, though it's a tentative, spur-of-the-moment thing:

Why not found a charity that acts like a combination of Vittana and Giving What We Can, freeing people with good prospects from debt in exchange for their signing a contract to donate a small portion of their future salary to charity?


A few issues that come to mind:

1) Occupy bought a lot of medical debt, which this company wouldn't, and other types of debt might be harder to buy.

2) People who have decent earning potential have more valuable debt, since they're more likely to pay it off later. (On the other hand, freeing them of interest payments might help them get into a better position for repayment.)

3) The idea is a lot like micro-lending, and organizations that offer that service don't have a great track record (though some have been successful).

4) People just freed from debt might not be in a position to donate much salary/might be unreliable. (Deferred payments until college is finished/the new job is had could be helpful here.)

5) There might be (well, almost certainly are) difficult legal issues with finding information on people in debt before you actually own their debt.

Are there any other obstacles you all can think of? Other features of the charity that might make it more effective? How does it sound as an intervention that increases the world's productivity in the long run, stacked up against other such interventions?

Democracy and rationality

8 homunq 30 October 2013 12:07PM

Note: This is a draft; so far, about the first half is complete. I'm posting it to Discussion for now; when it's finished, I'll move it to Main. In the mean time, I'd appreciate comments, including suggestions on style and/or format. In particular, if you think I should(n't) try to post this as a sequence of separate sections, let me know.

Summary: You want to find the truth? You want to win? You're gonna have to learn the right way to vote. Plurality voting sucks; better voting systems are built from the blocks of approval, medians (Bucklin cutoffs), delegation, and pairwise opposition. I'm working to promote these systems and I want your help.

Contents: 1. Overblown¹ rhetorical setup ... 2. Condorcet's ideals and Arrow's problem ... 3. Further issues for politics ... 4. Rating versus ranking; a solution? ... 5. Delegation and SODA ... 6. Criteria and pathologies ... 7. Representation, Proportional representation, and Sortition ... 8. What I'm doing about it and what you can ... 9. Conclusions and future directions ... 10. Appendix: voting systems table ... 11. Footnotes

1.

This is a website focused on becoming more rational. But that can't just mean getting a black belt in individual epistemic rationality. In a situation where you're not the one making the decision, that black belt is just a recipe for frustration.

Of course, there's also plenty of content here about how to interact rationally; how to argue for truth, including both hacking yourself to give in when you're wrong and hacking others to give in when they are. You can learn plenty here about Aumann's Agreement Theorem on how two rational Bayesians should never knowingly disagree.

But "two rational Bayesians" isn't a whole lot better as a model for society than "one rational Bayesian". Aspiring to be rational is well and good, but the Socratic ideal of a world tied together by two-person dialogue alone is as unrealistic as the sociopath's ideal of a world where their own voice rules alone. Society needs structures for more than two people to interact. And just as we need techniques for checking irrationality in one- and two-person contexts, we need them, perhaps all the more, in multi-person contexts.

Most of the basic individual and dialogical rationality techniques carry over. Things like noticing when you are confused, or making your opponent's arguments into a steel man, are still perfectly applicable. But there's also a new set of issues when n>2: the issues of democracy and voting. For a group of aspiring rationalists to come to a working consensus, of course they need to begin by evaluating and discussing the evidence, but eventually it will be time to cut off the discussion and just vote. When they do so, they should understand the strengths and pitfalls of voting in general and of their chosen voting method in particular.

And voting's not just useful for an aspiring rationalist community. As it happens, it's an important part of how governments are run. Discussing politics may be a mind-killer in many contexts, but there are an awful lot of domains where politics is a part of the road to winning.² Understanding voting processes a little bit can help you navigate that road; understanding them deeply opens the possibility of improving that road and thus winning more often.

2. Collective rationality: Condorcet's ideals and Arrow's problem

Imagine it's 1785, and you're a member of the French Academy of Sciences. You're rubbing elbows with most of the giants of science and mathematics of your day: Coulomb, Fourier, Lalande, Lagrange, Laplace, Lavoisier, Monge; even the odd foreign notable like Franklin with his ideas to unify electrostatics and electric flow.

(They'll remember your names; one day, they'll put your names in front of lots of cameras, even though that foreign yokel Franklin will be in more pictures.)

And this academy, with many of the smartest people in the world, votes on things: who will be our next president, who should edit and schedule our publications, and so on. You're sure that if you all could just find the right way to do the voting, you'd get the right answer. In fact, you can easily prove that, or something like it: if a group is deciding between one right and one wrong option, and each member is independently more than 50% likely to get it right, then as the group size grows the chance of a majority vote choosing the right option goes to 1.

But somehow, there's still annoying politics getting in the way. Some people seem to win the elections simply because everyone expects them to win. So last year, the academy adopted a new election system, proposed by your rival, Charles de Borda, in which candidates get different points for being a voter's first, second, or third choice, and the one with the most points wins. But you're convinced that this new system will lead to the opposite problem: people who win the election precisely because nobody expected them to win, by collecting the points that voters strategically don't want to give to a strong rival. But when people point that possibility out to Borda, he only huffs that "my system is meant for honest men!"
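To make Borda's rule concrete, here is a minimal sketch (the scoring, with n candidates giving n-1 points for a voter's first choice down to 0 for the last, is the standard scheme; the candidate names and ballots are invented). It also shows the kind of behavior Condorcet objected to: Borda can pass over a candidate whom a majority of voters ranks first.

```python
# A minimal Borda count: with n candidates, a voter's k-th choice
# gets n-k points. Ballots are invented for illustration.
from collections import defaultdict

def borda(ballots):
    """ballots: list of rankings, best first. Returns total points per candidate."""
    scores = defaultdict(int)
    for ranking in ballots:
        n = len(ranking)
        for place, candidate in enumerate(ranking):
            scores[candidate] += n - 1 - place
    return dict(scores)

# A is the first choice of 3 voters out of 5, yet B wins on points.
ballots = [["A", "B", "C"]] * 3 + [["B", "C", "A"]] * 2
print(borda(ballots))  # {'A': 6, 'B': 7, 'C': 2}
```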

So, armed with your proof of the above intuitive, useful result about two-way elections, you try to figure out how to reduce an n-way election to the two-candidate case. Clearly, you can show that Borda's system will frequently give the wrong results from that perspective. But frustratingly, you find that there can sometimes be no right answer: there may be no candidate who would beat all the others in one-on-one races. A crack has opened up; could it be that the collective decisions of individually rational agents can be irrational?

Of course, the "you" in this story is the Marquis de Condorcet, and the year 1785 is when he published his Essai sur l’application de l’analyse à la probabilité des décisions rendues à la pluralité des voix, a work devoted to the question of how to acheive collective rationality. The theorem referenced above is Condorcet's Jury Theorem, which seems to offer hope that democracy can point the way from individually-imperfect rationality towards an ever-more-perfect collective rationality. Just as Aumann's Agreement Theorem shows that two rational agents should always move towards consensus, the Condorcet Jury Theorem apparently shows that if you have enough rational agents, the resulting consensus will be correct.

But as I said, Condorcet also opened a crack in that hope: the possibility that collective preferences will be cyclical. If the assumptions of the jury theorem don't hold — if each voter doesn't have a >50% chance of being right on a randomly-selected question, OR if the correctness of two randomly-selected voters is correlated rather than independent — then individually-sensible choices can lead to collectively-ridiculous ones.

What do I mean by "collectively-ridiculous"? Let's imagine that the Rationalist Marching Band is choosing the colors for their summer, winter, and spring uniforms, and that they all agree that the only goal is to have as much as possible of the best possible colors. The summer-style uniforms come in red or blue, and they vote and pick blue; the winter-style ones come in blue or green, and they pick green; and the spring ones come in green or red, and they pick red.

Obviously, this makes us doubt their collective rationality. If, as they all agree they should, they had a consistent favorite color, they would have chosen that color both times it was available, rather than choosing three different colors in the three cases. Theoretically, a salesperson could use this fact to pump money out of them: for instance, offering to let them "trade up" their spring uniform from red to blue, then to green, then back to red, charging a small fee each time. If they voted consistently as above, they would agree to each trade (though of course in reality human voters would probably catch on to the trick pretty soon, so the abstract ideal of an unending circular money pump wouldn't work).
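The marching band's predicament is easy to reproduce. In this sketch (voter preferences invented to match the story), three voters with rotated rankings yield pairwise majority verdicts that chase each other in a circle:

```python
# Three voters whose color rankings are rotations of each other.
ballots = [["blue", "red", "green"],
           ["red", "green", "blue"],
           ["green", "blue", "red"]]

def majority_prefers(a, b):
    """True if a majority of ballots rank color a above color b."""
    wins = sum(r.index(a) < r.index(b) for r in ballots)
    return wins > len(ballots) / 2

print(majority_prefers("blue", "red"))    # True: blue beats red
print(majority_prefers("green", "blue"))  # True: green beats blue
print(majority_prefers("red", "green"))   # True: red beats green -- a cycle
```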

This is the kind of irrationality that Condorcet showed was possible in collective decisionmaking. He also realized that there was a related issue with logical inconsistencies. If you were to take a vote on 3 logically related propositions — say, "Should we have a Minister of Silly Walks, to be appointed by the Chancellor of the Excalibur", "Should we have a Minister of Silly Walks, but not appointed by the Chancellor of the Excalibur", and "Should we in fact have a Minister of Silly Walks at all", where the third cannot be true unless one of the first two is — then you could easily get majority votes for inconsistent results — in this case, no, no, and yes, respectively. Obviously, there are many ways to fix the problem in this simple case — probably many less-wrong'ers would suggest some Bayesian tricks related to logical networks and treating votes as evidence⁸ — but it's a tough problem in general even today, especially when the logical relationships can be complex, and Condorcet was quite right to be worried about its implications for collective rationality.³
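The Silly Walks vote can be checked mechanically. In this sketch (with three invented voters), each proposition passes or fails by a sensible majority, yet the combined outcome (no, no, and yes) violates the logical constraint:

```python
# P1: a minister, appointed by the Chancellor.
# P2: a minister, NOT appointed by the Chancellor.
# P3: a minister at all (logically requires P1 or P2).
voters = [
    {"P1": True,  "P2": False, "P3": True},   # wants a Chancellor appointee
    {"P1": False, "P2": True,  "P3": True},   # wants an independent minister
    {"P1": False, "P2": False, "P3": False},  # wants no minister at all
]

def majority(prop):
    return sum(v[prop] for v in voters) > len(voters) / 2

outcome = {p: majority(p) for p in ("P1", "P2", "P3")}
print(outcome)  # {'P1': False, 'P2': False, 'P3': True}

# Each individual voter is logically consistent, but the majority is not:
consistent = (not outcome["P3"]) or outcome["P1"] or outcome["P2"]
print(consistent)  # False
```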

And that's not the only tough problem he correctly foresaw. Nearly 200 years later and an ocean away, in 1951, Kenneth Arrow showed that it was impossible for a preferential voting system to avoid the problems posed by "Condorcet cycles" of preferences. Arrow's theorem shows that any voting system which consistently gives the same winner (or, in ties, winners) for the same voter preferences; which does not make one voter an effective dictator; which is sure to elect a candidate whom all voters prefer; and which will switch the results for two candidates if you switch their names on all the votes... must exhibit, in at least some situation, the pathology that befell the Rationalist Marching Band above; in other words, it must fail "independence of irrelevant alternatives".

Arrow's theorem is far from obvious a priori, but its proof is not hard to understand intuitively using Condorcet's insight. Say that there are three candidates, X, Y, and Z, with roughly equal bases of support, and that they form a Condorcet cycle: in two-way races, X would beat Y with help from Z supporters, Y would beat Z with help from X supporters, and Z would beat X with help from Y supporters. Then whichever candidate wins the three-way race — say, X — you can remove the candidate who would have lost to them head-to-head — Y in this case — and that "irrelevant" change will hand the win to the third candidate — Z in this case.
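The proof sketch can be checked numerically. The ballot counts below are hypothetical, chosen to produce the cycle described above, and `pairwise_margin` is an illustrative helper, not a standard function:

```python
# Three factions whose rankings of X, Y, Z form a Condorcet cycle.
ballots = {("X", "Y", "Z"): 35, ("Y", "Z", "X"): 33, ("Z", "X", "Y"): 32}

def pairwise_margin(a, b):
    """Votes ranking a above b, minus votes ranking b above a."""
    return sum(count if order.index(a) < order.index(b) else -count
               for order, count in ballots.items())

# Each candidate beats one rival and loses to the other: a cycle.
assert pairwise_margin("X", "Y") > 0   # X beats Y, 67 to 33
assert pairwise_margin("Y", "Z") > 0   # Y beats Z, 68 to 32
assert pairwise_margin("Z", "X") > 0   # Z beats X, 65 to 35

# In the three-way plurality race, X wins with 35 first preferences.
# Remove Y, who would have lost head-to-head to X: faction 2's votes
# transfer to Z, who now beats X 65 to 35. The "irrelevant" removal
# flips the winner.
assert 33 + 32 > 35
```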

Summary of above: Collective rationality is harder than individual or two-way rationality. Condorcet saw the problem and tried to solve it, but Arrow saw that Condorcet had been doomed to fail.

3. Further issues for politics

So Condorcet's ideals of better rationality through voting appear to be in ruins. But at least we can hope that voting is a good way to do politics, right?

Not so fast. Arrow's theorem quickly led to further disturbing results. Alan Gibbard (and, independently, Mark Satterthwaite) extended it to show that no voting system avoids encouraging strategic voting. That is, if you view a voting system as a class of games where the finite set of players and the finite set of available strategies are fixed, no player is effectively a dictator, and the only things that vary are the payoffs each player gets from each outcome, then there is no voting system where you can derive your best strategic vote purely by looking "honestly" at your own preferences; there is always the possibility of situations where you have to second-guess what others will do.

Amartya Sen piled on with another depressing extension of Arrow's logic. He showed that there is no possible way of aggregating individual choices into collective choice that satisfies two simple criteria. First, it shouldn't choose pareto-dominated outcomes: if everyone prefers situation XYZ to ABC, then it shouldn't pick ABC. Second, it should be "minimally liberal": there are at least two people who each get to freely make their own decision on at least one specific issue, no matter what. So, for instance, I always get to decide between X and A (in Gibbard's⁴ example, colors for my house), and you always get to decide between Y and B (colors for your own house). The problem is that if you nosily care more about my house's color — the decision that should have been mine — than about your own, and I nosily care about yours more than mine, then the pareto-dominant outcome is the one where we don't each decide our own houses; and that nosiness could, in theory, arise for any specific choice that, a priori, someone might have labelled as our Inalienable Right. It's not such a surprising result when you think about it that way, but it does clearly show that unswerving ideals of Democracy and Liberty will never truly be compatible.

Meanwhile, "public choice" theorists⁵ like Duncan Black, James Buchanan, etc. were busy undermining the idea of democratic government from another direction: the motivations of the politicians and bureaucrats who are supposed to keep it running. They showed that various incentives, including the strange voting scenarios explored by Condorcet and Arrow, would tend to open a gap between the motives of the people and those of the government, and that strategic voting and agenda-setting within a legislature would tend to widen the impact of that gap. Where Gibbard and Sen had proved general results, these theorists worked from specific examples. And in one respect, at least, their analysis is devastatingly unanswerable: the near-ubiquitous "democratic" system of plurality voting, also known as first-past-the-post or vote-for-one or biggest-minority-wins, is terrible in both theory and practice.

So, by the 1980s, things looked pretty depressing for the theory of democracy. Politics, the theory went, was doomed forever to be worse than a sausage factory: disgusting on the inside and distasteful even from outside.

Should an ethical rationalist just give up on politics, then? Of course not. As long as the results it produces are important, it's worth trying to optimize. And as soon as you take the engineer's attitude of optimizing, instead of dogmatically searching for perfection or uselessly whining about the problems, the results above don't seem nearly as bad.

From this engineer's perspective, public choice theory serves as an unsurprising warning that tradeoffs are necessary, but more usefully, as a map of where those tradeoffs can go particularly wrong. In particular, its clearest lesson, in all-caps bold with a blink tag — PLURALITY IS BAD — can be seen as a hopeful suggestion that other voting systems may be better. Meanwhile, the logic of both Sen's and Gibbard's theorems is built on Arrow's earlier result. So if we could find a way around Arrow, it might help resolve the whole issue.

Summary of above: Democracy is the worst political system... (...except for all the others?) But perhaps it doesn't have to be quite so bad as it is today.

4. Rating versus ranking

So finding a way around Arrow's theorem could be key to this whole matter. As a mathematical theorem, of course, the logic is bulletproof. But it does make one crucial assumption: that the only inputs to a voting system are rankings, that is, voters' ordinal preference orders for the candidates. No distinctions can be made using ratings or grades; that is, as long as you prefer X to Y to Z, the strength of those preferences can't matter. Whether you put Y almost up near X or way down next to Z, the result must be the same.

Relax that assumption, and it's easy to create a voting system which meets Arrow's criteria. It's called Score voting⁶, and it just means rating each candidate with a number from some fixed interval (abstractly speaking, a real number; in practice, usually an integer); the scores are added up, and the highest total or average wins. (Unless there are missing values, of course, total and average pick the same winner.) You've probably used it yourself on Yelp, IMDB, or similar sites. And it clearly passes all of Arrow's criteria. Non-dictatorship? Check. Unanimity? Check. Symmetry over switching candidate names? Check. Independence of irrelevant alternatives? In the mathematical sense — that is, as long as the scores for other candidates are unchanged — check.
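A minimal sketch of a score-voting count. The ballots and the `score_winner` helper are hypothetical illustrations: each voter rates every candidate on a fixed 0-5 scale, and the highest average rating wins.

```python
# Hypothetical score ballots: each voter rates each candidate 0-5.
ballots = [
    {"X": 5, "Y": 3, "Z": 0},
    {"X": 0, "Y": 4, "Z": 5},
    {"X": 2, "Y": 5, "Z": 1},
]

def score_winner(ballots):
    """Return (winner, averages) under score voting."""
    candidates = ballots[0].keys()
    averages = {c: sum(b[c] for b in ballots) / len(ballots)
                for c in candidates}
    return max(averages, key=averages.get), averages

winner, averages = score_winner(ballots)
assert winner == "Y"   # Y averages 4.0, beating X (~2.33) and Z (2.0)
```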

So score voting is an ideal system? Well, it's certainly a far sight better than plurality. But let's check it against Sen and against Gibbard.

Sen's theorem was based on a logic similar to Arrow's. However, while Arrow's theorem deals with broad outcomes like which candidate wins, Sen's deals with finely-grained outcomes like (in the example we discussed) how each separate house should be painted. Extending the cardinal numerical logic of score voting to such finely-grained outcomes, we find we've simply reinvented markets. While markets can be great things and often work well in practice, Sen's result still holds in this case: if everything is on the market, then there is no decision which is always yours to make. But since, in practice, as long as you aren't destitute, you tend to be able to make the decisions you care the most about, Sen's theorem seems to have lost its bite in this context.

What about Gibbard's theorem on strategy? Here, things are not so easy. Yes, Gibbard, like Sen, parallels Arrow. But while Arrow deals with what's written on the ballot, Gibbard deals with what's in the voter's head. In particular, if a voter prefers X to Y by even the tiniest margin, Gibbard assumes (not unreasonably) that they may be willing to vote however they need to, if by doing so they can ensure X wins instead of Y. Thus, the internal preferences Gibbard treats are, effectively, just ordinal rankings, and the cardinal trick by which score voting avoided Arrovian problems no longer works.

How does score voting deal with strategic issues in practice? The answer to that has two sides. On the one hand, score never requires voters to be actually dishonest. Unlike the situation in a ranked system such as plurality, where we all know that the strategic vote may be to dishonestly ignore your true favorite and vote for a "lesser evil" among the two frontrunners, in score voting you never need to vote a less-preferred option above a more-preferred option. At worst, all you have to do is exaggerate some distinctions and minimize others, so that you might end up giving equal votes to less- and more-preferred options.

Did I say "at worst"? I meant, "almost always". Voting strategy only matters to the result when, aside from your vote, two or more candidates are within one vote of being tied for first. Except in unrealistic, perfectly-balanced conditions, as the number of voters rises, the probability that anyone but the two a priori frontrunners is involved in such a tie falls to zero.⁷ Thus, in score voting, the optimal strategy is nearly always to give your preferred frontrunner, and every candidate you like better, the maximum score, and your less-preferred frontrunner, and every candidate you like less, the minimum. In other words, strategic score voting is basically equivalent to approval voting, where you give each candidate a 1 or 0 and the highest total wins.
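The strategic reduction can be sketched as follows. The utilities and the `strategic_ballot` helper are hypothetical; the simple threshold rule here max-rates the preferred frontrunner and everyone liked at least as much, and min-rates everyone else (candidates strictly between the frontrunners would in reality call for finer expected-value reasoning).

```python
# Sketch of how strategic score voting collapses to approval voting.
def strategic_ballot(utilities, preferred_frontrunner, max_score=5):
    """Max-rate everyone at least as liked as the preferred
    frontrunner; min-rate everyone else (a simplifying assumption)."""
    threshold = utilities[preferred_frontrunner]
    return {c: (max_score if u >= threshold else 0)
            for c, u in utilities.items()}

honest = {"X": 0.9, "Y": 0.6, "Z": 0.1}  # hypothetical utilities
# Suppose Y and Z are the expected frontrunners and this voter prefers
# Y. Every rating lands at an extreme: effectively an approval ballot.
assert strategic_ballot(honest, "Y") == {"X": 5, "Y": 5, "Z": 0}
```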

In one sense, score voting reducing to approval is OK. Approval voting is not a bad system at all. For instance, if there's a known majority Condorcet winner — a candidate who could beat any other by a majority in a one-on-one race — and voters are strategic — that is, they anticipate the unique strong Nash equilibrium (the situation where no group of voters could improve the outcome for all its members by changing their votes), whenever such a unique equilibrium exists — then the Condorcet winner will win under approval. That's a lot of words to say that approval will get the "democratic" results you'd expect in most cases.

But in another sense, it's a problem. If one side of an issue is more inclined to be strategic than the other side, the more-strategic faction could win even if it's a minority. That clashes with many people's ideals of democracy; and worse, it encourages mind-killing political attitudes, where arguments are used as soldiers rather than as ways to seek the truth.

But score and approval voting are not the only systems which escape Arrow's theorem through the trapdoor of ratings. If score voting, using the average of voter ratings, too-strongly encourages voters to strategically seek extreme ratings, then why not use the median rating instead? We know that medians are less sensitive to outliers than averages. And indeed, median-based systems are more resistant to one-sided strategy than average-based ones, giving better hope for reasonable discussion to prosper. That is to say, in a simple model, a minority would need twice as much strategic coordination under median as under average, in order to overcome a majority; and there's good reason to believe that, because of natural factional separation, reality is even more favorable to median systems than that model.
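The robustness claim can be illustrated with a toy example; the ratings below are made up, on a hypothetical 0-10 scale.

```python
# Why medians resist one-sided strategic exaggeration better than
# averages: a 5-voter majority rates the candidate 6, while a 4-voter
# minority strategically drags its honest 5s down to 0.
from statistics import mean, median

honest    = [6, 6, 6, 6, 6, 5, 5, 5, 5]
strategic = [6, 6, 6, 6, 6, 0, 0, 0, 0]

# The average drops by more than 2 points; the median doesn't budge.
assert mean(honest) - mean(strategic) > 2
assert median(honest) == median(strategic) == 6
```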

There are several different median systems available. In the US during the 1910-1925 Progressive Era, early versions collectively called "Bucklin voting" were used briefly in over a dozen cities. These reforms, based on counting all top preferences, then adding lower preferences one level at a time until some candidate(s) reach a majority, were all rolled back soon after, principally by party machines upset at upstart challenges or victories. The possibility of multiple, simultaneous majorities is a principal reason for the variety of Bucklin/Median systems. Modern proposals of median systems include Majority Approval Voting, Majority Judgment, and Graduated Majority Judgment, which would probably give the same winners almost all of the time. An important detail is that most median system ballots use verbal or letter grades rather than numeric scores. This is justifiable because the median is preserved under any monotonic transformation, and studies suggest that it would help discourage strategic voting.
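The Bucklin-style counting procedure described above can be sketched in a few lines. The ballots are hypothetical, and the tie-break rule (largest tally among those crossing a majority) is one common choice, not the only one; the example also shows how several candidates can reach a majority at the same level.

```python
# Sketch of Bucklin-style counting: add preference levels one at a
# time until some candidate(s) reach a majority.
def bucklin_winners(ballots):
    n_voters = len(ballots)
    n_ranks = max(len(b) for b in ballots)
    tallies = {}
    for level in range(n_ranks):
        for b in ballots:
            if level < len(b):
                tallies[b[level]] = tallies.get(b[level], 0) + 1
        crossed = [c for c, t in tallies.items() if t > n_voters / 2]
        if crossed:
            # Several candidates may cross a majority simultaneously;
            # one common rule picks the largest tally among them.
            top = max(tallies[c] for c in crossed)
            return [c for c in crossed if tallies[c] == top]
    return [max(tallies, key=tallies.get)]

ballots = [("A", "B"), ("A", "C"), ("B", "A"), ("C", "A"), ("C", "B")]
# At the second level, A (4), B (3) and C (3) all cross the majority
# threshold of 3 at once; the largest-tally rule picks A.
assert bucklin_winners(ballots) == ["A"]
```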

Serious attention to rated systems like approval, score, and median systems barely began in the 1980s, and didn't really pick up until 2000. Meanwhile, the increased amateur interest in voting systems in this period — perhaps partially attributable to the anomalous 2000 US presidential election, or to more recent anomalies in the UK, Canada, and Australia — has led to new discoveries in ranked systems as well. Though such systems are still clearly subject to Arrow's theorem, new "improved Condorcet" methods — which use certain tricks to count a voter's equal preferences between two candidates on either side of the ledger, depending on strategic needs — seem to offer promise that Arrovian pathologies can be kept to a minimum.

With this embarrassment of riches of systems to choose from, how should we evaluate which is best? Well, at least one thing is a clear consensus: plurality is a horrible system. Beyond that, things are more controversial; there are dozens of possible objective criteria one could formulate, and any system's inventor and/or supporters can usually formulate some criterion by which it shines.

Ideally, we'd like to measure the utility of each voting system in the real world. Since that's impossible — it would take not just a statistically-significant sample of large-scale real-world elections for each system, but also some way to measure the true internal utility of a result in situations where voters are inevitably strategically motivated to lie about that utility — we must do the next best thing, and measure it in a computer, with simulated voters whose utilities are assigned measurable values. Unfortunately, that requires assumptions about how those utilities are distributed, how voter turnout is decided, and how and whether voters strategize. At best, those assumptions can be varied, to see if findings are robust.

In 2000, Warren Smith performed such simulations for a number of voting systems. He found that score voting had, very robustly, one of the top expected social utilities (or, as he termed it, lowest Bayesian regret). Close on its heels were a median system and approval voting. Unfortunately, though he explored a wide parameter space in terms of voter utility models and inherent strategic inclination of the voters, his simulations did not include voters who were more inclined to be strategic when strategy was more effective. His strategic assumptions were also unfavorable to ranked systems, and slightly unrealistic in other ways. Still, though certain of his numbers must be taken with a grain of salt, some of his results were large and robust enough to be trusted. For instance, he found that plurality voting and instant runoff voting were clearly inferior to rated systems, and that approval voting, even at its worst, offered over half the benefit (relative to plurality) of any other system.
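The shape of this kind of study can be conveyed with a bare-bones sketch. Everything below is a simplifying assumption, not Smith's actual setup: iid uniform utilities, fully honest voters, and regret measured as the utility of the best candidate minus the utility of the actual winner, averaged over trials.

```python
# Bare-bones Bayesian-regret simulation (illustrative assumptions only).
import random

def regret(tally_rule, n_voters=99, n_cands=4, trials=500, seed=1):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        # utils[v][c]: voter v's utility for candidate c, iid uniform
        utils = [[rng.random() for _ in range(n_cands)]
                 for _ in range(n_voters)]
        social = [sum(u[c] for u in utils) for c in range(n_cands)]
        total += max(social) - social[tally_rule(utils)]
    return total / trials

def plurality(utils):
    votes = [0] * len(utils[0])
    for u in utils:
        votes[u.index(max(u))] += 1  # one vote for each voter's favorite
    return votes.index(max(votes))

def score(utils):
    # honest score voting elects the max-summed-utility candidate
    social = [sum(u[c] for u in utils) for c in range(len(utils[0]))]
    return social.index(max(social))

r_score, r_plurality = regret(score), regret(plurality)
assert r_score == 0.0     # zero regret by construction with honest voters
assert r_plurality > 0.0  # plurality leaves some social utility unrealized
```

Note that honest score voting has zero regret here by construction; the interesting comparisons in Smith's work come from adding strategic voters and richer utility models, which this sketch omits.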

Summary of above: Rated systems, such as approval voting, score voting, and Majority Approval Voting, can avoid the problems of Arrow's theorem. Though they are certainly not immune to issues of strategic voting, they are a clear step up from plurality. Starting with this section, the opinions are my own; the two prior sections were based on general expert views on the topic.

5. Delegation and SODA

Rated systems are not the only way to try to beat the problems of Arrow and Gibbard (/Satterthwaite).

Summary of above:

6. Criteria and pathologies

do.

Summary of above:

7. Representation, proportionality, and sortition

do.

Summary of above:

8. What I'm doing about it and what you can

do.

Summary of above:

9. Conclusions and future directions

do.

Summary of above:

10. Appendix: voting systems table

Compliance of selected systems (table)

The following table shows which of the above criteria are met by several single-winner systems. Note: contains some errors; I'll carefully vet this when I'm finished with the writing. Still generally reliable though.

| System | Majority / MMC | Condorcet / Majority Condorcet | Condorcet loser | Monotone | Consistency / Participation | Reversal symmetry | IIA | Cloneproof | Polytime / Resolvable | Summable | Equal rankings allowed | Later prefs allowed | Later-no-harm / Later-no-help | FBC: No favorite betrayal |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Approval[nb 1] | Ambiguous | No / Strategic yes[nb 2] | No | Yes | Yes[nb 2] | Yes | Ambiguous | Ambig.[nb 3] | Yes | O(N) | Yes | No | [nb 4] | Yes |
| Borda count | No | No | Yes | Yes | Yes | Yes | No | No (teaming) | Yes | O(N) | No | Yes | No | No |
| Copeland | Yes | Yes | Yes | Yes | No | Yes | No (but ISDA) | No (crowding) | Yes/No | O(N²) | Yes | Yes | No | No |
| IRV (AV) | Yes | No | Yes | No | No | No | No | Yes | Yes | O(N!)[nb 5] | No | Yes | Yes | No |
| Kemeny-Young | Yes | Yes | Yes | Yes | No | Yes | No (but ISDA) | No (teaming) | No/Yes | O(N²)[nb 6] | Yes | Yes | No | No |
| Majority Judgment[nb 7] | Yes[nb 8] | No / Strategic yes[nb 2] | No[nb 9] | Yes | No[nb 10] | No[nb 11] | Yes | Yes | Yes | O(N)[nb 12] | Yes | Yes | No[nb 13]/Yes | Yes |
| Minimax | Yes/No | Yes[nb 14] | No | Yes | No | No | No | No (spoilers) | Yes | O(N²) | Some variants | Yes | No[nb 14] | No |
| Plurality | Yes/No | No | No | Yes | Yes | No | No | No (spoilers) | Yes | O(N) | No | No | [nb 4] | No |
| Range voting[nb 1] | No | No / Strategic yes[nb 2] | No | Yes | Yes[nb 2] | Yes | Yes[nb 15] | Ambig.[nb 3] | Yes | O(N) | Yes | Yes | No | Yes |
| Ranked pairs | Yes | Yes | Yes | Yes | No | Yes | No (but ISDA) | Yes | Yes | O(N²) | Yes | Yes | No | No |
| Runoff voting | Yes/No | No | Yes | No | No | No | No | No (spoilers) | Yes | O(N)[nb 16] | No | No[nb 17] | Yes[nb 18] | No |
| Schulze | Yes | Yes | Yes | Yes | No | Yes | No (but ISDA) | Yes | Yes | O(N²) | Yes | Yes | No | No |
| SODA voting[nb 19] | Yes | Strategic yes/yes | Yes | Ambiguous[nb 20] | Yes / Up to 4 cand.[nb 21] | Yes[nb 22] | Up to 4 cand.[nb 21] | Up to 4 cand. (then crowds)[nb 21] | Yes[nb 23] | O(N) | Yes | Limited[nb 24] | Yes | Yes |
| Random winner / arbitrary winner[nb 25] | No | No | No | NA | No | Yes | Yes | NA | Yes/No | O(1) | No | No | | Yes |
| Random ballot[nb 26] | No | No | No | Yes | Yes | Yes | Yes | Yes | Yes/No | O(N) | No | No | | Yes |

"Yes/No", in a column which covers two related criteria, signifies that the given system passes the first criterion and not the second one.

  1. These criteria assume that all voters vote their true preference order. This is problematic for Approval and Range, where various votes are consistent with the same order. See approval voting for compliance under various voter models.
  2. In Approval, Range, and Majority Judgment, if all voters have perfect information about each other's true preferences and use rational strategy, any Majority Condorcet or Majority winner will be strategically forced – that is, win in the unique strong Nash equilibrium. In particular, if every voter knows that "A or B are the two most likely to win" and places their "approval threshold" between the two, then the Condorcet winner, if one exists and is in the set {A,B}, will always win. These systems also satisfy the majority criterion in the weaker sense that any majority can force their candidate to win, if it so desires. (However, as the Condorcet criterion is incompatible with the participation criterion and the consistency criterion, these systems cannot satisfy those criteria in this Nash-equilibrium sense. Laslier, J.-F. (2006) "Strategic approval voting in a large electorate," IDEP Working Papers No. 405 (Marseille, France: Institut d'Economie Publique).)
  3. The original independence of clones criterion applied only to ranked voting methods. (T. Nicolaus Tideman, "Independence of clones as a criterion for voting rules", Social Choice and Welfare Vol. 4, No. 3 (1987), pp. 185–206.) There is some disagreement about how to extend it to unranked methods, and this disagreement affects whether approval and range voting are considered independent of clones. If the definition of "clones" is that "every voter scores them within ±ε in the limit ε→0+", then range voting is immune to clones.
  4. Approval and Plurality do not allow later preferences. Technically speaking, this means that they pass the technical definition of the LNH criteria: if later preferences or ratings are impossible, then such preferences can neither help nor harm. However, from the perspective of a voter, these systems do not pass these criteria. Approval, in particular, encourages the voter to give the same ballot rating to a candidate who, in another voting system, would get a later rating or ranking. Thus, for approval, the practically meaningful criterion would be not "later-no-harm" but "same-no-harm": something neither approval nor any other system satisfies.
  5. The number of piles that can be summed from various precincts is floor((e-1) N!) - 1.
  6. Each prospective Kemeny-Young ordering has score equal to the sum of the pairwise entries that agree with it, and so the best ordering can be found using the pairwise matrix.
  7. Bucklin voting, with skipped and equal rankings allowed, meets the same criteria as Majority Judgment; in fact, Majority Judgment may be considered a form of Bucklin voting. Without equal rankings allowed, Bucklin's criteria compliance is worse; in particular, it fails independence of irrelevant alternatives, which for a ranked method like this variant is incompatible with the majority criterion.
  8. Majority Judgment passes the rated majority criterion (a candidate rated solo-top by a majority must win). It does not pass the ranked majority criterion, which is incompatible with independence of irrelevant alternatives.
  9. Majority Judgment passes the "majority Condorcet loser" criterion; that is, a candidate who loses to all others by a majority cannot win. However, if some of the losses are not by a majority (including equal rankings), the Condorcet loser can, theoretically, win in MJ, although such scenarios are rare.
  10. Balinski and Laraki, Majority Judgment's inventors, point out that it meets a weaker criterion they call "grade consistency": if two electorates give the same rating for a candidate, then so will the combined electorate. Majority Judgment explicitly requires that ratings be expressed in a "common language", that is, that each rating have an absolute meaning. They claim that this is what makes "grade consistency" significant. Balinski M. and R. Laraki (2007) "A theory of measuring, electing and ranking". Proceedings of the National Academy of Sciences USA, vol. 104, no. 21, 8720–8725.
  11. Majority Judgment can pass or fail reversal symmetry depending on the rounding method used to find the median when there is an even number of voters. For instance, in a two-candidate, two-voter race, if the ratings are converted to numbers and the two central ratings are averaged, then MJ meets reversal symmetry; but if the lower one is taken, it does not, because a candidate with ["fair","fair"] would beat a candidate with ["good","poor"] with or without reversal. However, for rounding methods which do not meet reversal symmetry, the chance of breaking it is on the order of the inverse of the number of voters; this is comparable with the probability of an exact tie in a two-candidate race, and when there's a tie, any method can break reversal symmetry.
  12. Majority Judgment is summable at order KN, where K, the number of rating categories, is set beforehand.
  13. Majority Judgment meets a related, weaker criterion: ranking an additional candidate below the median grade (rather than below your own grade) of your favorite candidate cannot harm your favorite.
  14. A variant of Minimax that counts only pairwise opposition, not opposition minus support, fails the Condorcet criterion and meets later-no-harm.
  15. Range satisfies the mathematical definition of IIA, that is, if each voter scores each candidate independently of which other candidates are in the race. However, since a given range score has no agreed-upon meaning, it is thought that most voters would either "normalize" or exaggerate their vote so that it rates at least one candidate at the top and at least one at the bottom of the possible ratings. In this case, Range would not be independent of irrelevant alternatives. Balinski M. and R. Laraki (2007), op. cit.
  16. Once for each round.
  17. Later preferences are only possible between the two candidates who make it to the second round.
  18. That is, second-round votes cannot harm candidates already eliminated.
  19. Unless otherwise noted, for SODA's compliances:
    • Delegated votes are considered to be equivalent to voting the candidate's predeclared preferences.
    • Only ballots are considered (in other words, voters are assumed not to have preferences that cannot be expressed by a delegated or approval vote).
    • Since at the time of assigning approvals on delegated votes there is always enough information to find an optimum strategy, candidates are assumed to use such a strategy.
  20. For up to 4 candidates, SODA is monotonic. For more than 4 candidates, it is monotonic for adding an approval, for changing from an approval to a delegation ballot, and for changes in a candidate's preferences. However, if changes in a voter's preferences are executed as changes from a delegation to an approval ballot, such changes are not necessarily monotonic with more than 4 candidates.
  21. For up to 4 candidates, SODA meets the Participation, IIA, and Cloneproof criteria. It can fail these criteria in certain rare cases with more than 4 candidates. This is counted here as a qualified success for the Consistency and Participation criteria, which do not intrinsically have to do with numerous candidates, and as a qualified failure for the IIA and Cloneproof criteria, which do.
  22. SODA voting passes reversal symmetry for all scenarios that are reversible under SODA, that is, if each delegated ballot has a unique last choice. In other situations, it is not clear what it would mean to reverse the ballots, but there is always some possible interpretation under which SODA would pass the criterion.
  23. SODA voting is always polytime computable. There are some cases where the optimal strategy for a candidate assigning delegated votes may not be polytime computable; however, such cases are entirely implausible for a real-world election.
  24. Later preferences are only possible through delegation, that is, if they agree with the predeclared preferences of the favorite.
  25. Random winner: a uniformly randomly chosen candidate wins. Arbitrary winner: some external entity, not a voter, chooses the winner. These systems are not, properly speaking, voting systems at all, but are included to show that even a horrible system can still pass some of the criteria.
  26. Random ballot: a uniformly randomly chosen ballot determines the winner. This and closely related systems are of mathematical interest because they are the only possible systems which are truly strategy-free, that is, your best vote never depends on anything about the other voters. They also satisfy both consistency and IIA, which is impossible for a deterministic ranked system. However, this system is not generally considered a serious proposal for a practical method.

11. Footnotes

¹ When I call my introduction "overblown", I mean that I reserve the right to make broad generalizations there, without getting distracted by caveats. If you don't like this style, feel free to skip to section 2.

 

² Of course, the original "politics is a mind killer" sequence was perfectly clear about this: "Politics is an important domain to which we should individually apply our rationality—but it's a terrible domain in which to learn rationality, or discuss rationality, unless all the discussants are already rational." The focus here is on the first part of that quote, because I think Less Wrong as a whole has moved too far in the direction of avoiding politics as not a domain for rationalists.

 

³ Bayes developed his theorem decades before Condorcet's Essai, but Condorcet probably didn't know of it, as it wasn't popularized by Laplace until about 30 years later, after Condorcet was dead.

 

⁴ Yes, this happens to be the same Alan Gibbard from the previous paragraph.

 

⁵ Confusingly, "public choice" refers to a school of thought, while "social choice" is the name for the broader domain of study. Stop reading this footnote now if you don't want to hear mind-killing partisan identification. "Public choice" theorists are generally seen as politically conservative in the solutions they suggest. It seems to me that the broader "social choice" has avoided taking on a partisan connotation in this sense.

 

⁶ Score voting is also called "range voting" by some. It is not a particularly new idea — for instance, the "loudest cheer wins" rule of ancient Sparta, and even aspects of honeybees' process for choosing new hives, can be seen as score voting — but it was first analyzed theoretically around 2000. Approval voting, which can be seen as a form of score voting where the scores are restricted to 0 and 1, had entered theory only about two decades earlier, though it too has a history of practical use back to antiquity.

 

⁷ OK, fine, this is a simplification. As a voter, you have imperfect information about the true level of support and propensity to vote in the superpopulation of eligible voters, so in reality the chances of a decisive tie between other than your two expected frontrunners is non-zero. Still, in most cases, it's utterly negligible.

 

⁸ This article will focus more on the literature on multi-player strategic voting (competing boundedly-instrumentally-rational agents) than on multi-player Aumann (cooperating boundedly-epistemically-rational agents). If you're interested in the latter, here are some starting points: Scott Aaronson's work is, as far as I know, the state of the art on 2-player Aumann, but its framework assumes that the players have a sophisticated ability to empathize and reason about each others' internal knowledge, and the problems with this that Aaronson plausibly handwaves away in the 2-player case are probably less tractable in the multi-player one. Dalkiran et al deal with an Aumann-like problem over a social network; they find that attempts to "jump ahead" to a final consensus value instead of simply dumbly approaching it asymptotically can lead to failure to converge. And Kanoria et al have perhaps the most interesting result from the perspective of this article; they use the convergence of agents using a naive voting-based algorithm to give a nice upper bound on the difficulty of full Bayesian reasoning itself. None of these papers explicitly considers the problem of coming to consensus on more than one logically-related question at once, though Aaronson's work at least would clearly be easy to extend in that direction, and I think such extensions would be unsurprisingly Bayesian.

Proxy Donating as Spam Filter

4 beth 19 September 2013 01:55AM

One thing that sometimes makes me hesitate to donate to a cause is that, unless you're donating in person and using cash, you're inevitably signing up for a gigantic stream of junk mail, not just from the organization you gave money to, but from other, often totally unrelated, charities as well. I haven't noticed many of these charities offering a privacy policy that lets you avoid this, but I haven't paid close attention because, frankly, I don't think I'd have a lot of confidence in such a privacy policy even if I saw one in some literature.

I wonder if there are donations to be gained in guaranteeing this sort of privacy by going through a third party. Charities could include the usual pre-addressed envelope in their mailings, only instead of their own address it would go to an organization called Givepal. The envelope would include the charity's id, and donors would be instructed to make their checks out to Givepal, who would then distribute the money to the specified charity, keeping the transaction anonymous. Givepal could survive by taking a cut of the donations if necessary, or could itself operate as a non-profit.

From Capuchins to AI's, Setting an Agenda for the Study of Cultural Cooperation (Part2)

-4 diegocaleiro 28 June 2013 10:20AM
Today's writings are shaded dark green, the rest was also in Part1.
This is a multi-purpose essay-in-the-making, written with the following goals: 1) Mandatory essay writing at the end of a semester studying "Cognitive Ethology: Culture in Human and Non-Human Animals" 2) Drafting something that can later be published in a journal that deals with cultural evolution, hopefully inclining people in the area to glance at future-oriented research, i.e. FAI and global coordination 3) Publishing it on Lesswrong and 4) Ultimately Saving the World, as everything should. If it's worth doing, it's worth doing in the way most likely to save the World.
Since many of my writings are frequently too long for Lesswrong, I'll publish this in a sequence-like form made of self-contained chunks. My deadline is Sunday, so I'll probably post daily, editing/creating the new sections based on previous commentary.


Abstract: The study of cultural evolution has drawn much of its momentum from academic areas far removed from human and animal psychology, especially regarding the evolution of cooperation. Game-theoretic results and parental investment theory come from economics, kin selection models from biology, and an ever-growing number of models describing the process of cultural evolution in general, and the evolution of altruism in particular, come from mathematics. Even Artificial Intelligence has taken an interest in how to create agents that can communicate, imitate and cooperate. In this article I begin to tackle the 'why?' question. By trying to retrospectively make sense of the convergence of all these fields, I contend that further refinements in these fields should be directed towards understanding how to create environmental incentives fostering cooperation.

We need systems that are wiser than we are. We need institutions and cultural norms that make us better than we tend to be. It seems to me that the greatest challenge we now face is to build them. - Sam Harris, 2013, The Power Of Bad Incentives

1) Introduction

2) Cultures evolve

Culture is perhaps the most remarkable outcome of the evolutionary algorithm (Dennett, 1996) so far. It is the cradle of most things we consider humane - that is, typically human and valuable - and it surrounds our lives to the point that we may be thought of as creatures made of culture even more than creatures of bone and flesh (Hofstadter, 2007; Dennett, 1992). The appearance of our cultural complexity has relied on many associated capacities, among them:

1) The ability to observe, be interested by, and approach an individual doing something interesting, an ability we share with Norway rats, crows, and even lemurs (Galef & Laland, 2005).

2) Ability to learn from and scrounge the food of whoever knows how to get food, shared by capuchin monkeys (Ottoni et al, 2005).

3) Ability to tolerate learners, to accept learners, and to socially learn, probably shared by animals as diverse as fish, finches and Finns (Galef & Laland, 2005).

4) Understanding and emulating other minds - Theory of Mind - empathizing, relating, perhaps re-framing an experience as one's own, shared by chimpanzees, dogs, and at least some cetaceans (Rendell & Whitehead, 2001).

5) Learning the program level description of the action of others, for which the evidence among other animals is controversial (but see Cantor & Whitehead, 2013). And finally...

6) Sharing intentions. Intricate understanding of how two minds can collaborate with complementary tasks to achieve a mutually agreed goal (Tomasello et al, 2005).

Irrespective of definitional disputes around the true meaning of the word "culture" (which doesn't exist, see e.g. Pinker, 2007 pg115; Yudkowsky 2008A), each of these is more cognitively complex than its predecessor, and even (1) is sufficient for intra-specific non-environmental, non-genetic behavioral variation, which I will call "culture" here, whoever it may harm.

By transitivity, (2-6) allow the development of culture. It is interesting to notice that tool use, frequently but falsely cited as the hallmark of culture, is scattered almost at random across the animal kingdom. A graph showing, per biological family, which species show tool use gives us a power-law distribution, whose similarity with the universal prior helps in understanding that being from a family in which one species uses tools tells us very little about another species' own tool use (Michael Haslam, personal conversation).

Once some of those abilities are available, and given an amount of environmental facilities, need, and randomness, cultures begin to form. Occasionally, so do more developed traditions. Be it by imitation, program level imitation, goal emulation or intention sharing, information is transmitted between agents giving rise to elements sufficient to constitute a primeval Darwinian soup. That is, entities form such that they exhibit 1) Variation 2) Heredity or replication 3) Differential fitness (Dennett, 1996). In light of the article Five Misunderstandings About Cultural Evolution (Henrich, Boyd & Richerson, 2008) we can improve Dennett's conditions for the evolutionary algorithm as 1) Discrete or continuous variation 2) Heredity, replication, or less faithful replication plus content attractors 3) Differential fitness. Once this set of conditions is met, an evolutionary algorithm, or many, begin to carve their optimizing paws into whatever surpassed the threshold for long enough. Cultures, therefore, evolve.
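The three conditions are enough to get an optimizer going on any substrate, genes or memes alike. Here is a minimal sketch, with invented bit-string "memes" and an invented fitness function, just to show the algorithm itself:

```python
import random

def evolve(population, fitness, mutate, generations=100):
    """Minimal evolutionary algorithm: fitness-proportional replication
    (differential fitness) with occasional imperfect copying (heredity
    plus variation). Nothing here cares whether the entities are genes,
    memes, or anything else that meets the three conditions."""
    for _ in range(generations):
        weights = [fitness(x) for x in population]
        population = random.choices(population, weights=weights, k=len(population))
        population = [mutate(x) for x in population]
    return population

# Toy run: ten-bit "memes" whose (made-up) fitness is the number of 1s.
random.seed(0)
pop = [[random.randint(0, 1) for _ in range(10)] for _ in range(50)]
fit = lambda x: 1 + sum(x)
flip = lambda x: [b ^ 1 if random.random() < 0.01 else b for b in x]
final = evolve(pop, fit, flip)
print(sum(map(sum, final)) / len(final))  # mean number of 1s rises well above the start
```

The point of the sketch is only that once variation, heredity and differential fitness are wired together, optimization follows mindlessly.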

The intricacies of cultural evolution, and mathematical and computational models of how cultures evolve, have been the subject of much interdisciplinary research. For an extensive account of human culture see Not By Genes Alone (Richerson & Boyd, 2005). For computational models of social evolution, there is work by Mesoudi, Nowak, and others, e.g. (Hauert et al, 2007). For mathematical models, the aptly named Mathematical Models of Social Evolution: A Guide for the Perplexed by McElreath and Boyd (2007) provides the textbook-style walk-through. For animal culture, see (Laland & Galef, 2009).

Cultural evolution satisfies David Deutsch's criterion for existence: it kicks back. It satisfies the evolutionary equivalent of the condition posed by the Quine-Putnam indispensability argument in mathematics, i.e. it is a sine qua non condition for understanding how the World works nomologically. It is falsifiable as to its Popperian content, and it inflates the World's ontology a little, by inserting a new kind of "replicator", the meme. Contrary to what happened on the internet, the name 'meme' has lost much of its appeal among cultural evolution theorists, and "memetics" is considered by some to refer only to the study of memes as monolithic atomic high-fidelity replicators, which would make the theory obsolete. This has created the following conundrum: the name 'meme' remains by far the best-known way to speak of "that which evolves culturally" within, and especially outside, the specialist arena. Further, the niche occupied by the word 'meme' is so conceptually necessary for communication and explanation within the area that it is frequently put under scare quotes, or some other informal excuse. In fact, as argued by Tim Tyler - who frequently posts here - in the very sharp Memetics (2011), there are nearly no reasons to try to abandon the 'meme' meme, and nearly all reasons (practicality, Qwerty reasons, mnemonics) to keep it. To avoid contradicting the evidence accumulated since Dawkins first coined the term, I suggest we redefine Meme as an attractor in cultural evolution (dual-inheritance) whose development over time structurally mimics to a significant extent the discrete behavior of genes, frequently coinciding with the smallest unit of cultural replication. The definition is long, but the idea is simple: memes are not the best analogues of genes because they are discrete units that replicate just like genes, but because they are continuous conceptual clusters being attracted to a point in conceptual space whose replication is just like that of genes.
Even more simply, memes are the mathematically closest things to genes in cultural evolution. So the suggestion here is for researchers of dual-inheritance and cultural evolution to take the scare quotes off our memes and keep business as usual.

The evolutionary algorithm has created a new attractor-replicator, the meme; it privileged no specific family in the biological tree with it, and it ended up creating a process of cultural-genetic coevolution known as dual-inheritance. This process has been studied in ever more quantified ways by primatologists, behavioral ecologists, population biologists, anthropologists, ethologists, sociologists, neuroscientists and even philosophers. I've shown at least six distinct abilities which helped scaffold our astounding level of cultural intricacy, and some animals who share them with us. We will now take a look at the evolution of cooperation, collaboration, altruism and moral behavior, a sub-area of cultural evolution that saw an explosion of interest and research during the last decade, with publications (most from the last 4 years) such as The Origins of Morality, Supercooperators, Good and Real, The Better Angels of Our Nature, Non-Zero, The Moral Animal, Primates and Philosophers, The Age of Empathy, Origins of Altruism and Cooperation, The Altruism Equation, Altruism in Humans, Cooperation and Its Evolution, Moral Tribes, The Expanding Circle, The Moral Landscape.


3) Cooperation evolves

Despite the selfish nature of genes (Dawkins, 1999) and other units of Darwinian transmission (Jablonka & Lamb, 2007), altruism at the individual level (cost to self for benefit to other) can and does arise because of several intertwined factors.

1) Alleles (the molecular biologist's word for what less specialized areas call genes) under normal conditions optimize for there being more copies of themselves in the future. This happens regardless of whether it is that particular physical instantiation - also known as a token - that is present in the future.

2) Copies of alleles are spread over space, individuals, groups, species and time, but they only care about the time dimension and the quantity dimension. In the long run alleles don't thrive if they are doing better than their neighbors; they thrive if they are doing better than the average allele. A token (instantiation) of an allele that codes for cancer, multiplying itself uncontrollably, could, had it a mind, think it's doing great, but if the mutation that gave rise to it only happened in somatic cells (which do not go through the germ line), it would be in for a surprise. This is one reason biologists say natural selection is short-sighted.

3) The above reasoning applies equally, and for the same reasons, to an allele that codes for individual-selfish behavior in a species in which more altruist groups tend to outlive more egoistic ones. The allele for individual-selfishness, and the selfish individual, may think they are doing great compared to their neighbors, when all of a sudden, with high probability, their group dies. Altruism wins in this case not because there is a new spooky unit of selection that reverses reductionism and applies downward causation originating in groups. Altruism thrives because the average long-term fitness of each allele that coded for it was higher than that of alleles that code for individual-selfish behavior. Group selectionc - as well as superorganism selection, somatic-cell selection, species selection and individual selection - only happens when the selective forces operating on that level coincide with the allele's fitness increasing in relation to all the competing alleles. (Group selectionc is selection for altruist genes at the group level, the only definition under which the entire discussion was a controversy of substance rather than people talking past each other, as brilliantly explained in this post by PhilGoetz, 2010; please read the case study section in that post to get a more precise understanding than the above short definition.) See also the excursus on what a fitness function is below.

4) Completely independent of the reasons in (3), alleles, epigenetics, and learning can program individuals to be cooperative if they "expect" (consciously or not) the interaction with another individual, say, Malou, to: (a) Begin a cycle of reciprocation with Malou in the future whose benefit exceeds the current cost being paid; (b) Counterfactually increase their reputation with sufficiently many individuals that those will award more benefit than the current cost; (c) Avoid being punished by third parties; (d) Conform to, or help enforce, by setting an example, social norms and rules upon which selection pressures act (Tomasello, 2005). A key notion in all these mechanisms based on this encoded "expectation" is that uncertainty must be present. In the absence of uncertainty, a state that doesn't exist in nature, an agent in a prisoner's-dilemma-like interaction would be required to defect rather than cooperate from round one, by backward induction from whichever round was known to be the last, in which cooperating is by definition worse. The problems that on Lesswrong people are trying to solve using Timeless Decision Theory, Updateless Decision Theory, PrudentBot, and other IQ140+ gimmicks, evolution solved by inserting stupidity! More precisely, by embracing higher-level uncertainty about how many future interactions there will be. Kissing, saying "I love you", becoming engaged, and getting married are all increasingly honest ways in which the computer program programmed by your alleles informs Malou that there will be more cooperation and less defection in the future.

5) Finally, altruism only poses paradoxes of the "Group Selectionc" kind when we are trying to explain why a replicator that codes for altruism emerged, and trying to explain it at that replicator's level. It is no mystery why a composition of the phenotypic effects of a gene (replicator) and two memes (attractor-replicators) in all individuals who possess the three of them makes them altruistic, if it does. Each gene and meme in that composition may be fending for itself, but as things turn out, they do make some really nice people (or bonobos) once their extended phenotypes are clustered within those people. If we trust Jablonka & Lamb (2007), there are four streams of heredity flowing concomitantly: genetic, epigenetic, behavioral and symbolic. Some of the flowing hereditary entities are not even attractor-replicators (niche construction, for instance); they don't exhibit replicator dynamics, and any altruism that spreads through them requires no special explanation at all!
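The uncertainty point in (4) can be made quantitative with the standard folk-theorem condition for grim-trigger cooperation in a repeated prisoner's dilemma: if each round is followed by another with probability delta, cooperating forever beats a one-shot defection exactly when delta is high enough. A sketch with textbook payoff values (not numbers from this essay):

```python
def grim_trigger_sustainable(T, R, P, S, delta):
    """Compare the expected value of cooperating forever against
    defecting once and being punished with mutual defection thereafter,
    when each round continues with probability `delta`.
    Payoff convention: T(emptation) > R(eward) > P(unishment) > S(ucker)."""
    cooperate_forever = R / (1 - delta)          # R every round
    defect_once = T + delta * P / (1 - delta)    # T now, then P forever
    return cooperate_forever >= defect_once

# With the standard payoffs T=5, R=3, P=1, S=0, the threshold is
# delta >= (T-R)/(T-P) = 0.5:
print(grim_trigger_sustainable(5, 3, 1, 0, 0.4))  # False
print(grim_trigger_sustainable(5, 3, 1, 0, 0.6))  # True
```

Setting delta = 1 for the last known round (certainty that it is the last) is what triggers the backward cascade of defection; the "stupidity" evolution inserted amounts to keeping delta comfortably above the threshold.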

To the best of my knowledge, none of the 5 factors above, which all do play a role in the existence and maintenance of altruism, requires a revision of Neodarwinism of the Dawkins, Dennett, Trivers, Pinker sort. None of them challenges the validity of our models of replicator dynamics as replicator dynamics. None of them challenges the metaphysically fundamental notion of Darwinism as Universal Acid (Dennett, 1996). None of them compromises the claim that everything in the universe that has complex design of which we are aware can be traced back to Darwinian mind-less processes operating, by and large, in replicator-like entities (Dennett, opus cit). None of them poses an obstacle to physicalist reductionism - in this biology-laden context being the claim that all macrophysical facts, including biological facts, are materially determined by the microphysical facts.

Cooperation evolves, and altruism evolves. They evolve for natural, non-mysterious reasons. Before any more shaking of the edifice of Darwinism is attempted, and its constitutive reductionism or universal corrosive powers are contested, any counteracting evidence must first get past the far less demanding possibility of being explained by any of the factors above, or a combination of them, or of being simply the result of one of the many confusions clarified in the excursus below. Despite many people's attempts to look for Skyhooks that would cast away the all-too-natural demons of Neodarwinism and reductionism, things remain as they were before: Cranes all the way up. I will be listening attentively for a case of altruism found in the biological world, or in mathematical simulations based on it, that can pierce through these many layers of epistemic explanatory ability, but I won't be holding my breath.


Excursus: What is a fitness function?

It is worth pointing out here not only that the altruism and group selection confusion happens, but why it does. And PhilGoetz did half of the explanatory job already. The other half is noticing that the fitness function is a many-place function (there is a newer and better post on Lesswrong explaining many-place functions/words, but I didn't find it in 12 minutes; please point to it if you can). The complicated description of "what the fitness function is", in David Lewis's manner of speaking, would be that it is a function from things to functions from functions to functions. More understandably, with e.g. the specific "thing" being a token of an altruistic allele of kind "Aallele", call it "Aallele334":

Aallele334--1-->((number of Aalleles--3-->total number of alleles)--2-->(amplitude configuration slice--4-->simplest ordering))

Here arrow 4 is the function we call time from a timeless physics, quantum physics perspective. Just substitute the whole parenthesis for "time" instead if you haven't read the Quantum Physics sequence. Arrow 3 is how good Aalleles are doing, i.e. how many of them there are in relation to the total number of competing alleles. Arrow 2 is how this relation between Aalleles and total varies over time. The fitness function is arrow 1, once you are given a specific token of an allele, it is the function that describes how well copies of that token do over time in relation to all the competing alleles. Needless to say, not many biologists are aware of that complex computation.
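The computation arrow 1 describes can be sketched as an actual higher-order function. This is a toy rendering of the notation above, with hypothetical alleles and a made-up two-generation history:

```python
from collections import namedtuple

Allele = namedtuple("Allele", ["kind", "token_id"])

def fitness_of(token, population_at):
    """Arrow 1 as a higher-order function: given a specific token of an
    allele, return a function of time (arrow 4 abstracted away) that
    reports the share of its kind among all competing alleles (arrow 3)
    at that time; how this share varies over time is arrow 2."""
    def share(t):
        pop = population_at(t)
        return sum(1 for a in pop if a.kind == token.kind) / len(pop)
    return share

# Invented history: the altruist kind "A" spreads from 1 of 4 to 3 of 4.
history = {0: [Allele("A", 334), Allele("B", 1), Allele("B", 2), Allele("B", 3)],
           1: [Allele("A", 334), Allele("A", 400), Allele("A", 401), Allele("B", 1)]}
f = fitness_of(Allele("A", 334), history.get)
print(f(0), f(1))  # 0.25 0.75
```

The externalism discussed next falls out of the `population_at` argument: feed in a different reference population (the cell, the group, the species) and the very same token gets a different fitness curve.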

The reason why the unexplained half of the controversies happen is that the instantaneous fitness of an allele will appear very different when you factor it against the competing alleles of other cells, of other individuals, of other groups, or of other species. Fitness is what philosophers call an externalist concept: if you increase the amount of contextually relevant surroundings, the output number changes significantly. It will also appear very different when you factor it for final time T1 or T2. The fitness of an allele coding for a species-specific characteristic of T-Rex's large bodies will be very high if the final time is 65 million years ago, but dismal if it is 64.

I remember Feynman saying, I believe in this interview, that it is amazing what the eye does. We are like a 3d equivalent of an insect floating up and down on the 2d surface of a swimming pool: we manage to abstract away all the waves going through the space between us and a seen object, and still capture enough information to locate it, interact with it, and admire it. It is as if the insect could tell, only from its vertical oscillations, how many children were in the pool, where they were located, and so on. The state of knowledge in many fields, adaptive fitness included, strikes me as similarly amazing. If this many-place function underlies what biologists should be talking about to avoid talking past each other, how can many of them be aware of only one or two of the many variables that should be input, and still be making good science? Or are they?
If you fail to see hidden variables, you can fall prey to anomalies like Simpson's paradox, which is exactly the mistake described in PhilGoetz's post on group/species selection.
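Simpson's paradox here can be shown with toy numbers (invented for illustration, not from PhilGoetz's post): altruists lose ground within every group, yet gain ground in the whole population, because the altruist-heavy group grows faster.

```python
def simpson_demo():
    """Two groups over one generation, as (altruists, egoists) counts.
    Within each group the altruist share falls, but the altruist-heavy
    group grows so much faster that the overall altruist share rises."""
    before = [(9, 1), (1, 9)]    # group 1 is altruist-heavy
    after = [(80, 20), (1, 19)]  # both groups grew, group 1 far more

    def overall_share(groups):
        altruists = sum(g[0] for g in groups)
        return altruists / sum(g[0] + g[1] for g in groups)

    def within_shares(groups):
        return [g[0] / (g[0] + g[1]) for g in groups]

    return within_shares(before), within_shares(after), \
        overall_share(before), overall_share(after)

g_before, g_after, total_before, total_after = simpson_demo()
print(g_before, g_after)          # within-group shares fall: 0.9->0.8, 0.1->0.05
print(total_before, total_after)  # overall share rises: 0.5 -> 0.675
```

Look only at the within-group numbers and altruism is "losing everywhere"; look only at the totals and it is "winning"; the hidden variable is group growth.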

The function above also works for things other than alleles, like individuals with a characteristic, in which case it will be calculating the fitness of having that characteristic at the individual level.

 

4) The complexity of cultural items doesn't undermine the validity of mathematical models.

 4.1) Cognitive attractors and biases substitute for memes discreteness

The math becomes equivalent.

 4.2) Despite the Unilateralist Curse and the Tragedy of the Commons, dyadic interaction models help us understand large scale cooperation

Once we know these two failure modes, dyadic iterated (or reputation-sensitive) interaction is close enough.

5) From Monkeys to Apes to Humans to Transhumans to AIs, the ranges of achievable altruistic skill.

Possible modes of being altruistic. Graph like Bostrom's. Second and third order punishment and cooperation. Newcomb-like signaling problems within AI.

6) Unfit for the Future: the need for greater altruism.

We fail and will remain failing in Tragedy of the Commons problems unless we change our nature.

7) From Science, through Philosophy, towards Engineering: the future of studies of altruism.

Philosophy: Existential Risk prevention through global coordination and cooperation prior to technical maturity. Engineering Humans: creating enhancements and changing incentives. Engineering AI's: making them better and realer.

8) A different kind of Moral Landscape

Like Sam Harris's one, except comparing not how much a society approaches The Good Life (Moral Landscape pg15), but how much it fosters altruistic behavior.

9) Conclusions

Not yet.


Bibliography (Only of the parts already written, obviously):

Boyd, R., Gintis, H., Bowles, S., & Richerson, P. J. (2003). The evolution of altruistic punishment. Proceedings of the National Academy of Sciences, 100(6), 3531-3535.

Cantor, M., & Whitehead, H. (2013). The interplay between social networks and culture: theoretically and among whales and dolphins. Philosophical Transactions of the Royal Society B: Biological Sciences, 368(1618).

Dawkins, R. (1999). The extended phenotype: The long reach of the gene. Oxford University Press, USA.

Dennett, D. C. (1996). Darwin's dangerous idea: Evolution and the meanings of life (No. 39). Simon & Schuster.

Dennett, D. C. (1992). The self as a center of narrative gravity. Self and consciousness: Multiple perspectives.

Galef Jr, B. G., & Laland, K. N. (2005). Social learning in animals: empirical studies and theoretical models. Bioscience, 55(6), 489-499.

Hauert, C., Traulsen, A., Brandt, H., Nowak, M. A., & Sigmund, K. (2007). Via freedom to coercion: the emergence of costly punishment. Science, 316(5833), 1905-1907.

Henrich, J., Boyd, R., & Richerson, P. J. (2008). Five misunderstandings about cultural evolution. Human Nature, 19(2), 119-137.

Hofstadter, D. R. (2007). I am a Strange Loop. Basic Books

Jablonka, E., & Lamb, M. J. (2007). Precis of evolution in four dimensions. Behavioral and Brain Sciences, 30(4), 353-364.

McElreath, R., & Boyd, R. (2007). Mathematical models of social evolution: A guide for the perplexed. University of Chicago Press.

Ottoni, E. B., de Resende, B. D., & Izar, P. (2005). Watching the best nutcrackers: what capuchin monkeys (Cebus apella) know about others' tool-using skills. Animal Cognition, 8(4), 215-219.

Persson, I., & Savulescu, J. (2012). Unfit for the Future: The Need for Moral Enhancement. Oxford: Oxford University Press.

PhilGoetz. (2010), Group selection update. Available at http://lesswrong.com/lw/300/group_selection_update/

Pinker, S. (2007). The stuff of thought: Language as a window into human nature. Viking Adult.

Rendell, L., & Whitehead, H. (2001). Culture in whales and dolphins. Behavioral and Brain Sciences, 24, 309-382.

Richerson, P. J., & Boyd, R. (2005). Not by Genes Alone. University of Chicago Press.

Tyler, T. (2011). Memetics: Memes and the Science of Cultural Evolution. Tim Tyler.

Tomasello, M., Carpenter, M., Call, J., Behne, T., & Moll, H. (2005). Understanding and sharing intentions: The origins of cultural cognition. Behavioral and Brain Sciences, 28(5), 675-690.

Yudkowsky, E. (2008A). 37 ways words can be wrong. Available at http://lesswrong.com/lw/od/37_ways_that_words_can_be_wrong/

Part of a THINK Meetup Group? We Want to Hear From You!

2 OnTheOtherHandle 25 June 2013 07:41AM

Hello everybody, I've recently started as a volunteer for The High Impact NetworK, an effective altruism group with local chapters mostly in the US and UK. We're aiming to get more people, especially students, interested in effective altruism and to grow the network in a big way. Edit: Thank you to BenLowell for suggesting that I include a description!

One of the first things I'm trying to do is make THINK more personal and accessible. The modules provide a pretty good outline of the topics discussed, but they're highly structured and don't capture the feel of a live meeting very well. We'd like to advertise to potential members that there's a lot of value in attending a physical meetup beyond just learning the material already presented in the modules. We'd like it to feel more like a club and less like a classroom.

So if any of you are members of a THINK meetup group, I would love to hear your stories or see pictures and videos of your meetup groups in action. I'm hoping to convey the discussions and debates that go on after the more instructive module part is over. You don't have to reveal your name or face, but if you don't mind having your picture on the website, I would really appreciate seeing a name and face. If you choose to submit an anecdote, try focusing on something surprising or interesting that happened at a meetup, an experience you were unlikely to get elsewhere.

Please send your pictures/videos/anecdotes to ajeyac@berkeley.edu, and I'll forward them to THINK leader Mark Lee. I'll do my best to try to set up a section on the THINK website and put this up, but it may take a while, and if we get more submissions than we anticipated, they may not all show up.

Thank you!

 

Givewell Survey - Opportunity to influence their research

8 Raemon 26 February 2013 05:20PM

Givewell's blog has recently begun a series of 5 self-evaluation posts (they are on the 4th right now) which discuss where the organization is at and where they're going. They're all worth a read. In particular, they build up to a survey for Givewell followers about how you'd like the organization to direct their research in the future, with options to emphasize existential risk and research even if the evidence is lower quality.

AidGrade - GiveWell finally has some competition

44 Raemon 22 January 2013 03:41PM

AidGrade is a new charity evaluator that looks to be comparable to GiveWell. Their primary difference is that they *only* focus on how charities compare along particular measured outcomes (such as school attendance, birthrate, chance of opening a business, malaria), without making any effort to compare between types of charities. (This includes interesting results like "Conditional Cash Transfers and Deworming are better at improving attendance rates than scholarships")

GiveWell also does this, but designs their site to direct people towards their top charities. This is better for people who don't have the time to do the (fairly complex) work of comparing charities across domains, but AidGrade aims to be better for people who just want the raw data and the ability to form their own conclusions.

I haven't looked into it enough to compare the quality of the two organizations' work, but I'm glad we finally have another organization, to encourage some competition and dialogue about different approaches.

This is a fun page to play around with to get a feel for what they do:
http://www.aidgrade.org/compare-programs-by-outcome

And this is a blog post outlining their differences with GiveWell:
http://www.aidgrade.org/uncategorized/some-friendly-concerns-with-givewell

Why (anthropic) probability isn't enough

19 Stuart_Armstrong 13 December 2012 04:09PM

A technical report of the Future of Humanity Institute (authored by me), on why anthropic probability isn't enough to reach decisions in anthropic situations. You also have to choose your decision theory, and take into account your altruism towards your copies. And these components can co-vary while leaving your ultimate decision the same - typically, EDT agents using SSA will reach the same decisions as CDT agents using SIA, and altruistic causal agents may decide the same way as selfish evidential agents.

 

Anthropics: why probability isn't enough

This paper argues that the current treatment of anthropic and self-locating problems over-emphasises the importance of anthropic probabilities, and ignores other relevant and important factors, such as whether the various copies of the agents in question consider that they are acting in a linked fashion and whether they are mutually altruistic towards each other. These issues, generally irrelevant for non-anthropic problems, come to the forefront in anthropic situations and are at least as important as the anthropic probabilities: indeed they can erase the difference between different theories of anthropic probability, or increase their divergence. These considerations help reinterpret decisions, rather than probabilities, as the fundamental objects of interest in anthropic problems.

 
