Comment author: Journeyman 17 July 2015 07:01:01PM 5 points [-]

It's not the preferences of the West that are inherently more valuable, it's the integrity of its institutions, such as rule of law, freedom of speech, etc... If the West declines, then it's going to have negative flow-through effects for the rest of the world.

Comment author: TomStocker 17 July 2015 07:07:15PM 0 points [-]

I think it's clearer, then, if you say "sound institutions" rather than "the West"?

Comment author: Journeyman 12 July 2015 11:01:26PM 4 points [-]

Part of the reason I wrote my critique is that I know that at least some EAs will learn something from it and update their thinking.

VoiceOfRa put very concisely what I think is a median EA view here, but the comment is so deeply nested that I’m afraid it might get buried: “Even if he values human lives terminally, a utilitarian should assign unequal instrumental value to different human lives and make decision based on the combination of both.”

I'll take your word that many EAs also think this way, but I don't really see it affecting the main charitable recommendations. Followed to its logical conclusion, this outlook would result in a lot more concern about the West.

Even if this is not a median EA view, I would argue that most EAs act in accordance with it just out of concern for the cost-effectiveness of their movement-building work. It is not cost-effective to try to convince everyone of the most unintuitive inferences from one's own moral system.

Well, there is a question about what EA is. Is EA about being effectively altruistic within your existing value system? Or is it also about improving your value system to more effectively embody your terminal values? Is it about questioning even your terminal values to make sure they are effective and altruistic?

Regardless of whether you are an antirealist, not all value systems are created equal. Many people's value systems are hopelessly contradictory, or corrupted by politics. For example, some people claim to support gay people, but they also support unselective immigration from countries with anti-gay attitudes, which will inevitably cause negative externalities for gay people. That's a contradiction.

I just don't think a lot of EAs have thought their value systems through very thoroughly, and their knowledge of history, politics, and object-level social science is low. I think there are a lot of object-level facts about humanity, and events in history or going on right now which EAs don't know about, and which would cause them to update their approach if they knew about it and thought seriously about it.

Look at the argument that EAs make towards ineffective altruists: they know so little about charity and the world that they are hopelessly unable to achieve significant results in their charity. When EAs talk to non-EAs, they advocate that (a) people reflect on their value system and priorities, and (b) they learn about the likely consequences of charities at an object-level. I'm doing the same thing: encouraging EAs to reflect on their value systems, and attain a broader geopolitical and historical context to evaluate their interventions.

However, among the things that are important to the individual EA, there are likely many that are very uncontroversial in most of society and focusing on those views in one’s “evangelical” EA work is much more cost-effective.

What is or isn't controversial in society is more a function of politics than of ethics. Progressive politics is memetically dominant, potentially religiously-descended, and falsely presents itself as universal. Imagine what an EA would do in Nazi Germany under the influence of propaganda. How about Soviet Effective Altruists, would they actually do good, or would they say "collectivize faster, comrade?" How do we know we aren't also deluded by present-day politics?

It seems like there should be some basic moral requirement that EAs give their value system a sanity-check instead of just accepting whatever the respectable politics of the time tell them. If indeed politics has a very pervasive influence on people's knowledge and ethics, then giving your value system a sanity-check would require separating out the political component of your worldview. This would require deep knowledge of politics, history, and social science, and I just don't see most EAs or rationalists operating at this level (I'm certainly not: the more I learn, the more I realize I don't know).

The fact that the major EA interventions are so palatable to progressivism suggests that EA is operating with very bounded rationality. If indeed EA is bounded by progressivism, and progressivism is a flawed value system, then there are lots of EA missed opportunities lying around waiting for someone to pick them up.

Comment author: TomStocker 15 July 2015 10:20:55AM 0 points [-]

"I'll take your word that many EAs also think this way, but I don't really see it affecting the main charitable recommendations. Followed to its logical conclusion, this outlook would result in a lot more concern about the West."

Can you elaborate, please? From my perspective, just because a Western citizen is richer / more powerful doesn't mean that helping to satisfy their preferences is more valuable in terms of indirect effects? Or are you talking about whom to persuade? Because I don't see many EA orgs asking Dalit groups for their cash or time yet.

Comment author: Journeyman 10 July 2015 02:20:18AM *  7 points [-]

Effective Altruism is a well-intentioned but flawed philosophy. This is a critique of typical EA approaches, but it might not apply to all EAs, or to alternative EA approaches.

Edit: In a follow up comment, I clarify that this critique is primarily directed at GiveWell and Peter Singer's styles of EA, which are the dominant EA approaches, but are not universal.

  • There is no good philosophical reason to hold EA's axiomatic style of utilitarianism. EA seems to value lives equally, but this is implausible from psychology (which values relatives and friends more), and also implausible from non-naive consequentialism, which values people based on their contributions, not just their needs.

  • Even if you agree with EA's utilitarianism, it is unclear that EA is actually effective at optimizing for it over a longer time horizon. EA focuses on maximizing lives saved in the present, but it has never been shown that this approach is optimal for human welfare over the long-run. The existential risk strand of EA gets this better, but it is too far off.

  • If EA is true, then moral philosophy is a solved problem. I don't think moral philosophy works that way. Values are much harder than EA gives credit for. Betting on a particular moral philosophy with a percentage of your income shows an immense amount of confidence, and extraordinary claims require extraordinary evidence.

  • EA has an opportunity cost, and its confidence is crowding out better ideas. What would those better altruistic interventions be? I don't know, but I feel like we can do better.

  • EAs have a weak understanding of geopolitics and demographics. The current state of the world is that Western Civilization, the goose that laid the golden egg, is declining. If indeed Western Civilization is in trouble, and we are facing near or medium-term catastrophic risks like social collapse, turning into Brazil, or war with Russia or China, then the highest-value opportunities for altruism will be at home. Unless you think we have a hard-takeoff AI scenario or technological miracles in the near-term, we should be very worried about geopolitics, demographics, and civilization in the medium-term and long-term.

  • If Western Civilization collapses, or is overtaken by China, then that will not be a good future for human welfare. Averting this possibility is way more high-impact than anything else that EAs are currently doing. If the West is secure and abundant, then maybe EAs have the right idea by redistributing wealth out of the West. But if the West is precarious and fragile, then redistribution makes less sense, and addressing the risks in the West seems more important.

  • EAs do not understand demographics, or are not taking them seriously if they do. The West is currently faltering in fertility and undergoing population replacement from people from areas with higher crime and corruption. Meanwhile, altruism itself varies between populations based on clannishness and inbreeding. We are heading towards a future that is demographically more clannish and less altruistic.

  • Some EAs are open borders advocates, but open borders is a ridiculously dangerous experiment for the West. They have not satisfactorily accounted for the crime and corruption that immigrants may bring. Additionally, under democracy, immigrants can vote and change the culture. Open border advocates hope that institutions will survive, but they have provided no good arguments that Western institutions will survive rapid demographic change. Institutions might seem fine and then rapidly collapse in a non-linear way. If Western Civilization collapses into ethnic turmoil or Soviet sclerosis, then humans everywhere will suffer.

  • Some EAs have a skeptical attitude towards parenthood, because it takes away money from charity, and believe that EAs are easier to convert than create. In some cases, EAs who want to become parents justify parenthood as an unprincipled exception. This whole conversation is ridiculous and exemplifies EAs’ flawed moral philosophy and understanding of humans. Altruistic parents are likely to have altruistic children due to the heritability of behavioral traits. If altruistic people fail to breed, then they will take their altruistic genes to the grave with them, like the Shakers. If altruism itself is a casualty of changing demographics, then human welfare will suffer in the future. (If you doubt this can happen, then check out the earlier two links, and good luck getting Eastern Europeans or Middle-Easterners interested in EA.)

  • I don’t think EAs do a very good job of distinguishing their moral intuitions from good philosophical arguments; see the interest of many EAs in open borders and animal rights. I do not see a large understanding in EA of what altruism is and how it can become pathological. Pathological altruism is where people become practically addicted to a feeling of doing good, which leads them to act sometimes with negative consequences. A quote from the book in that review, which shows some of the difficulty of disentangling moral psychology from moral philosophy:

Despite the fact that a moral conviction feels like a deliberate rational conclusion to a particular line of reasoning, it is neither a conscious choice nor a thought process. Certainty and similar states of ‘knowing that we know’ arise out of primary brain mechanisms that, like love or anger, function independently of rationality or reason. . . .

What feels like a conscious life-affirming moral choice—my life will have meaning if I help others—will be greatly influenced by the strength of an unconscious and involuntary mental sensation that tells me that this decision is “correct.” It will be this same feeling that will tell you the “rightness” of giving food to starving children in Somalia, doing every medical test imaginable on a clearly terminal patient, or bombing an Israeli school bus. It helps to see this feeling of knowing as analogous to other bodily sensations over which we have no direct control.

It seems that some people have strong intuitions towards altruism or animal rights, but it’s another thing entirely to say that those arguments are philosophically strong. It seems that people who are biologically predisposed towards altruism will be motivated to find philosophical arguments that justify what they already want to do. I don’t think EAs have corrected for this bias. If EAs’ arguments are flawed, then their adoption of them must be explained by their moral intuitions or signaling desires. Since EA provides great opportunities to signal altruism, intelligence, and discernment, it seems that there would be a gigantic temptation for some personalities to get into EA and exaggerate the quality of its arguments, or adopt its axioms even though other axioms are possible. Even though EAs employ reason and philosophy unlike typical pathological altruists, moral philosophy is subjective, and choice of particular moral theories seems highly related to personality.

The other psychological bias of EAs is due to them getting nerd-sniped by narrowly defining problems, or picking problems that are easier to solve or charities that are possible to evaluate. They seem to operate from the notion that giving away some of their money to charity is taken for granted, so they just need to find the best charity out of those that are possible to evaluate. In an inconvenient world for an altruist, the high-value opportunities are unknown or unknowable, throwing your money at what seems best might result in a negligible or negative effect, and keeping your money in your piggy bank until more obvious opportunities emerge might make the most sense.

EA isn’t all bad. It’s probably better than typical ineffective charities, so if you absolutely must give to a charity, then effective charities are probably better. EAs have the right idea by trying to evaluate charities. Many EA arguments are strong within the bounds of utilitarianism, or the confines of a particular problem. But EAs have a hard road towards justification, because their philosophy advocates spending money on strong moral claims, and being wrong about important things about the world will totally throw off their results.

My criticisms here don't apply to all EAs or all possible EA approaches, just the median EA arguments and interventions I've seen. It is conceivable that in the future EA will become more persuasive to a larger group of people once it has greater knowledge about the world and incorporates that knowledge into its philosophy. A neoreactionary approach to EA would focus on preserving Western Civilization and avoiding medium-term political/demographic catastrophes. But nobody is sufficiently knowledgeable at this point to know how we could spend money towards this goal.

Comment author: TomStocker 15 July 2015 10:12:50AM -1 points [-]

Interesting that the solutions you're jumping to are about defending the 'west' and beating the south / east rather than working with the south/east to make sure the best of both is shared?

Comment author: benkuhn 12 July 2015 03:23:49AM 6 points [-]

Every time I pay for electricity for my computer rather than sending the money to a third world peasant is, according to EA, a failure to maximize utility.

I'm sad that people still think EAers endorse such a naive and short-time-horizon type of optimizing utility. It would obviously not optimize any reasonable utility function over a reasonable timeframe for you to stop paying for electricity for your computer.

More generally, I think most EAers have a much more sophisticated understanding of their values, and the psychology of optimizing them, than you give them credit for. As far as I know, nobody who identifies with EA routinely makes individual decisions between personal purchases and donating. Instead, most people allocate a "charity budget" periodically and make sure they feel ok about both the charity budget and the amount they spend on themselves. Very few people, if any, cut personal spending to the point where they have to worry about, e.g., electricity bills.

Comment author: TomStocker 15 July 2015 10:06:30AM *  0 points [-]

So I think most EAs have come to the point where they realise that small trade-offs, and agonising over them, displace other good things, so they try to find a way of setting a limit by year or whatever. But you know, many people agonise and make trade-offs; it's just that often it isn't giving to the poor that's the counterfactual, it's saving, or paying the mortgage, or buying a better holiday or school for their children or whatever. If you don't think like that, then you have everything you need?? http://www.givinggladly.com/ and http://www.jefftk.com/index have documented this journey of living well with generosity. Sounds like it might be worth a read :)

edit: Soz Ben, I think I put this comment in the wrong place!

In response to comment by Benquo on Action and habit
Comment author: CaptainOblivious2 22 June 2011 12:28:29AM 1 point [-]

Actually for me it's all mental. Normally I hate being hungry: that gnawing feeling in your stomach that says "FEED ME NOW". But if I'm trying to lose weight, I somehow flip my mental state such that the gnawing feeling is a GOOD thing: that's what losing weight feels like. As long as you've got that feeling, you're losing weight. However, if you eat enough that the gnawing feeling goes away, that's a bad thing: you're not losing weight any more. And god forbid you should eat enough to actually feel FULL - that's the absolute opposite of losing weight! Whatever happens, you don't want that!

Because of the mental flip, I don't feel like I'm depriving myself of something - instead I feel like I'm moving towards a goal, which is a positive feeling, not a negative one.

I wish I could tell others how to perform that mental flip, but I really wouldn't know how to start - it's one of those things you just DO.

Comment author: TomStocker 23 June 2015 08:14:04AM 0 points [-]

Leverage the insight from above: don't rule out all food some of the time, rule out some food all of the time. In other cultures outside the States, at least in many sections of society, these kinds of rules are followed: 1) no sweets, cakes, donuts, or any mixes of fat and sugar (or at least never eat these on your own or when not celebrating something); 2) stick to meal times. ... Then if that still doesn't work you can do things like buy smaller plates, or rule out meat or dairy... there's always a rule that will fit.

Over-eating is probably more difficult for other reasons, like image and identity, ideas of physical permanence, and brain chemistry.

Comment author: TomStocker 28 May 2015 09:35:48AM 0 points [-]

My hunch is that encouraging people who have to manage an unpredictable or tricky health condition to predict, and note their prediction of, how good or bad an activity will be for their pain / energy / mood / whatever else would be a very useful habit: it both frees people up to do things and prevents them from doing too much of what hurts. Julia, have you or anyone from CFAR looked at partnering with a pain management or other disease management team or setting to see how many of the rationality skills would be helpful?

Comment author: psychoman 14 May 2015 08:27:35PM 0 points [-]

I'm new here. This is very helpful writing for someone who has started thinking. But thinking is very complex and has much more developed versions. I won't comment on my further thoughts before reading other articles. Maybe there are some more developed ideas.

Comment author: TomStocker 28 May 2015 09:27:06AM 0 points [-]

Welcome! There are loads of articles, so if it gets confusing, this is a decent place to start. https://intelligence.org/rationality-ai-zombies/

In response to comment by Capla on On Caring
Comment author: Lumifer 09 December 2014 06:30:00PM 2 points [-]

Maybe you need to go see squalor? I haven't, so I can't say.

I have seen squalor, and in my particular case it did not recalibrate my care-o-meter at all. YMMV, of course.

In response to comment by Lumifer on On Caring
Comment author: TomStocker 14 May 2015 01:01:45PM 0 points [-]

Living in pain sent my care-o-meter from below average to full. Seeing squalor definitely did something. I think it probably depends how you see it: did you talk to people as equals, or see them as different types of people you couldn't relate to / who didn't fit a certain criterion? Being surrounded by suffering from a young age doesn't seem to make people care; it's being shocked by suffering after not having had much of it around that is occasionally very powerful, like the story about the Buddha growing up in the palace and then seeing sickness, death, and age for the first time?

Comment author: Kindly 23 April 2015 02:34:33PM 5 points [-]

Absolutely. We're bad at anything that we can't easily imagine. Probably, for many people, intuition for "torture vs. dust specks" imagines a guy with a broken arm on one side, and a hundred people saying 'ow' on the other.

The consequences of our poor imagination for large numbers of people (i.e. scope insensitivity) are well-studied. We have trouble doing charity effectively because our intuition doesn't take the number of people saved by an intervention into account; we just picture the typical effect on a single person.

What, I wonder, are the consequences of our poor imagination for extremity of suffering? For me, the prison system comes to mind: I don't know how bad being in prison is, but it probably becomes much worse than I imagine if you're there for 50 years, and we don't think about that at all when arguing (or voting) about prison sentences.

Comment author: TomStocker 12 May 2015 12:39:47PM 1 point [-]

My feeling is that situations like being caught doing something horrendous might or might not be subject to psychological adjustment; many situations of suffering are subject to psychological adjustment, and so might actually be not as bad as we thought. But chronic intense pain is literally unadjustable to some degree: you can adjust to being in intense suffering, but that doesn't make the intense suffering go away. That's why I think it's a special class of states of being, one that invokes action. What do people think?

In response to Lawful Uncertainty
Comment author: Cyan2 10 November 2008 09:27:52PM 6 points [-]

IIRC, there exist minimax strategies in some games that are stochastic. There are some games in which it is in fact best to fight randomness with randomness.
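A minimal sketch of this (using matching pennies as the standard example, not a game from the original discussion): every deterministic strategy can be exploited for a sure loss, while the uniformly random strategy is the minimax strategy and guarantees an expected payoff of 0 against any opponent.

```python
# Matching pennies: the row player wins (+1) if the two coins match, loses (-1) otherwise.
def payoff(row, col):
    return 1 if row == col else -1

# Expected payoff of a mixed row strategy (p = probability of "heads")
# against a mixed column strategy (q = probability of "heads").
def expected(p, q):
    return sum(pr * pc * payoff(r, c)
               for r, pr in ((0, 1 - p), (1, p))
               for c, pc in ((0, 1 - q), (1, q)))

# Any deterministic row strategy (p = 0 or 1) can be exploited down to -1...
worst_pure = min(min(expected(p, q) for q in (0.0, 1.0)) for p in (0.0, 1.0))

# ...while the uniform mix (p = 0.5) guarantees 0 in expectation against every
# opponent (checking the extremes of q suffices, since payoff is linear in q).
worst_mixed = min(expected(0.5, q) for q in (0.0, 1.0))

print(worst_pure, worst_mixed)  # -1.0 0.0
```

Rock-paper-scissors works the same way with a uniform 1/3 mix, so "fighting randomness with randomness" is exactly the game-theoretic prescription against an adaptive opponent.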

In response to comment by Cyan2 on Lawful Uncertainty
Comment author: TomStocker 12 May 2015 09:28:39AM 0 points [-]

Only when the opponent has a brain.
