TomStocker comments on Effective Altruism from XYZ perspective - Less Wrong
Part of the reason I wrote my critique is that I know that at least some EAs will learn something from it and update their thinking.
I'll take your word that many EAs also think this way, but I don't really see it affecting the main charitable recommendations. Followed to its logical conclusion, this outlook would result in a lot more concern about the West.
Well, there is a question about what EA is. Is EA about being effectively altruistic within your existing value system? Or is it also about improving your value system to more effectively embody your terminal values? Is it about questioning even your terminal values to make sure they are effective and altruistic?
Regardless of whether you are an antirealist, not all value systems are created equal. Many people's value systems are hopelessly contradictory, or corrupted by politics. For example, some people claim to support gay people, but they also support unselective immigration from countries with anti-gay attitudes, which will inevitably cause negative externalities for gay people. That's a contradiction.
I just don't think a lot of EAs have thought their value systems through very thoroughly, and their knowledge of history, politics, and object-level social science is low. I think there are a lot of object-level facts about humanity, and events in history or unfolding right now, which EAs don't know about, and which would cause them to update their approach if they knew about them and thought seriously about them.
Look at the argument that EAs make towards ineffective altruists: they know so little about charity and the world that they are hopelessly unable to achieve significant results in their charity. When EAs talk to non-EAs, they advocate that (a) people reflect on their value system and priorities, and (b) they learn about the likely consequences of charities at an object-level. I'm doing the same thing: encouraging EAs to reflect on their value systems, and attain a broader geopolitical and historical context to evaluate their interventions.
What is or isn't controversial in society is more a function of politics than of ethics. Progressive politics is memetically dominant, potentially religiously-descended, and falsely presents itself as universal. Imagine what an EA would do in Nazi Germany under the influence of propaganda. How about Soviet Effective Altruists, would they actually do good, or would they say "collectivize faster, comrade?" How do we know we aren't also deluded by present-day politics?
It seems like there should be some basic moral requirement that EAs give their value system a sanity-check instead of just accepting whatever the respectable politics of the time tell them. If indeed politics has a very pervasive influence on people's knowledge and ethics, then giving your value system a sanity-check would require separating out the political component of your worldview. This would require deep knowledge of politics, history, and social science, and I just don't see most EAs or rationalists operating at this level (I'm certainly not: the more I learn, the more I realize I don't know).
The fact that the major EA interventions are so palatable to progressivism suggests that EA is operating with very bounded rationality. If indeed EA is bounded by progressivism, and progressivism is a flawed value system, then there are lots of EA missed opportunities lying around waiting for someone to pick them up.
"I'll take your word that many EAs also think this way, but I don't really see it affecting the main charitable recommendations. Followed to its logical conclusion, this outlook would result in a lot more concern about the West."
Can you elaborate, please? From my perspective, just because a Western citizen is wealthier or more powerful doesn't mean that helping to satisfy their preferences is more valuable in terms of indirect effects. Or are you talking about whom to persuade? Because I don't see many EA orgs asking Dalit groups for their cash or time yet.
It's not the preferences of the West that are inherently more valuable, it's the integrity of its institutions, such as rule of law, freedom of speech, etc... If the West declines, then it's going to have negative flow-through effects for the rest of the world.
I think it's clearer, then, if you say "sound institutions" rather than "the West"?
There are other countries with sound institutions, like Singapore and Japan, but I'm not so worried about them as I am about the West, because they have an eye towards self-preservation. For instance, both of those countries have declining birth rates, but they protect their own rule of law (unlike the West), and they have more cautious immigration policies that help prevent their populations from being replaced by foreign ones (unlike the West). The West, unlike those more careful Asian countries, is playing a dangerous game by treating its institutions in a cavalier way for ill-thought-out redistributionist projects and importing leftist voting blocs.
EAs should also be more worried about decline in the West, because Westerners (particularly NW Europeans) are more into charity than other populations (e.g. Eastern Europeans are super-low in charity). My previous post documents this. A Chinese- or Russian-dominated future is really, really bad for EA, for existential risk prevention, and for AI safety.
I wouldn't be so cavalier about that. Japan, specifically, has about zero immigration and its population, not to mention the workforce, is already falling. Demographics is a bitch. Without any major changes, in a few decades Japan will be a backwater full of old people's homes that some Chinese trillionaire might decide to buy on a whim and turn into a large theme park.
Open borders and no immigration are like Scylla and Charybdis -- neither is a particularly appealing option for a rich and aging country.
I also feel that the question "how much immigration to allow" is overrated. I consider it much less important than the question of "precisely what kind of people should we allow in". A desirable country has an excellent opportunity to filter a part of its future population and should use it.
I agree that Japan has its own problems. No solutions are particularly good if they can't get their birth rates up. Singapore also has low birth rates. The question of what prevents high-IQ people from reproducing might be something EAs should look into.
"How much immigration to allow" and "precisely what kind of people should we allow in" can be related, because the more immigration you allow, the less selective you are probably being, unless you have a long line of qualified applicants. Skepticism of open borders doesn't require being against immigration in general.
As you say, a filtered immigrant population could be very valuable. For example, you could have "open borders" for educated professionals from low-crime, low-corruption countries with compatible value systems, who are encouraged to assimilate. I'm pretty sure this isn't what most open borders advocates mean by "open borders," though.
The left doesn't "want" a responsible immigration policy either. For their political goals, they want a large and dissatisfied voting bloc. And for their signaling goals, it's much more holy to invite poor, unskilled people than skilled professionals who want to assimilate.