Comment author: Acty 21 July 2015 06:34:25AM 3 points [-]

Well, "providing universal healthcare and welfare will lead to a massive drop in motivation to work" is a scientific prediction. We can find out whether it is true by looking at countries where this already happens - taxes pay for good socialised healthcare and welfare programs - like the UK and the Nordics, and seeing if your prediction has come true.

The UK unemployment rate is 5.6%; the United States' is 5.3%. Not a particularly big difference, nothing indicating that the UK's universal free healthcare has created some kind of horrifying utility drop because there's no motivation to work. We can take another example if you like. Healthcare in Iceland is universal, and Iceland's unemployment rate is 4.3% (it also has the highest life expectancy in Europe).

This is not an ideological dispute. This is a dispute of scientific fact. Does taxing people and providing universal healthcare and welfare lead to a massive drop in utility by destroying the motivation to work (and meaning that people don't work)? This experiment has already been performed - the UK and Iceland have universal healthcare and provide welfare to unemployed citizens - and, um, the results are kind of conclusive. The world hasn't ended over here. Everyone is still motivated to work. Unemployment rates are pretty similar to those in the US, where welfare etc. isn't very good and there's no universal healthcare. Your prediction didn't come true, so if you're a rationalist, you have to update now.

Comment author: Journeyman 21 July 2015 07:15:46AM *  1 point [-]

Scandinavia and the UK are relatively ethnically homogeneous, high-trust, and productive populations. Socialized policies are going to work relatively better in these populations. Northwest European populations are not an appropriate reference class to generalize about the rest of the world, and they are often different even from other parts of Europe.

Socialized policies will have poorer results in more heterogeneous populations. For example, imagine that a country has multiple tribes that don't like each other; they aren't going to like supporting each other's members through welfare. As another example, imagine that multiple populations in a country have very different economic productivity. The people who are higher in productivity aren't going to enjoy their taxes being siphoned off to support other groups who aren't pulling their weight economically. These situations are a recipe for ethnic conflict.

Icelanders may be happy with their socialized policies now, but imagine if you created a new nation with a combination of Icelanders and Greeks called Icegreekland. The Icelanders would probably be a lot more productive than the Greeks and unhappy about needing to support them through welfare. Icelanders might be more motivated to work and pay taxes if it's creating a social safety net for their own community, but less excited about working to pay taxes to support Greeks. And who can blame them?

There is plenty of valid debate about the likely consequences of socialized policies for populations other than homogeneous NW European populations. Whoever told you these issues were a matter of scientific fact was misleading you. This is an excellent example of how the siren's call of politically attractive answers leads people to cut corners during their analysis so it goes in the desired direction, whether they are aware they are doing it or not.

Generalizing what works for one group as appropriate for another is a really common failure mode through history which hurts real people. See the whole "democracy in Iraq" thing as another example.

Comment author: Acty 21 July 2015 01:04:24AM *  1 point [-]

--

Comment author: Journeyman 21 July 2015 01:40:26AM 3 points [-]

You yourself are unlikely to start the French Revolution, but somehow, well-intentioned people seem to get swept up in those movements. Even teachers, doctors, and charity workers can contribute to an ideological environment that goes wrong; this doesn't mean that they started it, or that they supported it every step of the way. But they were part of it.

The French Revolution and guillotines is indeed a rarer event. But if pathological altruism can result in such large disasters, then it's quite likely that it can also backfire in less spectacular ways that are still problematic.

As you point out, many interventions to change the world risk going wrong and making things worse, but it would be a shame to completely give up on making the world a better place. So what we really want is interventions that are very well thought out, with a lot of care towards the likely consequences, taking into account the lessons of history for similar interventions.

Comment author: Acty 21 July 2015 12:44:06AM *  2 points [-]

--

Comment author: Journeyman 21 July 2015 01:18:24AM 4 points [-]

To some degree, the idea of a "Friendship and Science Party" has already been tried. The Mugwumps wanted to get scholars, scientists and learned people more involved in politics to improve its corrupt state. It sounds like a great idea on paper, but this is what happened:

So the Mugwumps believed that, by running a pipe from the limpid spring of academia to the dank sewer of American democracy, they could make the latter run clear again. What they might have considered, however, was that there was no valve in their pipe. Aiming to purify the American state, they succeeded only in corrupting the American mind.

When an intellectual community is separated from political power, as the Mugwumps were for a while in the Gilded Age, it finds itself in a strange state of grace. Bad ideas and bad people exist, but good people can recognize good ideas and good people, and a nexus of sense forms. The only way for the bad to get ahead is to copy the good, and vice pays its traditional tribute to virtue. It is at least reasonable to expect sensible ideas to outcompete insane ones in this "marketplace," because good sense is the only significant adaptive quality.

Restore the connection, and the self-serving idea, the meme with its own built-in will to power, develops a strange ability to thrive and spread. Thoughts which, if correct, provide some pretext for empowering the thinker, become remarkably adaptive. Even if they are utterly insane. As the Latin goes: vult decipi, decipiatur. Self-deception does not in any way preclude sincerity.

...

In particular, when the power loop includes science itself, science itself becomes corrupt. The crown jewel of European civilization is dragged in the gutter for another hundred million in grants, while journalism, our peeking impostor of the scales, averts her open eyes.

Science also expands to cover all areas of government policy, a task for which it is blatantly unfit. There are few controlled experiments in government. Thus, scientistic public policy, from economics ("queen of the social sciences") on down, consists of experiments that would not meet any standard of relevance in a truly scientific field.

Bad science is a device for laundering thoughts of unknown provenance without the conscious complicity of the experimenter.

According to this account, the more contact science has with politics, the more corrupted it becomes.

Comment author: Acty 20 July 2015 11:09:46PM *  1 point [-]

--

Comment author: Journeyman 21 July 2015 12:55:47AM 2 points [-]

There is historical precedent for groups advocating equality, altruism, and other humanitarian causes to do a lot of damage and start guillotining people. You would probably be horrified and step off the train before it got to that point. But it's important to understand the failure modes of egalitarian, altruistic movements.

The French Revolution, and Russian Revolution / Soviet Union ran into these failure modes where they started killing lots of people. After slavery was abolished in the US, around one quarter of the freed slaves died.

These events were all horrible disasters from a humanitarian perspective. Yet I doubt that the original French Revolutionaries planned from the start to execute the aristocracy, and then execute many of their own factions for supposedly being counter-revolutionaries. I don't think Marx ever intended for the Russian Revolution and Soviet Union to have a high death toll. I don't think the original abolitionists ever expected the bloody Civil War followed by 25% of the former slaves dying.

Perhaps, once a movement for egalitarianism and altruism got started, an ideological death spiral caused so much polarization that it was impossible to stop people from going overboard and extending the movement's mandate in a violent direction. Perhaps at first, they tried to persuade their opponents to help them towards the better new world. When persuasion failed, they tried suppression. And when suppression failed, someone proposed violence, and nobody could stop them in such a polarized environment.

Somehow, altruism can turn pathological, and well-intentioned interventions have historically resulted in disastrous side-effects or externalities. That's why some people are cynical about altruistic political attitudes.

Comment author: Lumifer 18 July 2015 04:13:18AM 4 points [-]

There are other countries with sound institutions, like Singapore and Japan, but I'm not so worried about them as I am about the West, because they have an eye towards self-preservation.

I wouldn't be so cavalier about that. Japan, specifically, has about zero immigration and its population, not to mention the workforce, is already falling. Demographics is a bitch. Without any major changes, in a few decades Japan will be a backwater full of old people's homes that some Chinese trillionaire might decide to buy on a whim and turn into a large theme park.

Open borders and no immigration are like Scylla and Charybdis -- neither is a particularly appealing option for a rich and aging country.

I also feel that the question "how much immigration to allow" is overrated. I consider it much less important than the question of "precisely what kind of people should we allow in". A desirable country has an excellent opportunity to filter a part of its future population and should use it.

Comment author: Journeyman 18 July 2015 09:33:02AM *  5 points [-]

I agree that Japan has its own problems. No solutions are particularly good if these countries can't get their birth rates up. Singapore also has low birth rates. The problems preventing high-IQ people from reproducing might be something that EAs should look into.

"How much immigration to allow" and "precisely what kind of people should we allow in" can be related, because the more immigration you allow, the less selective you are probably being, unless you have a long line of qualified applicants. Skepticism of open borders doesn't require being against immigration in general.

As you say, a filtered immigration population could be very valuable. For example, you could have "open borders" for educated professionals from low-crime, low-corruption countries with compatible value systems and who are encouraged to assimilate. I'm pretty sure this isn't what most open borders advocates mean by "open borders," though.

The left doesn't "want" a responsible immigration policy either. For their political goals, they want a large and dissatisfied voting block. And for their signaling goals, it's much more holy to invite poor, unskilled people rather than skilled professionals who want to assimilate.

Comment author: TomStocker 17 July 2015 07:07:15PM 0 points [-]

I think it's clearer then if you say sound institutions rather than the West?

Comment author: Journeyman 17 July 2015 07:44:08PM 5 points [-]

There are other countries with sound institutions, like Singapore and Japan, but I'm not so worried about them as I am about the West, because they have an eye towards self-preservation. For instance, both those countries have declining birth rates, but they protect their own rule of law (unlike the West), and have more cautious immigration policies that help prevent their populations from being replaced by foreign ones (unlike the West). The West, unlike sensible Asian countries, is playing a dangerous game by treating its institutions in a cavalier way for ill-thought-out redistributionist projects and importing leftist voting blocs.

EAs should also be more worried about decline in the West, because Westerners (particularly NW Europeans) are more into charity than other populations (e.g. Eastern Europeans are super-low in charity). My previous post documents this. A Chinese- or Russian- dominated future is really, really bad for EA, for existential risk prevention, and for AI safety.

Comment author: TomStocker 15 July 2015 10:12:50AM -1 points [-]

Interesting that the solutions you're jumping to are about defending the 'west' and beating the south / east rather than working with the south/east to make sure the best of both is shared?

Comment author: Journeyman 17 July 2015 07:12:42PM 3 points [-]

To be clear, when I speak of defending the West, I am mostly thinking of defending the West against self-inflicted problems. Nobody is talking about "beating" the global south / east. If the West declines, then it won't be in a very good position to share anything with anyone.

Comment author: TomStocker 15 July 2015 10:20:55AM 0 points [-]

"I'll take your word that many EAs also think this way, but I don't really see it effecting the main charitable recommendations. Followed to its logical conclusion, this outlook would result in a lot more concern about the West."

Can you elaborate please? From my perspective, just because a western citizen is more rich / powerful doesn't mean that helping to satisfy their preferences is more valuable in terms of indirect effects? Or are you talking about who to persuade because I don't see many EA orgs asking Dalit groups for their cash or time yet.

Comment author: Journeyman 17 July 2015 07:01:01PM 5 points [-]

It's not the preferences of the West that are inherently more valuable, it's the integrity of its institutions, such as rule of law, freedom of speech, etc... If the West declines, then it's going to have negative flow-through effects for the rest of the world.

Comment author: Telofy 14 July 2015 11:44:33AM 2 points [-]

I didn’t respond to your critiques that went into a more political direction because there was already discussion of those aspects there that I wouldn’t have been able to add anything to. There is concern in the movement in general and in individual EA organizations that because EAs are so predominantly computer scientists and philosophers, there is a great risk of incurring known and unknown unknowns. In the first category, more economists for example would be helpful; in the second category it will be important to bring people from a wide variety of demographics into the movement without compromising its core values. As a computer scientist, I’m pretty median again.

then there are lots of EA missed opportunities lying around waiting for someone to pick them up

Indeed. I’m not sure if the median EA is concerned about this problem yet, but I wouldn’t be surprised if they are. Many EA organizations are certainly very alert to the problem.

Followed to its logical conclusion, this outlook would result in a lot more concern about the West.

This concern manifests in movement-building (GWWC et al.) and capacity-building (80k Hours, CEA, et al.). There is also concern that I share but that may not yet be median EA concern that we should focus more on movement-wide capacity-building, networking, and some sort of quality-over-quantity approach to allow the movement to be better and more widely informed. (And by “quantity” I don’t mean to denigrate anyone; I just mean more people like myself who already feel welcomed in the movement because everyone speaks their dialect and whose peers are easily convinced too.)

Throughout the time that I’ve been part of the movement, the general sentiment either in the movement as a whole or within my bubble of it has shifted in some ways. One trend that I’ve perceived is that in the earlier days there was more concern over trying vs. really trying, while now concern over putting one’s activism on a long-term sustainable basis has become more important. Again, this may be just my filter bubble. This is encouraging as it shows that everyone is very well capable of updating, but it also indicates that as of one or two years ago, we still had a bunch to learn even concerning rather core issues. In a few more years, I’ll probably be more confident that some core questions are not so much in flux anymore that new EAs can overlook or disregard them and thereby dilute what EA currently stands for or shift it into a direction I couldn’t identify with anymore.

Again, I’m not ignoring your points on political topics, I just don’t feel sufficiently well-informed to comment. I’ve been meaning to read David Roodman’s literature review on open borders–related concerns, since I greatly enjoyed some of his other work, but I haven’t yet. David Roodman now works for the Open Philanthropy Project.

Well, there is a question about what EA is. Is EA about being effectively altruistic within your existing value system? Or is it also about improving your value system to more effectively embody your terminal values? Is it about questioning even your terminal values to make sure they are effective and altruistic?

I’ve always perceived EA as whatever stands at the end of any such process, or maybe not the end but some critical threshold when a person realizes that they agree with the core tenets: that they value others’ well-being, and that greater well-being or the well-being of more beings weighs heavier than lesser well-being or the well-being of fewer. If they reach such a threshold. If they do, I see all three processes as relevant.

Regardless of whether you are an antirealist, not all value systems are created equal.

Of course.

Their knowledge of history, politics, and object-level social science is low. … I'm doing the same thing: encouraging EAs to reflect on their value systems, and attain a broader geopolitical and historical context to evaluate their interventions.

Yes, thanks! That’s why I was most interested in your comment in this thread, and because all other comments that piqued my interest in similar ways already had comprehensive replies below them when I found the thread.

This needs to be turned into a concrete strategy, and I’m sure CEA is already on that: identifying exactly what sorts of expertise are in short supply in the movement and networking among the people who possess just this expertise. I’ve made some minimal-effort attempts to pitch EA to economists, but inviting such people to speak at events like EA Global is surely a much more effective way of drawing them and their insights into the movement. That’s not limited to economists of course.

Do you have ideas for people or professions the movement would benefit from and strategies for drawing them in and making them feel welcome?

I just don't think a lot of EAs have thought their value systems through very thoroughly

Given how many philosophers there are in the movement, this would surprise me. Is it possible that it’s more the result of the ubiquitous disagreement between philosophers?

How do we know we aren't also deluded by present-day politics?

I’ve wondered about that in the context of moral progress. Sometimes the idea of moral progress is attacked on the grounds that proponents base their claims for moral progress on how history has developed into the direction of our current status quo, which is rather pointless since by that logic any historical trend toward the status quo would then become “moral progress.” However, by my moral standards the status quo is far from perfect.

Analogously I see that the political views EAs are led to hold are so heterogeneous that some have even thought about coining new terms for this political stance (such as “newtilitarianism”), luckily only in jest. (I’m not objecting to the pun but I’m wary of labels like that.) That these political views are at least somewhat uncommon in their combination suggests to me that we’re not falling into that trap, or at least making an uncommonly good effort of avoiding it. Since the trap is pretty much the default starting point for many of us, it’s likely we still have many legs trapped in it despite this “uncommonly good effort.” The metaphor is already getting awkward, so I’ll just add that some sort of contrarian hypercorrection would of course constitute just another trap. (As it happens, there’s another discussion of the importance of diversity in the context of Open Phil in that Vox article.)

Comment author: Journeyman 17 July 2015 09:10:48AM 6 points [-]

No need for you to address any particular political point I'm making. For now, it is sufficient for me to suggest that reigning progressive ideas about politics are flawed and holding EAs back, without you committing to any particular alternative view.

I'm glad to hear that EAs are focusing more on movement-building and collaboration. I think there is a lot of value in eigenaltruism: being altruistic only towards other eigenaltruistic people who "pay it forward" (see Scott Aaronson's eigenmorality). Civilizations have been built with reciprocal altruism. The problem with most EA thinking is that it is one-way, so the altruism is consumed immediately. This post argues that morality evolved as a system of mutual obligation, and that EAs misunderstand this.

Although there is some political heterogeneity in EA, it is overwhelmed by progressives, and the main public recommendations are all progressive causes. Moral progress is a tricky concept: for example, the French Revolution is often considered moral progress, but the pictures paint another story.

On open borders, economic analyses like Roodman's are just too narrow. They do not take into account all of the externalities, such as crime and changes to cultural institutions. OpenBorders.info addresses many of the objections; it does a good job of summarizing some of the anti-open-borders arguments but often fails to refute them, yet this lack of refutation doesn't translate into any update of their general stance on immigration.

If humans are interchangeable homo economicus, then open borders would be an economic and perhaps moral imperative. If indeed human groups are significantly different, such as in crime rates, then it throws a substantial wrench into open borders. If the safety of open borders is in question, then it is a risky experiment.

Some of the early indicators are scary, like the Rotherham Scandal. There are reports of similar coverups in other areas, and economic analyses do not capture the harms to these thousands of children. High-crime areas where the police have trouble enforcing rule of law are well documented in Europe: they are called "no-go zones" or "sensitive urban zones" ("no-go zone" is controversial because technically you can go there, but would you want to go to this zone, especially if you were Jewish?). Britain literally has Sharia Patrols harassing gay people and women.

These are just the tip of the iceberg of what is happening with current levels of immigration. Just imagine what happens with fully open borders. I really don't think its advocates have grappled with this graph, and what it means for Europe under open borders. No matter how generous Europe was, its institutions would never be able to handle the wave of immigrants, and open borders advocates are seriously kidding themselves if they don't see that Europe would turn into South Africa mixed with Syria, and the US would turn into Brazil. And then who would send aid to Africa?

Rule of law is slowly breaking down in the West, and elite Westerners are sitting in their filter bubbles fiddling while Rome burns. I'm not telling you to accept this scenario as likely; you would need to go do your own research at the object-level. But with even a small risk that this scenario is possible, it's very significant for future human welfare.

Do you have ideas for people or professions the movement would benefit from and strategies for drawing them in and making them feel welcome?

I'll think about it. I think some of the sources I've cited start answering that question: finding people who are knowledgeable about the giant space of stuff that the media and academia is sweeping under the carpet for political reasons.

Comment author: Telofy 12 July 2015 09:08:02AM 1 point [-]

As someone said in another comment there are the core tenets of EA, and there is your median EA. Since you only seem to have quibbles with the latter, I’ll address some of those, but I don’t feel like accepting or rejecting them is particularly important for being an EA in the context of the current form of the movement. We love discussing and challenging our views. Then again I think I so happen to agree with many median EA views.

which values people based on their contributions, not just their needs

VoiceOfRa put very concisely what I think is a median EA view here, but the comment is so deeply nested that I’m afraid it might get buried: “Even if he values human lives terminally, a utilitarian should assign unequal instrumental value to different human lives and make decision based on the combination of both.”

I don’t think EAs do a very good job of distinguishing their moral intuitions from good philosophical arguments

I think this has been mentioned in the comments but not very directly. The median EA view may be not to bother with philosophy at all because the branches that still call themselves philosophy haven’t managed to come to a consensus on central issues over centuries so that there is little hope for the individual EA to achieve that.

However when I talk to EAs who do have a background in philosophy, I find that a lot of them are metaethical antirealists. Lukas Gloor, who also posted in this thread, has recently convinced me that antirealism, though admittedly unintuitive to me, is the more parsimonious view and thus the view under which I operate now. Under antirealism moral intuitions, or some core ones anyway, are all we have, so that there can be no philosophical arguments (and thus no good or bad ones) for them.

Even if this is not a median EA view, I would argue that most EAs act in accordance with it just out of concern for the cost-effectiveness of their movement-building work. It is not cost-effective to try to convince everyone of the most unintuitive inferences from one’s own moral system. However, among the things that are important to the individual EA, there are likely many that are very uncontroversial in most of society, and focusing on those views in one’s “evangelical” EA work is much more cost-effective.

Betting on a particular moral philosophy with a percentage of your income shows an immense amount of confidence, and extraordinary claims require extraordinary evidence.

From my moral vantage point, the alternative (I’ll consider a different counterfactual in a moment) is that I keep the money to spend on myself, where its marginal positive impact on my happiness is easily two or three orders of magnitude lower and my uncertainty over what will make me happy only slightly lower than with some top charities. That alternative would be the much more extraordinary claim.

You could break that up and note that in the end I’m not deciding to just “donate effectively,” but that I’ll decide on a very specific intervention and charity to donate to, for example Animal Equality, making my decision much shakier again. But I’d also have to make similarly specific decisions, probably only slightly less shaky, when trying to spend money on my own happiness.

However, the alternative might also be:

keeping your money in your piggy bank until more obvious opportunities emerge

That’s something the median EA has probably considered a good deal. Even at GiveWell there was a time in 2013 when some of the staff pondered whether it would be better to hold off with their personal donations and donate a year later when they’ve discovered better giving opportunities.

However, several of your arguments seem to stem from uncertainty in the sense of “There is substantial uncertainty, so we should hold off doing X until the uncertainty is reduced.” Trading off these elements in an expected-value framework and choosing the right counterfactuals is probably again a rather personal decision when it comes to investing one’s donation budget, but over time I’ve become less risk-averse and more ready to act under some uncertainty, which has hopefully brought me closer to maximizing the expected utility of my actions. Plus I don’t expect any significant decreases in uncertainty wrt the best giving opportunities in the future that I could wait for. There will hopefully be more with similar or only slightly greater levels of uncertainty, though.

Comment author: Journeyman 12 July 2015 11:01:26PM 4 points [-]

Part of the reason I wrote my critique is that I know that at least some EAs will learn something from it and update their thinking.

VoiceOfRa put very concisely what I think is a median EA view here, but the comment is so deeply nested that I’m afraid it might get buried: “Even if he values human lives terminally, a utilitarian should assign unequal instrumental value to different human lives and make decision based on the combination of both.”

I'll take your word that many EAs also think this way, but I don't really see it affecting the main charitable recommendations. Followed to its logical conclusion, this outlook would result in a lot more concern about the West.

Even if this is not a median EA view, I would argue that most EAs act in accordance with it just out of concern for the cost-effectiveness of their movement-building work. It is not cost-effective to try to convince everyone of the most unintuitive inferences from one’s own moral system.

Well, there is a question about what EA is. Is EA about being effectively altruistic within your existing value system? Or is it also about improving your value system to more effectively embody your terminal values? Is it about questioning even your terminal values to make sure they are effective and altruistic?

Regardless of whether you are an antirealist, not all value systems are created equal. Many people's value systems are hopelessly contradictory, or corrupted by politics. For example, some people claim to support gay people, but they also support unselective immigration from countries with anti-gay attitudes, which will inevitably cause negative externalities for gay people. That's a contradiction.

I just don't think a lot of EAs have thought their value systems through very thoroughly, and their knowledge of history, politics, and object-level social science is low. I think there are a lot of object-level facts about humanity, and events in history or going on right now which EAs don't know about, and which would cause them to update their approach if they knew about it and thought seriously about it.

Look at the argument that EAs make towards ineffective altruists: they know so little about charity and the world that they are hopelessly unable to achieve significant results in their charity. When EAs talk to non-EAs, they advocate that (a) people reflect on their value system and priorities, and (b) they learn about the likely consequences of charities at an object-level. I'm doing the same thing: encouraging EAs to reflect on their value systems, and attain a broader geopolitical and historical context to evaluate their interventions.

However, among the things that are important to the individual EA, there are likely many that are very uncontroversial in most of society and focusing on those views in one’s “evangelical” EA work is much more cost-effective.

What is or isn't controversial in society is more a function of politics than of ethics. Progressive politics is memetically dominant, potentially religiously descended, and falsely presents itself as universal. Imagine what an EA would do in Nazi Germany under the influence of propaganda. How about Soviet Effective Altruists: would they actually do good, or would they say "collectivize faster, comrade"? How do we know we aren't also deluded by present-day politics?

It seems like there should be some basic moral requirement that EAs give their value system a sanity-check instead of just accepting whatever the respectable politics of the time tell them. If indeed politics has a very pervasive influence on people's knowledge and ethics, then giving your value system a sanity-check would require separating out the political component of your worldview. This would require deep knowledge of politics, history, and social science, and I just don't see most EAs or rationalists operating at this level (I'm certainly not: the more I learn, the more I realize I don't know).

The fact that the major EA interventions are so palatable to progressivism suggests that EA is operating with very bounded rationality. If indeed EA is bounded by progressivism, and progressivism is a flawed value system, then there are lots of EA missed opportunities lying around waiting for someone to pick them up.
