Disclaimer: I endorse the EA movement and direct an EA/Transhumanist organization, www.IERFH.org

We have finally created the first "inside view" critique of EA.

The critique's main worry would please Hofstadter by being self-referential: it is the first of its kind and took too long to emerge, which supposedly indicates that EAs (Effective Altruists) are pretending to try instead of actually trying, or else they would have self-criticized already.

Here I will try to clash head-on with what seems to be the most important point of that critique. For the sake of brevity, mnemonics, and force of argument, this will be the only point I address. In purpose, this is a meta-contrarian apostasy. I'm not sure it is a view I hold, any more than a view I think has to be out there in the open, being thought about and criticized. I am mostly indebted to this comment by Viliam_Bur, which was marinating in my mind while I read Ben Kuhn's apostasy.


Abstract of the original critique

Effective altruism is, to my knowledge, the first time that a substantially useful set of ethics and frameworks to analyze one’s effect on the world has gained a broad enough appeal to resemble a social movement. (I’d say these principles are something like altruism, maximization, egalitarianism, and consequentialism; together they imply many improvements over the social default for trying to do good in the world—earning to give as opposed to doing direct charity work, working in the developing world rather than locally, using evidence and feedback to analyze effectiveness, etc.) Unfortunately, as a movement effective altruism is failing to use these principles to acquire correct nontrivial beliefs about how to improve the world.

By way of clarification, consider a distinction between two senses of the word “trying” I used above. Let’s call them “actually trying” and “pretending to try”. Pretending to try to improve the world is something like responding to social pressure to improve the world by querying your brain for a thing which improves the world, taking the first search result and rolling with it. For example, for a while I thought that I would try to improve the world by developing computerized methods of checking informally-written proofs, thus allowing more scalable teaching of higher math, democratizing education, etc. Coincidentally, computer programming and higher math happened to be the two things that I was best at. This is pretending to try. Actually trying is looking at the things that improve the world, figuring out which one maximizes utility, and then doing that thing. For instance, I now run an effective altruist student organization at Harvard because I realized that even though I’m a comparatively bad leader and don’t enjoy it very much, it’s still very high-impact if I work hard enough at it. This isn’t to say that I’m actually trying yet, but I’ve gotten closer.

Using this distinction between pretending and actually trying, I would summarize a lot of effective altruism as “pretending to actually try”. As a social group, effective altruists have successfully noticed the pretending/actually-trying distinction. But they seem to have stopped there, assuming that knowing the difference between fake trying and actually trying translates into ability to actually try. Empirically, it most certainly doesn’t. A lot of effective altruists still end up satisficing—finding actions that are on their face acceptable under core EA standards and then picking those which seem appealing because of other essentially random factors. This is more likely to converge on good actions than what society does by default, because the principles are better than society’s default principles. Nevertheless, it fails to make much progress over what is directly obvious from the core EA principles. As a result, although “doing effective altruism” feels like truth-seeking, it often ends up being just a more credible way to pretend to try.

Counterargument: Tribes have internal structure, and so should the EA movement.


What follows includes a free reconstruction, containing nearly the whole original, of what I took to be important in Viliam's comment.

Feeling-oriented and outcome-oriented communities

People probably need two kinds of communities -- let's call them the "feelings-oriented community" and the "outcome-oriented community". For many people this division has been "home" and "work" over the centuries, but that has some misleading connotations. A very popular medieval alternative was "church" and "work". Organized large-scale societies have many alternatives that fill these roles to greater or lesser degrees. Indigenous tribes keep the three realms separated: "work" has a time and a place, while rituals, late-afternoon discussions, chants, and so on fulfill the purpose of "church".

A "feelings-oriented community" is a community of people who meet because they enjoy being together and feel safe with each other. The examples are a functional family, a church group, friends meeting in a pub, etc... One of the important properties of feeling oriented communities, that according to Dennett has not yet sunk in the naturalist community is that nothing is a precondition for belonging to the group which feels, or the sacredness taking place. You could spend the rest of your life going to church without becoming a priest, listening to the tribal leaders and shamans talk without saying a word. There are no pre-requisites to become your parent's son, or your sister's brother every time you enter the house.

An "outcome-oriented community" is a community that has an explicit goal, and people genuinely contribute to making that goal happen. The examples are a business company, an NGO, a Toastmasters meetup, an intentional household etc... To become a member of an outcome-oriented community, you have to show that you are willing and able to bring about the goal (either for itself, or in exchange of something valuable). There is some tolerance if you stop doing things well, either by ignorance or, say, bad health. But the tolerance is finite and the group can frown upon, punish, or even expel those who are not clearly helping the goal.  

What are communities good for? What is good for communities?

The important part (to define what kind of group something is) is what really happens inside the members' heads, not what they pretend to do. For example, you could have an NGO with twelve members, where two of them want to have the work done, but the remaining ten only come to socialize. Of course, even those ten will verbally support the explicit goals of the organization, but they will be much more relaxed about timing, care less about verifying the outcomes, etc. For them, the explicit goals are merely a source of identity and a pretext to meet people professing similar values; for them, the community is the real goal. If they had a magic button which would instantly solve the problem, making the organization obviously obsolete, they wouldn't push it. The people who are serious about the goal would love to see it completed as soon as possible, so they can move to some other goals. (I have seen a similar tension in a few organizations, and the usual solution seems to be the serious members forming an "organization within an organization", keeping the other ones around them for social and other purposes.)

As an evolutionary just-so story: we have a tribe composed of many different people, and within the tribe we have a hunters' group containing the best hunters. Members of the tribe are required to follow the norms of the tribe. Hunters must be efficient at their jobs. But hunters don't become a separate tribe; they go hunting for a while, and then return to their original tribe. The tribe membership is for life, or at least for a long time; it provides safety and fulfills the emotional needs. Each hunting expedition is a short-term event; it requires skills and determination. If a hunter breaks his legs, he can no longer be a hunter, but he still remains a member of his tribe. The hunter has descended from feeling-and-work status to feeling-only status, and this is part of the expected cycles: a woman may stop working while having a child, a teenager may decide work is evil and stop working, an existentialist may pause for a year to reflect on the value of life itself in different ways. But throughout, none of them is cast away from the reassuring arms of the "feelings-oriented community".

A healthy double-layered movement

Viliam and I think a healthy way of living should be modeled on two layers: a larger tribe based on shared values (rationality and altruism), and within this tribe a few working groups, both long-term (MIRI, CFAR) and short-term (the organizers of the next meetup). Of course, it could be a few overlapping tribes (the rationalists, the altruists), but the important thing is that you keep your social network even if you stop participating in some specific project. Otherwise we get either cultish pressure (you have to remain hard-working on our project even if you no longer feel so great about it, or you lose your whole social network) or inefficiency (people remain formal members of the project, but lately barely any work gets done, and the more active ones are warned not to rock the boat). Joining or leaving a project should not be motivated or punished socially.

This is the crux of Viliam's argument and of my disagreement with Ben's critique: the Effective Altruist community has grown large enough that it can easily afford to have two kinds of communities inside it: the feelings-oriented EAs, whom Ben accuses (unfairly, in my opinion) of pretending to try to be effective altruists, and the outcome-oriented EAs, who are really trying to be effective altruists.

Now, that is not how he put it in his critique. He used the fact that such a critique had not been written as sufficiently strong indication that the whole movement, a monolithic, single entity, had failed its task of being introspective enough about its failure modes. This is unfair on two counts: someone had to be the first, and the movement seems young enough that this is not a problem; and it is false that the entire movement is a single monolithic entity making wrong and right decisions in a void. The truth is that there are many people in the EA community at different stages of life, and of involvement with the movement. We should account for that and make room for newcomers as well as for ancient sages. EA is not one single entity that made one huge mistake. It is a couple thousand people, whose subgroups are working hard on several distinct things, frequently without communicating, and whose supergoal is reborn every day with the pushes and drifts going on inside the community.

Intentional agents, whether communities or individuals, are not monolithic

Most importantly, if you accept the argument above that Effective Altruism can't be criticized as one single entity, because factually it isn't one, then I ask you to take this intuition pump one step further: each one of us is also not a single monolithic agent. We have good and bad days, and we are made of lots of tiny little agents within, whose goals and purposes are only our own when enough of them coalesce so that our overall behavior goes in a certain direction. Just as you can't criticize EA as a whole for something its subsets haven't done (the fancy philosophers' word for this is the mereological fallacy), you can't claim that a particular individual, as a whole, pretends to try because you've seen him have one or two lazy days, or because he is still addicted to a particular video game. Don't forget the demandingness objection to utilitarianism: if you ask a smoker to stop smoking because it is irrational to smoke, and he believes you, he may end up abandoning rationalism just because a small subset of him was addicted to smoking and couldn't live with that much inconsistency in his self-image. Likewise, if being a utilitarian is infinitely demanding, you lose the utilitarians to "what the hell" effects.

The same goes for Effective Altruists. Ben's post makes the case for really effective altruism too demanding. Not even on the inside are we truly a monolithic entity, or a utility-function optimizer, regardless of how much we may wish we were. My favoured reading of the current state of Effective Altruists is not that they are pretending to really try, but that most people are finding, for themselves, which aspects of their personalities they are willing to bend for altruism, and which they are not. I don't expect, and I don't think anyone should expect, that any single individual will become a perfect altruist. There are parts of us that just won't let go of some things they crave and praise. We don't want to lose the entire community if one individual is not effective enough, and we don't want to lose one individual if a part of him, or a time-slice of him, is not satisfying the canonical expectations of the outcome-oriented community.

Rationalists have already accepted a layered structure

We need to accept, as EAs, what LessWrong as a blog has accepted: there will always be a group that is passive and feeling-oriented, and a group that is outcome-oriented, even if the subject matter of Effective Altruism is outcomes.

For a less sensitive example, consider an average job: you may think of your colleagues as your friends, but if you leave the job, how many of them will you keep in regular contact with? In contrast, a regular church just asks you to come to Sunday prayers and gives you some keywords and a few relatively simple rules. If this level of participation is ideal for you, welcome, brother or sister! And if you want more, feel free to join some higher-commitment group within the church. You choose the level of your participation, and you can change it during your life. For a non-religious example, in a dance group you could just go and dance, or choose to perform in the New Year's presentation, or choose to recruit new dancers, all the way up to being the dance organizer and coordinator.

The current rationalist community has solved this problem to some extent. Your level of participation can range from being a lurker at LW all the way up: from meetup organizer, to CFAR creator, to writing the next HPMOR or its analogue.

Viliam ends his comment by saying: "It would be great to have a LW village, where some people would work on effective altruism, others would work on building artificial intelligence, yet others would develop a rationality curriculum, and some would be too busy with their personal issues to do any of this now... but everyone would know that this is a village where good and sane people live, where cool things happen, and whichever of these good and real goals I will choose to prioritize, it's still a community where I belong. Actually, it would be great to have a village where 5% or 10% of people would be the LW community. Connotatively, it's not about being away from other people, but about being with my people."

The challenge from now on, in my view, is not how to make effective altruists stop pretending, but how to surround effective altruists with welcoming arms even when the subset of them that is active at that moment is not doing the right things. How can we make EA a loving and caring community of people who help each other, so that people feel taken care of enough that they actually have the attentional and emotional resources necessary to really go out there and do the impossible?

Here are some examples of this layered system working in non-religious, non-tribal settings: LessWrong has a karma system to distinguish different functions within the community. It also has meetups, a Study Hall, and strong relations with CFAR and MIRI.

Leverage Research, as a community/house, has active hard-core members, new hires, people in training, and friends and partners of people there, with very different outcomes expected from each.

Transhumanists have people who only self-identify, people who attend events, people who write for H+ magazine, a board of directors, and it goes all the way up to Nick Bostrom, who spends 70 hours a week working on academic content in related topics.

The solution is not just introspection, but the maintenance of a welcoming environment at every layer of effectiveness

The Effective Altruist community does not need to get introspectively even more focused on effectiveness - at least not right now. What it needs is a designed hierarchical structure that lets everyone in and lets everyone transition smoothly between different levels of commitment.

Most people will transition upward, since understanding more makes you more interested, more effective, and so on, in an upward spiral. But people also need to be able to slide down for a bit: to meet their relatives for Thanksgiving, to play Go with their work friends, to dance, to pretend they don't care about animals. To do their thing, the internal thing that has not converted to EA like the rest of them has. This is not only okay, not only tolerable; it is essential for the movement's survival.

But then how can those who are at their very best, healthy, strong, smart, and at the edge of the movement push it forward?

Here is an obvious place not to do it: Open groups on Facebook.

Open Facebook groups are not the place to move it forward. Some people who are recognized as being at the forefront of the movement, like Toby, Will, Holden, Beckstead, Wise, and others, should create an "Advancing Effective Altruism" group on Facebook. Then there will be a place where neither the feeling-oriented nor the outcome-oriented group has blood on its hands for lowering the signal-to-noise ratio of the other.

Now, once we create this hierarchy within the movement (not only the groups, but the mental hierarchy, and the feeling that it is fine to be in a feeling-oriented moment, or to have feeling-oriented experiences), we will also want to increase the chance that people move up the hierarchical ladder: as many as possible, as soon as possible. After all, the higher up you are, by definition, the more likely you are to be generating good outcomes. We have already started doing so. The EA Self-Help (secret) group on Facebook serves this very purpose: it helps altruists when they are feeling down, unproductive, sad, or anything else, really, and we will hear you and embrace you even if you are not being particularly effective or altruistic when you get there. It is the legacy of our deceased friend, Jonatas, to all of us. Because of him, we now have some understanding that people need love and companionship especially when they are down, or we may lose all of their future good moments. The monolithic-individual fallacy is a very costly one to commit. Let us not learn the hard way by losing another member.

Conclusions


I have argued here that the main problem indicated in Ben's writing, that effective altruists are pretending to really try, is not to be viewed in this light. Instead, I argued that the very survival of the Effective Altruist movement may rely on finding a welcoming space for what Viliam_Bur has called the feeling-oriented community, without which many people would leave the movement, experiencing it as too demanding during their bad times, or when it strongly conflicts with a particular subset of themselves they consider important. I advocate for hierarchically separate communities within the movement, allowing those at any particular level of commitment to grow stronger and win.


The three initial measures I suggest for this redesign of the community are:

1) Making all effective altruists aware that the EA Self-Help group exists for anyone who, for any reason, wants help from the community, even for non-EA-related affairs.

2) Creating a closed Facebook group containing only those who are advancing the discussion at its best, for instance those who have written long posts about it on their own blogs, or obvious major figures.

3) Creating a Study Hall equivalent for EAs to increase their feeling of belonging to a large tribe of goal-sharing people, where they can lurk even when they have nothing to say, and just do a few pomodoros.

 

This is my first long piece of writing on Effective Altruism, my first attempt at an apostasy, and my first explicit attempt to be meta-contrarian. I hope I have helped shed some light on the discussion, and that my critique can be taken by all, especially Ben, as oriented toward the same large-scale goal shared by effective altruists around the world. The outlook of effective altruism is still being designed every day by all of us, and I hope my critique can be used, along with Ben's and others', to build a movement that is not only stronger in its individuals' emotions, as I have advocated here, but also a psychologically healthy and functional group, a whole that understands the role of its parts and subdivides accordingly.

19 comments

People dismissing the low-commitment group have the wrong counterfactual. It's incorrect to think "if this person wasn't lurking on LW, they'd actually do something for FAI". More accurate is "if this person wasn't lurking on LW, they wouldn't be interested at all in FAI".

A similar way to dip your toes into EA would be a highly effective way to garner support for EA. Even something as informal as a discussion board/blog would help. Make it obvious how to do more for those convinced to do more, and you're probably doing the movement a favor.

People dismissing the low-commitment group have the wrong counterfactual. It's incorrect to think "if this person wasn't lurking on LW, they'd actually do something for FAI". More accurate is "if this person wasn't lurking on LW, they wouldn't be interested at all in FAI".

This part is not obvious to me. See Robin's longer discussion at Reward or Punish?.

That is, it's certainly more pleasant to be part of a group that works by reward than by punishment. But it's not clear to me it's more effective.

I'm not making a stand on the reward vs punishment debate. I'm generalizing from my personal experience - that is, if I couldn't lurk or make posts on lesswrong, I wouldn't be interested in FAI at all. If it were the case that the minimum amount needed to participate in rationality topics that LW discusses was "read and write academic papers or participate as a volunteer or worker in MIRI", I wouldn't be making any sort of effort along this axis whatsoever.

Plus there's the whole cult-aversion thing - punishing those who buy in to EA but do not make as much effort as you want them to carries a whole host of bad connotations.

First: I agree with your broad point that more segmentation in the EA community would be helpful. I don't think we disagree as much as you think we do, and in fact I would categorize this post as part of the "being introspective about being a movement" that I'm advocating. So perhaps I'm accusing you of failure to be meta-contrarian :P

We need to accept, as EAs, what LessWrong as a blog has accepted: there will always be a group that is passive and feeling-oriented, and a group that is outcome-oriented, even if the subject matter of Effective Altruism is outcomes.

I really appreciate this point and it's something that didn't occur to me. Naively it seems strange to me that people would look to feeling-oriented activity in effective altruism, which is pretty much explicitly about disregarding feeling orientation; but on reflection, seeming strange is not much of a reason why it shouldn't be true, whereas in fact this is obviously happening.

I think you understand this, but under this framework, my objection might be something like: the feeling-oriented people think they're outcome-oriented, or have to signal outcome-orientation in order to fit in and satisfy their need for feeling-oriented interaction. (This is where my epithet of "pretending" comes in; perhaps "signalling trying" is more appropriate.) Having feeling-oriented people signalling outcome-orientation is stopping the outcome-oriented people from pushing forward the EA state of the art, because it adds epistemic inertia (feeling-oriented people have less belief pressure from truth-seeking and more from social satisficing).

it is false that the entire movement is a single monolithic entity making wrong and right decisions in a void

I understand this. However, the apostasy was against associating with EA as a movement. If I say "it's a problem that EA doesn't do X" I mean "it's a problem that nobody within EA has caused X to come about/social norms are not set up to encourage X/etc." For whatever reason, X has failed to happen, and from the point of view of an apostate, I don't really care whether it was because EA monolithically decided not to do X or because every individual decided not to or because there was some failure of incentives going on. That's an implementation detail of the solution, not a feature of the critique.

Each one of us is also not a single monolithic agent. We have good and bad days, and we are made of lots of tiny little agents within, whose goals and purposes are only our own when enough of them coalesce so that our overall behavior goes in a certain direction. Just as you can't criticize EA as a whole for something its subsets haven't done (the fancy philosophers' word for this is the mereological fallacy), you can't claim that a particular individual, as a whole, pretends to try because you've seen him have one or two lazy days, or because he is still addicted to a particular video game.

True. But if you see people making large life decisions that look like they're pretending to try (e.g. satisficing on donations or career choice), this should be a red flag. This isn't the kind of decision you make on one bad day (at least, I hope not).

Ben's post makes the case for really effective altruism too demanding. Not even on the inside are we truly a monolithic entity, or a utility-function optimizer, regardless of how much we may wish we were. My favoured reading of the current state of Effective Altruists is not that they are pretending to really try, but that most people are finding, for themselves, which aspects of their personalities they are willing to bend for altruism, and which they are not.

If people are "finding" this, they are only finding it intuitively--the fact that it's a personal thing is not always rising to the level of conscious awareness. I would be pretty OK with people finding out which aspects they aren't willing to bend and saying e.g. "oh, I satisficed on donations because of analysis paralysis", but instead what comes out is a huge debate that feels like it's truth-seeking, but actually people have a personal stake in it.

The Effective Altruist community does not need to get introspectively even more focused on effectiveness

I agree with what I think you mean, although I'm not quite sure what you mean by "effectiveness" in this case. EA needs to get more introspectively focused on directly being effective. It needs to get more introspectively focused on being a movement that can last a long time, while maintaining (acquiring?) the ability to come to non-trivial true beliefs about what is effective. This largely does not consist of us directly trying to solve the problem of "how do we, the EA movement, become better utilitarians?" but rather sub-questions like "how do we build good dialogue" and "how do we

And last, a couple technical issues (which I don't think affect either of our main points very much):

He used the fact that such a critique had not been written as sufficiently strong indication that the whole movement, a monolithic, single entity, had failed its task of being introspective enough about its failure modes. This is unfair on two counts: someone had to be the first (at some cost to all of us), and the movement seems young enough that this is not a problem; and it is false that the entire movement is a single monolithic entity making wrong and right decisions in a void. The truth is that there are many people in the EA community at different stages of life, and of involvement with the movement. We should account for that and make room for newcomers as well as for ancient sages. EA is not one single entity that made one huge mistake. It is a couple thousand people, whose subgroups are working hard on several distinct things, frequently without communicating, and whose supergoal is reborn every day with the pushes and drifts going on inside the community.

This is wrong: I cited a lot of other evidence that effective altruists were stopping thinking too early, including not working very hard on population ethics, satisficing on cause choice, not caring about historical outside views, not caring about diversity, not making a good case for their edge over e.g. the Gates Foundation, and having an inconsistent attitude towards rigor. The organization of the conclusion made the "no critiques" argument seem more central than I intended (which was my bad), but it's definitely not the crux of the argument (and has been partially rebutted by Will MacAskill on Facebook anyway, where he brought up several instances that this has happened in private).

EDIT: Diego removed the parenthetical remark; the following no longer applies. (How do I strikethrough?)

(at some cost to all of us)

I would prefer that you turn this, which reads to me like a pot-shot, into a substantive criticism or discarded it. On Facebook you said, "...I really don't think the text does good, and I strongly fear some of its possible aftereffects," but you haven't elaborated very much. I'd very much like to hear your objections--you can private message me if you think they're sensitive.

if you see people making large life decisions that look like they're pretending to try (e.g. satisficing on donations or career choice), this should be a red flag.

I don't think this is as bad as it looks. An underrated benefit to pretending to try is that those who pretend to try still often do more good than they would if they didn't pretend at all.

Before I encountered EA, I wanted to be a college professor. After I encountered EA and was convinced by 80K-style career choice, I "pretended to try" (subconsciously, without realizing it) by finding EA arguments for why it was optimal to be a college professor (pay is decent, great opportunity to influence, etc.). Of course, this wasn't really my EA-optimal career path. But it was a whole lot better than it was before I considered EA (because I was aiming to influence when before I was not, because I was planning on donating ~30% of my expected salary when before I was not going to donate anything, etc.). Definitely not EA-optimal, but significantly better.

Additionally, many people are willing to give up some things, but not all things. Once I noticed to myself that I was merely pretending, I thought to myself that maybe I should just be comfortable ignoring EA considerations when it came to careers and make sure I did something I wanted. Noted EA superstar Julia Wise has this kind of career -- she could do much better money-wise, if only she were willing to sacrifice things she isn't willing to give up.

Of course, now I think I am on an EA-optimal career path that doesn't involve pretending (heading towards web development), so things did turn out OK. But only after pretending for a while.

Yes, I noted throughout the post that pretending to actually try gets you farther than following social defaults because it rules out a bunch of ideas that obviously conflict with EA principles. I still think it's quite bad in the sense of adding epistemic inertia.

if you see people making large life decisions that look like they're pretending to try (e.g. satisficing on donations or career choice), this should be a red flag.

This seems to expose a bit of a tension between two possible goals for the EA movement. One of them is "We need more rigor and people to be trying harder!" and the other one is "We need to bring in lots of people who aren't trying quite as hard; we can have a bigger total impact by getting lots of people to do just a little bit." The second one is closer to what e.g. Peter Singer is trying to do, by setting a low baseline threshold of doing good and trying to make it the norm for as many people as possible to do good at that threshold.

Is it actually that bad to have people in the movement who are just doing good because of social pressure? If we make sure that the things we're socially pressuring people to do are actually very good things, then that could be good anyway. Even if they're just satisficing, this could be a net good, as long as we're raising the threshold of what "satisficing" is by a lot.

I guess the potential problem there is that maybe if satisficing is the norm, we'll encourage complacency, and thereby get fewer people who are actually trying really hard instead of just satisficing. Maybe it's just a balancing act.

Having feeling-oriented people signalling outcome-orientation is stopping the outcome-oriented people from pushing forward the EA state of the art, because it adds epistemic inertia (feeling-oriented people have less belief pressure from truth-seeking and more from social satisficing).

I'm not sure I understand you here. Are you saying that because feeling-oriented people will pretty much believe what they are socially pressured to believe, the outcome-oriented people will also stop truth-seeking?

On your last point: there are two equally interesting claims that fit Ben's comment: 1) above a certain proportion, or threshold, feeling-oriented altruists may reinforce/feed only themselves, thus setting the bar on EA competence really, really low (yes, lurker, you are as good as Toby, who donates most of his income and is counterfactually responsible for 80k and GWWC). I addressed this below.
2) Outcome-oriented people may fail to create signal among the noise (if they don't create the AEA Facebook group), or succumb more frequently to drifting back into their feeling-oriented selves, getting hedons for small amounts of utilons.

2) is what I'm more concerned about. I don't really mind if there are a bunch of people being feeling-oriented if they don't damage the outcome-orientation of the outcome-oriented group at all. (But I think that counterfactual is so implausible as to be difficult to imagine in detail.) To the extent that they do, as Maia said, it's a trade-off.

Would you be so kind as to create the Advancing Effective Altruism Facebook group, Ben? (Or add me to it, in case the secret society of Effective Altruists is in its ninth generation and I'm just making a fool of myself here in front of the arcane sages.) I can help you invite people and create a description. I can't administer it, or choose who belongs or not, since I'm too worried about more primeval things, like going to Berkeley, surviving financially, and making sure www.ierfh.org remains active when I leave Brazil despite our not having any funding; and I don't know everyone within EA as, say, Carl does.

I don't think Facebook is a good forum for productive discussion even among more outcome-oriented people. See my post and Brian Tomasik's remarks for why.

Solving the problem of having a good discussion forum is hard and requires more bandwidth than I have right now, though Ozzie Gooen might know about projects heading in that direction. I think continuing to use LW discussion would be preferable to Facebook.

I don't hold the belief that Lesswrong discussion contains only people who are at the cutting edge of EA theory. I don't think you do either. That solution does not apply to a problem that you and I, more than anyone, agree is happening. We do not have an ultimate forum that buzzes top-notch outcome-oriented altruists to think about specific things that others in the same class have thought of.

A moderator in such a community should save all discussions, so they are formal in character and eternalized on a website. But certainly the buzzing effect which Facebook and emails have (and only Facebook and emails have) is a necessary component, as long as the group is restricted to top theorists only. Since no one cares about this more than you and I, I am asking you to do it; I wouldn't even know whom to invite besides those I cited in the post.

If you really think that one of the main problems with EA at the moment is absence of a space for outcome-oriented individuals to rock our world, and communicate without the chaotic blog proliferation that has been going on, I can hardly think your bandwidth would be better invested elsewhere (the same goes for you, dear Effective Altruist reading this post).

I don't hold the belief that Lesswrong discussion contains only people who are at the cutting edge of EA theory. I don't think you do either.

Non sequitur. I don't hold this belief but I nevertheless think that Less Wrong would be better than Facebook for pushing forward the state of EA. Reasons include upvotes, searchability and preservation with less headache for the moderator, threading, etc.

I can hardly think your bandwidth would be better invested elsewhere

You just gave me some great reasons why your own bandwidth is better invested elsewhere. I'm surprised that you can't think of analogous ones that might apply to me right now.

Anyway, I think this is an important problem but not the important problem (as you seem to think I think), and also one that I have quite a large comparative disadvantage in solving correctly compared to other people, and other important problems that I could solve. If no one else decides it's worthy of their time I'll (a) update against it being worthwhile and (b) get around to it when I have free bandwidth.

So that other kinds of minds can comment, I'll try to be brief for now, and suggest we carry on this one-on-one thread in a couple of days, so that others don't feel discouraged from coming up with ideas neither of us has thought of yet.

For the same reason I don't address your technical points. But I praise you for responding promptly and in an uplifting mood.

Signaling wars: History shows that signaling wars between classes are not so bad. The high class, of outcome (in this case, say, Will MacAskill belongs to it), does not need to signal that it belongs to the high class; they are known to belong. The middle class, people who are sort of effective altruists, may want to signal that they belong to the high class. Finally, there are people who are in the feeling-oriented-only class, who are lurking and don't need to signal that much; when they do, it is obvious and not dangerous, like when a 17-year-old decides he has solved quantum physics and writes a paper about it. The world keeps turning. So the main worry would be the middle class trying to signal being more altruistic than they are. My hope is that either they will end up influencing people anyway by their signaling attempt, or else they will, unbeknownst to themselves, fake it till they make it, slowly becoming more altruistic. I mean, it feels good to actually do good, so over time, given a division of labour with asymmetric diffusion of information from the outcome-oriented to the feeling-oriented, I expect the signalling wars not to be a big problem. I do, however, agree that it is a problem, and I invite others to think about it.

Large decisions don't come from the day's mood: By and large, I think they don't (despite those papers about how absurdly small things can get people to subscribe to completely unrelated causes), so I agree with you. What I want to emphasize is that we are composed of many tiny homunculi, not only in the time dimension but across all dimensions. Maybe 70% of someone decided for EA, but the part that didn't wants to be a nature photographer; I think saying that person has failed the EA ethos would be throwing the baby out with the bathwater. Just as it would not be throwing the baby out with the bathwater to leave them out of the "Advancing Effective Altruism" closed Facebook group.

Thanks! Reading this again I realized that it could be connected with another idea, that the outcome-oriented communities within the larger feelings-oriented community need to be explicit. The greater the focus on outcome, the greater the need for explicit membership. Again, an analogy:

Within a church, although there is a definition of who is a believer, and there are some norms for the believers, in reality there are different levels of belief; some minor sins are frequent and mostly ignored in practice, although people are reminded that those are sins. There are some conditions for joining: it's not enough that you are a nice person and we like you; you must also be willing to publicly profess our beliefs. If you refuse to profess them, you are not our member. But if you join, then keep sinning, but continue to profess our beliefs, you can remain a member. Because hypocrisy is a part of human nature, and it has a useful role in the society: even the hypocrites create some moral pressure on the remaining members. Actually, maybe hypocritically professing the beliefs is the most important thing the average members do.

Then we have the professionals or paraprofessionals: priests, theology professors, monks and nuns, public speakers, etc. These are considered the core, the high-status people within the church. Although they are numerically a minority, they speak in the name of the church; an average member is not considered an official speaker of the church. Membership in these groups is verified, publicly. -- You can falsely pretend to be a Catholic, and most people will not notice. But you can't pretend to be a Catholic priest and start giving a public sermon in a Catholic church. You can write a Catholic blog, but you can't write a pastoral letter and have it read in all churches.

Creating an analogy for the Effective Altruism movement would require having a set of official speakers for the whole movement. (At this moment, any organization with costly membership could play the role.) These speakers would have to officially announce a set of basic beliefs; easy enough that any person can understand them without having a specific education. It does not need to be perfect; it needs to be simple, and barely specific enough to describe the movement. Something like: "We want to help other people. We care about the real impact of our help. We try to donate as much as we can, and to choose the charities with the largest impact." The important thing is not being too specific here. Don't mention QALYs or Africa at this phase. Every detail you add would drive a lot of people away. This is step one, where you want to have these beliefs professed as widely as possible. Later, in step two, you can explain to people (who already made these beliefs part of their identity) how specifically these values can be maximized. Even then, there will be a few people who will ignore you. They are still useful for spreading the meme towards the others who may listen to you.

It is okay to have a lot of people who profess EA, but don't do it. They still spread the memes. The important thing is to make it obvious who the real speakers for the movement are. Especially to uncover people who would try to use these memes for their own benefit. When someone says: "I give money to cure sick puppies," that's not a big problem. (It's like a Catholic who masturbates. There may be a sermon about this in the church once in a while, but no one is going to send an inquisition after them or excommunicate them. When people are ashamed to discuss it publicly, even if they continue doing it, the problem is pretty much solved. There are more important battles to pick.) It's just that when someone tries to defend their own charity by saying: "We are the effective charity", and they are not, or when someone tries to redefine the movement, the official speakers have to make it really obvious that those people don't speak for the EA movement.

An important role of the movement is to translate membership in outcome-oriented subgroups to social status. Believers respect the priests, which rewards priests socially for their work, and encourages a small minority of the believers to become new priests in the future. Analogically, if we had thousands of people who profess EA even if most of them don't donate a penny, it is a good thing if one percent of them later decides to donate a lot, and if the one percent receives a huge social reward from the rest of the group for doing so. The lay people are a mechanism for translating work of the experts to status, they contribute to the work indirectly.

For them, the explicit goals are merely a source of identity and a pretext to meet people professing similar values; for them, the community is the real goal. If they had a magic button which would instantly solve the problem, making the organization obviously obsolete, they wouldn't push it. The people who are serious about the goal would love to see it completed as soon as possible, so they can move to some other goals.

Doesn't that mean that these people would tend to undermine the goals of the actually goal-oriented group?

If the goal is difficult and takes a lot of time, all they have to do is to ensure the group advances very very slowly by being generally inefficient, so the goal remains safely in the future. If the goal is not well-defined, they can increase the scope. But if the goal is reasonably clear and becomes easy to reach, I would not be surprised by direct sabotage; with a "good" irrational excuse, of course.

(Connotationally: Increasing the scope of the goal is not a bad thing per se. Sometimes achieving X is good, and achieving Y is even better. The question is whether the group will honestly admit that X has been achieved and now they are moving towards a new goal. Because at that moment, some members might leave the group, if they decide they have some other option Z, which is for them less important than X, but more important than Y. The group can pretend that Y is the same thing as X to prevent sudden changes of membership and changes of strategy; in other words to prevent disruptions of its social life.)

I have some examples in my mind, but as it happens in real life, there were also other factors involved, so the outcome can be explained in different ways. Seems to me that the organization as a whole usually reacts by adopting new goals. It usually has enough time to sense the "danger" of completing the existing goals. But if completing a specific subgoal would change how the organization works internally, that specific goal can be sabotaged. For example, if the subgoal is to make the organization more efficient, which would conflict with existing personal desires of some members.

As a specific example from my life, there was a science fiction magazine with volunteer editors; each of those editors had their own section. The magazine had some readership, but only a fraction of the possible market. It seemed that unless the situation improved, the magazine would not survive economically. Some sci-fi fans reported that the magazine contains a lot of things they don't care about, and ignores some topics they do; which is why they ignore the only sci-fi magazine written in their language, and instead buy magazines from other countries, even if it is more costly and inconvenient. Even worse, some of those few fans that subscribed to the magazine reported they mostly did it out of a kind of patriotic duty to support the only sci-fi magazine in their country; but they also were dissatisfied with how the content was selected.

So the magazine had a new subgoal: research the market, find out what the sci-fi fans in our country care about, and modify the content of the magazine accordingly. Otherwise, the whole magazine is doomed. All editors agreed that this was a critical thing, and then most of them ignored the topic completely. The two or three people who cared created a questionnaire about the contents ("please rate how much you liked the story or report X" "is this the kind of content that makes you buy our magazine?") and suggested sending the questionnaire to our subscribers, the most loyal group of customers, pleading with them to provide the necessary feedback for the magazine. Everyone agreed that this was a good thing, but when the next issue of the magazine was sent to subscribers, the questionnaire was omitted, by mistake. So the questionnaire was updated to reflect the contents of the next issue; and then it was omitted by mistake, again. Suspecting that the mistakes might not be completely accidental, the authors updated the questionnaire to be timeless; questions like "please list 3 stories or reports in this issue that you liked most, and 1 story or report that you liked least" could be inserted in any issue. So if another mistake happened, as indeed it did, it did not mean more work for the authors; the questionnaire was still there, ready and available for any of the following issues. Finally, the questionnaire was sent to the subscribers, we received surprisingly many responses, and then... the responses were lost and no one ever saw them again. Within a year, most editors developed a false memory that the results of the questionnaire reflected that the readers prefer the contents of the magazine exactly as it is now, so the question is settled satisfactorily, and we should not waste time discussing it anymore. And the whole topic became a taboo. Here is the happy ending: The readership dropped, but the magazine later received generous government funding, so it actually does not need readers anymore; ten years later, it still exists.

My interpretation of the whole story is that changing the contents of magazine would mean telling some editors to stop writing, because the customers did not want to read what they wrote, and instead they wanted some topics none of the current editors cared about. The harmony among the editors was more important than the survival of the magazine. Socially, the strategy was a success. At the end, the loyal editors managed to keep their sections as they liked them; and those who have rocked the boat, have gradually left the boat, so they don't matter anymore. The government money removed the only non-social feedback channel from the outside world.

I know lots of people for whom church is an outcome-oriented community. The claim that there is no precondition for joining is specific to some churches: there are, in fact, many churches which make significant demands of their members, and expel members who do not meet those demands.

The contrast between "feelings" communities and "outcome" communities seems wrong to me. Consider a 'safespace': a community where people are allowed to say only things which do not make people of variety X feel unsafe. This is a community defined by exclusion; the tolerance is not merely finite, but possibly explicitly touted as being zero tolerance, and the group can frown upon, punish, or even expel those who violate the norms of the safespace. But to call a 'safespace' an outcome community instead of a feelings community seems ridiculous.

The important part (to define what kind of group something is) is what really happens inside the members' heads, not what they pretend to do.

What? Important to who? Why do we have a feelings-oriented measure here instead of an outcome-oriented measure?

A group where everyone is super serious and really cares about the issue, but they're working at it the wrong way, is less useful than a group where people are only in it for the signalling, but they know the right way to get stuff done easily, and so end up accomplishing more.

my first attempt at an apostasy

*sigh* I really, really hope that this usage of "apostasy" does not become standard in EA. An apostasy is something that obligates your community to shun or kill you. There's a reason Bostrom's post said "Remind yourself before you start that unless you later choose to do so, you will never have to show this text to anyone"
