[note: I don't consider myself Utilitarian and sometimes apply No True Scotsman to argue that no human can be, but that's mostly trolling and not my intent here. I'm not an EA in any but the most big-tent form (I try to be effective in things I do, and I am somewhat altruistic in many of my preferences). ]
I think Alice is confused about how status and group participation work. Which is fine, we all are - it's insanely complicated. But she's not even aware of how confused she is, and she's committing a huge typical-mind fallacy in telling Bob that he can't use her preferred label "Utilitarian".
I think she's also VERY confused about the sizes and structures of organizations. Neither "the Effective Altruist movement" nor "rationalist community" is a coherent structure in the sense she's talking about. Different sites, group homes, companies, and other specific groups CAN make decisions on who is invited and what behaviors are encouraged or discouraged. If she'd said "Bob, I won't hire you for my lab working on X because you don't seem to be serious about Y", there would be ZERO controversy. This is a useful and clear communication. When she says "I do...
I think the fact that the world where:
I can work extremely hard, doing things I don't particularly like, without burning out, eat only healthy food without binge-eating spirals, honestly enjoy exercise, have only meaningful rest without exhausting my willpower, and generally be fully intellectually and emotionally consistent, completely subjugating my urges to my values...
is called the least convenient possible world - says something interesting about this whole discourse.
Honestly, the world where I'm already a god sounds extremely convenient. And pretending that we are there, demanding that we have to be there, claiming that we could've been there already if only we'd just tried harder, doesn't sound helpful at all. Yes, it's important to try to get there. One step at a time. Check whether it's possible to go faster occasionally, while being nice and careful towards yourself. But as soon as you find yourself actually having a voice in your head being mean to you because you are not as good as you wish to be, it seems that you've failed the nice and careful part.
I'm noticing it's hard to engage with this post because... well, if I observed this in a real conversation, my main hypothesis would be that Alice has a bunch of internal conflict and guilt that she's taking out on Bob, and the conversation is not really about Bob at all. (In particular, the line "That kind of seems like a you problem, not a me problem" seems like a strong indicator of this.)
So maybe I'll just register that both Alice and Bob seem confused in a bunch of ways, and if the point of the post is "here are two different ways you can be confused" then I guess that makes sense, but if the point of the post is "okay, so why is Alice wrong?" then... well, Alice herself doesn't even seem to really know what her position is, since it's constantly shifting throughout the post, so it's hard to answer that (although Holden's "maximization is perilous" post is a good start).
Relatedly: I don't think it's an accident that the first request Alice makes of Bob (donate that money rather than getting takeout tonight) is far more optimized for signalling ingroup status than for actually doing good.
If I were Bob I'd have told her to fuck off long ago and stopped letting some random person berate me for being lazy just like my parents always have. This is basically guilt-tripping, not a beneficial way of approaching any kind of motivation, and it is absolutely guaranteed to produce pushback. But then, I'm probably not your target audience, am I?
Btw just to be clear, I think Said Achmiz explained my reaction better than I, who habitually post short reddit-tier responses, can. My specific issue is that Alice seems to be acting as if it's any of her business what Bob does. It is not. Absolutely nobody likes being told they're not being ethical enough. It's why everyone hates vegans. As someone who doesn't like experiencing such judgmental demands, I would have the kneejerk emotional reaction to want to become less of an EA just to spite her. (I would not of course act on this reaction, but I would start finding EA things to be in an ugh field because they remind me of the distress caused by this interaction.)
Multiple related problems with Alice's behavior (if we treat this as a real conversation):
These aren't merely impolite, they're bad things to do, especially when combined and repeated in rapid succession. It seems like an assault on Bob's ability to orient & think for himself about himself.
Maybe the Effective Altruist movement should accept people like you because they’re a big tent and they’re friendly and welcoming, but the rationalist community should be elitist and only accept people who say tsuyoku naritai...
This is a disturbing claim, although I realize that the author's opinions don't coincide with those of the "Alice" character. Personally, I'm not a utilitarian, nor do I want to be a utilitarian or think that I "should" be a utilitarian[1]. I do consider myself a person who is empathetic, honest and cooperative[2]. I hope this doesn't disqualify me from the rationalist community?
In general, I'm in favor of promoting societal norms which incentivize making the world better: such norms are obviously in everyone's interest. In this sense, I'm very sympathetic to effective altruism. However, these norms should still regard altruism as supererogatory: i.e., it should be rewarded and encouraged, but its lack should not be severely punished. The alternative is much too vulnerable to abuse.
IMO utilitarianism is not even logically coherent, due to paradoxes with infinite ethics and Pascal's mugging.
In the sense of trying to act according to superrationality.
Alice: I think the negative impact of my rudeness is probably smaller than the potential positive impact of getting you to act in line with the values you claim to have.
It seems to me that Bob has a moral obligation to respond in such a way as to ensure that Alice’s claim here is false, i.e. the correct response here is “lol fuck you” (and escalating from there if Alice persists). Alice’s behavior here ought not be incentivized; on the contrary, it should be severely punished. Bob is exhibiting a failure of moral rectitude, or else a failure of will, by not applying said punishment.
Word of God, as the creator of both Alice and Bob: …
Fair enough, but this is new information, not included in the post. So, all responses prior to you posting this explanatory comment can’t have taken it into account. (Perhaps you might make an addendum to the post, with this clarification? It significantly changes the context of the conversation!)
However, there is then the problem that if we assume what you’ve just added to be true, then the depicted conversation is rather odd. Why isn’t Alice focusing on these claims of Bob’s? After all, they’re the real problem! Alice should be saying:
“You are making these-and-such claims, in public, but they’re lies, Bob! Lies! Or, at the very least, deceptions! You’re trying to belong to this community [of EAs / rationalists], but you’re not doing any of the things that we, the existing members, take to be determinative of membership! You claim to be a utilitarian, but you’re clearly not! Words have meanings, Bob! Quit trying to grab status that you’re not entitled to!”
And so on. But Alice only touches these issues in the most casual way, in passing, and skates right past them. She should be hammering Bob on this point! Her behavior seems w...
I think this post raises important points and handles them reasonably well. I am of course celebrating that fact mostly by pointing out disagreements with it.
I wish Alice drew a sharper distinction between Bob being honest about his beliefs, Bob bringing his actions in line with his stated beliefs, and Bob doing what Alice wants. I think pushing people to be honest is prosocial by default (within limits). Pushing people to do what you want is antisocial by default, with occasional exceptions.
And Alice's methods can be bad, even if the goal is good. If I could push a button and have a community only of people on a long-term growth trajectory, I would. But policing this does more harm than good, because it's hard for the police to monitor. Growth doesn't always look like what other people expect, and people need breaks. Demanding that everyone present legible growth on a predictable cycle impedes growth (and pushes people to be dishonest).
My personal take here is that you should be ready to work unsustainably and miserably when the circumstances call for it, but the circumstances very rarely call for it, and those circumstances always include being very time-limited. "I'll just take ...
Alice: Our utility functions differ.
Bob: I also observe this.
Alice: I want you to change to match me: conditional on your utility function being the same as mine, my expected utility would be larger.
Bob: Yes, that follows from you being a utility maximizer.
Bob: I won't change my utility function: conditional on my utility function becoming the same as yours, my expected utility as measured by my current utility function would be lower.
Alice: Yes, that follows from you being a utility maximizer.
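Spelled out as a sketch (the notation $U_A$, $U_B$ for the two utility functions is mine, not the dialogue's):

$$\mathbb{E}[U_A \mid \text{Bob adopts } U_A] > \mathbb{E}[U_A \mid \text{Bob keeps } U_B], \qquad \mathbb{E}[U_B \mid \text{Bob keeps } U_B] > \mathbb{E}[U_B \mid \text{Bob adopts } U_A].$$

Alice's request follows from her maximizing $U_A$, and Bob's refusal follows from him maximizing $U_B$; neither inequality gives the other agent any reason to move, which is the stalemate the exchange dramatizes.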
This is a very good post and nearly all the replies here are illustrating the exact issue that Bob has, which is an inability to engage in the dialectic between these two perspectives without indignation as a defense against guilt.
Most people, including myself, are more Bob than Alice, but I've had a much easier time integrating my inner Alice and engaging with Alices I meet because I rarely, if ever, feel guilt about anything. Strong guilt increases the anticipated costs of positive self-change, and makes people strengthen defense mechanisms that boil dow...
I think if someone wasn’t indignant about Alice’s ideas, but did just disagree with Alice and think she was wrong, we might see lots of comments that look something like: …
The disagreement isn’t with Alice’s ideas, it’s with Alice’s claims to have any right to impose her judgment on people who aren’t interested in hearing it. What you describe here is instead an acceptance of Alice’s premises. I’m pointing out that it’s possible to disagree with those premises entirely.
I agree that “using evidence, building models, remembering that 0 and 1 aren’t probabilities, testing our beliefs against the territory, etc.” are good habits. But they’re habits that it’s good to deploy of your own volition. If someone is trying to pressure you into doing these things—especially someone who, like Alice, quite transparently does not have your best interests in mind, and is acting in the service of ulterior motives, and who (again, like Alice) is deceptively clothing these motives in a guise of trying to help you conform to your own stated values—then the first thing you should do is tell them to fuck off (employing as much or as little tact in this as you deem fit), and only then should you consider whether and what techniques of epistemic rationality to apply to the situation.
It is a foolish, limited, and ultimately doomed sort of rationality, that ignores interpersonal conflicts when figuring out what the world is like, and what to do about it.
I am mostly like Bob (although I don't make up stuff about burnout), but I think calling myself a utilitarian is totally reasonable. By my understanding, utilitarianism is an answer to the question "what is moral behavior?" It doesn't imply that I always want to choose the most moral behavior.
I think the existence of Bob is obviously good. Bob is in, like, the 90th percentile of human moral behavior, and if other people improved their behavior, Bob is also the kind of person who would reciprocally improve his own. If Alice wants to go around personal...
I am genuinely confused why this is on LessWrong instead of the EA Forum. What do you think the distribution of giving looks like in each place, and what do you think the distribution of responses to the Drowning Child argument looks like in each?
I think Bob's answer should probably be:
Look, I care somewhat about improving the world as a whole. But I also care about myself.
And I would recommend you don't go out of your way to antagonize and reject allies with a utility function similar enough to yours that mutual cooperation is easy.
The number of people who are a genuine Alice is rather low.
Also, bear in mind that the human brain has a built in "don't follow that logic off a cliff" circuit. This is the circuit that ensures crazy suicide cults are so rare despite the ...
No human being is a full utilitarian. Expecting them or yourself to be will bring disappointment or guilt.
But helping others can bring great joy and satisfaction.
The answer to Alice's question is obviously yes.
We should work harder in the most convenient world. The premise basically states that Bob would be happier AND do more good. He's an idiot for saying no, except to get bossy, controlling Alice off his back and to keep her from gaslighting him into doing what she wants.
But is this that world? Probably not the least convenient/easiest. Where is it on the spectrum? What will lead to Bob's happiest life? That is the right question for Bob to ask, and it's not trivial to answer.
Alice and Bob sound to me very like the two options of my variant 8 of the red-pill-blue-pill conundrum. We can imagine Alice working as she describes for the whole of a long life, because we can imagine anything. A real Alice, I'd be interested to see in 10 years. I think there are few, very few, who can live like that. If Bob could, he'd be doing it already.
If, in World A, the majority were Alices ... not doing the job they loved (imagine a teacher who thinks education is important, but emotionally dislikes students), unreciprocally giving away some arbitrary % of their earnings, etc...
Is that actually better than World B? A world where the majority are Bobs, successful at their chosen craft, giving away some amount of their earnings but keeping a majority they are comfortable with.
I'm surprised Bob didn't make the obvious rebuttals:
Alice, why aren't you giving away 51% of your earnings? What metho
I think this line of argument works okay until this point.
Alice: ... In the least convenient possible world, where the advice to rest more is equally harmful to the advice to work harder, and most people should totally view themselves as less fundamentally unchangeable, and the movement would have better PR if we were sterner…
Okay. Let's call the initial world Earth-1, with Alice-1 talking to Bob-1. Let's call the least convenient possible world Earth-2. Earth-2 contains Alice-2 and Bob-2. They aren't having this exact conversation, because that's not ...
I came back to this post a year later because I really wanted to grapple with the idea I should be willing to sacrifice more for the cause. Alas, even in a receptive mood I don't think this post does a very good job of advocating for this position. I don't believe this fictional person weighed the evidence and came to a conclusion she is advocating for as best she can: she's clearly suffering from distorted thoughts and applying post-hoc justifications. She's clearly confused about what convenient means (having to slow down to take care of yourself is very...
“Maybe the Effective Altruist movement should accept people like you because they’re a big tent and they’re friendly and welcoming, but the rationalist community should be elitist and only accept people who say tsuyoku naritai - there’s a reason this is on LessWrong and not the EA forum”
As the EA community has become less intense, sometimes I’ve wondered whether there would be value in someone starting an LW or EA adjacent group that’s on the more intense part of the spectrum.
I definitely see risks associated with this (people pushing themselves too hard, fanaticism) and I probably wouldn’t want to be part of it myself, but I imagine that it could be a good fit for some people.
I have to wonder if you are posting this here in order to play Alice to our Bobs, distanced by writing it as a parable.
Hello Firinn,
I can relate to this post, even though I was never part of the EA movement. When I was younger, I did join a climate organization, and also had an account on kiva.org. And I would say there was a lot of guilt and confusion around my actions at that point, while simultaneously trying to do a lot of 'better than' actions.
Your post is very extensive, and as such I find myself engaged by just reading one of the external links and the post itself. Therefore, my comment isn't really a comment to the whole post, but sees the post through o...
I feel like the crux of this discussion is how much we should adjust our behavior to be "less utilitarian", to preserve our utilitarian values.
The expected utility that a person creates could be measured by (utility created by behavior) x (odds that they will actually follow through on that behavior), where the odds of follow-through decrease as the behavior modifications become more drastic, but the utility created if followed through increases.
People are already implicitly taking this into account when evaluating what the optimal amount of radicality in act...
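To make that product concrete, here is a toy sketch in Python. The functional forms (utility linear in radicality, follow-through odds decaying exponentially) are invented for illustration; only the shape of the tradeoff comes from the comment above:

```python
import numpy as np

# Toy model, purely illustrative: more radical behavior changes create
# more utility *if* followed through, but are less likely to stick.
r = np.linspace(0, 10, 1001)             # "radicality" of the behavior change
utility_if_followed = 1 + 2 * r          # assumed: utility grows with radicality
follow_through_odds = np.exp(-0.4 * r)   # assumed: odds of sticking decay with it

expected_utility = utility_if_followed * follow_through_odds

best_r = r[np.argmax(expected_utility)]
print(f"Expected utility peaks at radicality ~ {best_r:.1f}")
# With these made-up curves the peak is interior (around 2.0): neither zero
# commitment nor maximal commitment maximizes expected utility.
```

Under any curves with this shape, the optimum sits strictly between doing nothing and maximal radicality, which is the implicit optimization the comment describes.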
Alice strikes me as the poster child for the old saying about good intentions and roads to hell. Ultimately, I think she ends up causing much more harm via the toxic and negative experience those around her have than any good she can do herself.
So I basically know Alice is right, yet I mostly act like a Bob. I'm probably neither a true rationalist (I am acting on emotions instead of the truth) nor a strong effective altruist. I donate money because it makes me feel good, volunteer mostly for the fuzzies and engage with my local EA group because it's a strong community with amazing and brilliant people.
Yeah, deep down I'm a selfish human. I don't think I'll change that about myself. But EA has still enabled me to have a large positive impact through effective giving, and that's a net positive.
Strong upvote for this post! While I'd caution against linking this sequence to the Effective Altruism forum and movement in general - because I don't think placing explicit and extremely strong moral *obligations* about action makes for a healthy, self-confident or outward looking mass movement - I would definitely encourage Firinn to write more LessWrong posts in this vein.
The LessWrong community should be very enthusiastic about more articulate narratives and discussions on exemplary actions motivated toward saving the whole entire world! Posts di...
I couldn't read this straight. Alice is being an absolute asshole to Bob. This is incredibly off-putting.
I think you could have communicated better if you had tried to make Alice remotely human.
I think I get what you are trying to do with this, but I only got it after reading comments.
Part one of what will hopefully become the aspirant sequence.
Content note: Possibly a difficult read for some people. You are encouraged to just stop reading the post if you are the kind of person who isn’t going to find it useful. Somewhat intended to be read alongside various more-reassuring posts, some of which it links to, as a counterpoint in dialogue with them. Pushes in a direction along a spectrum, and whether this is good for you will depend on where you currently are on that spectrum. Many thanks to Keller and Ozy for insightful and helpful feedback; all remaining errors are my own.
Alice is a rationalist and Effective Altruist who is extremely motivated to work hard and devote her life to positive impact. She switched away from her dream game-dev career to do higher-impact work instead, she spends her weekends volunteering (editing papers), she only eats the most ethical foods, she never tells lies and she gives 50% of her income away. She even works on AI because she abstractly believes it’s the most important cause, even though it doesn’t really emotionally connect with her the way that global health does. (Or maybe she works on animal rights for principled reasons even though she emotionally dislikes animals, or she works on global health even though she finds AI more fascinating; you can pick whichever version feels more challenging to you.)
Bob is interested in Effective Altruism, but Alice honestly makes him a little nervous. He feels he has some sort of moral obligation to make the world better, but he likes to hope that he’s fulfilled that obligation by giving 10% of his income as a well-paid software dev, because he doesn’t really want to have to give up his Netflix-watching weekends. Thinking about AI makes him feel scared and overwhelmed, so he mostly donates to AMF even though he’s vaguely aware that AI might be more important to him. (Or maybe he donates to AI because he feels it’s fascinating, even though he thinks rationally global health might have more positive impact or more evidence behind it - or he gives to animal rights because animals are cute. Up to you.)
Alice: You know, Bob, you claim to really care about improving the world, but you don’t seem to donate as much as you could or to use your time very effectively. Maybe you should donate that money rather than getting takeout tonight?
Bob: Wow, Alice. It’s none of your business what I do with my own money; that’s rude.
Alice: I think the negative impact of my rudeness is probably smaller than the potential positive impact of getting you to act in line with the values you claim to have.
Bob: That doesn’t even seem true. If everyone is rude like you, then the Effective Altruism movement will get a bad reputation, and fewer people will be willing to join. What if I get so upset by your rudeness that I decide not to donate at all?
Alice: That kind of seems like a you problem, not a me problem.
Bob: You’re the one who is being rude.
Alice: I mean, you claim to actually seriously agree with the whole Drowning Child thing. If you would avoid doing any good at all, purely because someone was rude to you, then I think you were probably lying about being convinced of Effective Altruism in the first place, and if you’re lying then it’s my business.
Bob: I’m not lying; I’m just arguing why you shouldn’t say those things in the abstract, to arbitrary people, who could respond badly. Sure, maybe they shouldn’t respond badly, but you can’t force everyone to be rational.
Alice: But I’m not going out and saying this to some abstract arbitrary person. Why shouldn’t you, personally, work harder and donate more?
Bob: I’m protecting my mental health by ensuring that I only commit an amount of money and time which is sustainable for me.
Alice: So you believe that good will actually be maximised by donating exactly the amount of money that will give you warm fuzzies, and no more, and volunteering exactly the amount of time that makes you happy, and no more?
Bob: Absolutely. If I tried to donate more time or money, I’d burn out. Then I’d do even less good. Under this view, I’m actually obligated not to donate any more than I do!
Alice: You’re morally obligated to take the actions that happen to make you maximally happy? Wow, that seems like a really convenient coincidence for you, and that seems like a great reason to really challenge that belief. Isn’t it possible that you could be slightly inconvenienced without significantly increasing your risk of burning out, or that you could do a significant amount more good while only increasing your burn-out risk by an acceptably small amount?
Bob: Who says I’m maximally happy? I’d probably be happier if I gave 0% to charity and bought a faster car, but I’m giving 10%! Nobody is perfect, and 10% is good enough. Surely you should go and criticise some of the people who are giving 0%?
Alice: I criticise them plenty, and that doesn’t mean that I can’t also criticise you; that seems like a deflection. Nobody’s perfect, but some people are coming closer than others. I can’t really define whether you’re maximally happy, but I assume you would feel some guilt about donating 0%, or you’d miss out on some warm fuzzies, or you’d miss out on the various social benefits of being part of the community.
Bob: No, I donate 10% because I want to help others and I genuinely care about positive impact, and ethical obligations, and utilitarian considerations. I just set a lower standard.
Alice: Regardless, I don’t think any of this really addresses my criticism. Donating 10% is perfectly consistent with being a total egoist who just happens to enjoy the warm fuzzies of donating some money to charity. But humans aren’t reflectively consistent, and I think if you were an actual utilitarian, you would probably believe that the ethical amount to give is higher than the amount you inherently personally want to give.
Bob: Sure, if there was a button which magically made me more ethical, and caused me to want to donate 30%, then I’d probably press it because I believe that’s the right thing to do. But the magical button doesn’t exist. I currently want to donate 10%, and I can’t make myself want to donate 30% any more than I can change my natural talents.
Alice: So your claim is that it’s okay to be lazy, or selfish, or hypocritical, because you can’t make yourself be any less of those things?
Bob: No, you’re just being rude again. I’m not lazy about doing my fair share of the dishes. I just think that, when it comes to allocating resources to altruism, you’ll burn out if you push yourself to do more good than you’re naturally inclined to.
Alice: I think if this was your true objection - your crux - then you would have probably put a lot of work into understanding burnout. Some of the hardest-working people have done that work - and never burned out. Instead, you seem to treat it like a magical worst possible outcome, which provides a universal excuse to never do anything that you don't want to do. How good a model do you have of what causes burnout? (I notice that many people think vacations treat burnout, which is probably a sign they haven't looked at the research.) Surely there's not a black-and-white system where working slightly too hard will instantly disable you forever; maybe there's a third option where you do more but you also take some anti-burnout precaution. If I really believed I couldn't do more without risking burnout, and that was the most important factor preventing me from fulfilling my deeply held ethical beliefs, I think I would have a complex model of what sorts of risk factors create what sort of probability of burnout, and whether there's different kinds of burnout or different severity levels, and what I could do to guard against it.
Bob: Well, maybe that’s true. I definitely don’t want to work any harder than I currently do, so I guess I’d be motivated to believe that I’ll burn out if I do, and that could bias my thinking. But it’s still dangerous and rude to go around spouting this kind of rhetoric, because some people might have a lot of scrupulosity, and they could be really harmed by being told they’re bad people unless they work harder.
Alice: Seems like a fake justification. I’m sure some people should reverse any advice they hear, but I’m currently talking to you and I don’t think you have scrupulosity issues.
Bob: Even assuming I don’t have scrupulosity issues, if I overworked myself, I’d be setting a bad example to people who do have scrupulosity issues. I’d be contributing to bad social norms.
Alice: Weird, you don’t seem to think that I’m contributing to bad social norms by existing. Actually I think I’m a good role model for everyone else.
Bob: You’re really arrogant.
Alice: This conversation isn’t about my flaws, and also, I don’t think humility is always a virtue. For instance, you’re humble about how much you can realistically achieve, but since you haven’t really tested the question, I think it’s a vice. I actually think my mental health is pretty good, and the work that I do contributes to my positive mental health; I have a sense of purpose, a sense of camaraderie with other people in the community, I don’t really deal with any guilt because I genuinely think I’m doing the most I can do, and I like it when people look up to me.
Bob: Okay, but I can’t become you. I can only act in accordance with whatever values I really have. I wouldn’t feel really good all the time if I worked hard like you. I’d just be miserable and burn out. I can’t change fundamental facts about my motivational system.
Alice: What if we lived in the least convenient possible world? What if the techniques I use to avoid burnout - like meditating, surrounding myself with people who work similarly hard so that my brain feels it's normal, eating a really healthy diet, coworking or getting support on tasks that I find aversive, practising lots of instrumental rationality techniques, frequently reminding myself that I'm living consistently with my values, avoiding guilt-based motivation, exercising regularly, seeing a therapist proactively to work on my emotional resilience, and all that - would actually completely work for you, and you'd be able to work super hard without burning out at all, and you'd be perfectly capable of changing yourself if you tried?
Bob: Just because they’d work for me, doesn’t mean they’d work for others. This is a potentially harmful sort of thing to talk about, because some fraction of people will hear this advice and overwork themselves and end up with mental health crises, and some people will think you’re a jerk and leave the movement, and some people will be unable to change themselves and will feel really guilty.
Alice: How sure are you that this isn’t also true about the opposite advice? Maybe some people work on a forks model rather than a spoons model, so they actually need to do tasks in order to improve their mental health, but they hear advice telling them to take breaks to avoid burnout - so they sit around being miserable, gaming and scrolling social media, wondering when resting is going to start improving their burnout problems, not realising that they aren’t burned out at all and they’d actually feel better if they worked harder and did rejuvenating tasks and got into a success spiral. Maybe some people are put off from the movement because they don’t think we’re hardcore enough, so they go off to do totally ineffective things like being a monk and taking a vow of silence because that feels more hardcore or real. Maybe the belief that you can’t change fundamental facts about yourself is harmful to some people with mental illnesses who feel like they’ll never be able to become happy or productive. In the least convenient possible world, where the advice to rest more is equally harmful to the advice to work harder, and most people should totally view themselves as less fundamentally unchangeable, and the movement would have better PR if we were sterner… would you work harder then?
Bob: I just kind of don’t really want to work harder.
Alice: I think we’ve arrived at the core of the problem, yes.
Bob: I don’t know what the point of this conversation was. You haven’t persuaded me to do anything differently, I don’t think you can persuade me to do anything that I don’t want to do, and you’ve kind of just made me feel bad.
Alice: Maybe I’d like you to stop claiming to be a utilitarian, when you’re totally not - you’re just an egoist who happens to have certain tuistic preferences. I might respect you more if you had the integrity to be honest about it. Maybe I think you’re wrong, and there’s some way to persuade you to be better, and I just haven’t found it yet. (Growth mindset!) Maybe I want an epistemic community that helps me with my reasoning, and calls me out when I’m engaging in bias or motivated stopping, which means I want the kinds of things I’m saying here to be normal and okay to say - otherwise people won’t say them to me. Maybe I just notice that when people make type-1 errors in the working-too-hard-and-burning-out direction they usually get the reassurance they need from the community, and when people make errors in the type-2 not-working-hard-enough direction they don’t really get the callouts they need because it’s considered rude, and I’m just pushing in the direction of editing that social norm. Maybe I’d like you to be honest about this because I’d like to surround myself with a community of people who share my values, so I’d like to be able to filter out people like you - no offence, we can still be friends, it’s just that I feel like I’d find it easier to be motivated and consistent if my brain wasn’t constantly looking at you and reminding me that I totally could have a cushy life like yours if I just stopped living my values.
Bob: Wait, are you claiming that I’m harming you, just by existing in your vague vicinity and not doing the maximum amount of good?
Alice: No, not really, maybe I’m just claiming that we have competing access needs. I mean, I don’t really know what the correct solution is. Maybe the Effective Altruist movement should accept people like you because they’re a big tent and they’re friendly and welcoming, but the rationalist community should be elitist and only accept people who say tsuyoku naritai - there’s a reason this is on LessWrong and not the EA forum. Maybe I’m in the minority and my needs aren’t realistically going to be met, in which case I will shrug and carry on trying to do the best that I can. Or maybe thinking about the potential positive impact on me is just the push you need to be better yourself. Maybe I don’t think you’re harming me, exactly, I just think you’re being rude - and maybe that makes it okay for me to be a little rude, too.
Bob: I want to tap out of this conversation now.