Less Wrong is a community blog devoted to refining the art of human rationality. Please visit our About page for more information.

LessWrong 2.0

90 Vaniver 09 December 2015 06:59PM

Alternate titles: What Comes Next?, LessWrong is Dead, Long Live LessWrong!

You've seen the articles and comments about the decline of LessWrong. Why pay attention to this one? Because this time, I've talked to Nate at MIRI and Matt at Trike Apps about development for LW, and they're willing to make changes and fund them. (I've even found a developer willing to work on the LW codebase.) I've also talked to many of the prominent posters who've left about the decline of LW, and pointed out that the coordination problem could be deliberately solved if everyone decided to come back at once. Everyone that responded expressed displeasure that LW had faded and interest in a coordinated return, and often had some material that they thought they could prepare and have ready.

But before we leap into action, let's review the problem.


The correct response to uncertainty is *not* half-speed

75 AnnaSalamon 15 January 2016 10:55PM

Related to: Half-assing it with everything you've got; Wasted motion; Say it Loud.

Once upon a time (true story), I was on my way to a hotel in a new city.  I knew the hotel was many miles down this long, branchless road.  So I drove for a long while.

After a while, I began to worry I had passed the hotel.



So, instead of proceeding at 60 miles per hour the way I had been, I continued in the same direction for several more minutes at 30 miles per hour, wondering if I should keep going or turn around.

After a while, I realized: I was being silly!  If the hotel was ahead of me, I'd get there fastest if I kept going 60mph.  And if the hotel was behind me, I'd get there fastest by heading at 60 miles per hour in the other direction.  And if I wasn't going to turn around yet -- if my best bet given the uncertainty was to check N more miles of highway first, before I turned around -- then, again, I'd get there fastest by choosing a value of N, speeding along at 60 miles per hour until my odometer said I'd gone N miles, and then turning around and heading at 60 miles per hour in the opposite direction.  

Either way, full speed was best.  My mind had been naively averaging two courses of action -- the thought was something like: "maybe I should go forward, and maybe I should go backward.  So, since I'm uncertain, I should go forward at half-speed!"  But averages don't actually work that way.[1]
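To see why averaging the speeds fails: for any fixed search plan, "drive N more miles, then turn around," total travel time is just total distance divided by speed, so halving the speed doubles the time under every plan. A minimal sketch with made-up numbers:

```python
# Hypothetical numbers: the hotel turns out to be `hotel_behind` miles behind
# the starting point, and the plan is "drive `n` more miles forward, then
# turn around," all at a constant speed `v` (mph).
def search_time(hotel_behind, n, v):
    # n miles forward + n miles back to the start + hotel_behind miles further
    return (n + n + hotel_behind) / v

t_full = search_time(hotel_behind=5, n=10, v=60)  # full speed
t_half = search_time(hotel_behind=5, n=10, v=30)  # half speed
print(t_full, t_half)  # half speed takes exactly twice as long
```

Whatever threshold N you pick, the half-speed version of the same plan is strictly worse; uncertainty should change the plan, not the speed.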

Following this, I started noticing lots of hotels in my life (and, perhaps less tactfully, in my friends' lives).  For example:
  • I wasn't sure if I was a good enough writer to write a given doc myself, or if I should try to outsource it.  So, I sat there kind-of-writing it while also fretting about whether the task was correct.
    • (Solution:  Take a minute out to think through heuristics.  Then, either: (1) write the doc at full speed; or (2) try to outsource it; or (3) write full force for some fixed time period, and then pause and evaluate.)
  • I wasn't sure (back in early 2012) that CFAR was worthwhile.  So, I kind-of worked on it.
  • An old friend came to my door unexpectedly, and I was tempted to hang out with her, but I also thought I should finish my work.  So I kind-of hung out with her while feeling bad and distracted about my work.
  • A friend of mine, when teaching me math, seems to mumble specifically those words that he doesn't expect me to understand (in a sort of compromise between saying them and not saying them)...
  • Duncan reports that novice Parkour students are unable to safely undertake certain sorts of jumps, because they risk aborting the move mid-stream, after the actual last safe stopping point (apparently kind-of-attempting these jumps is more dangerous than either attempting or not attempting them).
  • It is said that start-up founders need to be irrationally certain that their startup will succeed, lest they be unable to do more than kind-of work on it...

That is, it seems to me that often there are two different actions that would make sense under two different models, and we are uncertain which model is true... and so we find ourselves taking an intermediate, half-speed action... even when that action makes no sense under any probabilistic mixture of the two models.

You might try looking out for such examples in your life.

[1] Edited to add: The hotel example has received much nitpicking in the comments.  But: (A) the actual example was legit, I think.  Yes, stopping to think has some legitimacy, but driving slowly for a long time out of uncertainty does not optimize for thinking.  Similarly, it may make sense to drive slowly to stare at the buildings in some contexts... but I was on a very long empty country road, with no buildings anywhere (true historical fact), and also I was not squinting carefully at the scenery.  The thing I needed to do was to execute an efficient search pattern, with a threshold for a future time at which to switch from full speed in one direction to full speed in the other.  Also: (B) consider some of the other examples; "kind of working", "kind of hanging out with my friend", etc. seem to be common behaviors that are mostly not all that useful in the usual case.

Why startup founders have mood swings (and why they may have uses)

46 AnnaSalamon 09 December 2015 06:59PM

(This post was collaboratively written together with Duncan Sabien.)


Startup founders stereotypically experience some pretty serious mood swings.  One day, their product seems destined to be bigger than Google, and the next, it’s a mess of incoherent, unrealistic nonsense that no one in their right mind would ever pay a dime for.  Many of them spend half of their time full of drive and enthusiasm, and the other half crippled by self-doubt, despair, and guilt.  Often this rollercoaster ride goes on for years before the company either finds its feet or goes under.






Well, sure, you might say.  Running a startup is stressful.  Stress comes with mood swings.  


But that’s not really an explanation—it’s like saying stuff falls when you let it go.  There’s something about the “launching a startup” situation that induces these kinds of mood swings in many people, including plenty who would otherwise be entirely stable.



Why CFAR? The view from 2015

44 PeteMichaud 23 December 2015 10:46PM

Follow-up to: 2013 and 2014.

In this post, we:

We are in the middle of our matching fundraiser, so if you’ve been considering donating to CFAR this year, now is an unusually good time.


The art of grieving well

41 Valentine 15 December 2015 07:55PM

[This is one post I've written in an upcoming sequence on what I call "yin". Yin, in short, is the sub-art of giving perception of truth absolutely no resistance as it updates your implicit world-model. Said differently, it's the sub-art of subconsciously seeking out and eliminating ugh fields and also eliminating the inclination to form them in the first place. This is the first piece I wrote, and I think it stands on its own, but it probably won't be the first post in the final sequence. My plan is to flesh out the sequence and then post a guide to yin giving the proper order. I'm posting the originals on my blog, and you can view the original of this post here, but my aim is to post a final sequence here on Less Wrong.]

In this post, I'm going to talk about grief. And sorrow. And the pain of loss.

I imagine this won't be easy for you, my dear reader. And I wish I could say that I'm sorry for that.

…but I'm not.

I think there's a skill to seeing horror clearly. And I think we need to learn how to see horror clearly if we want to end it.

This means that in order to point at the skill, I need to also point at real horror, to show how it works.

So, I'm not sorry that I will make you uncomfortable if I succeed at conveying my thoughts here. I imagine I have to.

Instead, I'm sorry that we live in a universe where this is necessary.

If you Google around, you'll find all kinds of lists of what to say and avoid saying to a grieving person. For reasons I'll aim to make clear later on, I want to focus for a moment on some of the things not to say. Here are a few from one such list:

  • "He is in a better place."
  • "There is a reason for everything."
  • "I know how you feel."
  • "Be strong."

I can easily imagine someone saying things like this with the best of intentions. They see someone they care about who is suffering greatly, and they want to help.

But to the person who has experienced a loss, these are very unpleasant to hear. The discomfort is often pre-verbal and can be difficult to articulate, especially when in so much pain. But a fairly common theme is something like:

"Don't heave your needs on me. I'm too tired and in too much pain to help you."

If you've never experienced agonizing loss, this might seem really confusing at first — which is why it seems tempting to say those things in the first place, I think. But try assuming that the grieving person sees the situation more clearly, and see if you can make sense of this reaction before reading on.

If you look at the bulleted statements above, there's a way of reading them that says "You're suffering. Maybe try this, to stop your suffering." There's an imposition there, telling the grieving person to add more burden to how they are in the moment. In many cases, the implicit request to stop suffering comes from the speaker's discomfort with the griever's pain, so an uncharitable (but sometimes accurate) read of those statements is "I don't like it when you hurt, so stop hurting."

Notice that the person who lost someone doesn't have to think through all this. They just see it, directly, and emotionally respond. They might not even be able to say why others' comments feel like impositions, but there's very little doubt that they do. It's just that social expectations take so much energy, and the grief is already so much to carry, that it's hard not to notice.

There's only energy for what really, actually matters.

And, it turns out, not much matters when you hurt that much.

I'd like to suggest that grieving is how we experience the process of a very, very deep part of our psyches becoming familiar with a painful truth. It doesn't happen only when someone dies. For instance, people go through a very similar process when mourning the loss of a romantic relationship, or when struck with an injury or illness that takes away something they hold dear (e.g., quadriplegia). I think we even see smaller versions of it when people break a precious and sentimental object, or when they fail to get a job or into a school they had really hoped for, or even sometimes when getting rid of a piece of clothing they've had for a few years.

In general, I think familiarization looks like tracing over all the facets of the thing in question until we intuitively expect what we find. I'm particularly fond of the example of arriving in a city for the first time: At first all I know is the part of the street right in front of where I'm staying. Then, as I wander around, I start to notice a few places I want to remember: the train station, a nice coffee shop, etc. After a while of exploring different alleyways, I might make a few connections and notice that the coffee shop is actually just around the corner from that nice restaurant I went to on my second night there. Eventually the city (or at least those parts of it) start to feel smaller to me, like the distances between familiar locations are shorter than I had first thought, and the areas I can easily think of now include several blocks rather than just parts of streets.

I'm under the impression that grief is doing a similar kind of rehearsal, but specifically of pain. When we lose someone or something precious to us, it hurts, and we have to practice anticipating the lack of the preciousness where it had been before. We have to familiarize ourselves with the absence.

When I watch myself grieve, I typically don't find myself just thinking "This person is gone." Instead, my grief wants me to call up specific images of recurring events — holding the person while watching a show, texting them a funny picture & getting a smiley back, etc. — and then add to that image a feeling of pain that might say "…and that will never happen again." My mind goes to the feeling of wanting to watch a show with that person and remembering they're not there, or knowing that if I send a text they'll never see it and won't ever respond. My mind seems to want to rehearse the pain that will happen, until it becomes familiar and known and eventually a little smaller.

I think grieving is how we experience the process of changing our emotional sense of what's true to something worse than where we started.

Unfortunately, that can feel on the inside a little like moving to the worse world, rather than recognizing that we're already here.

It looks to me like it's possible to resist grief, at least to some extent. I think people do it all the time. And I think it's an error to do so.

If I'm carrying something really heavy and it slips and drops on my foot, I'm likely to yelp. My initial instinct once I yank my foot free might be to clutch my foot and grit my teeth and swear. But in doing so, even though it seems I'm focusing on the pain, I think it's more accurate to say that I'm distracting myself from the pain. I'm too busy yelling and hopping around to really experience exactly what the pain feels like.

I could instead turn my mind to the pain, and look at it in exquisite detail. Where exactly do I feel it? Is it hot or cold? Is it throbbing or sharp or something else? What exactly is the most aversive aspect of it? This doesn't stop the experience of pain, but it does stop most of my inclination to jump and yell and get mad at myself for dropping the object in the first place.

I think the first three so-called "stages of grief" — denial, anger, and bargaining — are avoidance behaviors. They're attempts to distract oneself from the painful emotional update. Denial is like trying to focus on anything other than the hurt foot, anger is like clutching and yelling and getting mad at the situation, and bargaining is like trying to rush around and bandage the foot and clean up the blood. In each case, there's an attempt to keep the mind preoccupied so that it can't start the process of tracing the pain and letting the agonizing-but-true world come to feel true. It's as though there's a part of the psyche that believes it can prevent the horror from being real by avoiding coming to feel as though it's real.

The above might seem kind of abstract, so let me list a very few examples that I think do in fact apply to resisting grief:

  • After a breakup, someone might refuse to talk about their ex and insist that no one around them bring up their ex. They might even start dating a lot more right away (the "rebound" phenomenon, or dismissive-avoidant dating patterns). They might insist on acting like their ex doesn't exist, for months, and show flashes of intense anger when they find a lost sweater under their bed that had belonged to the ex.
  • While trying to finish a project for a major client (or an important class assignment, if a student), a person might realize that they simply don't have the time they need, and start to panic. They might pour all their time into it, even while knowing on some level that they can't finish on time, but trying desperately anyway as though to avoid looking at the inevitability of their meaningful failure.
  • The homophobia of the stereotypical gay man in denial looks to me like a kind of distraction. The painful truth for him here is that he is something he thinks it is wrong to be, so either his morals or his sense of who he is must die a little. Both are agonizing, too much for him to handle, so instead he clutches his metaphorical foot and screams.

In every case, the part of the psyche driving the behavior seems to think that it can hold the horror at bay by preventing the emotional update that the horror is real. The problem is, success requires severely distorting your ability to see what is real, and also your desire to see what's real. This is a cognitive black hole — what I sometimes call a "metacognitive blindspot" — from which it is enormously difficult to return.

This means that if we want to see reality clearly, we have to develop some kind of skill that lets us grieve well — without resistance, without flinching, without screaming to the sky with declarations of war as a distraction from our pain.

We have to be willing to look directly and unwaveringly at horror.

In 2014, my marriage died.

A friend warned me that I might go through two stages of grief: one for the loss of the relationship, and one for the loss of our hoped-for future together.

She was exactly right.

The second one hit me really abruptly. I had been feeling solemn and glum since the previous night, and while riding public transit I found myself crying. Specific imagined futures — of children, of holidays, of traveling together — would come up, as though raising the parts that hurt the most and saying "See this, and wish it farewell."

The pain was so much. I spent most of that entire week just moving around slowly, staring off into space, mostly not caring about things like email or regular meetings.

Two things really stand out for me from that experience.

First, there were still impulses to flinch away. I wanted to cry about how the pain was too much to bear and curl up in a corner — but I could tell that impulse came from a different place in my psyche than the grief did. It felt easier to do that, like I was trading some of my pain for suffering instead and could avoid being present to my own misery. I had worked enough with grief at that point to intuit that I needed to process or digest the pain, and that this slow process would go even more slowly if I tried not to experience it. It required a choice, every moment, to keep my focus on what hurt rather than on how much it hurt or how unfair things were or any other story that decreased the pain I felt in that moment. And it was tiring to make that decision continuously.

Second, there were some things I did feel were important, even in that state. At the start of this post I referenced how mourners can sometimes see others' motives more plainly than those others can. What I imagine is the same thing gave me a clear sense of how much nonsense I waste my time on — how most emails don't matter, most meetings are pointless, most curriculum design thoughts amount to rearranging deck chairs on the Titanic. I also vividly saw how much nonsense I project about who I am and what my personal story is — including the illusions I would cast on myself. Things like how I thought I needed people to admire me to feel motivated, or how I felt most powerful when championing the idea of ending aging. These stories looked embarrassingly false, and I just didn't have the energy to keep lying to myself about them.

What was left, after tearing away the dross, was simple and plain and beautiful in its nakedness. I felt like I was just me, and there were a very few things that still really mattered. And, even while drained and mourning for the lovely future that would never be, I found myself working on those core things. I could send emails, but they had to matter, and they couldn't be full of blather. They were richly honest and plain and simply directed at making the actually important things happen.

It seems to me that grieving well isn't just a matter of learning to look at horror without flinching. It also lets us see through certain kinds of illusion, where we think things matter but at some level have always known they don't.

I think skillful grief can bring us more into touch with our faculty of seeing the world plainly as we already know it to be.

I think we, as a species, dearly need to learn to see the world clearly.

A humanity that makes global warming a politicized debate, with name-calling and suspicion of data fabrication, is a humanity that does not understand what is at stake.

A world that waits until its baby boomers are doomed to die of aging before taking aging seriously has not understood the scope of the problem and is probably still approaching it with distorted thinking.

A species that has great reason to fear human-level artificial intelligence and does not pause to seriously figure out what if anything is correct to do about it (because "that's silly" or "the Terminator is just fiction") has not understood just how easily it can go horribly wrong.

Each one of these cases is bad enough — but these are just examples of the result of collectively distorted thinking. We will make mistakes this bad, and possibly worse, again and again as long as we are willing to let ourselves turn our awareness away from our own pain. As long as the world feels safer to us than it actually is, we will risk obliterating everything we care about.

There is hope for immense joy in our future. We have conquered darkness before, and I think we can do so again.

But doing so requires that we see the world clearly.

And the world has devastatingly more horror in it than most people seem willing to acknowledge.

The path of clear seeing is agonizing — but that is because of the truth, not because of the path. We are in a kind of hell, and avoiding seeing that won't make it less true.

But maybe, if we see it clearly, we can do something about it.

Grieve well, and awaken.

Why CFAR's Mission?

35 AnnaSalamon 02 January 2016 11:23PM

Related to:

Briefly put, CFAR's mission is to improve the sanity/thinking skill of those who are most likely to actually usefully impact the world.

I'd like to explain what this mission means to me, and why I think a high-quality effort of this sort is essential, possible, and urgent.

I used a Q&A format (with imaginary Q's) to keep things readable; I would also be very glad to Skype 1-on-1 if you'd like something about CFAR to make sense, as would Pete Michaud.  You can schedule a conversation automatically with me or Pete.


Q:  Why not focus exclusively on spreading altruism?  Or else on "raising awareness" for some particular known cause?

Briefly put: because historical roads to hell have been powered in part by good intentions; because the contemporary world seems bottlenecked by its ability to figure out what to do and how to do it (i.e. by ideas/creativity/capacity) more than by folks' willingness to sacrifice; and because rationality skill and epistemic hygiene seem like skills that may distinguish actually useful ideas from ineffective or harmful ones in a way that "good intentions" cannot.

Q:  Even given the above -- why focus extra on sanity, or true beliefs?  Why not focus instead on, say, competence/usefulness as the key determinant of how much do-gooding impact a motivated person can have?  (Also, have you ever met a Less Wronger?  I hear they are annoying and have lots of problems with “akrasia”, even while priding themselves on their high “epistemic” skills; and I know lots of people who seem “less rational” than Less Wrongers on some axes who would nevertheless be more useful in many jobs; is this “epistemic rationality” thingy actually the thing we need for this world-impact thingy?...)

This is an interesting one, IMO.

Basically, it seems to me that epistemic rationality, and skills for forming accurate explicit world-models, become more useful the more ambitious and confusing a problem one is tackling.

For example:


Results of a One-Year Longitudinal Study of CFAR Alumni

34 Unnamed 12 December 2015 04:39AM

By Dan from CFAR


When someone comes to a CFAR workshop, and then goes back home, what is different for them one year later? What changes are there to their life, to how they think, to how they act?

CFAR would like to have an answer to this question (as would many other people). One method that we have been using to gather relevant data is a longitudinal study, comparing participants' survey responses from shortly before their workshop with their survey responses approximately one year later. This post summarizes what we have learned thus far, based on data from 135 people who attended workshops from February 2014 to April 2015 and completed both surveys.

The survey questions can be loosely categorized into four broad areas:

  1. Well-being: On the whole, is the participant's life going better than it was before the workshop?
  2. Personality: Have there been changes on personality dimensions which seem likely to be associated with increased rationality?
  3. Behaviors: Have there been increases in rationality-related skills, habits, or other behavioral tendencies?
  4. Productivity: Is the participant working more effectively at their job or other projects?

We chose to measure these four areas because they represent part of what CFAR hopes that its workshops accomplish, they are areas where many workshop participants would like to see changes, and they are relatively tractable to measure on a survey. There are other areas where CFAR would like to have an effect, including people's epistemics and their impact on the world, which were not a focus of this study.

We relied heavily on existing measures which have been validated and used by psychology researchers, especially in the areas of well-being and personality. These measures typically are not a perfect match for what we care about, but we expected them to be sufficiently correlated with what we care about for them to be worth using.

We found significant increases in variables in all 4 areas. A partial summary:

  • Well-being: increases in happiness and life satisfaction, especially in the work domain (but no significant change in life satisfaction in the social domain)
  • Personality: increases in general self-efficacy, emotional stability, conscientiousness, and extraversion (but no significant change in growth mindset or openness to experience)
  • Behaviors: increased rate of acquisition of useful techniques, emotions experienced as more helpful & less of a hindrance (but no significant change on measures of cognitive biases or useful conversations)
  • Productivity: increases in motivation while working and effective approaches to pursuing projects (but no significant change in income or number of hours worked)

The rest of this post is organized into three main sections. The first section describes our methodology in more detail, including the reasoning behind the longitudinal design and some information on the sample. The second section gives the results of the research, including the variables that showed an effect and the ones that did not; the results are summarized in a table at the end of that section. The third section discusses four major methodological concerns—the use of self-report measures (where respondents might just give the answer that sounds good), attrition (some people who took the pre-survey did not complete the post-survey), other sources of personal growth (people might have improved over time without attending the CFAR workshop), and regression to the mean (people may have changed after the workshop simply because they came to the workshop at an unusually high or low point)—and attempts to evaluate the extent to which these four issues may have influenced the results.
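The fourth concern, regression to the mean, is easy to see in a toy simulation (a hypothetical illustration with made-up numbers, not CFAR's data): give each simulated person a stable true level plus independent survey noise, and the subgroup that happened to score low on the pre-survey will show an apparent improvement on the post-survey even with no intervention at all.

```python
import random
import statistics

random.seed(0)

# Each simulated person: a stable true level plus independent pre/post noise.
true_level = [random.gauss(0, 1) for _ in range(10_000)]
pre = [t + random.gauss(0, 1) for t in true_level]
post = [t + random.gauss(0, 1) for t in true_level]

# Select the people who happened to score below average on the pre-survey...
cutoff = statistics.mean(pre)
selected = [(p, q) for p, q in zip(pre, post) if p < cutoff]

# ...and they "improve" on the post-survey, despite no real change at all.
gain = statistics.mean(q - p for p, q in selected)
print(round(gain, 2))  # a clearly positive average change
```

This is why a study like this has to ask whether participants came to the workshop at an unusual high or low point before attributing pre/post differences to the workshop itself.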


MIRI's 2015 Winter Fundraiser!

28 So8res 09 December 2015 07:00PM

MIRI's Winter Fundraising Drive has begun!

Like our last fundraiser, this will be a non-matching fundraiser with multiple funding targets our donors can choose between to help shape MIRI’s trajectory. The drive will run until December 31st, and will help support MIRI's research efforts aimed at ensuring that smarter-than-human AI systems have a positive impact.


Celebrating All Who Are in Effective Altruism

21 Gleb_Tsipursky 20 January 2016 01:31AM

Elitism and Effective Altruism


Many criticize Effective Altruists as elitist. While this criticism is vastly overblown, it unfortunately does have some basis, not only from the outside looking in but also within the movement itself, where some explicitly argue for elitism.


Within many EA circles, there are status games and competition around doing “as much as we can,” and in many cases, even judging and shaming, usually implicit and unintended but no less real, of those whom we might term softcore EAs. These are people who identify as EAs and donate money and time to effective charities, but otherwise lead regular lives, as opposed to devoting the brunt of their resources to advance human flourishing as do hardcore EAs. To be clear, there is no definitive and hard distinction between softcore and hardcore EAs, but this is a useful heuristic to employ, as long as we keep in mind that softcore and hardcore are more like poles on a spectrum rather than binary categories.


We should help softcore EAs feel proud of what they do, and beware implying that being softcore EA is somehow deficient or simply the start of an inevitable path to being a hardcore EA. This sort of mentality has caused people I know to feel guilty and ashamed, and led to some leaving the EA movement. Remember that we all suffer from survivorship bias based on seeing those who remained, and not those who left - I specifically talked to people who left, and tried to get their takes on why they did so.


I suggest we aim to respect people wherever they are on the softcore/hardcore EA spectrum. I propose that, from a consequentialist perspective, negative attitudes toward softcore EAs are counterproductive for doing the most good for the world.


Why We Need Softcore EAs


Even if the individual contributions of softcore EAs are much less than the contributions of individual hardcore EAs, it’s irrational and anti-consequentialist to fail to acknowledge and celebrate the contributions of softcore EAs, and yet that is the status quo for the EA movement. As in any movement, the majority of EAs are not deeply committed activists, but are normal people for whom EA is a valuable but not primary identity category.


All of us were softcore EAs once - if you are a hardcore EA now, envision yourself back in those shoes. How would you have liked to have been treated? Acknowledged and celebrated or pushed to do more and more and more? How many softcore EAs around us are suffering right now due to the pressure of expectations to ratchet up their contributions?


I get it. I myself am driven by powerful emotional urges to reduce human suffering and increase human flourishing. Besides my full-time job as a professor, which takes ~40 hours per week, I’ve been working ~50-70 hours per week for the last year and a half as the leader of an EA and rationality-themed meta-charity. As all people do, when I don’t pay attention, I fall unthinkingly into the mind projection fallacy, assuming other people think like I do and have my values, as well as my capacity for productivity and impact. I have a knee-jerk pattern as part of my emotional self to identify with and give social status to fellow hardcore EAs, and consider us an in-group, above softcore EAs.


These are natural human tendencies, but destructive ones. From a consequentialist perspective, it weakens our movement and undermines our capacity to build a better world and decrease suffering for current and future humans and other species.


More softcore EAs are vital for the movement itself to succeed. Softcore EAs can help fill talent gaps and donate to effective direct-action charities, having a strong positive impact on the outside world. Within the movement, they support the hardcore EAs emotionally by giving them a sense of belonging, safety, security, and encouragement, which are key for motivation and mental and physical health. Softcore EAs also donate to and volunteer for EA-themed meta-charities, provide advice and feedback, and serve as evangelists for the movement.


Moreover, softcore EAs remind hardcore EAs of the importance of self-care and taking time off for themselves. This is something we hardcore EAs must not ignore! I’m speaking from personal experience here.


Fermi Estimates of Hardcore and Softcore Contributions


If we add up the resources contributed to the movement by softcore EAs, the total will likely be substantially more than the resources contributed by hardcore EAs. For instance, the large majority of those who took the Giving What We Can and The Life You Can Save pledges are softcore EAs, and so are all new entrants to the EA movement, by definition.


To attach some numbers to this claim, let’s do a Fermi Estimate that uses some educated guesses to get at the actual resources each group contributes. Say that for every 100 EAs, there are 5 hardcore EAs and 95 softcore EAs. We can describe softcore EAs as contributing anywhere from 1 to 10 percent of their resources to EA causes (this is the range from The Life You Can Save pledge to the Giving What We Can pledge), so let’s guesstimate around 5 percent. Hardcore EAs we can say give an average of 50% of their resources to the movement. Using the handy Guesstimate app, here is a link to a model that shows softcore EAs contribute 480 resources, and hardcore EAs contribute 250 resources per 100 EAs. Now, these are educated guesses, and you can use the model I put together to put in your own numbers for the number of hardcore and softcore EAs per 100 EAs, and also the percent of their resources contributed. In any case, you will find that softcore EAs contribute a substantial amount of resources.
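For readers who prefer code to a spreadsheet, here is a rough, deterministic sketch of the same estimate. The function and parameter names are my own illustrative choices; note that the Guesstimate model samples from distributions, which is why its softcore total (480) differs slightly from the deterministic point estimate (475).

```python
# Deterministic version of the Fermi estimate: total resources contributed
# per 100 EAs, assuming each person has 100 units of resources to give.
def contributions(n_hardcore=5, n_softcore=95,
                  hardcore_share=0.50, softcore_share=0.05,
                  resources_per_person=100):
    hardcore_total = n_hardcore * hardcore_share * resources_per_person
    softcore_total = n_softcore * softcore_share * resources_per_person
    return hardcore_total, softcore_total

hardcore_total, softcore_total = contributions()
print(hardcore_total, softcore_total)  # 250.0 475.0
```

You can vary the four guesses (counts and giving shares) to test the claim's sensitivity; under most plausible values, the softcore total remains a substantial share of the whole.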


We should also compare the giving of softcore EAs to that of members of the general public, to get a better grasp on the benefits softcore EAs provide to the world. Let's say a typical member of the general public contributes 3.5% of her resources to charitable causes, versus 5% for softcore EAs. Being generous, we can estimate that non-EA giving is one-hundredth as effective as EA giving. Thus, using the same handy app, here is a link to a model that demonstrates the impact of giving by a typical member of the general public, 3.5, vs. the impact of giving by a softcore EA, 500. The impact of giving by a hardcore EA is of course higher still - 5000 as opposed to 500 - but again, there are many more softcore EAs giving resources. You're welcome to plug in your own numbers if my suggested figures don't match your intuitions. Regardless, you can see how high-impact a typical softcore EA is compared to a typical member of the general public.
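The per-person comparison reduces to one formula: impact = (fraction of resources given) × (relative effectiveness of the giving), scaled by the 100 resource units per person assumed above. A minimal sketch, with illustrative names:

```python
# Per-person impact: share of resources given, times how effective that
# giving is relative to baseline, times 100 resource units per person.
def impact(share_given, effectiveness_multiplier, resources=100):
    return share_given * effectiveness_multiplier * resources

public_impact   = impact(0.035, 1)    # typical member of the public: ~3.5
softcore_impact = impact(0.05, 100)   # softcore EA: ~500
hardcore_impact = impact(0.50, 100)   # hardcore EA: ~5000
```

The key driver is the effectiveness multiplier: even with a modest giving share, a softcore EA's estimated impact dwarfs the public baseline.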


Effective Altruism, Mental Health, and Burnout: A Personal Account


About two years ago, in February 2014, my wife and I co-founded our meta-charity. In the summer of that year, she suffered a nervous breakdown due to burnout over running the organization. I had to - or to be accurate, chose to - take over both of our roles in managing the nonprofit, assuming the full burden of leadership.


In the Fall of 2014, I myself started to develop a mental disorder from the strain of doing both my professor job and running the organization, while also taking care of my wife. It started with heightened anxiety, which I did not recognize as something abnormal at the time - after all, with the love of my life recovering very slowly from a nervous breakdown and me running the organization, anxiety seemed natural. I was flinching away from my problem, not willing to recognize it and pretending it was fine, until some volunteers at the meta-charity I run – most of them softcore EAs – pointed it out to me.


I started to pay more attention to this, especially as I began to experience fatigue spells and panic attacks. With the encouragement of these volunteers, who essentially pushed me to get professional help, I began to see a therapist and take medication, which I continue to do to this day. I scaled back on the time I put into the nonprofit, from 70 hours per week on average to 50 hours per week. Well, to be honest, I occasionally put in more than 50, as I’m very emotionally motivated to help the world, but I try to restrain myself. The softcore volunteers at the meta-charity I run know about my workaholism and the danger of burnout for me, and remind me to take care of myself. I also need to remind myself constantly that doing good for the world is a marathon and not a sprint, and that in the long run, I will do much more good by taking it easy on myself.


Celebrating Everyone


As a consequentialist, I am convinced by this analysis, along with my personal experience, that the accomplishments of softcore EAs should be celebrated alongside those of hardcore EAs.


So what can we do? We should publicly showcase the importance of softcore EAs. For example, we can encourage the publication of articles that give softcore EAs the recognition they deserve, alongside articles about those who give a large portion of their earnings and time to charity. We can invite a softcore EA to speak about her/his experiences at the 2016 EA Global. We can publish interviews with softcore EAs. Now, I'm not suggesting that most speakers, articles, or interviews should feature softcore EAs. Overall, my take is that it's appropriate to celebrate individual EAs in proportion to their labors, and as the numbers above show, hardcore EAs individually contribute quite a bit more than softcore EAs. Yet we as a movement need to push against the current norm of not celebrating softcore EAs at all, and these are some specific steps that would help us achieve this goal.


Let’s celebrate all who engage in Effective Altruism. Everyone contributes in their own way. Everyone makes the world a better place.


Acknowledgments: For their feedback on draft versions of this post, I want to thank Linch (Linchuan) Zhang, Hunter Glenn, Denis Drescher, Kathy Forth, Scott Weathers, Jay Quigley, Chris Waterguy (Watkins), Ozzie Gooen, Will Kiely, and Jo Duyvestyn. I bear sole responsibility for any oversights and errors remaining in the post, of course.


A different version of this, without the Fermi estimates, was cross-posted on the EA Forum.



EDIT: added link to post explicitly arguing for EA elitism

Bay Area Solstice 2015

19 MarieLa 17 November 2015 12:34AM

The winter solstice marks the darkest day of the year, a time to reflect on the past, present, and future. For several years and in many cities, Rationalists, Humanists, and Transhumanists have celebrated the solstice as a community, forming bonds to aid our work in the world.

Last year, more than one hundred people in the Bay Area came together to celebrate the Solstice.  This year, we will carry on the tradition. Join us for an evening of song and story in the candlelight as we follow the triumphs and hardships of humanity. 

The event itself is a community performance. There will be approximately two hours of songs and speeches, and a chance to eat and talk before and after. Death will be discussed. The themes are typically Humanist and Transhumanist, with a general audience that tends to be those who have found this site interesting, or care a lot about making our future better. There will be mild social pressure to sing along to songs.


When: December 12 at 7:00 PM - 9:00 PM

Where: Humanist Hall, 390 27th St, Oakland, CA 94612

Get tickets here. Bitcoin donation address: 1ARz9HYD45Midz9uRCA99YxDVnsuYAVPDk  

Sign up to bring food here


Feel free to message me if you'd like to talk about the direction the Solstice is taking, things you like, or things you didn't like. Also, please let me know if you'd like to volunteer.  
