Why does Eliezer make abrasive public comments?
I don't want to ruffle any feathers, but this has been bugging me for a while and has now become relevant to a decision, since MIRI is fundraising and is focused on communication instead of research.

I love Eliezer's writing: the insight, the wit, the subversion. Over the years, though, I've seen many comments from him that I found off-putting. Some of them, I've since decided, are probably net positive, and I just happen to be in a subgroup they don't work for. For example, I found Dying with Dignity discouraging, but I saw enough comments saying it had been helpful that I changed my mind and now think it was a net positive. Other comments, however, are really difficult for me to rationalize. I recently saw one on the EA Forum to the effect that EAs who shortened their timelines only after ChatGPT have the intelligence of a houseplant. I don't have any model of social dynamics by which making that statement publicly is plausibly +EV.

When I see these public dunks/brags, I experience cognitive dissonance, because my model of Eliezer is someone who is intelligent, rational, and aiming to use at least his public communications to increase the chance that AI goes well. I'm confident he must have considered this criticism before, and I'd expect him to arrive at a rational policy after consideration. And yet, when I recommend "If Anyone Builds It", people's social opinions of Eliezer affect their willingness to read or consider it. I searched LW, and if this has been discussed before, it is buried in all the other mentions of Eliezer.

My questions are:

1. Does anyone know if there is some strategy here, or some model for why these abrasive statements are actually +EV for AI safety?
2. Does MIRI consider affective impact in its communication strategy?

Phrased differently, are there good reasons to believe that:

1. None of Eliezer's public communication is -EV for AI safety?
2. Financial support of MIRI is likely to produce more
I just tried criticizing my ingroup. Did my blood boil? No. My Scotsmen got truer. Every time I could identify a flawed behavior, it felt inappropriate to include those people in my "real ingroup". Now, if I had a group defined more objectively, by voting record or religious belief or something, then maybe I could force my brain to keep them in my ingroup. But as it stands, my brain flips to "sure, I'm happy to criticize those people giving us a bad name. Look, I'm criticizing my ingroup!"
I tried two other experiments:
1. Think about criticisms toward my ingroup that do make me angry - maybe those are the ones...