This is a post I wrote for my blog, EA Lifestyles.

like any good ea, I try to have a scout mindset. I don’t like to lie to myself or to other people. I try to be open to changing my mind.

but that kind of intellectual honesty only works if you don’t get punished for being honest. in order to think clearly about a topic, all of the potential outcomes have to be okay - otherwise you just end up with mental gymnastics trying to come up with the answer you want.

and in some cases - like existential risks from ai - none of the potential outcomes of thinking deeply about it are especially attractive.

if I look into ai safety and come to believe the world’s going to end in ten years, that would be super depressing, especially if I can’t do much to contribute. so that outcome, while tolerable, isn’t especially attractive.

you might think, “but maybe you’ll figure out that ai isn’t really a risk after all! wouldn’t that be reassuring?”

let’s think through what’s going to happen if I investigate ai safety and realize it’s not that important:

  1. a lot of arguments. people in ea want to hear ideas that are different to their own - it’s a credit to their epistemic humility! I encountered this a lot when I first got into ea in 2017. I remember being cornered at a new year’s eve party, being interrogated in detail on my views on ai safety. it convinced me the guy I was talking to cared deeply about being correct about ai, but it wasn’t a lot of fun.
  2. a lot of re-education. after hearing my views, I’m guessing people will want to share their own. at church, when my views deviated from the norm, I would usually get a lot of interested questions followed by the same 2 or 3 recommended readings. I’d expect the same thing here.
  3. a lot of judgment. I’m not a scientist. if I tried to form my own view on ai safety, it might be really stupid. or it might just be kind of weird. either way, there would probably be at least a few people who would think less of me.
  4. and maybe - MAYBE - I could convince other people to shift their resources to something else. if I were right, that would be very positive; if I were wrong, it would be very negative. but as a social sciences major, my chances of being both right and persuasive on ai safety seem astronomically low.

if I wanted the best possible outcome for me personally, I’d just memorize two or three sentences from one of Ajeya’s or Holden’s blog posts and quote them when I’m asked about my views. “I agree with Ajeya that in the coming 15-30 years, the world could plausibly develop ‘transformative AI’: AI powerful enough to bring us into a new, qualitatively different future, via an explosion in science and technology R&D” sounds pretty good, and I think it would impress most of the people I talk to.

so in summary, while forming an inside view on ai might be very altruistic of me, I just can’t bring myself to do it. it would take a long time and it’s hard for me to imagine any good coming from it. the next time someone asks me what I think about ai safety at a new year’s eve party, I plan to blithely respond, “I’ve never really thought about it. Would you like another drink?”

tlevin:

I think it's admirable to say things like "I don't want to [do the thing that this community holds as near-gospel as a good thing to do.]" I also think the community should take it seriously that anyone feels like they're punished for being intellectually honest, and in general I'm sad that it seems like your interactions with EAs/rats about AI have been unpleasant.

That said...I do want to push back on basically everything in this post and encourage you and others in this position to spend some time seeing if you agree or disagree with the AI stuff.

  • Assuming that you think you'd look into it in a reasonable way, then you'd be much more likely to reach a doomy conclusion if it were actually true. If it were true, it would be very much in your interest — altruistically and personally — to believe it. In general, it's just pretty useful to have more information about things that could completely transform your life. If you might have a terminal illness, doesn't it make sense to find out soon so you can act appropriately even if it's totally untreatable?
  • I also think there are many things for non-technical people to do on AI risk! For example, you could start trying to work on the problem, or if you think it's just totally hopeless w/r/t your own work, you could work less hard and save less for retirement so you can spend more time and money on things you value now. 

For the "what if I decide it's not a big deal conclusion":

  • For points #1 through #3, I'm basically just surprised that you don't already experience this with the take "I don't want to learn about or talk about AI", and that it would get worse if your take were "I have a considered view that AI x-risk is low"! To be honest and a little blunt, I do judge people a bit when they have bad reasoning for either high or low levels of x-risk, but I'm pretty sure I judge them a lot more positively when they've made a good-faith effort at figuring it out.
  • For points #3 and #4, idk, Holden, Joe Carlsmith, Rob Long, and possibly I (among others) are all people who have (hopefully) contributed something valuable to the fight against AI risk with social science or humanities backgrounds, so I don't think your background means you wouldn't be persuasive, and it seems incredibly valuable for the community if more people think things through and come to this opinion. The consensus that AI safety is a huge deal currently means we have hundreds of millions of dollars, hundreds of people (many of whom are anxious and/or depressed because of this consensus), and dozens of orgs focused on it. Imagine if this is wrong — we'd be inflicting so much damage!

Assuming that you think you'd look into it in a reasonable way, then you'd be much more likely to reach a doomy conclusion if it were actually true.

This is too optimistic an assumption. On one hand, we have Kirsten's ability to do AI research. On the other hand, we have all the social pressure that Kirsten complains about. You seem to assume that the former is greater than the latter, which may or may not be true (no offense meant).

An analogy with religion would be telling someone to do independent research on the historical truth about Jesus. In theory, that should work. In practice... maybe that person has no special talent for historical research; plus there is always the knowledge in the background that arriving at the incorrect answer would cost them all their current friends anyway (which I hope does not work the same way with EAs, but the people who can't stop talking about the doom now probably won't be able to stop talking about it even if Kirsten tells them "I have done my research, and I disagree").

This is exactly how I feel; thank you for articulating it so well!

My response to both paragraphs is that the relevant counterfactual is "not looking into/talking about AI risks." I claim that there is at least as much social pressure from the community to take AI risk seriously and to talk about it as there is to reach a pessimistic conclusion, and that people are very unlikely to lose "all their current friends" by arriving at an "incorrect" conclusion if their current friends are already fine with the person not having any view at all on AI risks.

Thanks, this is pretty persuasive and worth thinking about (so I will think about it!)

fwiw, I don't think you should feel obligated to talk about or come to an inside view on AI. 

It seems like this post has an implicit point of something like "I feel pressured to have opinions about AI and that sucks", but I'm not entirely sure of your frame on that. 

I definitely support people only thinking much about AI if they want to, and/or if it feels tractably useful and psychologically safe. There's maybe an implicit competing access needs thing of "maybe many EA meetups will have a bunch of people talking about AI, and that makes those meetups feel less friendly", and if your experience is something like that, that sucks and I'm sad about it, but I'm not sure what to change. I do think AI is an important topic for other people to be able to think about and discuss in more detail.

But, like, I support you having a social script like "I don't think much about AI, it's not my specialization and it doesn't feel very tractable for me to think about and I find it fairly [distressing?/intense?/difficult-to-think-about?], I'm focused on [areas you're focused on]." And if someone keeps pressuring you to engage on it, idk, tell them I told them to back off. :P

Thanks very much! I basically agree with you - I'm pretty comfortable telling people I don't have an opinion, but that's unusual in my social circles, so I wanted to write up an explanation for why.

This is a good motivating example for the mental move of learning things without changing your mind. That should be an OK thing to do, and to intend as a serious or even default possibility. Having the option not to change your mind makes changing your mind more available.

That seems perfectly reasonable! Enjoy the sunshine while you still can.

This makes sense, and it's an unusual conclusion.

There is a middle ground short of thinking about it enough to have a real inside view: thinking about it just enough to have a better-than-average opinion. I think that better-than-average opinion would be something like "there's a pretty good chance of AI becoming really dangerous in the not too distant future. We're putting very little effort into making it safe, so it would probably be smarter to spend a lot more effort on that".

I think that's what you'd come to after a little research, because that's where I'm at after a whole bunch of research. The top minds on safety (that is, people who've actually thought about it, not just experts in other domains who run their mouths) disagree on a lot, but they almost universally agree on that much.

Edit: my point there is that you might do a modest amount of good after a very small amount of time invested. And I don't think remaining willfully ignorant is going to make you happier about AI risk. Society at large is increasingly concerned, and we'll only continue to become more concerned as AI has a larger impact year by year. So you're going to be stuck in those conversations anyway, with people pressuring you to be concerned. You might as well know something as be completely ignorant, particularly since it sounds like your current loose belief is that the risk might be very, very high.

Yes, my current personal default is just deferring to what mainstream non-EA/rat AI experts seem to be saying, which seems to be trending towards more concerned. I just prefer not to talk about it most of the time. :)