I second The Mind; it seems close to what you're looking for, as described in your other comment.
Yeah, I'll admit I'm more iffy on the fiction side of this argument; Hollywood isn't really kind to the reality of anything. I actually wasn't aware of any of these movies or shows (except Superintelligence, which I completely forgot about, whoops), so it does seem things are getting better in this regard. Good! I still hold that climate change has a much stronger non-fiction presence, though.
Yeah, I think this gets at a crux for me: I feel intuitively that it would be beneficial for the field if the problem were widely understood to be important. Maybe climate change was a bad example because it's so politically fraught, but then again maybe not; I don't feel equipped to make a strong empirical argument about whether all that political attention has been net beneficial for the problem. I would predict that issues receiving vastly more attention tend to attract many more resources (money, talent, political capital) in a way that's net positive for efforts to solve them, but I admit I'm not very certain of this and would very much like to see data bearing on it.
To respond to your individual points:
The Obama administration did get work on regulating mercury pollution largely outside of public debate and poor work on CO2 pollution.
Good point, though I'd argue there's a much lower technical hurdle to understanding the risks of mercury pollution than those of future AI.
Getting people who care more about status competition into AI safety might harm the ability of the field to be focused on object-level issues instead of focusing on status competition.
Certainly there may be some undesirable people who would be 100% focused on status and contribute nothing to the object-level problem, but I'd also consider those for whom status is only a partial consideration (maybe they're under pressure from family, are politicians, or are researchers using prestige as a heuristic to decide which fields to even pay attention to before judging them on their object-level merits). I'd argue that not every valuable researcher or policy advocate has the luxury or strength of character to completely ignore status, and that AI safety being a field that offers some slack in that regard might serve it well.
I think there's a good chance that people with an intellectual life where they won't hear about AI safety are net harmful to being involved in AI safety.
You're probably right about this. I think the one exception might be children, who tend to have a much narrower view of available fields despite their future potential as researchers. I also still think there may be people of value among those who have heard of AI safety but didn't bother taking a closer look due to its relative obscurity.
While that's true, why do you believe that those people have something useful to contribute to AI safety on net?
Directly? I don't. To me, getting them to understand is more about casting a wider net of awareness to reach those who could make useful contributions, as well as alleviating the status concerns mentioned above.
I’m unsurprised that people who first learned about cryonics from Wikipedia have an unfavourable view of it; the page on the subject takes a fairly negative slant. I vaguely recall something about an editor having it out for the field.
If you’re considering places to move outside of the US, it’s worth knowing that North America is pretty bad when it comes to urban design and car safety. Here’s a video on car crashes in the Netherlands; I recommend this guy’s channel in general for comparisons with that country, which is quite sane relative to the US and Canada: https://youtu.be/Ra_0DgnJ1uQ
I also hear Japan is pretty good at urban design and safe public transport (trains especially).
Have you heard of the conlang Toki Pona? I'm not super familiar with it or its community since I only learned about it recently, but it has just 123 root words, I've heard it claimed that you can learn it in a weekend, and (from my limited perspective) it seems quite popular in the wider conlang community.
Working as the programmer on a game, I would get bug reports from artists playing through it themselves, often including their own hypotheses about what was causing the issue. The issues were all obviously real and required immediate diagnosis, but over time, for bugs with non-obvious causes, I learned to take the artists' "helpful" speculations as indicators of where not to start looking.