When I worked a FAANG research job, my experience was that it was socially punishable to bring up AI alignment research in just about any context, except where it was relevant to the team's immediate mission; for example, robustness at the scale required for medical decisions (a much smaller scale than AGI ruin, but, in the sense of errors being costly, a notably larger one than most deep learning systems in production use at the time).
I find that in some social spaces, Rationality/EA-adjacent ones in particular, it's seen as distracting, rude, and low status to emphasize a hobby horse social justice issue at the expense of whatever else is being discussed. This is straightforward when "whatever else is being discussed" is AI alignment, which the inside view privileges roughly as "more important than everything else, with vague exceptions when the mental health of high-value people who might otherwise do productive work on the topic is at stake."
On a medical research team, I took a little too long to realize that I'd implicitly bought into a shared vision of what's important. We were going to save lives! We weren't going to cure cancer; everyone falls for that trap, aiming too high. We were working on the ground, saving real people, on real timescales. Computer vision could solve the disagreement-among-experts problem in all sorts of medical classification problems, and we were there to fight that fight and win.
So you've gathered a team of AI researchers, some expert, some early-career, to finally take a powerful stab at the alignment problem. A new angle, or more funding, or the right people in the room: whatever belief about comparative advantage you have that inspires hope beyond death with dignity. And you have someone on your team who deeply cares about a complicated social issue you don't understand. Maybe this is their deepest mission, and they see this early-engineer position at your new research org as a stepping stone toward the fairness and accessibility team at Brain that's doing the real work. They do their best to contribute on the team's terms of what's valuable, and they censor themselves constantly, waiting for the right moment to make the pivotal observation that there's not a single cis woman in the room, or that the work we're doing here may be building a future that's even more hostile toward people with developmental disabilities, or that this adversarial training scheme has some alarming implications when you consider that the system could learn race as a feature even if we exclude it from the dataset, or something.
I think this is a fair analogue to my situation, and, I expect, to the situation more broadly among people already doing AI research toward goals other than alignment. It's:
- Distracting: We have something else we're working on, and alignment is a deep question; you could probably push hard enough on me to nerd-snipe me with it if I don't put up barriers.
- Rude: It implies that the work we're doing here, which we all care deeply about (right?), is problematic for reasons well outside our models of who we are and what we're responsible for, and challenging that necessitates a bunch of complicated shadow work.
- Low status: Wait, are you one of those LessWrong people? I bet you're anti-woke and think James Damore shouldn't have been fired, huh? And you're so wound up in your privilege bubble that you think this AGI alarmism is more important than the struggles of real underprivileged people who we know actually exist, here, now? Got it.
I'm being slightly unfair in implying that these are literally interactions I had with real people in the industry. This is more representative of my experiences online and in other spaces with less of a backdrop of professional courtesy. At [FAANG company] these interactions were subtler.
This story is meant to answer your questions 1 and 2. As for question 3 and making a change, I'm bullish on narratives, aesthetics, anthropology, and the like as genuine interventions upstream of AI safety. We're in a social equilibrium where only certain sorts of people can move into AI safety without seriously disrupting the means by which their social needs are met. There are many wonderful people in that set, but it is small relative to the set of people who, if they were convinced to genuinely try, could contribute meaningfully.
I would guess this doesn't qualify for bonus points for being reasonably low-hanging. I come from an odd place, though: sufficiently traumatized by my experiences in AI research that contributing there directly is more or less off limits for me for the time being, yet compelled by AGI ruin narratives and equipped with substantial relevant technical background. So, at least for me, this is the way forward.
A conventional approach might note that inside the LW / AI safety bubble it borders on taboo to discount the existential threat posed by unaligned AI, but that this is almost an inversion of the outside world, even if limited to 25/75 of what LW users might consider "really impressive people."
This is one gateway to one collection of problems associated with spreading awareness of AI alignment, but let's go in a different direction: somewhere more personal.
Fundamentally, it seems a mistake to frame alignment as an AI issue. While unaligned AGI appears to be rapidly approaching and we have good reasons to believe this will probably result in the extinction of our species, there is another, more important alignment problem that underlies, and somewhat parallels, the AI alignment problem. Of course, this larger issue is the alignment problem as faced by humanity at large.
Humans are famously unaligned on many levels: with respect to the self, interpersonally, and micro / macro-socially. No good solution to any tier of this problem has been discovered over thousands of years of inquiry. In the 20th century, humans developed technology useful for acquiring a great deal of information about the universe beyond our world, and "coincidentally" our capacity for concentrated destruction increased by orders of magnitude, to the scale where killing at least a large portion of the species in a short time is plausible. Thus, the question of why we don't see others like us, even though there appears to be ample space, tended to find answers along the lines of intelligent life destroying itself. Of course, this is the result of an alignment "problem."
Dull humans forecast that nuclear arms would end the world, and slightly smarter humans suggested that we might wait for antimatter, nanotech, genetically engineered pathogens, or some other high-impact dangerous technology. As we're seeing now, those problems are difficult. What appears to be less difficult is AGI.
So, even though it's not in the interest of the continuity of the species, humanity can't help but race redundantly at breakneck pace toward this new technological capability, embodying a slightly disguised, concentrated, and lethal version of one of the oldest and most fundamental problems our species has ever faced. That AI alignment is not taken more seriously could be seen as a reflection of "really impressive people" actually not having paid much mind to the alignment problems embedded in and endemic to who we are.
Should one introduce really impressive people to AI alignment? Maybe, but one must remember that magic appears unavailable and that, for various reasons, it is predictably the case that most people, even "really impressive" people, will not consider the problem to be more than an abstract curiosity even with the best presentation. So evangelizing about AI alignment seems most useful as a fulfillment of one's personal and social interests rather than as a tool to increase work on saving the species.
Full disclosure: it's not clear that alignment is a meaningful concept, it's not clear that humans have meaningful or consistent values, it's very much not clear that continuing the human species is a good thing (at any point in our history: past, present, or future) from an S-risk perspective, and it's not clear that humans have any business rationally evaluating the utility of survival and reproduction, as these are goals we're apparently optimized for. So this post is written with correspondingly less motivation to evangelize.