I haven't shared this post with other relevant parties – my experience has been that private discussion of this sort of thing is more paralyzing than helpful. I might change my mind in the resulting discussion, but, I prefer that discussion to be public.

 

I think 80,000 hours should remove OpenAI from its job board, and similar EA job placement services should do the same.

(I personally believe 80k shouldn't advertise Anthropic jobs either, but I think the case for that is somewhat less clear.)

I think OpenAI has demonstrated a level of manipulativeness, recklessness, and failure to prioritize meaningful existential safety work that makes me think EA orgs should not be going out of their way to give them free resources. (It might make sense for some individuals to work there, but this shouldn't be a thing 80k or other orgs are systematically funneling talent into.)

There plausibly should be some kind of path to get back into good standing with the AI Risk community, although it feels difficult to imagine how to navigate that, given how adversarial OpenAI's use of NDAs was, and how difficult that makes it to trust future commitments. 

The things that seem most significant to me:

  • They promised the superalignment team 20% of their compute-at-the-time (which AFAICT wasn't even a large fraction of their compute over the coming years), but didn't provide anywhere close to that, and then disbanded the team when Leike left.
  • Their widespread use of non-disparagement agreements, with non-disclosure clauses, which generally makes it hard to form accurate impressions about what's going on at the organization. 
  • Helen Toner's description of how Sam Altman wasn't forthright with the board. (i.e. "The board was not informed about ChatGPT in advance and learned about ChatGPT on Twitter. Altman failed to inform the board that he owned the OpenAI startup fund despite claiming to be an independent board member, giving false information about the company’s formal safety processes on multiple occasions. And relating to her research paper, that Altman in the paper’s wake started lying to other board members in order to push Toner off the board.")
  • Hearing from multiple ex-OpenAI employees that OpenAI safety culture did not seem on track to handle AGI. Some of these are public (Leike, Kokotajlo), others were in private. 

This is before getting into more open-ended arguments like "it sure looks to me like OpenAI substantially contributed to the world's current AI racing" and "we should generally have a quite high bar for believing that the people running a for-profit entity building transformative AI are doing good, instead of causing vast harm, or, at best, being a successful for-profit company that doesn't especially warrant help from EAs."

I am generally wary of AI labs (e.g. Anthropic and DeepMind), and think EAs should be less optimistic about working at large AI orgs, even in safety roles. But I think OpenAI has demonstrably messed up, badly enough, publicly enough, in enough ways that it feels particularly wrong to me for EA orgs to continue to give them free marketing and resources.

I'm mentioning 80k specifically because their job board seemed like the largest funnel of EA talent, and because it seemed better to pick a specific org than a vague "EA should collectively do something." (see: EA should taboo "EA should"). I do think other orgs that advise people on jobs or give platforms to organizations (e.g. the organization fair at EA Global) should also delist OpenAI.

My overall take is something like: it is probably good to maintain some kind of intellectual/diplomatic/trade relationship with OpenAI, but bad to continue giving them free extra resources, or to treat them as an org in good EA or AI safety standing.

It might make sense for some individuals to work at OpenAI, but doing so in a useful way seems very high skill, and high-context – not something to funnel people towards in a low-context job board.

I also want to clarify: I'm not against 80k continuing to list articles like Working at an AI Lab, which are more about how to make the decisions, and go into a fair bit of nuance. I disagree with that article, but it seems more like "trying to lay out considerations in a helpful way" than "just straightforwardly funneling people into positions at a company." (I do think that article seems out of date and worth revising in light of new information.  I think "OpenAI seems inclined towards safety" now seems demonstrably false, or at least less true in the ways that matter. And this should update you on how true it is for the other labs, or how likely it is to remain true)

FAQ / Appendix

Some considerations and counterarguments which I've thought about, arranged as a hypothetical FAQ.

Q: It seems that, like it or not, OpenAI is a place transformative AI research is likely to happen, and having good people work there is important. 

Isn't it better to have alignment researchers working there, than not? Are you sure you're not running afoul of misguided purity instincts?

I do agree it might be necessary to work with OpenAI, even if they are reckless and negligent. I'd like to live in the world where "don't work with companies causing great harm" was a straightforward rule to follow. But we might live in a messy, complex world where some good people may need to work with harmful companies anyway. 

But: we've now had two waves of alignment people leave OpenAI. The second wave has multiple people explicitly saying things like "quit OpenAI due to losing confidence that it would behave responsibly around the time of AGI."

As for the first wave, my guess is that they were mostly under non-disclosure/non-disparagement agreements, so we can't take their lack of criticism as much evidence.

It looks to me, from the outside, like OpenAI is just not really structured or encultured in a way that makes it that viable for someone on the inside to help improve things much. I don't think it makes sense to continue trying to improve OpenAI's plans, at least until OpenAI has some kind of credible plan (backed up by actions) of actually seriously working on existential safety.

I think it might make sense for some individuals to go work at OpenAI anyway, who have an explicit plan for how to interface with the organizational culture. But I think this is a very high-context, high-skill job. (e.g. skills like "keeping your eye on the AI safety ball", "interfacing well with OpenAI staff/leadership while holding onto your own moral/strategic compass", "knowing how to prioritize research that differentially helps with existential safety, rather than mostly amounting to near-term capabilities work.")

I don't think this is the sort of thing you should just funnel people into on a jobs board.

I think it makes a lot more sense to say "look, you had your opportunity to be taken on faith here, you failed. It is now OpenAI's job to credibly demonstrate that it is worthwhile for good people to join there trying to help, rather than for them to take that on faith."

Q: What about jobs like "security research engineer"?

That seems straightforwardly good for OpenAI to have competent people for, and probably doesn't require a good "Safety Culture" to pay off?

The argument for this seems somewhat plausible. I still personally think it makes sense to fully delist OpenAI positions unless they've made major changes to the org (see below).  

I'm operating here from a cynical, conflict-theory-esque stance: I think OpenAI has exploited the EA community, and it makes sense to engage with them accordingly. I think it makes more sense to say, collectively, "knock it off," and switch to a default of applying pressure. If OpenAI wants to find good security people, that should be their job, not EA organizations'.

But I don't have a slam-dunk argument that this is the right stance to take. For now, I list it as my opinion, but acknowledge there are other worldviews where it's less clear what to do.

Q: What about offering OpenAI a path back to "good standing"?

It seems plausibly important to me to offer some kind of roadmap back to good standing. I do kinda think regulating OpenAI from the outside isn't likely to be sufficient, because it's too hard to specify what actually matters for existential AI safety.

So, it feels important to me not to fully burn bridges. 

But, it seems pretty hard to offer any particular roadmap. We've got three different lines of OpenAI leadership breaking commitments, and being manipulative. So we're long past the point where "mere words" would reassure me.

Things that would reassure me are costly actions – ones that are extremely unlikely in worlds where OpenAI would (intentionally or not) lure more people in and then turn out to just be taking advantage of them for safety-washing / regulatory-capture reasons.

Such actions seem pretty unlikely by now. Most of the examples I can think to spell out seem too likely to be gameable (e.g. if OpenAI were to announce a new Superalignment-equivalent team, or commitments to participate in eval regulations, I would guess they would only do the minimum necessary to look good, rather than a real version of the thing).

An example that would feel pretty compelling: if Sam Altman actually, for real, left the company, that would definitely have me re-evaluating my sense of the company. (This seems like a non-starter, but I'm listing it for completeness.)

I wouldn't put much stock in a Sam Altman apology. If Sam is still around, the most I'd hope for is some kind of realistic, real-talk, arms-length negotiation where it's common knowledge that we can't really trust each other but maybe we can make specific deals.

I'd update somewhat if Greg Brockman and other senior leadership (e.g. people who seem to actually have the respect of the capabilities and product teams), or maybe new board members, made clear statements indicating:

  • they understand how OpenAI messed up (in terms of not keeping commitments, and the manipulativeness of the non-disclosure/non-disparagement agreements),
  • they take actions that hold Sam (and maybe themselves, in some cases) accountable, and
  • they take existential risk seriously on a technical level: they have real cruxes for what would change their current scaling strategy, and this is integrated into org-wide decisionmaking.

This wouldn't make me think "oh, everything's fine now," but it would be enough of an update that I'd need to evaluate what they actually said/did and form some new models.

Q: What if we left up job postings, but with an explicit disclaimer linking to a post saying why people should be skeptical?

This idea just occurred to me as I got to the end of the post. Overall, I think this doesn't make sense given the current state of OpenAI, but thinking about it opens up some flexibility in my mind about what might make sense, in worlds where we get some kind of costly signals or changes in leadership from OpenAI.

(My actual current guess is this sort of disclaimer makes sense for Anthropic and/or DeepMind jobs. This feels like a whole separate post though)


My actual range of guesses here is more cynical than this post focuses on. I'm focused on things that seemed easy to legibly argue for.

I'm not sure who has decisionmaking power at 80k, or most other relevant orgs. I expect many people to feel like I'm still bending over backwards being accommodating to an org we should have lost all faith in. I don't have faith in OpenAI, but I do still worry about escalation spirals and polarization of discourse. 

When dealing with a potentially manipulative adversary, I think it's important to have backbone and boundaries and actual willingness to treat the situation adversarially. But also, it's important to leave room to update or negotiate.

But I wanted to end by explicitly flagging the hypothesis that OpenAI is best modeled as a normal profit-maximizing org, that they basically co-opted EA into being a lukewarm ally they could exploit, and that it'd have made sense to treat OpenAI more adversarially from the start (or at least to be more ready to pivot towards treating them adversarially).

I don't know that that's the right frame, but I think the recent revelations should be an update towards that frame.

72 comments
[-] Ideopunk

(Cross-posted from the EA forum)

Hi, I run the 80,000 Hours job board, thanks for writing this out! 

I agree that OpenAI has demonstrated a significant level of manipulativeness and have lost confidence in them prioritizing existential safety work. However, we don’t conceptualize the board as endorsing organisations. The point of the board is to give job-seekers access to opportunities where they can contribute to solving our top problems or build career capital to do so (as we write in our FAQ). Sometimes these roles are at organisations whose mission I disagree with, because the role nonetheless seems like an opportunity to do good work on a key problem.

For OpenAI in particular, we’ve tightened up our listings since the news stories a month ago, and are now only posting infosec roles and direct safety work – a small percentage of jobs they advertise. See here for the OAI roles we currently list. We used to list roles that seemed more tangentially safety-related, but because of our reduced confidence in OpenAI, we limited the listings further to only roles that are very directly on safety or security work. I still expect these roles to be good opportunities to do impo…

[-] Elizabeth

How does 80k identify actual safety roles, vs. safety-washed capabilities roles? 

From Conor's response on EAForum, it sounds like the answer is "we trust OpenAI to tell us". In light of what we already know (safety team exodus, punitive and hidden NDAs, lack of disclosure to OpenAI's governing board), that level of trust seems completely unjustified to me. 

[-] Buck
I would be shocked if OpenAI employees who took the role with that job description were pushed into doing capabilities research they didn't want to do. (Obviously it's plausible that they'd choose to do capabilities research while they were already there.)
[-] habryka

Huh, this doesn't super match my model. I have heard of people at OpenAI being pressured a lot into making sure their safety work helps with productization. I would be surprised if they end up being pressured working directly on the scaling team, but I wouldn't end up surprised with someone being pressured into doing some better AI censorship in a way that doesn't have any relevance to AI safety and does indeed make OpenAI a lot of money.

[-] Buck
I disagree for the role advertised, I would be surprised by that. (I'd be less surprised if they advised on some post-training stuff that you'd think of as capabilities; I think that the "AI censorship" work is mostly done by a different team that doesn't talk to the superalignment people that much. But idk where the superoversight people have been moved in the org, maybe they'd more naturally talk more now.)
[-] eye96458
Can you clarify what you mean by "completely unjustified"?  For example, if OpenAI says "This role is a safety role.", then in your opinion, what is the probability that the role is a genuine safety role?

I'd define "genuine safety role" as "any qualified person will increase safety faster than capabilities in the role". I put ~0 likelihood that OAI has such a position. The best you could hope for is being a marginal support for a safety-based coup (which has already been attempted, and failed).

There's a different question of "could a strategic person advance net safety by working at OpenAI, more so than any other option?". I believe people like that exist, but they don't need 80k to tell them about OpenAI. 

[-] Buck
Which of the following claims are you making?

  • OpenAI doesn't have any roles doing AI safety research aimed at reducing catastrophic risk from egregious AI misalignment; people who think they're taking such a role will end up assigned to other tasks instead.
  • OpenAI does have roles where people do AI safety research aimed at reducing catastrophic risk from egregious AI misalignment, but all the research done by people in those roles sucks and the roles contribute to OpenAI having a good reputation, so taking those roles is net negative.

I find the first claim pretty implausible. E.g. I think that the recent SAE paper and the recent scalable oversight paper obviously count as an attempt at AI safety research. I think that people who take roles where they expect to work on research like that basically haven't ended up unwillingly shifted to roles on e.g. safety systems, core capabilities research, or product stuff.
[-] Eli Tyre
I'm not Elizabeth or Ray, but there's a third option which I read the comment above to mean, and which I myself find plausible.
[-] Raemon
I'm not Elizabeth and probably wouldn't have worded my thoughts quite the same, but my own position regarding your first bullet point is: "When I see OpenAI list a 'safety' role, I'm like 55% confident that it has much to do with existential safety, and maybe 25% that it produces more existential safety than existential harm." 
[-] Buck
When you say "when I see OpenAI list a 'safety' role", are you talking about roles related to superalignment, or are you talking about all roles that have safety in the name? Obviously OpenAI has many roles that are aimed at various near-term safety stuff, and those might have safety in the name, but this isn't duplicitous in the slightest--the job descriptions (and maybe even the rest of the job titles!) explain it perfectly clearly so it's totally fine. I assume you meant something like "when I see OpenAI list a role that seems to be focused on existential safety, I'm like 55% that it has much to do with existential safety"? In that case, I think your number is too low.
[-] Raemon
I was thinking of things like the Alignment Research Science role. If they talked up "this is a superalignment role", I'd have an estimate higher than 55%. 
[-] Buck
Yeah, I think that this is disambiguated by the description of the team: So my guess is that you would call this an alignment role (except for the possibility that the team disappears because of superalignment-collapse-related drama).
[-] Raemon
Yeah I read those lines, and also "Want to use your engineering skills to push the frontiers of what state-of-the-art language models can accomplish", and remain skeptical. I'm thinking of the way OpenAI tends to equivocate on how they use the word "alignment" (or: they use it consistently, but not in a way that I consider obviously good. Like, I think the people working on RLHF a few years ago probably contributed to ChatGPT being released earlier, which I think was bad*)

*I like the part where the world feels like it's actually starting to respond to AI now, but I think that would have happened later, with more serial-time for various other research to solidify. (I think this is a broader difference in guesses about what research/approaches are good, which I'm not actually very confident about, esp. compared to habryka, but is where I'm currently coming from)
[-] Eli Tyre
Tangent: And with less serial-time for various policy plans to solidify and gain momentum. If you think we're irreparably far behind on the technical research, and advocacy / political action is relatively more promising, you might prefer to trade years of timeline for earlier, more widespread awareness of the importance of AI, and a relatively long period of people pushing on policy plans.
[-] Elizabeth
Good question. My revised belief is that OpenAI will not sufficiently slow down production in order to boost safety. It may still produce theoretical safety work that is useful to others, and to itself if the changes are cheap to implement.  I do also expect many people assigned to safety to end up doing more work on capabilities, because the distinction is not always obvious and they will have so many reasons to err in the direction of agreeing with their boss's instructions. 
[-] Buck
Ok but I feel like if a job mostly involves researching x-risk-motivated safety techniques and then publishing them, it's very reasonable to call it an x-risk-safety research job, regardless of how likely the organization where you work is to eventually adopt your research when it builds dangerous AI.
[-] eye96458
"~0 likelihood" means that you are nearly certain that OAI does not have such a position (i.e., your usage of "likelihood" has the same meaning as "degree of certainty" or "strength of belief")? I'm being pedantic because I'm not a probability expert and AFAIK "likelihood" has some technical usage in probability.

If you're up for answering more questions like this, then how likely do you believe it is that OAI has a position where at least 90% of people who are both (A) qualified skill-wise (e.g., ML and interpretability experts), and (B) believe that AIXR is a serious problem, would increase safety faster than capabilities in that position?

This is a good point, and you mentioning it updates me towards believing that you are more motivated by (1) finding out what's true regarding AIXR and (2) reducing AIXR, than something like (3) shit-talking OAI.

I asked a related question a few months ago, i.e., if one becomes doom-pilled while working as an executive at an AI lab and one strongly values survival, what should one do?
[-] Elizabeth
The cheap answer here is 0, because I don't think there is any position where that level of skill and belief in AIXR has a 90% chance of increasing net safety. Ability to do meaningful work in this field is rarer than that.

So the real question is how does OpenAI compare to other possibilities? To be specific, let's say being an LTFF-funded solo researcher, academia, and working at Anthropic.

Working at OpenAI seems much more likely to boost capabilities than solo research and probably academia. Some of that is because they're both less likely to do anything. But that's because they face OOM less pressure to produce anything, which is an advantage in this case. LTFF is not a pressure- or fad-free zone, but they have nothing near the leverage of paying someone millions of dollars, or providing tens of hours each week surrounded by people who are also paid millions of dollars to believe they're doing safe work.

I feel less certain about Anthropic. It doesn't have any of the terrible signs OpenAI did (like the repeated safety exoduses, the board coup, and clawbacks on employee equity), but we didn't know about most of those a year ago. If we're talking about a generic skilled and concerned person, probably the most valuable thing they can do is support someone with good research vision. My impression is that these people are more abundant at Anthropic than OpenAI, especially after the latest exodus, but I could be wrong. This isn't a crux for me for the 80k board[1] but it is a crux for how much good could be done in the role.

Some additional bits of my model:

  • I doubt OpenAI is going to tell a dedicated safetyist they're off the safety team and on direct capabilities. But the distinction is not always obvious, and employees will be very motivated to not fight OpenAI on marginal cases.
  • You know those people who stand too close, so you back away, and then they move closer? Your choices in that situation are to steel yourself for an intense battle, accept t
[-] Buck
IMO "this role is a safety role" isn't that strong evidence of the role involving research aimed at catastrophic AI risk, but the rest of the description of a particular role probably does provide pretty strong evidence.
[-] Eli Tyre
Hm. Can I request tabooing the phrase "genuine safety role" in favor of more detailed description of the work that's done? There's broad disagreement about which kinds of research are (or should count as) "AI safety", and what's required for that to succeed. 
[-] eye96458
I suspect that would provide some value, but did you mean to respond to @Elizabeth? I was just trying to use the term as a synonym for "actual safety role" as @Elizabeth used it in her original comment. This part of your comment seems accurate to me, but I'm not a domain expert.

❝ However, we don’t conceptualize the board as endorsing organisations.

It doesn't matter how you conceptualize it. It matters how it looks, and it looks like an endorsement. This is not an optics concern: the problem is that people who trust you will see this and think OpenAI is a good place to work.

Non-infosec safety work

  • These still seem like potentially very strong roles with the opportunity to do very important work. We think it’s still good for the world if talented people work in roles like this! 

How can you still think this after the whole safety team quit? They clearly did not think these roles were any good for doing safety work.

Edit: I was wrong about the whole team quitting. But given everything, I still stand by the claim that these jobs should not be listed without at least a warning sign.

 

As an AI safety community builder, I'm considering boycotting 80k (i.e. not linking to you and recommending people not to trust your advice) until you at least put warning labels on your job board. And I'll recommend other community builders do the same.

I do think 80k means well, but I just can't recommend any org with this level of lack of judgment. Sorry.

As an AI safety community builder, I'm considering boycotting 80k (i.e. not linking to you and recommending people not to trust your advice) until you at least put warning labels on your job board.

Hm. I have mixed feelings about this. I'm not sure where I land overall.

I do think it is completely appropriate for Linda to recommend whichever resources she feels are appropriate, and if her integrity calls her, to boycott resources that otherwise have (in her estimation) good content.

I feel a little sad that I, at least, perceived that sentence as an escalation. There's a version of this conversation where we all discuss considerations, in public and in private, and 80k is a participant in that conversation. There's a different version where 80k immediately feels the need to be on the defensive, in something like PR mode, or where the outcome is mostly determined by the equilibrium of social power rather than anything else. That seems overall worse, and I'm afraid that sentences like the quoted one push in that direction.

On the other hand, I also feel some resonance with the escalation. I think "we", broadly construed, have been far too warm with OpenAI, and it seems maybe good that there's common knowledge building that a lot of people think that was a mistake, and momentum building towards doing something different going forward, including people "voting with their voices", instead of being live-and-let-live to the point of having no real position at all.

it may be too much to ask, but in my ideal world, 80k folks would feel comfy ignoring the potential escalatory emotional valence and would treat that purely as evidence about the importance of it to others. in other words, if people are demanding something, that's a time to get less defensive and more analytical, not more defensive and less analytical. It would be good PR to me for them to just think out loud about it.

[This comment is no longer endorsed by its author]
[-] Buck
I agree that it would be better if 80k had the capacity to easily navigate this kind of thing. But given that they (like all of us) have fixed capacity, I think it still makes sense to complain about Linda making it harder for them to respond.

I also have limited capacity. 

[-] aysja

But whether an organization can easily respond is pretty orthogonal to whether they’ve done something wrong. Like, if 80k is indeed doing something that merits a boycott, then saying so seems appropriate. There might be some debate about whether this is warranted given the facts, or even whether the facts are right, but it seems misguided to me to make the strength of an objection proportional to someone’s capacity to respond rather than to the badness of the thing they did.

[-] the gears to ascension
Agreed. It's reasonable to ask others, e.g. Linda, to make this easier where possible. E.g., when discussing group behavior in response to a state of affairs, instead of using the "suggestion/command" part of speech, use the "conditional prediction" part of speech. A statement I could truthfully say: "As an AI safety community member, I predict I and others will be uncomfortable with 80k if this is where things end up settling, because of disagreeing. I could be convinced otherwise, but it would take extraordinary evidence at this point. If my opinions stay the same and 80k's also are unchanged, I expect this to make me hesitant to link to and recommend 80k, and I would be unsurprised to find others behaving similarly." Behaving like that is very similar to what Linda said she intends, but seems to me to leave more room for aumann. I would suggest to 80k that they attempt to simply reinterpret what Linda said as equivalent to this, if possible. Of course, it is in fact a slightly different thing than what she said.

Edit: very odd that this, but neither its parent nor grandparent comment, got downvoted. What I said here feels like a pretty similar thing to what I said in the grandparent, and agrees with buck and with linda; it's my attempt to show there's a way to merge these perspectives. What about my comment diverges?

A statement I could truthfully say:

"As an AI safety community member, I predict I and others will be uncomfortable with 80k if this is where things end up settling, because of disagreeing. I could be convinced otherwise, but it would take extraordinary evidence at this point. If my opinions stay the same and 80k's also are unchanged, I expect this to make me hesitant to link to and recommend 80k, and I would be unsurprised to find others behaving similarly."

But you did not say it (other than as a response to me). Why not? 

I'd be happy for you to take the discussion with 80k and try to change their behaviour. This is not the first time I've told them that if they list a job, a lot of people will both take it as an endorsement, and trust 80k that this is a good job to apply for.

As far as I can tell, 80k is in complete denial of the large influence they have on many EAs, especially local EA community builders. They have a lot of trust, mainly for being around for so long. So whenever they screw up like this, it causes enormous harm. Also, since EA has such a large growth rate (at any given time most EAs are new EAs), the community is bad at tracking when 80k does screw up, so they ...

[-] Linda Linsefors
[-] Linda Linsefors
Temporarily deleted since I misread Eli's comment. I might re-post.

Firstly, some form of visible disclaimer may be appropriate if you want to continue listing these jobs. 

While the jobs board may not be "conceptualized" as endorsing organisations, I think some users will see jobs from OpenAI listed on the job board as at least a partial, implicit endorsement of OpenAI's mission.

Secondly, I don't think roles being directly related to safety or security should be a sufficient condition to list roles from an organisation, even if the roles are opportunities to do good work. 

I think this is easier to see if we move away from the AI safety space. Would it be appropriate for the 80,000 Hours job board to advertise an Environmental Manager job from British Petroleum?

[-] Eli Tyre
That doesn't seem obviously absurd to me, at least.

I dislike when conversations that are really about one topic get muddied by discussion of an analogy. For the sake of clarity, I'll use italics for statements about the AI safety jobs at capabilities companies.

Interesting perspective. At least one other person also had a problem with that statement, so it is probably worth me expanding. 

Assume, for the sake of argument, that the Environmental Manager's job is to assist with clean-ups after disasters, monitor for excessive emissions, and prevent environmental damage. In a vacuum these are all wonderful, somewhat EA-aligned tasks. 
Similarly, the safety-focused role, in a vacuum, is mitigating concrete harms from prosaic systems and, in the future, may be directly mitigating existential risk. 

However, when we zoom out and look at these jobs in the context of the larger organisation's goals, things are less clear. The good you do helps fuel a machine whose overall goals are harmful. 

The good that you do is profitable for the company that hires you. This isn't always a bad thing, but by allowing BP to operate in a more environmentally friendly manner you improve BP's pu... (read more)

6Eli Tyre
Yep. I agree with all of that. Which is to say that there are considerations in both directions, and it isn't obvious which ones dominate, in both the AI and petroleum cases. My overall guess is that in both cases it isn't a good policy to recommend roles like these, but I don't think either case is particularly more of a slam dunk than the other. So referencing the oil case doesn't make the AI one particularly clearer to me.

We used to list roles that seemed more tangentially safety-related, but because of our reduced confidence in OpenAI


This misses aspects of what used to be 80k's position:

❝ In fact, we think it can be the best career step for some of our readers to work in labs, even in non-safety roles. That’s the core reason why we list these roles on our job board. 
– Benjamin Hilton, February 2024

❝ Top AI labs are high-performing, rapidly growing organisations. In general, one of the best ways to gain career capital is to go and work with any high-performing team — you can just learn a huge amount about getting stuff done. They also have excellent reputations more widely. So you get the credential of saying you’ve worked in a leading lab, and you’ll also gain lots of dynamic, impressive connections.
– Benjamin Hilton, June 2023 - still on website


80k was listing some non-safety related jobs:
– From my email in May 2023:

– From my comment in February 2024:

8William_S
I do think 80k should give more context on OpenAI, and also on any other organization that seems bad but has maybe-useful roles. People can fail to realize the organizational context if it isn't pointed out and they only read the company's PR.
1DPiepgrass
I think there may be merit in pointing EAs toward OpenAI safety-related work, because those positions will presumably be filled by someone, and I would prefer they be filled by someone (i) very competent and (ii) familiar with (and caring about) a wide range of AGI risks, and EA groups often discuss such risks. However, anyone applying at OpenAI should be aware of the previous drama before applying. The current job listings don't communicate the gravity or nuance of the issue before job-seekers push the blue button leading to OpenAI's job listing.

I guess the card should be guarded, so that instead of just having a normal blue button, the user has to expand some sort of 'additional details' subcard first. The user then sees some bullet points about the OpenAI drama and (preferably) expert concerns about working for OpenAI, each bullet point including a link to more details, followed by a secondary-styled button for the job application (typically, a button with a white background and blue border). And of course you can do the same for any other job where the employer's interests don't seem well-aligned with humanity, or where the employer otherwise doesn't have a good reputation.

Edit: actually, for cases this important, I'd prefer to replace 'View Job Details' with a "View Details" button that goes to a full page on 80000 Hours in order to highlight the relevant details more strongly, again with the real job link at the bottom.
[-]Zvi3527

Not only do they continue to list such jobs, they do so with no warnings that I can see regarding OpenAI's behavior, including both its actions involving safety and also towards its own employees. 

Not warning about the specific safety failures and issues is bad enough, and will lead to uninformed decisions on the most important issue of someone's life. 

Referring a person to work at OpenAI, without warning them about the issues regarding how they treat employees, is so irresponsible towards the person looking for work as to be a missing stair issue. 

I am flabbergasted that this policy has been endorsed on reflection.

6starship006
I'm surprised by this reaction. The intersection between people who have a decent shot of getting hired at OpenAI to do safety research and those who are unaware of the events at OpenAI related to safety seems quite small.
1DPiepgrass
I expect there are people who are aware that there was drama but don't know much about it and should be presented with details from safety-conscious people who closely examined what happened.

I think an assumption 80k makes is something like "well if our audience thinks incredibly deeply about the Safety problem and what it would be like to work at a lab and the pressures they could be under while there, then we're no longer accountable for how this could go wrong. After all, we provided vast amounts of information on why and how people should do their own research before making such a decision"

The problem is, that is not how most people make decisions. No matter how much rational thinking is promoted, we're first and foremost emotional creatures that care about things like status. So, if 80k decides to have a podcast with the Superalignment team lead, then they're effectively promoting the work of OpenAI. That will make people want to work for OpenAI. This is an inescapable part of the Halo effect.

Lastly, 80k is explicitly targeting very young people who, no offense, probably don't have the life experience to imagine themselves in a workplace where they have to resist incredible pressures to not conform, such as not sharing interpretability insights with capabilities teams.

The whole exercise smacks of naivety, and I'm very confident we'll look back and see it as an incredibly obvious mistake in hindsight.

I was around a few years ago when there were already debates about whether 80k should recommend OpenAI jobs. And that was before any of the fishy stuff leaked out, when they were stacking up cool governance commitments like becoming a capped-profit and having a merge-and-assist clause. 

And, well, it sure seems like a mistake in hindsight how much advertising they got. 

I haven't shared this post with other relevant parties – my experience has been that private discussion of this sort of thing is more paralyzing than helpful.


Fourteen months ago, I emailed 80k staff with concerns about how they were promoting AGI lab positions on their job board. 

The exchange:

  • I offered specific reasons and action points.
  • 80k staff replied by referring to their website articles about why their position on promoting jobs at OpenAI and Anthropic was broadly justified (plus they removed one job listing). 
  • Then I pointed out what those articles were specifically missing,
  • Then staff stopped responding (except to say they were "considering prioritising additional content on trade-offs"). 

It was not a meaningful discussion.

Five months ago, I posted my concerns publicly. Again, 80k staff removed one job listing (why did they not double-check before?). Again, staff referred to their website articles as justification to keep promoting OpenAI and Anthropic safety and non-safety roles on their job board. Again, I pointed out what seemed missing or off about their justifications in those articles, with no response from staff.

It took the firing of the entire OpenAI su... (read more)

I hope that the voluminous discussion on exactly how bad each of the big AI labs are doesn't distract readers from what I consider the main chances: getting all the AI labs banned (eventually) and convincing talented young people not to put in the years of effort needed to prepare themselves to do technical AI work.

7David James
I’m curious if your argument, distilled, is: fewer people skilled in technical AI work is better? Such a claim must be examined closely! Think of it from a systems dynamics point of view. We must look at more than just one relationship. (I personally try to press people to share some kind of model that isn’t presented only in words.)
2RHollerith
Yes, I am pretty sure that the fewer additional people skilled in technical AI work, the better.

In the very unlikely event that, before the end, someone or some group actually comes up with a reliable plan for how to align an ASI, we certainly want a sizable number of people able to understand the plan relatively quickly (i.e., without first needing to prepare themselves through a year of study), but IMHO we already have that.

"The AI project" (the community of people trying to make AIs that are as capable as possible) probably needs many thousands of additional people with technical training to achieve its goal. (And if the AI project doesn't need those additional people, that is bad news, because it probably means we are all going to die sooner rather than later.) Only a few dozen or a few hundred researchers (and engineers) will probably make substantial contributions toward the goal, but neither the apprentice researchers themselves, their instructors, nor their employers can tell which researchers will ever make a substantial contribution, so the only way for the project to get an adequate supply of researchers is to train and employ many thousands. The project would prefer to employ even more than that.

I am pretty sure it is more important to restrict the supply of researchers available to the AI project than it is to have more researchers who describe themselves as alignment researchers. It's not flat impossible that the AI-alignment project will bear fruit before the end, but it is very unlikely. In contrast, if not stopped somehow (e.g., by the arrival of helpful space aliens or some other miracle), the AI project will probably succeed at its goal. Most people pursuing careers in alignment research are probably doing more harm than good, because the AI project tends to be able to use any results they come up with. MIRI is an exception to the general rule, but MIRI has chosen to stop its alignment research program on the grounds that it is hopeless. Restrict…

Here is an example of a systems dynamics diagram showing some of the key feedback loops I see. We could discuss various narratives around it and what to change (add, subtract, modify).

┌───── to the degree it is perceived as unsafe ◀──────────┐                   
│          ┌──── economic factors ◀─────────┐             │                   
│        + ▼                                │             │                   
│      ┌───────┐     ┌───────────┐          │             │         ┌────────┐
│      │people │     │ effort to │      ┌───────┐    ┌─────────┐    │   AI   │
▼   -  │working│   + │make AI as │    + │  AI   │  + │potential│  + │becomes │
├─────▶│  in   │────▶│powerful as│─────▶│ power │───▶│   for   │───▶│  too   │
│      │general│     │ possible  │      └───────┘    │unsafe AI│    │powerful│
│      │  AI   │     └───────────┘          │        └─────────┘    └────────┘
│      └───────┘                            │                                 
│          │ net movement                   │ e.g. use AI to reason
│        + ▼                                │      about AI safety
│     ┌────────┐                          + ▼                                 
│     │ pe
... (read more)
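The visible part of the diagram can be expressed as a toy simulation. This is purely illustrative: the variable names follow the diagram's boxes, but the coefficients and update rules are my own assumptions, not anything the diagram's author specified.

```python
# Toy model of the feedback loops in the diagram above (illustrative only).
# Reinforcing path: people working in general AI -> effort to make AI powerful
#   -> AI power -> potential for unsafe AI.
# Balancing path: potential for unsafe AI -> perceived as unsafe -> fewer
#   people entering the field. All coefficients are made up for demonstration.

def simulate(steps: int = 50) -> tuple[float, float]:
    people = 100.0   # people working in general AI
    ai_power = 1.0   # aggregate AI capability
    for _ in range(steps):
        effort = 0.5 * people                  # + people -> effort
        ai_power += 0.01 * effort              # + effort -> AI power
        unsafe_potential = 0.2 * ai_power      # + AI power -> unsafe potential
        perceived_unsafe = 0.1 * unsafe_potential
        # Balancing loop: perceived danger discourages net movement into the field.
        people += 2.0 - perceived_unsafe
    return people, ai_power

if __name__ == "__main__":
    people, power = simulate()
    print(f"people={people:.1f}, ai_power={power:.1f}")
```

With these (assumed) parameters the reinforcing loop dominates early on, and the balancing loop only slows growth once perceived danger becomes large; one narrative the diagram suggests is strengthening that balancing link so it binds sooner.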

The LessWrong Review runs every year to select the posts that have most stood the test of time. This post is not yet eligible for review, but will be at the end of 2025. The top fifty or so posts are featured prominently on the site throughout the year.

Hopefully, the review is better than karma at judging enduring value. If we have accurate prediction markets on the review results, maybe we can have better incentives on LessWrong today. Will this post make the top fifty?

1jacobjacob
Poor Review Bot, why do you get so downvoted? :(

Because it's obviously annoying and burns the commons. Imagine if I made a bot that posted the same comment on every LessWrong post; surely that wouldn't be acceptable behavior.

4habryka
It's relatively normal for forums/subreddits to have bots that serve specific functions and post similar comments on posts when they meet certain conditions (like most subreddits I use have some collection of bots, whether it's a bot that looks up the text of any magic card mentioned, or a bot that automatically reposts the moderation guidelines when there are too many comments, etc.)
5Raemon
tbh I typically find those bots annoying too.
2habryka
Depends on the Subreddit, but definitely agree that they can be pretty annoying.
3Joseph Miller
Could the prediction market for each post be integrated more elegantly into the UI, rather than posted as a comment?
2habryka
Yeah, I've been planning on doing something like that. Just every custom UI element tends to introduce complexities in how it interfaces with all adjacent UI elements, but I think we'll likely do something like that in the long run.
2RHollerith
You don't want to make it a new element of the menu that appears when the user clicks on the 3 vertical dots in the upper right corner of a comment?
2RHollerith
If the ReviewBot comments were collapsed without my having to manually collapse them, they would probably cease to bother me.
9habryka
Yeah, it's on my to-do list for next week to revamp these messages, I think they aren't working as is.
2RHollerith
But the automatic pinned comments I see on Reddit have a purpose that is plausibly essential to the subreddit's having any value to any users at all (usually to remind participants of a plausibly-essential rule that was violated constantly before the introduction of the pinned comment) whereas the annual review (and betting about it on Manifold Markets) are not plausibly essential to LW's having any value at all (although it is a common human cognitive bias for someone who has put 100s of hours of hard work into something to end up believing it is much more important than it actually is). The reason spam got named "spam" in the early 1990s is that it tends to be a repetition of the same text over and over (similar to how the word "spam" is repeated over and over in a particular Monty Python skit).
2kave
I think if you made a bot that posted a comment on every post that was, say, a link to a high-quality audio narration of the post, it would probably be acceptable behaviour.

EDIT: Though my true rejection is more like: I wouldn't rule out the site admins making an auto-commenter that reminded people of argumentative norms or something like that. Of course, it seems likely that whatever end the auto-commenter was supposed to serve would be better served by a different UI element than a comment (as also seems true here), but it's not something I would say we should never try. I think as site admins we should be trying to serve something like the overall health and vision of the site, and not just locally the user's level of annoyance, though I do think the user's level of annoyance is a relevant thing to take into account!

There's something a little loopy here that's hard to reason about. People might be annoyed because a comment burns the commons. But I think there's a difference in opinion about whether it's burning or contributing to the commons. And then, I imagine, those who think it's burning the commons want to offer their annoyance as proof of the burn. But there's a circularity there I don't know quite how to think through.
2jacobjacob
[censored_meme.png] I like review bot and think it's good
4kave
mod note: this comment used to have a gigantic image of Rockwell's Freedom of Speech, which I removed.
4Elizabeth
context note: Jacob is also a mod/works for LessWrong, kave isn't doing this to random users. 
3kave
I probably would have done something similar to a random user, though probably with a more transparent writeup, and/or trying harder to shrink the image or something. I'll note that [censored_meme.png] is something Jacob added back in after I removed the image, not something I edited in.
2Elizabeth
huh. was it the particular meme (brave dude telling the truth), the size, or some third thing?
2kave
The size
2Raemon
Size.
[+][comment deleted]20