OpenAI defected. As a non-profit, OpenAI recruited researchers on the promise of "safe" AGI. Then it pushed out the safety researchers and turned into a for-profit.

Why act in response?

  • OpenAI is recklessly scaling AI. Besides accelerating "progress" toward mass extinction, it causes increasing harms. Many communities are now speaking up. In my circles alone, I count seven new books critiquing AI corps. It’s what happens when you scrape everyone's personal data to train inscrutable models (run in polluting data centers) that are used to cheaply automate professionals out of work and to spread disinformation and deepfakes.
  • Safety researchers used to say they could improve things from the inside, and didn’t want to ruin that goodwill. That option is no longer available.
  • The rational step is to ratchet up pressure from the outside. If we stand by while OpenAI violates its charter, it signals that their execs can get away with it. Worse, it signals that we don’t care.
  • OpenAI is in a weaker position than it looks on the surface. OpenAI is seen as the industry leader, yet it is projected to lose $5 billion this year. Microsoft’s CEO changed their mind on injecting that amount after the board fired Sam. So now OpenAI depends on investment firms to pump in cash for compute every ten months or less. OpenAI is constantly nearing a financial cliff. If concerned communities were to seriously collaborate to make OpenAI stop the obvious harms caused by its model releases, or to hold the business liable, OpenAI would fail.
  • OpenAI’s forced downsizing would reset expectations across the industry. Since 2012, the gap between industry expenditures in deep learning and actual revenue has ballooned to over half a trillion dollars a year. If even OpenAI, the industry’s leading large-model developer, had to fire engineers and cut compute to save its failing business model, it could well trigger an AI crash. As media coverage turns cynical, investors will sell their stakes to get ahead of the crowd, and the divested start-ups will go bankrupt. During this period of industry weakness, our communities can pass and actually enforce many laws to restrict harmful scaling.

 

OpenAI’s activities are harmful. Let’s be public and honest in our response.

  • If you start a campaign, please communicate how you are targeting OpenAI's harmful activities. That way, we maintain the moral high ground in the eyes of the public.
  • Avoid smear campaigns for this reason. OpenAI can outspend and outhire us if they decide to counter-campaign. But the public distrusts Sam & co for failing to be open, and for their repeated dishonest claims. We can stand our ground by taking care to be open and honest.
  • The flipside is not to downplay your critiques in public because you’re worried about sounding extreme. Many people are fed up with OpenAI, and you can be honest about it too.

 

Examples of what I see as honest actions:

  • Explain why you're concerned in public.
  • Publish a technical demonstration of a GPT model malfunctioning (see the sketch after this list).
  • Inform government decision-makers, e.g. through messaging campaigns and meetings with politicians.
  • Start a lawsuit, or send complaints about OAI’s overreaches to state attorneys general and regulators.
  • Donate to an org advocating on behalf of communities.
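To make the "technical demonstration" item above concrete, here is a minimal sketch of one kind of reproducible probe: asking a model the same factual question many times and recording whether its answers stay consistent. This is only an illustration under assumptions, it uses the current OpenAI Python SDK, and the model name, prompt, and trial count are placeholders; a real demonstration would target a documented failure mode rather than this toy question.

```python
# Minimal sketch: repeatedly ask a model the same factual question and flag
# inconsistent answers. Model name, prompt, and trial count are placeholders.
from collections import Counter

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PROMPT = (
    "In what year did the Apollo 11 mission land on the Moon? "
    "Answer with the year only."
)
N_TRIALS = 20

answers = []
for _ in range(N_TRIALS):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": PROMPT}],
        temperature=1.0,
    )
    answers.append((response.choices[0].message.content or "").strip())

counts = Counter(answers)
print("Answer distribution:", counts)
if len(counts) > 1:
    print("Inconsistent answers to identical prompts -- a candidate failure to document.")
```

A published demonstration would of course need to go further than this (fixed seeds where supported, many prompts, and a write-up of the observed failure), but the structure stays the same: a script anyone can rerun, plus the logged outputs.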

 

Others are taking actions already. Please act in line with your care, and contribute what you can. 

18 comments

 If we stand by while OpenAI violates its charter, it signals that their execs can get away with it. Worse, it signals that we don’t care.

what signals you send to OAI execs seems not relevant.

in the case where they really can't get away with it, e.g. where the state will really arrest them, then sending them signals / influencing their information state is not what causes that outcome.

if your advocacy causes the world to change such that "they can't get away with it" becomes true, this also does not route through influencing their information state.

OpenAI is seen as the industry leader, yet projected to lose $5 billion this year

i don't see why this would lead them to downsize, if "the gap between industry investment in deep learning and actual revenue has ballooned to over $600 billion a year"

what signals you send to OAI execs seems not relevant.

Right, I don’t occupy myself much with what the execs think. I do worry about stretching the “Overton window” for concerned/influential stakeholders broadly. Like, if no-one (not even AI Safety folk) acts to prevent OpenAI from continuing to violate its charter, then everyone kinda gets used to it being this way and maybe assumes it can’t be helped or is actually okay.

i don't see why this would lead them to downsize, if "the gap between industry investment in deep learning and actual revenue has ballooned to over $600 billion a year"

Note that with ‘investments’, I meant injections of funds to cover business capital expenditures in general, including just to keep running their models. My phrasing here is a little confusing, but I couldn’t find another concise way to put it yet.

The reason why OpenAI and other large-AI-model companies would cease to attract investment is similar to why dotcom companies ceased to attract investment (even though a few, like Amazon, went on to become trillion-dollar companies): investors become skeptical about the companies’ prospects of reaching break-even, and about whether they could still offload their stakes later (to even more investors willing to sink in their capital).

Let me rephrase that sentence to ‘industry expenditures in deep learning’. 

The situation where AI is a good tool for manipulating public opinion, and the leading AI company has a bad reputation, seems unstable. Maybe AI just needs to get a little better, and then AI-written arguments in favor of AI will win public opinion decisively? This could "lock in" our trajectory even worse than now, and could happen long before AGI.


This fail-state is particularly worrying to me, although it is not obvious whether there is enough time for such an effect to actually intervene on the future outcome.

I worry about that, and I worry about the first AI-based consumer products that are really fun (like video games are) because pleasure is the motivation in most motivated cognition (at least in Western liberal societies).

OpenAI is recklessly scaling AI. Besides accelerating "progress" toward mass extinction, it causes increasing harms. Many communities are now speaking up. In my circles alone, I count seven new books critiquing AI corps. It’s what happens when you scrape everyone's personal data to train inscrutable models (run in polluting data centers) that are used to cheaply automate professionals out of work and to spread disinformation and deepfakes.

Could you justify the claim that it causes increasing harms? My intuition is that OpenAI is currently net-positive without taking future risks into account. It's just an intuition, though; I have not spent time thinking about it or writing down numbers.

(I agree it's net-negative overall.)

I'd agree the OpenAI product line is net positive (though I'm not super hung up on that). Sam Altman demonstrating what kind of actions you can get away with in front of everyone's eyes seems problematic.

Sam Altman demonstrating what kind of actions you can get away with in front of everyone's eyes seems problematic.


Very much agreeing with this.

Appreciating your inquisitive question!

One way to think about it:

For OpenAI to scale more toward “AGI”, the corporation needs more data, more automatable work, more profitable uses for working machines, and more hardware to run those machines. 

If you look at how OpenAI has been increasing those four variables, you can notice that there are harms associated with each, so scaling them up tends to result in increasing harms.

One obvious example: if they increase hardware, this also increases pollution (from mining, producing, installing, and running the hardware).

Note that the above is not a claim that the harms outweigh the benefits. But if OpenAI & co continue down their current trajectory, I expect that most communities would look back and say that the harms to what they care about in their lives were not worth it.

I wrote a guide to broader AI harms meant to emotionally resonate with laypeople here.

  • Explain why you're concerned in public.

 

I'm concerned about OpenAI's behavior in the context of their stated trajectory towards level 5 intelligence - running an organization. If the model for a successful organization lies in the dissonance between actions intended to foster goodwill (open research/open source/non-profit/safety concerned/benefit all of humanity) and the reality that those virtuous paradigms are all instrumental rather than intrinsic, requiring NDAs/financial pressure/lobbying to whitewash them, then scaling that up with AGI (which would have more intimate and expansive data, greater persuasiveness, more emotional detachment, less moral hesitation) seems clearly problematic.


The next goodwill-inducing paradigm that has outlived its utility seems to be the concept of "AGI":

From here:

Oddly, that could be the key to getting out from under its contract with Microsoft. The contract contains a clause that says that if OpenAI builds artificial general intelligence, or A.G.I. — roughly speaking, a machine that matches the power of the human brain — Microsoft loses access to OpenAI’s technologies.

The clause was meant to ensure that a company like Microsoft did not misuse this machine of the future, but today, OpenAI executives see it as a path to a better contract, according to a person familiar with the company’s negotiations. Under the terms of the contract, the OpenAI board could decide when A.G.I. has arrived.

Despite OpenAI being founded on the precept of developing AGI, and structuring the company and many major contracts around the idea while never precisely defining it, there now seems to be deliberate distancing, as evidenced here. Notably, Sam's recent vision of the future, "The Intelligence Age", does not mention AGI.

I expect more tweets like this from OpenAI employees in the coming weeks/months, expressing doubts about the notion of AGI, often taking care to say that the causal motivations are altruistic/epistemic.

I categorically disagree with Eliezer's tweet that "OpenAI fired everyone with a conscience", and all of this might not be egregious as far as corporate sleights of hand/dissonance go - but scaled up recursively, e.g. when extended to principles relating to alignment/warning shots/surveillance/misinformation/weapons, this does not bode well.

Resonating with you here! Yes, I think autonomous corporations (and other organisations) would result in society-wide extraction, destabilisation and totalitarianism.


Thanks! I should have been more clear that the trajectory toward level 5 (with all human virtue/trust being hackable for instrumental gains) itself is concerning, not just the eventual leap when it gets there.


OpenAI’s Sora video generator was temporarily leaked

>> We are sharing this to the world in the hopes that OpenAI becomes more open, more artist friendly and supports the arts beyond PR stunts.

I hope OpenAI recognizes that the "bad vibes" generated from this perpetual sleight-of-hand, NDA-protected fostering of zero-sum dynamics, playing Moloch's game - from pitting artists against each other for scraps to pitting the US against China - will affect public perception, recruitment, uptake and valuation more than they might currently be anticipating, as the general "vibe" becomes common knowledge. It also increases existential risk for humanity by decreasing the chances of a future superintelligence loving human beings (something Altman recently said he'd want), by biasing the corpus of internet/human consciousness and possibly introducing some downstream incoherence as AIs are ordered to reflect well on OpenAI.

OpenAI's stated mission is to ensure that "artificial general intelligence benefits all of humanity". It might be a good exercise for someone at OpenAI to clarify (to whatever extent feasible, for increasing semantic coherence in service of more self-aware future LLMs, if nothing else) what "all of" and "humanity" entail here, as AGI's effects unfold over the next few years. I had tried to ask Richard Ngo, a former OpenAI employee, in a different context, and I'd really appreciate suggestions if a better framing might help.

Just found a podcast on OpenAI’s bad financial situation.

It’s hosted by someone in AI Safety (Jacob Haimes) and an AI post-doc (Igor Krawczuk).

https://kairos.fm/posts/muckraiker-episodes/muckraiker-episode-004/
