If we stand by while OpenAI violates its charter, it signals that its execs can get away with it. Worse, it signals that we don’t care.
what signals you send to OAI execs seems irrelevant.
in the case where they really can't get away with it (e.g. where the state will actually arrest them), sending them signals / influencing their information state is not what causes that outcome.
if your advocacy causes the world to change such that "they can't get away with it" becomes true, this also does not route through influencing their information state.
OpenAI is seen as the industry leader, yet projected to lose $5 billion this year
i don't see why this would lead them to downsize, if "the gap between industry investment in deep learning and actual revenue has ballooned to over $600 billion a year"
what signals you send to OAI execs seems irrelevant.
Right, I don’t occupy myself much with what the execs think. I do worry about stretching the “Overton window” for concerned/influential stakeholders broadly. Like, if no-one (not even AI Safety folk) acts to prevent OpenAI from continuing to violate its charter, then everyone kinda gets used to it being this way and maybe assumes it can’t be helped or is actually okay.
i don't see why this would lead them to downsize, if "the gap between industry investment in deep learning and actual revenue has ballooned to over $600 billion a year"
Note that by ‘investments’, I meant injections of funds to cover business capital expenditures in general, including just to keep running their models. My phrasing here is a little confusing, but I haven’t yet found a more concise way to put it.
The reason OpenAI and other large-AI-model companies would cease to attract investment is similar to why dotcom companies ceased to attract it (even though a few, like Amazon, went on to become trillion-dollar companies): investors become skeptical about the companies’ prospects of reaching break-even, and about whether they would still be able to offload their stakes later to even more investors willing to sink in their capital.
The situation where AI is a good tool for manipulating public opinion, and the leading AI company has a bad reputation, seems unstable. Maybe AI just needs to get a little better, and then AI-written arguments in favor of AI will win public opinion decisively? This could "lock in" our trajectory even worse than now, and could happen long before AGI.
This fail-state is particularly worrying to me, although it is not obvious whether there is enough time for such an effect to actually influence the outcome.
OpenAI is recklessly scaling AI. Besides accelerating “progress” toward mass extinction, this causes increasing harms. Many communities are now speaking up. In my circles alone, I count seven new books critiquing AI corps. It’s what happens when you scrape everyone’s personal data to train inscrutable models (run in polluting data centers) that are used to cheaply automate professionals out of work and to spread disinformation and deepfakes.
Could you justify the claim that it causes increasing harms? My intuition is that OpenAI is currently net-positive, without taking future risks into account. It's just an intuition, however; I have not spent time thinking about it or writing down numbers.
(I agree it's net-negative overall.)
I'd agree the OpenAI product line is net-positive (though I'm not super hung up on that). Sam Altman demonstrating what kind of actions you can get away with in front of everyone's eyes seems problematic.
Sam Altman demonstrating what kind of actions you can get away with in front of everyone's eyes seems problematic.
Very much agreeing with this.
Appreciating your inquisitive question!
One way to think about it:
For OpenAI to scale more toward “AGI”, the corporation needs more data, more automatable work, more profitable uses for working machines, and more hardware to run those machines.
If you look at how OpenAI has been increasing those four variables, you can notice that there are harms associated with each. Scaling up the variables therefore tends to scale up the harms.
One obvious example: if they increase hardware, this also increases pollution (from mining, producing, installing, and running the hardware).
Note that the above is not a claim that the harms outweigh the benefits. But if OpenAI & co continue down their current trajectory, I expect that most communities would look back and say that the harms to what they care about in their lives were not worth it.
I wrote a guide to broader AI harms, meant to resonate emotionally with laypeople, here.
OpenAI defected. As a non-profit, OpenAI recruited researchers on the promise of "safe" AGI. Then it pushed out the safety researchers and turned into a for-profit.
Why act in response?
OpenAI’s activities are harmful. Let’s be public and honest in our response.
Examples of what I see as honest actions:
Others are taking actions already. Please act in line with your care, and contribute what you can.