I have canceled my OpenAI subscription in protest over OpenAI's lack of ethics.

In particular, I object to:

  • threats to confiscate departing employees' equity unless those employees signed a lifelong non-disparagement agreement
  • Sam Altman's pattern of lying about important topics

I'm trying to hold AI companies to higher standards than I use for typical companies, due to the risk that AI companies will exert unusual power.

A boycott of OpenAI subscriptions seems unlikely to gain enough attention to meaningfully influence OpenAI. Where I hope to make a difference is by discouraging competent researchers from joining OpenAI unless they clearly reform (e.g. by firing Altman). A few good researchers choosing not to work at OpenAI could make the difference between OpenAI being the leader in AI 5 years from now and being, say, a distant 3rd.

A year ago, I thought that OpenAI equity would be a great investment, but that I had no hope of buying any. But the value of equity is heavily dependent on trust that a company will treat equity holders fairly. The legal system helps somewhat with that, but it can be expensive to rely on the legal system. OpenAI's equity is nonstandard in ways that should create some unusual uncertainty. Potential employees ought to question whether there's much connection between OpenAI's future profits and what equity holders will get.

How does OpenAI's behavior compare to other leading AI companies?

I'm unsure whether Elon Musk's xAI deserves a boycott, partly because I'm unsure whether it's a serious company. Musk has a history of breaking contracts that bears some similarity to OpenAI's attitude. Musk also bears some responsibility for SpaceX requiring non-disparagement agreements.

Google has shown some signs of being evil. As far as I can tell, DeepMind has been relatively ethical. I've heard clear praise of Demis Hassabis's character from Aubrey de Grey, who knew Hassabis back in the 1990s. Probably parts of Google ought to be boycotted, but I encourage good researchers to work at DeepMind.

Anthropic seems to be a good deal more ethical than OpenAI. I feel comfortable paying them for a subscription to Claude Opus. My evidence concerning their ethics is too weak to say more than that.

P.S. Some of the better sources to start with for evidence against Sam Altman / OpenAI:

But if you're thinking of working at OpenAI, please look at more than just those sources.

28 comments

I think individual LWers boycotting an AI company/product generally hurts the user much more than it hurts the company's revenue or reputation. Use the most powerful AI tools.

Or: offsetting a ChatGPT subscription by donating to AI safety orgs would cost <$1/month.

[Belief strongly held but not justified here]

habryka

Lol, my first reaction to this post was "funding Zach Stein-Perlman seems like a much more effective way to slow down OpenAI than to boycott their products". I am not sure you need funding, but I genuinely think that if someone was thinking about boycotting their ChatGPT subscription, they should just donate $100 to you, and that would be much better for the world.

Use the most powerful AI tools.

FWIW, Claude 3.5 Sonnet was released today. Appears to outperform GPT-4o on most (but not all) benchmarks.

It currently looks like the free version of ChatGPT is good enough that I wouldn't get much benefit from a subscription. I have little idea how long this will remain true.

Yeah, and it's not obvious that 4o is currently the best chatbot. I just object to the boycott-without-cost-benefit-analysis.

Or: offsetting a ChatGPT subscription by donating to AI safety orgs would cost <$1/month.

Based on what? 

How much money would (e.g.) LTFF have to get to balance out OpenAI getting $20 (minus the cost to OpenAI of providing ChatGPT Plus — but we can assume the marginal cost is zero), in terms of AI risk? I claim <$1. I'd happily push a button to give LTFF $1 and OpenAI $20.

[Belief not justified here]
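To spell out the arithmetic behind that claim, here is a minimal back-of-envelope sketch; the effectiveness ratio is a placeholder I picked for illustration, not a figure from the comment:

```python
# Back-of-envelope sketch of the offset claim above.
# ASSUMPTION: the "risk" quantities are invented illustrative units;
# the <$1 conclusion only needs the effectiveness ratio to exceed 20x.

subscription = 20.0                  # $/month paid to OpenAI
risk_per_openai_dollar = 1.0         # risk added per $1 of OpenAI revenue (arbitrary unit)
risk_reduced_per_ltff_dollar = 25.0  # assumed risk reduced per $1 donated to LTFF

harm = subscription * risk_per_openai_dollar
offset = harm / risk_reduced_per_ltff_dollar
print(f"Donation needed to offset a ${subscription:.0f}/month subscription: ${offset:.2f}/month")
# With a 25x ratio this prints $0.80/month, i.e. under $1 whenever the ratio exceeds 20x.
```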


I switched to Claude when Opus came out. If the delta between whatever OpenAI has and the next best model from a more ethical company is small enough, then it seems worth it to me to switch.

When Opus came out and started scoring about as well on most benchmarks, I decided the penalty was small enough.

In my opinion, it's reasonable to change which companies you want to do business with, but it would be more helpful to write letters to politicians in favor of reasonable AI regulation (e.g. SB 1047, with suggested amendments if you have concerns about the current draft). I think it's bad if the public has to play the game of trying to pick which AI developer seems the most responsible; better to try to change the rules of the game so that isn't necessary.

Also, it's generally helpful to write about which labs seem more responsible or less responsible (which you are doing here), and what you think labs should do instead of current practices. Bonus points for designing ways to test which deployed models are more safe and reliable, e.g. writing some prompts to use as litmus tests.

I've heard clear praise of Demis Hassabis's character from Aubrey de Grey, who knew Hassabis back in the 1990s

Is this publicly available anywhere? I'd love to read it.

A.H.

I Googled things like 'Aubrey de Grey on Demis Hassabis' for 5 minutes and couldn't find anything matching this description. The closest I could find was this interview with de Grey where he says:

I actually know a lot of people who are at the cutting edge of AI research. I actually know Demis Hassabis, the guy who runs DeepMind, from when he was an undergraduate at Cambridge several years after me. We’ve kept in touch and try to connect every so often.

He says they know each other and keep in touch, but it's not really a character reference.

(I'm not claiming that de Grey hasn't praised Hassabis' character, just that if he did, a brief search doesn't yield a publicly available record of it.)

Thank you for sharing — I basically share your concerns about OpenAI, and it's good to talk about it openly.

I'd be really excited about a large, coordinated, time-bound boycott of OpenAI products that is (1) led by a well-known organization or individual with a recruitment and media outreach strategy and (2) accompanied by a set of specific grievances like the one you provide. 

I think that something like this would (1) mitigate some of the costs that @Zach Stein-Perlman alludes to since it's time-bound (say only for a month), and (2) retain the majority of the benefits, since I think the signaling value of any boycott (temporary or permanent) will far, far exceed the material value in ever-so-slightly withholding revenue from OpenAI. 

I don't mean to imply that this is opposed to what you're doing — your personal boycott basically makes sense to me, plus I suspect (but correct me if I'm wrong) that you'd be equally excited about what I describe above. I just wanted to voice this in case others feel similarly or someone who would be well-suited to organizing such a boycott reads this.

Note that, through the grapevine, serving inference requests might sometimes lose OpenAI money due to them subsidising it. Not sure how this relates to boycott incentives.

If serving those inference requests at a loss did not net benefit OA, then OA would not serve them. So it doesn't matter for the purpose of a boycott - unless you believe you know their business a lot better than they do, and can ensure you only make inference requests that are a genuine net loss to them and not a net benefit.

I have cancelled my subscription as well. I don't have much to add to the discussion, but I think signalling participation in the boycott will help conditional on the boycott having positive value.

I've moved to Claude for now.

I did it too.

I'm glad this post came out and made me try Claude. I now find it mostly better than ChatGPT, and with the introduction of projects, all the features I need are there.

If all you're using is ChatGPT, then now's a good time to cancel the subscription, because GPT-4o seems to be about as powerful as GPT-4, and GPT-4o is available for free.


Recently, OpenAI made a deal with News Corp.

Then, last week, they gave a seat on the board to the former NSA director (I can understand why from a business perspective, though from a user perspective this is pretty damning, even assuming that most/all large AI companies are forced to cooperate with a/the government).

I'm afraid that on this trajectory, OpenAI will become a right-wing, hard-government-aligned tool of surveillance (though it probably already is).

If I put my tinfoil hat on: could OpenAI force its product to subtly alter responses for an ulterior purpose, e.g. making them more right-wing or anti-Chile (choose any country/person/etc.), or infiltrating the language with "Newspeak", as in 1984?

Will we be looking back in 10 years' time, from within a dystopia, thinking "remember how it started"?

If so, I hope I'll be living on a no-internet farm-commune.

This applies doubly if you're in a high-leverage position, which could mean a position of "power" or just proximity to an ambivalent "powerful" person. If your boss is vaguely thinking of buying an LLM subscription for their team, a quick "By the way, OpenAI isn't a great company, maybe we should consider [XYZ] instead..." is a good idea.

This should also go through a cost-benefit analysis, but I think it's more likely to pass than for the typical individual user.

O O

Use their API directly. I don't do this to boycott them in particular, but the API cost of your typical chat usage is far lower than the subscription cost.
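As a rough illustration of that gap, here is a back-of-envelope sketch; the per-token prices are my assumptions (approximately GPT-4o-class rates at the time of writing) and the usage figures are invented, so substitute current numbers before relying on it:

```python
# Estimate monthly API cost for typical chat usage vs. the $20/month subscription.
# ASSUMPTIONS: token prices approximate GPT-4o-class rates; usage figures
# are illustrative guesses, not measurements.

price_in = 5.00 / 1_000_000    # assumed $ per input token
price_out = 15.00 / 1_000_000  # assumed $ per output token

chats_per_day = 10
tokens_in = 500   # prompt + context tokens per chat
tokens_out = 400  # response tokens per chat

monthly_cost = 30 * chats_per_day * (tokens_in * price_in + tokens_out * price_out)
print(f"Estimated API cost: ${monthly_cost:.2f}/month vs. $20/month subscription")
# With these assumptions: about $2.55/month.
```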

Thanks for the information.
Consider, though, that for many people the subscription price is justified by the convenience of access and use.

It took me a second to see how your comment relates to the post, so here it is for others:
Given this information, using the API preserves most of the benefits of access to SOTA AI (assuming away the convenience value) while giving OpenAI much less revenue, which makes this a very effective intervention compared to cancelling the subscription entirely.

O O

There's an API playground, which is essentially a chat interface. It's highly convenient.

I was thinking of doing this, but the ChatGPT web app seems to have many features that are only available there and add a lot of value, such as Code Interpreter, PDF uploads, DALL-E, and custom GPTs, so I still use ChatGPT Plus.