Epistemic Status: This post is an opinion I have had for some time and discussed with a few friends. Even though it’s been written up very hastily, I want to put it out there - because of its particular relevance today and also because I think the issue has not been discussed enough.

One of the tiny-brain takes a vegan sometimes encounters when talking to people unfamiliar with veganism is: Why don't you just buy the meat? The animal is dead anyway. If you then roll your eyes and retort that you should at least try not to actively increase demand (to prevent future animal deaths), a reasonably smart person might reply - or might have replied a couple of years ago - that the number of vegans is so small anyway that their consumption behavior doesn't really have any influence on the market.

But this can change over time. In Germany, where I live, the number of vegans doubled between 2016 and 2020[1]. Meat consumption has been steadily declining for several years[2], while the market for plant-based products almost doubled between 2018 and 2020[3].

Following this analogy, I wonder: Why has there been so little discussion (at least that I know of) about whether we as a community should boycott LLM-based products? Especially as we seem to agree that race dynamics are bad and having more time to do alignment research would be good?

What I mean by a boycott

Some examples of what I mean by that: Don't sign up for Bing! Don't use ChatGPT and don't sign up for ChatGPT Plus! Or if you have to, use it as little as possible. Or adopt it as late as possible. To stay in the vegan analogy: if you can't be a vegan, then be a flexitarian and reduce your consumption of animal products as much as possible. And don't promote the products - i.e., don't post artwork generated by diffusion models on your 10k-follower Twitter account. In general, be pragmatic about it.

Perhaps the consumer behavior of a bunch of EAs will have little to no impact on the market. I tried to find some detailed market research on ChatGPT with no luck - but it seems plausible to me that tech-savvy people like those overrepresented in the EA community make up part of the target demographic, so a boycott might have a disproportionately large effect. And if the number of people aware of AI risk grows and a boycott becomes the norm, this effect could increase over the years. 
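
To make the "disproportionately large effect" intuition concrete, here is a minimal back-of-the-envelope sketch in Python. Every number in it is a hypothetical placeholder chosen for illustration, not real market data; the point is only the shape of the argument: a community that is a tiny fraction of all users can still account for a noticeably larger fraction of paying subscribers if it converts to paid plans at a higher rate.

```python
# Toy Fermi estimate: what share of early paid LLM subscriptions might
# a small, tech-savvy community account for?
# Every number below is a made-up placeholder, not real market data.

total_users = 100_000_000     # hypothetical total user base
paid_rate_general = 0.02      # hypothetical: 2% of general users pay
community_size = 50_000       # hypothetical size of the boycotting community
paid_rate_community = 0.20    # hypothetical: early adopters pay at 20%

general_paid = total_users * paid_rate_general
community_paid = community_size * paid_rate_community

user_share = community_size / total_users
paid_share = community_paid / (general_paid + community_paid)

print(f"Share of all users:    {user_share:.2%}")   # 0.05%
print(f"Share of paying users: {paid_share:.2%}")   # ~0.50%
```

With these placeholder inputs, the community is 0.05% of users but roughly 0.5% of paying subscribers, a tenfold overrepresentation - which is the sense in which even a small boycott could have a disproportionate effect on the revenue signal a company sees.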

There is a related but distinct argument that a boycott - if visible enough - could create bad press for AI companies. This happened last year, when a number of artists shared images protesting AI-generated art on the platform ArtStation[4]. ArtStation took them down, causing even more negative publicity.

Now is a good time to start a boycott

I would argue that the best time to start such a boycott would probably have been a couple of years ago (e.g. 2017, when DeepL was launched, or 2021, when GitHub Copilot was launched, or 2022, in the hype year of text-to-image models) and the second best time is now.

Why? Because at this moment the norms regarding the usage of LLMs in professional settings have not fully crystallized. Anecdotally, I know some people (including myself) who have been among the more hesitant adopters of ChatGPT. The mere fact that the servers were often down when I tried to use it contributed to a feeling of annoyance. And then there are large sections of the population, including older generations, who might be a bit more skeptical about AI, or slower to adopt it, but have a lot of decision-making power. As a result, not exhausting the possibilities of every available LLM application does not yet lead to a strong disadvantage. For example, an applicant for a PhD position this year might not yet compete exclusively with applicants who use LLMs to augment their research proposals. And the committee members are not yet used to an inflated quality standard. I think it is worth trying to delay the establishment of these new norms.

A short engagement with possible counterarguments

Besides the argument that the EA community is just too tiny to have any influence on market developments at all, I can think of two other counterarguments. One is that EAs might use LLMs for good: either directly (e.g. for research) or indirectly, to empower themselves to do impactful things later on (for example, an EA who augments their research proposal with ChatGPT might get accepted and go on to do impactful alignment research in their PhD! Yay!). It might well be true that usage will become inevitable in the near future, just to compete with unfazed AI enthusiasts. For now, though, I think we should make sure we are not falling prey to motivated reasoning when arguing why we should definitely be using every shiny new toy that gets released, as soon and as much as possible, and for whatever task. It might just be exciting, or more convenient, or we don't want to feel left behind. But maybe we could avoid the latter by using LLM applications consciously and sparingly - and sometimes just reading a blog post on prompting strategies instead.

Some might also argue that timelines are too short anyway. After all, veganism and its effect on the market for animal products has only gradually gained momentum - and we may not have the time to build that momentum here. My answer to this is: maybe that's true, but let's just try anyway? There's not much to lose (yet).

In sum, this post reflects my (slightly polemicized) opinion right now, and I can well imagine it changing. However, I think it would be useful for us collectively and privately to think about the utility and feasibility of boycotts before the next wave of LLM-powered products hits us. 

Comments

My assumption is that such a boycott would create selective pressure against the Boycotters and in favor of LLM enthusiasts, thus making the Boycotters first irrelevant Luddites, then culturally extinct.

This is similar to how people who boycotted social media for valid reasons essentially became outcasts and took those valid reasons with them, weakening their movement.

Boycotting AI is essentially a self-terminating meme: the harder you boycott, the less likely the Boycott Meme is to spread. It's the equivalent of trying to boycott literacy with newspaper articles decrying the danger of the written word.

I think the difference with veganism is that vegans argue that there's no downside to being vegan (the argument is that vegan food is still tasty, healthy, and affordable), and there are very few high-income jobs that would be harder to get as a vegan (maybe CEO of Tyson Foods?). In an alternate world where cooking meat-based meals is one of the highest-paying and highest-status jobs, compromising your ability to do it by refusing to eat meat might be less effective than eating enough meat to stay good at your job while using your free time and income to work for systemic change.

Using ChatGPT etc. gives people such an advantage in (some) jobs, and is so easy to use "secretly", that it seems highly unlikely that a significant number of people would boycott it.

My guess is that at most 1-10% of a population would actually adhere to a boycott, and those who do would be in a much worse position to work on AI Safety and other important matters.

It is strange to propose a boycott without saying why. Why are you against people using AI generation tools?

Personally, I find the text that ChatGPT generates useless and unpleasant to read, and have downvoted it on suspicion several times on LessWrong already. (It doesn't much matter whether my suspicions were correct; the text quality was downvoteworthy anyway for its leaden vagueness, platitudinousness, and high-school essay structure.)

BTW, ArtStation is no longer removing "No to AI" images: at least, I see a bunch of them there. It now has a policy that AI-generated content is allowed on the ArtStation marketplace but must be tagged as such, and users uploading their art can tag it to indicate that it is not to be used as training data for any AI.

Boycotting LLMs reduces the financial benefit of doing research that is (EDIT: maybe) upstream to AGI in the tech tree.