The more we can get the public (and investors) to recognize the failures of the Bing chatbot launch, the more the market will turn against Microsoft and make future launches fiscally riskier.

It seems like Microsoft rushed the launch to pwn Google. If we make this into a big enough corporate mistake, then Microsoft (and likely Google) will be a bit more hesitant in the future.

Less Wrong (and EA) has many thousands of people deeply concerned about AI progress. Why aren't we collectively signal-boosting content that is critical of the Bing chatbot and Microsoft's strategy?

What do I mean by signal-boosting?

A simple example:

The most-read anti-Bing content is likely this NYT article (described in this Less Wrong post), yet it has only 2.7k comments. Less Wrong + EA could probably drive that number up to 20k if we actually cared/tried/wanted to. How much more would the NYT signal-boost this article to their readers if it broke the record for most comments? (One of the most-read articles in 2022 was "Will Smith Apologizes to Chris Rock After Academy Condemns His Slap", and it has 6.3k comments.)

A slightly less obvious example: 

https://www.tiktok.com/@benthamite is the most popular EA-aligned TikTok account I know of. Why aren't we helping Lacey and Ben (and any other EA/Rat/AI-safety-aligned TikTokers, like https://www.tiktok.com/@robertmilesai) make dozens of videos a day about Bing chat, and boosting those videos in any way possible (including paying for them to run as ads) to reach hundreds of thousands of people?


Eliezer apparently agrees with the general idea and has signal-boosted some content on Twitter. This tweet has 100k views, yet only 65 retweets and 569 likes.

What would the views look like if there were 2k retweets and 2k likes 10 minutes after he posted it?

How many other pieces of content could we easily signal-boost? Other articles, podcasts, YouTube channels, tweets, TikToks, etc.

I honestly believe that thousands of dedicated and smart individuals all collectively trying to signal-boost a story across all media channels could significantly shape the public's perception of the launch.

So this leaves us with a simple question. Why aren't we? 

Some guesses off the top of my head:

  1. People are lazy and don't think their single comment on an NYT article will do anything (because it won't), so they don't bother.
  2. People would rather test out the Bing chatbot or read tweets or tweet about it because it's more interesting/fun than actually trying to do something hard and boring like shape public perception.
  3. Trying to shape public perception is hard and risky; if you do it wrong, you could spur Google or Microsoft to try even harder and move even quicker to cover their failures or gain a lead. (I don't believe this.)
  4. For some reason, Rats and EAs aren't as coordinated as they should be. Coordinated, time-sensitive collective action is nearly impossible, even though everyone agrees that AI capabilities should be slowed or more tightly controlled.
    1. Related: diffusion of responsibility. No one person or organization is in charge of things like this, so everyone kind of just does the standard thing of retweeting a few funny tweets. Where is the organizing person/body that could coordinate this? It seems fairly easy/cheap to do.
  5. Revealed preferences? Do people actually not think this is a big deal? What are we actually, concretely doing right now in the real world as a community? (In my view, basically nothing.) What would Eliezer's crazy genius rat characters in his glowfics be doing? Haphazardly tweeting examples of crazy Bing chat responses for fun/memes, or actually trying hard to figure out what we can do in the real world and then doing it?
  6. Probably a whole bunch more; share your thoughts in the comments.
Comments:

There is a fuzzy line between "let's slow down AI capabilities" and "let's explicitly, adversarially sabotage AI research." While I am all for the former, I don't support the latter; it creates worlds in which AI safety and capabilities groups are pitted head to head, and capabilities orgs become explicitly incentivized to ignore safety proposals. These aren't worlds I personally wish to be in.

While I understand the motivation behind this message, I think the actions described in this post cross that fuzzy boundary and push way too far towards that style of adversarial messaging.

I see your point, and I agree. But I'm not advocating for sabotaging research.

I'm talking about admonishing a corporation for cutting corners and rushing a launch that turned out to be net negative.

Did you retweet this tweet, like Eliezer did? https://twitter.com/thegautamkamath/status/1626290010113679360

If not, is it because you didn't want to publicly sabotage research?

Do you agree or disagree with this twitter thread? https://twitter.com/nearcyan/status/1627175580088119296?t=s4eBML752QGbJpiKySlzAQ&s=19

"Why aren't we?"

  1. By temperament and training, we don't do strategic propaganda campaigns even when there's a naïvely appealing consequentialist case for it.

We argue that Microsoft is doing a bad thing—but we also discuss specific flaws in those arguments, even when that would undermine the simplified propaganda message. We do this because we think arguments aren't soldiers; we're trying to have a discourse oriented towards actually figuring out what's going on in the world, instead of shaping public perception.

Is this dumb? I'm not going to take a position on that in this comment. Holden Karnofsky has some thoughts about messages to spread.

Are you saying that you're unsure if the launch of the chatbot was net positive?

I'm not talking about propaganda. I'm literally saying: "Signal-boost the accurate content that's already out there showing that Microsoft rushed the launch of their AI chatbot, making it creepy, aggressive, and misaligned, and showing that this is harder to do right than they thought."

Eliezer (and others) retweeted content admonishing Microsoft; I'm just saying we should be doing more of that.

Maybe this is a better way of putting it: I agree that the Bing launch sets a very bad precedent and that it's important to warn people about its problems. But the whole way you're talking about this seems really spammy and low-integrity, which would be fine and great if being spammy would save the world, but there are generalizable reasons why that doesn't work. People can tell when a comments section is being flooded by an organized interest group, and that correctly makes them trust the comments less.

Here's an idea (but maybe you can think of something better): if you want to help spread messages about this, maybe instead of telling "us" (the EAs, the rats) to become a signal-boosting PR war army (which is not what we're built for), write up your own explanation of why the Bing launch was bad, in your own voice, that covers an angle of the situation that hasn't already been covered by someone else. Sincerity is more credible than just turning up the volume. I will eagerly retweet it (when I get back from my February Twitter hiatus).

"Are you saying that you're unsure if the launch of the chatbot was net positive?"

No. (I agree that it was bad, and that pointing out that it was bad is good.) The intent of "I'm not going to take a position on that" was to keep the grandparent narrowly-scoped; you asked, "Why aren't we?" and I explained a reason why Less Wrong readers are disinclined in general to strategically take action to "shape the public's perception", as contrasted to laying out our actual reasoning without trying to control other people's decisions.

"not talking about propaganda"

In the sense of "A concerted set of messages aimed at influencing the opinions or behavior of large numbers of people".

Downvoted for recommending that readers operate at simulacrum level 2.

I felt I was saying "Simulacrum Level 1: Attempt to describe the world accurately."

The AI was rushed, misaligned, and a bad experience for its users. More people need to know that. It's literally already how the NYT and others are (accurately) describing it; I'm just suggesting signal-boosting that content.

lc:

Horrible, horrible idea.

Well, I blogged about it, at least a little bit. Possibly at too elementary a level for most Less Wrong readers, though.

Toby and Elon did today what I was literally suggesting: https://twitter.com/tobyordoxford/status/1627414519784910849

@starship006, @Zack_M_Davis, @lc, @Nate Showell do you all disagree with Toby's tweet?

Should the EA and Rationality movement not signal-boost Toby's tweet?

Elon further signal-boosted Toby's post.

Ord's Tweet seems good. Individual Twitter users re-Tweeting it because they, personally, want to share the information also seems good; I don't see why it matters whether they're part of "the EA and Rationality movement", whatever that is. (Personally, I'm avoiding Twitter this month.)