Some good reporting here, with quotes from a lot of people. Here are the key paragraphs, in my opinion:

In the past year or so, top AI researchers from Google have left to launch start-ups around large language models, including Character.AI, Cohere, Adept, Inflection.AI and Inworld AI, in addition to search start-ups using similar models to develop a chat interface, such as Neeva, run by former Google executive Sridhar Ramaswamy.

Character.AI founder Noam Shazeer, who helped invent the transformer and other core machine learning architecture, said the flywheel effect of user data has been invaluable. The first time he applied user feedback to Character.AI, which allows anyone to generate chatbots based on short descriptions of real people or imaginary figures, engagement rose by more than 30 percent.

(skipping ahead)

“If Google doesn’t get their act together and start shipping[1], they will go down in history as the company who nurtured and trained an entire generation of machine learning researchers and engineers who went on to deploy the technology at other companies,” tweeted David Ha, a renowned research scientist who recently left Google Brain for the open source text-to-image start-up Stable Diffusion.

See also this similar article from the NYT a few days ago.

A narrative may be forming that Google and Meta have made a historic mistake in holding back their AI products.

 

An aside:

As an example of narrative formation, see this tweet, with over two million views, in which Meta's chief AI scientist argues that ChatGPT is not innovative.[2] The response, both in the replies and on Hacker News, is mixed, but the strongest throughline I see is the belief that he's merely technically correct, and that Meta has committed a fundamental business error by hesitating on and undervaluing productization.

Frustratingly, I'm not even convinced that's true! One thing I was very surprised to learn from the WaPo article is that Meta released their own free chatbot, BlenderBot, three months before OpenAI released ChatGPT, and it's still up!

If you interact with that chatbot, you'll see exactly why it didn't take off: its output is really quite bad compared to ChatGPT's.[3]

  1. ^

    In the software business, there's a famous bit of wisdom from Steve Jobs: "Real artists ship." In other words, success means delivering products to customers; if you burn time trying to perfect your product before release, you'll be eaten alive by your competition. One of the more famous negative examples is Netscape, which lost its lead while rewriting its browser from scratch.

  2. ^

    He was responding to being quoted in this article, which frames his comments more disparagingly.

  3. ^

    For example, it refused to write me a poem; it gave a nonsensical answer when asked whether it likes string cheese; and when I asked it how to sort a list of strings in Python, it gave a weird answer about Python's computational complexity when sorting a list drawn from a known set. Meanwhile, ChatGPT wrote me a nice poem, explained that it can't like string cheese because it's a language model, and gave a detailed explanation, with an example, of how to use Python's ".sort()" method.
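
    For reference, here's a minimal sketch of the kind of answer ChatGPT gave. The example list and exact output are my own illustration, not ChatGPT's actual text:

    ```python
    # .sort() sorts a list of strings in place, by Unicode code point,
    # so uppercase letters come before lowercase ones.
    words = ["banana", "Cherry", "apple"]  # hypothetical example list
    words.sort()
    print(words)  # ['Cherry', 'apple', 'banana']

    # sorted() returns a new sorted list instead of mutating the original;
    # key=str.lower gives a case-insensitive ordering.
    print(sorted(words, key=str.lower))  # ['apple', 'banana', 'Cherry']
    ```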

5 comments:

Character.AI founder Noam Shazeer, who helped invent the transformer and other core machine learning architecture, said the flywheel effect of user data has been invaluable. The first time he applied user feedback to Character.AI, which allows anyone to generate chatbots based on short descriptions of real people or imaginary figures, engagement rose by more than 30 percent.

Uh oh.

I want to highlight that there's a business incentive working directly against user safety here. When the business is built on doing the bad thing, it takes human judgment to simply... not do the bad thing too much.

I really liked the "aside" on this.

I think it's also worth noting that WaPo is a de facto Amazon subsidiary (technically it's 100% owned by Bezos, Amazon's chairman). I'm not sure what the implications are here; just noting it.

In the past year or so, top AI researchers from Google have left to launch start-ups around large language models [...]

When Google laid off 12,000 employees last week, CEO Sundar Pichai wrote that the company had undertaken a rigorous review to focus on its highest priorities, twice referencing its early investments in AI.

These things may be unconnected (and the article suggests that the AI researchers left Google well before the layoffs started), but I wonder how much Google's economic problems contributed to the new startups appearing.

I had imagined that Google has tons of money and not many good ideas, so its best strategy would be to buy everyone who seems capable of doing something useful. Even if it couldn't use those people, at least it would deny them to potential competitors. I haven't been paying attention lately; it seems the situation has changed dramatically. I had kind of assumed that no matter what happened, Google was too rich to be significantly impacted.

So I wonder how much of the researchers' motivation was "I'm sitting on a potential gold mine" and how much was "Google is going to lay off many people soon; I'd better have a plan B."

Or maybe there's no connection, and the same thing would also have happened in a parallel timeline where Google had enough money to hire anyone (though even then it might make sense to quit, build a startup, and then sell the startup to Google).

Do we know how the compute BlenderBot uses compares to what ChatGPT uses?

Is ChatGPT's advantage due to using more compute, or is the underlying system more efficient?

Facebook's models use maybe 1/4 of the compute (rough guess) and have more implementation issues and worse fine-tuning.