Today, we’re announcing that Amazon will invest up to $4 billion in Anthropic. The agreement is part of a broader collaboration to develop reliable and high-performing foundation models.
(Thread continues from there with more details -- seems like a major development!)
Yeah—I'm very on board with this. I think people tend to put way too much weight on, and pay way too much attention to, nice-sounding PR rather than just focusing on concrete evidence, past actions, hard commitments, etc. If you focus on nice-sounding PR, then GenericEvilCo can very cheaply gain your favor by manufacturing it for you, whereas actually making concrete commitments is much more expensive.
So yes, I think your opinion of Anthropic should mostly be priors + hard evidence. If you learned that there was an AI lab that had taken in a $4B investment from Amazon and had also committed to the LTBT governance structure and Responsible Scaling Policy, what would you then think about that company, updating on no other evidence? Ditto for OpenAI and Google DeepMind—I think you should judge them each in approximately the same way. You'll end up relying on your priors a lot if you do this, but you'll also be able to operate much more safely in an epistemic environment where some of the major players might be trying to game your approval.