Would be interested to hear from Anthropic leadership about how this is expected to interact with previous commitments about putting decision making power in the hands of their Long-Term Benefit Trust.
I get that they're in some sense just another minority investor, but a trillion-dollar company making Anthropic a central plank in its AI strategy, with a multi-billion-dollar investment and a load of levers (via AWS) to make things difficult for the company, is a step up in the level of pressure to aggressively commercialise.
From the announcement, they said (https://twitter.com/AnthropicAI/status/1706202970755649658):
As part of the investment, Amazon will take a minority stake in Anthropic. Our corporate governance remains unchanged and we’ll continue to be overseen by the Long Term Benefit Trust, in accordance with our Responsible Scaling Policy.
Cheers, I did see that and wondered whether to still post the comment. But I do think that having a gigantic company own a large chunk of Anthropic, and presumably hold a lot of leverage over it, is a new form of pressure, so it would be reassuring to see some discussion of how they plan to manage that relationship.
Didn't Google previously own a large share? So now there are two gigantic companies each owning a large share, which makes me think each has much less leverage, since Anthropic could seek further funding from the other.
Yeah, I agree that that's a reasonable concern, but I'm not sure what they could possibly discuss about it publicly. If the public, legible, legal structure hasn't changed, and the concern is that the implicit dynamics might have shifted in some illegible way, what could they say publicly that would address that? Any sort of "Trust us, we're super good at managing illegible implicit power dynamics." would presumably carry no information, no?
That it is so difficult for Anthropic to reassure people stems from the contrast between its responsibility-focused mission statements and the hard reality of receiving billions of dollars of profit-motivated investment.
It is rational to draw conclusions by weighting a company's actions more heavily than its PR.
Yeah, I'm very on board with this. I think people tend to put way too much weight on nice-sounding PR rather than just focusing on concrete evidence, past actions, hard commitments, etc. If you focus on nice-sounding PR, then GenericEvilCo can very cheaply gain your favor by manufacturing it for you, whereas actually making concrete commitments is much more expensive.
So yes, I think your opinion of Anthropic should mostly be priors + hard evidence. If you learned that there was an AI lab that had taken a $4B investment from Amazon and had also committed to the LTBT governance structure and a Responsible Scaling Policy, what would you then think about that company, updating on no other evidence? Ditto for OpenAI and Google DeepMind: I think you should judge each of them in approximately the same way. You'll end up relying on your priors a lot if you do this, but you'll also be able to operate much more safely in an epistemic environment where some of the major players might be trying to game your approval.
At first blush, this appears to guarantee Anthropic access to enough compute for at least the next couple of training iterations. I infer the larger training runs are back on.
(Thread continues from there with more details -- seems like a notable development!)