Introduction: some contemporary AI governance context
It’s a confusing time in AI governance. Several countries’ governments recently changed hands. DeepSeek and other technical developments have called into question certain assumptions about the strategic landscape. Political discourse has swung dramatically away from catastrophic risk and toward framings of innovation and national competitiveness.
Meanwhile, the new governments have issued statements of policy, and AI companies (mostly) continue to publish or update their risk evaluation and mitigation approaches. Interpreting these words and actions has become an important art for AI governance practitioners: does the phrase “human flourishing” in the new executive order signal concern about superintelligence, or just that we should focus on AI’s economic and medical potential and not “hand-wring” about safety? How seriously should we take the many references to safety in the UK’s AI Opportunities Action Plan, given the unreserved AI optimism in the announcement? Does Meta’s emphasis on “unique” risks take into account whether a model’s weights are openly released? The answers matter not only for predicting future actions but also for influencing them: it’s useful to know an institution’s relative appetite for different kinds of suggestions, e.g. more export controls versus maintaining Commerce’s reporting requirements.
So, many people who work in AI governance spend a lot of time trying to read between the lines of these public statements, talking to their contacts at these institutions, and comparing their assessment of the evidence with others’. This means they can wind up with a lot of non-public information — and often, they also have lots of context that casual observers (or people who are doing heads-down technical work in the Bay) might not have.
All of that is to say: if you hear someone express a view about how an institution is thinking about AI (or many other topics), you might be tempted to update your own view towards theirs, especially if they have expertise or non-public information. And, of course, this is sometimes the correct response.
But this post argues that you should take these claims with a grain of salt. The rest of the post shifts to a much higher level of abstraction than the above, in part because I don’t want to “put anyone on blast,” and in part because this is a general phenomenon. Note that many of the reasons below are generic reasons to doubt claims you can’t independently verify, but some of them are specific to powerful institutions.
Biases towards claiming agreement with one’s own beliefs
Let’s say you hear Alice say that a powerful institution (like a political party, important company, government, etc.) agrees with her position on a controversial topic more than you might think.
If you have reason to think that Alice knows more about that institution than you do, or just has some information that you don’t have, you might be inclined to believe Alice and update your views accordingly: maybe that institution is actually more sympathetic to Alice’s views than you realized!
This might be true, of course. But I’d like to point out a few reasons to be skeptical of this claim.

For one thing, if the institution is widely trusted, respected, high status, etc., as well as powerful, then if Alice convinces you that it supports her beliefs, you might be inclined to give more credence to Alice’s beliefs. That would serve Alice’s political agenda.
Weaker biases towards claiming disagreement with one’s own beliefs
Now imagine that you hear Bob, who agrees with Alice’s view, make the opposite claim: actually, the institution disagrees with us!
Not all of the same factors above apply – and I think, on net, these effects are stronger for those claiming agreement than disagreement, roughly in proportion to how powerful the institution is. But some of them still do, at least for some permutation. For example, if the institution is widely hated—al-Qaeda, the CIA, the KGB—or considered low status, crazy, and so on, then if Bob convinces you that the institution opposes his (and Alice’s) beliefs, that might make you more sympathetic to him, make you distrust arguments against those beliefs, and/or defuse preexisting arguments that support for their position comes mostly from these evil/crazy institutions.

Beyond these, it’s also worth considering that how much an “institution” holds a view on average may not matter nearly as much as how the powerful decision makers within or above that institution feel.
Conclusion
I wouldn’t totally dismiss either claim, especially if Alice/Bob do have some private information, even if I knew that they had many of these biases. Claims like theirs are a valuable source of evidence. But I would take both claims (especially Alice’s) with a grain of salt, and if the strength of these claims were relevant for an important decision, I’d consider whether and to what extent these biases might be at play. This means giving a bit more weight to my own prior views of the institution and my own interpretations of the evidence, albeit only to the extent that I think biases like the above apply less to me than to the source of the claim.
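One way to make the “grain of salt” concrete (purely as a toy illustration; nothing in the argument above depends on it) is to think in odds form: Alice’s claim is evidence E for the hypothesis H that the institution really does agree with her, and the biases above shrink the likelihood ratio toward 1, because a biased Alice would assert agreement fairly often even when H is false.

\[
\frac{P(H \mid E)}{P(\neg H \mid E)} \;=\; \frac{P(E \mid H)}{P(E \mid \neg H)} \times \frac{P(H)}{P(\neg H)}
\]

For example, if you think Alice would claim agreement with probability 0.9 when the institution really does agree but with probability 0.45 when it doesn’t (because of the incentives above), the likelihood ratio is 2, so a 20% prior rises only to about 33%, not to anything like certainty. Bob’s claim of disagreement gets the same treatment, just with a ratio closer to 1, since the corresponding biases are weaker.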