Does it say anything about AI risk that addresses the real risks? (I haven't clicked the links; the text above didn't indicate one way or the other.)
The report mentioned "harm to the global financial system [and to global supply chains]" somewhere as examples, which I found noteworthy for being very large-scale harms, and therefore plausibly requiring the kinds of AI systems that the AI x-risk community is most worried about.
I'm not sure whether the core NIST standards go into catastrophic misalignment risk, but Barrett et al.'s supplemental guidance on the NIST standards does. I was a reviewer on that work, and I believe they have more coming (see the link in my first comment on this post for their first part).
Been in the works for a while. Good to know it's officially out, thanks.
FLI also released a statement on NIST's framework: