I had a pretty great discussion with social psychologist and philosopher Lance Bush recently about the orthogonality thesis, which ended up turning into a broader analysis of Nick Bostrom's argument for AI doom as presented in Superintelligence, and some related issues.
While the video is intended for a general audience interested in philosophy, and assumes no background in AI or AI safety, in retrospect I think it was possibly the clearest and most rigorous interview or essay I've done on this topic. In particular I'm much more proud of this interview than I am of our recent Counting arguments provide no evidence for AI doom post.
As far as the orthogonality thesis goes, relevant context is:
My overall take is that in Nora's/Bush's decomposition, the orthogonality thesis corresponds to "trivial".
However, I would prefer to instead call it "extremely-obvious-from-my-perspective", as indeed some people seem to disagree with this. (Yes, it's very obvious that ASI pursuing arbitrary goals is logically possible! The thesis is intended to be obvious! The strong version as defined by Yudkowsky (there need be nothing especially complicated or twisted about an agent pursuing an arbitrary goal (if that goal isn't massively complex)) is also pretty obvious IMO.)
I agree that people seem to quote the orthogonality thesis as making stronger claims than it actually directly makes (e.g. that misalignment is likely, which is not at all implied by the thesis). And that, awkwardly, people seem to redefine the term in various ways (as noted in Yudkowsky's tweet linked above). So this creates a Motte and Bailey in practice, but this doesn't mean the thesis is wrong. (Edit: Also, I don't recall cases where Yudkowsky or Bostrom did this Motte and Bailey without further argument, but I wouldn't be very surprised to see it, particularly for Bostrom.)
The Orthogonality Thesis is often used in a way that "smuggles in" the idea that an AI will necessarily have a stable goal, even though goals can be very varied. But similar reasoning shows that any combination of goal (in)stability and goallessness is possible as well: mindspace contains agents with fixed goals, agents with randomly drifting goals, corrigible agents (with externally controllable goals), as well as non-agentive minds with no goals.