I recently had a great discussion with social psychologist and philosopher Lance Bush about the orthogonality thesis, which turned into a broader analysis of Nick Bostrom's argument for AI doom as presented in Superintelligence, along with some related issues.
While the video is intended for a general audience interested in philosophy, and assumes no background in AI or AI safety, in retrospect I think it may be the clearest and most rigorous interview or essay I've done on this topic. In particular, I'm much more proud of this interview than I am of our recent Counting arguments provide no evidence for AI doom post.
The one thing I'll say about the orthogonality thesis is that I think it's fairly obviously true, but only because it makes an extremely weak claim: that it is logically possible for an AI to be misaligned. The critical mistake is assuming that this mere possibility translates into a non-negligible likelihood.
The thesis is useful for historical purposes, but it offers no practical help for alignment, because it fails to answer the essential questions, above all, how likely misalignment actually is.