I had a pretty great discussion with social psychologist and philosopher Lance Bush recently about the orthogonality thesis, which ended up turning into a broader analysis of Nick Bostrom's argument for AI doom as presented in Superintelligence, and some related issues.
While the video is intended for a general audience interested in philosophy, and assumes no background in AI or AI safety, in retrospect I think it was possibly the clearest and most rigorous interview or essay I've done on this topic. In particular, I'm much more proud of this interview than I am of our recent "Counting arguments provide no evidence for AI doom" post.
Intuitions about simplicity in regimes where speed is unimportant (e.g. Turing machines with only a minimal speed bound) != intuitions about the Solomonoff prior being malign due to the emergence of life within those Turing machines.
It seems important not to equivocate between these.
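(For concreteness, a standard way to write the Solomonoff prior: for a universal prefix machine $U$, the prior weight of a finite string $x$ sums over all programs whose output begins with $x$:

$$M(x) = \sum_{p \,:\, U(p) = x*} 2^{-|p|}$$

The malignancy worry concerns where this weight concentrates: short programs can simulate rich environments, including ones containing life, which is a different issue from bare simplicity intuitions in speed-insensitive regimes.)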
(Sorry for the terse response, hopefully this makes sense.)