I had a pretty great discussion with social psychologist and philosopher Lance Bush recently about the orthogonality thesis, which ended up turning into a broader analysis of Nick Bostrom's argument for AI doom as presented in Superintelligence, and some related issues.
While the video is intended for a general audience interested in philosophy, and assumes no background in AI or AI safety, in retrospect I think it may be the clearest and most rigorous interview or essay I've done on this topic. In particular, I'm much more proud of this interview than I am of our recent "Counting arguments provide no evidence for AI doom" post.
Could you give an example of knowledge and skills not being value neutral?
(No need to do so if you're just talking about the value of information depending on the values one has, which is unsurprising. But it sounds like you might be making a more substantial point?)