The idea of an "aligned superintelligence" seems misguided
I’ve been reading LessWrong for a while, and one idea doesn’t make much sense to me: that it is possible to align an AGI/superintelligence at all. I understand that probably not even a majority of discussion on LW is optimistic about the prospect...