JoshuaZ comments on (misleading title removed) - Less Wrong Discussion
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (8)
Note that superintelligence doesn't by itself pose much of a risk. The risk comes from extreme superintelligence, together with variants of the orthogonality thesis, and an intelligence that is able to achieve its superintelligence rapidly. The first two of these seem much easier to convince people of than the third, which shouldn't be surprising, since the third really is the most questionable. (At the same time, there seems to be a hard core of people who absolutely won't budge on orthogonality. I disagree with such people on such fundamental intuitions and other issues that I'm not sure I can model well what they are thinking.)
The orthogonality thesis, in the form "you can't get an ought from an is", is widely accepted, or at least widely regarded as a respectable position, in public discourse.
It is true that slow superintelligence is less risky, but that argument isn't made explicitly in this letter.