
JoshuaZ comments on (misleading title removed) - Less Wrong Discussion

-2 Post author: The_Jaded_One 28 January 2015 11:00PM




Comment author: JoshuaZ 29 January 2015 12:26:24AM 1 point

One question that keeps kicking around in my mind: if someone's true but unstated objection to the problem of AI risk is that superintelligence will never happen, how do you change their mind?

Note that superintelligence doesn't by itself pose much of a risk. The risk comes from extreme superintelligence, combined with variants of the orthogonality thesis and an intelligence that can achieve its superintelligence rapidly. The first two of these seem much easier to convince people of than the third, which shouldn't be surprising, because the third is the most questionable. (At the same time, there seems to be a hard core of people who absolutely won't budge on orthogonality. I disagree with such people on fundamental intuitions and other issues so deeply that I'm not sure I can model well what they are thinking.)

Comment author: The_Jaded_One 29 January 2015 07:42:11AM 1 point

The orthogonality thesis, in the form "you can't get an ought from an is", is widely accepted, or at least widely regarded as a respectable position, in public discourse.

It is true that slow superintelligence is less risky, but that argument isn't made explicitly in this letter.