SoerenE comments on Estimating the probability of human extinction - Less Wrong Discussion

5 Post author: philosophytorres 17 February 2016 04:19PM


Comment author: SoerenE 19 February 2016 08:07:58AM  -1 points

Intelligence, Artificial Intelligence, and Recursive Self-Improvement are all likely poorly defined. But since we can point to concrete examples of each, this is a problem in the map, not the territory. These things exist, and different versions of them will exist in the future.

Superintelligences do not exist, and it is an open question whether they ever will. Bostrom defines a superintelligence as "an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills." While this definition has a lot of fuzzy edges, it is conceivable that we could one day point to a specific intellect and confidently say that it is superintelligent. I feel that this, too, is a problem in the map, not the territory.

I was wrong to assume that you meant superintelligence when you wrote "godhood," and I hope you will forgive me for sticking with "superintelligence" for now.