JoshuaZ comments on Using the Copernican mediocrity principle to estimate the timing of AI arrival - Less Wrong Discussion

Post author: turchin, 04 November 2015 11:42AM


Comment author: JoshuaZ, 06 November 2015 07:35:38PM, 0 points

> The most interesting unknown in the future is the time of creation of Strong AI. Our priors are insufficient to predict it because it is such a unique task.

I'm not sure this follows. The primary problems with predicting the rise of Strong AI apply to most other artificial existential risks also.

Comment author: turchin, 06 November 2015 09:30:03PM, 1 point

Many of them may be predicted using the same logic. For example, we may try to estimate the next time nuclear weapons will be used in war, based on the fact that they were used in 1945. That gives roughly a 75 per cent probability of another use within the next 105 years; see also the comment below.
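
A minimal sketch of one way such a figure could be reached (an assumed reconstruction, not necessarily turchin's own calculation): treat wartime use of nuclear weapons as a Poisson process whose rate is estimated from a single use in the roughly 70 years between 1945 and 2015. The probability of at least one further use within a 105-year horizon then comes out near 75 per cent.

```python
import math

# Assumptions, not from the original comment: ~70 years elapsed since 1945,
# and a Poisson model with the rate crudely estimated as one use per 70 years.
years_since_first_use = 70.0   # 1945 to 2015
horizon = 105.0                # years ahead, as in the comment

rate = 1.0 / years_since_first_use            # estimated uses per year
p_next_use = 1.0 - math.exp(-rate * horizon)  # P(at least one use within the horizon)

print(f"P(use within {horizon:.0f} years) ~ {p_next_use:.2f}")  # ~0.78, close to the quoted 75%
```

Other reconstructions (for example, a Gott-style Copernican bound on the time until the next use) give numbers of the same order but not exactly this one, so the sketch above should be read only as an illustration of the style of reasoning.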