JoshuaZ comments on Using the Copernican mediocrity principle to estimate the timing of AI arrival - Less Wrong Discussion
You are viewing a comment permalink. View the original post to see all comments and the full post content.
I'm not sure this follows. The primary problems with predicting the rise of Strong AI also apply to most other artificial existential risks.
Many of them may be predicted using the same logic. For example, we may try to estimate the next time nuclear weapons will be used in war, based on the fact that they were used once, in 1945. This yields a 75 per cent probability for the next 105 years. See also a comment below.
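The Copernican (Gott) delta-t reasoning behind such figures can be sketched as follows. This is a minimal illustration, not the commenter's exact calculation: the comment does not state which past duration it used, so the 70-year input (1945 to roughly the time of writing) and the function name are assumptions.

```python
def gott_future_bound(t_past, confidence):
    """Upper bound on the remaining duration t_future such that
    P(t_future <= bound) = confidence, under the Copernican assumption
    that r = t_past / (t_past + t_future) is uniform on (0, 1)."""
    # P(t_future <= k * t_past) = P(r >= 1/(1+k)) = k/(1+k) = confidence
    # => k = confidence / (1 - confidence)
    k = confidence / (1.0 - confidence)
    return k * t_past

# Illustrative inputs: 70 years since the 1945 use, 75% confidence.
print(gott_future_bound(70, 0.75))  # prints 210.0
```

At 75 per cent confidence the bound is always three times the assumed past duration; the resulting number of years therefore depends entirely on which past interval one plugs in.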