turchin comments on Using the Copernican mediocrity principle to estimate the timing of AI arrival - Less Wrong Discussion

Post author: turchin 04 November 2015 11:42AM


Comment author: gjm 04 November 2015 12:06:09PM * 6 points

I suggest that rather than putting "AI is possible" and "exponential growth of research will continue" in as assumptions, it would be better to adjust the conclusion: a 95% probability that by 2035 the exponential growth of human AI research will have stopped. This could be (1) because the research produced a strongly superhuman AI and its job was complete, or (2) because we found good reason to believe that AI is actually impossible, or (3) because we found other, more exciting things to work on, or (4) because there weren't enough resources to keep the exponential growth going, or (etc.).

I think this framing is better because it emphasizes that there are lots of ways for exponential growth in AI research to stop [EDITED to add: or to slow substantially] other than achieving all the goals of such research.
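For concreteness, here is a minimal sketch (in Python) of how a deadline of roughly this shape could fall out of a mediocrity-style argument over exponentially growing research effort. The five-year doubling time and the assumption that we are a typical unit of cumulative research effort are illustrative guesses, not figures taken from the original post.

    import math

    doubling_time = 5.0   # assumed doubling time of AI research effort, in years (illustrative)
    confidence = 0.95     # mediocrity confidence level
    now = 2015            # year of this discussion

    # If we are a random sample from all AI research effort that will ever exist,
    # then with probability `confidence` the remaining effort is at most
    # confidence / (1 - confidence) times the effort expended so far (19x here).
    max_future_over_past = confidence / (1.0 - confidence)

    # Under continued exponential growth, the effort accumulated over the next t years
    # equals (2**(t / doubling_time) - 1) times all past effort, so the bound is
    # reached when 2**(t / doubling_time) = 1 + max_future_over_past.
    t_years = doubling_time * math.log2(1.0 + max_future_over_past)

    print(f"95% deadline: about {t_years:.1f} years from now, i.e. around {now + t_years:.0f}")
    # -> about 21.6 years, i.e. around 2037 under these assumptions

With these (assumed) numbers the argument says that, with 95% confidence, the era of exponentially growing AI research ends within a couple of decades, whatever the reason for its ending turns out to be.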

Comment author: turchin 04 November 2015 12:14:02PM 0 points

Yes, but we need to add "Humanity goes extinct before this date," which is also possible. A sufficiently large catastrophe, such as a supervirus or a nuclear war, could prevent AI creation.

Comment author: gjm 04 November 2015 01:34:59PM 0 points

That would be another way for exponential growth in human AI research to stop, yes. You can think of it as one of the options under "(etc.)", or as a special case of "not enough resources".