turchin comments on Using the Copernican mediocrity principle to estimate the timing of AI arrival - Less Wrong Discussion
I suggest that rather than putting "AI is possible" and "exponential growth of research will continue" in as assumptions, it would be better to adjust the conclusion: 95% probability that by 2035 the exponential growth of human AI research will have stopped. This could be (1) because it produced a strongly superhuman AI and declared its job complete, or (2) because we found good reason to believe that AI is actually impossible, or (3) because we found other more exciting things to work on, or (4) because there weren't enough resources to keep the exponential growth going, or (etc.).
I think this framing is better because it emphasizes that there are lots of ways for exponential growth in AI research to stop [EDITED to add: or to slow substantially] other than achieving all the goals of such research.
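For concreteness, here is a minimal sketch (not from the post or either commenter) of how a mediocrity-style bound of the form "95% probability that exponential growth stops by year X" can be computed, assuming cumulative AI research effort grows exponentially and that we sit at a uniformly random fraction of all the effort that will ever occur. The current year and doubling time below are hypothetical placeholders, not figures taken from the original post.

```python
# Illustrative sketch of a Copernican / mediocrity-principle bound.
# Assumptions (not from the post): cumulative research effort grows
# exponentially, and we occupy a uniformly random fraction f of the
# total effort that will ever exist.
import math

CURRENT_YEAR = 2016          # approximate date of the discussion (assumption)
DOUBLING_TIME_YEARS = 4.5    # hypothetical doubling time of cumulative effort
CONFIDENCE = 0.95

# With exponential growth at rate r = ln(2) / doubling_time, being at
# fraction f of the total effort implies the growth stops after
#   t_remaining = -ln(f) / r
# more years.  With probability CONFIDENCE, f >= 1 - CONFIDENCE, which
# gives an upper bound on the remaining time.
r = math.log(2) / DOUBLING_TIME_YEARS
f_lower = 1.0 - CONFIDENCE
t_remaining_upper = -math.log(f_lower) / r

print(f"With {CONFIDENCE:.0%} probability, exponential growth stops within "
      f"{t_remaining_upper:.1f} years, i.e. by about "
      f"{CURRENT_YEAR + t_remaining_upper:.0f}.")
```

With these placeholder parameters the bound comes out near 2035; the point of the sketch is only to show how the conclusion depends on the assumed doubling time, not to reproduce the post's calculation.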
Yes, but we need to add "Humanity goes extinct before this date," which is also possible. A sufficiently large catastrophe, such as a supervirus or a nuclear war, could prevent the creation of AI.
That would be another way for exponential growth in human AI research to stop, yes. You can think of it as one of the options under "(etc.)", or as a special case of "not enough resources".