Vladimir_Nesov comments on Intelligence Explosion analysis draft: types of digital intelligence - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
This sounds strange, as it seems to suggest that you find "global totalitarianism that prevents the further progress of science" more likely than "scientific progress may soon stop because there will be nothing left to discover", both of which seem extremely improbable and thus hard to compare.
Maybe cite a deadly engineered pandemic as a better example with a short inferential distance (or even a civilization-destroying nuclear war, which seems unlikely, but is still more plausible than air-tight totalitarianism).
I don't know that it's their improbability that makes them hard to compare so much as how far from "today" either event may lie.