Vladimir_Nesov comments on Intelligence Explosion analysis draft: types of digital intelligence - Less Wrong

Post author: lukeprog 14 November 2011 11:07PM




Comment author: Vladimir_Nesov 15 November 2011 02:05:32AM, 6 points

By “disruptions to scientific progress” we have in mind “external” disruptions like catastrophe or a global totalitarianism that prevents the further progress of science (Caplan, 2008). We do not mean to include, for example, Horgan’s (1997) hypothesis that scientific progress may soon stop because there will be nothing left to discover that can be discovered, which we find unlikely.

This sounds strange, as it seems to suggest that you find "global totalitarianism that prevents the further progress of science" more likely than "that scientific progress may soon stop because there will be nothing left to discover", both of which seem extremely improbable and thus hard to compare.

Maybe cite a deadly engineered pandemic as a better example with a short inferential distance (or even a civilization-destroying nuclear war, which seems unlikely, but more plausible than air-tight totalitarianism).

Comment author: Logos01 15 November 2011 11:02:27AM, 0 points

This sounds strange, as it seems to suggest that you find "global totalitarianism that prevents the further progress of science" more likely than "that scientific progress may soon stop because there will be nothing left to discover", both of which seem extremely improbable and thus hard to compare.

I don't know that it's their improbability that makes them hard to compare so much as the relative distance from "today" at which either event might occur.