gwern comments on Total Nano Domination - Less Wrong

11 Post author: Eliezer_Yudkowsky 27 November 2008 09:54AM

Comment author: gwern 30 November 2008 06:53:27PM 1 point [-]

> Of course, the true upper limit might be much higher than current human intelligence. But if there exists any upper bound, it should influence the "FOOM" scenario. Then a 30-minute head start would only mean arriving at the upper bound 30 minutes earlier.

Rasmus Faber: plausible upper limits for the ability of intelligent beings include such things as destroying galaxies and creating private universes.

What stops an Ultimate Intelligence from simply turning the Earth (and each competitor) into a black hole in those 30 minutes of nigh-omnipotence? Even a very weak intelligence could simply analyze the OS used by the rival researchers and break in. Did they keep no backups? Oops; game over, man, game over. Did they keep backups? Great, but now the intelligence has still bought itself a good fraction of an hour (it just takes time to transfer large amounts of data). Maybe even more, depending on how untried and manual their backup system is. And so on.