jsteinhardt comments on Singularity Non-Fiction Compilation to be Written - Less Wrong

15 Post author: MichaelVassar 28 November 2010 04:49PM


Comment author: jsteinhardt 28 November 2010 11:35:08PM 8 points

Does an intelligence explosion pose a genuine existential risk, or did Alan Turing, Stephen Hawking, and Alvin Toffler delude themselves with visions 'straight from Cloud Cuckooland'?

This seems like a pretty leading statement, since it (a) presupposes that an intelligence explosion will happen, and (b) pits anyone who disagrees about the likely x-risk factor against Turing and Hawking.

Comment author: ata 29 November 2010 01:58:32AM 5 points

Did Turing or Hawking ever talk about AI as an existential risk? I thought that sort of concern came after Turing's time, and I vaguely recall Hawking saying something to the effect that he thought AI was possible and carried risks, but not going so far as to specifically claim that it may be a serious threat to humanity's survival.

Comment author: wedrifid 29 November 2010 01:19:08AM 1 point

It doesn't quite do (a), although there is ambiguity there that could be removed if desired. (It obviously does (b).)