Johnicholas comments on A Less Wrong singularity article? - Less Wrong

28 Post author: Kaj_Sotala 17 November 2009 02:15PM


Comment author: Johnicholas 22 November 2009 04:51:19PM

The topic, as I understand it, is that the "default future" espoused by SIAI and EY focuses too much on entities that look something like HAL or Prime Intellect (and their risks and benefits), and not enough on entities that display superhuman capacities in only some arenas (and their risks and benefits).

In particular, an entity that is powerful in some ways and weak in others could reduce existential risks without itself becoming an existential risk.