Wei_Dai comments on Outline of possible Singularity scenarios (that are not completely disastrous) - Less Wrong

24 Post author: Wei_Dai 06 July 2011 09:17PM




Comment author: Wei_Dai 06 July 2011 11:32:46PM 2 points

In this subscenario, does the AGI eventually become superintelligent? If so, don't we still need a reason why it doesn't disassemble humans at that point, which might be A, B, C or D?

Comment author: hairyfigment 07 July 2011 07:44:37PM 1 point

XiXiDu seemed to place importance on the possibility of "expert systems" that don't count as AGI beating a general intelligence in some area. Since we were discussing risk to humanity, I take this to include the unstated premise that defense could somehow become about as easy as offense, if not easier. (Tell us if that seems wrong, Xi.)

Comment author: Morendil 07 July 2011 07:47:25AM 1 point

I guess it's D that I'm thinking of.