Morendil comments on Outline of possible Singularity scenarios (that are not completely disastrous) - Less Wrong

Post author: Wei_Dai 06 July 2011 09:17PM


Comment author: Morendil 06 July 2011 10:55:08PM 5 points

There's a subscenario of c.ii that I think is worth considering: it turns out there is some good theoretical reason why even an AGI with access to, and full-stack understanding of, its own source code cannot FOOM, i.e. some limit on the rate of self-improvement. (Or is this already covered by D?)

Comment author: Wei_Dai 06 July 2011 11:32:46PM 2 points

In this subscenario, does the AGI eventually become superintelligent? If so, don't we still need a reason why it doesn't disassemble humans at that point, which might be A, B, C or D?

Comment author: hairyfigment 07 July 2011 07:44:37PM 1 point

XiXiDu seemed to place importance on the possibility of "expert systems" that don't count as AGI beating a general intelligence in some area. Since we were discussing risk to humanity, I take this to include the unstated premise that defense could somehow become about as easy as offense, if not easier. (Tell us if that seems wrong, Xi.)

Comment author: Morendil 07 July 2011 07:47:25AM 1 point

I guess it's D that I'm thinking of.