Morendil comments on Outline of possible Singularity scenarios (that are not completely disastrous) - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (40)
In this subscenario, does the AGI eventually become superintelligent? If so, don't we still need a reason why it doesn't disassemble humans at that point, which might be A, B, C or D?
I guess it's D that I'm thinking of.