Vladimir_Nesov comments on How to get that Friendly Singularity: a minority view - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
This is reasonable - but what is odd to me is the world-conquering part. The justifications that I've seen for creating a singleton soon (e.g. either we have a singleton or we have unfriendly superintelligence) seem insufficient.
How certain are you that there is no third alternative? Suppose that you created an entity which is superhuman in some respects (a task that has already been done many times over) and asked it to find third alternatives. Wouldn't this be a safer, saner, more moral, and more feasible task than conquering the world and installing a singleton?
Note that "entity" isn't necessarily a pure software agent - it could be a computer/human team, or even an organization consisting only of humans interacting in particular ways - both of these kinds of entity already exist, and are more capable than humans in some respects.
By the way, the critical distinction is that with AGI you are automating the whole decision-making cycle, while other kinds of tools only improve some portion of the cycle, under human control or at least with humans somewhere in the algorithm.