wedrifid comments on How to get that Friendly Singularity: a minority view - Less Wrong
This is reasonable, but what is odd to me is the world-conquering part. The justifications I've seen for creating a singleton soon (e.g., that either we build a singleton or we get an unfriendly superintelligence) seem insufficient.
How certain are you that there is no third alternative? Suppose you created an entity that is superhuman in some respects (a task that has already been done many times over) and asked it to find third alternatives. Wouldn't this be a safer, saner, more moral, and more feasible task than conquering the world and installing a singleton?
Note that "entity" isn't necessarily a pure software agent: it could be a computer/human team, or even an organization consisting only of humans interacting in particular ways. Both of these kinds of entity already exist, and both are more capable than humans in some respects.
If a superintelligence is able to find a way to reliably prevent the emergence of a rival, a preventable existential risk, or sufficiently undesirable actions, then by all means it can do that instead.