wedrifid comments on Recursively Self-Improving Human Intelligence - Less Wrong
No. Even assuming overwhelming intelligence superiority, it would not be possible to subdue a competing superintelligence under any physics remotely like the physics we know. Except, of course, if you catch it before it is aware of your existence.
Given the ability to travel at a high fraction of the speed of light and to consume most of a star system's resources for further expansion, the speed of light sets a hard minimum on how much of the cosmic commons you can consume before the smarter AI catches up with you.
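To make that light-speed bound concrete, here is a minimal Monte-Carlo sketch of the geometry (a toy model, not from the original comment): expander A launches at 0.9c with a head start, while a smarter pursuer B launches later at the physical maximum of c. A permanently keeps every point its front reaches first; because B is faster, that region is finite, which is the "hard minimum" in question. All parameters (speeds, head start, separation) are illustrative assumptions.

```python
import numpy as np

# Toy geometry (illustrative assumptions, not from the original comment):
# expander A starts at the origin at t = 0 and spreads at vA = 0.9c.
# A smarter pursuer B sits D light-years away and launches T years later
# at the physical maximum vB = c.  A point is permanently claimed by
# whichever expansion front reaches it first, so A keeps every point where
#     |x - pA| / vA  <=  T + |x - pB| / vB.
# Because vB > vA, this region is bounded: the hard minimum A is guaranteed.

rng = np.random.default_rng(0)

vA, vB = 0.9, 1.0      # expansion speeds as fractions of c (assumed)
T = 100.0              # A's head start in years (assumed)
D = 1000.0             # separation between A and B in light-years (assumed)
R = 10000.0            # sampling radius, large enough to contain A's region

pA = np.array([0.0, 0.0, 0.0])
pB = np.array([D, 0.0, 0.0])

# Sample points uniformly inside a ball of radius R around A.
n = 2_000_000
dirs = rng.normal(size=(n, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
pts = pA + dirs * (R * rng.random(n)[:, None] ** (1.0 / 3.0))

tA = np.linalg.norm(pts - pA, axis=1) / vA        # A's arrival time
tB = T + np.linalg.norm(pts - pB, axis=1) / vB    # B's arrival time

ball_volume = 4.0 / 3.0 * np.pi * R**3
claimed = np.mean(tA <= tB) * ball_volume
print(f"A is guaranteed roughly {claimed:.3g} cubic light-years")
```

Shrinking the head start or the speed gap shrinks A's guaranteed volume; if the pursuer is no faster than A, the region A reaches first becomes unbounded and the race never fully resolves.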
The problem, then, is that having more than one superintelligence - without the ability to cooperate - guarantees the squandering of a great deal of the resources that could otherwise have been spent on fun.