
wedrifid comments on Recursively Self-Improving Human Intelligence - Less Wrong Discussion

11 Post author: curiousepic 17 February 2011 09:55PM




Comment author: wedrifid 20 February 2011 05:52:47PM 1 point

So any alien uFAI who was able to acquire more raw resources, at the time our FAI reaches the upper bound for intelligence, could subdue our FAI by brute force?

No. Even assuming an overwhelming intelligence advantage, it would not be possible to subdue a competing superintelligence under any physics remotely like the physics we know. The exception, of course, is if you catch it before it is aware of your existence.

Given the capability to travel at a high fraction of the speed of light and to consume most of a star system's resources for future expansion, the speed of light gives a hard minimum on how much of the cosmic commons you can consume before the smarter AI can catch you.
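The catch-up arithmetic behind this can be sketched in a deliberately idealized model: flat space, constant expansion speeds, and hypothetical numbers (the speeds and the 100-year head start below are illustrative assumptions, not anything from the comment). Because the pursuer is also bounded by c, the head start translates into a guaranteed minimum expansion radius before contact.

```python
# Idealized flat-space pursuit. A first AI expands spherically at speed v1
# (as a fraction of c); a smarter AI launches head_start_years later and
# expands at v2, with v1 < v2 <= 1. Since neither can exceed c, the leader
# keeps a guaranteed bubble of consumed resources until it is overtaken.
def catch_up(v1, v2, head_start_years):
    """Return (years until the leader is caught, leader's radius in light-years then)."""
    assert 0 < v1 < v2 <= 1.0, "speeds must be fractions of c with v1 < v2"
    # Leader's radius: r = v1 * t.  Pursuer's radius: r = v2 * (t - T).
    # Setting them equal and solving for t:
    t = v2 * head_start_years / (v2 - v1)
    return t, v1 * t

# Example: a 100-year head start at 0.9c, pursued at 0.99c.
years, radius = catch_up(0.9, 0.99, 100)
print(f"caught after {years:.0f} years, at radius {radius:.0f} light-years")
```

Note how slowly the gap closes when both speeds are near c: even a modest head start buys the slower AI a sphere hundreds of light-years across before it can be reached, which is the "hard minimum" the comment points to.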

The problem then is that having more than one superintelligence, without the ability to cooperate, guarantees the squandering of a lot of the resources that could otherwise have been spent on fun.