
XiXiDu comments on Recursively Self-Improving Human Intelligence

Post author: curiousepic, 17 February 2011 09:55PM (11 points)




Comment author: XiXiDu, 18 February 2011 09:58:08AM (3 points)

We're probably not at the peak of the fitness landscape for mammalian intelligence, but I'd be surprised if we weren't reasonably close to a local maximum;

This makes me wonder: if we create an FAI and there are alien uFAIs out there, will they be more intelligent because they have had more time, or is there an overall limit on general intelligence? I suppose that if there is a hard limit on self-improvement for any kind of general intelligence, then beyond that point all that matters is the acquisition of resources. So could any alien uFAI that had acquired more raw resources, by the time our FAI reaches the upper bound on intelligence, subdue our FAI by brute force?

Comment author: wedrifid, 20 February 2011 05:52:47PM (1 point)

So could any alien uFAI that had acquired more raw resources, by the time our FAI reaches the upper bound on intelligence, subdue our FAI by brute force?

No. Even assuming an overwhelming superiority in intelligence, it would not be possible to subdue a competing superintelligence under any physics remotely like the physics we know. Except, of course, if you catch it before it is aware of your existence.

Given the ability to travel at a high fraction of the speed of light, and to consume most of a star system's resources for further expansion, the speed of light guarantees a hard minimum on how much of the cosmic commons you can consume before the smarter AI can catch you.
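A toy calculation makes that bound concrete. Assume (these assumptions are mine, for illustration; the comment doesn't spell out a model) that both AIs expand spherically at the same fraction v of lightspeed, the rival sits d light-years away, and it launched h years before us. Intelligence then drops out of the picture entirely, and simple geometry fixes the region each side is guaranteed to reach first:

```python
# Toy model of the light-speed bound on claiming the cosmic commons.
# Assumptions (illustrative, not from the comment): both AIs expand
# at the same fraction v of lightspeed; the rival sits d light-years
# away and launched h years before we did. With distances in
# light-years and times in years, a speed v means v light-years/year.

def guaranteed_radius(d: float, v: float, h: float) -> float:
    """Radius (toward the rival) that our front reaches before its does.

    Our front reaches radius r at time r / v after our launch.
    The rival's front reaches the same point at (d - r) / v - h,
    since it started h years earlier. Setting the two equal gives
    the break-even radius (d - v * h) / 2; beyond it the rival wins.
    """
    return max(0.0, (d - v * h) / 2)

if __name__ == "__main__":
    d, v = 1000.0, 0.5                # rival 1000 ly away, both at 0.5c
    for h in (0.0, 500.0, 2000.0):    # rival head starts, in years
        r = guaranteed_radius(d, v, h)
        print(f"head start {h:6.0f} yr -> guaranteed radius {r:6.1f} ly")
```

With d = 1000 and v = 0.5, a rival head start of 500 years still leaves us a guaranteed 375-light-year radius, and only a head start of d / v = 2000 years shrinks our share to zero. Under these assumptions only speed and head start matter, not intelligence, which is the sense in which lightspeed imposes a hard minimum.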

The problem, then, is that having more than one superintelligence, without the ability to cooperate, guarantees squandering a lot of the resources that could otherwise have been spent on fun.