Haven't had one of these for a while. This thread is for questions or comments that you've felt silly about not knowing or understanding. Let's try to exchange info that seems obvious, knowing that thanks to the illusion of transparency it often isn't so obvious after all!
If the time an AGI needs to learn something is t, is it correct to assume that an FAI would need t + x, where x > 0, given the additional constraints it has to satisfy?
If this is the case, then a non-Friendly AI would eventually (possibly quite quickly) become smarter than any FAI built. Are there upper limits on intelligence, or would there be diminishing returns as intelligence grows?
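To make the compounding intuition behind "possibly quite quickly" concrete, here's a toy sketch (purely illustrative; the 10% rate, the 5% overhead, and the diminishing-returns schedule are all made-up numbers, not anyone's model of a real AI):

```python
# Toy model: compound growth of capability under repeated self-improvement,
# with and without a per-step overhead, and with optional diminishing returns.
# All numbers are invented for illustration.

def capability_after(steps, rate, overhead=0.0, diminishing=False):
    """Capability after `steps` rounds of self-improvement.

    rate        -- fractional improvement per step (0.10 = 10%)
    overhead    -- fraction of each step's gain lost to extra constraints
    diminishing -- if True, gains taper off as the process goes on
    """
    c = 1.0
    for i in range(steps):
        gain = rate * (1 - overhead)
        if diminishing:
            gain /= 1 + i / 10  # later steps yield smaller improvements
        c *= 1 + gain
    return c

ufai = capability_after(100, rate=0.10)                # no overhead
fai = capability_after(100, rate=0.10, overhead=0.05)  # loses 5% of each gain
print(f"ratio after 100 steps: {ufai / fai:.1f}x")     # gap keeps compounding

ufai_d = capability_after(100, rate=0.10, diminishing=True)
fai_d = capability_after(100, rate=0.10, overhead=0.05, diminishing=True)
print(f"with diminishing returns: {ufai_d / fai_d:.2f}x")  # gap grows far more slowly
```

The toy numbers don't matter; the point is just that a constant fractional handicap compounds exponentially if returns keep compounding, and stays modest if returns diminish, which is exactly the open question you're pointing at.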
No, there's no particular reason to think an FAI would be any better (or worse) at learning than a UFAI analogue, at least not as far as I can see.
However, one of the problems that needs to be solved for FAI (stable self-modification) could well make an FAI's rate of self-improvement faster than that of a comparable AI which has not solved that problem. There are other questions that need to be answered there (does the AI realize that modifications may go wrong and therefore not self-modify? If it's smart enough to notice the problem, won't its first step be to solve it?), ...
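A similarly hand-wavy sketch of that second point, with invented probabilities: an agent that has solved stable self-modification just keeps improving, while one that hasn't either holds back or occasionally makes a modification that goes wrong.

```python
# Toy model (invented numbers): self-improvement with vs. without stable
# self-modification. The "reckless" agent sometimes botches a modification
# and loses capability; the "cautious" agent skips most risky modifications.
import random

def run(steps, p_attempt, p_botch, gain=0.10, loss=0.30, seed=0):
    """Capability after `steps`: each step, attempt a self-modification with
    probability p_attempt; each attempt fails (costing `loss`) with p_botch."""
    rng = random.Random(seed)
    c = 1.0
    for _ in range(steps):
        if rng.random() < p_attempt:
            c *= (1 - loss) if rng.random() < p_botch else (1 + gain)
    return c

stable = run(200, p_attempt=1.0, p_botch=0.0)    # has solved stable self-modification
reckless = run(200, p_attempt=1.0, p_botch=0.2)  # modifies anyway, 20% of changes go wrong
cautious = run(200, p_attempt=0.3, p_botch=0.2)  # mostly refrains, still risky when it tries
print(f"stable: {stable:.3g}, reckless: {reckless:.3g}, cautious: {cautious:.3g}")
```

Again, pure illustration: whether "notice the problem and hold back" or "notice the problem and solve it first" is what actually happens is the open question in the parenthetical above.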