No, there's no particular reason to think an FAI would be better at learning than a UFAI analogue, at least not as far as I can see.
I believe you have this backwards - the OP is asking whether an FAI would be worse at learning than a UFAI, because of the additional constraints on its self-improvement. If so:

then a non-Friendly AI would eventually (possibly quite quickly) become smarter than any FAI built.
Of course one of the first actions of a FAI would be to prevent any UFAI from being built at all.
I assumed otherwise because of:
If the rate of learning of an AGI is t then is it correct to assume that the rate of learning of a FAI would be t+x where x > 0,
which says the FAI is learning faster. But your reading would make more sense of the last paragraph.
I may have a habit of assuming that the more precise formulation of a statement is the intended/correct interpretation - which, while great in academia and in applied math, may not be optimal here.
Haven't had one of these for a while. This thread is for questions or comments that you've felt silly about not knowing/understanding. Let's try to exchange info that seems obvious, knowing that due to the illusion of transparency it really isn't so obvious!