Let's imagine you solve FAI tomorrow, but not AGI. (I see it as highly improbable that anyone will meaningfully solve FAI before solving AGI, but let's explore that optimistic scenario.) Meanwhile, various people and institutions out there are ahead of you in AGI research by however much time you've spent on FAI, and at least one of them won't care about FAI.
I have a hard time imagining any outcome from that scenario that doesn't involve you wishing you'd been working on AGI and gotten there first. How do you imagine the outcome?
"worked on" != "solved"
(In addition, MIRI claims that an FAI could be easier to implement than an AGI in general; i.e., that solving the philosophical difficulties regarding FAI would also make it easier to create an AGI of any kind. For example, MIRI's specific most-likely scenario for the creation of an AGI is a sub-human AI that self-modifies to become smarter very quickly; MIRI's research on modeling self-modification, while aimed at solving one specific problem that stands in the way of Friendliness, also has potential applications to understanding self-modification in general.)