There is an obvious comparison to porn here, even though you disclaim 'not catgirls'.
Anyhow I think the merit of such a thing depends on a) value calculus of optimization, and b) amount of time occupied.
a)
b)
Provided that optimization is in the general directions shown above, this doesn't seem to be a significant X-risk. Otherwise it is.
This leaves aside the question of whether the FAI would find this an efficient use of its time. (I'd argue that a superintelligent/augmented human with a firm belief in humanity and a grasp of human values would appreciate the value of this, but I'm not so sure about an FAI, even a strongly friendly AI. It may be that there are higher-level optimizations that can be performed on other systems that would get everyone interacting more healthily [for example, reducing income differentials].)
You're aware that 'catgirls' is local jargon for "non-conscious facsimiles" and therefore the concern here is orthogonal to porn?
Optimization should be for a healthy relationship, not for 'satisfaction' of either party (see CelestAI in Friendship is Optimal for an example of how not to do this).
If you don't mind, please elaborate on what part of "healthy relationship" you think can't be cashed out in preference satisfaction (including meta-preferences)?