Yes, I see people worrying about AI romantic partners and sexbots, and I think that's probably mostly silly. I think a well-designed AI friend or fantasy-partner would be helpful for a lot of people, especially if it actively encouraged them to be social with others and gave them useful feedback on their social skills. Unlike with the alignment problem, I don't think there's anything fundamental about the situation that pushes against it being healthy for people. Yes, you could certainly build an unhealthy version with a baked-in emotionally manipulative monetization scheme, and I expect that would be bad for people. But a good version could be quite good. I feel hopeful that Inflection's AI companion will turn out well, for instance.
I think this is yet another case where a technology is helpful to some, harmful to some, mixed for many (causing some harms and some benefits), and neutral (unused, or no impact) for the vast majority. This range of impacts exists for most technologies and popular activities.
Since this is fairly new (well, romantic fiction and mental masturbation aren't, but this depth of interactivity is), it makes sense to study both the distribution of uses and the impact of the extremes.
I think it's FINE for people to worry about it - it's not silly on its face (though many of the worriers are silly, just as they were when it was D&D or rock music or regular porn they worried about). I don't think your "especially if" matters to whether we should worry - most worrisome changes benefit some people, and that doesn't obviously mean we shouldn't limit or control them.
A survey of users of Replika, an AI chatbot companion: 23% of users reported that it stimulated rather than displaced interactions with real humans, while 8% reported displacement. 30 participants (3%) spontaneously reported that it stopped them from attempting suicide.
Some excerpts: