I'm currently unconvinced either way on this matter. However, enough arguments have been raised that I think the question deserves every reader's serious consideration.
http://nothingismere.com/2014/11/12/inhuman-altruism-inferential-gap-or-motivational-gap/
This article heavily implies that every LessWronger is a preference utilitarian and values the wellbeing, happiness, and non-suffering of every sentient (i.e. non-p-zombie) being. Neither of those claims is fully true for me, and as this ad-hoc survey - https://www.facebook.com/yudkowsky/posts/10152860272949228 - seems to suggest, I may not be alone in that. Namely, I'm actually pretty much OK with animal suffering. I generally don't empathize all that much, but there are plenty of even completely selfish reasons to be nice to humans, whereas that's not really the case for animals. As for non-human intelligent beings - I'll figure that out once I meet them, or once the probability of such an encounter becomes somewhat realistic; currently there's too much ambiguity about them.
I was mainly talking about LessWrongers who care about others (for not-purely-selfish reasons). This is a much milder demand than preference utilitarianism. I'm surprised to hear you don't care about others' well-being -- not even at the System 2 level, setting aside whether you feel swept up in a passionate urge to prevent suffering.
Let me see if I can better understand your position by asking a few questions. Assuming no selfish benefit would accrue to you, would you sacrifice a small amount of your own happiness to prevent the torture of an atom-by-atom replica of yourself?