In one sense, this is trivial. I have to take you into account when I do something to you, just like I have to take rocks into account when I do something to them. You're part of a state of the world. (It may be the case that after taking rocks into account, it doesn't affect my decision in any way. But my decision can still be formulated as taking rocks into account.)
In another sense, whether I should take your well-being into account depends on my values. If I'm Clippy, then I shouldn't. If I'm me, then I should.
Otherwise you are using morality to mean hedonism.
Hedonism makes action-guiding claims about what you should do, so it's a form of morality, but it doesn't by itself mean that I shouldn't take you into account - it only means that I should take your well-being into account instrumentally, to the degree it gives me pleasure. Also, the fulfillment of one's values is not synonymous with hedonism. A being incapable of experiencing pleasure, such as some form of Clippy, has values but acting to fulfill them would not be hedonism.
Whether or not you morally-should take me into account does not depend on your values; it depends on what the correct theory of morality is. "Should" is not an unambiguous term with a free variable for "to whom". It is an ambiguous term: morally-should is not hedonistically-should, which is not practically-should, etc.
On Wei_Dai's complexity of values post, Toby Ord writes:
The kind of moral realist positions that apply Occam's razor to moral beliefs are a lot more extreme than most philosophers in the cited survey would sign up to, methinks. One such position that I used to have some degree of belief in is:
Strong Moral Realism: All (or perhaps just almost all) beings, human, alien or AI, when given sufficient computing power and the ability to learn science and get an accurate map-territory morphism, will agree on what physical state the universe ought to be transformed into, and therefore they will assist you in transforming it into this state.
But most modern philosophers who call themselves "realists" don't mean anything nearly this strong. They mean that there are moral "facts", for varying definitions of "fact" that typically fade away into meaninglessness on closer examination, and that actually make the same empirical predictions as antirealism.
Suppose you take up Eliezer's "realist" position. Arrangements of spacetime, matter and energy can be "good" in the sense that Eliezer has a "long-list" style definition of goodness up his sleeve, one that decides even contested object-level moral questions like whether abortion should be allowed: it tests any arrangement of spacetime, matter and energy, notes to what extent it fits the criteria in Eliezer's long list, and then decrees goodness or not (possibly with a scalar rather than binary value).
This kind of "moral realism" behaves, to all intents and purposes, like antirealism.
I might compare the situation to Eliezer's blegg post: it may be that moral philosophers have a mental category for "fact" that seems to be allowed to have a value even once all of the empirically grounded surrounding concepts have been fixed. These might be concepts such as "Would aliens also think this thing?", "Can it be discovered by an independent agent who hasn't communicated with you?", "Do we apply Occam's razor?", etc.
Moral beliefs might work better when they have a Grand Badge Of Authority attached to them. Once all the empirically falsifiable candidates for the Grand Badge Of Authority have been falsified, the only one left is the ungrounded category marker itself, and some people like to stick this on their object level morals and call themselves "realists".
Personally, I prefer to call a spade a spade, but I don't want to get into an argument about the value of an ungrounded category marker. Suffice it to say that for any practical matter, the only parts of the map we should argue about are the parts that map onto a part of the territory.