I suppose you could define "should" that way, but it's not an adequate unpacking of what humans actually talk about when they talk about morality.
Agreed 100% with this.
Of course, it doesn't follow that what humans talk about when we talk about morality has the properties we talk about it having, or even that it exists at all, any more than analogous things follow about what humans talk about when we talk about Santa Claus or YHWH.
you happen to care about(/prefer/terminally value) being moral.
To say "I happen to care about being moral" implies that it could have been some other way... that I might have happened to care about something other than being moral.
That is, it implies that instead of caring about "the life of [my] friends and [my] family and [my] Significant Other and [my]self" and etc. and etc. and etc., the superposition of which is morality (according to EY), I might have cared about... well, I don't know, really. This account of morality is sufficiently unbounded that it's unclear what it excludes that's within the range of potential human values at all.
I mean, sure, it excludes sorting pebbles into prime-numbered heaps, for example. But for me to say "instead of caring about morality, I might have cared about sorting pebbles into prime-numbered heaps" is kind of misleading, since the truth is I was never going to care about it; it isn't the sort of thing people care about. People aren't Pebblesorters (at least, absent brain damage).
And it seems as though, if pebblesorting were the kind of thing that people sometimes cared about, then the account of morality being given would necessarily say "Well, pebblesorting is part of the complex structure of human value, and morality is that structure, and therefore caring about pebblesorting is part of caring about morality."
If this account of morality doesn't exclude anything that people might actually care about, and it seems like it doesn't, then "I happen to care about being moral" is a misleading thing to say. It was never possible that I might care about anything else.
Well, psychopaths don't seem to care about morality so much. So we can at least point to morality as a particular cluster among things people care about.
In You Provably Can't Trust Yourself, Eliezer tried to figure out why his audience didn't understand his meta-ethics sequence even after they had followed him through philosophy of language and quantum physics. Meta-ethics is my specialty, and I can't figure out what Eliezer's meta-ethical position is. At least at one point, professionals like Robin Hanson and Toby Ord couldn't figure it out, either.
Part of the problem is that because Eliezer has gotten little value from professional philosophy, he writes about morality in a highly idiosyncratic way, using terms that would require reading hundreds of his posts to understand. I might understand Eliezer's meta-ethics better if he would just cough up his positions on standard meta-ethical debates: cognitivism, moral motivation, the sources of normativity, moral epistemology, and so on. Nick Beckstead recently told me he thinks Eliezer's meta-ethical views are similar to those of Michael Smith, but I'm not seeing it.
If you think you can help me (and others) understand Eliezer's meta-ethical theory, please leave a comment!
Update: This comment by Richard Chappell made sense of Eliezer's meta-ethics for me.