I'm leaning towards a hedonistic view, though, and one reason for this has to do with my view on personal identity. I don't think the concept makes any sense.
I consider nearly all arguments of the form "X is not a coherent concept, therefore we ought not to care about it" to be invalid. I don't mean to give offense, but such arguments seem to me to be a form of pretending to be wise. This is especially true if X has predictive power, if knowing something is X can cause you to correctly anticipate your experiences. And you have to admit, knowing someone is the same person as someone you've encountered before makes you more likely to be able to predict their behavior.
Arguments that challenge the coherency of a concept typically work by asking a series of questions that our intuitions about the concept cannot readily answer, creating a sense of dumbfoundedness. They then take the inability to answer the questions readily as evidence of incoherence, rather than thinking further and actually trying to answer them. These arguments also frequently appeal to the fallacy of gray, assuming that because there is no clear-cut border between two concepts or things, no distinction between them can exist.
This fact was brought home to me when I came across discussions of racism that argued that racism was wrong because "race" was not a coherent concept. The argument initially appealed to me because it hit a few applause lights, such as "racism is bad," and "racists are morons." However, I became increasingly bothered because I was knowledgeable about biology and genetics, and could easily see several simple ways to modify the concept of "race" into something coherent. It also seemed to me that the reason racism was wrong was that preference violation and suffering were bad regardless of the race of the person experiencing them, not because racists were guilty of incoherent reasoning. I realized that the argument was a terrible one, and could only be persuasive if one was predisposed to hate racism for other reasons.
The concept of personal identity makes plenty of sense. I've read Parfit too, and read all the questions about the nature of personal identity. I then proceeded to actually answer those questions and developed a much better understanding of what exactly it is I value when I say I value personal identity. To put it (extremely) shortly:
There are entities that have preferences about the future. These preferences include preferences about how the entity itself will change in the future (Parfit makes a similar point when he discusses "global preferences"). These preferences constitute a "personal identity." If an entity changes we don't need to make any references to the changed entity being the "same person" as a past entity. We simply take into account whether the change is desirable or not. I write much more about the subject here.
I don't think my present self has any privileged (normative) authority over my future selves
I don't necessarily think that either. That's why I want to make sure that the future self that I turn into remains similar to my present self in certain ways, especially in his preferences. That way the issue won't ever come up.
because when I just think in terms of consciousness-moments
This might be your first mistake. We aren't just consciousness-moments. We're utility functions, memories, personalities and sets of values. Our consciousness-moments are just the tip of the iceberg. That's one reason why it's still immoral to violate a person's preferences when they're unconscious: their values still exist somewhere in their brain, even when they're not conscious.
I find it counterintuitive why preferences (as opposed to suffering) would be what is relevant.
It seems obvious to me that they're both relevant.
I consider nearly all arguments of the form "X is not a coherent concept, therefore we ought not to care about it" to be invalid.
I agree; I'm not saying you ought not care about it. My reasoning is different: I claim that people's intuitive notion of personal identity is nonsense, much as the concept of free will is nonsense. There is no numerically identical thing existing over time, because there is no way such a notion could make sense in the first place. Now, once someone realises this, he/she can either choose to group all th...
Summary: The term 'effective altruist' invites confusion between 'the right thing to do' and 'the thing that most efficiently promotes welfare.' I think this creeping utilitarianism is a bad thing, and should at least be made explicit. This is not to accuse anyone of deliberate deception.
Over the last year or so, the term 'Effective Altruist' has come into use. I self-identified as one on the LW survey, so I speak as a friend. However, I think there is a very big danger with the terminology.
The term 'Effective Altruist' was born out of the need for a label for those people who were willing to dedicate their lives to making the world a better place in rational ways, even if that meant doing counter-intuitive things, like working as an Alaskan truck driver. The previous term, 'really super awesome hardcore people', was indeed a little inelegant.
However, 'Effective Altruist' has a major problem: it refers to altruism, not ethics. Altruism may be a part of ethics (though the etymology of the term gives some concern), but it is not all there is to ethics. Value is complex. Helping people is good, but so is truth, and justice, and freedom, and beauty, and loyalty, and fairness, and honor, and fraternity, and tradition, and many other things.
A charity that very efficiently promoted beauty and justice, but only inefficiently produced happiness, would probably not be considered an EA organization. A while ago I suggested to [one of the leaders of the Center for Effective Altruism] the creation of a charity to promote promise-keeping. I didn't claim such a charity would be an optimal way of promoting happiness, and to them, this was sufficient to show 1) that it was not EA - and hence 2) inferior to EA things.
Such thinking involves either an equivocation or a concealed premise. If 'EA' is interpreted literally, as 'the primary/driving goal is to help others', then something not being EA is insufficient for it to not be the best thing you could do - there is more to ethics and the good than altruism and promoting welfare. Failure to promote one dimension of the good doesn't mean you're not the optimal way of promoting their sum. On the other hand, if 'EA' is interpreted broadly, as being concerned with 'happiness, health, justice, fairness and/or other values', then merely failing to promote welfare/happiness does not mean a cause is not EA. Much EA discussion, like on the popular facebook group, equivocates between these two meanings.*
...Unless one thought that helping people was all there was to ethics, in which case this is not equivocation. As virtually all of CEA's leaders are utilitarians, it is plausible that this was the concealed premise in their argument. In this case, there is no equivocation, but a different logical fallacy, that of an omitted premise, has been committed. And we should be just as wary as in the case of equivocation.
Unfortunately, utilitarianism is false, or at least not obviously true. Something can be the morally best thing to do while not being EA. Just because some utilitarians have popularized a term which cleverly equivocates between "promotes welfare" and "is the best thing" does not mean we should be taken in. Every fashionable ideology likes to blur the lines between its goals and its methods (is Socialism about helping the working man or about state ownership of industry? is libertarianism about freedom or low taxes?) in order to make people who agree with the goals forget that there might be other means of achieving them.
There are two options: recognize 'EA' as referring to only a subset of morality, or recognize as 'EA' those actions and organizations that are ethical through ways other than producing welfare/happiness.
* Yes, one might say that promoting X's honor thereby helped X, and thus there was no distinction. However, I think people who make this argument in theory are unlikely to observe it in practice - I doubt that there will ever be an EA organisation dedicated to pure retribution, even if retribution were both extremely cheap to promote and a part of ethics.