I used to consider myself a negative utilitarian (NU), but have since rejected that position.
Part of my rejection was that, on a psychological level, it simply didn't work for me. The notion that everything only has value to the extent that it reduces suffering meant that most of the things I cared about were pointless and meaningless, except for their instrumental value in reducing my suffering or making me more effective at reducing suffering. Doing things I enjoyed while constantly having a nagging sensation of "if I could just learn to no longer need this, then it would be better for everyone" made it very hard to ever enjoy anything. It set my mind up to be a battlefield, dominated by an NU faction trying to suppress any desires which did not directly contribute to reducing suffering, and opposed by an anti-NU faction which couldn't do much, but could at least prevent me from getting any effective NU work done.
Eventually it became obvious that even from an NU perspective, it would be better for me to stop endorsing NU, since that way I might end up actually accomplishing more suffering reduction than if I continued to endorse NU. And I think that this decision was basically correct.
A related reason is that I also rejected the need for a unified theory of value. I still think that if you wanted to reduce human values into a unified framework, then something like NU would be one of the simplest and least paradoxical answers. But eventually I concluded that any simple unified theory of value is likely to be wrong, and also not particularly useful for guiding practical decision-making. I've written more about this here.
Finally, and as a more recent development, I notice that NU neglects to take into account non-suffering-based preferences. My current model of minds and suffering is that minds are composed of many different subagents with differing goals; suffering is the result of different subagents being in conflict (e.g. if one subagent wants to push through a particular global belief update which another subagent does not wish to accept).
This means that I could imagine an advanced version of myself who had gotten rid of all personal suffering, but was still motivated to pursue other goals. Suppose for the sake of argument that I only had subagents which cared about 1) seeing friends and 2) making art. If my subagents reached an agreement to spend 30% of their time making art and 70% of their time seeing friends, then this could in principle eliminate my suffering by removing subagent conflict, but it would still drive me to do things for reasons other than reducing suffering. Thus the argument that suffering is the only source of value fails; the version of me which had eliminated all personal suffering might even be more driven to do things than the current one, since subagent conflict would no longer be blocking action in any situation!
As a practical matter, I still think that reducing suffering is one of the most urgent EA priorities: as long as death and extreme suffering exist in the world, anything that would be called "altruism" should focus its efforts on reducing them. But this is a form of prioritarianism, not NU. I do not endorse NU's prescription that an entirely dead world would be as good as or better than a world with lots of happy entities, simply because there are subagents within me who would prefer to exist and continue to do stuff, and who would also prefer for other people to continue to exist and do stuff if they so prefer. I want us to liberate people's minds from involuntary suffering, and then to let people do whatever they still want to do once suffering is something that people experience only voluntarily.
Seems like all of this could also be said of things like "preferences", "enjoyment", "satisfaction", "feelings of correctness", "attention", "awareness", "imagination", "social modeling", "surprise", "planning", "coordination", "memory", "variety", "novelty", and many other things.
"Preferences" in particular seems like an obvious candidate for 'thing to reduce morality to'; what's your argument for only basing our decisions on dispreference or displeasure and ignoring positive preferences or pleasure (except instrumentally)?
I'm not sure I understand your argument here. Yes, values are complicated and can conflict with each other. But I'd rather try to find reasonable-though-imperfect approximations and tradeoffs than pick a utility function I know doesn't match human values and optimize it instead, just because it's uncomplicated and lets us off the hook for thinking about tradeoffs between the things we ultimately care about.
E.g., I like pizza. You could say that it's hard to list every possible flavor I enjoy in perfect detail and completeness, but I'm not thereby tempted to stop eating pizza, or to try to reduce my pizza desire to some other goal like 'existential risk minimization' or 'suffering minimization'. Pizza is just one of the things I like.
E.g.: I enjoy it. If my friends have more fun watching action movies than rom-coms, then I'll happily say that that's sufficient reason for them to watch more action movies, all on its own.
Enjoying action movies is less important than preventing someone from being tortured, and if someone talks too much about trivial sources of fun in the context of immense suffering, then it makes sense to worry that they're a bad person (or not sufficiently in touch with their compassion).
But I understand your position to be not "torture matters more than action movies", but "action movies would ideally have zero impact on our decision-making, except insofar as they bear on suffering". I gather that from your perspective, this is just taking compassion to its logical conclusion; assigning somewhat more value to saving horrifically suffering people than to enjoying a movie is compassionate, so assigning infinitely more value to the one than the other seems like it's just dialing compassion up to 11.
One reason I find this uncompelling is that I don't think the right way to do compassion is to ignore most of the things people care about. I think that helping people requires doing the hard work of figuring out everything they value, and helping them get all those things. That might reduce to "just help them suffer less" in nearly all real-world decisions nowadays, because there's an awful lot of suffering today; but that's a contingent strategy based on various organisms' makeup and environment in 2019, not the final word on everything that's worth doing in a life.
I'll tell them I care a great deal about suffering, but I don't assign literally zero importance to everything else.
NU people I've talked to often worry about scenarios like torture vs. dust specks, and worry that if we don't treat happiness as having literally zero value, then we might make the wrong tradeoff and cause immense harm.
The flip side is dilemmas like:
Suppose you have a chance to push a button that will annihilate all life in the universe forever. You know for a fact that if you don't push it, then billions of people will experience billions upon billions of years of happy, fulfilling, suffering-free life, filled with richness, beauty, variety, and complexity; filled with the things that make life most worth living, and with relationships and life-projects that people find deeply meaningful and satisfying.
However, you also know for a fact that if you don't push the button, you'll experience a tiny, almost-unnoticeable itch on your left shoulder blade a few seconds later, which will be mildly unpleasant for a second or two before the Utopian Future begins. With this one exception, no suffering will ever again occur in the universe, regardless of whether you push the button. Do you push the button, because your momentary itch matters more than all of the potential life and happiness you'd be cutting out?