I used to consider myself NU, but have since rejected it.
Part of my rejection was that, on a psychological level, it simply didn't work for me. The notion that everything only has value to the extent that it reduces suffering meant that most of the things I cared about were pointless and meaningless, except for their instrumental value in reducing my suffering or making me more effective at reducing suffering. Doing things which I enjoyed, while constantly having a nagging sensation of "if I could just learn to no longer need this, then it would be better for everyone", meant that it was very hard to ever enjoy anything. It basically set my mind up to be a battlefield, dominated by an NU faction trying to suppress any desires which did not directly contribute to reducing suffering, and opposed by an anti-NU faction which couldn't do much, but could at least prevent me from getting any effective NU work done.
Eventually it became obvious that, even from an NU perspective, it would be better for me to stop endorsing NU, since that way I might end up accomplishing more suffering reduction than if I continued to endorse it. And I think that this decision was basically correct.
A related reason is that I also rejected the need for a unified theory of value. I still think that if you wanted to reduce human values into a unified framework, then something like NU would be one of the simplest and least paradoxical answers. But eventually I concluded that any simple unified theory of value is likely to be wrong, and also not particularly useful for guiding practical decision-making. I've written more about this here.
Finally, and as a more recent development, I notice that NU neglects to take into account non-suffering-based preferences. My current model of minds and suffering is that minds are composed of many different subagents with differing goals; suffering is the result of different subagents being in conflict (e.g. if one subagent wants to push through a particular global belief update which another subagent does not wish to accept).
This means that I could imagine an advanced version of myself who had gotten rid of all personal suffering, but was still motivated to pursue other goals. Suppose for the sake of argument that I only had subagents which cared about 1) seeing friends and 2) making art. Now if my subagents reached an agreement to spend 30% of their time making art and 70% of their time seeing friends, then this could in principle eliminate my suffering by removing subagent conflict, but it would still drive me to do things for reasons other than reducing suffering. Thus the argument that suffering is the only source of value fails; the version of me which had eliminated all personal suffering might be more driven to do things than the current one, since subagent conflict would no longer be blocking action in any situation!
As a practical matter, I still think that reducing suffering is one of the most urgent EA priorities: as long as death and extreme suffering exist in the world, anything that would be called "altruism" should focus its efforts on reducing them. But this is a form of prioritarianism, not NU. I do not endorse NU's prescription that an entirely dead world would be equally good as, or better than, a world with lots of happy entities, simply because there are subagents within me who would prefer to exist and continue to do stuff, and who would also prefer for other people to continue to exist and do stuff if they so wish. I want us to liberate people's minds from involuntary suffering, and then to let people do whatever they still want to do once suffering is a thing that people experience only voluntarily.
Yes, in terms of how others may explicitly defend the terminal value of even preferences (tastes, hobbies), instead of defending only terminal virtues (health, friendship), or core building blocks of experience (pleasure, beauty).
No, in terms of assigning anything {independent positive value}.
I experience all of the things quoted in Complexity of value, but I don't know how to ultimately prioritize between them unless they are commensurable. I make them commensurable by weighing their interdependent value in terms of the one thing we all(?) agree is an independent motivation: preventable suffering. (If preventable suffering is not worth preventing for its own sake, what is it worth preventing for, and is this other thing agreeable to someone undergoing the suffering as the reason for its motivating power?) This does not mean that I constantly think of them in these terms (that would be counterproductive), but in conflict resolution I do not assign them independent positive numerical values, which pluralism would imply one way or another.
Any pluralist theory invites the question of outweighing suffering with enough of any independently positive value. If you think about it for five minutes, aggregate happiness (or any other aggregate experience) does not exist. If our first priority is to prevent preventable suffering, that alone is an infinite game; it does not help to make a detour to boost/copy positive states unless this is causally connected to preventing suffering. (Aggregates of suffering do not exist either, but each moment of suffering is terminally worth preventing, and we have limited attention, so aggregates and chain-reactions of suffering are useful tools of thought for preventing as many moments of suffering as we can. So are many other things, without requiring that we attach them independent positive value, or else we would be tiling Mars with them whenever doing so outweighed helping suffering on Earth according to some formula.)
My experience so far with this kind of unification is that it avoids many (or even all) of the theoretical problems that are still considered canonical challenges for pluralist utilitarianisms, which assign both independent negative value to suffering and independent positive value to other things. I do not claim that this would be simple or intuitive – that would be analogous to reading about some Buddhist system, realizing its theoretical unity, and teleporting past its lifelong experiential integration – but I do claim that a unified theory grounded in a universally accepted terminal value might be worth exploring further, because we cannot presuppose that any kind of CEV would be intuitive or easy to align oneself with.
Partly, yes. It may also be that all of us, me included, are out of touch with the extreme ends of experience and thus do not understand the ability of some motivations to override everything else.
It is also difficult to operationalize a false belief in independent value: When are we attached to a value to the extent that we would regret not spending its resources elsewhere, on CEV-level reflection?
People also differ in their background assumptions about whether AGI makes the universally life-preventing button a relevant question, because for many, the idea of an AGI represents an omnipotent optimizer that will decide everything about the future. If so, we want to be careful about assigning independent positive value to all the things, because each one of those invites this AGI to consider {outweighing suffering} with {producing those things}, since pluralist theories do not require a causal connection between the things being weighed.