I used to consider myself NU, but have since rejected it.
Part of my rejection was that, on a psychological level, it simply didn't work for me. The notion that everything only has value to the extent that it reduces suffering meant that most of the things I cared about were pointless and meaningless, except for their instrumental value in reducing my suffering or making me more effective at reducing suffering. Doing things I enjoyed while constantly having a nagging sensation of "if I could just learn to no longer need this, then it would be better for everyone" made it very hard to ever enjoy anything. It basically set my mind up to be a battlefield, dominated by an NU faction trying to suppress any desires which did not directly contribute to reducing suffering, and opposed by an anti-NU faction which couldn't do much, but could at least prevent me from getting any effective NU work done.
Eventually it became obvious that even from an NU perspective, it would be better for me to stop endorsing NU, since that way I might end up actually accomplishing more suffering reduction than if I continued to endorse it. And I think that this decision was basically correct.
A related reason is that I also rejected the need for a unified theory of value. I still think that if you wanted to reduce human values into a unified framework, then something like NU would be one of the simplest and least paradoxical answers. But eventually I concluded that any simple unified theory of value is likely to be wrong, and also not particularly useful for guiding practical decision-making. I've written more about this here.
Finally, and as a more recent development, I notice that NU neglects to take into account non-suffering-based preferences. My current model of minds and suffering is that minds are composed of many different subagents with differing goals; suffering is the result of different subagents being in conflict (e.g. if one subagent wants to push through a particular global belief update which another subagent does not wish to accept).
This means that I could imagine an advanced version of myself who had gotten rid of all personal suffering, but was still motivated to pursue other goals. Suppose for the sake of argument that I only had subagents which cared about 1) seeing friends and 2) making art. Now if my subagents reached an agreement to spend 30% of their time making art and 70% of their time seeing friends, then this could in principle eliminate my suffering by removing subagent conflict, but it would still drive me to do things for reasons other than reducing suffering. Thus the argument that suffering is the only source of value fails; the version of me which had eliminated all personal suffering might be more driven to do things than the current one, since subagent conflict would no longer be blocking action in any situation!
As a practical matter, I still think that reducing suffering is one of the most urgent EA priorities: as long as death and extreme suffering exist in the world, anything that would be called "altruism" should focus its efforts on reducing those. But this is a form of prioritarianism, not NU. I do not endorse NU's prescription that an entirely dead world would be as good as or better than a world with lots of happy entities, simply because there are subagents within me who would prefer to exist and continue to do stuff, and who would also prefer for other people to continue to exist and do stuff if they so prefer. I want us to liberate people's minds from involuntary suffering, and then to let people do whatever they still want to do, once suffering is something that people experience only voluntarily.
Ethical theories don't need to be simple. I used to have the belief that ethical theories ought to be simple/elegant/non-arbitrary for us to have a shot at them being the correct theory, a theory that intelligent civilizations with different evolutionary histories would all converge on. This made me think that NU might be that correct theory. Now I'm confident that this sort of thinking was confused: I think there is no reason to expect that intelligent civilizations with different evolutionary histories would converge on the same values, or that there is one correct set of ethics that they "should" converge on if they were approaching the matter "correctly". So, looking back, my older intuition now feels confused in much the same way as ordering the simplest food in a restaurant in the expectation that it's what others would order if they, too, thought the goal was for everyone to order the same thing. Now I just want to order the "food" that satisfies my personal criteria (and these criteria do happen to include placing value on non-arbitrariness/simplicity/elegance, but I'm a bit less single-minded about it).
Your way of unifying psychological motivations down to suffering reduction is an "externalist" account of why decisions are made, which is different from the internal story people tell themselves. Why think that all people who tell different stories are mistaken about their own reasons? The point "it is a straw man argument that NUs don't value life or positive states" is unconvincing, as others have already pointed out. I actually share your view that a lot of things people do might in some way trace back to a motivating quality in feelings of dissatisfaction, but (1) there are exceptions to that (e.g., sometimes I do things on auto-pilot and not out of an internal sense of urgency/need, and sometimes I feel agenty and do things in the world to achieve my reflected life goals rather than tend to my own momentary well-being), and (2) that doesn't mean that whichever parts of our minds we most identify with need to accept suffering reduction as the ultimate justification of their actions. For instance, let's say you could prove that the true proximate cause of a person's refusal to enter Nozick's experience machine was that, when they contemplated the decision, they felt really bad about the prospect of learning that their own life goals are shallower and more self-centered than they would have thought, and *therefore* they refused the offer. Your account would say: "They made this choice driven by the avoidance of bad feelings, which just shows that ultimately they should accept the offer, or choose whichever option reduces more suffering all things considered." Okay yeah, that's one story to tell. But the person in question tells herself the story that she made this choice because she has strong aspirations about what type of person she wants to be. Why would your externally imported justification be more valid (for this person's life) than her own internal justification?