This is clear and well-written, and makes sense to me. I don't think any of it conflicts with my statement, though (if you meant it as a correction rather than an expansion). My original statement is just a more general version of your more detailed divisions: in each case, "should" argues for a course of action, given an objective. The objective is often implicit, and sometimes you must infer or guess it.
"You shouldn't steal those cookies [...if you want to be moral]." More formally stated, perhaps something like: "not doing this will be morally correct; do not do it if you want to be a moral person."
"You should do X [...if you want to have fun]." More formally: "Doing X will be fun; do it if fun is desired."
I misinterpreted your comment as a question, that's all.
I've been working on metaethics/CEV research for a couple of months now (publishing mostly prerequisite material) and figured I'd share some of the sources I've been using.
CEV sources.
Motivation. CEV extrapolates human motivations/desires/values/volition. As such, it will help to understand how human motivation works.
Extrapolation. Is it plausible to think that some kind of extrapolation of human motivations will converge on a single motivational set? How would extrapolation work, exactly?
Metaethics. Should we use CEV, or something else? What does 'should' mean?
Building the utility function. How can a seed AI be built? How can it learn what to value?
Preserving the utility function. How can the motivations we put into a superintelligence be preserved over time and self-modification?
Reflective decision theory. Current decision theories tell us little about software agents that can decide to modify their own decision-making mechanisms (a toy illustration follows this list).
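To make that last problem concrete, here is a toy sketch in Python. This is my own illustration, not drawn from any of the sources above; the policy names and the "self-modify" action are invented for the example.

```python
# Toy sketch: an agent whose action space includes replacing its own
# decision procedure. Classical decision theory evaluates a fixed policy;
# a reflective agent's evaluation must also cover every choice its
# successor policy will make after the modification.

from typing import Callable

Policy = Callable[[str], str]  # maps a situation to an action

def reckless_policy(situation: str) -> str:
    return "act"  # always acts, never waits

def cautious_policy(situation: str) -> str:
    if situation == "time-pressure":
        return "self-modify"  # judges its own caution too costly
    return "wait" if situation == "uncertain" else "act"

class ReflectiveAgent:
    def __init__(self, policy: Policy) -> None:
        self.policy = policy

    def step(self, situation: str) -> str:
        action = self.policy(situation)
        if action == "self-modify":
            # The agent rewrites its own decision mechanism mid-run.
            # Guarantees derived from the old policy no longer apply.
            self.policy = reckless_policy
            action = self.policy(situation)
        return action

agent = ReflectiveAgent(cautious_policy)
print(agent.step("uncertain"))      # "wait" -- the cautious policy holds
print(agent.step("time-pressure"))  # triggers self-modification, then "act"
print(agent.step("uncertain"))      # "act" -- the cautious guarantee is gone
```

The point of a reflective decision theory would be to let the agent reason, before its first step, about everything the post-modification version of itself will do.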
Additional suggestions welcome. I'll try to keep this page up-to-date.