I am reminded of Gurdjieff's division of a man into the thinking, feeling, and moving centres, which in the above scheme would be the cognitive, emotional, and intuitive functions.
I only have a small acquaintance with Gurdjieff's system, but I believe his answer would be along the following lines. In an ordinary man (G and those who transmitted his words wrote at a time when "man", so used, meant "person"), these three functions, these three selves, all act at odds with each other, and so he (ditto) has no real "I" and cannot truly "do" anything. Only when the three centres act harmoniously together can he obtain a real self and become able to do.
In his teachings he gave various methods and exercises by which one might work to achieve that. But he also stressed that it is a very difficult thing, and that scarcely anyone has even the opportunity to undertake this work, still less the inclination.
I think this hits the nail on the head. When we run an internal query for "what do I want?", we get some unprincipled mixture of these things (depending heavily on context, priming, etc.), and our instinctual reaction is to paper over this variation and adamantly insist that whatever we get back from internal queries must be drawn from a natural kind.
So, the dictionary definition (per the SEP) would be something like "objectively good/parsimonious/effective ways of carving up reality."
There's also the implication that when we use kinds in reasoning, things of the same kind should share most or all of the properties important for the task at hand. And there's the further implication that humans naively think of the world as made out of natural kinds at an ontologically basic level.
I'm saying that even if people don't believe in disembodied souls, when they ask "what do I want?" they think they're getting an answer back that is objectively a good/parsimonious/effective way of talking. That there is some thing, not necessarily a soul but at least a pattern, that is being accessed by different ways of asking "what do I want?", which can't give us inconsistent answers because it's all one thing.
Note that cognition comes with no goals attached. Emotion is where the drive to do anything comes from. You allude to it in "contemplation of a possible choice or outcome". My suspicion is that having a component not available to accurate introspection is essential to functioning. An agent (well, an algorithm) that can completely model itself (or, rather, model its identical copy, to mitigate the usual decision-theory paradoxes) may end up in a state where there is no intrinsic meaning to anything, a nirvana of sorts, and no reason to act.
Epistemic Status: Rough outline of ideas
The first step in achieving a goal is knowing what you want. But before we can answer that, we first need to know what it means to want. Do we care more about your desires at the point of the decision or at the point of the experience? Do we care more about what you want emotionally or what you want cognitively? Ultimately, there's no real meaning of the word "want" inscribed in the universe; there's just a whole bunch of related meanings clustered under the same term. I'll try to break this down, but unfortunately I can only sketch a rough, non-empirical model, as this post really needs to be written by a neuroscientist or psychologist.
Our brain seems to consist of three main subsystems. Firstly, we have emotional systems producing pleasure, pain, desire, disinterest, and senses of rightness and wrongness. These trigger at various times: during contemplation of a possible choice or outcome, after locking in a decision, after learning about the outcome, whilst experiencing the outcome, and in self-reflection afterwards. We may switch from positive to negative and back again at different stages, and this may be because our preferences have changed, or it may just be a consistent conflict between our preferences. For example, we may feel inspired when considering a physical challenge, hate every moment of the experience, and then feel a real sense of achievement when we are done.
Secondly, we have the cognitive components. These can be explicit goals ("I must achieve X"), things to be avoided ("I must not fail"), moral imperatives ("I have an obligation to protect X"), and decisions ("I will prioritise my long-term happiness over temporary pleasures"). Again, these evaluations may change during an experience or afterwards, and it's not always clear whether this is us updating our preferences, keeping our preferences while learning more information, or some of our preferences being fake and merely for social signalling.
Thirdly, we have intuition: a sense that certain actions will be good or bad. This may oppose our explicit cognitive components; for example, someone may be consciously trying to become a lawyer while subconsciously sabotaging themselves because they think they'd hate it. This is separate from the emotional system, as we can have an intuition that we'll hate something without feeling dread about the possibility of it occurring.
Further complications arise. Firstly, it isn't clear how separate the cognitive, intuitive, and emotional components really are; perhaps some cognitions or intuitions intrinsically have certain emotions attached to them. Secondly, those who believe in qualia will want the actual qualia to be treated as a separate component from the information-processing components of emotions. Thirdly, our emotional subsystems can respond differently to the same idea framed in different ways, and who can say which framing is neutral?
Lastly, we can use these systems to evaluate each other, and they may come to different conclusions. For example, our cognitive system might think it is logical to accept one hour of pain for two hours of equivalent pleasure, but our emotional system may strongly reject such a proposition due to some kind of loss aversion. So we end up trying to adjudicate meta-level issues, such as whether we care more about what our cognitive system thinks or what our emotional system feels, but we can't decide this without choosing a system to do the deciding. And if, for example, we choose our cognitive system as the meta-level decider and it tells us that we should prefer it on the object level too, that hardly seems like a fair way of resolving this internal disagreement.
So there's a sense in which these questions are meaningless, but I expect that most people also feel very strongly that we should resolve them a particular way, and that seems somewhat confusing. I'll finish by acknowledging that I haven't quite reached the point where I've completely dissolved the issue to my own satisfaction, but at the same time, being aware of all these different subsystems seems like significant progress towards an answer.