Suppose someone draws a "personal identity" line to exclude this future sunrise-witnessing person. Then if you claim that, by not anticipating, they are degrading the accuracy of the sunrise-witness's beliefs, they might reply that you are begging the question.
I have a closely related objection/clarification. I agree with the main thrust of Rob's post, but this part:
Presumably the question xlr8harder cares about here isn't the semantic question of how linguistic communities use the word "you"...
Rather, I assume xlr8harder cares about more substantive questions like: (1) If I expect to be uploaded tomorrow, should I care about the upload in the same ways (and to the same degree) that I care about my future biological self? (2) Should I anticipate experiencing what my upload experiences? (3) If the scanning and uploading process requires destroying my biological brain, should I say yes to the procedure?
...strikes me as confused, or at least confusing.
Take your chemistry/physics tests example. What does "I anticipate the experience of a sense of accomplishment in answering the chemistry test" mean? Well, for one thing, it certainly indicates that you believe the experience is likely to happen (to someone). For another, it often means that you believe it will happen to you - but that invites the semantic question that Rob says this isn't about. For a third - and I propose that this is a key point that makes us feel there is a "substantive" question here - it indicates that you empathize with this future person who does well on the test.
But I don't see how empathizing or not-empathizing can be assessed for accuracy. It can be consistent or inconsistent with the things one cares about, which I suppose makes it subject to rational evaluation, but that looks different from accuracy/inaccuracy.
I'm not at all convinced by the claim that <valence is a roughly linear function over included concepts>, if I may paraphrase. After laying out a counterexample, you seem to be constructing a separate family of concepts that better fits a linear model. But (a) this is post-hoc and potentially ad-hoc, and (b) you've given us little reason to expect that there will always be such a family of concepts. It would help if you could outline how a privileged set of concepts arises for a given person, one that would explain their valences.
Also, your definition of "innate drives" works for the purpose of collecting all valences into a category explained by one basket of root causes. But it's a diverse basket. I think you're missing the opportunity to make a distinction - wanting vs. liking, as in "Wanting vs. Liking Revisited" - which is useful for understanding human motivations.
When dealing with theology, you need to be careful about invoking common sense. According to https://www.thegospelcoalition.org/themelios/article/tensions-in-calvins-idea-of-predestination/ , Calvin held that God's destiny for a human being is decided eternally (not within time), and prior to that person's prayer, hard work, etc.
The money (or heaven) is already in the box. Omega (or God) cannot change the outcome.
What makes this kind of reasoning work in the real (natural) world is the growth of entropy involved in putting money in boxes, deciding to do so, or thinking about whether the money is there. If we're taking theology seriously, though - or maybe even when we posit an "Omega" with magical-sounding powers - we need to wonder whether the usual rules still apply.
I view your final point as crucial. I would put an additional twist on it, though. During the approach to AGI, if takeoff is even a little bit slow, the effective goals of the system can change. For example, most corporations arguably don't pursue profit exclusively even though they may be officially bound to. They favor executives, board members, and key employees in ways both subtle and obvious. But explicitly programming those goals into an SGD algorithm is probably too blatant to get away with.
In addition to your cases that fail to be explained by the four modes, I submit that Leonard Cohen's song itself also fails to fit. Roughly speaking, one thread of meaning in these verses is that "(approximately) everybody knows the dice are loaded, but they don't raise a fuss because they know if they do, they'll be subjected to an even more unfavorable game." And likewise for the lost war. A second thread of meaning is that, as pjeby pointed out, people want to be at peace with unpleasant things they can't personally change. It's not about trapping the listener into agreeing with the propositions that everyone supposedly knows. Cohen's protagonist just takes it that the listener already agrees, and uses that to explain his own reaction to the betrayal he feels.
Like Paradiddle, I worry about the methodology, but my worry is different. It's not just the conclusions that are suspect in my view: it's the data. In particular, this --
Some people seemed to have multiple views on what consciousness is, in which cases I talked to them longer until they became fairly committed to one main idea.
-- is a serious problem. You are basically forcing your subjects to treat a cluster in thingspace as if it must be definable by a single property or process. Or perhaps they perceive you as urging them to pick a most important property. If I had to pick a single most important property of consciousness, I'd pick affect (responses 4, 5 and 6), but that doesn't mean I think affect exhausts consciousness. Analogously, if you ask me for the single most important thing about a car, I'll tell you that it gets one from point A to point B; but this doesn't mean that's my definition of "car".
This is not to deny that "consciousness" is ambiguous! I agree that it is. I'm not sure that's all that problematic, however. There are good reasons for everyday English speakers to group related aspects together. And when philosophers or neuroscientists try to answer questions about consciousness, whose various aspects raise different questions, they typically clue you in as to which aspects they are addressing.
this [that there is no ground truth as to what you experience] is arguably a pretty well-defined property that's in contradiction with the idea that the experience itself exists.
I beg to differ. The thrust of Dennett's statement is easily interpreted as the claim that the truth of a description is partially constituted by the subject's acceptance of that description. E.g., in one of the snippets you cite, "I seem to see a pink ring." If the subject had said "I seem to see a reddish oval", perhaps that would have been true instead. But compare:
My freely drinking tea rather than coffee is partially constituted by my saying to my host "tea, please." Yet there is still an actual event of my freely drinking tea - even though, had I said "coffee, please", I probably would have drunk coffee instead.
We are getting into a zone where it is hard to tell what is a verbal issue and what is a substantive one. (And in my view, that's because the distinction is inherently fuzzy.) But that's life.
Fair point about the experience itself vs. its description. But note that all the controversy is about the descriptions. "Qualia" is a descriptor, "sensation" is a descriptor, etc. Even "illusionists" about qualia don't deny that people experience things.
Given the disagreement over what "causality" is, I suspect that different CDTs might have different tolerances for adding precommitment without spoiling the point of CDT. For an example of a definition of causality that has interesting consequences for decision theory, see Douglas Kutach, Causation and its Basis in Fundamental Physics. There's a nice review here. Defining "causation" Kutach's way would allow both making and keeping precommitments to count as causing good results. It would also at least partly collapse the divergence between CDT and EDT. Maybe completely - I haven't thought that through yet.