It is a fact that I care, we agree.
Perhaps the issue is that I believe I should not care -- that if I were more rational, I would not care.
That my values are based on a misunderstanding of reality, just as the title of this post suggests.
In particular, my values seem to be pinned on ideas that are not true -- that states of the universe matter, objectively rather than just subjectively, and that I exist forever/always.
This "pinning" doesn't seem to be that critical -- life goes on, and I eat a turkey sandwich when I get hungry. But it seems unfortunate that I should understand cerebrally (to the extent that I am capable) that my values are based on an illusion, while my biology demands that I keep on as though my values were based on something real. To be very dramatic, it is as if some concept of my 'self' is trapped in this nonsensical machine that keeps on eating and enjoying and caring like Sisyphus.
Put this way, it just sounds like a disconnect in the way our hardware and software evolved -- my brain has evolved to think about how to satisfy certain goals supplied by biology, which often includes the meta-problem of prioritizing and evaluating these goals. The biology doesn't care if the answer returned is 'mu' in the recursion, and furthermore doesn't care if I'm at a step in this evolution where checking out of the simulation-I'm-in seems just as reasonable an answer as any other course of action.
Fortunately, my organism just ignores those nihilistic musings. (Perhaps this ignoring also evolved, socially or more fundamentally in the hardware.) I say fortunately, because I have other goals besides Tarski, or finding resolutions to these value conundrums.
In particular, my values seem to be pinned on ideas that are not true -- that states of the universe matter, objectively rather than just subjectively, and that I exist forever/always.
Well, if they are, and if I understand what you mean by "pinned on," then we should expect the strength of those values to weaken as you stop investing in those ideas.
I can't tell from your discussion whether you don't find this to be true (in which case I would question what makes you think the values are pinned on the ideas in the first place), or whether you'r...
Let's say Bob's terminal value is to travel back in time and ride a dinosaur.
It is instrumentally rational for Bob to study physics so he can learn how to build a time machine. As he learns more physics, Bob realizes that his terminal value is not only utterly impossible but meaningless. By definition, someone in Bob's past riding a dinosaur is not a future evolution of the present Bob.
There are a number of ways to create the subjective experience of having gone into the past and ridden a dinosaur. But to Bob, it's not the same, because he wanted both the subjective experience and the knowledge that it corresponded to objective fact. Without the latter, he might as well have just watched a movie or played a video game.
So if we took the original, innocent-of-physics Bob and somehow calculated his coherent extrapolated volition, we would end up with a Bob who has given up on time travel. The original Bob would not want to be this Bob.
But how do we know that _anything_ we value won't similarly dissolve under sufficiently thorough deconstruction? Let's suppose for a minute that all "human values" are dangling units; that everything we want is as possible, and makes as much sense, as wanting to hear the sound of blue or taste the flavor of a prime number. What is the rational course of action in such a situation?
PS: If your response resembles "keep attempting to XXX anyway", please explain what privileges XXX over any number of other alternatives, beyond it being your current preference. Are you using some kind of pre-commitment strategy for a subset of your current goals? Do you now wish you had used the same strategy to precommit to goals you had as a toddler?