Let's say Bob's terminal value is to travel back in time and ride a dinosaur.
It is instrumentally rational for Bob to study physics so that he can learn how to build a time machine. But as he learns more physics, Bob realizes that his terminal value is not merely impossible but meaningless: by definition, someone riding a dinosaur in Bob's past is not a future evolution of the present Bob.
There are a number of ways to create the subjective experience of having gone into the past and ridden a dinosaur. But to Bob, none of these is the same, because he wanted both the subjective experience and the knowledge that it corresponded to objective fact. Without the latter, he might as well have watched a movie or played a video game.
So if we took the original, innocent-of-physics Bob and somehow calculated his coherent extrapolated volition, we would end up with a Bob who has given up on time travel. The original Bob would not want to be this Bob.
But how do we know that _anything_ we value won't similarly dissolve under sufficiently thorough deconstruction? Suppose for a moment that all "human values" are dangling units - that everything we want is as achievable, and makes as much sense, as wanting to hear the sound of blue or taste the flavor of a prime number. What is the rational course of action in such a situation?
PS: If your response resembles "keep attempting to XXX anyway", please explain what privileges XXX over the countless alternatives, beyond the fact that it happens to be your current preference. Are you using some kind of precommitment strategy toward a subset of your current goals? If so, do you now wish you had used the same strategy to precommit to the goals you had as a toddler?
You're right that a meaningless goal cannot be pursued - but neither can you be said to even attempt to pursue it; the pursuit of a meaningless goal is itself a meaningless activity. Bob can't put effort into his goal of time travel; he can only confusedly do things he mistakenly thinks of as "pursuing the goal of time travel", because pursuing the goal of time travel isn't a possible activity. What Bob has learned is that he wasn't pursuing the goal of time travel to begin with: he was wrong about having a terminal value of travelling back in time and riding a dinosaur, because there is no such thing.
That seems obviously wrong to me. There's nothing at all preventing me from designing an invisible-pink-unicorn maximizer, even if invisible pink unicorns are impossible. For that matter, if we allow counterfactuals, an invisible-pink-unicorn maximizer still looks like an intelligence designed to maximize unicorns - in the counterfactual universe where unicorns exist, the intelligence takes actions that tend to maximize unicorns.
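To make the point concrete, here is a toy sketch (my own illustration, not anything from the original discussion) of why an optimizer is well-defined relative to a world-model even when that model describes an impossible world. The "unicorn physics" below is entirely invented for the example:

```python
def unicorn_world_model(unicorns, action):
    """Counterfactual transition rule: in this imagined world,
    'breed' doubles the unicorn count and 'wait' leaves it unchanged."""
    if action == "breed":
        return unicorns * 2
    return unicorns

def utility(unicorns):
    # Utility is just the number of invisible pink unicorns
    # in the *modeled* world.
    return unicorns

def choose_action(unicorns, actions):
    """Pick the action that maximizes modeled utility.
    The procedure is perfectly coherent even though no real
    world state ever contains an invisible pink unicorn."""
    return max(actions, key=lambda a: utility(unicorn_world_model(unicorns, a)))

print(choose_action(1, ["wait", "breed"]))  # prints "breed"
```

The maximizer's behavior is fully specified by its counterfactual model and utility function; the impossibility of its target in the actual world doesn't make the design incoherent.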