The received wisdom in this community is that modifying one's utility function is at least usually irrational. The classic source here is Steve Omohundro's 2008 paper, "The Basic AI Drives," and Nick Bostrom gives essentially the same argument in Superintelligence, pp. 132-34. The argument runs roughly as follows: imagine an AI that is solely maximizing the number of paperclips that exist. Obviously, if it abandons that goal, there will be fewer paperclips than if it maintains it. And if it adds another goal, say maximizing staples, then this new goal will compete with the paperclip goal for resources, e.g. time, attention, steel, etc. So again, if it adds the staple goal, there will be fewer paperclips than if it doesn't. So if it evaluates every option by how many paperclips result in expectation, it will choose to keep its paperclip goal unchanged. This argument isn't mathematically rigorous, and it allows that there may be special cases where changing one's goal is useful. But the thought is that, by default, changing one's goal is detrimental from the perspective of one's current goals.
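To make the shape of that argument concrete, here's a toy sketch in Python. The outcome numbers are invented for illustration; the only point is that the agent scores every option, including changes to its own goals, by its current utility function (expected paperclips):

```python
# Toy sketch of the goal-preservation argument. The numbers are made up;
# what matters is that goal changes are evaluated by the *current* goal.

def expected_paperclips(option: str) -> float:
    """Expected paperclips produced, given a choice about the agent's own goals."""
    outcomes = {
        "keep_paperclip_goal": 1_000_000.0,  # all resources go to paperclips
        "add_staple_goal": 600_000.0,        # resources split with a staple goal
        "drop_paperclip_goal": 0.0,          # nothing left pushing for paperclips
    }
    return outcomes[option]

options = ["keep_paperclip_goal", "add_staple_goal", "drop_paperclip_goal"]

# The agent ranks every option, including modifications to its own goals,
# by its current utility function: expected paperclips.
best = max(options, key=expected_paperclips)
print(best)  # -> keep_paperclip_goal
```

Nothing in that evaluation gives the agent a reason to pick an option that predictably yields fewer paperclips, which is the argument in miniature.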
As I said, though, there may be exceptions, at least for certain kinds of agents. Here's an example. It seems as though, at least for humans, we're more motivated to pursue our final goals directly than we are to pursue merely instrumental goals (which child do you think will read more: the one who intrinsically enjoys reading, or the one you pay $5 for every book they finish?). So if a goal is particularly instrumentally useful, it may be worth adopting it as a final goal in itself in order to increase your motivation to pursue it. For example, if your goal is to become a diplomat, but you find it extremely boring to read papers on foreign policy... well, first of all, I question why you want to become a diplomat if you're not interested in foreign policy, but more importantly, you might be well served to cultivate an intrinsic interest in foreign policy papers. This is a bit risky: if circumstances change so that the new goal is no longer as instrumentally useful, it may end up competing with your original goals, exactly as the Bostrom/Omohundro argument describes. But at least some of the time, the expected value of changing your goal for this reason could come out positive.
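If you want to see the structure of that bet, here's a toy expected-value sketch; the probabilities and payoffs are invented placeholders, not claims about any actual agent:

```python
# Rough sketch of the tradeoff: the motivation boost if the new final goal
# stays instrumentally useful, versus goal competition if it doesn't.
# All numbers are hypothetical placeholders.

p_still_useful = 0.8   # chance foreign-policy reading stays instrumentally useful
gain_if_useful = 10.0  # value of the extra motivation while it is useful
loss_if_not = 4.0      # cost of the new goal competing with your original goals

ev_adopt = p_still_useful * gain_if_useful - (1 - p_still_useful) * loss_if_not
ev_keep = 0.0          # baseline: leave your goals as they are

if ev_adopt > ev_keep:
    print("Cultivating the intrinsic interest looks worthwhile in expectation.")
else:
    print("Better to leave your goals alone.")
```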
Another paper to look at might be Steve Petersen's paper, "Superintelligence as Superethical," though I can't summarize the argument for you off the top of my head.
The informal part of your opening sentence really hurts here. Humans don't have time-consistent (or in many cases self-consistent) utility functions. It's not clear whether AI could theoretically have such a thing, but let's presume it's possible.
The confusion comes from using a utility-maximizing framework to describe "what the agent wants". If you want to change your utility function, that implies that you don't want what your current utility function says you want. Which means it's not actually your utility function.
You can add epicycles here - a meta-utility function that describes what you want to want, probably at a different level of abstraction. That makes your question sensible, but also trivial - of course your meta-utility function wants to change your utility function to more closely match your meta-goals. But then you have to ask whether you'd ever want to change your meta-utility function. And you get caught recursing until your stack overflows.
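Tongue in cheek, the regress looks something like this (purely illustrative; calling it really does blow the stack):

```python
# Each level of preferences can only be evaluated by appeal to the level
# above it, so the question of whether to change never bottoms out.

def should_change(level: int) -> bool:
    """Would the level-(n+1) preferences endorse changing the level-n ones?"""
    # To answer that, we first have to ask whether level n+1 itself should change...
    return should_change(level + 1)

# should_change(0) never returns; Python eventually raises RecursionError,
# i.e. you recurse until your stack overflows.
```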
Much simpler and more consistent to say "if you want to change it, it's not your actual utility function".
Oh, maybe this is the confusion. It's not a variable called Utility. It's the actual true goal of the agent. We call it "utility" when analyzing decisions, and VNM-rational agents act as if they have a utility function over states of the world, but it doesn't have to be external or programmable.
I'd taken your pseudocode as a shorthand for "design the rational agent such that what it wants is ...". It's not literally a variable, nor a simple piece of code that non-simple code could change.