I was discussing UDT yesterday and the question came up of how to treat uncertainty over your utility function. I suggested that this could be transformed into a question of uncertainty over outcomes. The intuition is that if you were to discover that apples were twice as valuable, you could simply pretend that you had instead received twice as many apples. Is this approach correct? In particular, is this transformation compatible with UDT-style reasoning?
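To make the intuition concrete, here is a minimal worked example (the specific probabilities and utility functions are mine, purely for illustration): suppose you assign probability 1/2 to apples being worth 1 util each, i.e. $u_1(a) = a$, and probability 1/2 to them being worth 2 utils each, i.e. $u_2(a) = 2a$. Then for $n$ apples:

```latex
% Utility-function uncertainty: average over the two candidate utilities
\mathbb{E}[U] \;=\; \tfrac{1}{2}\,u_1(n) + \tfrac{1}{2}\,u_2(n)
              \;=\; \tfrac{1}{2}\,n + \tfrac{1}{2}\,(2n) \;=\; \tfrac{3}{2}\,n

% Outcome uncertainty: fix u(a) = a and instead treat it as an even
% lottery between receiving n apples and receiving 2n apples
\mathbb{E}[U] \;=\; \tfrac{1}{2}\,u(n) + \tfrac{1}{2}\,u(2n)
              \;=\; \tfrac{1}{2}\,n + \tfrac{1}{2}\,(2n) \;=\; \tfrac{3}{2}\,n
```

The two computations coincide, which is what makes the "pretend you received twice as many apples" move tempting; the question is whether this identification remains valid under UDT-style reasoning.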
This is precisely the issue discussed at length in Brian Tomasik's article "Two-Envelopes Problem for Uncertainty about Brain-Size Valuation and Other Moral Questions". The difficulty is that rescaling outcomes in this way depends on how the rival utility functions are normalized against each other, which is where the two-envelopes-style trouble comes in.