I was discussing UDT yesterday and the question came up of how to treat uncertainty over your utility function. I suggested that this could be transformed into a question of uncertainty over outcomes. The intuition is that if you were to discover that apples were twice as valuable, you could simply pretend that you instead received twice as many apples. Is this approach correct? In particular, is this transformation compatible with UDT-style reasoning?
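Here's a minimal sketch of the transformation I have in mind, assuming utility is linear in apples and the hypothesis weights are known; the names and numbers are purely illustrative:

```python
import math

# Two hypotheses about the value of an apple: worth 1 util with
# probability 0.5, worth 2 utils with probability 0.5.  The weights
# and the linear apples-to-utils scale are illustrative assumptions.
HYPOTHESES = [(0.5, 1.0), (0.5, 2.0)]  # (probability, utils per apple)

def eu_utility_uncertainty(apples: float) -> float:
    """Expected utility with the uncertainty in the utility function:
    mix over the candidate utilities u_c(x) = c * x."""
    return sum(p * c * apples for p, c in HYPOTHESES)

def eu_outcome_uncertainty(apples: float) -> float:
    """The same mixture recast as outcome uncertainty: keep one fixed
    utility u(x) = x and pretend you received c * apples instead."""
    return sum(p * (c * apples) for p, c in HYPOTHESES)

for n in (1.0, 3.0, 10.0):
    assert math.isclose(eu_utility_uncertainty(n), eu_outcome_uncertainty(n))
```

The equality is immediate here because expectation is linear, so the mixture can be moved from the utility function into the outcome lottery; the part I'm unsure about is whether this still goes through under UDT-style reasoning, and how to justify a common scale for the candidate utilities in the first place.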
Indeed.
We tried to develop a whole theory to deal with these questions, but didn't find a nice answer: https://www.lesswrong.com/posts/hBJCMWELaW6MxinYW/intertheoretic-utility-comparison