I was discussing UDT yesterday and the question came up of how to treat uncertainty over your utility function. I suggested that this could be transformed into a question of uncertainty over outcomes. The intuition is that if you were to discover that apples were twice as valuable, you could simply pretend that you instead received twice as many apples. Is this approach correct? In particular, is this transformation compatible with UDT-style reasoning?
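To make the intuition concrete, here is a minimal sketch of the transformation for the apples case, assuming utility is linear in the number of apples (a simplifying assumption; the equivalence is less obvious for non-linear utility functions). The hypothesis values and function names are hypothetical, chosen just for illustration:

```python
def eu_uncertainty_over_utilities(hypotheses, apples):
    """Expected utility where the uncertainty lives in the utility function.

    hypotheses: list of (probability, utils_per_apple) pairs, i.e. competing
    theories about how valuable an apple is.
    """
    return sum(p * k * apples for p, k in hypotheses)


def eu_uncertainty_over_outcomes(hypotheses, apples):
    """The proposed transformation: fix the utility function to U(x) = x and
    push the same uncertainty into the outcome instead -- "apples are twice
    as valuable" becomes "I received twice as many apples".
    """
    fixed_u = lambda x: x  # fixed, known utility function
    return sum(p * fixed_u(k * apples) for p, k in hypotheses)


# Equal credence that apples are worth 1 util or 2 utils each.
hypotheses = [(0.5, 1.0), (0.5, 2.0)]

print(eu_uncertainty_over_utilities(hypotheses, 3))  # 4.5
print(eu_uncertainty_over_outcomes(hypotheses, 3))   # 4.5
```

In the linear case the two computations coincide by construction; the open question in the post is whether this identification survives the logical-counterfactual reasoning UDT requires.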
The min-max normalisation of https://www.lesswrong.com/posts/hBJCMWELaW6MxinYW/intertheoretic-utility-comparison can be seen as a formalisation of normalising on effort: it normalises each utility function by what you could achieve if you dedicated yourself entirely to that one goal.
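A quick sketch of what that normalisation does, with hypothetical outcome labels and values chosen only for illustration: each theory's utility function is rescaled so that its worst achievable outcome scores 0 and its best achievable outcome (full dedication to that goal) scores 1.

```python
def min_max_normalise(utilities):
    """Rescale one theory's utilities so the worst achievable outcome maps
    to 0 and the best achievable outcome maps to 1.

    utilities: dict mapping outcome label -> raw utility under this theory.
    """
    lo, hi = min(utilities.values()), max(utilities.values())
    return {outcome: (u - lo) / (hi - lo) for outcome, u in utilities.items()}


# Hypothetical raw utilities for an apple-maximising theory.
apple_theory = {"do_nothing": 0.0, "half_effort": 5.0, "full_effort": 10.0}

print(min_max_normalise(apple_theory))
# {'do_nothing': 0.0, 'half_effort': 0.5, 'full_effort': 1.0}
```

Because every theory ends up on the same [0, 1] scale, their normalised utilities can then be weighted by credence and compared across theories.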