timtyler comments on Issues with the Litany of Gendlin - Less Wrong
I probably blatantly reveal my ignorance by asking this, but do only agents who know what they want have a utility-function? An AGI undergoing recursive self-improvement can't possibly know what exactly it is going to "want" later on (some (sub)goals may turn out to be impossible, while world states previously believed to be impossible might turn out to be possible), yet what it will want is implied by its given utility-function and the "nature of reality" (environmental circumstances).
You believe that what you believe you want is actually different from what you actually want. But you appear to know that what you believe you want is different from what you actually want. Proof by contradiction that what you believe you want is what you actually want?
Your utility-function seems to assign high utility to world states where it is optimized according to new information. In other words, you believe that your utility-function should be undergoing recursive self-improvement.
Nope - in theory, all agents have a utility-function - though it might not be the neatest way of expressing what they value.
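To illustrate the "in theory" part: here is a minimal sketch (my own, not from the comment above) of the trivial sense in which any deterministic agent can be assigned a utility-function - just define utility 1 for whatever the agent actually does in each state and 0 otherwise. The names `revealed_utility` and `toy_policy` are hypothetical placeholders.

```python
def revealed_utility(agent_policy):
    """Build a utility function that the agent's behaviour maximizes by construction."""
    def utility(state, action):
        # The action the agent actually takes gets utility 1; every alternative gets 0.
        return 1.0 if action == agent_policy(state) else 0.0
    return utility

# Example: an "agent" that always moves toward the origin on a number line.
def toy_policy(state):
    return "left" if state > 0 else "right"

u = revealed_utility(toy_policy)
assert u(3, "left") == 1.0    # the chosen action
assert u(3, "right") == 0.0   # any other action
```

The point of the sketch is that such a function is a perfectly valid utility-function for the agent, yet it is not a neat or informative way of expressing what the agent values - it just restates the agent's behaviour.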