I've written before about the difficulty of distinguishing values from errors, from algorithms, and from context. Now I have to add to that list: How can we distinguish our utility function from the parameters we use to apply it?
In my recent discussion post, "Rationalists don't care about the future", I showed that exponential time-discounting, plus some assumptions about physics and knowledge, leads to not caring about the future. Many people responded by saying that, if I care about the future, this shows that my utility function does not use exponential time-discounting.
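To see why, here is a minimal sketch (my own numbers, not from that post): with any fixed exponential discount rate, the present value of even an astronomically large future payoff collapses toward zero as the horizon grows.

```python
import math

def present_value(utils, years, annual_rate=0.05):
    # Exponentially discounted value today of `utils` received `years` from now.
    return utils * math.exp(-annual_rate * years)

# A galaxy's worth of utility, arbitrarily scored as 1e30 utils.
for years in (10, 100, 1_000, 10_000):
    print(f"{years:>6} years out: {present_value(1e30, years):.3e} utils now")
```

At a 5% annual rate, a payoff 10,000 years out is discounted by a factor of about e^(-500), i.e., to effectively nothing, no matter how large it is.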
That response assumes that the shape of my time-discounting function is part of my utility function. In other words, the way you time-discount is one of your values.
By contrast, Eliezer wrote an earlier post saying that we should use human values, but without time-discounting. Eliezer knows that humans appear to time-discount; so his position implicitly claims that the time-discounting function is not one of our values. It's a parameter for how we implement them.
(Some of the arguments Eliezer used were value-based arguments, suggesting that we can use our values to set the parameters that we use to implement our values... I suspect this recursive approach could introduce bogus solutions, like multiplying both sides of an equation by a variable, or worse; but that would take a longer post to address. I will note that some recursive equations do have unique solutions.)
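As a toy illustration of the worry about bogus solutions (my example, not from the original argument), compare what happens when you multiply both sides of an equation by a variable:

```latex
% The original equation has a unique solution; the self-referential step
% introduces a spurious one. Setting the parameters of your values by
% appeal to your values risks the analogous failure.
\[
  x = 2
  \;\Longrightarrow\;
  x \cdot x = 2x
  \;\Longrightarrow\;
  x^{2} - 2x = 0
  \;\Longrightarrow\;
  x \in \{0,\ 2\},
\]
% where x = 0 is a bogus root added by the extra step.
```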
The program of CEV assumes that a transhuman can use some extrapolated version of the values currently held by some humans. If that transhuman has a life expectancy of a billion years, it will likely view time discounting differently. Eliezer's post against time discounting suggests, to me, a God-like view of the universe, in which we eliminate time-discounting in the same way (and for the same reasons) that many people want to eliminate space-discounting (not caring about far-away people) in contemporary ethics. This takes an ethical code that evolved agents have, one constructed to promote the propagation of those agents' genes, and applies it without reference to any particular set of genes. This is also pretty much what folk-morality says a social moral code is. So the idea that you can apply the same utility function in a radically different context is inherent in CEV, and it is common to much public discourse on ethics, which assumes you can construct a social morality from the morality we find in individual agents.
On the other hand, I have argued that assuming social ethics and individual ethics are the same is either merely sloppy thinking, or an evolved (or deliberately constructed) lie. People who believed this would probably subscribe to a social-contract theory of ethics. (This view also has problems, beyond the scope of this post.)
I have one heuristic that I think is pretty good for telling when something is not a value: if it's mathematically wrong, it's an error, not a value. So my inclination is to point out that exponential time-discounting is correct. All other forms of time-discounting lead to dynamic inconsistencies: your ranking of the same two future outcomes can reverse merely because time has passed. You can time-discount exponentially; or you can not time-discount at all, as Eliezer suggested; or you can be in error.
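Here is a minimal sketch of that inconsistency (toy rewards and rates of my own choosing): under hyperbolic discounting, the preference between two fixed future rewards flips as the evaluation date approaches them, while under exponential discounting it never does.

```python
def exponential(delay, rate=0.1):
    # Value today of 1 util arriving `delay` steps from now.
    return (1 - rate) ** delay

def hyperbolic(delay, k=1.0):
    return 1.0 / (1.0 + k * delay)

def preference(discount, now):
    # Reward A: 1.0 util at t=10.  Reward B: 1.5 utils at t=15.
    a = 1.0 * discount(10 - now)
    b = 1.5 * discount(15 - now)
    return "A" if a > b else "B"

for discount in (exponential, hyperbolic):
    choices = {preference(discount, now) for now in range(10)}
    verdict = "consistent" if len(choices) == 1 else "preference reverses"
    print(f"{discount.__name__:12}: {verdict}")
```

The reversal is what "inconsistency" means here: the agent's earlier and later selves rank the same pair of outcomes differently, with no new information.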
But my purpose in this post is not to continue the arguments from that other post. It's to point out this additional challenge in isolating what values are. Is your time-discounting function a value, or a value parameter?
If "parameter" were "instrumental value", that would mean you could set the parameter according to your values. That would be very similar to saying it was part of your values.
The notion of "value" vs. "instrumental value" is probably bankrupt. It's very similar to the dichotomy of "grounded symbol vs. dependent symbol". I talked about this in the section "Value is a network problem" of my regrettably long post on values.
In this particular case, I'm also presenting the option that "parameter" is something you can place logical constraints on, regardless of values. Like math: You can't say that believing that 1+1=3 is one of your values.
Wei Dai asked: "If I understand you correctly, you're thinking of utility functions as f(p, U), where U is the state of the universe and p is some parameter. Here f is your values and p is the parameter. If that's the case, I don't understand the distinction you're making by separating out p; i.e., why not just work with F(U) = f(p, U)? If there's uncertainty about the value of p, how is that different from uncertainty about your utility function?"

Answer: It's a parameter you need to set to implement your values; but I don't know how you set it. That's the problem I'm pointing out. Resolving whether it is an instrumental value or not would solve the problem.
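For concreteness, here is a minimal sketch of the framing in the question (the function bodies and numbers are mine): a parameterized utility f(p, U) over universe-states U, and the ordinary utility function F(U) you get once the parameter is fixed.

```python
from functools import partial

def f(p, universe):
    # Toy parameterized utility: p is a per-year discount rate, and
    # `universe` is a list of (years_from_now, utils) events.
    return sum(utils * (1 - p) ** years for years, utils in universe)

# Fixing the parameter folds it into a single utility function F(U) = f(p, U).
F = partial(f, 0.05)

toy_universe = [(1, 10.0), (100, 10.0), (10_000, 10.0)]
assert F(toy_universe) == f(0.05, toy_universe)
```

On this framing, varying p is formally indistinguishable from varying F; the open question in the post is what legitimately sets p.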