I've written before about the difficulty of distinguishing values from errors, from algorithms, and from context. Now I have to add to that list: How can we distinguish our utility function from the parameters we use to apply it?
In my recent discussion post, "Rationalists don't care about the future", I showed that exponential time-discounting, plus some assumptions about physics and knowledge, leads to not caring about the future. Many people responded by saying that, if I care about the future, this shows that my utility function does not use exponential time-discounting.
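To make the arithmetic behind that claim concrete, here is a minimal sketch; the 0.95 annual discount factor and the horizons are my own illustrative numbers, not taken from the discussion post:

```python
# Minimal sketch: under exponential discounting, a utility realized t years
# from now is weighted by d**t today. The discount factor is an assumed
# illustration, not a number from the original post.
d = 0.95  # assumed annual discount factor
for t in (10, 100, 1000, 10000):
    print(f"weight on utility {t} years out: {d**t:.2e}")
# -> roughly 5.99e-01, 5.92e-03, 5.29e-23, 1.72e-223:
#    even mild exponential discounting makes the far future weigh nothing.
```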
This response assumes that the shape of my time-discounting function is part of my utility function. In other words, the way you time-discount is one of your values.
By contrast, Eliezer wrote an earlier post saying that we should use human values, but without time-discounting. Eliezer is aware that humans appear to use time discounting; his post therefore implicitly claims that the time-discounting function is not one of our values. It's a parameter for how we implement them.
(Some of the arguments Eliezer used were value-based arguments, suggesting that we can use our values to set the parameters that we use to implement our values... I suspect this recursive approach could introduce bogus solutions, like multiplying both sides of an equation by a variable, or worse; but that would take a longer post to address. I will note that some recursive equations do have unique solutions.)
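(To make that analogy concrete with a toy equation of my own: starting from x = 2 and multiplying both sides by x gives x² = 2x, which is satisfied by the original solution x = 2 and also by the bogus root x = 0.)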
The program of CEV (Coherent Extrapolated Volition) assumes that a transhuman can use some extrapolated version of the values currently used by some humans. If that transhuman has a life expectancy of a billion years, it will likely view time discounting differently. Eliezer's post against time discounting suggests, to me, a God-like view of the universe, in which we eliminate time discounting in the same way (and for the same reasons) that many people want to eliminate space-discounting (not caring about far-away people) in contemporary ethics. This takes an ethical code that evolved agents have, one constructed to promote the propagation of those agents' genes, and applies it without reference to any particular set of genes. This is also pretty much what folk-morality says a social moral code is. So the idea that you can apply the same utility function from a radically different context is inherent in CEV, and is common to much public discourse on ethics, which assumes that you can construct a social morality based on the morality we find in individual agents.
On the other hand, I have argued that assuming social ethics and individual ethics are the same is either merely sloppy thinking, or an evolved (or deliberately constructed) lie. People who believe this would probably subscribe to a social-contract theory of ethics. (This view also has problems, beyond the scope of this post.)
I have one heuristic that I think is pretty good for telling when something is not a value: if it's mathematically wrong, it's an error, not a value. So my inclination is to point out that exponential time-discounting is correct. All other forms of time-discounting lead to dynamic inconsistencies: your ranking of two fixed future outcomes can reverse merely because time has passed. You can time-discount exponentially; or you can not time-discount at all, as Eliezer suggested; or you can be in error.
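As a sketch of what that inconsistency looks like (toy rewards and discount parameters of my own choosing, not from the post), compare an exponential discounter with a hyperbolic one when choosing between a small sooner reward and a large later one:

```python
# Toy demonstration (my own numbers): compare a small reward of 10 at delay t
# with a large reward of 15 at delay t+5, under two discounting schemes.

def exponential(v, t, d=0.9):   # weight v by d**t
    return v * d**t

def hyperbolic(v, t, k=1.0):    # weight v by 1/(1 + k*t)
    return v / (1 + k * t)

for discount in (exponential, hyperbolic):
    for t in (1, 20):
        small = discount(10, t)
        large = discount(15, t + 5)
        print(f"{discount.__name__}, delay {t}: prefers "
              f"{'small' if small > large else 'large'}")
# exponential, delay 1: prefers small
# exponential, delay 20: prefers small   <- same ranking at every delay
# hyperbolic, delay 1: prefers small
# hyperbolic, delay 20: prefers large    <- ranking reverses with mere delay
```

The exponential ranking is delay-invariant because d**t factors out of any comparison between two rewards at fixed offsets from each other; no other discount curve has that property.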
But my purpose in this post is not to continue the arguments from that other post. It's to point out this additional challenge in isolating what values are. Is your time-discounting function a value, or a value parameter?
We are humans. We do not possess a utility function. Our values and goals are not stable. We do not differentiate between means and ends, between instrumental and terminal goals. Humans get bored. Humans have time preferences.
There is no crucial difference between inconsistency of goals caused by discounting and inconsistency caused by boredom. You might enjoy collecting paperclips in 2011 and pay for a ticket to a paperclip conference in 2012; but then your future self in 2012 gets bored with paperclips and visits Disneyland instead. That's human!
What would happen if humans were to discard their time preferences? We would be terrorized by our expectations, always choosing the future over the present. We would only ever pursue instrumental goals and never reach any terminal goals. We would care solely about expected utility rather than actually experienced utility.
What is irrational with regard to human nature is to allow the preservation of our values to outweigh their satisfaction. We cannot pick and choose our values by their weighting and at the same time retain more than a few basic goals (e.g. survival). If to be rational means to win, and to win means to satisfy our values and reach our goals, then we have to account for the fact that the preservation and the satisfaction of human values overlap. We value how we choose, and we choose what we value.
I disagree. In this case, I would instrumentally value collecting paperclips, perhaps because I find it fun. What has changed is how much fun I derive from paperclips, not how much I value fun. This is not a true case...