I've written before about the difficulty of distinguishing values from errors, from algorithms, and from context. Now I have to add to that list: How can we distinguish our utility function from the parameters we use to apply it?
In my recent discussion post, "Rationalists don't care about the future", I showed that exponential time-discounting, plus some assumptions about physics and knowledge, leads to not caring about the future. Many people responded by saying that, if I care about the future, this shows that my utility function does not use exponential time-discounting.
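(For scale — this is my own illustration of the discounting argument, not a figure from that post: with a 5%-per-year exponential discount rate, a payoff 1,000 years from now is weighted by 0.95^1000 ≈ 5 x 10^-23, so even astronomically large far-future payoffs contribute essentially nothing to expected utility.)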
This response assumes that the shape of my time-discounting function is part of my utility function. In other words, the way you time-discount is one of your values.
By contrast, Eliezer wrote an earlier post saying that we should use human values, but without time-discounting. Eliezer is aware that humans appear to use time discounting; so his position implicitly claims that the time-discounting function is not one of our values. It's a parameter for how we implement them.
(Some of the arguments Eliezer used were value-based arguments, suggesting that we can use our values to set the parameters that we use to implement our values... I suspect this recursive approach could introduce bogus solutions, like multiplying both sides of an equation by a variable, or worse; but that would take a longer post to address. I will note that some recursive equations do have unique solutions.)
The program of CEV (Coherent Extrapolated Volition) assumes that a transhuman can use some extrapolated version of values currently used by some humans. If that transhuman has a life expectancy of a billion years, it will likely view time discounting differently. Eliezer's post against time discounting suggests, to me, a God-like view of the universe, in which we eliminate time discounting in the same way (and for the same reasons) that many people want to eliminate space-discounting (not caring about far-away people) in contemporary ethics. This takes an ethical code that evolved agents have, one constructed to promote the propagation of those agents' genes, and applies it without reference to any particular set of genes. This is also pretty much what folk morality says a social moral code is. So the idea that you can apply the same utility function from a radically different context is inherent in CEV, and it is common to much public discourse on ethics, which assumes that you can construct a social morality based on the morality we find in individual agents.
On the other hand, I have argued that assuming social ethics and individual ethics are the same is either merely sloppy thinking, or an evolved (or deliberately constructed) lie. People who believed this would probably subscribe to a social-contract theory of ethics. (This view also has problems, beyond the scope of this post.)
I have one heuristic that I think is pretty good for telling when something is not a value: if it's mathematically wrong, it's an error, not a value. So my inclination is to point out that exponential time-discounting is correct. All other forms of time-discounting lead to inconsistencies: your preference between the same two future outcomes can reverse simply because time has passed. You can time-discount exponentially; or you can not time-discount at all, as Eliezer suggested; or you can be in error.
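To illustrate what I mean by inconsistency, here is a small sketch (mine, with made-up numbers) contrasting exponential discounting with hyperbolic discounting. Under exponential discounting, the preference between two dated rewards never flips as the evaluation time changes; under hyperbolic discounting it can, which is exactly the kind of dynamic inconsistency at issue.

```python
# Sketch: dynamic consistency of exponential vs. hyperbolic discounting.
# All parameters are illustrative, not taken from the original post.

def exp_discount(delay, rate=0.05):
    """Exponential discount factor: value shrinks by a constant ratio per unit time."""
    return (1 - rate) ** delay

def hyp_discount(delay, k=1.0):
    """Hyperbolic discount factor: 1 / (1 + k * delay)."""
    return 1.0 / (1 + k * delay)

def prefers_larger_later(discount, small, t_small, large, t_large, now):
    """True if, judged at time `now`, the larger-but-later reward is preferred."""
    return large * discount(t_large - now) > small * discount(t_small - now)

small, t_small = 80, 10    # smaller reward at t = 10
large, t_large = 100, 12   # larger reward at t = 12

for name, d in [("exponential", exp_discount), ("hyperbolic", hyp_discount)]:
    choices = [prefers_larger_later(d, small, t_small, large, t_large, now)
               for now in range(10)]
    # Exponential: the same preference at every evaluation time (no reversal).
    # Hyperbolic: the preference reverses as the rewards draw near.
    print(name, choices)
```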
But my purpose in this post is not to continue the arguments from that other post. It's to point out this additional challenge in isolating what values are. Is your time-discounting function a value, or a value parameter?
Some retractions, following clarifications from Peter in email:
Peter does place a limit on the probability distribution: it must never be zero. (I read a '<' as '<='.) This removes counterexamples 1 and 3. However, I am not sure it's possible to build a probability distribution satisfying his requirements (one whose domain has cardinality 2^N, with no zero terms, summing to one). This link says it is not possible.
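(The standard argument, in my paraphrase: if p(x) > 0 for every x in an uncountable set, then for some n the set of x with p(x) >= 1/n must be infinite — otherwise the union of these sets over all n would be countable, when it is the whole uncountable collection — and infinitely many terms of size at least 1/n already push the sum past any finite bound. So a probability distribution can have at most countably many nonzero terms, and one with 2^N nonzero terms cannot sum to one.)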
The reason Peter's expected value calculation is not of the form p(x)U(x) is that he is summing over hypotheses for a single action: p(h) is the probability that a particular hypothesis h is true; h(k) is the result of action k when h is true; and U(h(k)) is the utility of that result.
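In symbols (my rendering of that description, with the action k held fixed):

    EU(k) = sum over h of p(h) * U(h(k))

That is, the sum runs over hypotheses h for a single action k, rather than over outcomes x as in sum over x of p(x) * U(x).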
"To establish that this series does not converge, we will show that infinitely many of its terms have absolute value >= 1." This statement is valid.
However, my second counterexample still looks solid to me: U(n) = n if n is even, -n if n is odd; p(n) = 1/2^n. This doesn't fit Peter's framework, because there the domain of p is not countable.
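For concreteness, here is a quick numerical check of that counterexample (my sketch; I am assuming n ranges over the positive integers, so that the p(n) sum to one): the expected utility converges even though U is unbounded.

```python
# Partial sums of sum_n p(n) * U(n) for the counterexample:
#   U(n) = n if n is even, -n if n is odd;  p(n) = 1/2^n.
# Each term is bounded in absolute value by n / 2^n, which sums to 2,
# so the series converges absolutely even though U itself is unbounded.

def U(n):
    return n if n % 2 == 0 else -n

def p(n):
    return 2.0 ** -n

total = 0.0
for n in range(1, 60):
    total += p(n) * U(n)

print(total)  # approaches -2/9, i.e. about -0.2222
```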
So I give 3 caveats on Peter's theorem:
1. It only applies if the number of possible worlds (and possible actions) you are summing over is uncountable. This seems overly restrictive. It hardly matters whether the expected utility converges when the sum for the expected utility of a single action would be non-computable regardless of whether it converged.
2. It is impossible to construct a probability distribution satisfying his requirements, so the theorem doesn't apply to any possible situation.
3. It doesn't prove that the expected utility isn't bounded. The way the theorem works, it can't rule out the partial sums being confined to an interval of length about 2; e.g., U(k) might be provably between -1.1 and 1.1. A reasoner can work fine with bounded utility functions.
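A toy illustration of that last caveat (mine, not from the paper): the series 1 - 1 + 1 - 1 + ... has infinitely many terms of absolute value >= 1, so it fails to converge in exactly the sense the proof establishes, yet every one of its partial sums is either 0 or 1. Non-convergence of that kind is perfectly compatible with the running total staying inside a narrow bound.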
I can't agree with any of your caveats. (This is not the same as saying that I think everything in PdB's paper is correct. I haven't looked at it carefully enough to have an opinion on that point.)
"my second counterexample still looks solid to me ... It only applies if the number of ... is uncountable":
The function U in PdB's paper doesn't take integers as arguments; it takes infinite sequences of "perceptions". Provided there are at least two possible perceptions at each time step, there are uncountably many such sequences. How are yo...