Can you find any reason why this is not a counterexample?
Yes - the bit that violates de Blanc's conditions is where you say "I choose the case where the set of possible worlds consistent with observation is ...". In fact, you have to assign positive probability to every computable function whose values at points in the set I agree with known values. (Every member of S_I must be a hypothesis.)
Personally, I think it's very easy to 'oversell' de Blanc's result. It's an artifact of the weird constraint of being forced, for every n, to be able to compute a number representing "the probability of phi_n, if it turns out to be a meaningful hypothesis" even without knowing whether phi_n is a meaningful hypothesis. (E.g. "the first hypothesis whose utility (at k) is greater than [this thing which may not halt].")
Yes - the bit that violates de Blanc's conditions is where you say "I choose the case where the set of possible worlds consistent with observation is ...". In fact, you have to assign positive probability to every computable function whose values at points in the set I agree with known values. (Every member of S_I must be a hypothesis.)
What I am doing is choosing the members of S_I. So every member of S_I is a hypothesis.
If the theorem means what it says that it means - that "a computable utility function will have convergent expected utilities iff that function is bounded" - then this is true in any set of possible worlds, meaning any consistent choice of h and S_I. I have chosen one.
The only reason I would not have a free choice of S_I is if I had already nailed down S_I in some other way (say, by choosing the set I), or if I were using a set S_I for which there was no possible set of observations I such that S_I was equal to the set of worlds consistent with I, or because each application of the theorem is meant to apply to all possible choices of S_I.
The last of these would mean that the theorem is actually proving that, for any unbounded utility function U, there exists some set of possible worlds for which it does not converge. That would be a still-interesting, but much weaker result.
What I am doing is choosing the members of S_I.
You're not allowed to - de Blanc has already supplied a definition of S_I. One must either adopt his definition or be talking about something other than his result.
He supplied a definition, not a particular set. I am using his definition, and providing one possible instantiation that is compatible with that definition.
The set S is the set of all total recursive functions. This is set in stone for all time. Therefore, there is only one way that S_I can refer to different things: through different choices of the observations I and the values h(i).
But regardless of I and the values of h(i), it's easy to see that one cannot restrict S_I in the way you're attempting to do.
In fact, one can easily see that S_I = the set of functions of the form "if x is in I then h(x), otherwise f(x)" where f is an arbitrary recursive function.
That is, the whole "I" business is completely pointless, except (presumably) to help the reader assure themselves that the result does apply to AIXI.
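The form of S_I described above can be sketched in code (Python; the name `patched` and the toy values of h, I, and f are mine, purely for illustration):

```python
def patched(h, I, f):
    """Return the function that agrees with h on the observed
    points I, and with an arbitrary recursive f everywhere else.
    Every member of S_I has this form for some recursive f."""
    return lambda x: h(x) if x in I else f(x)

# Toy example: observations I = {0, 1} pin down h only there;
# outside I the hypothesis may be any recursive function at all.
h = lambda x: x * x
g = patched(h, {0, 1}, lambda x: 7)
print([g(x) for x in range(4)])  # [0, 1, 7, 7]
```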
So, we can rescue our utility function from the theorem if we are allowed to assign zero probability to arbitrary hypotheses that have no plausibility other than that they have not been absolutely ruled out. Such as the hypothesis that the laws of physics are valid at all times except on October 21, 2011.
Being allowed to do this would make the counterexample work.
Can you find any reason why this is not a counterexample?
The only step I'm suspicious of is the step where I choose the set of possible worlds.
Yes. You don't have a free choice of S_I. (Excuse the TeXisms -- I don't know of a way of getting subscripts and superscripts in a comment.) S_I is defined to be the subset of S that is consistent with observations so far, and S is the set of all total recursive functions from N to N (section 2 of the paper). H, the set of hypotheses, is a set of total functions from N to N that is required to include all of S_I.
Gödel numbering is used to enumerate R, the set of all partial recursive functions. R includes S and S_I, but neither S nor S_I is enumerable (by unsolvability of the halting problem).
S_I contains functions big enough and early enough in the enumeration that unless your utility function tends to zero uncomputably fast, it is blown up by the Busy Beaver.
Why would I not have a free choice of S_I when constructing a counterexample? Remember that a counterexample is one particular case, taken out of all possible cases that a theorem applies to.
The only reason I would not have a free choice of S_I, is if each application of the theorem is meant to apply to all possible choices of S_I. That would mean that the theorem is actually proving that, for any unbounded utility function U, there exists some set of possible worlds for which it does not converge. That would be a still-interesting, but much weaker result.
Gödel numbering is used to enumerate R, the set of all partial recursive functions. R includes S and S_I, but neither S nor S_I is enumerable (by unsolvability of the halting problem).
Sorry; I believe you are incorrect, for the two reasons given in my post. Think about it: Goedel numbers are integers; how could anything that is enumerated by Goedel numbers not be enumerable? S_I, S, and R are all enumerable. The original paper says that R is the set of partial mu-recursive functions, which means computable functions; and the number of computable functions is enumerable.
EDIT: As AlephNeil noted, I meant that R is countable. "Neither S nor S_I is enumerable (by unsolvability of the halting problem)" is not supported by any argument. Nor does whether it is enumerable or not affect anything in my original post.
Goedel numbers are integers; how could anything that is enumerated by Goedel numbers not be enumerable? S_I, S, and R are all enumerable. The original paper says that R is the set of partial mu-recursive functions, which means computable functions; and the number of computable functions is enumerable.
You seem to be using 'enumerable' to mean 'countable'. (Perhaps you're confusing it with 'denumerable' which does mean countable.)
RichardKenneway means "recursively enumerable".
You're right! But I may still be right that the set of functions in R is enumerable. (Not that it matters to my post.)
There is a Turing function that can take a Goedel number, and produce the corresponding Goedel function. If you can define a programming language that is Turing-complete, and for which all possible strings are valid programs, then you just turn this function loose on the integers, and it enumerates the set of all possible Turing functions. Can this be done?
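For what it's worth, the "turn it loose on the integers" idea can be sketched like this (Python; purely illustrative). It enumerates every string over a fixed alphabet in length-then-lexicographic order; if every such string were a valid program of some Turing-complete language, this would enumerate all programs. The catch, as noted elsewhere in the thread, is that the functions so enumerated are partial, and selecting the total ones (the set S) is undecidable.

```python
from itertools import count, islice, product

def all_programs(alphabet):
    """Yield every finite string over the alphabet, shortest first.
    Composed with an interpreter for a language in which every string
    is a valid program, this enumerates all (partial) computable
    functions; it cannot be cut down to the total ones."""
    for n in count(1):
        for chars in product(alphabet, repeat=n):
            yield "".join(chars)

print(list(islice(all_programs("ab"), 6)))  # ['a', 'b', 'aa', 'ab', 'ba', 'bb']
```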
Why would I not have a free choice of S_I when constructing a counterexample?
Because S_I is defined in the paper as the set of all total recursive functions compatible with the observations I. You cannot choose it to be anything else.
Gödel numbering is used to enumerate R, the set of all partial recursive functions. R includes S and S_I, but neither S nor S_I is enumerable (by unsolvability of the halting problem).
That was a sloppily expressed statement on my part. What is the case, and what I should have expressed, is the following:
R is (encoded by) a recursive set of integers: those integers that are Gödel numbers of partial-recursive functions from N to N. That it is recursive means that there is a Turing machine that can decide for any integer whether it is in R or not.
S is not recursive, nor is it even recursively enumerable. That is, there is no Turing machine which, started on a blank tape, will output exactly all the members of S and no others. The same is true of S_I.
The fact that the values of the function defined by the n'th member of S_I can grow with n faster than any computable function is what makes the proof work.
Sorry; I believe you are incorrect, for the two reasons given in my post. Think about it: Goedel numbers are integers; how could anything that is enumerated by Goedel numbers not be enumerable? S_I, S, and R are all enumerable. The original paper says that R is the set of partial mu-recursive functions, which means computable functions; and the number of computable functions is enumerable.
I hope my restatement clarifies the matter. S_I is of course countable, but there is no algorithm to calculate "the sum, over all possible worlds f in S_I, of p(f)U(f(k))", because there is no way of enumerating S_I, i.e. no computable bijection between S_I and N. This in turn prevents this expected-utility function from being computable, and as Peter de Blanc's paper shows, it cannot even be bounded below by a computable function.
Because S_I is defined in the paper as the set of all total recursive functions compatible with the observations I. You cannot choose it to be anything else.
As I said, I am claiming to have a counterexample. I can choose any example out of all possible examples. You must show that the example I've chosen is not a possible case, to say that I'm not free to choose it.
S is not recursive, nor is it even recursively enumerable. That is, there is no Turing machine which, started on a blank tape, will output exactly all the members of S and no others. The same is true of S_I.
S is a subset of R, and R is the set of partial mu-recursive functions, meaning it consists of computable functions, meaning it is countable. R is enumerated by Goedel numbers; therefore it is enumerable. The elements of S_I are exactly those X for which p(X) > 0; and as I showed, they must be countable for this to occur. These are three separate proofs that S_I is countable in cardinality.
Constructing a Turing machine that generates S_I would be difficult. We don't have the information needed to do that. But saying that you don't know how to write that Turing machine is not a proof that it does not exist.
S_I is of course countable, but there is no algorithm to calculate "the sum, over all possible worlds f in S_I, of p(f)U(f(k))", because there is no way of enumerating S_I, i.e. no computable bijection between S_I and N.
I've just demonstrated an example in which I provided a bijection between S_I and N, and computed the sum.
Because S_I is defined in the paper as the set of all total recursive functions compatible with the observations I. You cannot choose it to be anything else.
As I said, I am claiming to have a counterexample. I can choose any example out of all possible examples. You must show that the example I've chosen is not a possible case, to say that I'm not free to choose it.
A counterexample to what? Peter de Blanc defines exactly what he means by S_I. Replacing S_I by something completely different and calling that S_I does not make a counterexample to what he proves. It may be an example of something worth proving; he himself suggests this at the end of his 2009 paper:
We could use a smaller hypothesis space; perhaps not all computable environments should be considered.
This is supposed to be a counterexample. A counterexample means you are allowed to select any possible case. This is one possible case. I am not "replacing S_I by something completely different." I am choosing one possible S_I. If you don't like my choice of S_I, you need to show that it is not a possible case. That would include showing that I have chosen an S_I that is inconsistent with my earlier choices. Try to do that.
If I had specified the set of observations I, then also specified S_I in a way not determined by I, then you could make that argument. As I have not specified I, you need to show that my choice of S_I is inconsistent with any possible set I in order to make the argument you're trying to make.
Let J be the constant set that I is currently equal to. Your agent's set of hypotheses then does not contain the computable function f defined by:
f(k) = (2(1 - 1/2^k))^4 for all k in J,
f(k) = -k for all k not in J.
By construction, this hypothesis must be compatible with the known input-output pairs, since it is indistinguishable from your f_4 given the observations in J. S_I must therefore contain f iff it contains f_4. Your S_I does not satisfy this, so it is not a counterexample.
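The point can be checked concretely (Python sketch; the set J and the sample values are hypothetical):

```python
# f agrees with f_4 everywhere on J but differs outside it, so any
# S_I that contains f_4 must contain f as well.
f4 = lambda k: (2.0 * (1.0 - 2.0**-k))**4
J = {1, 2, 3}
f = lambda k: f4(k) if k in J else -k  # "-k" as in the comment above

assert all(f(k) == f4(k) for k in J)   # indistinguishable on J
assert f(10) != f4(10)                 # but a different function
```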
I think you're right. You may have unravelled my counterexample. Tentative congratulations!
I now get to eat a piece of chocolate, as a reward for being proven wrong.
Suggestion: treat yourself to a second piece of chocolate and add an ETA at the top of the post - the parent comment is buried too deeply in the thread, for informational purposes.
As I have not specified I, you need to show that my choice of S_I is inconsistent with any possible set I in order to make the argument you're trying to make.
In the original paper, S_I is determined by I. There is no I for which your set S_I is the set defined in the paper. It really is as simple as that.
Constructing a Turing machine that generates S_I would be difficult. We don't have the information needed to do that. But saying that you don't know how to write that Turing machine is not a proof that it does not exist.
I am not saying that I don't know how, I am saying that it cannot be done. Of course, saying that it cannot be done is also not a proof that it does not exist, merely a claim that there is such a proof. So, here is such a proof.
Suppose S_I were recursively enumerable. This means that there is a function f from N to N such that (1) f is Turing-computable, (2) for all n, f(n) is the Gödel number of a Turing machine that computes a function in S_I, and (3) for every function in S_I there is an n such that f(n) computes that function.
Recall that S_I is, by definition, the set of total recursive functions that agree with the finite set of ordered pairs I.
Let g be the function from N to N that agrees with I, and such that for every integer n, g(n') = f(n)(n') + 1, where n' is the nth integer that is not in the domain of I, and f(n)(n') denotes the function with Gödel number f(n) applied to n'. Because f is total recursive and I is finite, g is total recursive. Since g is consistent with I, g is in S_I. But g differs from the function computed by f(n) for every n, contradicting hypothesis (3). Therefore S_I is not r.e.
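The diagonal step can be sketched in code (Python; a toy illustration with hypothetical names, not a real enumeration of S_I):

```python
def diagonalize(enumeration, I):
    """Given a purported computable enumeration of S_I (a map
    n -> total function) and the finite observation set I (a dict),
    build g: g agrees with I, and on the n-th point outside the
    domain of I it returns enumeration(n) there, plus 1. If every
    enumeration(n) were total, g would be total and consistent
    with I, yet differ from every enumeration(n) -- so no such
    enumeration of all of S_I can exist."""
    def g(x):
        if x in I:
            return I[x]
        # x is the n-th natural number not in the domain of I
        n = sum(1 for y in range(x) if y not in I)
        return enumeration(n)(x) + 1
    return g

I = {0: 5}                      # one observed input-output pair
enum = lambda n: (lambda x: n)  # toy "enumeration": constant functions
g = diagonalize(enum, I)
print([g(x) for x in range(4)])  # [5, 1, 2, 3]
```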
Peter de Blanc ... claims to show that a utility function can have a value only if the utility function is bounded.
This is important because it implies that, if a utility function is unbounded, it is useless.
From the LW wiki (emphasis added):
Peter de Blanc has proven that if an agent assigns a finite probability to all computable hypotheses...and assigns unboundedly large finite utilities over percept sequences...then the sum in the expected utility formula does not converge.
Peter de Blanc's paper [is] sometimes misinterpreted as showing that any agent with an unbounded finite utility function over outcomes is not consistent, but this has yet to be demonstrated.
What distinction is being drawn in the wiki article between percept sequences and outcomes? The agent's perceptions are its only clue to the outcomes it has achieved, so a utility function over outcomes reduces, via the agent's posterior distribution of outcomes given perceptions, to one over perceptions.
ETA: I'm itching to add a {{by whom}} tag after "sometimes" and {{citation needed}} after "misinterpreted", but I don't think the LW wiki supports those. The implication of the sentence is that some people have interpreted the paper that way and at least one person has argued that this is incorrect, but who and where?
The percept sequences paper is his 2009 paper. That wiki page does refer to his 2007 paper; that may be an error.
It is in any case clear in the 2007 paper that the entities in the sum are possible worlds, and possible actions; and that it is the utility function that is unbounded. "Percepts" in this paper are the result of evaluating a possible-world function on a possible-action. None of this terminological quibbling affects anything I wrote.
As I quoted above, the 2007 paper says, "A computable utility function will have convergent expected utilities iff that function is bounded." Please dissect the difference between that, and the statement I made that you are criticizing, if you think my statement is a misinterpretation. (I just now changed my statement to "a utility function can have an expected value", but I assume you realize that was what I meant.)
As I quoted above, the 2007 paper says, "A computable utility function will have convergent expected utilities iff that function is bounded." Please dissect the difference between that, and the statement I made that you are criticizing, if you think my statement is a misinterpretation.
In that case, the difference is the word "computable". Not that that affects your argument, since as I understood it your proposed counterexample is the identity function.
The problem with my counterexample was that the set S_I is much larger than the set I tried to define it as.
But I think this reveals a fatal flaw in the paper. The environment is described by a function h: N->N. S_I is the set of functions that match h on a finite number of examples.
This means that S_I will have cardinality N^N (the cardinality of the reals). But this makes it impossible to satisfy the precondition that there exists a probability distribution over S_I such that every element is > 0, and the total sums to 1.
Nope. S_I has cardinality N, because it's a subset of S, which is the set of computable functions, which has cardinality N.
Are you sure you get to choose a specific normalized probability function? Because then you can just number your possible outcomes by integers n and set p(n) to 1/U(n) * 1/2^n, which seems too easy to have been missed.
It might be slightly more complicated in practice, since S doesn't have to be countable if the uncountable parts can be integrated over. This doesn't violate the linked theorem because each item in the "sum" is not greater than zero, it's infinitesimal. But you could do pretty much the same thing to the probability densities inside any integrals.
So I suspect that the result is only proved for when you don't get to choose your own probability function, and that that comes up in the paper. But I agree that due to some effects you can end up with probability and utility functions that naturally are biased against high-utility situations, and this would be a counterexample to the result's applicability in those special cases.
The probability function I chose meets the requirements in the paper; therefore it is a case that the theorem should apply to.
Trying to set p(n) to 1/U(n) * 1/2^n is clever; but it doesn't work, because that probability distribution is not known to sum to 1. (It would sum to 1 if U(n) = 1 for all n.)
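A quick numeric illustration of this point, assuming U(n) = n (Python sketch):

```python
import math

# With U(n) = n, the proposed p(n) = (1/U(n)) * (1/2^n) sums to
# ln 2 (about 0.693), not to 1, so as written it is not a
# probability distribution.
total = sum((1.0 / n) * 2.0**-n for n in range(1, 60))
print(total, math.log(2))
```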
Because then you can just number your possible outcomes by integers n and set p(n) to 1/U(n) * 1/2^n, which seems too easy to have been missed.
The reason why this wouldn't work is that sometimes what you're calling "U(n)" would fail to be well defined (because some computation doesn't halt) whereas p(n) must always return something.
The reason why this wouldn't work is that sometimes what you're calling "U(n)" would fail to be well defined (because some computation doesn't halt)
No; the utility function is stipulated to be computable.
BTW, I am confused about Godel numbers. Godel numbering is supposed to be able to assign a number to every computable function. But the set of all functions from N to N has the cardinality of the reals. Are only countably many of the possible functions computable?
My comment on that was here. To quote it in its entirety:
Um: yuck. For one thing, why is the agent assumed to be summing infinite sequences in its utility function in the first place?
My reaction these days is much the same. We can forget about most things after the universal heat death. So: this is a kind of fantasy mathematics. Best to ignore it - and focus instead on things that have some chance of being useful.
You're objecting to his 2009 paper, which is about a sum over infinite time. This 2007 paper is timeless.
My response to his 2009 paper is here. At present I think that it a) assumes what it is trying to prove, and b) assumes there is no time discounting, and so could be seen as an argument for time discounting. But I haven't looked at it closely enough to be confident of (a), given how accurate Peter typically is.
The future is big enough that I don't think that cutting off the infinite tail helps. Look at the gigantic quantities of utility involved in hypothetical but practically grounded futures involving AI-assisted mankind reaching merely the rest of our galaxy, and according to which the most important thing for everyone to do right now is either work on FAI or donate to the SIAI. Is that a reasonable calculation, or a Pascal's Mugging?
As a rule of thumb, whenever something blows up infinitely at an infinite limit, one should consider that it may blow up practically at a practically reachable point.
Look at the gigantic quantities of utility involved in hypothetical but practically grounded futures involving AI-assisted mankind reaching merely the rest of our galaxy, and according to which the most important thing for everyone to do right now is either work on FAI or donate to the SIAI. Is that a reasonable calculation, or a Pascal's Mugging?
Personally, I rate that as not being a reasonable calculation - though I do think that machine intelligence is an important field, which people should consider working on, if they have an aptitude for it.
The future is big enough that I don't think that cutting off the infinite tail helps.
It sure helps with the particular problem of adding up an infinite number of finite things giving an infinite value. That's what the problem under discussion here boils down to.
Peter de Blanc submitted a paper to arXiv.org in 2007 called "Convergence of Expected Utilities with Algorithmic Probability Distributions." It claims to show that a computable utility function can have an expected value only if the utility function is bounded.
This is important because it implies that, if a utility function is unbounded, it is useless. The purpose of a utility function is to compare possible actions k by choosing the k for which U(k) is maximal. You can't do this if U(k) is undefined for any k, let alone for every k.
I don't know whether any agent we contemplate can have a truly unbounded utility function, since the universe is finite. (The multiverse, supposing you believe in that, might not be finite; but as the utility function is meant to choose a single universe from the multiverse, I doubt that's relevant.) But it is worth exploring, as computable functions are worth exploring despite not having infinitely long tapes for our Turing machines. I previously objected that the decision process is not computable; but this is not important - we want to know whether the expected value exists, before asking how to compute (or approximate) it.
The math in the paper was too difficult for me to follow all the way through; so instead, I tried to construct a counterexample. This counterexample does not work; the flaw is explained in one of comments below. Can you find the flaw yourself? This type of error is both subtle and common. (The problem is not that the theorem actually proves that for any unbounded utility function, there is some set of possible worlds for which the expected value does not converge.)
This is the formalism the paper uses as I understand it:
The world is described by a function h: N → N, where N represents the integers, and h is a function from a countable set of possible actions into a countable set of possible observations.

H is the set of possible functions h. p: H → [0 .. 1] is a probability distribution for H. S_I is a subset of H representing all possible worlds (possible values for h) that are consistent with prior observations, meaning that for f in S_I, p(f) > 0, and for f not in S_I, p(f) = 0.

The agent considers a countable number of possible actions. The expected utility of action k is the sum, over all possible worlds f in S_I, of p(f)U(f(k)). Theorem 1 says that this sum does not converge, given some conditions which are supposed to have the meaning, "U is unbounded".

First, note that S_I must be countable. This is because the sum over f in S_I of p(f) = 1, and as Planetmath.org says, a sum can be finite only if the number of items summed is countable. (Also, they are all computable functions, and the number of computable functions is countable; and they are already indexed with Gödel numbers in the paper.) I will renumber the set S_I. Instead of indexing its members with Gödel numbers, as the paper does, I will index them so that f_i (the i-th member of S_I) is a function of i in a different way.

I choose a probability function p: S_I → [0 .. 1]. The sum of the p(f_i) must be 1. I choose the function p(f_i) = 3/4^i. This sums to 1, since the infinite series sum of 1/4^i (starting from i=1) is 1/3. (To see this, let S be the sum of 1/4^i from i=0 to infinity; then S/4 = S - 1, so 4S - S = 4, S(4-1) = 4, and S = 4/3. Subtracting the i=0 term gives 4/3 - 1 = 1/3.)

Since the theorem is supposed to hold for all cases, I choose the case where the set of possible worlds consistent with observation is S_I = { f_i | f_i(k) = (2(1 - 1/2^k))^i }. Note that for every i, for every k_2 > k_1, f_i(k_2) ≥ f_i(k_1), and the limit as k approaches infinity of f_i(k) is 2^i. So f_i(k) is bounded above by 2^i.

I choose the unbounded utility function U(f(k)) = f(k). Since f_i(k) is bounded above by 2^i, p(f_i)U(f_i(k)) is bounded above by (3/4^i)·2^i. The sum from i=1 to infinity of (3/4^i)·2^i is 3 × (sum from i=1 to infinity of 2^i/4^i) = 3 × (sum from i=1 to infinity of 1/2^i) = 3.
So, for this case, the utility function is U(n) = n, which is unbounded; all other criteria are met; and the expected value is always bounded above by 3.
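As a sanity check, the arithmetic can be verified numerically with partial sums (Python sketch; `p` and `f` are the distribution and hypotheses defined in the post):

```python
# p(f_i) = 3/4^i sums to 1; f_i(k) = (2(1 - 1/2^k))^i < 2^i;
# and the expected utility sum p(f_i) * f_i(k) stays below 3.
p = lambda i: 3.0 / 4.0**i
f = lambda i, k: (2.0 * (1.0 - 2.0**-k))**i

total_prob = sum(p(i) for i in range(1, 60))
print(total_prob)  # ~1.0: the probabilities sum to 1

for k in (1, 5, 20):
    eu = sum(p(i) * f(i, k) for i in range(1, 200))
    print(k, eu)  # grows with k but stays bounded below 3
```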
Can you find any reason why this is not a counterexample?
The only step I'm suspicious of is the step where I choose the set of possible worlds. Does the theorem implicitly assume that the utility function may be evaluated in any countable set of possible worlds (meaning it is actually proving that, for any unbounded utility function, there is some set of possible worlds for which the expected value does not converge)?