
A model of UDT with a halting oracle

Post author: cousin_it 18 December 2011 02:18PM, 41 points

This post requires some knowledge of mathematical logic and computability theory. The basic idea is due to Vladimir Nesov and me.

Let the universe be a computer program U that can make calls to a halting oracle. Let the agent be a subprogram A within U that can also make calls to the oracle. The source code of both A and U is available to A.

Here's an example U that runs Newcomb's problem and returns the resulting utility value:

  def U():
    # Fill boxes, according to predicted action.
    box1 = 1000
    box2 = 1000000 if (A() == 1) else 0
    # Compute reward, based on actual action.
    return box2 if (A() == 1) else (box1 + box2)

A complete definition of U should also include the definition of A, so let's define it. We will use the halting oracle only as a provability oracle for some formal system S, e.g. Peano arithmetic. Here's the algorithm of A:

  1. Play chicken with the universe: if S proves that A()≠a for some action a, then return a.
  2. For every possible action a, find some utility value u such that S proves that A()=a ⇒ U()=u. If such a proof cannot be found for some a, break down and cry because the universe is unfair.
  3. Return the action that corresponds to the highest utility found in step 2.
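The algorithm above can be sketched in the same Python style as U(). Everything here is a stand-in: `provable` stubs the provability oracle with exactly the two implications that Lemma 1 below says S proves, and candidate utilities are drawn from a small hardcoded set rather than found by actual proof search.

```python
# Hypothetical sketch of the agent A for Newcomb's problem. The provability
# oracle is stubbed: PROVEN lists the implications S actually proves here.

PROVEN = {
    "A()=1 => U()=1000000",
    "A()=2 => U()=1000",
}

def provable(stmt):
    return stmt in PROVEN  # stand-in for the halting/provability oracle

def A():
    actions = [1, 2]
    # Step 1: play chicken with the universe.
    for a in actions:
        if provable("A()!=%d" % a):
            return a
    # Step 2: find a provable utility value for each action.
    utility = {}
    for a in actions:
        for u in (0, 1000, 1000000, 1001000):
            if provable("A()=%d => U()=%d" % (a, u)):
                utility[a] = u
                break
        else:
            raise RuntimeError("the universe is unfair")
    # Step 3: return the action with the highest provable utility.
    return max(actions, key=lambda a: utility[a])
```

With this stub, A() returns 1, matching the one-boxing result proved below.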

Now we want to prove that the agent one-boxes, i.e. A()=1 and U()=1000000. That will follow from two lemmas.

Lemma 1: S proves that A()=1 ⇒ U()=1000000 and A()=2 ⇒ U()=1000. Proof: you can derive that from just the source code of U, without looking at A at all.

Lemma 2: S doesn't prove any other utility values for A()=1 or A()=2. Proof: assume, for example, that S proves that A()=1 ⇒ U()=42. But S also proves that A()=1 ⇒ U()=1000000, therefore S proves that A()≠1. According to the first step of the algorithm, A will play chicken with the universe and return 1, making S unsound (thx Misha). So if S is sound, that can't happen.

We see that the agent defined above will do the right thing in Newcomb's problem. And the proof transfers easily to many other toy problems, like the symmetric Prisoner's Dilemma.
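For the symmetric Prisoner's Dilemma, the universe program can be written in the same style as U() above. The payoff matrix and the one-line stub for A are assumptions of this sketch; the point is just that both payoff-relevant calls go to the same subroutine.

```python
# Hypothetical symmetric Prisoner's Dilemma universe in the style of U().
# Both players run the same subroutine A, so S can prove implications like
# A()=1 => U()=3 directly from the source code. The payoff numbers and the
# stub for A are assumptions for illustration.

def A():
    return 1  # 1 = cooperate, 2 = defect; stand-in for the full agent

def U_pd():
    my_move = A()      # my action
    their_move = A()   # the opponent runs identical code
    payoffs = {(1, 1): 3, (1, 2): 0, (2, 1): 5, (2, 2): 1}
    return payoffs[(my_move, their_move)]
```

Since both calls return the same value, only the diagonal outcomes are reachable, so S proves A()=1 ⇒ U()=3 and A()=2 ⇒ U()=1, and the agent from the post cooperates.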

But why? What's the point of this result?

There's a big problem about formalizing UDT. If the agent chooses a certain action in a deterministic universe, then it's a true fact about the universe that choosing a different action would have caused Santa to appear. Moreover, if the universe is computable, then such silly logical counterfactuals are not just true but provable in any reasonable formal system. When we can't compare actual decisions with counterfactual ones, it's hard to define what it means for a decision to be "optimal".

For example, one previous formalization searched for formal proofs up to a specified length limit. Problem is, that limit is a magic constant in the code that can't be derived from the universe program alone. And if you try searching for proofs without a length limit, you might encounter a proof of a "silly" counterfactual which will make you stop early before finding the "serious" one. Then your decision based on that silly counterfactual can make it true by making its antecedent false... But the bigger problem is that we can't say exactly what makes a "silly" counterfactual different from a "serious" one.

In contrast, the new model with oracles has a nice notion of optimality, relative to the agent's formal system. The agent will always return whatever action is proved by the formal system to be optimal, if such an action exists. This notion of optimality matches our intuitions even though the universe is still perfectly deterministic and the agent is still embedded in it, because the oracle ensures that determinism is just out of the formal system's reach.

P.S. I became a SingInst research associate on Dec 1. They did not swear me to secrecy, and I hope this post shows that I'm still a fan of working in the open. I might just try to be a little more careful because I wouldn't want to discredit SingInst by making stupid math mistakes in public :-)

Comments (100)

Comment author: Vladimir_Nesov 18 December 2011 04:02:52PM *  8 points [-]

The basic idea is due to Vladimir Nesov and me.

(In the setting where there were no oracles, this problem was generally discussed on the decision theory list, with various conditions used to prove when intended moral arguments [A=A1=>U=U1] will in fact be proven by the agent (are agent-provable, not just provable). The "chicken rule" condition used here I figured out in explicit form sometime in April ("To avoid confusion, immediately perform any action that implies absurdity"), though I'm not sure anyone published an actual proof using this condition. A few days ago, cousin_it posted a thought experiment involving a Turing oracle, and I pointed out that using the chicken rule in this setting elicits a much nicer result.)

Comment author: cousin_it 18 December 2011 04:05:03PM *  3 points [-]

That's an accurate account, thanks for posting it here! I'm always worried about coming off as though I'm grabbing priority...

Comment author: Vladimir_Nesov 18 December 2011 04:13:07PM *  5 points [-]

Thanks for writing this up. I'm currently trying to repurpose my failed unpublished post draft from August (on acausal dependencies) to this result, to give a more systematic account of the problem it addresses, and describe the no-oracles case in more detail. Hope to get it done in about a week.

Edit: And posted, 2.5 months late.

Comment author: orthonormal 18 December 2011 07:19:34PM 5 points [-]

Congratulations! This is a key step forward. And also, congrats to the SIAI for getting both you and Vladimir Nesov as official researchers.

Comment author: cousin_it 18 December 2011 08:08:46PM *  4 points [-]

Thanks!

In case anyone's interested: I'm not a paid researcher and don't want to become one, as long as I can support myself by programming.

Comment author: DanielVarga 18 December 2011 08:21:39PM 3 points [-]

Congratulations! Do you mind an off-topic question? I see you work at Google. What's Google's attitude toward such extracurricular activities? Obviously, I am not asking about specific negotiations, but about written and unwritten company rules. For example, can you dedicate Google company time to SingInst work? (I am thinking of the famous Google 20% rule here.)

Comment author: cousin_it 18 December 2011 08:27:44PM 6 points [-]

The answer is refreshingly boring: I can do this stuff, but not on company time. 20% time is supposed to be used for Google projects.

Comment author: SilasBarta 19 December 2011 10:02:59PM *  11 points [-]

Sorry, I just had this image:

cousin_it: I want to use my 20% time to prevent the extermination of humanity.
Google overlord: Okay, and this would help Google ... how, exactly?

(I know, I know, oversimplification.)

Comment author: Dr_Manhattan 21 December 2011 12:36:49AM -2 points [-]

Ads^^^3

Comment author: Nisan 17 January 2012 05:56:27AM 0 points [-]

Heh.

Comment author: AlephNeil 18 December 2011 06:13:26PM *  5 points [-]

But the bigger problem is that we can't say exactly what makes a "silly" counterfactual different from a "serious" one.

Would it be naive to hope for a criterion that roughly says: "A conditional P ⇒ Q is silly iff the 'most economical' way of proving it is to deduce it from ¬P or else from Q." Something like: "there exists a proof of ¬P or of Q which is strictly shorter than the shortest proof of P ⇒ Q"?

A totally different approach starts with the fact that your 'lemma 1' could be proved without knowing anything about A. Perhaps this could be deemed a sufficient condition for a counterfactual to be serious. But I guess it's not a necessary condition?

Comment author: cousin_it 18 December 2011 06:51:39PM *  4 points [-]

Both these approaches have been proposed on the workshop list. Good job figuring them out so quickly!

Would it be naive to hope for a criterion that roughly says: "A conditional P ⇒ Q is silly iff the 'most economical' way of proving it is to deduce it from ¬P or else from Q." Something like: "there exists a proof of ¬P or of Q which is strictly shorter than the shortest proof of P ⇒ Q"?

I can make such a criterion fall into nasty Loebian traps by maliciously tweaking the formal system to make some proofs longer than others. That means any proof of correct behavior (like one-boxing) must rely on the intimate details of the proof enumeration order, but we have no idea how to talk formally about such things.

A totally different approach starts with the fact that your 'lemma 1' could be proved without knowing anything about A. Perhaps this could be deemed a sufficient condition for a counterfactual to be serious. But I guess it's not a necessary condition?

A doesn't necessarily get the code of U neatly factored into A and everything else. The agent has to find copies of itself in the universe, it doesn't get told the positions of all copies explicitly. Note that if we replace the U in the post with some other U' that can be proven equivalent to U by S, then A can notice that equivalence, unscramble the code of U' into U, and win.

Comment author: [deleted] 18 December 2011 07:51:31PM 0 points [-]

Something related to the 'most economical proof' thought is the following: suppose both P and Q depend on some variable x, and the conditional P(x) ⇒ Q(x) is true for all values of x. Then it is silly if either ¬P(x) holds for all x, or if Q(x) holds for all x.

The tricky thing would be to introduce x in a meaningful way. In the case where we want to prove conditionals of the form "agent does X ⇒ world does Y", we want to avoid ending up with a conditional that's true because we prove "agent doesn't do X" (I think we're actually alright with an unconditional proof of "world does Y"). So we want to make the agent's actions somehow depend on whatever x is. For instance, if we gave the agent a very very low probability of picking a random action (and let x be the source of randomness), would that do anything?

Comment author: cousin_it 18 December 2011 07:57:50PM 0 points [-]

That solution has also been discussed on the list. Unfortunately, for it to work as intended, the different copies of the agent must use correlated random variables, which is not very realistic in problems like the Prisoner's Dilemma.

Comment author: [deleted] 18 December 2011 08:55:44PM 0 points [-]

Right, but I'm not suggesting necessarily that the agent actually make decisions randomly, merely that we use this as an aid in proofs. For instance, suppose we assume that all agents (including A itself) have a probability of p (independently) of being replaced by a different algorithm, which just acts randomly. Then, we can try to prove that Pr[world does Y|agent does X] in this probability space approaches 1 as p approaches 0 (colloquially, "agent does X ⇒ world does Y with high probability").

Then if we end up accidentally proving silly things, then we merely end up proving that Pr[agent does X] goes to 0 (because it doesn't happen except when actions are chosen at random). However, the conditional probability shouldn't be affected.

For instance, take Newcomb's problem. The terrible thing in the usual case is that we stumble on a proof that A()=2, so then it follows that A()=1 => U() = 0, which scares us so we output A()=2.

Here that can't happen. Suppose we stumbled on a proof that A()=2 w.h.p. Then when we calculate the conditional probability Pr[U() = 0 | A() = 1], we calculate the probability Pr[U() = 0 and A() = 1] and end up with p^2/4 -- if even one of U() and A() ends up what it's supposed to be, this doesn't happen. So the conditional probability is p/2, which still goes to 0. On the other hand, when we calculate Pr[U() = 1000000 | A() = 1] = Pr[U() = 1M and A() = 1] / Pr[A() = 1], the numerator is p^2/4 + p/2(1-p) -- in the case where A went haywire but U didn't, this is what happens. So this conditional probability is 1 - p/2, which goes to 1. We believe this over the other probability, so we actually output A()=1 -- and this shouldn't happen.

Comment author: cousin_it 20 December 2011 03:01:01PM *  0 points [-]

I tried to parse your comment but couldn't. What are expressions like Pr[U()=0 | A()=1] supposed to mean? What event is "A()=1"? Newcomb's problem makes two calls to A() which affect the resulting utility differently. If the two instances of A always get "randomly replaced" together or not at all, then I agree that it solves Newcomb's problem, but I think the assumption is too strong. On the other hand, if they get "randomly replaced" independently, I think you need to give a more careful argument, and also I think it won't work :-(

Comment author: [deleted] 20 December 2011 05:28:47PM *  0 points [-]

This is why I defined the probability space to be that, instead of A sometimes doing something random, there's a low probability that A is replaced with a different agent that always does something random. I don't see why the assumption is too strong. We can define the probability space any way we like, since we don't actually have to implement it, all we need is to be able to prove things about the probability space.

Now that I say it carefully, it's somewhat reminiscent of the problem you're always objecting to: that we can't separate A from the rest of the universe. But if we can pick out the things that are "agents" -- basically, if we pick out anything that's not immediately predictable, and I think that can be made rigorous -- then we can make this definition.

Oh, but in the actual Newcomb's problem, the two calls to A are actually calls to different but identical routines, aren't they? Are they? One of them is A's actual thought process, the other is Omega's absolutely perfect prediction of A's thought process. But on the other hand, none of the proofs go through if you can't verify that the two copies are the same, which is equivalent to making them the same subroutine.

Comment author: cousin_it 21 December 2011 01:09:16PM *  2 points [-]

Yeah, the problem in the non-oracle setting is about separating A from the rest of the universe. I feel that any good solution to this problem should be "crisp" rather than delegated to A's fuzzy reasoning abilities, because at this point in our research we're not yet trying to make a good optimizer, but trying to define mathematically what optimization means in the first place.

Comment author: JoshuaZ 19 December 2011 03:17:57AM 6 points [-]

This is one of the most interesting posts I've seen on Less Wrong. It is a non-trivial result and it goes a long way towards dealing with a lot of the problems with UDT. I'm deeply impressed and wish I had some helpful comment to make other than general praise.

Comment author: cousin_it 19 December 2011 05:00:45AM 1 point [-]

Thanks!

Comment author: lukeprog 11 January 2012 04:08:35AM 2 points [-]

Is this the first time an advanced decision theory has had a mathematical expression rather than just a verbal-philosophical one?

This totally deserves to be polished a bit and published in a mainstream journal.

Comment author: cousin_it 11 January 2012 06:20:33PM *  3 points [-]

Is this the first time an advanced decision theory has had a mathematical expression rather than just a verbal-philosophical one?

That's a question of degree. Some past posts of mine are similar to this one in formality.

Nesov also said in an email on Jan 4 that now we can write this stuff up. I think Wei and Gary should be listed as coauthors too.

Comment author: Vladimir_Nesov 12 January 2012 10:35:56AM 0 points [-]

I still want to figure out games (like PD) in the oracle setting first. After the abortive attempt on the list, I didn't yet get around to rethinking the problem. Care to take a stab?

Comment author: cousin_it 12 January 2012 12:41:27PM *  1 point [-]

The symmetric case (identical payoffs and identical algorithms) is trivial in the oracle setting. Non-identical algorithms seem moderately difficult; our candidate solutions in the non-oracle setting only work because they privilege one of the outcomes a priori, like Loebian cooperation. Non-identical payoffs seem very difficult; we have no foothold at all.

I think we have a nice enough story for "fair" problems (where easy proofs of moral arguments exist), and no good story for even slightly "unfair" problems (like ASP or non-symmetric PD). Maybe the writeup should emphasize the line between these two kinds of problems. It's clear enough in my mind.

Comment author: Vladimir_Nesov 12 January 2012 05:39:11PM 0 points [-]

Part of the motivation was to avoid specifying agents as algorithms, specifying them as (more general) propositions about actions instead. It's unclear to me how to combine this with possibility of reasoning about such agents (by other agents).

Comment author: cousin_it 12 January 2012 06:32:50PM *  0 points [-]

That's very speculative, I don't remember any nontrivial results in this vein so far. Maybe the writeup shouldn't need to wait until this gets cleared up.

Comment author: Vladimir_Nesov 12 January 2012 10:35:35AM 0 points [-]

Is this the first time an advanced decision theory has had a mathematical expression rather than just a verbal-philosophical one?

(It's not "advanced", it's not even in its infancy yet. On the other hand, there is a lot of decision theory that's actually advanced, but solves different problems.)

Comment author: Jayson_Virissimo 16 June 2012 08:48:08AM *  2 points [-]

I think Luke meant "advanced" as in superrationality, not "advanced" as in highly developed.

BTW, nice work.

Comment author: Vladimir_Nesov 18 December 2011 03:55:05PM *  2 points [-]

In contrast, the new model with oracles has a nice notion of optimality, relative to the agent's formal system.

Specifically, given any formal system S for reasoning about the world and agent's place in it, the chicken rule (step 1) forces S to generate consistent theories of consequences for all possible actions. This seems to crack a long-standing problem in counterfactual reasoning, giving a construction for counterfactual worlds (in form of consistent formal theories) from any formal theory that has actual world as a model.

Comment author: Vladimir_Nesov 21 December 2011 03:51:23PM *  3 points [-]

...and the construction turns out not as interesting as I suspected. Something like this is very easy to carry out by replacing the agent A with another that can't be understood in S, but is equivalent to A (according to a system stronger than S). As a tool for understanding decision problems, this is intended to solve the problem of parsing the world in terms of A, finding how it depends on A, finding where A is located in the world, but if we can find all instances of A in the world to perform such surgery on them, we've already solved the problem!

Perhaps A can decide to make itself incomprehensible to itself (to any given S, rather), thus performing the transformation without surgery, formalization of free will by an act of then-mysterious free will? This could still be done. But it's not clear if this can be done "from the outside", where we don't have the power of making A transform to make the dependence of the world on its actions clear.

Comment author: DanArmak 18 December 2011 02:51:16PM *  1 point [-]

You put the oracle outside the universe, but the logical impossibility of an oracle is still visible from inside the universe. For example, this agent (EDIT simplified from original comment):

  1. Choose any two different actions a, b.
  2. If S can prove A()=a, return b, proving S inconsistent.
  3. Otherwise return a, proving S incomplete (i.e. undeserving of the name oracle).
  4. The universe breaks down and cries because A is unfair.
  5. Profit???

I don't see why exploring a universe with such contradictions is useful for UDT, can you explain?
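The agent in the steps above can be sketched using the same hypothetical provable() interface as the post. Stubbing S as proving nothing about this agent, the agent returns a, exhibiting a statement S cannot settle:

```python
# Sketch of the diagonalizing agent described above. The provability
# oracle is stubbed as proving nothing about this particular agent,
# which is an assumption of this sketch.

def provable(stmt):
    return False  # stub: S proves nothing about A_diag

def A_diag():
    a, b = 1, 2  # any two distinct actions
    if provable("A_diag()=%d" % a):
        return b  # S proved a falsehood about us: S is unsound
    return a      # S cannot prove A_diag()=a without being wrong
```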

Comment author: cousin_it 18 December 2011 02:55:55PM *  3 points [-]

The universe defined in the post doesn't have any contradictions. Every oracle studied in theoretical CS is "incomplete" in the sense that it can't answer all questions about programs that invoke the oracle itself, e.g. the halting oracle cannot answer all questions about programs that can invoke the halting oracle. But it can answer some questions about such programs, which turns out to be enough for the agent to make a decision.

Comment author: [deleted] 18 December 2011 04:41:34PM 0 points [-]

I am not convinced that the oracle can answer any questions about programs that call the oracle. That is simply not one of the things it does.

The halting problem corresponds to a language L which consists of all pairs (P, w) such that program P halts on input w. Then the oracle is defined to return 1 if you give it a pair in L, and 0 otherwise. But to do this, we have to have L defined already, so that definition cannot refer to this oracle. A recursive definition "L is the set of programs with an oracle for L that halt" is just ill-defined.

Comment author: cousin_it 18 December 2011 04:52:18PM *  5 points [-]

We're not asking the oracle directly about the halting behavior of programs that call the oracle. Instead we're using it as a provability oracle, asking it only questions of the form "if a program exhaustively searches for proofs of such-and-such statement in the formal system S, will it ever terminate?"

Since the statements we're trying to prove, in turn, refer to the behavior of programs that call the oracle, we need S to have some assumptions about the behavior of the oracle. These assumptions can be pretty weak, e.g. the proof in the post seems to go through even if S knows only that the oracle is a pure function that returns identical outputs for identical inputs. Or you could make the assumptions stronger, e.g. formalize the notion of "halting oracle" within S. That wouldn't break anything either, I think.
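Concretely, the reduction from provability to halting can be sketched like this. Everything here is a toy: the "formal system" proves exactly the even-numbered statements, and the halting oracle is approximated by a step budget, which a real oracle wouldn't need.

```python
# Toy illustration of using a halting oracle as a provability oracle.
# Assumptions: statements and proofs are integers, and in this toy system
# the statement phi is provable iff phi is even, with phi as its own proof.

def is_proof_in_S(n, phi):
    # checks whether n encodes a proof of phi in the toy system
    return n == phi and phi % 2 == 0

def searcher_halts(phi, budget=10000):
    # Stand-in for asking the halting oracle: "does the program that
    # exhaustively enumerates candidate proofs of phi ever terminate?"
    # The step budget replaces the oracle's exact answer.
    for n in range(budget):
        if is_proof_in_S(n, phi):
            return True
    return False

def provable(phi):
    return searcher_halts(phi)
```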

Comment author: [deleted] 18 December 2011 05:56:50PM *  2 points [-]

I see, thanks, that clears it up.

However, now I have an objection to a different step! (which I'm sure you will also address)

But S also proves that A()=1 ⇒ U()=1000000, therefore S proves that A()≠1. According to the first step of the algorithm, A will play chicken with the universe and return 1, making S inconsistent.

Just to clarify, let's say that the first step of A first checks if S proves that A()≠2, and then checks if S proves that A()≠1 (we could do this in either order, it doesn't matter, but this order makes the following explanation make sense better).

S proves that A()≠1. However, S will only be inconsistent if S can also prove that A()=1. To prove that A()=1, S needs to prove that S proves that A()≠1 (does this still make sense?). Furthermore, S needs to prove that S does not prove that A()≠2 (otherwise, A will perversely return 2 before it considers perversely returning 1).

In particular, the statement "S does not prove that A()≠2" implies "S is consistent" and therefore S cannot prove it.

I also have the suspicion that "S proves that A()≠1" is not something S can prove, but my logic-fu is not strong enough to decide one way or the other.

Comment author: cousin_it 18 December 2011 06:46:18PM *  2 points [-]

Let's step through the algorithm. If S proves that A()≠2, then A will immediately return 2. That means S has proved a false statement about the universe. If S doesn't prove that A()≠2, but proves that A()≠1, then A will immediately return 1, which also means S has proved a false statement about the universe.

So yeah, I guess calling it a violation of consistency was wrong. It's actually a violation of soundness: all axioms of S must be true statements about the universe. If S is sound, then the above reasoning shows that S cannot prove A()≠a for any a. (Also soundness implies consistency, which might explain why I made the mistake.) Thanks for pointing that out! Edited the post accordingly.

Comment author: [deleted] 18 December 2011 07:13:59PM 2 points [-]

Alright, that makes sense (if my reading of proofs seems uncharitable at times, it is because I know enough about logic to know when a statement doesn't make sense, but I don't know enough to tell what the statement wants to be).

Soundness seems like an interesting property to think about for a formal system. I am reminded of the bizarre systems you can get, e.g., by taking PA and adding the axiom "PA is inconsistent". This, if I recall correctly, is consistent provided PA itself is consistent, but (whether or not it's consistent) it definitely can't be sound.

Comment author: cousin_it 18 December 2011 07:17:35PM *  6 points [-]

if my reading of proofs seems uncharitable at times

I want people to read my math uncharitably and poke holes in it, otherwise I wouldn't post :-)

Comment author: DanArmak 18 December 2011 03:25:29PM 0 points [-]

Do we know in advance which questions are going to be answered? Do we know agent A from your OP isn't going to end up crying in step 2?

I apologize if I'm just asking about standard CS assumptions here. I have a CS bachelor's degree but I don't remember ever discussing programs that can call halting oracles. Either my memory is faulty (more likely) or I chose the wrong electives.

Comment author: cousin_it 18 December 2011 03:38:03PM *  4 points [-]

It's easy to write a universe program U that would make A cry. The post only proves that A won't end up crying for one specific U, and outlines an argument why it won't cry in some other similar problems.

The class of universes where A doesn't end up crying is supposed to roughly correspond to the informal class of "fair" decision problems where the agent's action is the only thing that determines the resulting utility, but I have no strong argument why it's a good formalization of that class.

One example of an "unfair" decision problem would be a universe that rewarded you for having a specific algorithm, rather than for returning a specific value. Such "unfair" problems can be made up to punish any decision theory you can come up with, so they're probably not a good test case.
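A hypothetical example of such an "unfair" universe, written so that it inspects the agent's source text instead of calling it; representing agents as source strings is an assumption of this sketch, not part of the original model.

```python
# Hypothetical "unfair" universe: it rewards the agent for *being* a
# specific algorithm rather than for the value it returns. The favoured
# source text is an arbitrary choice made for illustration.

FAVOURED_SOURCE = "def A(): return 1"

def U_unfair(agent_source):
    # the agent's action is never even consulted
    return 1000000 if agent_source == FAVOURED_SOURCE else 0
```

An agent with any other source text, even one computing the same output, gets nothing, so no decision theory wins here in general.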

Comment author: DanArmak 18 December 2011 04:17:13PM 1 point [-]

Thanks for explaining.

Comment author: whpearson 02 January 2012 02:34:09AM 0 points [-]

Is anyone at all working on classes of "unfair" problems, such as ones that give different utilities based upon the amount of time spent computing? Or ones that take into consideration any type of resource used to make that decision (energy or memory). This class seems important to me and less arbitrary than "unfair" problems that punish specific algorithms.

Comment author: cousin_it 02 January 2012 03:21:58PM *  1 point [-]

Wei Dai has a tentative decision theory that covers some of those cases. I didn't find it very convincing, but it's likely that I overlooked something. Any work on such problems would be very welcome, of course.

Comment author: whpearson 02 January 2012 04:28:01PM 0 points [-]

I'll have a think. An optimal decision maker for all scenarios seems impossible if your utility is reduced by an amount proportional to the time taken to make the decision ("solving death" has this structure: fewer people die if you solve it earlier). The best in general I can think of is an infinite table mapping scenarios to the decision computed by something like your UDT + oracle for that scenario. And this can be beaten in each individual scenario by a specialised algorithm for that scenario, that needs no lookup.

And it still has an infinite quantity which I don't like in my theories that I might want to connect to the real world one day (and requires an infinite amount of precomputation).

I wonder if there is a quality apart from strict optimality that we need to look for. Making the optimal decision in most problems (what is the correct weighting of scenarios)? Making the right decision eventually?

Anyway I'll think some more. It is definitely thornier and nastier than "fair" problems.

Comment author: cousin_it 31 March 2012 08:28:53AM *  0 points [-]

I recently made some progress on your question. Section 4 seems to be the most relevant.

Comment author: snarles 25 December 2011 11:10:06AM 1 point [-]

Would it be correct to say that S can't prove Lemma 2?

Comment author: cousin_it 25 December 2011 01:15:15PM 1 point [-]

Yes.

Comment author: Zetetic 18 December 2011 09:13:21PM *  1 point [-]

According to the first step of the algorithm, A will play chicken with the universe and return 1, making S inconsistent. So if S is consistent, that can't happen.

Is this right? I'm wondering if you're assuming soundness relative to the natural semantics about A, because in step one, it isn't clear that there is a contradiction in S rather than a failure of S to be sound with respect to the semantics it's supposed to model (what actions A can take and their utility). I might be confused, but wouldn't this entail contradiction of the soundness of S rather than entailing that S is inconsistent? S would only be inconsistent if it can prove both A() = a and A()≠a, but unless you have further hidden assumptions about S I don't see why A returning a would entail that S proves A() = a.

This is how I want to interpret this: S is some deduction system capable of talking about all actions A() can make, and proving some range of utility results about them. S is also consistent and sound.

Play chicken with the universe: if S proves that A()≠a for some action a, then return a.

If S proves for all a there is some u such that [A() = a ⇒ U() = u] , output argmax (a) else exit and output nothing.

Since proving A can't take an action a ( that is, A()≠a ) entails that S is not sound (because A will take such an action in step 1), S can't prove any such result. Also, since proving that an action has two distinct utility values leads to A≠a, the soundness and consistency of S entails that this can't happen. Does this seem right?

Also, step two seems too strong. Wouldn't it suffice to have it be:

1) For all actions a, if there is some u such that [A() = a ⇒ U() = u] and u > 0, (), else add [A() = a ⇒ U() = 0] to the axioms of S (only for the duration of this decision)

2) output argmax (a)

My thought is that there could be some possible actions the agent can take that might not have provable utility and it seems like you should assign an expected utility of 0 to them (no value being privileged, it averages out to 0), but if you can prove that at least one action has positive utility, then you maximize expected utility by choosing the one with the highest positive utility.

This is weaker than the current step two but still seems to have the desired effect. Does this hold water or am I missing something?
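Step 2 of this variant can be sketched with the same hypothetical provable() stub style as the post; the chicken step is unchanged and omitted here, and S is assumed to prove a utility for action 1 only.

```python
# Sketch of the proposed variant: actions whose utility S cannot pin down
# default to utility 0 instead of making the agent break down and cry.

PROVEN = {"A()=1 => U()=1000000"}  # suppose S proves nothing about action 2

def provable(stmt):
    return stmt in PROVEN  # stand-in for the provability oracle

def A_variant():
    actions = [1, 2]
    utility = {}
    for a in actions:
        for u in (1000, 1000000, 1001000):
            if provable("A()=%d => U()=%d" % (a, u)):
                utility[a] = u
                break
        else:
            utility[a] = 0  # no provable utility: assign expected utility 0
    return max(actions, key=lambda a: utility[a])
```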

Comment author: cousin_it 18 December 2011 09:22:12PM *  1 point [-]

I'm wondering if you're assuming soundness relative to the natural semantics about A

You're right. Misha already pointed that out and I edited the post.

For all actions a, if there is some u such that [A() = a ⇒ U() = u] and u > 0, else add [A() = a ⇒ U() = 0] to the axioms of S

Yeah, such a variation would work, but I'm uncomfortable calling A the optimal algorithm in a universe where some actions don't have provable utility. Such universes often arise from non-symmetric multiplayer games and I'd rather not insinuate that I have a good solution for those.

Comment author: Quill_McGee 08 June 2014 03:29:50PM 0 points [-]

"S proves that A()=1 ⇒ U()=42. But S also proves that A()=1 ⇒ U()=1000000, therefore S proves that A()≠1" I don't see how this follows. Perhaps it is because, if the system was sound, it would never prove more than one value for U() for a given a, therefore by the principle of explosion it proves A()≠1? But that doesn't seem to actually follow. I'm aware that this is an old post, but on the off chance that anyone ever actually sees this comment, help would be appreciated.

Comment author: Decius 29 August 2012 06:32:07AM 0 points [-]

Suppose S proves that A()≠1 and A()≠2. What does A return?

Comment author: Kindly 29 August 2012 12:28:20PM 1 point [-]

A will check for one of those first, and then return it. The order in which it checks 1 and 2 doesn't matter, because this can only happen if S is unsound.

Comment author: Decius 29 August 2012 08:40:04PM 0 points [-]

So, the primary point is to make it trivial to show that there is no proof in S regarding the output of A?

Comment author: Kindly 29 August 2012 09:14:01PM 1 point [-]

Exactly. We're worried about proving something like "A()=1 ⇒ U()=0" by proving "A()≠1", which would convince A() not to return 1. This avoids shortcuts of that nature.
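The chicken step being discussed can be sketched like this; `S_proves_not` is a hypothetical stand-in for the oracle query "does S prove A()≠a?":

```python
def chicken_step(actions, S_proves_not):
    # Play chicken with the universe: if S proves A() != a for some
    # action a, return that very action, which would make S unsound.
    for a in actions:
        if S_proves_not(a):
            return a
    return None  # no such proof exists; fall through to step 2
```

If S is sound, the loop never fires, and its only role is to block S from proving shortcuts like "A()≠1".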

Comment author: Decius 29 August 2012 10:27:01PM *  0 points [-]

So, in the example given, it is clear that A()=1. and also clear that S cannot prove that A()=1. What system are we using to show that A()=1, and why do we believe that that system is sound?

EDIT: More generally, is there any sound formal system T which can prove that A()=1 for all sound formal systems S that can prove that A()=1 ⇒ U()=1000000 and A()=2 ⇒ U()=1000?

If so, then T cannot prove that A()=1 ⇒ U()=1000000 and A()=2 ⇒ U()=1000, so why do we believe that? If not, why do we believe that A()=1?

Comment author: Kindly 29 August 2012 11:57:30PM *  1 point [-]

Presumably whichever system one typically uses to reason about the soundness of Peano arithmetic would suffice.

Considering that the algorithm involved is already impossible to implement by virtue of calling a halting oracle, I don't see what further harm it does to make the additional assumption that Peano arithmetic is sound.

Comment author: Decius 30 August 2012 03:01:08AM -1 points [-]

The provability oracle is only needed for the general-case implementation: if you have a proof that S does not prove A()≠n for any n, and the further proofs that A()=n ⇒ U()=u for each n, then the need for the oracle is satisfied. If you replace the oracle with a routine which sometimes breaks down and cries, this specific example might still work.

What I'm saying is that either there exists a sound system which can prove the three statements A()=1, A()=1 ⇒ U()=1000000 and A()=2 ⇒ U()=1000, or there does not exist such a sound system. If such a system exists, then it does not prove that A()=1 ⇒ A()≠2 (or any other value), and therefore does not contain Peano arithmetic. Since the proof here does make all three claims, the system used here either does not contain PA or is unsound. Since PA is assumed whenever numbers are used, the system is unsound.

The claim that the algorithm will do the right thing in Newcomb's problem is suspect, because any formal system that can use the proof provided is unsound.

Comment author: Kindly 30 August 2012 05:53:24PM *  0 points [-]

If such a system exists, then it does not prove that A()=1 ⇒ A()≠2 (or any other value)

Why not?

Comment author: Decius 30 August 2012 06:23:42PM 0 points [-]

Because such a system would prove that A()≠2, which would result in A() returning 2 in the first step, when that system is used as the system S. Since it proves something which is not true, it is unsound.

A sound system need not include the axioms of PA to include the axiom "Anything that PA proves is true".

Such a system T can be sound and still prove that A()=1, but cannot prove that A()=1 ⇒ A()≠2, because:

T does not contain the axiom A()=n ⇒ A()≠m for m≠n. Therefore A()=1 ⇒ A()≠2 is not provable in T.

PA does not prove (T proves A()=1 ⇒ A()≠2), and does not include the axiom "Anything that T proves is true." A()=1 is not provable in PA, and therefore A()≠2 is not provable in PA.

No sound system that can be used as S can prove that A()≠2: since U only requires that S be sound and able to prove A()=a ⇒ U()=u for all permissible a, no sound system can both prove A()=a ⇒ U()=u for all permissible a and prove A()≠2.

Comment author: Kindly 30 August 2012 06:32:47PM *  3 points [-]

Wait, no, no, no. The system in which we are working to analyze the algorithm A() and prove that it works correctly is not the same as the system S used by A()! Although S is not powerful enough to prove A()≠2 -- this would require that S is aware of its own soundness -- we can pick a system S that we know is sound, or can assume to be sound, such as Peano arithmetic, and based on that assumption we can prove that A()=1.

Comment author: Douglas_Knight 19 December 2011 01:17:30AM 0 points [-]

What about the prisoner's dilemma?
Here's a concrete example of a game between non-identical agents: agent A of this post, who maximizes outcome, and agent B, who cooperates iff the other player plays the same move.

You seem to imply that you consider this approach practically equivalent to your earlier approach with bounded proof length. Is that right?

But in your first post, you say you're not sure what happens with two player games, while in your second, you say "if agent A tries to reason about agent B in the same way, it will fail miserably." What changed your mind? Do you still hold this position? Did you consider this PD pairing back then?

Comment author: cousin_it 19 December 2011 04:25:11AM *  2 points [-]

This post says only that the result transfers to the symmetric PD, i.e. the case with identical agents. I still don't have any good solution for other cases.

Here's a concrete example of a game between non-identical agents: agent A of this post, who maximizes outcome, and agent B, who cooperates iff the other player plays the same move.

B cooperates iff S proves that A()=B(), right? In this case, if A and B are using the same S, I think A will cry and B will defect :-( Here's why:

1) S cannot prove any statement like "S cannot prove X", because that requires S to prove the consistency of S.

2) Therefore S cannot prove something like "S cannot prove that A doesn't cooperate".

3) Therefore S cannot prove that A won't play chicken on step 1.

4) Therefore S cannot prove that A()=B().

5) Therefore B will defect, and S cannot prove that.

6) Therefore A can't find a complete set of proofs saying all of A's actions imply B's defection.

7) If A can find a complete set of proofs saying one of A's actions implies B's cooperation, then A will do that action, making the counterfactual true. But B defects, so that can't happen.

8) By (6) and (7), the only remaining possibility is that A can't find a complete set of proofs at all, so it breaks down and cries.

I haven't checked the above argument very carefully, so it can contain many mistakes. Please recheck before accepting it!

ETA: Douglas found a fatal mistake in the argument, so don't bother checking :-)

Comment author: Douglas_Knight 19 December 2011 05:27:28AM 4 points [-]

I don't see how you got from 3) S cannot prove that A won't play chicken to 4) S cannot prove that A()=B(). That inference seems way too general; it should apply to any agent of this type. In particular, S certainly can prove that A()=A(). Perhaps 3 is supposed to break some symmetry between A and B? Perhaps I wasn't clear that I wanted B to play chicken, too. That restores the symmetry, if that's what the point of 4 was.

I also think that there is a subtle gap in 5 => 6. If S could prove that all of A's actions imply B's defection, then S could prove that B defects, hence contradiction. But A's decision doesn't need to examine all of A's actions: crying can be omitted. I doubt that this actually helps, though.

Comment author: cousin_it 19 December 2011 08:36:01PM *  5 points [-]

I can't solve your problem yet, but I found a cute lemma. Let P be the proposition "A()=a" where a is the first action inspected in step 1 of A's algorithm.

  1. (S⊢¬P)⇒P (by inspection of A)

  2. S⊢((S⊢¬P)⇒P) (S can prove 1)

  3. (S⊢(S⊢¬P))⇒(S⊢P) (unwrap 2)

  4. (S⊢¬P)⇒(S⊢(S⊢¬P)) (property of any nice S)

  5. (S⊢¬P)⇒(S⊢P) (combine 3 and 4)

  6. (S⊢¬P)⇒(S⊢(P∧¬P)) (rephrasing of 5)

  7. (S⊢¬P)⇒¬Con(S)

All the above steps can also be formalized within S, so each player knows that if any player plays chicken with the first inspected action, then S is inconsistent. The proof generalizes to the second inspected action, etc., by looking at the first one that yields a contradiction. But if S is inconsistent, then it will make all players play chicken. So if one player plays chicken, then all of them do, and that fact is provable within S.

Did you manage to make any progress?

Comment author: Nisan 29 January 2012 07:36:23AM 3 points [-]

The proof generalizes to the second inspected action, etc., by looking at the first one that yields a contradiction.

I tried this in the case that the output of A provably lies in the set {a,b}. I only managed to prove

(S⊢¬P)⇒¬Con(S+Con(S))

where P is the proposition "A()=b" where b is the second inspected action. But this still implies

if one player plays chicken, then all of them do, and that fact is provable within S.

Comment author: cousin_it 29 January 2012 11:18:43AM 0 points [-]

Thanks! I think you found a real hole and the conclusion is also wrong. Or at least I don't see how

(S⊢¬P)⇒¬Con(S+Con(S))

implies the conclusion.

Comment author: Nisan 29 January 2012 07:18:00PM 1 point [-]

The conclusions that I think I can draw are

¬(S⊢A()≠a)∧(S⊢A()≠b) ⇒ Con(S)∧¬Con(S+Con(S)) ⇒ A()=b

So if one player plays chicken on the second inspected action, then all of them do.

Comment author: cousin_it 29 January 2012 09:15:08PM 0 points [-]

I'm still not getting it. Can you explain the proof of the following:

Con(S)∧¬Con(S+Con(S)) ⇒ A()=b

Comment author: Nisan 30 January 2012 12:59:37AM 2 points [-]
  1. ¬Con(S+Con(S)) ⇒ S⊢¬Con(S)

  2. ⇒ S⊢Prv(A()≠a) (if S is inconsistent, then it proves anything)

  3. ⇒ S⊢A()=a (because A plays chicken)

  4. ⇒ S⊢A()≠b

  5. Con(S)∧¬Con(S+Con(S)) ⇒ Con(S)∧(S⊢A()≠b) (follows directly from 4)

  6. ⇒ (S⊬A()=b)∧(S⊢A()≠b) (a consistent theory can't prove a proposition and its negation)

  7. ⇒ (S⊬A()≠a)∧(S⊢A()≠b) (here I'm assuming that S⊢(A()=a∨A()=b). I don't know what to do if A can choose between more than two alternatives.)

  8. ⇒ A()=b (A plays chicken on the second inspected action)

Comment author: Douglas_Knight 19 December 2011 10:34:14PM 1 point [-]

Cool.

I had a lot of trouble reading this because in my mind ⇒ binds tighter than ⊢. When I figured it out, I was going to suggest that you use spaces to hint at parsing, but you already did. I don't know what would have helped.

Comment author: Douglas_Knight 20 December 2011 07:25:12PM 0 points [-]

Since we like symmetry, I'm going to change notation from A and B to I and O for "I" and "opponent." (or maybe "input" and "output")

We should be careful about the definition of B. Simply saying that it cooperates if I()=O() causes it to blow up against the defectbot. Instead, consider the propositions PC: I()=C ⇒ O()=C and PD: I()=D ⇒ O()=D. We really mean that B should cooperate if S proves P = PC∧PD. What if it doesn't? There are several potential agents: B1 defects if S doesn't prove P; B2 defects if S proves ¬P, but breaks down and cries if it is undecidable; B3 breaks down if either PC or PD is undecidable, but defects if they are both decidable and one is false. B3 sounds very similar to A and so I think that symmetry proves that they cooperate together. If we modified A not to require that every action had a provable utility, but only that one action had a utility provably as big as all others, then I think it would cooperate with B2.
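The B1/B2 distinction can be sketched concretely; `verdict` is a hypothetical oracle result saying whether S proves P, refutes it, or leaves it undecidable:

```python
# Sketch of the agent variants above. `verdict` stands in for a
# hypothetical oracle call returning 'proved', 'refuted', or
# 'undecidable' about P = PC /\ PD.
def B1(verdict):
    # B1 defects whenever S doesn't outright prove P.
    return 'C' if verdict == 'proved' else 'D'

def B2(verdict):
    # B2 defects only if S refutes P; it cries if P is undecidable.
    if verdict == 'undecidable':
        raise RuntimeError('break down and cry')
    return 'C' if verdict == 'proved' else 'D'
```

B3 would additionally distinguish which of PC, PD is undecidable; it is omitted here since its behavior otherwise matches B2.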

These examples increase my assessment of the possibility that A and B1 cooperate.

(I'm ignoring the stuff about playing chicken, because the comment I'm responding to seems to say I can.)

Comment author: cousin_it 22 December 2011 01:32:49PM *  0 points [-]

B3 sounds very similar to A and so I think that symmetry proves that they cooperate together. If we modified A not to require that every action had a provable utility, but only that one action had a utility provably as big as all others, then I think it would cooperate with B2.

I think your conclusions can be right, but the proofs are vague. Can you debug your reasoning like you debugged mine above?

Comment author: cousin_it 19 December 2011 05:33:12AM 1 point [-]

Nice catches! You're right on both points. I have to go to sleep now, will figure this out tomorrow.

Comment author: FAWS 19 December 2011 12:34:38AM *  0 points [-]

Wouldn't the same job be done by the agent using proper counterfactuals instead of logical ones, which seems like something that would also be needed for other purposes?

I don't know who (if anyone) has done any work on this, but when a human considers a counterfactual statement like "If Gore won in 2000" that is very underspecified, because the implicit assumption is to discard contradicting knowledge, but how to do that exactly is left open. Humans just know that they should assume something like "Bush didn't win Florida" instead of "266 > 271".

If an example agent needs to be able to use precisely defined proper counterfactuals I think it might be possible to do that with an ordering function for its current knowledge. The agent would start with the counterfactual under consideration, add items from its knowledge base in the order specified for that counterfactual, test for each item whether it can find a contradiction, and discard the current item whenever it finds a contradiction (from consideration for evaluating the counterfactual).
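A minimal sketch of that procedure, with `contradicts` standing in for the hypothetical (and in general undecidable) contradiction check:

```python
def evaluate_counterfactual(assumption, ordered_knowledge, contradicts):
    # Start from the counterfactual assumption, then add knowledge items
    # in the specified order, discarding any item that yields a
    # contradiction with what has been kept so far.
    kept = [assumption]
    for item in ordered_knowledge:
        if not contradicts(kept + [item]):
            kept.append(item)
    return kept
```

For the Gore example, the item "Bush won" would be discarded as contradicting the assumption, while later, compatible items survive.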

For the example I think the order would look like this: A=a, S, U except for A, the source code of A.

That would do much the same thing as "playing chicken with the universe" with respect of not being impressed by proofs about its output, no?

More generally I think the items would be further split up, particularly "A behaves like a program with this source code in all other cases" would come before "A behaves like a program with this source code in the case under consideration". Other instances of A would also have to be treated as instances of A and not as programs with the source code (i.e. statements like "A' has the same source code as A" would come before either of their source codes in the ordering).

Does that make sense?

Comment author: cousin_it 19 December 2011 04:51:59AM *  1 point [-]

The idea of using an ordering function for knowledge is new to me, thanks!

For the example I think the order would look like this: A=a, S, U except for A, the source code of A.

The hard part is getting "U except for A". Given the source code of U and the source code for A, we don't know how to "factor" one program by another (or equivalently, find all correlates of the agent in the universe). If we knew how to do that, it would help a lot with UDT in general.

Comment author: FAWS 19 December 2011 07:05:43AM 0 points [-]

I guess you would actually have various knowledge items about the world, some of them implying things about A, and any that in conjunction with the others so far cause a contradiction with A=a that the agent can find would be discarded. Maybe that already would be enough; I'm not sure.

Comment author: cousin_it 20 December 2011 06:17:25PM 0 points [-]

What considerations should be used to order the knowledge items?

Comment author: FAWS 20 December 2011 07:57:41PM *  0 points [-]

That's a really difficult question. It's hard to say what principles humans follow when evaluating counterfactuals, and even harder to say in how far that's a reasonable example to follow.

I think higher level observational laws should usually have a higher priority than the concrete data points they are based on, and all else equal they should be in descending order of generality and confidence. That the US president can veto US federal legislation and that the person who can veto US federal legislation is the same person as the commander in chief of the US military forces should both have a higher priority than that George W. Bush could veto US federal legislation.

It would also depend on what the counterfactual is used for. For counterfactuals concerning the past, timing would obviously be extremely important.

In the case of considering the counterfactual implications of a decision the agent makes you could try ascending order of strength as Bayesian evidence about the agent as a secondary criterion, maybe? Or perhaps instead ratio of that strength to general importance? (Which would probably require nested counterfactuals? Are we concerned with computability yet?)

EDIT: I think the knowledge items would have redundancy so that even if the agent can derive itself directly from the laws of physics and needs to reject (one of) them it can reconstruct almost normal physics from various observational laws. It also seems redundancy could reduce the importance of the initial order somewhat.

Comment author: shminux 18 December 2011 09:20:48PM 0 points [-]

Stupid question: A() always returning 1 manifestly solves the Newcomb problem, no fancy proofs required. What am I missing?

Comment author: cousin_it 18 December 2011 09:23:30PM *  6 points [-]

The implementation of A in the post works correctly for many different U's.

Comment author: gRR 10 March 2012 07:20:20PM *  -1 points [-]

Another possible modification of the algorithm:
1. Remove the playing chicken step.
2. Keep the step 2 - proving "A()=a ⇒ U()=u".
3. Change the choice rule to:
Return the action for which S does not prove that A()≠a, and which corresponds to the highest utility found on step 2 among all such actions.
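A sketch of this modified choice rule, with both oracle queries as hypothetical stand-ins:

```python
def A_choice(actions, S_proves_not, proved_utility):
    # S_proves_not(a): does S prove A() != a?
    # proved_utility(a): the u for which S proves [A()=a => U()=u] in step 2.
    candidates = [a for a in actions if not S_proves_not(a)]
    # Among the actions not ruled out by S, pick the highest proved utility.
    return max(candidates, key=proved_utility)
```

Note this silently assumes at least one action is not ruled out by S, which is where Nesov's objection below bites: S itself may decide the output.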

Comment author: Vladimir_Nesov 10 March 2012 07:52:29PM *  1 point [-]

Your scheme doesn't guarantee that all actions will be considered, or that a known collection of proofs of statements [A()=a ⇒ U()=u] will generate the action that agent actually chooses. If S proves that A()=a1, a1 is what your agent will choose, without any consequentialist reason for this choice apparent in the decision algorithm. You are letting S decide for you, implementing a kind of fatalist decision-making, asking "What do the laws of nature say I'm going to do?" instead of "What should I do?" The chicken rule (diagonal step) is what provides these guarantees (there are likely other ways to this same end).

Comment author: gRR 10 March 2012 08:23:13PM 0 points [-]

I visualized the process like this: S would start proving things. It will notice that "if S does not prove A()!=a AND S does not prove A()!=b AND AgentStep2Proves("A()=a => U()=u") AND AgentStep2Proves("A()=b => U()=w") AND u>w THEN A()!=b", so it would prove A()!=b. Etc.

Which is wrong since it makes S look on itself, which would make it inconsistent. Except it is inconsistent anyway...

On the other hand, this method still does not let the agent prove its decision, since it would make the step 2 explode. Damn, I'm stupid :(

But this should work. It's just TDT, I think. Does it have serious drawbacks that make UDT/ADT desirable?

Comment author: gRR 10 March 2012 05:45:15PM -1 points [-]

I still have doubts about the consistency of this architecture. What if the agent sees a copy of itself perform some action in a situation exactly equal to the situation in which the agent finds itself now. Would it not mean that the agent can now prove that it would perform the same action? (There would be a difference between the agent and the copy, but only an "additive" difference - the agent will have additional knowledge that the copy wouldn't - so whatever the copy proved, the agent must also be able to prove. [And this fact would be provable to the agent!]).

An example situation would be the Parfit's hitchhiker. If the system finds itself in town, saved by the driver, would it not be able to prove that it will cooperate?

In fact, the system has its own source code, so won't it directly simulate itself in any case, as a side-effect of running S? Guess it's just a standard proof why S is impossible...

Comment author: gRR 23 February 2012 02:12:19AM *  -1 points [-]

Is it possible to change the definition of A() to something like this:
Let UX(X) be the function that results from the source of U() if all calls "A()" were changed to "eval(X)".
For some action p, prove that for any possible agent X, UX(X) <= UX("return p").
return p.

Then the agent would be able to prove that it returns a specific action, and yet there would be no contradictions.

It seems to me this should be better than the "chicken rule" (it's really funny and clever, though! :)). But I'm getting no response to similar comments in other posts. Am I making some ridiculous mistake here?

Comment author: [deleted] 23 February 2012 06:57:23AM 3 points [-]

So, one of the classical examples of what we want this decision algorithm to do is the following: if it is playing a Prisoner's dilemma game with an identical algorithm, then it should cooperate. The informal reasoning (which this post is trying to formalize) is the following: the agent proves that the output of A() and of A'() are always equal, compares the values of "A() = A'() = cooperate" and "A() = A'() = defect" and decides to cooperate.

The key point is that the agent is not initially told which other agents are identical to it. So your suggestion fails to work because if we replace "A()" with "eval(cooperate)" or "eval(defect)" then we end up proving things about how the agent A'() plays against a CooperateBot or a DefectBot. We conclude that A'() defects against both of these, and end up defecting ourselves.

Comment author: gRR 10 March 2012 06:38:44PM *  0 points [-]

Ok, modification:

def U():
    box1 = 1000
    box2 = 0 if (A_1() == 2) else 1000000
    return box2 + (box1 if (A_2() == 2) else 0)

or

def U():
    if A_1() == 'C' and A_2() == 'C': return 2
    if A_1() == 'D' and A_2() == 'C': return 3
    ...
...

def A():
    forall i, try to prove A()==A_i()
    let UX(X) be the function that results from the source of U() if all calls "A_i()",
        for which the system was able to prove A()==A_i(), were changed to "eval(X)"
    for some action p, prove that for any possible action q, UX("return q") <= UX("return p")
    return p

What results should be equivalent to TDT, I think.

Comment author: [deleted] 14 March 2012 06:35:18PM 0 points [-]

If every step in that code is valid, then that would work. In my opinion the shakiest step in your algorithm is the "Forall i" in the first line of A(). I'll probably reason myself into a hole if I try to figure out whether or not it's valid.

Comment author: gRR 14 March 2012 07:07:22PM 0 points [-]

Well, at least in some cases (e.g., if A's code is equal to A_i's code) the proof is immediate. Proof is also possible for minor obvious modifications, like variable renaming, so the agent should still behave better than a quining agent for PD.

My goal for inventing this is to have a consistent agent that can know (can prove) its decision beforehand, at least sometimes, without getting into contradictions...

Comment author: gRR 23 February 2012 07:15:05AM *  0 points [-]

No, it wouldn't defect against itself, because UX will call the same eval("return p") twice:

UX(X) = { return PDPayoffMatrix[eval(X), eval(X)]; }

The payoff with p=cooperate is greater, so the agent will cooperate.
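Spelled out with concrete numbers (the payoff values here are the standard PD ones, an assumption not given in the thread):

```python
# Both occurrences of the agent in UX are replaced by the same action X,
# so only the diagonal of the payoff matrix is ever reached.
PDPayoffMatrix = {('C', 'C'): 2, ('C', 'D'): 0,
                  ('D', 'C'): 3, ('D', 'D'): 1}

def UX(X):
    return PDPayoffMatrix[(X, X)]

best = max(['C', 'D'], key=UX)
```

Since UX('C') exceeds UX('D') on the diagonal, the maximization picks cooperation.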

EDIT: Sorry, that was all wrong, and missed your point. Thinking now...