Wei_Dai comments on Where do selfish values come from? - Less Wrong

27 Post author: Wei_Dai 18 November 2011 11:52PM


Comment author: Wei_Dai 19 November 2011 07:13:45AM 3 points [-]

Why does AIXI refuse to pay in CM?

To make things easier to analyze, consider an AIXI variant where we replace the universal prior with a prior that assigns .5 probability to each of just two possible environments: one where Omega's coin lands heads, and one where it lands tails. Once this AIXI variant is told that the coin landed tails, it updates its probability distribution and now assigns 1 to the second environment, and its expected utility computation now says that "not pay" maximizes EU.
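A toy sketch of the computation (not Hutter's formal AIXI; the payoffs are assumptions for illustration: Omega pays $10000 on heads iff the agent would pay $100 on tails):

```python
# Two-environment toy model of the counterfactual mugging.
PRIOR = {"heads": 0.5, "tails": 0.5}

def payoff(env, pays_on_tails):
    """Assumed payoffs: Omega rewards heads iff the agent would pay on tails."""
    if env == "heads":
        return 10000 if pays_on_tails else 0
    return -100 if pays_on_tails else 0

def expected_utility(dist, pays_on_tails):
    return sum(p * payoff(env, pays_on_tails) for env, p in dist.items())

# Before observing the coin, committing to pay maximizes EU: 4950 > 0.
assert expected_utility(PRIOR, True) > expected_utility(PRIOR, False)

# After observing "tails", the posterior collapses to the tails-environment,
# and "not pay" now maximizes EU: 0 > -100.
posterior = {"tails": 1.0}
assert expected_utility(posterior, False) > expected_utility(posterior, True)
```
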

Does that make sense?

Comment author: Vladimir_Nesov 20 November 2011 06:28:40PM *  0 points [-]

Does that make sense?

It used to, as Tim notes, but I'm not so sure now. AIXI works with its distribution over programs and sequences of observations, not with states of a world and its properties. If presented with a sequence of observations generated by a program, it quickly figures out what the following observations are, but it's more tricky here.

With other types of agents, we usually need to stipulate that the problem statement is somehow made clear to the agent. The way in which this could be achieved is not specified, and it seems very difficult to arrange by presenting an actual sequence of observations. So the shortcut is to draw the problem "directly" on the agent's mind in terms of the agent's ontology, and usually this is possible in a moderately natural way. This all takes place apart from the agent observing the state of the coin.

However in case of AIXI, it's not as clear how the elements of the problem setting should be expressed in terms of its ontology. Basically, we have two worlds corresponding to the different coin states, which could for simplicity be assumed to be generated by two programs. The first idea is to identify the programs generating these worlds with relevant AIXI's hypotheses, so that observing "tails" excludes the "heads"-programs, and therefore the "heads"-world, from consideration.

But there are many possible "tails"-programs, and AIXI's response depends on their distribution. For example, the choice of a particular "tails"-program could represent the state of other worlds. What does it say about this distribution that the problem statement was properly explained to the AIXI agent? It must necessarily be more than just observing "tails", the same as for other types of agents (if a coin is merely tossed and lands "tails", that observation alone doesn't incite me to pay up). Perhaps "tails"-programs that properly model CM also imply paying the mugger.
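To illustrate the point (with entirely made-up weights and utilities): AIXI's posterior after observing "tails" is a mixture over the many programs that output "tails", and whether paying looks good depends on the relative weight of those programs, not on the coin state alone.

```python
# Hypothetical program hypotheses: (prior weight, observation emitted,
# utility of paying the mugger if that program generates the world).
programs = [
    (0.25, "heads", 0),      # excluded once "tails" is observed
    (0.15, "tails", -100),   # a "tails"-program that doesn't model CM
    (0.10, "tails", 9900),   # a "tails"-program that does model CM
]

obs = "tails"
consistent = [(w, u) for (w, o, u) in programs if o == obs]
total = sum(w for w, _ in consistent)
posterior = [(w / total, u) for w, u in consistent]   # renormalize

# EU of paying is a weighted average over the surviving "tails"-programs;
# shifting weight between them flips the sign of this quantity.
eu_pay = sum(p * u for p, u in posterior)
```
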

Comment author: lessdazed 21 November 2011 01:24:18AM 0 points [-]

But there are many possible "tails"-programs, and AIXI's response depends on their distribution.

I don't understand. Isn't the biggest missing piece an AIXI's precise utility function, rather than its uncertainty?

Comment author: timtyler 19 November 2011 01:18:14PM *  0 points [-]

It makes sense, but the conclusion apparently depends on how AIXI's utility function is written. Assuming it knows Omega is trustworthy...

  • If AIXI's utility function says to maximise revenue in this timeline, it does not pay.

  • If it says to maximise revenue across all its copies in the multiverse, it does pay.
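The two cases above can be sketched as follows (assumed payoffs, and an informal reading of "copies in the multiverse" as the two equally weighted coin branches):

```python
def eu_this_timeline(pays):
    """After "tails" is observed, only the tails-branch counts."""
    return -100 if pays else 0

def eu_all_copies(pays):
    """Both coin branches weighted equally, as if the heads-copy still counts."""
    return 0.5 * (10000 if pays else 0) + 0.5 * (-100 if pays else 0)

assert eu_this_timeline(False) > eu_this_timeline(True)  # does not pay
assert eu_all_copies(True) > eu_all_copies(False)        # does pay
```
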

The first case - if I have analysed it correctly - is kind of problematic for AIXI: it would want to self-modify...

Comment author: Stuart_Armstrong 21 November 2011 10:30:09AM 3 points [-]

AIXI is incapable of understanding the concept of copies of itself. In fact, it's incapable of finding itself in the universe at all. Daniel Dewey worked this out in detail, but the simple version is that AIXI is an uncomputable algorithm that models the whole universe as computable.

Comment author: gwern 21 November 2011 04:19:54PM 1 point [-]

You've said that twice now, but where did Dewey do that?

Comment author: Stuart_Armstrong 22 November 2011 10:49:09AM 1 point [-]

I don't think he's published it yet; he did it in an internal FHI meeting. It's basically an extension of the fact that an uncomputable algorithm considering only computable models can't find itself in them. Computable versions of AIXI (AIXItl, for example) have a similar problem: they cannot model themselves in a decent way, as they would have to be exponentially larger than themselves to do so. Shortcuts need to be added to the algorithm to deal with this.

Comment author: timtyler 21 November 2011 07:23:47PM -1 points [-]

Yes, more problems with my proposed fix. But is this even a problem in the first place? Can one uncomputable agent really predict the actions of another one? Besides, Omega can probably just take all the marbles and go home.

These esoteric problems apparently need rephrasing in more practical terms - but then they won't be problems with AIXI any more.

Comment author: endoself 19 November 2011 06:27:05PM 0 points [-]

If it says to maximise revenue across all its copies in the multiverse, it should pay.

If there is no multiverse and the coin flip is simply deterministic - perhaps based on the parity of the quadrillionth digit of pi - there is no version of AIXI that will benefit from paying the mugger, but it is still advantageous to precommit to doing so. AIXI, however, is designed to rule out possibilities once they contradict its observations, so it does not act correctly here.

Comment author: timtyler 19 November 2011 06:58:39PM *  0 points [-]

If there is no multiverse and the coin flip is simply deterministic - perhaps based on the parity of the quadrillionth digit of pi - there is no version of AIXI that will benefit from paying the mugger, but it is still advantageous to precommit to doing so.

That seems to be a pretty counter-factual premise, though. There's pretty good evidence for a multiverse, and you could hack AIXI to do the "right" thing - by giving it a "multiverse-aware" environment and utility function.

Comment author: endoself 19 November 2011 07:35:40PM 1 point [-]

"No multiverse" wasn't the best way to put it. Even in a multiverse, there is only one value of the quadrillionth digit of pi, so modifying AIXI to account for the multiverse does not provide a solution here, since we get the same result as in a single universe.

Comment author: timtyler 19 November 2011 07:50:28PM 0 points [-]

I don't think multiverse theory works like that. In one universe it will be the 1001st digit, in another it will be the 1002nd digit. There is no multiverse theory where some agent is presented with a problem involving the quadrillionth digit of pi in all the universes.

Comment author: endoself 19 November 2011 08:13:09PM *  1 point [-]

Once AIXI is told that the coin flip will be over the quadrillionth digit of pi, all other scenarios contradict its observations, so they are ruled out and the utility conditional on them stops being taken into account.

Comment author: timtyler 20 November 2011 12:34:22AM 0 points [-]

Possibly. If that turns out to be a flaw, then AIXI may need more "adjustment" than just expanding its environment and utility function to include the multiverse.

Comment author: endoself 20 November 2011 01:50:02AM 0 points [-]

Possibly.

I'm not sure what you mean. Are you saying that you still ascribe significant probability to AIXI paying the mugger?

Comment author: timtyler 20 November 2011 12:38:55PM 3 points [-]

Uncomputable AIXI being "out-thought" by uncomputable Omega now seems like a fairly hypothetical situation in the first place. I don't pretend to know what would happen - or even if the question is really meaningful.