Stuart_Armstrong comments on Where do selfish values come from? - Less Wrong

Post author: Wei_Dai 18 November 2011 11:52PM 27 points

Comment author: Stuart_Armstrong 21 November 2011 10:30:09AM 3 points

AIXI is incapable of understanding the concept of copies of itself. In fact, it's incapable of finding itself in the universe at all. Daniel Dewey worked this out in detail, but the simple version is that AIXI is an uncomputable algorithm that models the whole universe as computable, so it can never appear inside any of its own world-models.
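
For reference, the standard AIXI action-selection rule (Hutter's expectimax formula; this sketch is my addition, not part of the comment) makes that concrete:

    a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} [r_k + \cdots + r_m] \sum_{q \,:\, U(q, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}

Every environment model q in the inner sum is a computable program for the universal machine U, weighted by 2^{-\ell(q)}; the expectimax over that infinite mixture is itself uncomputable, so AIXI's own decision procedure is never among the models q it considers.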

Comment author: gwern 21 November 2011 04:19:54PM 1 point

You've said that twice now, but where did Dewey do that?

Comment author: Stuart_Armstrong 22 November 2011 10:49:09AM 1 point

I don't think he's published it yet; he presented it at an internal FHI meeting. It's basically an extension of the fact that an uncomputable algorithm that considers only computable models can't find itself in them. Computable versions of AIXI (AIXItl, for example) have a similar problem: they cannot model themselves properly, as they would have to be exponentially larger than themselves to do so. Shortcuts need to be added to the algorithm to deal with this.
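
A toy numerical sketch of that exponential gap (my own illustration with made-up bounds, not Dewey's argument): an AIXItl-style agent searches all programs of length at most l, running each for at most t steps per cycle, so it costs roughly 2^l * t steps per cycle itself, and no program within its own (l, t) budget can reproduce it.

    # Toy illustration of the size/runtime gap for a bounded AIXI-style agent.
    # Assumed setup (not from the comment): the agent searches all programs of
    # length <= l bits, running each for at most t steps per interaction cycle.

    l = 20          # toy bound on model length, in bits
    t = 10_000      # toy per-model time budget, in steps

    # The agent must run every candidate model each cycle, so its own
    # per-cycle cost is roughly:
    agent_steps_per_cycle = (2 ** l) * t

    # For the agent to appear in its own model class, some program of length <= l
    # would have to reproduce its behaviour within t steps per cycle, but the
    # agent needs about 2**l times that budget, so any faithful self-model is
    # exponentially too expensive to be one of its own candidate models.
    print(f"agent cost / model budget = {agent_steps_per_cycle // t}")  # 2**20 = 1048576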

Comment author: timtyler 21 November 2011 07:23:47PM -1 points

Yes, more problems with my proposed fix. But is this even a problem in the first place? Can one uncomputable agent really predict the actions of another one? Besides, Omega can probably just take all the marbles and go home.

These esoteric problems apparently need rephrasing in more practical terms - but then they won't be problems with AIXI any more.