Stuart_Armstrong comments on Where do selfish values come from? - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (57)
It makes sense, but the conclusion apparently depends on how AIXI's utility function is written. Assuming it knows Omega is trustworthy...
If AIXI's utility function says to maximise revenue in this timeline, it does not pay.
If it says to maximise revenue across all its copies in the multiverse, it does pay.
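To make the contrast concrete, here is a toy sketch - the deal structure and all the numbers are invented for illustration, not taken from the original post. Suppose paying costs this copy something but benefits the other copies Omega has made:

# Toy comparison of the two ways AIXI's utility function could be written.
# Hypothetical setup: Omega has made N copies of the agent; if this copy pays
# `cost`, each of the other N - 1 copies receives `benefit`.

N = 3          # hypothetical number of copies
cost = 1.0     # what this copy is asked to pay
benefit = 10.0 # what each *other* copy receives if this copy pays

# Utility 1: maximise revenue in this timeline only.
u_timeline_pay = -cost
u_timeline_refuse = 0.0

# Utility 2: maximise revenue summed across all copies.
u_copies_pay = -cost + (N - 1) * benefit
u_copies_refuse = 0.0

print("this-timeline utility pays:", u_timeline_pay > u_timeline_refuse)  # False
print("all-copies utility pays:", u_copies_pay > u_copies_refuse)         # True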
The first case - if I have analysed it correctly - is kind of problematic for AIXI. It would want to self-modify.
AIXI is incapable of understanding the concept of copies of itself. In fact, it's incapable of finding itself in the universe at all. Daniel Dewey did this in detail, but the simple version is that AIXI is an uncomputable algorithm that models the whole universe as computable.
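For reference, Hutter's AIXI (writing this from memory, so the notation is approximate) picks its next action by an expectimax over all computable environment programs q, weighted by their length:

$$ a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} (r_k + \cdots + r_m) \sum_{q \,:\, U(q, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)} $$

Every hypothesis q is a computable program, but the mixture over all of them plus the nested maximisation is itself uncomputable - so AIXI is not among the environments it can entertain, which is the sense in which it can't find itself.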
You've said that twice now, but where did Dewey do that?
I don't think he's published it yet; he did it in an internal FHI meeting. It's basically an extension of the fact that an uncomputable algorithm looking only at computable models can't find itself in them. Computable versions of AIXI (AIXItl, for example) have a similar problem: they cannot model themselves in a decent way, as they would have to be exponentially larger than themselves to do so. Shortcuts need to be added to the algorithm to deal with this.
Yes, more problems with my proposed fix. But is this even a problem in the first place? Can one uncomputable agent really predict the actions of another one? Besides, Omega can probably just take all the marbles and go home.
These esoteric problems apparently need rephrasing in more practical terms - but then they won't be problems with AIXI any more.