moridinamael comments on Would AIXI protect itself? - Less Wrong

8 Post author: Stuart_Armstrong 09 December 2011 12:29PM


Comment author: moridinamael 09 December 2011 06:47:53PM -2 points [-]

I agree with your assessments here. I think that AIXI's effectiveness could be greatly amplified by having a good teacher and being "raised" in a safe environment where it can be taught how to protect itself. Humans aren't born knowing not to play in traffic.

If AIXI were simply initialized and thrown into the world, it is more likely that it might accidentally damage itself, alter itself, or simply fail to protect itself from modification.

Comment author: orthonormal 10 December 2011 04:13:12AM 2 points [-]

You're not understanding how AIXI works.

Comment author: moridinamael 10 December 2011 04:49:13AM -1 points [-]

AIXI doesn't work. My point was that if it did work, it would need a lot of coddling. Someone would need to tend to its utility function continually to make sure it was doing what it was supposed to.

If AIXI were interacting dynamically with its environment to a sufficient degree*, then the selected hypothesis motivating AIXI's next action would come to contain some description of how AIXI is approaching the problem.

If AIXI is consistently making mistakes which would have been averted if it had possessed some model of itself at the time of making the mistake, then it is not selecting the best hypothesis, and it is not AIXI.
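For reference, the point about hypothesis selection can be made concrete with Hutter's standard formulation of AIXI's action rule: the agent does not maintain a single "self-model," but weights every environment program consistent with its history, favoring shorter programs, and picks the action maximizing expected future reward under that mixture:

$$
a_k \;=\; \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} \bigl(r_k + \cdots + r_m\bigr) \sum_{q \,:\, U(q,\,a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
$$

Here $U$ is a universal Turing machine, $q$ ranges over environment programs, $\ell(q)$ is program length, and $o_t, r_t$ are observations and rewards up to horizon $m$. Any regularity in the percept stream caused by the agent's own embodiment would, in principle, be captured by the dominant programs $q$, which is the sense in which the selected hypothesis "comes to contain" a description of AIXI itself.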

I think my use of words like "learning" suggested that I think of AIXI as a neural net or something. I get how AIXI works, but it's often hard to be both accurate and succinct when talking about complex ideas.

*for some value of "sufficient"