Cyan comments on Failures of an embodied AIXI - LessWrong

Post author: So8res 15 June 2014 06:29PM




Comment author: Cyan 10 June 2014 11:42:15AM

I'm not trying to claim that AIXI is a good model in which to explore self-modification. My issue isn't on the agent-y side at all -- it's on the learning side. It has been put forward that there are facts about the world that AIXI is incapable of learning, even though humans are quite capable of learning them. (I'm assuming here that the environment is sufficiently information-rich that these facts are within reach.) To be more specific, the claim is that humans can learn facts about the observable universe that Solomonoff induction can't. Since Solomonoff induction's mixture dominates every computable predictor -- anything a computable learner can learn from observation, the mixture can learn too -- this claim seems to imply that human learning is not computable, and that implication makes my brain emit, "Error! Error! Does not compute!"
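The dominance argument gestured at above can be illustrated with a finite toy version: a Bayesian mixture over a handful of predictors. Its cumulative log-loss on any bit sequence exceeds the best predictor's by at most log(1/prior weight) -- the finite analogue of Solomonoff induction's dominance over every computable predictor. This is just an illustrative sketch (the hypothesis names and setup are mine, not from the comment), not AIXI itself:

```python
import math

# Stand-ins for "computable predictors": each maps a bit history to
# P(next bit = 1). Names and numbers are purely illustrative.
hypotheses = {
    "mostly_0": lambda history: 0.1,
    "mostly_1": lambda history: 0.9,
    "alternate": lambda history: 0.9 if len(history) % 2 == 0 else 0.1,
}

def log_loss(p, bit):
    """Negative log-probability assigned to the observed bit."""
    return -math.log(p if bit == 1 else 1.0 - p)

def run(sequence):
    # Uniform prior over the finite hypothesis class.
    posterior = {name: 1.0 / len(hypotheses) for name in hypotheses}
    mix_loss = 0.0
    expert_loss = {name: 0.0 for name in hypotheses}
    history = []
    for bit in sequence:
        preds = {name: h(history) for name, h in hypotheses.items()}
        # Mixture prediction: posterior-weighted average of the experts.
        p_mix = sum(posterior[name] * preds[name] for name in hypotheses)
        mix_loss += log_loss(p_mix, bit)
        for name in hypotheses:
            expert_loss[name] += log_loss(preds[name], bit)
        # Bayesian update of the posterior weights on the observed bit.
        for name in hypotheses:
            posterior[name] *= preds[name] if bit == 1 else 1.0 - preds[name]
        total = sum(posterior.values())
        for name in hypotheses:
            posterior[name] /= total
        history.append(bit)
    return mix_loss, min(expert_loss.values())

mix_loss, best_loss = run([1, 0, 1, 0, 1, 0, 1, 0])
# Dominance bound: mixture loss <= best expert's loss + log(#hypotheses),
# i.e. the mixture learns anything any of its component predictors can.
assert mix_loss <= best_loss + math.log(len(hypotheses)) + 1e-9
```

Solomonoff induction is the limiting case where the mixture ranges over all computable predictors, weighted by description length; the same bound then applies to any computable learner, which is why a computable human learner outperforming it would be surprising.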