Viliam_Bur comments on Failures of an embodied AIXI - LessWrong

Post author: So8res 15 June 2014 06:29PM


Comment author: Viliam_Bur 10 June 2014 07:37:28AM

This is the place where an equation could be more convincing than verbal reasoning.

To be honest, I probably wouldn't understand the equation, so someone else would have to check it. But I feel that this is one of those situations (similar to the group selectionism example) where humans trust their reasonable-sounding words, but the math could show otherwise.

I am not saying that you are wrong; at this moment I am just confused. Maybe it's obvious and it's my ignorance. I don't know, and probably won't spend enough time to find out, so it's unfair to demand an answer to my question. But I think the advantage of AIXI is that it is a relatively simple (well, relative to other AIs) mathematical model, so claims about what this AI can or cannot do should be accompanied by equations. (And if I am completely wrong and the answer is really obvious, then perhaps the equation shouldn't be complicated.) Also, sometimes the devil is in the details, and writing the equation could make those details explicit.

Comment author: AlexMennen 10 June 2014 08:33:43PM

Just look at the AIXI equation itself:

$$a_k := \arg\max_{a_k} \sum_{o_k r_k} \ldots \max_{a_m} \sum_{o_m r_m} \big[ r_k + \cdots + r_m \big] \sum_{q \,:\, U(q, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}$$

The $o$ (observations) and $r$ (rewards) are the signals sent from the environment to AIXI, and the $a$ (actions) are AIXI's outputs. Notice that future $a$ are predicted by picking the one that would maximize expected reward through timestep $m$, just like AIXI does, and there is no summation over possible ways that the environment could make AIXI output actions computed some other way, like there is for the $o$ and $r$.
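To make that asymmetry concrete, here is a minimal toy sketch of the expectimax structure of the expression; it is not AIXI itself (the Solomonoff mixture over programs is uncomputable), and the names `env_model`, `ACTIONS`, and `PERCEPTS` are hypothetical stand-ins. The point it illustrates is that the agent's own actions are handled by a max, while the environment's observation/reward responses are summed over, weighted by the model.

```python
# Toy finite-horizon expectimax: max over actions, expectation over percepts.
# `env_model` stands in for the Solomonoff mixture sum_q 2^{-l(q)} in AIXI.

ACTIONS = [0, 1]                      # toy action alphabet
PERCEPTS = [(0, 0.0), (1, 1.0)]       # toy (observation, reward) pairs

def env_model(history, action, percept):
    """Hypothetical model: probability of `percept` given history and action."""
    return 0.5  # uniform toy environment

def value(history, steps_left):
    """Expected reward of the best plan: max over actions, sum over percepts."""
    if steps_left == 0:
        return 0.0
    best = float("-inf")
    for a in ACTIONS:                 # max over the agent's own action...
        expected = 0.0
        for (o, r) in PERCEPTS:       # ...but a sum over environment responses
            p = env_model(history, a, (o, r))
            expected += p * (r + value(history + [(a, o, r)], steps_left - 1))
        best = max(best, expected)
    return best

def best_action(history, steps_left):
    """Pick the first action of the reward-maximizing plan (the arg max)."""
    return max(ACTIONS, key=lambda a: sum(
        env_model(history, a, (o, r)) * (r + value(history + [(a, o, r)], steps_left - 1))
        for (o, r) in PERCEPTS))

print(best_action([], steps_left=2))
```

There is no analogous sum over "ways the agent might have computed its action" anywhere in this recursion, which is the structural point being made about the equation above.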