paulfchristiano comments on AIXI and Existential Despair - Less Wrong

13 Post author: paulfchristiano 08 December 2011 08:03PM


Comment author: paulfchristiano 09 December 2011 12:26:56AM *  2 points [-]

AIXI learns a function f : outputs -> inputs, modeling the environment's response to AIXI's outputs.

Let y be the output of A. Then we have one function f1(y) which uses y to help model the world, and another function f2(y) which ignores y and essentially recomputes it from the environment. These two models make identical predictions when applied to the actual sequence of outputs of the algorithm, but they make different predictions about the counterfactuals which are essential to determining the agent's behavior. If you are using f1, as AIXI is intended to, then you do a sane thing if you rely on causal control. If you are using f2, as AIXI probably actually would, then you have no causal control over reality, and so you go catatonic if you rely on causal control.
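As a toy illustration of the f1/f2 distinction (all names here are my own invention, not from AIXI's definition): f1 treats the agent's output y as a free variable the environment responds to, while f2 discards y and re-derives the agent's output from a simulation of the agent inside the environment. The two agree on the trajectory the agent actually produces but diverge on counterfactual outputs.

```python
# Toy sketch with assumed names: two environment models of the
# response to an agent's output y.

def agent_policy(observation):
    """A deterministic toy agent: it echoes its observation."""
    return observation

def f1(y, observation):
    # Model 1: y has causal influence -- the environment responds
    # to whatever the agent actually outputs.
    return y + 1

def f2(y, observation):
    # Model 2: ignores y and recomputes the agent's output from the
    # environment's state (a simulation of the agent itself).
    return agent_policy(observation) + 1

obs = 5
actual_y = agent_policy(obs)

# On the actual output the two models predict identically:
assert f1(actual_y, obs) == f2(actual_y, obs)

# On a counterfactual output they diverge: f2 still predicts the
# response to the *recomputed* output, so under f2 the agent's
# choice of y makes no difference to the predicted world.
counterfactual_y = 99
assert f1(counterfactual_y, obs) == 100
assert f2(counterfactual_y, obs) == 6
```

Under f2, varying y never changes the prediction, which is the sense in which an agent using f2 sees itself as having no causal control.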

I'll try to make this a little more clear.