
djm comments on Can AIXI be trained to do anything a human can? - Less Wrong Discussion

Post author: Stuart_Armstrong, 20 October 2014 01:12PM




Comment author: djm, 21 October 2014 01:02:15PM, 1 point

> Therefore, they cannot identify "that computer running the code" with "me", and would cheerfully destroy themselves in the pursuit of their goals/reward.

I am curious as to why an AIXI-like entity would need to model itself (and all its possible calculations) in order to differentiate the code it is running from the external universe.
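For context, the standard formulation makes the dualism concrete: the agent and the environment are separate interacting processes, and AIXI's expectimax ranges over candidate environment programs q, none of which is required to contain the agent's own computation. A simplified sketch of Hutter's action-selection rule (notation abbreviated):

```latex
% Simplified sketch of AIXI's action selection at cycle k:
% pick the action maximising expected total reward up to horizon m,
% with the expectation taken over all environment programs q,
% each weighted by its algorithmic prior 2^{-l(q)}.
a_k \;=\; \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
  \bigl( r_k + \cdots + r_m \bigr)
  \sum_{q \,:\; U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
```

Nothing in this expression identifies "the machine computing this expectimax" with any object inside the modelled environments: the universal machine U runs the candidate environments q, not the agent. That is the sense in which AIXI cannot natively locate itself in the world, rather than a claim that it must enumerate all its own calculations.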

The human in charge of a reward channel could work for initial versions, but once the AI's intelligence grew, wouldn't it work out what was happening (as in the boxed-AI example, which is not likely to hold in the long term)?

Comment author: Stuart_Armstrong, 21 October 2014 02:25:49PM, 1 point

> I am curious as to why an AIXI-like entity would need to model itself (and all its possible calculations) in order to differentiate the code it is running from the external universe.

See other posts on this problem (some of them are linked to in the post above).

> The human in charge of a reward channel could work for initial versions, but once its intelligence grew wouldn't it know what was happening

At this point, the "hope" is that the AIXI will have formed sufficiently robust generalisations to keep it going.