Wei_Dai comments on Does Solomonoff always win? - Less Wrong
This comment led me to the following tangential train of thought: AIXI seems to capture the essence of reinforcement learning, but it does not feel pain or pleasure. I do not feel morally compelled to help an AIXI-like agent (as opposed to a human) gain positive reinforcements and avoid negative reinforcements (unless it were part of a trade).
After writing the above, I found this old comment of yours, which seems closely related. But thinking about an AIXI-like agent that has only "wants" and no "likes", I feel myself being pulled towards what you called the "naive view". Do you have any further thoughts on this subject?