
Viliam comments on Versions of AIXI can be arbitrarily stupid - Less Wrong Discussion

Post author: Stuart_Armstrong, 10 August 2015 01:23PM




Comment author: Viliam, 11 August 2015 07:23:27AM, 3 points

So essentially, AIXI will avoid experiments where it has a high prior probability that the punishment could be astronomical (greater than any benefit gained by learning). And since it never does experiments in that area, it can never update.
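The mechanism can be seen in a toy sketch (this is a Bayesian caricature, not actual AIXI; all names and numbers here are illustrative assumptions): if the prior-weighted expected value of experimenting is negative, the agent abstains, and because abstaining yields no observation, the belief never moves off the prior.

```python
# Illustrative assumptions, not anything from the post:
prior_punishment = 0.01   # prior probability the experiment is catastrophic
punishment = -1e9         # astronomical loss if it is
learning_benefit = 100    # value of the information gained otherwise

def expected_value_experiment(p):
    """Expected value of experimenting, given belief p in catastrophe."""
    return p * punishment + (1 - p) * learning_benefit

belief = prior_punishment
experiments_run = 0
for step in range(1000):
    if expected_value_experiment(belief) > 0:  # EV of abstaining is 0
        experiments_run += 1
        # ...would observe the outcome and update `belief` here...
    # otherwise: abstain, so there is no observation and no Bayesian update

print(experiments_run)  # 0 -- the belief stays at the prior forever
```

Even if the true punishment probability were exactly zero, this agent never collects the evidence that would tell it so.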

If I imagine the same with humans, it seems like both a good and a bad thing. Good: it would make a human unlikely to experiment with suicide. Bad: it would make a human unlikely to experiment with abandoning religion, or with doing anything else guarded by a scary taboo.

Or perhaps an analogy would be running LHC-like experiments which (some people believe) have a tiny chance of destroying the universe. Maybe the chance is extremely small, but if we keep doing more and more extreme experiments, it seems like only a question of time until we find something that "works". On the other hand, this analogy has a weakness: we can make educated guesses about the laws of physics by doing things other than potentially universe-destroying experiments, while in the example in the article, the AIXI has no other source to learn from.