
timtyler comments on LW is to rationality as AIXI is to intelligence - Less Wrong Discussion

Post author: XiXiDu, 06 March 2011 08:24PM (2 points)

Comment author: timtyler 07 March 2011 07:43:03PM 0 points

Thanks to LW I learnt about Solomonoff induction. Great... fascinating! But wait, I also learnt that there is a slight problem: "the only problem with Solomonoff induction is that it is incomputable". Phew, thanks for wasting my time!

So: use a computable approximation. Not a huge deal, I figure.
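One way to cash out "use a computable approximation" is to shrink the uncomputable mixture over all programs down to a mixture over a small, enumerable hypothesis class, still weighted by description length. A toy sketch of that idea (the periodic-pattern hypothesis class and the crude 2^-(2*period) prior weights are illustrative choices for this example, not anything from the thread):

```python
def hypotheses(max_period=4):
    # Each hypothesis is a repeating binary pattern; its prior weight
    # shrinks with description length (here, crudely, 2^-(2*period)).
    for period in range(1, max_period + 1):
        for bits in range(2 ** period):
            pattern = tuple((bits >> i) & 1 for i in range(period))
            yield pattern, 2.0 ** -(2 * period)

def predict_next(observed):
    """Posterior-weighted probability that the next bit is 1."""
    num = den = 0.0
    for pattern, prior in hypotheses():
        # Likelihood is 1 if the pattern reproduces the data, else 0.
        if all(observed[i] == pattern[i % len(pattern)]
               for i in range(len(observed))):
            den += prior
            num += prior * pattern[len(observed) % len(pattern)]
    return num / den if den else 0.5

print(predict_next([0, 1, 0, 1, 0]))  # 1.0: every surviving pattern says 1
```

The structure mirrors the Solomonoff mixture (shorter descriptions get exponentially more prior mass; inconsistent hypotheses are discarded), but over a class small enough to enumerate.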

Comment author: Pavitra 08 March 2011 03:48:00AM 2 points

Beware of the representativeness heuristic. Basing your computable approximation on AIXI does not necessarily maximize its accuracy, any more than naive Bayes is inherently superior to its fellow non-Bayesian algorithms due to having "Bayes" in the name.

Comment author: timtyler 08 March 2011 07:39:01AM *  1 point

Using a computable approximation of Solomonoff induction (not AIXI, that's different!) is not some kind of option that can be avoided - modulo some comments about the true razor.

You can warn about its dangers - but we will plunge in anyway.

Comment author: Pavitra 08 March 2011 07:48:27AM 0 points

Ah, I have no idea why I said AIXI. Must have gotten my wires crossed. :|

This seems to leave open the question of what approximation to use, which is essentially the same question posed by the original post. In the real world, for practical purposes, what do you actually use?

Comment author: timtyler 08 March 2011 08:26:06AM *  0 points

Making a computable approximation of Solomonoff induction that can be used repeatedly is essentially the same problem as building a stream compressor.

There is quite a bit of existing work on that problem - and it is one of my current projects.
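The forecasting↔compression correspondence timtyler is appealing to here is the standard -log2 P identity: a predictor that assigns probability P to the next symbol yields a compressor (via arithmetic coding) spending -log2 P bits on it, and conversely. A minimal illustration — the two toy predictors are invented for this example:

```python
import math

def code_length_bits(predict, sequence):
    """Total ideal code length when each symbol is coded with -log2 P."""
    total = 0.0
    for i, symbol in enumerate(sequence):
        p = predict(sequence[:i])          # P(next = 1 | context)
        p_symbol = p if symbol == 1 else 1.0 - p
        total += -math.log2(p_symbol)
    return total

uniform = lambda ctx: 0.5                  # knows nothing: 1 bit/symbol
biased  = lambda ctx: 0.9                  # expects mostly 1s

seq = [1] * 9 + [0]
print(code_length_bits(uniform, seq))      # 10.0 bits
print(code_length_bits(biased, seq))       # fewer bits on this sequence
```

The better predictor compresses better, and any compressor's per-symbol code lengths can be read back as predictions — which is why building one problem solves the other.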

Comment author: Pavitra 08 March 2011 05:55:02PM 1 point

Comment author: timtyler 08 March 2011 09:23:48PM 0 points

I don't understand the question. Can you explain what was wrong with the answer I just gave?

Comment author: Pavitra 08 March 2011 09:27:26PM 2 points

The question is: please recommend a model of rationality that a human can actually use in the real world. It's not clear to me in practice how I would use, say, gzip to help make predictions.

Comment author: timtyler 08 March 2011 09:41:11PM *  0 points

Right, well, the link between forecasting and compression was gone over in this previously-supplied link. See also the other introductory material on that site:

http://matchingpennies.com/machine_forecasting/

http://matchingpennies.com/introduction/

http://matchingpennies.com/sequence_prediction/

If you want to hear something similar from someone else, perhaps try:

http://www.mattmahoney.net/dc/rationale.html

Comment author: Pavitra 08 March 2011 09:58:00PM 1 point

I understand the theoretical connection. I want a real-world example of how this theoretical result could be applied.
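As a concrete stand-in for the answer being asked for here: an off-the-shelf compressor can be pressed into service as a crude predictor by asking which candidate continuation it encodes most cheaply on top of the observed history. A rough sketch using zlib (the codec underlying gzip); the data and candidates are invented for the example, and purpose-built sequence predictors do this far more carefully:

```python
import zlib

def compressed_len(data: bytes) -> int:
    return len(zlib.compress(data, 9))

def best_continuation(context: bytes, candidates):
    # Prefer the continuation that adds the least new information
    # on top of the context, as measured by the compressor.
    return min(candidates, key=lambda c: compressed_len(context + c))

context = b"01" * 64                      # a strongly patterned history
continues = b"01" * 8                     # keeps the pattern going
breaks = bytes(range(16))                 # 16 novel byte values

print(best_continuation(context, [continues, breaks]) == continues)
```

The compressor has, in effect, a model of the context's regularities, and the continuation it finds cheapest is its "prediction" — the same idea that the arithmetic-coding identity makes exact.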