
ChristianKl comments on Open thread, Dec. 21 - Dec. 27, 2015 - Less Wrong Discussion

2 Post author: MrMind 21 December 2015 07:56AM




Comment author: MrMind 24 December 2015 10:24:27AM *  1 point [-]

jacob_cannell above seems to think it is very important for physicists to know about Solomonoff induction.

I think a more charitable reading would go like this: being smarter doesn't necessarily mean that you know everything there is to know, nor that you are more rational than other people. Since being rational and knowing about Bayesian epistemology is important in every field of science, physicists should be motivated to learn this stuff. I don't think he was suggesting that French pastries are literally useful to them.

Solomonoff induction is one of those ideas that keeps circulating here, for reasons that escape me.

Well, LW was born as a forum about artificial intelligence. Solomonoff induction is like an ideal engine for generalized intelligence, which is very cool!

Bayesian methods didn't save Jaynes from being terminally confused about causality and the Bell inequalities.

That's unfortunate, but we cannot ask anyone, even a genius, to transcend their time. Leonardo da Vinci held some beliefs that look ridiculous by our standards, just like Ramanujan or Einstein. I'm not implying that Jaynes was a genius of that caliber; I would ascribe that status more to Laplace. On the 'bright' side, in our time nobody knows how to reconcile epistemic probability and quantum causality :)

Comment author: ChristianKl 24 December 2015 10:41:12AM 2 points [-]

Solomonoff induction is like an ideal engine for generalized intelligence

That seems to be a pretty big claim. Can you articulate why you believe it to be true?

Comment author: MrMind 28 December 2015 08:30:24AM 1 point [-]

Because AIXI is the first complete mathematical model of a general AI, and it is based on Solomonoff induction.
Also, computable approximations to the Solomonoff prior have been used to teach small AIs to play video games unsupervised.
So, yeah.
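To make the idea of "computable approximation to the Solomonoff prior" concrete, here is a toy sketch (not from the thread): instead of a universal machine, the hypothesis class is just "the data is some bit pattern repeated forever," with each pattern weighted by 2^-length as a crude stand-in for program length. Prediction is then the posterior-weighted vote of the hypotheses consistent with what has been seen, which is the Solomonoff recipe in miniature.

```python
from itertools import product

def hypotheses(max_len):
    # Each hypothesis: a bit pattern repeated forever, weighted 2^-length.
    # (A stand-in for "programs" on a universal machine; real Solomonoff
    # induction enumerates programs of a prefix universal Turing machine.)
    for n in range(1, max_len + 1):
        for bits in product("01", repeat=n):
            yield "".join(bits), 2.0 ** -n

def predict_next(observed, max_len=8):
    # Posterior-weighted vote over hypotheses consistent with the data.
    weights = {"0": 0.0, "1": 0.0}
    for pattern, w in hypotheses(max_len):
        stream = pattern * (len(observed) // len(pattern) + 2)
        if stream.startswith(observed):
            weights[stream[len(observed)]] += w
    total = weights["0"] + weights["1"]
    return {b: w / total for b, w in weights.items()}

# Short patterns dominate the prior, so "01 repeating" wins by a wide margin.
print(predict_next("010101"))
```

Because the weight halves with each extra bit of "program," the shortest consistent explanation ("01" repeating) carries most of the posterior mass, which is the Occam-like behavior the prior is designed to produce.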

Comment author: jacob_cannell 20 January 2016 08:49:00PM *  0 points [-]

As far as I am aware, Solomonoff induction describes the singularly correct way to do statistical inference in the limit of infinite compute. (It computes generalized/full Bayesian inference.)
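For reference, the standard formulation behind this claim (not spelled out in the thread): the Solomonoff prior assigns to a bit string $x$ the total weight of all programs $p$ for a prefix universal machine $U$ whose output starts with $x$, and prediction is then just the Bayesian posterior predictive.

```latex
M(x) \;=\; \sum_{p \,:\, U(p) = x*} 2^{-|p|},
\qquad
M(x_{n+1} = b \mid x_{1:n}) \;=\; \frac{M(x_{1:n}\, b)}{M(x_{1:n})}.
```

Here $x*$ means "any string extending $x$" and $|p|$ is the program's length in bits; the $2^{-|p|}$ weighting is what builds the Occam-style preference for short programs into the prior.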

All of AI can be reduced to universal inference, so understanding how to do that optimally with infinite compute perhaps helps one think more clearly about how practical, efficient inference algorithms can exploit various structural regularities to approximate the ideal using vastly less compute.