
Kaj_Sotala comments on Open thread, August 19-25, 2013 - Less Wrong Discussion

2 Post author: David_Gerard 19 August 2013 06:58AM




Comment author: Kaj_Sotala 20 August 2013 06:49:12AM *  5 points [-]

Artificial intelligence and Solomonoff induction: what to read?

Olle Häggström, Professor of Mathematical Statistics at Chalmers University of Technology, reads some of Marcus Hutter's work, comes away unimpressed, and asks for recommendations.

One concept that is sometimes claimed to be of central importance in contemporary AGI research is the so-called AIXI formalism. [...] In the presentation, Hutter advises us to consult his book Universal Artificial Intelligence. Before embarking on that, however, I decided to try one of the two papers that he also directs us to in the presentation, namely his A philosophical treatise of universal induction, coauthored with Samuel Rathmanner and published in the journal Entropy in 2011. After reading the paper, I have moved the reading of Hutter's book far down my list of priorities, because generalizing from the paper leads me to suspect that the book is not so good.

I find the paper bad. There is nothing wrong with the ambition - to sketch various approaches to induction from Epicurus and onwards, and to try to argue how it all culminates in the concept of Solomonoff induction. There is much to agree with in the paper, such as the untenability of relying on uniform priors and the limited interest of the so-called No Free Lunch Theorems (points I've actually made myself in a different setting). The authors' emphasis on the difficulty of defending induction without resorting to circularity (see the well-known anti-induction joke for a drastic illustration) is laudable. And it's a nice perspective to view Solomonoff's prior as a kind of compromise between Epicurus and Ockham, but does this particular point need to be made in quite so many words? Judging from the style of the paper, the word "philosophical" in the title seems to mean something like "characterized by lack of rigor and general verbosity".4 Here are some examples of my more specific complaints [...]

I still consider it plausible to think that Kolmogorov complexity and Solomonoff induction are relavant to AGI7 (as well as to statistical inference and the theory of science), but the experience of reading Uncertainty & Induction in AGI and A philosophical treatise of universal induction strongly suggests that Hutter's writings are not the place for me to go in order to learn more about this. But where, then? Can the readers of this blog offer any advice?
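The "compromise between Epicurus and Ockham" mentioned in the quoted excerpt can be illustrated with a toy sketch (my own illustration, not from the quoted post): like Epicurus, the Solomonoff prior keeps every hypothesis in play, but like Ockham, it weights a hypothesis of description length n by 2^-n, so shorter descriptions get exponentially more mass. Here the "hypotheses" are just stand-in binary strings; real Solomonoff induction ranges over all programs for a universal machine and is uncomputable.

```python
from fractions import Fraction

def solomonoff_style_prior(hypotheses):
    """Toy prior over binary-string 'programs': assign each hypothesis
    weight 2^-len(h), then normalize. Every hypothesis keeps nonzero
    probability (Epicurus), but shorter ones dominate (Ockham)."""
    weights = {h: Fraction(1, 2 ** len(h)) for h in hypotheses}
    total = sum(weights.values())
    return {h: w / total for h, w in weights.items()}

prior = solomonoff_style_prior(["0", "01", "0110"])
# Shorter "programs" receive exponentially more mass.
assert prior["0"] > prior["01"] > prior["0110"]
```

Normalizing over {1/2, 1/4, 1/16} gives masses 8/13, 4/13, and 1/13 respectively, so the one-bit hypothesis carries most of the prior despite nothing being ruled out.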

Comment author: Wei_Dai 27 August 2013 09:48:04AM 0 points [-]

My current thinking is that Kolmogorov complexity / Solomonoff induction is probably only a small piece of the AGI puzzle. It seems obvious to me that the ideas are relevant to AGI, but hard to tell in what way exactly. I think Hutter correctly recognized the relevance of the ideas, but tends to exaggerate their importance, and as Olle Häggström recognized, can't really back up his claims as to how central these ideas are.

If Olle wanted to become an FAI researcher, I'd suggest getting an overview of the AIT field from Li and Vitanyi's textbook. But judging from Google translations of his other blog entries, he seems more interested in what I called "Singularity Strategies": understanding just how Solomonoff Induction is relevant to AGI, in order to better understand AI risk and generally figure out how to best influence the Singularity in a positive direction. On that question, I'm afraid nobody has the answers at the moment.

(I wonder if we could convince Olle to join LW? I'd comment on some of Olle's posts but I'm really wary of personal blogs, which tend to disappear and take all of my comments with them.)

Comment author: gwern 27 August 2013 03:08:15PM 3 points [-]

I'd comment on some of Olle's posts but I'm really wary of personal blogs, which tend to disappear and take all of my comments with them.

Nothing stops you from setting up some program to archive URLs you visit, which will deal with most comments. I also tend to excerpt my best comments into Evernote, to make them easier to refind.
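A minimal sketch of the idea (my own, not gwern's setup), assuming the Internet Archive's public "save page now" endpoint, which snapshots a page when you fetch https://web.archive.org/save/ followed by the page's URL:

```python
import urllib.request

WAYBACK_SAVE = "https://web.archive.org/save/"

def wayback_save_url(url: str) -> str:
    """Build the Wayback Machine 'save page now' URL for a page."""
    return WAYBACK_SAVE + url

def archive(url: str) -> None:
    """Request a snapshot of the page (requires network access)."""
    urllib.request.urlopen(wayback_save_url(url))

print(wayback_save_url("http://haggstrom.blogspot.com/"))
```

Pointing such a call at each blog post you comment on preserves the thread even if the personal blog later disappears.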

Comment author: linkhyrule5 20 August 2013 08:44:27AM 0 points [-]

Random question - is AGI7 a typo, or a term?

Comment author: Manfred 20 August 2013 09:26:50AM *  6 points [-]

Open link, control+f "relavant to AGI". Get directed to "relavant to AGI<sup>7</sup>".

Footnote 7 is "7) I am not a computer scientist, so the following should perhaps be taken with a grain of salt. While I do think that computability and concepts derived from it such as Kolmogorov complexity may be relevant to AGI, I have the feeling that the somewhat more down-to-earth issue of computability in polynomial time is even more likely to be of crucial importance."