cousin_it comments on Shane Legg's Thesis: Machine Superintelligence, Opinions? - Less Wrong Discussion

9 Post author: Zetetic 08 May 2011 08:04PM

Comment author: cousin_it 08 May 2011 08:47:50PM *  6 points [-]

You need to understand Solomonoff's and Hutter's ideas first to see where Legg is coming from. One of the best introductions to these topics available online is Legg's "Solomonoff Induction", though Li and Vitanyi's book is more thorough if you can get it. Legg's paper about prediction is very nice. I haven't studied his other papers but they're probably nice too. He comes across as a smart and cautious researcher who doesn't make technical mistakes. His thesis seems to be a compilation of his previous papers, so maybe you're better off just reading them.

Comment author: gwern 08 May 2011 08:56:03PM 5 points [-]

The thesis is quite readable and I found it valuable to sink deeply into the paradigm, rather than have things spread out over a bunch of papers.

The most worthless part of the thesis, IIRC*, was his discussion and collection of definitions of intelligence; it doesn't help persuade anyone of the intelligence=sequence-prediction claim, and just takes up space.

* It's been a while; I've forgotten whether the thesis actually covers this or whether I'm thinking of another paper.

Comment author: Zetetic 08 May 2011 09:36:17PM 2 points [-]

I've got Li and Vitanyi's book and am currently working through the Algorithmic Probability Theory sequence they suggest. I am also working through Legg's Solomonoff Induction paper.

I actually commented on your thread from February earlier today mentioning this paper, which seems to deal with the issues related to semi-measures in detail (something you indicated was very important), and it seems to do so in the context of the quote from Eliezer.

In particular, from the abstract:

Universal semimeasures work by modelling the sequence as generated by an unknown program running on a universal computer. Although these predictors are uncomputable, and so cannot be implemented in practice, they serve to describe an ideal: an existence proof for systems that predict better than humans.
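To make the abstract's idea concrete: the predictor sums a prior weight 2^-len(p) over every program p whose output extends the observed sequence, then predicts the next bit by the ratio of the resulting semimeasure values. The real construction requires a universal machine and is uncomputable; the sketch below is only a toy, with a deliberately trivial "machine" (programs are bitstrings emitted cyclically) standing in for a universal computer, and the function names are my own:

```python
from itertools import product

def run_toy_machine(program, n):
    # Toy stand-in for a universal computer: the "program" bitstring
    # is emitted cyclically. (Not remotely universal; illustration only.)
    return ''.join(program[i % len(program)] for i in range(n))

def semimeasure(x, max_len=8):
    # M(x) ~ sum of 2^-len(p) over all programs p (up to max_len bits)
    # whose output begins with the observed string x.
    total = 0.0
    for length in range(1, max_len + 1):
        for bits in product('01', repeat=length):
            p = ''.join(bits)
            if run_toy_machine(p, len(x)) == x:
                total += 2.0 ** (-length)
    return total

def predict_next(x, max_len=8):
    # Predict the next bit via the ratio M(x + b) / (M(x + '0') + M(x + '1')).
    m0 = semimeasure(x + '0', max_len)
    m1 = semimeasure(x + '1', max_len)
    z = m0 + m1
    if z == 0:
        return None, 0.5
    return ('0', m0 / z) if m0 >= m1 else ('1', m1 / z)

print(predict_next("0101"))  # short programs like "01" dominate, favoring '0'
```

Even in this degenerate setting the characteristic behavior shows up: after seeing "0101", the short program "01" carries most of the prior weight, so the predictor assigns high probability to '0'. Swapping in a genuinely universal machine (and accepting uncomputability) gives Solomonoff's predictor.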