cousin_it comments on Shane Legg's Thesis: Machine Superintelligence, Opinions? - Less Wrong Discussion
You need to understand Solomonoff's and Hutter's ideas first to see where Legg is coming from. One of the best introductions to these topics available online is Legg's "Solomonoff Induction", though Li and Vitanyi's book is more thorough if you can get it. Legg's paper about prediction is very nice. I haven't studied his other papers but they're probably nice too. He comes across as a smart and cautious researcher who doesn't make technical mistakes. His thesis seems to be a compilation of his previous papers, so maybe you're better off just reading them.
The thesis is quite readable, and I found it valuable to sink deeply into the paradigm rather than having things spread out over a bunch of papers.
The most worthless part of the thesis, IIRC*, was its discussion and collection of definitions of intelligence; it doesn't help persuade anyone of the intelligence=sequence-prediction claim, and just takes up space.
* It's been a while; I've forgotten whether the thesis actually covers this or whether I'm thinking of another paper.
I've got Li and Vitanyi's book and am currently working through the Algorithmic Probability Theory sequence they suggest, as well as Legg's "Solomonoff Induction" paper.
Earlier today I actually commented on your thread from February, mentioning this paper, which seems to deal in detail with the issues related to semi-measures (something you indicated was very important), and to do so in the context of the quote from Eliezer.
In particular, from the abstract: