I've got Li and Vitányi's book and am currently working through the Algorithmic Probability Theory sequence they suggest. I am also reading Legg's Solomonoff Induction paper.
I actually commented on your thread from February earlier today, mentioning this paper. It seems to deal in detail with the issues around semi-measures (something you indicated was very important), and it does so in the context of the quote from Eliezer.
In particular, from the abstract:
> Universal semimeasures work by modelling the sequence as generated by an unknown program running on a universal computer. Although these predictors are uncomputable, and so cannot be implemented in practice, they serve to describe an ideal: an existence proof for systems that predict better than humans.
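To make the idea concrete, here is a toy sketch of that "mixture over programs" prediction scheme. The real construction sums 2^(-|p|) over all programs p for a universal prefix machine and is uncomputable; everything below (the tiny "machine" that just repeats its program forever, the length cutoff) is my own stand-in for illustration, not anything from Legg's paper:

```python
# Toy illustration of a universal-semimeasure-style predictor.
# Real Solomonoff induction weights every program p for a universal
# prefix machine by 2^(-|p|) -- uncomputable. Here the "machine" is a
# deliberately trivial stand-in: a program p (a bitstring) "outputs"
# p repeated forever, and we only enumerate programs up to max_len.

from itertools import product

def run(program, length):
    """Output of the toy machine: the program repeated out to `length` bits."""
    reps = -(-length // len(program))  # ceiling division
    return (program * reps)[:length]

def m(x, max_len=12):
    """Toy semimeasure: sum of 2^-|p| over programs whose output extends x."""
    total = 0.0
    for n in range(1, max_len + 1):
        for bits in product("01", repeat=n):
            p = "".join(bits)
            if run(p, len(x)) == x:
                total += 2.0 ** (-n)
    return total

def predict_next(x):
    """Predicted probability that the next bit is '1': M(x1) / (M(x0) + M(x1))."""
    m0, m1 = m(x + "0"), m(x + "1")
    return m1 / (m0 + m1)

# After seeing "010101", short programs like "01" dominate the mixture,
# so the predictor assigns most of its probability to '0' coming next.
print(predict_next("010101"))
```

The point the abstract is making survives even in this crippled version: regular sequences get short generating programs, short programs get exponentially more weight, and so the mixture "learns" the pattern without being told what to look for.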
Yes, I already rederived most of these results and even made a tiny little bit of progress on the fringe :-) But it turned out to be tangential to the problem I'm trying to solve.
I searched the posts but didn't find much relevant information. Has anyone taken a serious crack at it who would like to share their thoughts? Is the material worthwhile? Are there any dubious portions, or sections one might want to skip (either because the ideas are bad or to save time)? I'm considering investing a chunk of time into investigating Legg's work, so any feedback would be much appreciated; it seems likely that others would like some perspective on it as well.