Starting with the thesis is probably throwing yourself in at the deep end - not necessarily the best way to learn. It depends a lot on what you have already studied so far, though.
You might be correct on that. For now I suppose I should focus on mastering the basics. I've nearly finished Legg's write-up of Solomonoff induction, but since there seems to be a good bit of controversy over the AIXI approach, I'll get a few more details of algorithmic probability theory under my belt and then move on to something more obviously useful for a while - say, the details of machine learning and vision, and maybe the ideas for category-theoretic ontologies.
I searched the posts but didn't find much relevant information. Has anyone taken a serious crack at it who would like to share their thoughts? Is the material worthwhile? Are there any dubious portions, or sections one might want to skip (either because the ideas are bad or to save time)? I'm considering investing a chunk of time in Legg's work, so any feedback would be much appreciated - and it seems likely that others would like some perspective on it as well.