Note that most mainstream AI researchers are deeply skeptical of the AIXI/universal intelligence approach.
Observe the following parallel:
Decades ago the STRIPS formalism for automated planning was very popular. The formalism was extremely general: a wide variety of planning problems could be expressed in its action- and state-representation language. Furthermore, it came with a tantalizing theoretical observation: if you could obtain a good admissible heuristic function, one that never overestimates the true distance to the goal, then the A* algorithm would find an optimal plan. So the problem of achieving general intelligence was "just" a problem of finding good heuristic functions for graph search.
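To make that guarantee concrete, here is a minimal Python sketch of A* on a toy grid domain of my own invention (the grid, the `grid_neighbors` helper, and the bounds are all illustrative, not from any STRIPS paper). The only thing the optimality proof asks of `heuristic` is admissibility: never overestimate the remaining cost.

```python
import heapq

def a_star(start, goal, neighbors, heuristic):
    """Generic A* search. If heuristic(n, goal) never overestimates the
    true remaining cost, the first path popped at the goal is optimal."""
    # Frontier entries are (f = g + h, g, node, path-so-far).
    frontier = [(heuristic(start, goal), 0, start, [start])]
    best_g = {start: 0}  # cheapest known cost to reach each node
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for nxt, step_cost in neighbors(node):
            g2 = g + step_cost
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                heapq.heappush(
                    frontier, (g2 + heuristic(nxt, goal), g2, nxt, path + [nxt])
                )
    return None, float("inf")  # goal unreachable

def grid_neighbors(node):
    # Toy 5x5 grid, 4-connected, unit step costs.
    x, y = node
    for nxt in [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]:
        if 0 <= nxt[0] < 5 and 0 <= nxt[1] < 5:
            yield nxt, 1

# Manhattan distance never overestimates on this grid, so it is admissible.
manhattan = lambda a, b: abs(a[0] - b[0]) + abs(a[1] - b[1])

path, cost = a_star((0, 0), (4, 3), grid_neighbors, manhattan)
print(cost, path)  # cost 7: an optimal 7-step path
```

The catch, of course, is that the generality lives in `heuristic`: on a toy grid an admissible heuristic is obvious, but for hard planning domains finding one that is both admissible and informative is the entire problem.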
Nowadays the AIXI formalism is gaining popularity. The formalism is extremely general: nearly any problem can be formulated in its terms. It comes with a tantalizing theoretical observation: if the agent could compute the Kolmogorov complexity of the programs that might be generating its observation stream (in effect, the Solomonoff prior over environments), then its action sequence would be provably optimal. So the problem of achieving general intelligence is "just" a problem of estimating the Kolmogorov complexity of an observation sequence.
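And here the catch bites harder, since K(x) is incomputable. The only generic computable handle on it is an upper bound from an off-the-shelf compressor, a standard approximation trick; the sketch below and its toy streams are my own illustration, not anything from Legg's work.

```python
import os
import zlib

def complexity_upper_bound(s: bytes) -> int:
    # The length of zlib's output is a computable *upper bound* on the
    # Kolmogorov complexity of s (up to an additive constant): any
    # compressed form plus a fixed decompressor is a program printing s.
    # K(s) itself is incomputable -- that is exactly the catch.
    return len(zlib.compress(s, 9))

regular = b"ab" * 500     # highly regular stream: compresses well
noise = os.urandom(1000)  # incompressible stream: bound stays near 1000

print(complexity_upper_bound(regular))  # small (a few dozen bytes)
print(complexity_upper_bound(noise))    # close to the raw 1000 bytes
```

The gap between such crude upper bounds and the true K(x) is where all the difficulty hides, just as the gap between trivial and informative admissible heuristics was where the difficulty hid for STRIPS.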
I searched the posts here but didn't find much relevant discussion. Has anyone taken a serious crack at it, ideally someone willing to share their thoughts? Is the material worthwhile? Are there any dubious portions, or sections one might want to skip (either because the ideas are bad or to save time)? I'm considering investing a chunk of time in investigating Legg's work, so any feedback would be much appreciated, and it seems likely that others would like some perspective on it as well.