Note that most mainstream AI researchers are deeply skeptical of the AIXI/universal intelligence approach.
There does seem to be some skepticism about AIXI, possibly deserved. However, Solomonoff Induction, on which it is heavily based, is a highly fundamental principle. There is a fair amount of skepticism and resistance to that as well, but as far as I can see it is entirely misguided, and often seems to stem from entrenched existing ideas.
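To make the underlying principle concrete: Solomonoff Induction predicts by weighting every hypothesis (program) by 2 to the power of minus its description length, then keeping the hypotheses consistent with the data. The real thing sums over all programs and is uncomputable, so the sketch below uses a tiny hand-picked hypothesis class with made-up description lengths; it is only an illustration of the weighting idea, not the actual formalism.

```python
# Toy sketch of Solomonoff-style prediction (illustrative assumptions):
# prior weight of each hypothesis = 2**(-description_length);
# keep hypotheses consistent with the observed bits, predict by posterior weight.
# Real Solomonoff induction ranges over all programs and is uncomputable.

def predict_next_bit(observed, hypotheses):
    """observed: string of '0'/'1' bits.
    hypotheses: dict name -> (description_length, generator fn taking n, returning n bits)."""
    weights = {"0": 0.0, "1": 0.0}
    for name, (desc_len, gen) in hypotheses.items():
        seq = gen(len(observed) + 1)
        if seq[:len(observed)] == observed:           # consistent with the data so far
            weights[seq[len(observed)]] += 2.0 ** (-desc_len)   # shorter description -> larger prior
    total = weights["0"] + weights["1"]
    return {b: w / total for b, w in weights.items()} if total else None

# Hypothetical hypothesis class; the description lengths are chosen by hand.
hypotheses = {
    "all_ones":  (2, lambda n: "1" * n),
    "alternate": (3, lambda n: ("10" * n)[:n]),
    "all_zeros": (2, lambda n: "0" * n),
}

# After observing a single "1", the shorter "all ones" hypothesis outweighs
# the longer "alternating" one: P(next=1) = 0.25 / (0.25 + 0.125) = 2/3.
probs = predict_next_bit("1", hypotheses)
```

The point of the sketch is the bias toward simplicity: two hypotheses fit the data, but the one with the shorter description gets the larger share of the prediction.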
On the other hand, many others are embracing these ideas, and they are certainly important to understand. Here is Shane Legg on the significance of this kind of material:
"Another theme that I picked up was how central Hutter’s AIXI and my work on the universal intelligence measure has become: Marcus and I were being cited in presentations so often that by the last day many of the speakers were simply using our first names. As usual there were plenty of people who disagree with our approach, however it was clear that our work has become a major landmark in the area."
I searched the posts but didn't find much relevant information. Has anyone taken a serious crack at this material, and would they be willing to share their thoughts? Is it worthwhile? Are there any dubious portions, or sections one might want to skip (either because of bad ideas or to save time)? I'm considering investing a chunk of time in investigating Legg's work, so any feedback would be much appreciated; it seems likely that others would like some perspective on it as well.