"The universal and unified evaluation of intelligence, be it human, non-human animal, artificial or extraterrestrial, has not been approached from a scientific viewpoint before, and this is a first step," the researcher concludes.
It seems as though they are trying to take credit for Shane Legg's work.
If you read the paper, I believe Legg is credited a number of times, as are Hutter and Schmidhuber.
"Measuring universal intelligence: Towards an anytime intelligence test"; abstract:
http://www.csse.monash.edu.au/~dld/Publications/HernandezOrallo+DoweArtificialIntelligenceJArticle.pdf
Example popular media coverage: http://www.sciencedaily.com/releases/2011/01/110127131122.htm
The group's homepage: http://users.dsic.upv.es/proy/anynt/
(There's an applet, but it seems to be about constructing a simple agent and stepping through various environments, not a working IQ test.)
The basic idea, if you already know your AIXI*, is to start with simple programs** and then test the subject on increasingly harder ones. To save time, boring environments, such as random ones or ones where the agent can 'die'***, are excluded, and a few rules are added to prevent gaming the test (by, say, deliberately failing on harder tests so as to be given only easy tests which one scores perfectly on) and to take into account how slowly or quickly the subject makes its predictions.
* apparently there are no good overviews of AIXI as a whole, but you could start at http://www.hutter1.net/ai/aixigentle.htm or http://www.hutter1.net/ai/uaibook.htm
** simple as defined by Kolmogorov complexity; since KC is uncomputable, one of the computable variants (which put bounds on resource usage) is used instead
*** make a mistake which turns any future rewards into fixed rewards with no connection to future actions
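For the code-minded, here is a toy sketch of what such an adaptive loop looks like. Everything in it (the periodic-bit environments as a crude stand-in for "programs of a given complexity", the multiplicative difficulty update, the difficulty-weighted score) is my own illustrative assumption, not the paper's actual construction, which samples environments by resource-bounded complexity:

```python
import random

def make_env(complexity, rng):
    """Hypothetical stand-in for 'sample an environment of this complexity':
    a repeating bit pattern whose period grows with the complexity level."""
    period = max(1, int(complexity))
    return [rng.randint(0, 1) for _ in range(period)]

def run_episode(agent, pattern, steps=20):
    """Score +1 per correct next-bit prediction, -1 otherwise; return the mean."""
    history, reward = [], 0
    for t in range(steps):
        guess = agent(history)
        actual = pattern[t % len(pattern)]
        reward += 1 if guess == actual else -1
        history.append(actual)
    return reward / steps

def anytime_test(agent, max_trials=50, seed=0):
    """Toy adaptive loop: start simple, step the difficulty up on success
    and down on failure, and weight each item by its difficulty, so that
    deliberately failing only shunts you onto low-weight easy items."""
    rng = random.Random(seed)
    complexity, score = 1.0, 0.0
    for _ in range(max_trials):
        pattern = make_env(complexity, rng)
        r = run_episode(agent, pattern)
        score += r * complexity  # harder items count for more
        complexity = complexity * 1.5 if r > 0 else max(1.0, complexity * 0.75)
    return score / max_trials

# a trivial agent: predict that the previous bit repeats
copycat = lambda hist: hist[-1] if hist else 0
```

Running `anytime_test(copycat)` gives a deterministic score for a fixed seed; the difficulty-weighting is the part that makes the "fail on purpose to get easy items" strategy unprofitable, since easy items contribute little to the total.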