Can a computable human beat a Solomonoff hyperintelligence at making predictions about an incoming sequence of bits? If the sequence is computable, he probably can't. I'm interested in what happens when the sequence can be uncomputable. The answer depends on what you mean by "beat".
Game 1: on each round you and Omega state your probabilities that the next bit will be 1. The logarithm of the probability you assigned to the actual outcome gets added to your score. (This setup is designed to incentivize players to report their true beliefs; see Eliezer's technical explanation.)
Game 2: you both start with a given sum of money. On each round you're allowed to bet some of it on 0 or 1 at 1:1 odds. You cannot go below zero. (This is the "martingale game", for motivation see the section on "constructive martingales" in the Wikipedia article on Martin-Löf randomness.)
Game 3: on each round you call out 0 or 1 for the next bit. If you guess right, you win 1 dollar, otherwise you lose 1 dollar. Going below zero is allowed. (This simple game was suggested by Wei Dai in this thread on one-logic.)
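To make the differences concrete, here is a minimal sketch of the three payoff rules (the function names and the choice of natural log are mine, nothing canonical):

```python
import math

def game1_score(p_one, bit):
    """Game 1: add the log of the probability you assigned to the actual bit."""
    p_actual = p_one if bit == 1 else 1.0 - p_one
    return math.log(p_actual)

def game2_bankroll(bankroll, stake, guess, bit):
    """Game 2: bet `stake` (at most the bankroll) on `guess` at 1:1 odds."""
    assert 0 <= stake <= bankroll  # the bankroll can never go below zero
    return bankroll + stake if guess == bit else bankroll - stake

def game3_score(score, guess, bit):
    """Game 3: call the next bit; win a dollar if right, lose one if wrong."""
    return score + 1 if guess == bit else score - 1
```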
As it turns out, in game 1 you cannot beat Omega by more than an additive constant, even if the input sequence is uncomputable and you know its definition. (I have linked before to Shane Legg's text, which can help you rederive this result.) Game 2 is game 1 in disguise, and there you cannot beat Omega by more than a multiplicative constant. In game 3, however, you can beat Omega. More precisely, you can sometimes stay afloat while Omega sinks below zero at a linear rate.
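For reference, the game-1 bound comes from the dominance property of the universal semimeasure $M$ over any computable predictor $\mu$, which (if I'm stating it correctly) reads

$$M(x_{1:n}) \;\ge\; 2^{-K(\mu)}\,\mu(x_{1:n}) \quad\Longrightarrow\quad \sum_{t=1}^{n} \log_2 M(x_t \mid x_{<t}) \;\ge\; \sum_{t=1}^{n} \log_2 \mu(x_t \mid x_{<t}) \;-\; K(\mu),$$

where $K(\mu)$ is roughly the length of the shortest program computing $\mu$. The right-hand inequality is the additive-constant bound of game 1; the left-hand one, read as a statement about capital, is the multiplicative-constant bound of game 2. Nothing here assumes the sequence $x_{1:n}$ is computable, only that the human is.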
Here's how. First, let's make the input sequence maximally malevolent toward Omega: each bit is the opposite of whatever Omega is projected to guess given the previous bits, so Omega guesses wrong and loses a dollar on every round. As for the human, all he has to do is either always say 0 or always say 1. Intuitively it seems likely that at least one of those strategies will stay afloat, because the two strategies' winnings sum to zero at every step: whenever one of them sinks, the other rises by the same amount.
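Here's a toy simulation of that argument. Solomonoff induction itself is uncomputable, so I'm substituting a simple computable predictor (majority vote over past bits) as a stand-in Omega; that substitution, and all the names below, are mine:

```python
def omega_guess(history):
    """A computable stand-in for Omega: guess the majority bit so far (1 on ties)."""
    return 1 if 2 * sum(history) >= len(history) else 0

def run(rounds=1000):
    history = []
    omega = all_zeros = all_ones = 0   # game-3 scores
    for _ in range(rounds):
        guess = omega_guess(history)
        bit = 1 - guess                # the sequence contradicts Omega's guess
        omega     += 1 if guess == bit else -1   # always -1: Omega is always wrong
        all_zeros += 1 if bit == 0 else -1       # the "always say 0" human
        all_ones  += 1 if bit == 1 else -1       # the "always say 1" human
        history.append(bit)
    return omega, all_zeros, all_ones

print(run())  # the two constant strategies' scores sum to zero on every round
```

Against this particular stand-in the adversarial sequence just alternates, so both constant strategies hover around zero while the stand-in Omega loses a dollar every round. Against the real Omega the argument only guarantees that on any given round at least one of the two constant strategies is at or above zero.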
So is Solomonoff induction really the shining jewel at the end of all science and progress, or does that depend on the payoff setup? It's not clear to me whether our own universe is computable. In the thread linked above Eliezer argued that we should be trying to approximate Solomonoff inference anyway:
If you're dealing with non-exceptional situations - non-devilish environments - then shouldn't a proof of epistemic error-boundedness generally carry over to a proof of decision error-boundedness? In other words, are you sure you're not assuming that the environment is a superintelligent adversary which is strictly more superintelligent than you, which is the sort of reasoning that leads people to adopt randomized algorithms?
Eliezer's argument sounds convincing, but to actually work it must rely on some prior over all of math, including uncomputable universes, to justify the rarity of such "devilish" or "adversarial" situations. I don't know of any such prior, and Wei's restatement of Berry's paradox (also linked above) seems to show that inventing one is futile. So we seem to lack any formal justification for adopting the Solomonoff distribution when reasoning about physics and the like, unless I'm being stupid and missing something really obvious again.
(posting this to discussion because I'm no longer convinced that my mathy posts belong in the toplevel)
There are serious weaknesses even over computable universes. Since Solomonoff induction doesn't search for ambient dependencies, and instead considers only explicit dependencies of reward on a particular action symbol, it counts the same world multiple times when the agent is instantiated multiple times in it and receives the same prefix of its observation sequence, and it evaluates incorrect, CDT-style counterfactuals in those worlds. So two identical AIXI agents would defect against each other in the Prisoner's Dilemma (the CDT-style reasoning problem). In Newcomb's problem, its decision depends on how many of the explicit dependencies carve the problem as two-boxing versus one-boxing, so at the very least it's unclear what it will do, and there seems to be no reason for it to reliably one-box. In an anthropic problem where a coin toss creates very many identical AIXI agents on one branch and only one agent on the other (assuming the branches have sufficiently similar K-complexity), the probability (complexity) of the many-agents branch gets counted that many times over, so the consequences of actions in the one-agent world carry almost no weight compared to the consequences in the many-agents world. In Counterfactual Mugging, AIXI would not give money to Omega, since it updates. And so on.
Someone should really write up a UDT-AIXI that doesn't have these problems; I expect it would be straightforward enough. That way, problems like the ones I listed would stop counting against the arguments in which AIXI is used as a definition of impressive capability; make it stronger, don't just laugh at its flaws.
I'd say ambient dependencies are the least of AIXI's problems. It has a >99% probability of wireheading, and in >99% of the remaining outcomes it disassembles itself with its mining claws. timtyler put the final nail in that coffin quite a while ago.
Some days ago I tried to do just that, and stopped because I don't know enough. Now I'm back to studying more math. The most obvious problem is that AIXI is uncomputable, so a UDT-AIXI will probably also be uncomputable. But how does an uncomputable agent get to find itself embedded in a computable universe?