Wei_Dai comments on Does Solomonoff always win? - Less Wrong

Post author: cousin_it 23 February 2011 08:42PM


Comment author: Wei_Dai 25 February 2011 03:46:28PM, 3 points

It occurs to me that an important question is: if it turns out that our universe actually contains a halting oracle (or even more powerful oracles), can an AI programmed with the Solomonoff prior exploit it properly (i.e., do no worse than a human) to accomplish its goals? This question is not really captured by any of the three games, since they all force the player to make guesses without being able to interact with the universe.

It seems hard to think of a game that does capture this question. Can anyone think of such a game?

Comment author: JGWeissman 25 February 2011 08:16:53PM, 1 point

It seems that you could design such games by taking any of the games we are discussing and modifying it so the agent is allowed to ask an oracle questions and get answers before making its decision. Predicting the results of such games seems harder.

Comment author: Wei_Dai 26 February 2011 07:39:13PM, 2 points

Actually, I think a human can easily win such games. Consider game 1 with Chaitin's Ω as the string to be predicted. A human can use the halting oracle to compute Ω exactly, whereas the Solomonoff prior treats it like a random string and would converge to answering .5 for the probability of the next bit being 1.
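The construction Wei_Dai has in mind can be sketched in code. This is only a toy illustration, not the real thing: Ω for a genuine universal prefix machine requires an uncomputable halting oracle, so here a small hypothetical prefix-free program set with a lookup table stands in for the oracle. All names (`TOY_PROGRAMS`, `HALTS`, etc.) are invented for the sketch.

```python
from fractions import Fraction

# Toy stand-ins: a tiny prefix-free "program" set and a lookup table
# playing the role of the (in reality uncomputable) halting oracle.
TOY_PROGRAMS = ["0", "10", "110", "111"]
HALTS = {"0": True, "10": False, "110": True, "111": False}

def halting_oracle(program: str) -> bool:
    """Mock oracle: answers whether the toy program halts."""
    return HALTS[program]

def toy_omega() -> Fraction:
    """Omega = sum of 2^-|p| over the halting programs p."""
    return sum((Fraction(1, 2 ** len(p))
                for p in TOY_PROGRAMS if halting_oracle(p)),
               Fraction(0))

def omega_bits(n: int) -> str:
    """First n binary digits of the toy Omega after the point."""
    omega, bits = toy_omega(), []
    for _ in range(n):
        omega *= 2
        if omega >= 1:
            bits.append("1")
            omega -= 1
        else:
            bits.append("0")
    return "".join(bits)
```

A predictor with oracle access can read off every bit of Ω this way and answer with certainty, whereas a prior whose generators are all computable programs treats Ω's bits as fair coin flips.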

But this isn't completely convincing (for the case that Solomonoff isn't sufficient) because perhaps an AI programmed with the Solomonoff prior can do better (or the human worse) if the halting oracle is a natural part of the environment, instead of something they are given special access to.

Comment author: JGWeissman 26 February 2011 07:47:11PM, 2 points

> whereas the Solomonoff prior treats it like a random string and would converge to answering .5 for the probability of the next bit being 1.

Isn't something like "If you have access to a halting oracle, ask it for Chaitin's Ω" among the computable generators included in the Solomonoff prior? If so, the Solomonoff agent should converge to the correct sequence.

Comment author: Wei_Dai 26 February 2011 07:55:33PM, 0 points

No, at least not in the usual formulation of the Solomonoff prior that I'm familiar with.
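For reference, the usual formulation Wei_Dai is referring to defines the prior as a weighted mixture over programs for a fixed computable universal prefix machine $U$:

```latex
M(x) \;=\; \sum_{p \,:\, U(p) = x*} 2^{-|p|}
```

where the sum ranges over programs $p$ whose output begins with $x$. Since every generator $p$ is an ordinary program for the computable machine $U$, "ask the halting oracle" is not among the included generators.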

Comment author: Wei_Dai 26 February 2011 08:49:06PM, 0 points

An idea from discussion with cousin_it:

What if we let AIXI play game 1, and allow probabilities to be written in some specific non-computable notation, such as a programming language that allows calls to a halting oracle? Does AIXI win this game?

Comment author: cousin_it 01 March 2011 09:03:08AM, 0 points

How does AIXI play regular game 1? Finite binary notation isn't enough to specify the uncomputable probability values it wants to output.

If we handwave this difficulty away and say that the AI's output may include infinite sums etc., then there's a simple agent that avoids losing your game by more than a constant: the mixture of all computable consistent mixtures expressible in your notation. Proof of optimality is unchanged.
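The "by more than a constant" in the claim above comes from the standard dominance argument (as in the AIXI literature), assuming the notation admits a weighted mixture $\xi$ that assigns each expressible consistent mixture $\nu$ some weight $w_\nu > 0$:

```latex
\xi(x_{1:n}) \;=\; \sum_{\nu} w_\nu\, \nu(x_{1:n}) \;\ge\; w_\nu\, \nu(x_{1:n})
\quad\Longrightarrow\quad
\log \frac{\nu(x_{1:n})}{\xi(x_{1:n})} \;\le\; \log \frac{1}{w_\nu}.
```

The mixture's log-loss therefore trails any single $\nu$'s by at most the constant $\log(1/w_\nu)$, independent of the sequence length $n$.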

On the other hand, if only finite notation is allowed, then it's not clear what AIXI is supposed to output at each step. The AIXI paper would advise us to construct the optimal agent tailor-made for your game, using a universal prior over computable input sequences. I have no idea how such an agent would fare on uncomputable input.