MrMind comments on Understanding and justifying Solomonoff induction - Less Wrong

Post author: gedymin 15 January 2014 01:16AM (1 point)




Comment author: MrMind 15 January 2014 02:08:33PM 0 points [-]

In the context of Bayesian reasoning, I understand "random" as "not enough information", which is different from "non-deterministic".
So:

If there is no source of randomness involved, the process is fully deterministic, and can be best predicted by deductive reasoning.

Only if we have enough information to exactly compute the next state from the previous ones. When this is not the case, lack of information acts as a source of randomness, for which SI can account.
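To make that concrete, here is a toy sketch of the idea (true Solomonoff induction is incomputable, so this uses a tiny hand-picked hypothesis class; the hypotheses and their bit-lengths are illustrative assumptions, not anything from the thread): a Bayesian mixture that weights each hypothesis by 2^(-complexity) and updates on observed bits. The observer's missing information about which hypothesis is true shows up as residual predictive uncertainty, exactly the "lack of information acts as a source of randomness" point.

```python
# Toy illustration of a Solomonoff-style mixture predictor over a
# tiny, hand-picked hypothesis class (real SI sums over all programs
# and is incomputable). Each hypothesis gets prior weight 2^(-bits).

hypotheses = {
    # name: (description length in bits, P(next bit = 1 | history))
    "all_ones":  (2, lambda hist: 1.0),
    "all_zeros": (2, lambda hist: 0.0),
    "fair_coin": (1, lambda hist: 0.5),
}

def predict(history, weights):
    """Mixture probability that the next bit is 1."""
    total = sum(weights.values())
    return sum(w * hypotheses[h][1](history)
               for h, w in weights.items()) / total

def update(weights, history, bit):
    """Multiply each weight by that hypothesis's likelihood for the bit."""
    new = {}
    for h, w in weights.items():
        p1 = hypotheses[h][1](history)
        new[h] = w * (p1 if bit == 1 else 1.0 - p1)
    return new

weights = {h: 2.0 ** -bits for h, (bits, _) in hypotheses.items()}
history = []
for bit in [1, 1, 1, 1]:
    weights = update(weights, history, bit)
    history.append(bit)

# After a run of 1s, "all_zeros" is eliminated and "all_ones"
# dominates the posterior, so the mixture's next-bit prediction
# rises well above 0.5 even though no single hypothesis was assumed.
```

The deterministic case in the quote corresponds to the posterior collapsing onto one hypothesis, at which point prediction becomes pure deduction.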

If there are no rules, the process is fully random. In this case just tossing a fair coin will predict equally well (with P=0.5).

In a sense, yes. There might still be useful pockets of computability inside the universe, though.

If it's hypercomputing, a "higher-order" Solomonoff induction will do better.

I'm not sure "higher-order" Solomonoff induction is even a thing.

Comment author: gedymin 15 January 2014 02:50:12PM *  1 point [-]

"Higher-order" SI is just SI armed with an upgraded universal prior - one that is defined with reference to a universal hypercomputer instead of a universal Turing machine.
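For concreteness, the ordinary universal prior assigns to a finite string x the total weight of the programs that make a universal monotone Turing machine U output something beginning with x (a sketch; notation varies by author):

```latex
% Standard universal (Solomonoff) prior over a universal monotone
% Turing machine U; p ranges over binary programs, |p| is p's length,
% and U(p) = x* means U's output begins with x.
M(x) \;=\; \sum_{p \,:\, U(p) = x\ast} 2^{-|p|}
```

The "higher-order" version gedymin describes keeps this same sum but lets U be a universal machine for the chosen model of hypercomputation instead of a Turing machine.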

Comment author: MrMind 15 January 2014 03:30:43PM *  -1 points [-]

It's not that simple. There isn't a single model of hypercomputation, and even within the same model, hypercomputers can differ in computational power.