private_messaging comments on How Bayes' theorem is consistent with Solomonoff induction - Less Wrong

9 Post author: Alex_Altair 09 July 2012 10:16PM


Comment author: private_messaging 11 July 2012 07:23:38AM *  2 points

A clarification, if I might:

"is the probability that we will see data sequence E, given that we run program H on the universal Turing machine."

I think it would be helpful to word it as "the output begins with the data sequence E", since it is a very common misconception that it suffices for E to appear somewhere within the output; that is, that it suffices for H to "explain" the data (the original article used "explains").
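The distinction above can be made concrete with a toy sketch (the "programs" here are just output strings, not a real universal Turing machine; the names are my own):

```python
# Contrast the correct Solomonoff condition (output *begins with* the
# data sequence E) with the common misconception (E appears *somewhere*
# in the output, i.e. the program merely "explains" E).

def predicts(output: str, E: str) -> bool:
    # Correct condition: the program's output must begin with E.
    return output.startswith(E)

def merely_explains(output: str, E: str) -> bool:
    # Misconception: E merely occurs somewhere within the output.
    return E in output

E = "0110"
output_a = "0110111010"  # begins with E: counted by Solomonoff induction
output_b = "1101100110"  # contains E, but does not begin with it

print(predicts(output_a, E))         # True
print(predicts(output_b, E))         # False
print(merely_explains(output_b, E))  # True
```

A program like `output_b` "explains" the data in the loose sense but contributes nothing to the Solomonoff prediction.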

When thinking of e.g. the universe, "explains" is typically taken to mean "the universe contains me somewhere", together with a form of anthropic reasoning, which can lead to a substantially different concept than Solomonoff induction.

As a side note, one can obtain a type of anthropic-reasoning prior by including some self-description on an extra tape that can be read; the code can then search for instances of itself within the models at only a constant cost, but it still needs to be predictive, i.e. output a string that begins with the observed data. This seems no different (up to a constant) from simply including the self-description as part of the data sequence E. edit: on second thought, the extra tape is different in a major, fallible way: the self-description on the extra tape, if sufficiently complete, can allow one to construct a god in one's own image for 'goddidit'. One should instead add the self-description as part of the data sequence E. It is still no-different-up-to-a-constant, though.
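The "self-description as part of E" variant amounts to the same prefix check with a concatenated string. A minimal sketch, assuming D and E are placeholder bit strings standing in for the self-description and the observed data:

```python
# Sketch of folding a self-description D into the data sequence:
# instead of giving programs a separate tape holding D, require the
# output to begin with D followed by the observed data E.

def predicts_with_self_description(output: str, D: str, E: str) -> bool:
    # The model must reproduce the self-description before predicting
    # the data; per the comment, this differs from an extra tape only
    # by a constant in program length.
    return output.startswith(D + E)

print(predicts_with_self_description("1010110111", "101", "0110"))  # True
print(predicts_with_self_description("0110101111", "101", "0110"))  # False
```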