Strange7 comments on An Intuitive Explanation of Solomonoff Induction - Less Wrong

53 Post author: Alex_Altair 11 July 2012 08:05AM




Comment author: Strange7 12 July 2012 09:09:10AM 0 points

Which would be more complex:

1) a Turing-computable set of physical laws and initial conditions that, run to completion, would produce a description of something uncannily similar to the universe as we know it, or

2) a Turing-computable set of physical laws and initial conditions that, run to completion, would produce a description of something uncannily similar to the universe as we know it, and then uniquely identify your brain within that description?

A program which produced a model of your mind but not of the rest of the universe would probably be even more complicated, since any and all knowledge of that universe encoded within your mind would need to be included with the initial program rather than emerging naturally.

Comment author: private_messaging 12 July 2012 09:27:35AM 1 point

Well, if the data is a description of your mind, then the code should produce a string that begins with a description of your mind, somehow. Pulling the universe out would require examining the code's internals.

If you do S.I. in a fuzzy manner, there can be all sorts of self-misconceptions; it can be easier (shorter code) to extract an incredibly important mind, so you obtain not solipsism but narcissism. The prior for self-importance may then be quite large.

If you drop the requirement that the output string begin with a description of the mind, and instead search for the data anywhere within the output string, then a simple counter will suffice as valid code.
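The counter point can be made concrete. A minimal sketch (not from the thread; the target bit string is a hypothetical stand-in for "a description of the mind"): a program that just writes out 0, 1, 2, … in binary eventually emits every finite bit string somewhere in its output, so under an "anywhere in the output" matching rule this trivial program counts as a valid hypothesis for any data.

```python
def counter_output(n_steps):
    """Concatenate the binary representations of 0, 1, 2, ...
    Every finite bit string eventually appears somewhere in this stream,
    since the stream eventually contains each string's own binary numeral."""
    return "".join(format(i, "b") for i in range(n_steps))

# Stand-in "data" -- any bit string would do.
target = "101101"
stream = counter_output(200)

# The counter "explains" the data under substring matching...
assert target in stream
# ...but of course not under prefix matching, which is why Solomonoff
# induction requires the program's output to *begin with* the observed data.
assert not stream.startswith(target)
```

This is why the prefix requirement matters: without it, the shortest "explanation" of any observation is a program that enumerates everything.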