AlephNeil comments on Anthropics in a Tegmark Multiverse - Less Wrong
I think I can see a flaw.
OK, so your central idea is to use the complexity prior for 'centered worlds' rather than 'uncentered worlds'. A 'centered world' here means a "world + pointer to the observer".
Now, if I give you a world + a pointer to the observer, then you can tell me exactly what the observer's subjective state is, right? Therefore, the complexity of "world + pointer to observer" is greater than the complexity of "subjective state all by itself + the identity function".
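To spell out the inequality being invoked (a sketch; K denotes Kolmogorov complexity, and "subjective state" is left as informal here as it is above): any program that simulates the world and reads off the observer at the pointer thereby outputs that observer's subjective state, so

$$K(\text{subjective state}) \le K(\text{world}, \text{pointer}) + O(1).$$

That is, up to an additive constant, the "world + pointer" description is at least as complex as the subjective state taken on its own.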
Therefore, your approach entails that we should give massive weight to the possibility that solipsism is correct :-)
ETA: Fixed an important error - had written "less" instead of "greater".
My approach is defined by solipsism. I don't use the complexity prior for 'centered worlds'; I just use the complexity prior for 'subjective state.'
That said, all of the other people in the world probably also have mental states which are just as easily described, so it's an empty sort of solipsism.
(Also note that my idea is exactly the same as Wei Dai's).
You say:
Therefore, the complexity of "universe + pointer to the network of causal relationships constituting your thoughts" is greater than or equal to the complexity of "network of causal relationships constituting your thoughts + the identity function".
Really, you should just talk about the 'network of causal relationships constituting your thoughts' all by itself. So if Jupiter hasn't affected your thoughts yet, Jupiter doesn't exist? But what counts as an 'effect'? And what are the boundaries of your mental state? This gets awfully tricky.
The issue is that you mean a different thing by "complexity" than the definition does.
How do you describe your thoughts all by themselves? You could describe the whole physical brain and its boundary with the world, but that is spectacularly complex. It is simpler to specify the universe (by giving some simple laws which govern it) and then to describe where to find your thoughts in it. This is the shortest computational recipe which outputs a description of your thoughts.
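Here is a toy sketch of that two-part recipe in code (the rewrite rule and the pointer values are invented for illustration, not anything from the post): a short "law" generates a large world, and a small pointer locates the part of interest, so the pair can be much shorter than a literal transcription of that part.

```python
# Toy illustration: describing part of a "universe" via a short
# generating rule plus a pointer, instead of writing it out literally.
# The rule and the pointer value here are invented for the example.

def universe(n_steps: int) -> str:
    """A simple 'law of physics': iterate a short parallel rewrite rule.

    The program text is tiny, but its output doubles in size each step.
    """
    rule = {"a": "ab", "b": "ba"}  # Thue-Morse-style substitution
    state = "a"
    for _ in range(n_steps):
        state = "".join(rule[c] for c in state)
    return state

# A 'centered' description: the rule above plus a pointer (offset, length).
POINTER = (1000, 50)  # where "your thoughts" sit inside the output

world = universe(12)                    # 2**12 = 4096 characters from a few lines of code
start, length = POINTER
thoughts = world[start:start + length]  # the 'subjective state', read off via the pointer

# Writing out `thoughts` literally costs 50 characters; the (rule + pointer)
# description is the short program plus two small integers.
print(thoughts)
```

Note that the same few lines of "law" amortize over every observer embedded in the output; only the pointer changes from one observer to the next.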
By describing the abstract structure of that 'network of causal relationships' you were talking about?
Look, there's a Massive Philosophical Problem here which is "what do you take your thoughts to be?" But whatever answer you give, other than just "a universe plus a pointer" I can carry on repeating my trick.
It sounds as though you want to give the answer "an equivalence class of universes-plus-pointers, where (W1, P1) ~ (W2, P2) iff the being at P1 'has the same thoughts' as the being at P2". But this is no good if we don't know what "thoughts" are yet.
ETA: Just wanted to say that the post was very interesting, regardless of whether I think I can refute it, and I hope LW will continue to see discussions like this.
So you can describe your brain by saying explicitly what it contains, but this is not the shortest possible description in the sense of Kolmogorov complexity.
I believe that the shortest way to describe the contents of your brain--not your brain sitting inside a universe or anything--is to describe the universe (which has lower complexity than your brain, in the sense that it is the output of a shorter program) and then to point to your brain. This has lower complexity than trying to describe your brain directly.
I understand what you were trying to do a little better now.
I think that so far you've tended to treat this as if it were obvious, whereas I've treated it as if it were obviously false, but neither of us has given much in the way of justification.
Some things to keep in mind:
I think a pointer that effectively forces you to run the entire program in order to find the object it references still reduces complexity under the definition used. Computationally expensive != complex.
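A minimal sketch of that distinction (the iteration count and constants are arbitrary choices for the example): the program below is itself a short description of its output, so that output has low complexity under the definition, even though recovering it from the description is expensive.

```python
# Short description, expensive computation: this whole program *is* a
# compact description of the number it prints, so that number has low
# Kolmogorov complexity, even though decoding the description means
# grinding through ten million iterations.
x = 0
for _ in range(10**7):  # deliberately expensive to unfold
    x = (x * 6364136223846793005 + 1442695040888963407) % 2**64  # one LCG step
print(x)
```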
Sure, it might be reducing complexity, but it might not be. Consider the Library of Babel example, and bear in mind that a brain-state has a ton of extra information over and above the 'mental state' it supports. (Though strictly speaking this depends on the notion of 'mental state', which is indeterminate.)
Also, we have to ask "reducing complexity relative to what?" (As I said above, there are many possibilities other than "literal description" and "our universe + pointer".)