rwallace comments on Poll: What value extra copies? - Less Wrong

5 [deleted] 22 June 2010 12:15PM

Comments (136)

Comment author: rwallace 23 June 2010 10:06:31AM 0 points [-]

In Solomonoff induction, the weight of a program falls off exponentially with its length: a program of length L bits gets weight 2^(-L). (I have an argument that this doesn't need to be assumed a priori and can instead be derived, though I don't have a formal proof of this.) Given that, it's easy to see that the total weight of all the weird interpretations is negligible compared to that of the normal interpretation.
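
A numeric sketch of that claim (my own illustration, with made-up program lengths, not from the comment): under a 2^(-length) prior, an interpretation whose shortest program is k bits longer than the normal one carries 2^(-k) times its weight, so even a huge number of weird interpretations contributes almost nothing in total.

```python
def prior_weight(length_bits: int) -> float:
    """Solomonoff-style prior weight of a single program of this length."""
    return 2.0 ** -length_bits

# Hypothetical numbers: a 100-bit "normal" interpretation vs. a million
# "weird" interpretations, each needing at least 50 extra bits to specify.
normal = prior_weight(100)
weird_total = 1_000_000 * prior_weight(150)
print(weird_total / normal)  # ~8.9e-10: negligible next to the normal reading
```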

It's true that some things become easier when you try to restrict your attention to "our single physical world", but other things become less easy. Anyway, that's a metaphysical question, so let's leave it aside; in which case, to be consistent, we should also forget about the notion of simulations and look at an at least potentially physical scenario.

Suppose the copy took the form of a physical duplicate of our solar system, with the non-interaction requirement met by flinging same over the cosmic event horizon. Now do you think it makes sense to assign this a positive utility?

Comment author: DanArmak 23 June 2010 08:01:47PM 0 points [-]

> Given that, it's easy to see that the total weight of all the weird interpretations is negligible compared to that of the normal interpretation.

I don't see why. My utility function could also assign negative utility to (some, not necessarily all) 'weird' interpretations, with a magnitude that scales exponentially with the interpretations' bit-lengths.

Is there a proof that this is inconsistent? If I understand correctly, you're saying that any utility function that assigns very large-magnitude negative utility to alternate interpretations of patterns in simulations is directly incompatible with Solomonoff induction. That's a pretty strong claim.
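
A sketch of the worry (my own illustrative numbers, not from the comment): if the disutility assigned to a weird interpretation grows like 2^k in its k extra bits, the 2^(-k) prior weight cancels it exactly, so each interpretation keeps a constant expected disutility no matter how improbable it is, and summing over many interpretations need not be negligible.

```python
def expected_disutility(extra_bits: int, base: float = 1.0) -> float:
    prior = 2.0 ** -extra_bits            # Solomonoff-style discount
    magnitude = base * 2.0 ** extra_bits  # disutility scaling with bit-length
    return prior * magnitude              # the exponentials cancel: always `base`

print([expected_disutility(k) for k in (1, 10, 50)])  # [1.0, 1.0, 1.0]
```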

> Suppose the copy took the form of a physical duplicate of our solar system, with the non-interaction requirement met by flinging same over the cosmic event horizon. Now do you think it makes sense to assign this a positive utility?

I don't assign positive utility to it myself. Not above the level of "it might be a neat thing to do". But I find your utility function much more understandable (as well as more similar to that of many other people) when you say you'd like to create physical clone worlds. It's quite different from assigning utility to simulated patterns requiring certain interpretations.

Comment author: rwallace 24 June 2010 02:45:35AM *  2 points [-]

Well, not exactly; I'm saying Solomonoff induction has implications for what degree of reality (weight, subjective probability, magnitude, measure, etc.) we should assign certain worlds (interpretations, patterns, universes, possibilities, etc.).

Utility is a different matter. You are perfectly free to have a utility function that assigns Ackermann(4,4) units of disutility to each penguin that exists in a particular universe, whereupon the absence of penguins will presumably outweigh all other desiderata. I might feel this utility function is unreasonable, but I can't claim it to be inconsistent.
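
For scale (my addition, using the standard two-argument definition): the Ackermann function invoked above grows so fast that Ackermann(4,4) = 2^2^2^65536 - 3 is far beyond anything computable; even small inputs show the explosion.

```python
def ackermann(m: int, n: int) -> int:
    """Standard two-argument Ackermann function (recursive definition)."""
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

print(ackermann(2, 3), ackermann(3, 3))  # 9 61
```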