JGWeissman comments on Another attempt to explain UDT - Less Wrong

Post author: cousin_it 14 November 2010 04:52PM


Comment author: JGWeissman 14 November 2010 10:43:02PM 3 points

So our deconstruction of Many Worlds vs Collapse is that there is a Many Worlds universe, and there are also single world Collapse universes for each sequence of ways the wave function could collapse. After pondering the difference between "worlds" and "universes", it seems that the winner is still Many Worlds.

Comment author: cousin_it 14 November 2010 11:03:27PM 0 points

Right. (Amusing comment, by the way!) Under UDT + a simplicity prior, if some decision has different moral consequences under MWI and under Copenhagen, it seems we ought to act as if MWI were true. I still remain on the fence about "accepting" UDT, though.

Comment author: Vladimir_Nesov 14 November 2010 11:08:48PM 0 points

I still remain on the fence about "accepting" UDT, though.

I believe that the parts about computable worlds and the universal prior are simply wrong for most preferences, and for human preference in particular. On the other hand, UDT gives an example of a non-confused way of considering a decision problem (even if the decision problem is not the one allegedly considered; that is not the general case).

Comment author: cousin_it 14 November 2010 11:23:20PM 1 point

Eliezer has expressed the idea that using a Solomonoff-type prior over all programs doesn't mean you believe the universe to be computable - it just means you're trying to outperform all other (ETA: strike the word "other") computable agents. This position took me a lot of time to parse, but now I consider it completely correct. Unfortunately the reason it's correct is not easy to express in words; it's just some sort of free-floating math idea in my head.

Not sure how exactly this position meshes with UDT, though.
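The "outperform" claim above has a standard concrete form: a Bayes mixture over predictors, weighted by something like 2^-(program length), incurs at most log2(1/prior weight) more cumulative log-loss than the best predictor in the mixture, whatever sequence the environment produces. A toy sketch of that dominance bound (the four "experts" and their made-up code lengths are illustrative stand-ins; the real Solomonoff mixture ranges over all programs and is itself uncomputable, which is exactly Peter_de_Blanc's point below):

```python
import math

# Toy "programs": each maps a history (tuple of bits) to P(next bit = 1).
# The experts and their "lengths" are invented for illustration; they are
# not the actual universal prior, just a finite stand-in for it.
EXPERTS = {
    "always0": (1, lambda h: 0.01),
    "always1": (1, lambda h: 0.99),
    "alternate": (2, lambda h: 0.99 if len(h) % 2 == 0 else 0.01),
    "fair_coin": (2, lambda h: 0.5),
}

def mixture_log_loss(sequence):
    """Predict each bit with the Bayes mixture over EXPERTS.

    Returns (mixture's cumulative log-loss, each expert's cumulative
    log-loss), both in bits.
    """
    # Prior weight ~ 2^-length, normalized (a simplicity prior in miniature).
    weights = {name: 2.0 ** -length for name, (length, _) in EXPERTS.items()}
    total = sum(weights.values())
    weights = {k: v / total for k, v in weights.items()}

    mix_loss = 0.0
    expert_loss = {name: 0.0 for name in EXPERTS}
    history = ()
    for bit in sequence:
        # Mixture prediction: posterior-weighted average of expert predictions.
        p_mix = sum(w * EXPERTS[name][1](history) for name, w in weights.items())
        p_mix = p_mix if bit == 1 else 1.0 - p_mix
        mix_loss += -math.log2(p_mix)
        # Each expert's own loss, plus a Bayesian update of its weight.
        for name, (_, predict) in EXPERTS.items():
            p = predict(history) if bit == 1 else 1.0 - predict(history)
            expert_loss[name] += -math.log2(p)
            weights[name] *= p
        z = sum(weights.values())
        weights = {k: v / z for k, v in weights.items()}
        history += (bit,)
    return mix_loss, expert_loss
```

On an alternating sequence the mixture's loss stays within log2(1/prior weight of "alternate") = log2(6) bits of the best expert's loss, and the same bound holds no matter which sequence the environment feeds in - including a sequence generated by an uncomputable process, which is one way to read the claim that the prior doesn't commit you to a computable universe.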

Comment author: Eugine_Nier 14 November 2010 11:34:26PM 3 points

Also, if the universe is not computable, there may be hyperturing agents running around. You might even want to become one.

Comment author: Peter_de_Blanc 14 November 2010 11:29:32PM 3 points

Why do you say "all other computable agents"? Solomonoff induction is not computable.

Comment author: cousin_it 14 November 2010 11:35:39PM 0 points

Right, sorry. My brain must've had a hiccup. It's scary how often that happens. I've amended the comment.

Comment author: Vladimir_Nesov 14 November 2010 11:31:16PM 2 points

Eliezer has expressed the idea that using a Solomonoff-type prior over all programs doesn't mean you believe the universe to be computable - it just means you're trying to outperform all other computable agents.

Outperform at generating "predictions", but why is that interesting? Especially if the universe is not computable, so that the "predictions" don't in fact have anything to do with the universe? (Which again assumes that "universe" is interesting.)