# Eliezer_Yudkowsky comments on Newcomb's Problem and Regret of Rationality - Less Wrong

66 31 January 2008 07:36PM



Comment author: 22 August 2009 03:52:17AM, 3 points

For example, in a universe that ends at t = TREE(100), a time slice with t < googolplex has a much higher measure than a random time slice (since it takes more bits to represent a random t).
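The bit-counting claim in the quote can be illustrated numerically. Below is a toy sketch (my own illustration, not anything from the comment) assuming a length-based prior that weights a time slice t by 2^-(number of bits needed to write t):

```python
def weight(t):
    """Toy length-based prior: measure proportional to 2^-(bits needed to write t)."""
    return 2.0 ** -t.bit_length()

early = 10 ** 6    # a "small" time slice, cheap to locate
late = 10 ** 100   # stand-in for a typical random t in a vast universe

# The early slice needs ~20 bits to specify, the late one ~333 bits,
# so under this prior the early slice carries vastly more measure.
print(weight(early) / weight(late))
```

The exact exponents are an artifact of the toy encoding; the point is only that a uniformly random t in an enormous range takes many more bits to pin down than any particular small t.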

I have to say this strikes me as a really odd proposal, though it's certainly interesting from the perspective of the Doomsday Argument if advanced civilizations have a thermodynamic incentive to wait until nearly the end of the universe before using their hoarded negentropy.

But for me it's hard to see why "reality-fluid" (the name I give your "measure", to remind myself that I don't understand it at all) should dovetail so neatly with the information needed to locate events in universes or universes in Level IV. It's clear why an epistemic prior is phrased this way - but why should reality-fluid behave likewise? Shades of either Mind Projection Fallacy or a very strange and very convenient coincidence.

Comment author: 22 August 2009 06:00:38AM, 6 points

Actually, I think I can hazard a guess at that one. I think the idea would be: "the simpler the mathematical structure, the more often it'd show up as a substructure in other mathematical structures."

For instance, if you are building large random graphs, you'd expect to see some specific pattern of, say, 7 vertices and 18 edges show up as a subgraph more often than, say, some specific pattern of 100 vertices and 2475 edges.
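This intuition is easy to check by simulation. The sketch below (my own illustration; the specific patterns, graph size, and edge probability are arbitrary choices) counts cliques of two sizes in a single Erdős–Rényi random graph — the smaller pattern shows up far more often:

```python
import random
from itertools import combinations

random.seed(0)
n, p = 30, 0.5
# Erdős–Rényi random graph G(n, p), stored as a set of edges
edges = {frozenset(e) for e in combinations(range(n), 2) if random.random() < p}

def count_cliques(k):
    """Count complete subgraphs (cliques) on k vertices of the random graph."""
    return sum(
        all(frozenset(pair) in edges for pair in combinations(verts, 2))
        for verts in combinations(range(n), k)
    )

# Expected counts are C(30,3) * 2^-3 vs C(30,5) * 2^-10:
# the 3-clique is both easier to satisfy and has fewer constraints.
print(count_cliques(3), count_cliques(5))
```

Cliques are used here purely because they are cheap to test for; the same frequency asymmetry holds for any pair of small-vs-large fixed patterns.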

There's a sense in which "reality fluid" could be distributed evenly which would lead to this. If every entire mathematical structure got an equal amount of reality stuff, then small structures would benefit from the reality juice granted to the larger structures that they happen to also exist as substructures of.
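The "substructures inherit reality juice from their parents" idea can be made concrete with a toy model (entirely my own construction, and deliberately simplified: "subgraph" here means a fixed labeled edge set, with one unit of fluid per whole graph):

```python
from itertools import combinations

n = 4
all_pairs = list(combinations(range(n), 2))  # the 6 possible edges on 4 vertices

# Every graph on n labeled vertices gets one unit of "reality fluid".
graphs = []
for mask in range(2 ** len(all_pairs)):
    graphs.append({all_pairs[i] for i in range(len(all_pairs)) if mask >> i & 1})

def fluid(pattern):
    """Fluid a pattern inherits = number of whole graphs containing it as a subgraph."""
    return sum(pattern <= g for g in graphs)

small = {(0, 1)}                 # a single edge
big = {(0, 1), (1, 2), (2, 3)}   # a path through all 4 vertices

# The single edge sits inside 2^5 = 32 of the 64 graphs; the path inside only 2^3 = 8.
print(fluid(small), fluid(big))
```

Even with every whole structure weighted equally, the small pattern ends up with four times the inherited measure, which is the effect the comment describes.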

EDIT: blargh, corrected big graph edge count. meant to represent half a complete graph.

Comment author: 22 August 2009 07:30:51AM, 1 point

But for me it's hard to see why "reality-fluid" (the name I give your "measure", to remind myself that I don't understand it at all) should dovetail so neatly with the information needed to locate events in universes or universes in Level IV.

Well, why would it be easier to locate some events or universes than others, unless they have more reality-fluid?

It's clear why an epistemic prior is phrased this way - but why should reality-fluid behave likewise? Shades of either Mind Projection Fallacy or a very strange and very convenient coincidence.

Why is it possible to describe one mathematical structure more concisely than another, or to specify one computation using fewer bits than another? Is that just a property of the mind that's thinking about these structures and computations, or is it actually a property of Reality? The latter seems more likely to me, given results in algorithmic information theory. (I don't know if similar theorems have been or can be proved about set theory, i.e. that the shortest description lengths in different formalizations can't be too far apart, but it seems plausible.)
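The result being gestured at for computations is the invariance theorem of algorithmic information theory: for any two universal machines $U$ and $V$, description lengths agree up to an additive constant that depends only on the machines, not on the object described. In symbols:

```latex
% Invariance theorem: Kolmogorov complexity is machine-independent
% up to an additive constant c_{UV} depending only on U and V.
\[
  \forall x:\quad K_U(x) \le K_V(x) + c_{UV}
\]
```

Whether an analogous formalization-independence result holds for set-theoretic description languages is, as the comment notes, an open question here.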

Also, recall that in UDT there is no epistemic prior. So the only way to get an effect similar to EDT/CDT with a universal prior is with a weighting scheme over universes/events like the one I described.

Comment author: 22 August 2009 07:32:22AM, 2 points

I can sort of buy the part where simple universes have more reality-fluid, though frankly the whole setup strikes me as a mysterious answer to a mysterious question.

But the part where later events have less reality-fluid within a single universe, just because they take more info to locate - that part in particular seems really suspicious. MPF-ish.

Comment author: 22 August 2009 07:47:08AM, 1 point

I'm far from satisfied with the answer myself, but it's the best I've got so far. :)