Manfred comments on Results from MIRI's December workshop - Less Wrong

45 Post author: Benja 15 January 2014 10:29PM




Comment author: paulfchristiano 31 December 2013 02:23:09AM 0 points

you pick a model at random like normal, but if it's not consistent with the information we're conditioning on, you throw it out.

You can do this for a finite set S by picking a random assignment of truth values to the sentences in S, but (1) there is a nasty dependence on S: the resulting distribution is very sensitive to the choice of S and definitely doesn't converge to any limit as S grows (this is related to the difficulty of defining a "random model" of some axioms), and (2) since S is finite, you actually need some sentence like "P(x) is true for 90% of x such that 'P(x)' is in S", which is really awkward, and I don't know how to make that actually work.
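To make point (1) concrete, here is a toy sketch in Python of the construction described above: sample a truth-value assignment to the sentences in a finite set S uniformly, throw out assignments inconsistent with propositional logic, and read off a probability. The representation (sentences as Python predicates over models, exact enumeration instead of actual rejection sampling) is my own illustrative choice, not anything from the original discussion.

```python
from itertools import product

def consistent(assignment, sentences, atoms):
    """A truth-value assignment to the sentences is consistent iff some
    model (truth assignment to the atoms) agrees with it on every sentence."""
    for bits in product([False, True], repeat=len(atoms)):
        model = dict(zip(atoms, bits))
        if all(sent(model) == val for sent, val in zip(sentences, assignment)):
            return True
    return False

def prob_of(target_index, sentences, atoms):
    """P(sentences[target_index]) under a uniform distribution over the
    *consistent* assignments to S -- rejection sampling, done exactly
    by enumerating all 2^|S| candidate assignments."""
    survivors = [a for a in product([False, True], repeat=len(sentences))
                 if consistent(a, sentences, atoms)]
    return sum(a[target_index] for a in survivors) / len(survivors)

atoms = ["p", "q"]
p      = lambda m: m["p"]
p_or_q = lambda m: m["p"] or m["q"]

print(prob_of(0, [p], atoms))          # S = {p}:      P(p) = 0.5
print(prob_of(0, [p, p_or_q], atoms))  # S = {p, p∨q}: P(p) = 1/3
```

The second call shows the sensitivity to S: merely adding the logically related sentence p∨q to S shifts the probability of p from 1/2 to 1/3, because the inconsistent assignment (p true, p∨q false) is discarded while the other three survive.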

A technically correct solution along these lines would still be of interest, but my intuition is that this isn't working for the right reasons and so there are likely to be further problems even if (2) is resolved and (1) is deemed non-problematic.

Also note that "pick a random model consistent with information we are conditioning on" is not the kind of solution we are looking for; we want conditioning to be conditioning. So your proposal would correspond to picking sentences at random for a while as in Abram Demski's proposal, and then transitioning into choosing a model at random at some (probably randomly chosen) point. This is something we considered explicitly.
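The two-phase process described above can be sketched in the same toy propositional setting. Everything here (the switch probability, the predicate representation, the brute-force model enumeration) is my own illustrative assumption, not MIRI's actual construction.

```python
import random
from itertools import product

def hybrid_sample(sentences, atoms, switch_prob=0.3, rng=random):
    """Sketch of the hybrid process: assign truth values to sentences one
    at a time (Demski-style), then at a randomly chosen point transition
    to picking a model at random consistent with everything fixed so far."""
    # all propositional models over the atoms (brute force, toy scale only)
    models = [dict(zip(atoms, bits))
              for bits in product([False, True], repeat=len(atoms))]
    for sent in sentences:
        if rng.random() < switch_prob:
            break  # randomly chosen transition point
        val = rng.random() < 0.5  # propose a random truth value
        narrowed = [m for m in models if sent(m) == val]
        if narrowed:
            models = narrowed
        # if no model survives, the proposal was contradictory and the
        # opposite value is forced; leaving `models` unchanged encodes that
    model = rng.choice(models)  # phase two: a random consistent model
    return [sent(model) for sent in sentences]

atoms = ["p", "q"]
p      = lambda m: m["p"]
p_or_q = lambda m: m["p"] or m["q"]

print(hybrid_sample([p, p_or_q], atoms, rng=random.Random(0)))
```

Because the output is always read off a single surviving model, every sample is guaranteed consistent, whatever point the process switches at.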

Comment author: Manfred 31 December 2013 11:45:35AM 0 points

definitely doesn't converge to any limit

I'm confused about what you want when you say this. I think an example or two would help me.

My best guess: you want the assigned probability to equal the "ideal" probability (as laid out in the probabilistic reflection paper) in the limit as computing power goes to infinity. It's difficult to take this limit for Abram's original proposal, but if you make it doable by restricting the proposal to S, then there can be things that are true but not provable using statements in S, so infinite computing power doesn't get us there.

Comment author: Will_Sawin 15 January 2014 06:05:42AM 0 points

It's actually not too hard to demonstrate things about the limit for Abram's original proposal, unless there's another one that's original-er than the one I'm thinking of. It limits to the distribution of outcomes of a certain incomputable random process which uses a halting oracle to tell when certain statements are contradictory.

You are correct that it doesn't converge to a limit of assigning 1 to true statements and 0 to false statements. This is of course impossible, so we don't have to accept it. But it seems like we should not have to accept divergence - believing something with high probability, then disbelieving with high probability, then believing again, etc. Or perhaps we should?