cousin_it comments on Notes on logical priors from the MIRI workshop - Less Wrong

Post author: cousin_it, 15 September 2013 10:43PM




Comment author: cousin_it, 16 September 2013 04:59:36AM

If you're proposing to treat Omega's words as just observational evidence that isn't connected to math and could turn out one way or the other with probability 50%, then I suppose the existing formalizations of UDT already cover such problems. But how does the agent assign probability 50% to a particular math statement made by Omega? If it's more complicated than "the trillionth digit of pi is even", then the agent needs some sort of logical prior over inconsistent theories to calculate the probabilities, and needs to be smart enough to treat these probabilities updatelessly, which brings us back to the questions asked at the beginning of my post... Or maybe I'm missing something; can you specify your proposal in more detail?
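For concreteness, here is a minimal sketch (my own illustration, not from the thread) of the naive move this paragraph questions: settle a single statement within a computation budget if you can, and fall back to probability 0.5 when the budget runs out, as with "the trillionth digit of pi is even". The names (`logical_prior`, `decide`) are hypothetical; the point is that this only handles isolated statements, not a consistent prior over whole theories.

```python
# Toy "logical prior" over single statements: try to settle the statement
# within a computation budget; if the budget runs out, fall back to 0.5.

def logical_prior(decide):
    """decide() returns True (proved), False (refuted), or None
    (undecided within the budget). Map the verdict to a probability."""
    verdict = decide()
    if verdict is True:
        return 1.0
    if verdict is False:
        return 0.0
    return 0.5  # can't settle it in time: treat it as a 50/50 logical coin

# "The 3rd digit of pi after the point is even" is cheap to refute
# (pi = 3.141..., so that digit is 1, which is odd).
print(logical_prior(lambda: False))  # 0.0
# "The trillionth digit of pi is even" blows any realistic budget.
print(logical_prior(lambda: None))   # 0.5
```

This works statement by statement, but it says nothing about keeping the resulting probabilities jointly consistent, which is exactly where a prior over inconsistent theories would be needed.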

Comment author: Manfred, 16 September 2013 10:36:54AM

Well, I was thinking more in terms of a logical prior over single statements; see my favorite here.

But yeah I guess I was missing the point of the problem.

Also: suppose Omega comes up to you and says "If 1=0 were true, I would have given you a billion dollars if and only if you would give me 100 dollars if 1=1 were true. 1=1 is true, so can you spare $100?" Does this sound trustworthy? Frankly, it doesn't; it feels like there's a principle-of-explosion problem, since anything follows from 1=0, which insists that Omega would have given you all possible amounts of money at once if 1=0 were true.

A formulation that avoids the principle of explosion is "I used some process that I cannot prove the outcome of to pick a digit of pi. If that digit of pi was odd I would have given you a billion dollars iff [etc]."
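Under that digit-of-pi formulation, the updateless calculation is straightforward. Below is a toy sketch (my own illustration; the names, the billion-dollar reward, and the 50% prior are the thread's hypothetical numbers, not a real algorithm): the agent commits to a policy before learning the digit's parity and picks whichever policy maximizes expected utility across both logical possibilities.

```python
# Toy counterfactual mugging with a "logical coin": the parity of a digit
# of pi the agent cannot compute in time. The agent assigns prior 0.5 to
# each parity and chooses a policy updatelessly.

PRIOR_ODD = 0.5          # logical prior that the chosen digit of pi is odd
REWARD = 1_000_000_000   # what Omega would have paid in the odd branch
COST = 100               # what Omega asks for in the even branch

def expected_utility(policy_pays: bool) -> float:
    """Expected utility of committing to a policy before learning the parity."""
    if policy_pays:
        # Odd branch: Omega would have paid. Even branch: you pay $100.
        return PRIOR_ODD * REWARD + (1 - PRIOR_ODD) * (-COST)
    return 0.0  # refuse in both branches: no money changes hands

best = max([True, False], key=expected_utility)
print(best, expected_utility(best))  # True 499999950.0
```

The paying policy wins because 0.5 × $1,000,000,000 dwarfs the 0.5 × $100 loss, which is why this formulation, unlike the 1=0 version, gives the agent a well-posed decision problem.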