Comment author: 3p1cd3m0n 31 January 2015 02:51:51AM 0 points [-]

I really appreciate having the examples in parentheses and italicised. It lets me easily skip them when I know what you mean. I wish others would do this.

Comment author: 3p1cd3m0n 27 January 2015 01:16:31AM 0 points [-]

"Doesn't physics say this universe is going to run out of negentropy before you can do an infinite amount of computation?" Actually, there is a [proposal that could create a computer that runs forever.

Comment author: Kindly 15 January 2015 02:21:36AM 0 points [-]

It doesn't: for example, the set of strings 0, 10, 110, 1110, 11110, 111110, ... is infinite and prefix-free. So is the set of C-style (null-terminated) strings.
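A minimal sketch of that in Python (is_prefix_free is an illustrative helper, not anything standard): it checks that the first twenty strings of the form 1...10 are mutually prefix-free, and shows their Kraft sum 2^-1 + 2^-2 + ... approaching 1.

    # Check that {"0", "10", "110", ...} is prefix-free and that its
    # Kraft sum (sum of 2^-length over all strings) converges to 1.
    def is_prefix_free(strings):
        return not any(a != b and b.startswith(a)
                       for a in strings for b in strings)

    codes = ["1" * n + "0" for n in range(20)]  # "0", "10", ..., "111...10"
    assert is_prefix_free(codes)
    print(sum(2.0 ** -len(s) for s in codes))   # 0.999999... -> 1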

Comment author: 3p1cd3m0n 16 January 2015 03:50:52AM 0 points [-]

I see. Does the method of normalization you gave work even when there is an infinite number of hypotheses?

In response to Something to Protect
Comment author: Nominull3 31 January 2008 03:30:55AM 4 points [-]

I don't have anything desperately important to me, and you say I'm not allowed to just pick something. Given this, what am I supposed to do, to become more rational? Am I just doomed? I really desperately want to believe true things and not false things, but you say that's not good enough.

Comment author: 3p1cd3m0n 15 January 2015 01:53:00AM 1 point [-]

Decreasing existential risk isn't incredibly important to you? Could you explain why?

Comment author: Kindly 14 January 2015 02:52:19AM *  1 point [-]

The hypotheses are encoded in a prefix-free way: no hypothesis is a prefix of another hypothesis.

In particular, this eliminates your example: if both strings of length 1 ("0" and "1") are valid hypotheses, then no string of longer length can be a valid hypothesis (then we can take Omega=1 and assign a probability of 1/2 to each). Or we could have "0", "10", and "110" be valid hypotheses; then we can take Omega = 7/8 and assign probabilities of 4/7, 2/7 and 1/7 to these.

In general, the prefix-free condition ensures that the sum over all hypotheses converges, by Kraft's inequality, which is the real heavy lifter here; the constant is merely there to make the sum over all hypotheses converge to 1 instead.

You might imagine that hypotheses are required to be grammatically correct English sentences (I guess encoded in ASCII or something). In that case, no hypothesis is a prefix of another, because each hypothesis ends with a period.
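To make the arithmetic concrete, here's a sketch of that normalization in Python using exact fractions (nothing here is specific to any particular hypothesis language):

    # Hypotheses "0", "10", "110": Kraft sum Omega = 7/8, and dividing
    # each weight 2^-length by Omega gives probabilities 4/7, 2/7, 1/7.
    from fractions import Fraction

    hypotheses = ["0", "10", "110"]
    omega = sum(Fraction(1, 2 ** len(h)) for h in hypotheses)
    probs = {h: Fraction(1, 2 ** len(h)) / omega for h in hypotheses}
    print(omega)  # 7/8
    print(probs)  # {'0': Fraction(4, 7), '10': Fraction(2, 7), '110': Fraction(1, 7)}
    assert sum(probs.values()) == 1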

Comment author: 3p1cd3m0n 15 January 2015 01:35:16AM 0 points [-]

Right; I forgot that it used a prefix-free encoding. Apologies if the answer to this is painfully obvious, but does having a prefix-free encoding entail that there is a finite number of possible hypotheses?

Comment author: Kindly 12 January 2015 02:02:31AM 0 points [-]

Sorry, I mean that we do this for each value of n. So, given a hypothesis H, it is assigned a probability of 2^-length(H)/Omega.

Comment author: 3p1cd3m0n 14 January 2015 01:50:10AM *  0 points [-]

I still don't see how that would make all the hypotheses sum to 1. Wouldn't that only make the probabilities of all the hypotheses of length n sum to 1, and thus make the sum over all hypotheses greater than 1? For example, consider all the hypotheses of length 1. Assuming Omega = 1 for simplicity, there are 2 hypotheses, each with a probability of 2^-1/1 = 0.5, making them sum to 1. There are 4 hypotheses of length 2, each with a probability of 2^-2/1 = 0.25, making them sum to 1. Thus, the probabilities of all hypotheses of length <= 2 sum to 2 > 1.

Is Omega doing something I don't understand? Would all hypotheses be required to be some set length?

Comment author: Kindly 11 January 2015 06:12:11AM 0 points [-]

If we assign a value of 2^-n to each hypothesis (that is, each halting program) of length n, then the total value summed over all hypotheses is Chaitin's Omega.

Thus, if we assign a value of 2^-n/Omega to each hypothesis, the total is 1, and this gives us a valid probability distribution.
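In symbols, writing |p| for the length of program p and summing over programs that halt:

    \Omega = \sum_{p \,\text{halts}} 2^{-|p|},
    \qquad
    P(H) = \frac{2^{-|H|}}{\Omega},
    \qquad
    \sum_{H} P(H) = 1.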

Comment author: 3p1cd3m0n 11 January 2015 08:53:37PM *  0 points [-]

That would assign a probability of zero to hypotheses that take more than n bits to specify, would it not? That sounds far from optimal.

Comment author: 3p1cd3m0n 10 January 2015 11:27:09PM 1 point [-]

Did you not previously state that one should learn as much about the problem as one can before coming to a conclusion, lest one fall prey to confirmation bias? Or should one learn about the problem fully before making a decision only when one doesn't suspect oneself of being biased?

Comment author: Username 10 January 2015 04:39:08AM 0 points [-]

of course not

Comment author: 3p1cd3m0n 10 January 2015 10:15:07PM 0 points [-]

"Of course" implies that the answer is obvious. Why is it obvious?

Comment author: pengvado 07 January 2015 12:58:28PM *  2 points [-]

Use a prefix-free encoding for the hypotheses. There aren't 2^n hypotheses of length n: some of the length-n bitstrings are incomplete, and you'd need to add more bits in order to get a hypothesis; others are actually a hypothesis of length < n plus some gibberish on the end.

Then the sum of the probabilities of all programs of all lengths combined is at most 1. After excluding the programs that don't halt, what remains is Chaitin's Omega, which serves as the normalization constant.
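As a concrete illustration (Python; the three-hypothesis toy code from upthread, with classify as a made-up helper), here is how the length-2 bitstrings sort into complete hypotheses, incomplete prefixes, and hypothesis-plus-gibberish:

    # Classify bitstrings relative to the prefix-free code {"0", "10", "110"}.
    from itertools import product

    code = {"0", "10", "110"}

    def classify(s):
        if s in code:
            return "hypothesis"
        if any(s.startswith(c) for c in code):
            return "hypothesis + gibberish"  # e.g. "01" = "0" plus a junk bit
        if any(c.startswith(s) for c in code):
            return "incomplete"              # e.g. "11" could extend to "110"
        return "dead end"                    # no extension yields a hypothesis

    for bits in product("01", repeat=2):
        print("".join(bits), "->", classify("".join(bits)))
    # 00 -> hypothesis + gibberish
    # 01 -> hypothesis + gibberish
    # 10 -> hypothesis
    # 11 -> incomplete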

Comment author: 3p1cd3m0n 10 January 2015 08:25:52PM 1 point [-]

Unfortunately, Chaitin's Omega is incomputable, but even if it weren't, I don't see how it would work as a normalizing constant. Chaitin's Omega is a real number, there is an infinite number of hypotheses, and (IIRC) there is no real number r such that r multiplied by infinity equals one, so I don't see how Chaitin's Omega could possibly work as a normalizing constant.
