Why wouldn't the probability of two algorithms of different lengths appearing approach the same value as longer and longer strings of bits are searched?
Better suited to the open thread.
Augh, right. I'd forgotten that was there.
There is no search for a program inside a huge string of bits. The probability of a program is the probability that a random string of bits begins with it, which is 2^-len. Programs are self-delimiting, i.e. a program could be a string like 011011..., the ... being random bits that are never read, or that we decide not to count as part of the program (e.g. if the program simply sets up copying of the input tape onto the output tape). Likewise there's no search for data inside a huge output string: the output has to begin with the data. The 'search' only comes in if you want to compute this probability, since then you would have to try every input string.
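A minimal sketch of this idea, assuming a toy 6-bit "program" and names invented for illustration: the probability that a random bit string begins with a given self-delimiting program is exactly 2^-len, which we can check by sampling.

```python
import random

def begins_with(bits, program):
    """True if the bit string `bits` starts with `program`."""
    return bits[:len(program)] == program

program = "011011"          # hypothetical toy program of length 6
trials = 1_000_000
hits = 0
for _ in range(trials):
    # Only the first len(program) bits matter; any bits after the
    # program are never read, so we don't bother generating them.
    bits = "".join(random.choice("01") for _ in range(len(program)))
    hits += begins_with(bits, program)

print(hits / trials)        # close to 2**-6 = 0.015625
```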
I get it now. Thanks!
Because when you integrate an exponential, you get a constant.
Is this a joke?
Ah, I misunderstood the question. I thought he thought that the Solomonoff prior wouldn't be normalized: for example, a program of length 30 and a program of length 33 both appear in infinitely many strings, so as you search infinitely many strings you would find them equally common.
Still, I don't understand the "exponential" part. I thought you might have deliberately given an obscure brief answer to the obscure brief question in the OP.
So what happens is that as you search more and more strings, they get weighted exponentially (i.e. like 2^-length), so even though the program of length 30 and the program of length 33 each show up in infinitely many strings, when you sum up the total weight you get two different constants.
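A toy enumeration makes this concrete, with lengths scaled down from 30 and 33 to 3 and 5 so everything can be enumerated (the program strings here are made up for illustration): among all bit strings of length N, the fraction beginning with a program of length L is 2^-L no matter how large N gets, so the two measures never converge to the same value.

```python
from itertools import product

p_short, p_long = "011", "01101"   # hypothetical programs of lengths 3 and 5

for n in range(6, 12):
    strings = ["".join(bits) for bits in product("01", repeat=n)]
    frac_short = sum(s.startswith(p_short) for s in strings) / len(strings)
    frac_long  = sum(s.startswith(p_long)  for s in strings) / len(strings)
    print(n, frac_short, frac_long)   # always 0.125 and 0.03125

# Both programs appear in arbitrarily many longer strings, but their
# measures stay fixed at 2**-3 and 2**-5: the ratio is 2**2 = 4, and it
# never approaches 1 as n grows.
```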