It seems like in order to go from P(H) to P(H|E) you have to become certain that E is true. Am I wrong about that?
Say you have the following joint distribution:
P(H&E) = a
P(~H&E) = b
P(H&~E) = c
P(~H&~E) = d
Where a, b, c, and d are each greater than 0.
So P(H|E) = a/(a+b). It seems like what we're doing is going from assigning ~E some positive probability to assigning it a probability of 0. Is there another way to think about it? Is there something special about evidential statements that justifies changing their probabilities without having updated on something else?
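The update in question can be sketched numerically. A minimal example, with the values of a, b, c, and d made up purely for illustration:

```python
# Joint probabilities a, b, c, d as defined above (hypothetical values).
a = 0.3  # P(H & E)
b = 0.2  # P(~H & E)
c = 0.1  # P(H & ~E)
d = 0.4  # P(~H & ~E)

prior_H = a + c            # P(H) by marginalizing over E
posterior_H = a / (a + b)  # P(H|E): drop the ~E cells and renormalize

print(prior_H)      # 0.4
print(posterior_H)  # 0.6
```

The last step is exactly the move being asked about: the ~E cells (c and d) get weight 0, and the remaining mass is rescaled to sum to 1.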
Suppose that instead of using 2^-K(H) we just use 2^-length(H). Does this do something obviously stupid?
Here's what I'm proposing:
Take a programming language with a two-character alphabet. Assign each program a prior of 2^-length(program). If the program outputs some string, then P(string | program) = 1; otherwise it equals 0. I figure there must be some reason people don't do this already, or else there's a bunch of people doing it. I'd be really happy to find out about either.
Clearly, it isn't a probability distribution, but we can still use it, no?
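A quick sketch of why it isn't a probability distribution: over a two-character alphabet there are 2^n programs of each length n, each assigned weight 2^-n, so every length contributes total mass 1 and the sum diverges rather than summing to 1.

```python
def total_mass_up_to(max_len):
    # Sum 2^-length over every binary string of length 1..max_len.
    # There are 2**n strings of length n, each weighted 2**-n.
    return sum(2 ** n * 2 ** -n for n in range(1, max_len + 1))

print(total_mass_up_to(10))   # 10.0
print(total_mass_up_to(100))  # 100.0 -- grows without bound
```

(For what it's worth, this is why Solomonoff-style priors are usually defined over prefix-free programs, where the Kraft inequality guarantees the weights sum to at most 1.)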