Tyrrell_McAllister comments on Take heed, for it is a trap - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
If you're thinking truly reductionistically about programming an AI, you'll realize that "probability" is nothing more than a numerical measure of the amount of information the AI has. And when the AI counts the number of bits of information it has, it has to start at some number, and that number is zero.
The point is about the internal computations of the AI, not the output on the screen. The output on the screen may very well be "ERROR: SYNTAX" rather than "50%" for large classes of human inputs. The human inputs are not what I'm talking about when I refer to unspecified hypotheses like A, B, and C. I'm talking about when, deep within its inner workings, the AI is computing a certain number associated with a string of binary digits. And if the string is empty, the associated number is 0.
The translation of
-- "What is P(A), for totally unspecified hypothesis A?"
-- "50%."
into AI-internal-speak is
-- "Okay, I'm about to feed you a binary string. What digits have I fed you so far?"
-- "Nothing yet."
That's because in almost all practical human uses, "know nothing" doesn't actually mean "zero information content".
Why does not knowing the hypothesis translate into assigning the hypothesis probability 0.5?
If this is the approach that you want to take, then surely the AI-internal-speak translation of "What is P(A), for totally unspecified hypothesis A?" would be "What proportion of binary strings encode true statements?"
ETA: On second thought, even that wouldn't make sense, because the truth of a binary string is a property involving the territory, while prior probability should be entirely determined by the map. Perhaps sense could be salvaged by passing to a meta-language. Then the AI could translate "What is P(A), for totally unspecified hypothesis A?" as "What is the expected value of the proportion of binary strings that encode true statements?".
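One way to write that meta-language quantity explicitly (notation mine): if $T(s) \in \{0,1\}$ indicates whether the binary string $s$ encodes a true statement, then the proposed translation of "What is P(A), for totally unspecified hypothesis A?" is

$$
\mathbb{E}\left[\frac{\left|\{\, s \in \{0,1\}^n : T(s) = 1 \,\}\right|}{2^n}\right],
$$

where the expectation is taken over the AI's uncertainty about the territory, so that the answer depends only on the map, as a prior should.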
But really, the question "What is P(A), for totally unspecified hypothesis A?" just isn't well-formed. For the AI to evaluate "P(A)", the AI needs already to have been fed a symbol A in the domain of P.
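A sketch of that ill-formedness (names and representation are hypothetical): if the agent's probability assignment is a map over hypothesis symbols it has actually been fed, then querying it with an unspecified symbol is a malformed request, matching the "ERROR: SYNTAX" behavior mentioned above, rather than an occasion to answer 50%.

```python
# Hypothetical sketch: P is defined only over hypothesis symbols the
# agent has been given. A query outside that domain is rejected as
# malformed instead of being answered with a default probability.

P = {"A": 0.3, "B": 0.6}  # hypotheses the agent has actually been fed

def evaluate(symbol: str) -> float:
    if symbol not in P:
        raise ValueError(f"ERROR: SYNTAX -- '{symbol}' is not in the domain of P")
    return P[symbol]

print(evaluate("A"))  # well-formed query
try:
    evaluate("Z")     # unspecified hypothesis: malformed, not 50%
except ValueError as err:
    print(err)
```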
Your AI-internal-speak version is a perfectly valid question to ask, but why do you consider it to be the translation of "What is P(A), for totally unspecified hypothesis A?"?