Klaus comments on Questions of Reasoning under Logical Uncertainty - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (19)
Hi, I come from a regular science background (university grad student), so I may be biased, but I still have some questions.
Reasoning under uncertainty sounds a lot like Fuzzy Logic. Can you elaborate on how your approach differs?
What exactly do you mean by "we know how it works"? Is there a known or estimated probability for what the machine will do?
"impossible possibilities" sounds like a contradiction (an Oxymoron ?) which I think is unusual for science writing. Does this add something to the paper or wouldn't another term be better?
Why do you consider that the black box implements a "Rube Goldberg machine"? I looked up Rube Goldberg machines on Wikipedia, and to me they sound more like a joke than something that requires scientific assessment. Is there other literature on that?
Have you considered sending your work to a peer-reviewed conference or journal? This could give you some feedback and add more credibility to what you are doing.
Best regards, and I hope this doesn't sound too critical. I just want to help.
Hmm, you seem to have missed the distinction between environmental uncertainty and logical uncertainty.
Imagine a black box with a Turing machine inside. You don't know which Turing machine is inside; all you get to see are the inputs and the outputs. Even if you had unlimited deductive capability, you wouldn't know how the black box behaved: this is because of your environmental uncertainty, of not knowing which Turing machine the box implemented.
Now imagine a Python program. You might read the program and understand it, but not know what it outputs (for lack of deductive capability). As a simple concrete example, imagine that the program searches for a proof of the Riemann hypothesis using fewer than a googol symbols: in this case, the program may be simple, but the output is unknown (and very difficult to determine). Your uncertainty in this case is logical uncertainty: you know how the machine works, but not what it will do.
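To make this concrete, here is a toy analogue (it stands in for the Riemann search, which is far beyond a short example): a fully transparent program whose output is pinned down by logic alone, yet takes actual deduction (here, running the search) to determine. The Goldbach check and the bound are my illustrative choices, not anything from the original comment.

```python
# A short, fully transparent program with a logically determined but
# non-obvious output: does Goldbach's conjecture (every even n >= 4 is
# a sum of two primes) hold for all even n up to a bound?
# Reading this source gives you complete knowledge of HOW it works --
# logical uncertainty is uncertainty about WHAT it returns.

def goldbach_counterexample(limit):
    """Return the first even n in [4, limit] that is not a sum of two
    primes, or None if no counterexample exists below the bound."""
    # Sieve of Eratosthenes up to `limit`.
    sieve = [True] * (limit + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(limit ** 0.5) + 1):
        if sieve[p]:
            for q in range(p * p, limit + 1, p):
                sieve[q] = False
    primes = [p for p in range(2, limit + 1) if sieve[p]]
    prime_set = set(primes)
    for n in range(4, limit + 1, 2):
        if not any(n - p in prime_set for p in primes if p <= n // 2):
            return n
    return None

print(goldbach_counterexample(10_000))  # prints None: no counterexample
```

Before running it, a bounded reasoner is logically uncertain about the result even though nothing about the environment is hidden.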
Existing methods for reasoning under uncertainty (such as standard Bayesian probability theory) all focus on environmental uncertainty: they assume that you have unlimited deductive capability. A principled theory of reasoning under logical uncertainty does not yet exist.
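For contrast, standard Bayesian updating handles the environmental case cleanly. A minimal sketch, assuming (hypothetically) that the black box is one of just two known stochastic machines; the hypotheses, priors, and likelihoods are invented for illustration:

```python
# Environmental uncertainty via ordinary Bayesian updating: we don't
# know WHICH machine is in the box, but we are assumed able to deduce
# perfectly what each candidate machine would do. Hypothetical setup.

def update(prior, likelihoods, observation):
    """Return the posterior over hypotheses after one observation."""
    unnorm = {h: prior[h] * likelihoods[h](observation) for h in prior}
    total = sum(unnorm.values())
    return {h: p / total for h, p in unnorm.items()}

# Candidate machines: A outputs 1 with prob 0.9, B with prob 0.1.
prior = {"machine_A": 0.5, "machine_B": 0.5}
likelihoods = {
    "machine_A": lambda o: 0.9 if o == 1 else 0.1,
    "machine_B": lambda o: 0.1 if o == 1 else 0.9,
}
posterior = update(prior, likelihoods, 1)
print(posterior["machine_A"])  # 0.9: observing a 1 favors machine_A
```

Note what the formalism assumes: the likelihood of each observation under each hypothesis is simply given, i.e., unlimited deduction about every candidate machine. That is exactly the assumption logical uncertainty violates.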
Consider the Python program that searches for a proof of the Riemann hypothesis: you can imagine it outputting either "proof found" or "no proof found", but one of these possibilities is logically impossible. The trouble is, you don't know which possibility is logically impossible. Thus, when you reason about these two possibilities, you are considering at least one logically impossible possibility.
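In code, the situation looks like this (the 50/50 numbers are purely illustrative; there is no settled theory of which credences a bounded reasoner should assign):

```python
# A bounded reasoner's credences over the proof-search program's output.
# Exactly one outcome has logical probability 0, but without the
# deductive power to tell which, both are live epistemic possibilities.
outcomes = ["proof found", "no proof found"]
credence = {outcome: 0.5 for outcome in outcomes}
# A logically omniscient reasoner would instead assign 1.0 to the
# necessary outcome and 0.0 to the impossible one; logical uncertainty
# is exactly this gap.
assert sum(credence.values()) == 1.0
```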
I hope this helps answer your other questions, but briefly:
(The field was by no means started by us. If it's arguments from authority that you're looking for, you can trace this topic back to Boole in 1854 and Bernoulli in 1713, picked up in a more recent century by Los, Gaifman, Halpern, Hutter, and many, many more in modern times. See also the intro to this paper, which briefly surveys the history of the field, covers the same topic, and is peer-reviewed. See also the references in that paper; it contains a pretty extensive list.)
Thanks a lot, that clears up a lot of things. I guess I have to read up about the Riemann hypothesis, etc.
Maybe your introduction could benefit from discussing the previous work of Hutter, etc., instead of putting all the references in one sentence. Then a lay person would know that you are not making all the terms up.
A more precise way to avoid the oxymoron is "logically impossible epistemic possibility". I think "epistemic possibility" is used in philosophy in approximately the way you're using the term.