Wei_Dai comments on Unsolved Problems in Philosophy Part 1: The Liar's Paradox - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
What the paradox tells me is that our understanding of the nature of language, logic, and mathematics is seriously incomplete, which might lead to disaster if we do anything whose success depends on such understanding.
The paradox is related to the fact that we don't have a formal language that can talk about all of the content of math/logic, for example, the truth value (or meaningfulness, if some sentences are allowed to be meaningless) of sentences in the language itself, which is obviously part of math or logic.
Since our current best ideas about how to let an AI do math go through formal languages, this implies that we are still far from having an AI achieve the same kind of understanding of math as we have. We humans use natural language, which does have these paradoxes that we don't know how to resolve, but at least we are not (or at least not obviously) constrained in which parts of math we can even talk or think about.
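The formal difficulty can be shown in miniature: the Liar sentence demands a truth value equal to its own negation, and no such value exists. A toy sketch (the `liar` function is purely illustrative, not part of any real theorem prover):

```python
# Toy illustration: "this sentence is false" as a self-referential equation.
# A truth assignment v for the Liar sentence must satisfy v == (not v),
# which no Boolean value does -- the equation has no fixed point.

def liar(v: bool) -> bool:
    """The Liar sentence asserts its own falsehood."""
    return not v

# Check every candidate truth value for consistency.
consistent = [v for v in (True, False) if liar(v) == v]
print(consistent)  # -> [] : no consistent assignment exists
```

This is the same obstacle Tarski's undefinability theorem formalizes: a sufficiently expressive language cannot contain its own truth predicate without contradiction.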
I deem "this sentence is false" as meaningless and unworthy of further scrutiny from me.
Challenge: On the basis of the above, paperclip-pump me. (Or assume I'm a human and money-pump me.)
What is your algorithm for determining which sentences are meaningless? Since we don't have such an algorithm (without serious flaws), I'm guessing your algorithm is probably flawed also, and I could perhaps exploit such flaws if I knew what your algorithm was. See also this quote from the IEP:
The "beliefs should pay rent" heuristic mentioned by User:Tiiba already answers this. My method (not strictly an algorithm[1], but sufficient to avoid paperclip-pumps) is to identify what constraint such an expression places on my expectations. This method [2] has been thoroughly discussed on this site and is already invoked here as the de facto standard for what is and is not "meaningless", though such a characterisation might go by different names ("fake explanation", "maximum entropy probability distribution", "not a belief", "just belief as attire", "empty symbol", etc.).
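The heuristic can be caricatured in code: treat a belief as contentful only if conditioning on it moves your predictive distribution away from maximum entropy. (The `pays_rent` helper and the example distributions below are my own illustration, not anyone's actual epistemology.)

```python
import math

def entropy(dist):
    """Shannon entropy of a discrete distribution, in bits."""
    return -sum(p * math.log2(p) for p in dist if p > 0)

def pays_rent(prior, posterior):
    """A belief 'pays rent' if conditioning on it lowers entropy,
    i.e. it constrains anticipated experience at all."""
    return entropy(posterior) < entropy(prior)

uniform = [0.25, 0.25, 0.25, 0.25]   # maximum-entropy prior over outcomes
informative = [0.7, 0.1, 0.1, 0.1]   # a belief that constrains outcomes
vacuous = uniform                    # e.g. the Liar sentence: no constraint

print(pays_rent(uniform, informative))  # -> True
print(pays_rent(uniform, vacuous))      # -> False
```

On this caricature, the Liar sentence is classified with "fake explanations" and "belief as attire": whatever it is, it leaves every anticipated observation exactly as probable as before.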
Is your claim, then, that the "beliefs should pay rent" heuristic has serious enough flaws that it leaves an agent such as a human vulnerable to money-pumping? Typically, beliefs with such a failure mode immediately suggest an exploitable outcome, even in the absence of detailed knowledge of the belief holder's epistemology and decision theory, yet that is not the case here.
With that in mind, the excerpt you posted does not pose significant challenges. Observe:
This was not the justification that I or User:Tiiba gave.
The claim that a symbol string "is in English" suggests observable expectations about that symbol string -- for example, whether native speakers can read it, whether most of its words are found in an English dictionary, etc. This is a crucial difference from the Liar Sentence.
Again, lack of a mapping to a probability distribution that diverges from maximum entropy.
The non-Liar Sentence part of them is not.
The requirement that beliefs imply anticipations is systematic, and prevents such a continuation.
[1] and your insistence on an algorithm rather than a mere heuristic is too strict here
[2] which is also an integral part of the Clippy Language Interface Protocol (CLIP)
I can't argue with that!