Graham Priest discusses the Liar's Paradox for a NY Times blog. It seems that one way of solving the Liar's Paradox is admitting a dialetheia, a true contradiction. Less Wrong, can you do what modern philosophers have failed to do and solve or successfully dissolve the Liar's Paradox? This doesn't seem nearly as hard as solving free will.
This post is a practice problem for what may become a sequence on unsolved problems in philosophy.
I like the article's approach, but it's a bit arbitrary in that "true contradiction" and "false contradiction" would be equivalent labels; perhaps it's a bias towards the positive that gets them characterized as "true."
What the Liar's Paradox really demonstrates is that "true" and "false" are not general enough to apply to every sentence, so to deal with such cases satisfactorily we must generalize our logic somehow.
Then the question is - which generalization do we make? Going with the first thing that pops into our heads is probably bad. Well, let's start with some desiderata:
1) We want it to assign a definite classification to the Liar's sentence. Fairly straightforward - whether it's "option 3" or "1/2" or "0.321374..." we want our system to be able to handle the Liar's sentence without breaking.
2) It should reduce to classical logic in classical cases.
3) It should not be more complicated than necessary.
4) It should not be obviously vulnerable to a strengthened Liar's Paradox.
5, etc.) Help me out here :P
Desideratum (3) suggests something along the lines of this, but that might fall prey to (4). I think it's possible that we'll need to allow a continuous truth value (see the sketch below). But for now, sleep!
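To make "continuous truth value" a bit more concrete, here's a toy sketch (mine, not anything from the article): read the plain Liar, "this sentence is false," as the map v -> 1 - v on truth values in [0, 1], and call a value consistent if the sentence maps it to itself.

```python
# Toy continuous-truth-value reading of the plain Liar (my own sketch):
# "this sentence is false" assigns itself the value 1 - v when its current
# value is v; a consistent assignment is a fixed point of that map.

def liar(v):
    """Truth value the Liar sentence claims for itself, given current value v."""
    return 1.0 - v

# Solving v = 1 - v gives the unique fixed point v = 1/2.
v = 0.5
assert liar(v) == v  # the Liar comes out exactly "half true" in this scheme
```

That's the "1/2" option from desideratum (1); whether it survives desideratum (4) is another question.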
EDIT: After a little experience with this stuff, I don't like the article's approach anymore. It seems to fall to a strengthened version: "This sentence is not true and is not a 'true paradox.'"
Manfred's log, stardate 11/30
A little sleep, a little progress. The "fuzzy logic" approach that gives each statement a truth value between 0 and 1 can't handle the obvious "this sentence is not true" (reading "not true" as "truth value less than 1," any value below 1 makes the sentence true, forcing the value up to 1, while 1 makes it false), so it's out. The other one-parameter approach I can think of is more clever. The thought was that each self-referential statement defines a transformation of its own "truth vector" (T, F), so consistency means that the statement should evaluate to an eigenvector of that transformation. Unfortunately, these transformations don't always commute, so you can get inconsistent answers to "this sentence is not true and is not (1/sqrt(2), 1/sqrt(2))." Still working on that one.
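For concreteness, here's a minimal numpy sketch of the eigenvector idea for the plain case (the encoding and names are mine): "this sentence is not true" swaps the T and F components of the truth vector, and the only consistent assignment is the eigenvalue-1 eigenvector, which normalizes to (1/sqrt(2), 1/sqrt(2)).

```python
import numpy as np

# Truth status as a vector (T, F). "This sentence is not true" swaps the
# components: weight on "true" becomes weight on "false" and vice versa.
# (This encoding is my own toy version of the idea above.)
NOT_TRUE = np.array([[0.0, 1.0],
                     [1.0, 0.0]])

# A consistent assignment is one the sentence maps to itself, i.e. an
# eigenvector of the transformation with eigenvalue 1.
eigenvalues, eigenvectors = np.linalg.eig(NOT_TRUE)

for val, vec in zip(eigenvalues, eigenvectors.T):
    if np.isclose(val, 1.0):
        vec = vec / np.linalg.norm(vec)  # normalize to unit length
        if vec[0] < 0:
            vec = -vec                   # fix the overall sign
        print("consistent truth vector:", vec)
        # -> [0.7071 0.7071], i.e. (1/sqrt(2), 1/sqrt(2))
```

The strengthened sentence would stack a second transformation on top of this one, and if the two don't commute, the order you apply them in changes the answer, which is the inconsistency above.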