Albert: "Every time I've listened to a tree fall, it made a sound, so I'll guess that other trees falling also make sounds. I don't believe the world changes around when I'm not looking."
Barry: "Wait a minute. If no one hears it, how can it be a sound?"
While writing the dialogue of Albert and Barry in their dispute over whether a falling tree in a deserted forest makes a sound, I sometimes found myself losing empathy with my characters. I would start to lose the gut feel of why anyone would ever argue like that, even though I'd seen it happen many times.
On these occasions, I would repeat to myself, "Either the falling tree makes a sound, or it does not!" to restore my borrowed sense of indignation.
(P or ~P) is not always a reliable heuristic, if you substitute arbitrary English sentences for P. "This sentence is false" cannot be consistently viewed as true or false. And then there's the old classic, "Have you stopped beating your wife?"
Now if you are a mathematician, and one who believes in classical (rather than intuitionistic) logic, there are ways to continue insisting that (P or ~P) is a theorem: for example, saying that "This sentence is false" is not a sentence.
But such resolutions are subtle, which suffices to demonstrate a need for subtlety. You cannot just bull ahead on every occasion with "Either it does or it doesn't!"
So does the falling tree make a sound, or not, or...?
Surely, 2 + 2 = X or it does not? Well, maybe, if it's really the same X, the same 2, and the same + and =. If X evaluates to 5 on some occasions and to 4 on others, your indignation may be misplaced.
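To make that concrete, here is a toy sketch of my own (the scopes and bindings are illustrative, nothing more): the expression "2 + 2 == X" is not one claim that flickers between true and false; it is a different claim in each context that binds X.

```cpp
#include <iostream>

int main() {
    std::cout << std::boolalpha;
    {
        int X = 4;                          // one occasion binds X to 4
        std::cout << (2 + 2 == X) << "\n";  // true
    }
    {
        int X = 5;                          // another occasion binds X to 5
        std::cout << (2 + 2 == X) << "\n";  // false
    }
    // "2 + 2 == X" is not one claim with a shifting truth-value;
    // it is a different claim in each scope that binds X.
}
```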
To even begin claiming that (P or ~P) ought to be a necessary truth, the symbol P must stand for exactly the same thing in both halves of the dilemma. "Either the fall makes a sound, or not!"—but if Albert::sound is not the same as Barry::sound, there is nothing paradoxical about the tree making an Albert::sound but not a Barry::sound.
(The :: idiom is something I picked up in my C++ days for avoiding namespace collisions. If you've got two different packages that define a class Sound, you can write Package1::Sound to specify which Sound you mean. The idiom is not widely known, I think; which is a pity, because I often wish I could use it in writing.)
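For readers who haven't met the idiom, a minimal sketch of what it looks like in C++ (the namespace names and the member strings are my own stand-ins for illustration):

```cpp
#include <iostream>
#include <string>

// Two "packages" that each define a class named Sound.
namespace Albert { struct Sound { std::string meaning = "acoustic vibrations in the air"; }; }
namespace Barry  { struct Sound { std::string meaning = "an auditory experience in a brain"; }; }

int main() {
    Albert::Sound a;   // the :: prefix says whose Sound is meant
    Barry::Sound  b;   // an unqualified "Sound" here would be ambiguous
    std::cout << a.meaning << "\n" << b.meaning << "\n";
}
```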
The variability may be subtle: Albert and Barry may carefully verify that it is the same tree, in the same forest, and the same occasion of falling, just to ensure that they really do have a substantive disagreement about exactly the same event. And then forget to check that they are matching this event against exactly the same concept.
Think about the grocery store that you visit most often: Is it on the left side of the street, or the right? But of course there is no "the left side" of the street, only your left side, as you travel along it from some particular direction. Many of the words we use are really functions of implicit variables supplied by context.
It's actually one heck of a pain, requiring one heck of a lot of work, to handle this kind of problem in an Artificial Intelligence program intended to parse language—the phenomenon going by the name of "speaker deixis".
"Martin told Bob the building was on his left." But "left" is a function-word that evaluates with a speaker-dependent variable invisibly grabbed from the surrounding context. Whose "left" is meant, Bob's or Martin's?
The variables in a variable question fallacy often aren't neatly labeled—it's not as simple as "Say, do you think Z + 2 equals 6?"
If a namespace collision introduces two different concepts that look like "the same concept" because they have the same name—or a map compression introduces two different events that look like the same event because they don't have separate mental files—or the same function evaluates in different contexts—then reality itself becomes protean, changeable. At least that's what the algorithm feels like from inside. Your mind's eye sees the map, not the territory directly.
If you have a question with a hidden variable, that evaluates to different expressions in different contexts, it feels like reality itself is unstable—what your mind's eye sees, shifts around depending on where it looks.
This often confuses undergraduates (and postmodernist professors) who discover a sentence with more than one interpretation; they think they have discovered an unstable portion of reality.
"Oh my gosh! 'The Sun goes around the Earth' is true for Hunga Huntergatherer, but for Amara Astronomer, 'The Sun goes around the Earth' is false! There is no fixed truth!" The deconstruction of this sophomoric nitwittery is left as an exercise to the reader.
And yet, even I initially found myself writing "If X is 5 on some occasions and 4 on others, the sentence '2 + 2 = X' may have no fixed truth-value." But there is no sentence with a variable truth-value here. "2 + 2 = X" has no truth-value. It is not a proposition, not yet, not as mathematicians define proposition-ness, any more than "2 + 2 =" is a proposition, or "Fred jumped over the" is a grammatical sentence.
But this fallacy tends to sneak in, even when you allegedly know better, because, well, that's how the algorithm feels from inside.
Late to the party here, but:
Any English speaker who hasn't been brainwashed with prescriptivist poppycock will tell you that the sentence has two possible readings: one where 'his' refers to Martin, and one where it refers to Bob. In natural language, linear order or closeness tends to matter a lot less than you might think. (This is why many linguistic analyses represent sentences as hierarchical tree structures, and argue that the behavior of some word is predicted by its position in the tree.)
We can even see effects on the resolution of pronoun reference that apply across sentence boundaries:
Martin punched Bob in the face. He fell.
Martin punched Bob in the face. He was very angry.
There's a preference to interpret 'he' as Bob in the first case and Martin in the second (it's not absolutely impossible to interpret them the other way around, but there's a preference), and it comes not from syntax (we've kept that pretty constant) but from what we might nebulously call "the structure of the discourse". It's extremely hard to predict what the preferred interpretation will be in any given case.
However, I think that the example could have been better constructed for a different reason. There are actually two phenomena at work in the sentence: the deictic quality of the word 'left', and the problem of pronoun reference. The point could have been made with reference to either one individually. So it's not a very consequential confound, but it's worth separating the two effects nonetheless.
"Martin told Bob the building was on the left" still suffers from the problem that we don't know whose left is meant (Martin's, Bob's, the speaker's, maybe the addressee's?). In this case, I can't see any way of determining a definite answer, even one based on some word-counting bullshit.
There would still be ambiguity if we got rid of 'left' but kept the pronouns in:
Martin told Bob that the building was to the north of him.
('North' differs from 'left' in that it is defined relative to the entire earth, but the sentence has different truth conditions depending on who 'him' refers to.)
Or, with less grammatical awkwardness:
Martin told Bob that the Xbox was at his house.
Since "Either Martin told Bob that the Xbox was at his house, or Martin did not tell Bob that the Xbox was at his house" can be false if 'his' refers to Martin in the first clause and Bob in the second, it still fits the example, but the ambiguity comes from a different source.
"Have you stopped beating your wife?", as has been explained elsewhere, is simply an example of a question that has a presupposition. Linguistics grad students and the people who love them will sometimes answer "Presupposition failure" to questions, but this has yet to catch on in the general population. ;)