prase comments on Eleven dogmas of analytic philosophy - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (13)
Depends on whose intuitions, and the truth of which class of statements. Truth value of a randomly constructed statement? Almost certainly no correlation. Truth value of a "relevant" statement, relevant in the context in which our intuitions were honed? Probably correlated.
EDIT: Requesting explanation for the downvote.
What do you mean by "randomly constructed statement"? (I request an example.)
Please don't, especially if it's a single vote. Someone (not me) didn't like your comment, probably for no deep and important reasons. Karma discussions are inefficient use of resources.
A Boolean formula (e.g. 3-SAT) with a random number of clauses, say around one googol (which would be a very small example if the number of clauses were truly random).
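For concreteness, here is a minimal sketch of such a random construction. The uniform clause model, the name `random_3sat`, and its parameters are all my choices, and the sizes are of course nowhere near a googol:

```python
import random

def random_3sat(num_vars, num_clauses, seed=0):
    """Generate a random 3-SAT instance as a list of clauses.
    Each clause is a tuple of three literals: a positive int i means
    variable i, a negative int means its negation. Variables and signs
    are chosen uniformly at random."""
    rng = random.Random(seed)
    clauses = []
    for _ in range(num_clauses):
        vars_ = rng.sample(range(1, num_vars + 1), 3)  # three distinct variables
        clauses.append(tuple(v if rng.random() < 0.5 else -v for v in vars_))
    return clauses

instance = random_3sat(num_vars=50, num_clauses=210)
print(len(instance))  # 210
```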
Then I would agree that intuitions don't provide evidence for such statements, although it is a little confusing to say that there is no correlation between intuitions and truth for this class of statements, since people don't have any intuitions about members of this class at all.
To nitpick further: one googol is small compared to what, exactly?
(I'd find it way more natural for "randomly constructed statement" to be selected from statements people actually construct and have intuitive opinions about, given the context of the discussion.)
Maybe I can get my point across using this analogy:
"You would expect a human to be able to remember a random natural number."
"No - because only an infinitesimal fraction of random numbers have few enough digits to fit into a typical human's memory."
"Ah, I thought we were only talking about typical natural numbers in human use, from which we randomly choose one."
The discussion we're having has some similarities to talking about a "random mind": discussing our little corner of mindspace versus choosing from the whole of it.
Similarly, "truth" is a property of a vastly, vastly bigger class of statements than just those our intuitions were conditioned on. Of course there are heuristics we can use even on, e.g., enormous k-SAT statements, but would we still call that human intuition? I'm not talking about uncomputability here, merely about the non-applicability of human intuition to the vast majority of potentially true statements.
If indeed you only choose from statements that people "have intuitive opinions about", it would of course follow that people have intuitive opinions about said statements.
A googol clauses was referred to as small under the assumption that the number of clauses is, e.g., a randomly chosen natural number.
The original post was about analytic philosophy and, in particular, about how much intuitions should be trusted. This is a pretty interesting question when we are investigating hypotheses whose truth or falsity is subject to human intuitions. Therefore I find it natural to interpret the statement you were initially responding to as referring to statements inside the domain of human intuitions. There is certainly no disagreement between us about the fact that human intuition isn't applicable to all statements.
I understood you to be saying that a googol is small compared to a (typical?) random natural number, but I am not sure what exactly that means. What is a typical random natural number? Do you have in mind some specific distribution whose mean is greater than a googol? What would be a number that wasn't small in comparison to random natural numbers?
(I have a feeling that my responses may appear somewhat obnoxious. If it is so, it's not intentional.)
Actually, random 3-SAT exhibits an interesting behavior: there's a 'phase transition' between instances that are likely satisfiable and those that are unlikely to be satisfiable:
So for a random 3-SAT with a googol clauses we could make a very accurate prediction about whether it is satisfiable.
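A sketch of that prediction, assuming the standard random model where alpha is the clause-to-variable ratio and the empirically observed 3-SAT threshold sits near 4.27 (the function name and the exact constant are my choices):

```python
SAT_THRESHOLD_3SAT = 4.27  # empirical clause-to-variable threshold for random 3-SAT

def predict_satisfiable(num_clauses, num_vars):
    """Predict satisfiability from alpha = clauses / variables alone.
    Below the threshold a large random instance is almost surely
    satisfiable; above it, almost surely unsatisfiable."""
    alpha = num_clauses / num_vars
    return alpha < SAT_THRESHOLD_3SAT

# A googol clauses over, say, 10**99 variables gives alpha = 10,
# well above the threshold: predict unsatisfiable.
print(predict_satisfiable(10**100, 10**99))  # False
```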
(I was reading about this because even with tinier 3-SATs there are now polynomial-time algorithms with 70% classification accuracy, and I thought this might be useful in a successor to Bitcoin.)
Interesting, although I'm not sure what you say follows:
As the number of clauses increases, so should the number of variables (also randomly chosen). That is, the alpha that your result mentions can well fluctuate between over- and under-constrained values even with a googol clauses. Alpha, being the clauses-to-variables ratio, does not increase solely because the number of clauses increases, since the number of variables is random as well.
Even for overconstrained values of alpha, the problem is still NP-complete, i.e. exponential-time in the worst case for every known algorithm. Being able to make an accurate prediction for overconstrained values is certainly possible, but would we call that heuristic an "intuition"?
Why should it? You didn't say anything about that.
I don't think there's a difference between heuristics and intuitions, or rather, I think many intuitions may just be how heuristics feel on the inside.
For any particular 3-SAT instance, you can calculate α. If the random algorithm that generated it doesn't favor any particular clause types, then the distribution of outputs with c clauses and v variables should be uniform among all such 3-SAT instances, and therefore we can use α to figure out whether the formula is (most likely) satisfiable.
Furthermore, unless you deliberately try to generate random instances with a particular value of α, I imagine under most models the value of α will be (with high probability) well on one side of the threshold, and so the probability that your formula is satisfiable will be either very nearly 1 or very nearly 0, depending on the random model.
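To illustrate the under- and over-constrained regimes concretely, here is a sketch that brute-forces tiny instances, so the satisfiability check is exact. I'm assuming a uniform model with three distinct variables per clause; all the names here are mine:

```python
import itertools
import random

def random_3sat(num_vars, num_clauses, rng):
    """Uniform random 3-SAT: each clause picks three distinct
    variables and negates each independently with probability 1/2."""
    return [tuple((v if rng.random() < 0.5 else -v)
                  for v in rng.sample(range(1, num_vars + 1), 3))
            for _ in range(num_clauses)]

def satisfiable(num_vars, clauses):
    """Exact brute-force check; only feasible for tiny num_vars."""
    for bits in itertools.product([False, True], repeat=num_vars):
        assignment = {i + 1: b for i, b in enumerate(bits)}
        if all(any(assignment[abs(lit)] == (lit > 0) for lit in clause)
               for clause in clauses):
            return True
    return False

rng = random.Random(0)
n = 12
for alpha in (2.0, 8.0):  # under- vs over-constrained regime
    clauses = random_3sat(n, int(alpha * n), rng)
    print(alpha, satisfiable(n, clauses))
```

With alpha well below the threshold the instance is almost always satisfiable, and well above it almost never, though any single sampled instance can of course go either way.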
Yes, you can calculate alpha, then depending on its value predict with high probability whether the formula is satisfiable. Is that still intuiting an answer?
The question is not whether, faced with a typically very, very large (if randomly chosen) number of clauses, you can implement smarter algorithms; the question is whether human-level intuitions will still help you with randomly constructed truth statements.
Not whether they can be resolved, or whether there are neat tricks for certain subclasses.
Of course if the initial use of "intuition" that sparked this debate is used synonymously with "heuristic", the point is moot. I was referring to the subset that is more typically referred to as human intuitions.
(I should've said k-SAT anyway.)