You're missing the fact that mathematicians develop an intuition for mathematical concepts, and that this is different from the intuition they have when they begin studying. When teaching real analysis, I have to focus on all kinds of counterexamples to things that seem intuitively true, or else people will be stuck in blind alleys when they're actually trying to prove things. (For example, most of the mathematical community was shocked at the time when Weierstrass produced a function that is continuous everywhere but differentiable nowhere, and now exposure to such functions is an essential part of an analysis curriculum at the upper undergraduate level.)
So in particular, it is still a bad idea to use untrained intuition on very abstract problems. Instead, we have to spend years training our mathematical intuitions as we would train an extra sense.
You're missing the fact that mathematicians develop an intuition for mathematical concepts...
I am aware that humans are not born with an innate intuition about Hilbert spaces. What I was talking about is the kind of intuition that allows us humans to recognize structural similarity and transfer that understanding across levels of abstraction, reaching a level of generality that lets us shed light on big chunks of unexplored territory at once.
I (unsupportedly) suspect that something like “recognizing structural similarity” — or rather, a mechanism for finding the structures something has, after which structural similarity is just a graph algorithm — is the foundation for human abstract thought.
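A minimal sketch of that claim (my own toy illustration, with networkx assumed; nothing here is from the comment): once two domains have been encoded as graphs, detecting their shared structure reduces to a standard isomorphism check.

```python
# Toy illustration: "structural similarity as a graph algorithm".
# Two superficially different domains, encoded as successor graphs.
import networkx as nx

# Adding 1 modulo 4 on {0, 1, 2, 3}.
mod4 = nx.DiGraph([(k, (k + 1) % 4) for k in range(4)])

# Quarter-turn rotations of a compass needle.
turns = nx.DiGraph([("N", "E"), ("E", "S"), ("S", "W"), ("W", "N")])

# Different labels, identical structure.
print(nx.is_isomorphic(mod4, turns))  # True
```

On this view the genuinely hard cognitive step is producing the encodings; the similarity test itself is routine.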
It usually seems to be a bad idea to try to solve problems intuitively or use our intuition as evidence to judge issues that our evolutionary ancestors never encountered and therefore were never optimized to judge by natural selection.
In mathematics, intuition is generally not used as evidence to support a conclusion, but instead as a tool with which to search for a rigorous way to solve a problem.

First of all, this makes intuition a lot less dangerous. If a voter's intuition tells him that some particular economic policy will be beneficial, then he is likely to rely on his intuition being right, and can harm public policy if he is wrong. If a mathematician's intuition tells him that a certain way of attacking a problem is likely to be fruitful, he will fail to solve the problem if he is wrong. But if the mathematician intuitively feels that premise P is true, and he can use it to prove theorem T, he will not state T as fact. Instead, he will state that P implies T, and mention that he finds this especially interesting because he believes P to be true.

Secondly, this makes mathematical intuition trainable. Although our brains are not optimized for math, they are extremely adaptable. When a mathematician tries a fruitless path towards solving a problem as a result of bad intuition, he will notice that he has failed to solve it, update his intuitions accordingly, and try a different way. Similarly, he will notice when his intuition helps him solve a problem, and he'll figure out what his intuition did right.
Would this be a valid rephrasing of your statement? "When you have done a certain number of problems and understood complex, interconnected concepts, your intuition becomes molded so that it is useful to trust it, but to verify it as well."
Pretty close, but my intuition can still be useful even in instances where it can be less reliable than "trust but verify" would suggest, because in a sufficiently difficult problem, the first possible solution that my intuition hits on is more likely than not to be wrong, but it's still a lot better than chance. I trust that my intuitions are likely to help me find the right answer or a correct proof eventually if I work at it long enough. In these cases, I don't assume that a possible solution suggested by my intuition is probably right, and that I just have to verify it. Instead, I assume that it is worth exploring since it has a reasonable probability of being either right or close to right.
What is "intuition" but any set of heuristic approaches to generating conjectures, proofs, etc., and judging their correctness, which isn't a naive search algorithm through formulas/proofs in some formal logical language? At a low level, all mathematics, including even the judgment of whether a given proof is correct (or "rigorous"), is done by intuition (at least, when it is done by humans). I think in everyday usage we reserve "intuition" for relatively high level heuristics, guesses, hunches, and so on, which we can't easily break down in terms of simpler thought processes, and this is the sort of "intuition" that Terence Tao is discussing in those quotes. But we should recognize that even regarding the very basics of what it means to accept a proof is correct, we are using the same kinds of thought processes, scaled down.
Few mathematicians want to bother with actual formal logical proofs, whether producing them or reading them.
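To illustrate the tedium with a toy example of my own (Lean 4 syntax, not anything from the comment): even facts a mathematician would wave through without a second thought must be discharged case by case before a proof checker accepts them.

```lean
-- `n + 0 = n` holds by definition of addition on Nat ...
theorem add_zero' (n : Nat) : n + 0 = n := rfl

-- ... but the equally "obvious" `0 + n = n` already needs induction.
theorem zero_add' (n : Nat) : 0 + n = n := by
  induction n with
  | zero => rfl
  | succ k ih => exact congrArg Nat.succ ih
```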
(And there's an even subtler issue: logicians don't have any one really convincing formal foundation to offer, and Gödel's theorem makes it hard to know which ones are even consistent. If ZFC turned out to be inconsistent, would that mean that most of our math is wrong? Probably not, but since people often cite ZFC as the formal logical basis for their work, what grounds do we have for this prediction?)
This seems related to the math intuition model discussion a while ago on the decision theory mailing list.
Isn't it invitation only?
(I haven't bothered to check out the details, since I'm behind on my study of all the available material on TDT, which I started two weeks or so ago.)
Isn't it invitation only?
Ask and it shall be given unto thee?
It's not especially exclusive. If you wrote a request on the Google Groups form, I expect they'd just add you. It is occasionally active and includes things like the discussion leading up to cousin_it's recent post.
I need to review my decision theory somewhat. I knew more a year ago than I know now.
As an amusing example, consider the Pigeonhole Principle, which says that n+1 pigeons can’t be placed into n holes, with at most one pigeon per hole. It’s not hard to construct a propositional Boolean formula Φ that encodes the Pigeonhole Principle for some fixed value of n (say, 1000). However, if you then feed Φ to current Boolean satisfiability algorithms, they’ll assiduously set to work trying out possibilities: “let’s see, if I put this pigeon here, and that one there ... darn, it still doesn’t work!” And they’ll continue trying out possibilities for an exponential number of steps, oblivious to the “global” reason why the goal can never be achieved. Indeed, beginning in the 1980s, the field of proof complexity—a close cousin of computational complexity—has been able to show that large classes of algorithms require exponential time to prove the Pigeonhole Principle and similar propositional tautologies.
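For concreteness, here is a sketch of one standard way to write down such a formula Φ (my own encoding in Python, not taken from the essay): a variable for each (pigeon, hole) pair, one clause saying each pigeon sits somewhere, and clauses forbidding two pigeons in the same hole.

```python
# Sketch of a pigeonhole CNF as DIMACS-style clause lists; variable
# v(i, j) means "pigeon i sits in hole j" (my encoding, for illustration).
def pigeonhole_cnf(n):
    v = lambda i, j: i * n + j + 1  # 1-based DIMACS variable numbering
    clauses = []
    # Each of the n + 1 pigeons sits in at least one hole.
    for i in range(n + 1):
        clauses.append([v(i, j) for j in range(n)])
    # No hole holds two pigeons.
    for j in range(n):
        for i1 in range(n + 1):
            for i2 in range(i1 + 1, n + 1):
                clauses.append([-v(i1, j), -v(i2, j)])
    return clauses

print(len(pigeonhole_cnf(3)))  # 22 clauses for 4 pigeons in 3 holes
```

The proof-complexity results Aaronson cites concern exactly these instances: resolution-based solvers need exponentially many steps on them, however the clauses are fed in.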
Hmm. The simplex method would determine that this is infeasible in at most n steps, so I'm not sure how generalizable this result is.
(Beyond that, it seems to me that this should be solvable in one step, by assuming what you want to prove and checking for contradictions. You convert "at most one" to "one", figure out that the largest feasible number of pigeons is 1,000, which is less than 1,001, and you're done. This looks like it depends on the symmetry of the problem, though, and so for general problems you'd need to use simplex.)
I've been thinking about this again recently, and I think I can articulate better what I was thinking then.
The Pigeonhole Principle is an example of a Boolean satisfiability problem that smells like an optimization problem. If you convert the problem to "fit as many pigeons into holes as possible," you can determine the maximum number of pigeons in linear time with respect to the number of constraints, and then compare that maximum to n+1: you discover that n+1 is greater than n, so the placement is impossible.
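A sketch of that recasting (my code; scipy's default LP solver stands in for the simplex method mentioned above, and n is kept small for readability): relax each Boolean "pigeon i in hole j" variable to [0, 1] and maximize the number of placements.

```python
# LP relaxation of the pigeonhole problem: the optimum comes out to n,
# which is less than n + 1, so the satisfiability instance is hopeless.
import numpy as np
from scipy.optimize import linprog

n = 5                     # holes; we try to place n + 1 pigeons
pigeons, holes = n + 1, n
nvars = pigeons * holes   # x[i, j] flattened row-major: "pigeon i in hole j"

# Each pigeon occupies at most one hole: sum_j x[i, j] <= 1.
A_pigeon = np.zeros((pigeons, nvars))
for i in range(pigeons):
    A_pigeon[i, i * holes:(i + 1) * holes] = 1

# Each hole holds at most one pigeon: sum_i x[i, j] <= 1.
A_hole = np.zeros((holes, nvars))
for j in range(holes):
    A_hole[j, j::holes] = 1

# linprog minimizes, so negate the objective to maximize placements.
res = linprog(c=-np.ones(nvars),
              A_ub=np.vstack([A_pigeon, A_hole]),
              b_ub=np.ones(pigeons + holes),
              bounds=(0, 1))
print(-res.fun)  # 5.0: at most n pigeons fit, never n + 1
```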
There's no guarantee that you can recast a general Boolean satisfiability problem as an optimization problem, but I suspect that if it's "obvious" to human observers that a propositional Boolean formula will be true or untrue, it's because you can recast that problem as an optimization problem. The Boolean satisfiability algorithms that take an exponential number of steps do so because they're designed to be able to solve any problem, rather than to get the right answer quickly on a limited class of problems. That is, the simplex method is irrelevant to Boolean satisfiability in general but very relevant to the class of problems that humans can see the answer to and Boolean satisfiability algorithms founder on.
Sure, in that it can't solve all Boolean satisfiability problems. I understood the section as saying "look, these methods don't apply heuristics, and so can spend an inordinate amount of time proving things that are trivial if you use heuristics," and so I gave an example of an optimization method that won't use an inordinate amount of time to determine infeasibility, and that is stronger / more useful than human intuition in the domains where it can be applied.
While reading the answer to the question 'What is it like to have an understanding of very advanced mathematics?' I became curious about the value of intuition in mathematics and why it might be useful.
It usually seems to be a bad idea to try to solve problems intuitively or use our intuition as evidence to judge issues that our evolutionary ancestors never encountered and therefore were never optimized to judge by natural selection.
And so it seems especially strange to suggest that intuition might be a good tool to make mathematical conjectures. Yet people like Fields Medalist Terence Tao seem to believe that intuition should not be disregarded when doing mathematics,
The author mentioned at the beginning also makes the case that intuition is an important tool,
But what do those people mean when they talk about 'intuition', and what exactly is its advantage? The author hints at an answer,
At this point I was reminded of something Scott Aaronson wrote in his essay 'Why Philosophers Should Care About Computational Complexity',
Back again to the answer on 'What is it like to have an understanding of very advanced mathematics?'. The author writes,
Humans are good at 'zooming out' to detect global patterns. Humans can jump conceptual gaps by treating them as "black boxes".
Intuition is a conceptual bird's-eye view that allows humans to draw inferences from high-level abstractions without having to systematically trace out each step. Intuition is a wormhole. Intuition allows us to get from here to there given limited computational resources.
If true, this also explains many of our shortcomings and biases. Intuition's greatest feature is also our biggest flaw.
Our computational limitations make it necessary to take shortcuts and view the world as a simplified model. That heuristic is naturally prone to error and introduces biases. We draw connections without establishing them systematically. We recognize patterns in random noise.
Many of our biases can be seen as a side effect of making judgments under computational restrictions: a trade-off between optimization power and resource use.
Is it possible to correct for the shortcomings of intuition other than by refining rationality and becoming aware of our biases? That depends on how optimization power scales with resources, and on whether there are more efficient algorithms that work under limited resources.