Intuition and Mathematics

5 XiXiDu 31 December 2011 06:58PM

While reading the answer to the question 'What is it like to have an understanding of very advanced mathematics?' I became curious about the value of intuition in mathematics and why it might be useful.

It usually seems to be a bad idea to solve problems intuitively, or to use our intuition as evidence, when judging issues that our evolutionary ancestors never encountered and that natural selection therefore never optimized us to judge.

And so it seems especially strange to suggest that intuition might be a good tool for making mathematical conjectures. Yet people like Fields Medalist Terence Tao seem to believe that intuition should not be disregarded when doing mathematics,

...“fuzzier” or “intuitive” thinking (such as heuristic reasoning, judicious extrapolation from examples, or analogies with other contexts such as physics) gets deprecated as “non-rigorous”. All too often, one ends up discarding one’s initial intuition and is only able to process mathematics at a formal level, thus getting stalled at the second stage of one’s mathematical education.

The point of rigour is not to destroy all intuition; instead, it should be used to destroy bad intuition while clarifying and elevating good intuition. It is only with a combination of both rigorous formalism and good intuition that one can tackle complex mathematical problems;

The author mentioned at the beginning also makes the case that intuition is an important tool,

You are often confident that something is true long before you have an airtight proof for it (this happens especially often in geometry). The main reason is that you have a large catalogue of connections between concepts, and you can quickly intuit that if X were to be false, that would create tensions with other things you know to be true, so you are inclined to believe X is probably true to maintain the harmony of the conceptual space. It's not so much that you can imagine the situation perfectly, but you can quickly imagine many other things that are logically connected to it.

But what do those people mean when they talk about 'intuition', and what exactly is its advantage? The author hints at an answer,

You go up in abstraction, "higher and higher". The main object of study yesterday becomes just an example or a tiny part of what you are considering today. For example, in calculus classes you think about functions or curves. In functional analysis or algebraic geometry, you think of spaces whose points are functions or curves -- that is, you "zoom out" so that every function is just a point in a space, surrounded by many other "nearby" functions. Using this kind of zooming out technique, you can say very complex things in short sentences -- things that, if unpacked and said at the zoomed-in level, would take up pages. Abstracting and compressing in this way allows you to consider extremely complicated issues while using your limited memory and processing power.

At this point I was reminded of something Scott Aaronson wrote in his essay 'Why Philosophers Should Care About Computational Complexity',

...even if computers were better than humans at factoring large numbers or at solving randomly-generated Sudoku puzzles, humans might still be better at search problems with “higher-level structure” or “semantics,” such as proving Fermat’s Last Theorem or (ironically) designing faster computer algorithms. Indeed, even in limited domains such as puzzle-solving, while computers can examine solutions millions of times faster, humans (for now) are vastly better at noticing global patterns or symmetries in the puzzle that make a solution either trivial or impossible. As an amusing example, consider the Pigeonhole Principle, which says that n+1 pigeons can’t be placed into n holes, with at most one pigeon per hole. It’s not hard to construct a propositional Boolean formula Φ that encodes the Pigeonhole Principle for some fixed value of n (say, 1000). However, if you then feed Φ to current Boolean satisfiability algorithms, they’ll assiduously set to work trying out possibilities: “let’s see, if I put this pigeon here, and that one there ... darn, it still doesn’t work!” And they’ll continue trying out possibilities for an exponential number of steps, oblivious to the “global” reason why the goal can never be achieved. Indeed, beginning in the 1980s, the field of proof complexity—a close cousin of computational complexity—has been able to show that large classes of algorithms require exponential time to prove the Pigeonhole Principle and similar propositional tautologies.
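
To make the example concrete, here is a minimal sketch in Python (my own toy construction for illustration; nothing in it comes from Aaronson's essay). It builds the pigeonhole clauses for n holes and n + 1 pigeons, then searches assignments exhaustively, the way a naive solver explores the space, blind to the one-line counting argument:

    from itertools import combinations, product

    def pigeonhole_cnf(n):
        """CNF for 'n+1 pigeons fit into n holes, at most one per hole'.
        A literal is (pigeon, hole, polarity). The formula is unsatisfiable."""
        clauses = []
        # Every pigeon must occupy at least one hole.
        for p in range(n + 1):
            clauses.append([(p, h, True) for h in range(n)])
        # No hole may hold two pigeons.
        for h in range(n):
            for p, q in combinations(range(n + 1), 2):
                clauses.append([(p, h, False), (q, h, False)])
        return clauses

    def brute_force_sat(n, clauses):
        """Try every assignment, oblivious to the global counting argument."""
        variables = [(p, h) for p in range(n + 1) for h in range(n)]
        for bits in product([False, True], repeat=len(variables)):
            value = dict(zip(variables, bits))
            if all(any(value[(p, h)] == want for p, h, want in clause)
                   for clause in clauses):
                return value
        return None  # every assignment violates some clause

    print(brute_force_sat(3, pigeonhole_cnf(3)))  # prints None: unsatisfiable

Even at n = 3 the search space already holds 2^12 assignments, and it grows exponentially from there; as the proof complexity results Aaronson cites show, resolution-based solvers cannot shortcut this in general, while a human settles the question in a sentence.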

Back again to the answer to 'What is it like to have an understanding of very advanced mathematics?'. The author writes,

...you are good at modularizing a conceptual space and taking certain calculations or arguments you don't understand as "black boxes" and considering their implications anyway. You can sometimes make statements you know are true and have good intuition for, without understanding all the details. You can often detect where the delicate or interesting part of something is based on only a very high-level explanation.

Humans are good at 'zooming out' to detect global patterns. Humans can jump conceptual gaps by treating them as "black boxes". 

Intuition is a conceptual bird's-eye view that allows humans to draw inferences from high-level abstractions without having to systematically trace out each step. Intuition is a wormhole. Intuition allows us to get from here to there given limited computational resources.

If true, this also explains many of our shortcomings and biases. Intuition's greatest feature is also our biggest flaw.

The introduction of suitable abstractions is our only mental aid to organize and master complexity. — Edsger W. Dijkstra

Our computational limitations make it necessary to take shortcuts and view the world as a simplified model. That heuristic is naturally prone to error and introduces biases. We draw connections without establishing them systematically. We recognize patterns in random noise.

Many of our biases can be seen as a side effect of making judgments under computational restrictions: a trade-off between optimization power and resource use.
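
As a toy illustration of that trade-off (entirely my own construction, with made-up numbers): an agent that evaluates only a small random sample of its options saves enormous resources at a modest cost in optimization power, and it picks up a systematic bias in the bargain, since it will reliably underestimate the best available option.

    import random

    random.seed(0)
    options = [random.random() for _ in range(10000)]  # utility of each option

    # Full optimization power: evaluate every option.
    exhaustive = max(options)                          # 10000 evaluations

    # Bounded heuristic: evaluate a small random sample.
    heuristic = max(random.sample(options, 20))        # 20 evaluations

    print("exhaustive search :", round(exhaustive, 4))
    print("sampling heuristic:", round(heuristic, 4))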

Is it possible to correct for the shortcomings of intuition other than by refining rationality and becoming aware of our biases? That depends on how optimization power scales with resources, and on whether there are more efficient algorithms that work under limited resources.

Should we discount extraordinary implications?

9 XiXiDu 29 December 2011 02:51PM

(Spawned by an exchange between Louie Helm and Holden Karnofsky.)

tl;dr:

The field of formal rationality is relatively new and I believe that we would be well-advised to discount some of its logical implications that advocate extraordinary actions.

Our current methods might turn out to be biased in new and unexpected ways. Pascal's mugging, the Lifespan Dilemma, blackmailing, and the wrath of Löb's theorem are just a few examples of how an agent built according to our current understanding of rationality could fail.

Bayes’ Theorem, the expected utility formula, and Solomonoff induction are all reasonable heuristics. Yet those theories are not enough to build an agent that will be reliable in helping us achieve our values, even if those values were thoroughly defined.
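
For concreteness, here is the expected utility formula in its standard form, together with a schematic of Pascal's mugging (the probability p is a placeholder of mine, not a figure anyone has defended):

    \mathrm{EU}(a) \;=\; \sum_{o} P(o \mid a)\, U(o)

    \mathrm{EU}(\text{pay}) \;=\; p \cdot U(3\uparrow\uparrow\uparrow\uparrow 3 \text{ lives saved}) \;-\; U(\$5)

However small p is, as long as it is nonzero and U is allowed to grow without bound, the formula recommends paying the mugger. The failure is not an arithmetic mistake; the heuristic was simply never validated in this regime.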

If we wouldn't trust a superhuman agent equipped with our current grasp of rationality to be reliable in extrapolating our volition, how can we trust ourselves to arrive at correct answers given what we know?

We should of course continue to use our best methods to decide what to do. But I believe that we should also draw a line somewhere when it comes to extraordinary implications.

Intuition, Rationality and Extraordinary Implications

It doesn't feel to me like 3^^^^3 lives are really at stake, even at very tiny probability.  I'd sooner question my grasp of "rationality" than give five dollars to a Pascal's Mugger because I thought it was "rational". — Eliezer Yudkowsky

Holden Karnofsky is suggesting that in some cases we should follow the simple rule that "extraordinary claims require extraordinary evidence".

I think that we should sometimes demand a particular proof P, and if proof P is not available, then we should discount seemingly absurd or undesirable consequences even when our theories disagree.

I am not referring to the weirdness of the conclusions but to the foreseeable scope of the consequences of being wrong about them. We should be careful in using the implied scope of certain conclusions to outweigh their low probability. I feel we should put more weight on the consequences of our conclusions being wrong than on their being right.

As an example, take the idea of quantum suicide and assume it would make sense under certain circumstances. I wouldn’t commit quantum suicide even given a high confidence in the many-worlds interpretation of quantum mechanics being true. Logical implications just don’t seem enough in some cases.

To be clear, extrapolations work and are often the best we can do. But since there are problems such as those above, whose implications we perceive to be undesirable and which lead to absurd actions and consequences, I think it is reasonable to ask for some upper and lower bounds on the use and scope of certain heuristics.

We are not going to stop pursuing whatever terminal goal we have chosen just because someone promises us even more utility if we do what that person wants. We are not going to stop loving our girlfriend just because other people who do not approve of the relationship would together experience more happiness from our divorce than the two of us experience by being in love. In that sense we have already informally established some upper and lower bounds.

I have read about people who became very disturbed and depressed by taking such ideas too seriously. That way madness lies, and I am not willing to choose that path yet.

Maybe I am simply biased and have been unable to overcome it yet. But my best guess right now is that we simply have to draw a lot of arbitrary lines and arbitrarily refuse some steps.

Taking into account considerations of vast utility or low probability quickly leads to chaos-theoretic considerations like the butterfly effect. As a computationally bounded and psychologically unstable agent, I am unable to cope with that. Consequently I see no other way than to neglect the moral demands that such extreme uncertainty would impose.

Until the problems are resolved, or rationality is sufficiently established, I will continue to put vastly more weight on empirical evidence and my intuition than on logical implications, if only because I still lack the necessary educational background to trust my comprehension and judgement of the various underlying concepts and methods used to arrive at those implications.

Expected Utility Maximization and Complex Values

One of the problems with my current grasp of rationality that I perceive to be unacknowledged is the consequences of expected utility maximization with respect to human nature and our complex values.

I am still genuinely confused about what a person should do. I don't even know how much sense that concept makes. Does expected utility maximization have anything to do with being human?

Those people who take existential risks seriously and who are currently involved in their mitigation seem to disregard many other activities that humans usually deem valuable, because the expected utility of saving the world outweighs the pursuit of other goals. I do not disagree with that assessment but find it troubling.

The problem is: will there ever be anything but a single goal, one that can either be realized and optimized more effectively to yield the most utility, or whose associated expected utility simply outweighs all other values?

Assume that humanity managed to create a friendly AI (FAI). Given the enormous amount of resources that each human is poised to consume until the dark era of the universe, wouldn't the same arguments that now suggest that we should contribute money to existential risk charities then suggest that we should donate our resources to the friendly AI? Our resources could enable it to find a way to either travel back in time, leave the universe or hack the matrix. Anything that could avert the end of the universe and allow the FAI to support many more agents has effectively infinite expected utility.
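
Schematically (again my own framing, not a calculation anyone has defended): if p > 0 is the probability that the donated resources let the FAI avert the end of the universe, and N is the number of agents it could then support at utility u each, then

    \mathrm{EU}(\text{donate}) \;=\; p \cdot N \cdot u \;\to\; \infty \quad \text{as } N \to \infty,

so the donation dominates every alternative with a finite payoff; it is the same mechanism as Pascal's mugging with the bound on N removed.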

The sensible decision would be to concentrate on those scenarios with the highest expected utility now, e.g. solving friendly AI, and worry about those problems later. But not only will the same argument be available at every later stage; the question is also relevant to the nature of friendly AI and our ultimate goals. Is expected utility maximization even compatible with our nature? Does expected utility maximization lead to world states in which wireheading is favored, either directly or indirectly, by focusing solely on a single high-utility goal that outweighs all other goals?

Conclusion

  1. Being able to prove something mathematically doesn't prove its relation to reality.
  2. Relativity is less wrong than Newtonian mechanics, but it still breaks down when describing singularities, including the very beginning of the universe.

It seems to me that our notion of rationality is not the last word on the topic and that we shouldn't act as if it was.

What if we make better decisions when we trust our gut instincts? [Link]

11 XiXiDu 25 September 2011 12:22PM

[...]

How does one navigate a world of seemingly infinite alternatives? For thousands of years, the answer has seemed obvious: when faced with a difficult dilemma, we should carefully assess our options and spend a few moments consciously deliberating the information. Then, we should choose the toothpaste that best fits our preferences. This is how we maximize utility and get the most bang for the buck. We are rational agents – we should make decisions in a rational manner.

But what if rationality backfires? What if we make better decisions when we trust our gut instincts? While there is an extensive literature on the potential wisdom of human emotion, it’s only in the last few years that researchers have demonstrated that the emotional system (aka Type 1 thinking) might excel at complex decisions, or those involving lots of variables. If true, this would suggest that the unconscious is better suited for difficult cognitive tasks than the conscious brain, that the very thought process we’ve long disregarded as irrational and impulsive might actually be “smarter” than reasoned deliberation. This is largely because the unconscious is able to handle a surfeit of information, digesting the facts without getting overwhelmed. (Human reason, in contrast, has a very strict bottleneck and can only process about four bits of data at any given moment.)

[...]

The most widely cited demonstration of this theory is a 2006 Science paper led by Ap Dijksterhuis. (I wrote about the research in How We Decide.)

[...]

Dijksterhuis found that people given time to think in a rational manner – they could carefully contemplate each alternative – now chose the ideal car less than 25 percent of the time. In other words, they performed worse than random chance. However, subjects who were distracted for a few minutes found the best car nearly 60 percent of the time.

[...]

While the Dijksterhuis work generated plenty of buzz, it failed several tests of replication. This led many researchers to suggest that the supposed benefits of unconscious thought were an experimental accident, or perhaps a side-effect of incubation. [...] However, a new paper, published this month in Emotion by scientists then at Cornell University, provided the best test yet of the possible advantages of using our emotions to make complex decisions.

[...]

Once again, the “detail-focused” group excelled at making simple decisions. Thinking in a rational manner made them nearly 20 percent more effective at identifying the best car alternative when there were only sixteen total pieces of information. However, those focused on feelings proved far better at finding the best car in the complex condition. While deliberate thinkers barely beat random chance, those listening to their feelings identified the ideal option nearly 70 percent of the time. Similar results were found when the volunteers were quizzed about subjective choice quality, as those relying on their emotions tended to be much more satisfied with their car selection. [...] the advantages of emotional decision-making could be undone by a subsequent bout of deliberation, which suggests that we shouldn’t doubt a particularly strong instinct, at least when considering lots of information. They also demonstrated that this phenomenon isn’t unique to cars: our feelings are also better at picking the best apartments and vacation spots.

[...]

Thanks to this new research, however, it’s becoming increasingly clear that our emotions have a logic all their own, that our instincts are often rooted in the processing powers of the unconscious brain. The massive computational capacity of the Type I system – its ability to process thousands of bits of data in parallel – ensures that we can analyze all the relevant information when assessing alternatives. As a result, we’re able to make sense of the plethora of options in the toothpaste aisle, assigning each alternative an affective tag: the best option is quickly associated with the most positive emotion. We know more than we know – that’s what our feelings are trying to tell us.

Link: wired.com/wiredscience/2011/09/how-should-we-make-hard-decisions/