Happy Ada Lovelace Day

10 palladias 16 October 2012 09:42PM

Today is Ada Lovelace Day, when STEM enthusiasts highlight the work of modern and historical women scientists, engineers, and mathematicians. If you run a blog, you may want to participate by posting about a woman in a STEM field whom you admire. But I'd also love to have people share, in the comments, women scientists/mathematicians/authors they think we could all stand to read more about.

  • Women in STEM fields (living or dead, fictional or nonfictional) that you'd like us to know more about (preferably with a little précis and a link)
  • Books about women in STEM fields that are awesome
  • Books written by women about STEM subjects that are awesome
  • Studies about sexism (or ways to combat it) in STEM fields (and anywhere else)
  • Practical things you or organizations you're with have done to cut down on careless or intentional sexism. (How did you implement it? How did you measure the effects?)

The deeper solution to the mystery of moralism—Believing in morality and free will are hazardous to your mental health

-19 metaphysicist 14 October 2012 01:21PM

[Crossposted.]

The complex relationship between Systems 1 and 2 and construal level

The distinction between pre-attentive and focal-attentive mental processes has dominated cognitive psychology for some 35 years. In the past decade, another cognitive dichotomy, specific to social psychology, has arisen: processes of abstract construal (far cognition) versus concrete construal (near cognition). This essay theorizes about the relationship between these dichotomies to clarify further how believing in the existence of free will and in the objective existence of morality can thwart reason by causing you to choose what you don't want.

The state of the art on pre-attentive and focal-attentive processes is Daniel Kahneman's book Thinking, Fast and Slow, where he calls pre-attentive processes System 1 and focal-attentive processes System 2. The reification of processes into fictional systems also resembles Freud's System Cs (Conscious) and System Pcs (Preconscious). I'll adopt the language of System 1 and System 2, but readers can apply their understanding of the conscious versus preconscious, pre-attentive versus focal-attentive, or automatic versus controlled processes dichotomies. They name the same distinction: System 1 consists of processes occurring quickly and effortlessly in parallel, outside awareness; System 2 consists of processes occurring slowly and effortfully in sequential awareness, where "awareness" in this context refers to the contents of working memory rather than raw experience and accompanies System 2 activity.

To integrate Systems 1 and 2 with construal-level theory, note that System 2 (the conscious part of our minds) can perform any of three routines in making a decision about taking some action, such as whether to vote in an election. Voting is a good example not just for timeliness but for its linkage to our main concern with morality: it is a clear case of an action without tangible benefit. The potential voter might:

Case 1. Make a conscious decision to vote based on applying the principle that citizens owe a duty to vote in elections.
Case 2. Decide to be open to the candidates’ substantive positions and vote only if either candidate seems worthy of support.
Case 3. Experience a change of mind between 1 and 2.

The preceding were examples of the three routines System 2 can perform:

Case 1. Make the choice.
Case 2. “Program” System 1 to make the choice based on automatic criteria that don’t require sequential thinking.
Case 3. Interrupt System 1 in the face of anomalies.
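The three routines above can be sketched as a toy control-flow model (all names hypothetical; this is an illustration of the essay's distinction, not an implemented cognitive theory):

```python
from typing import Callable, Optional

# Toy model of the three System 2 routines described above.
# All names are hypothetical; this shows the control flow, nothing more.
class Agent:
    def __init__(self) -> None:
        self.system1_rule: Optional[Callable[[dict], bool]] = None

    def decide_consciously(self, situation: dict) -> bool:
        # Case 1: System 2 makes the choice directly from a principle.
        return situation.get("duty_applies", False)

    def program_system1(self, rule: Callable[[dict], bool]) -> None:
        # Case 2: System 2 installs automatic criteria; later choices
        # run in System 1 without sequential thinking.
        self.system1_rule = rule

    def act(self, situation: dict) -> bool:
        if self.system1_rule is None:
            return self.decide_consciously(situation)
        choice = self.system1_rule(situation)          # fast, automatic
        if situation.get("anomaly", False):            # Case 3: interrupt
            return self.decide_consciously(situation)  # slow override
        return choice

voter = Agent()
voter.program_system1(lambda s: s["candidate_worthy"])  # Case 2: delegate
print(voter.act({"candidate_worthy": True}))            # System 1 decides: True
```

In this sketch, "programming" System 1 is just installing a cheap rule, and Case 3 is System 2 reclaiming the decision when something anomalous appears.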

When System 2 initiates action, whether it retains the power to decide or passes it to System 1 is the difference between concrete and abstract construal. The second routine is key to understanding how Systems 1 and 2 work together to produce the effects construal-level theory predicts. Keep in mind that the unconscious, automatic System 1 includes not just hardwired patterns but also skilled habits. Meanwhile, System 2 is notoriously "lazy," unwilling to interrupt System 1 as in Case 3. Letting System 1 have its way produces the perennial biases that plague it, yet the highest levels of expertise also occur in System 1.

A delegate System 1 operates with the potentially complex holistic patterns typifying far cognition. This pattern is far because we offload distant matters to System 1 but exercise sequential control under System 2 as immediacy looms, although there are many exceptions. It is critical to distinguish far cognition from the lazy failure of System 2 to perform properly in Case 3; such failure isn't specific to mode. Far cognition, System 1 acting as delegate for System 2, is a narrower concept than automatic cognition, but far cognition is automatic cognition. Near cognition admits no easy cross-classification.

Belief in free will and moral realism undermines our "fast and frugal heuristics"

The two most important recent books on the cognitive psychology of judgment and decision are Thinking, Fast and Slow by Daniel Kahneman and Gut Feelings: The Intelligence of the Unconscious by Gerd Gigerenzer, and both insist on the contrast between their positions, although the conflicts aren't obvious. Kahneman explains System 1 biases as due to mechanisms employed outside the range of their evolutionary usefulness; Gigerenzer describes "fast and frugal heuristics" that sometimes misfire to produce biases. Where these half-empty versus half-full positions on heuristics and biases really differ is in their overall appraisal of near and far processes: Gigerenzer is a far thinker and Kahneman a near thinker, and each is naturally biased toward his preferred mode. Far thought shows more confidence in fast-and-frugal heuristics, since it offloads to System 1, whose province is to employ them.

The fast-and-frugal-heuristics way of thinking is particularly useful in understanding the effect of moral realism and free will: they cause System 2 to supplant System 1 in decision-making. When we apply principles of integrity to regulate our conduct, we sometimes do better in far mode, where System 2 offloads the task of determining compliance to System 1. By contrast, if you hold a principle of integrity that includes an absolute obligation to vote, you act as in Case 1: on a conscious decision. But principles of integrity do not really take this absolute form; that is an illusion begotten by moral realism. A principle of integrity flexible enough for actual use might favor voting (based, say, on a general principle embracing an obligation to perform duties) but disfavor it for "lowering the bar" when there's only a choice between the lesser of evils. Objectively applying such principles depends on an honest appraisal of the strength of your commitment to each virtue. System 2 is incapable of this feat; when it can be accomplished, it's due to System 1's automatic skills, operating unconsciously. Principles of integrity are applied more accurately in far mode than near mode. [Hat tip to Overcoming Bias for these convenient phrases.]

But belief in moral realism and free will impels moral actors to apply their principles in near mode. Objective morality and moral realism imply that compliance with morality results from freely willed acts. I'm not going to defend this premise thoroughly here, but this thought experiment might carry some persuasive weight. Read the following in near mode, and introspect on your emotions:

 

Sexual predator Jerry Sandusky will serve his time in a minimum-security prison, where he's allowed groups of visitors five days a week.

Some readers will experience a sense of outrage. Then remind yourself: there's no free will. If you believe the reminder, your outrage will subside; if you've long been a convinced and consistent determinist, you might not need to remind yourself. Morality inculpates based on acts of free will: morality and free will are inseparable.

A point I must emphasize because of its novelty: it’s System 1 that ordinarily determines what you want. System 2 doesn’t ordinarily deliberate about the subject directly; it deliberates about relevant facts, but in the end, you can only intuit your volition. You can’t deduce it.

What belief in moral realism and free will does is nothing less than change the architecture of decision-making. When we practice principles of integrity and internalize them, they and nonmoral considerations together determine our System 1 judgments. According to moral realism and free will, by contrast, moral good is the product of conscious free choice, so System 2 sets its moral opinion against System 1's intuition and compensates for the difference, usually overcompensating. The voter had to weigh the imperatives of the duty to vote against the duty to avoid "lowering the bar" when both candidates are ideologically and programmatically distasteful. System 2 can prime and program System 1 by studying the issues, but the multifaceted decision itself is best made by System 1. What happens when System 2 tries to decide these propositions? System 2 makes the qualitative judgment that System 1 is biased one way or the other and corrects it. This implicates the overcompensation bias, in which conscious attempts to counteract biases usually overcorrect. A voter who thinks correction is needed for a bias toward shirking duty will vote when not really wanting to, all things considered. A voter biased toward "lowering the bar" will be excessively purist. Whatever standard the voter uses will be taken too far.

Belief in moral realism and free will biases practical reasoning

This essay presents the third of three ways that belief in objective morality and free will can cause people to do what they don’t want to do:

 

  1. It retards people in adaptively changing their principles of integrity.
  2. It prevents people from questioning their so-called foundations.
  3. It systematically exaggerates the compellingness of moral claims.

 

Some will be tempted to think that the third either is contrary to experience or is socially desirable. It’s neither. In moralism, an exaggerated subjective sense of duty and excessive sense of guilt co-exist with unresponsiveness to morality’s practical demands.

Firewalling the Optimal from the Rational

86 Eliezer_Yudkowsky 08 October 2012 08:01AM

Followup to: Rationality: Appreciating Cognitive Algorithms  (minor post)

There's an old anecdote about Ayn Rand, which Michael Shermer recounts in his "The Unlikeliest Cult in History" (note: calling a fact unlikely is an insult to your prior model, not the fact itself), which went as follows:

Branden recalled an evening when a friend of Rand's remarked that he enjoyed the music of Richard Strauss. "When he left at the end of the evening, Ayn said, in a reaction becoming increasingly typical, 'Now I understand why he and I can never be real soulmates. The distance in our sense of life is too great.' Often she did not wait until a friend had left to make such remarks."

Many readers may already have appreciated this point, but one of the Go stones placed to block that failure mode is being careful what we bless with the great community-normative-keyword 'rational'. And one of the ways we do that is by trying to deflate the word 'rational' out of sentences, especially in post titles or critical comments, which can live without the word.  As you hopefully recall from the previous post, we're only forced to use the word 'rational' when we talk about the cognitive algorithms which systematically promote goal achievement or map-territory correspondences.  Otherwise the word can be deflated out of the sentence; e.g. "It's rational to believe in anthropogenic global warming" goes to "Human activities are causing global temperatures to rise"; or "It's rational to vote for Party X" deflates to "It's optimal to vote for Party X" or just "I think you should vote for Party X".

If you're writing a post comparing the experimental evidence for four different diets, that's not "Rational Dieting", that's "Optimal Dieting". A post about rational dieting is if you're writing about how the sunk cost fallacy causes people to eat food they've already purchased even if they're not hungry, or if you're writing about how the typical mind fallacy or law of small numbers leads people to overestimate how likely it is that a diet which worked for them will work for a friend. And even then, your title is 'Dieting and the Sunk Cost Fallacy', unless it's an overview of four different cognitive biases affecting dieting. In which case a better title would be 'Four Biases Screwing Up Your Diet', since 'Rational Dieting' carries an implication that your post discusses the cognitive algorithm for dieting, as opposed to four contributing things to keep in mind.

continue reading »

[Poll] Less Wrong and Mainstream Philosophy: How Different are We?

38 Jayson_Virissimo 26 September 2012 12:25PM

Despite Less Wrong being (IMO) a philosophy blog, many Less Wrongers tend to disparage mainstream philosophy and emphasize the divergence between our beliefs and theirs. But how different are we, really? My intention with this post is to quantify the difference.

The questions I will post as comments to this article are from the 2009 PhilPapers Survey. If you answer "other" on any of the questions, then please reply to that comment to elaborate on your answer. Later, I'll post another article comparing the answers I obtain from Less Wrongers with those given by the professional philosophers. This should give us some indication of the differences in belief between Less Wrong and mainstream philosophy.

Glossary

analytic-synthetic distinction, A-theory and B-theory, atheism, compatibilism, consequentialism, contextualism, correspondence theory of truth, deontology, egalitarianism, empiricism, Humeanism, libertarianism, mental content externalism, moral realism, moral motivation internalism and externalism, naturalism, nominalism, Newcomb's problem, physicalism, Platonism, rationalism, relativism, scientific realism, trolley problem, theism, virtue ethics

Note

Thanks, pragmatist, for attaching short (mostly accurate) descriptions of the philosophical positions under the poll comments.

The raw-experience dogma: Dissolving the “qualia” problem

2 metaphysicist 16 September 2012 07:15PM

[Cross-posted.]

1. Defining the problem: The inverted spectrum

Philosophy has been called a preoccupation with the questions entertained by adolescents, and one adolescent favorite concerns our knowledge of other persons’ “private experience” (raw experience or qualia). A philosophers’ version is the “inverted spectrum”: how do I know you see “red” rather than “blue” when you see this red print? How could we tell when we each link the same terms to the same outward descriptions? We each will say “red” when we see the print, even if you really see “blue.”

The intuition that allows us to differ this way is the intuition of raw experience (or of qualia). Philosophers of mind have devoted considerable attention to reconciling the intuition that raw experience exists with the intuition that inverted-spectrum indeterminacy has unacceptable dualist implications, making the mental realm publicly unobservable. But it's time for nihilism about qualia, whose claim to exist rests solely on the strength of a prejudice.

A. Attempted solutions to the inverted spectrum.

One account would have us examine which parts of the brain are activated by each perception, but then we rely on an unverifiable correlation between brain structures and “private experience.” With only a single example of private experience—our own—we have no basis for knowing what makes private experience the same or different between persons.

A subtler response to the inverted spectrum is that red and blue as experiences are distinct because red looks “red” due to its being constituted by certain responses, such as affect. Red makes you alert and tense; blue, tranquil or maybe sad. What we call the experience of red, on this account, just is the sense of alertness, and other manifestations. The hope is that identical observable responses to appropriate wavelengths might explain qualitative redness. Then, we could discover we experience blue when others experience red by finding that we idiosyncratically become tranquil instead of alert when exposed to the long wavelengths constituting physical red. This complication doesn’t remove the radical uncertainty about experiential descriptions. Emotion only seems more capable than cognition of explaining raw experience because emotional events are memorable. The affect theory doesn't answer how an emotional reaction can constitute a raw subjective experience.

B. The “substitution bias” of solving the “easy problem of consciousness” instead of the “hard problem.”

As in those examples, attempts at analyzing raw experience commonly appeal to the substitution process that psychologist Daniel Kahneman discovered in many cognitive fallacies. Substitution is the unreflective replacement of a hard question with a related easy one. In the philosophy of mind, the distinct questions are actually termed the "easy problem of consciousness" and the "hard problem of consciousness," and errors regarding consciousness typically result from substituting the "easy problem" for the "hard," where the easy problem is to explain some function that typically accompanies "awareness." The philosopher might substitute, for raw experience, knowledge of one's own brain processes; or, as in the previous examples, experience's neural accompaniments or its affective accompaniments. Avoiding the "substitution bias" is particularly hard when dealing with raw awareness, an unarticulated intuition; articulating it is one purpose of this essay.

2. The false intuition of direct awareness

A. Our sense that the existence of raw experience is self-evident doesn’t show that it is true.

The theory that direct awareness reveals raw experience has long been almost sacrosanct in philosophy. According to the British Empiricists, direct experience consists of sense data and forms the indubitable basis of all synthetic knowledge. For the Continental Rationalist Descartes, too, my direct experience—"I think"—indubitably proves my existence.

We do have a strong intuition that we have raw experience, the substance of direct awareness, but we have other strong intuitions, some of which turn out true and others false. We have an intuition that space is necessarily flat, an intuition proven false only with the non-Euclidean geometries of the 19th century. We have an intuition that every event has a cause, which determinists believe but indeterminists deny. Sequestered intuitions aren't knowledge.

B. Experience can’t reveal the error in the intuition that raw experience exists.

To correct wayward intuitions, we ordinarily test them against each other. A simple perceptual illusion illustrates: in the popular Müller-Lyer illusion, arrowheads on a line make it appear shorter than an identical line with the arrowheads reversed. Invoking the more credible intuition, that measuring the lines reveals their real length, convinces us that the intuition of unequal lines is in error. In contrast, we have no means to check the truth of the belief in raw experience; it simply seems self-evident, but it might seem equally self-evident if it were false.

C. We can’t capture the ineffable core of raw experience with language because there’s really nothing there.

One task in philosophy is articulating the intuitions implicit in our thinking, and sometimes an intuition should be rejected once we conclude it employs concepts illogically. What shows the intuition of raw experience to be incoherent (self-contradictory or vacuous) is that the terms we use to describe raw experience are limited to the terms for its referents: we have no terms to describe the experience as such; rather, we describe qualia by applying terms denoting the ordinary cause of the supposed raw experience. The simplest explanation for the absence of a vocabulary describing the qualitative properties of raw experience is that they don't exist: a process without properties is conceptually vacuous.

D. We believe raw experience exists without detecting it.

One error in thinking about the existence of raw experience comes from confusing perception with belief, which is conceptually distinct. When people universally report that qualia "seem" to exist, they are only reporting their beliefs, despite their sense of certainty. Where "perception" is defined as a nervous system's extraction of a sensory array's features, people can't report their perceptions except through the beliefs the perceptions sometimes engender: I can't tell you my perceptions except by relating my beliefs about them. This conceptual truth is illustrated by the phenomenon of blindsight, a condition in which patients report complete blindness yet, by discriminating external objects, demonstrate that they can perceive them. Blindsighted patients can report only according to their beliefs, and they perceive more than they believe and report that they perceive. Qualia nihilism analyzes the intuition of raw experience as perceiving less than you believe and report you perceive: the reverse of blindsight.

3. The conceptual economy of qualia nihilism pays off in philosophical progress

Eliminating raw experience from ontology produces conceptual economy. A summary of its conceptual advantages:

  A. Qualia nihilism resolves an intractable problem for materialism: physical concepts are dispositional, whereas raw experiences concern properties that seem, instead, to pertain to noncausal essences. If raw experience were coherent, we could hope for a scientific insight, although no one has been able to define the general character of such an explanation. Removing a fundamental scientific mystery is a conceptual gain.

  B. Qualia nihilism resolves the private-language problem. There seems to be no possible language that uses nonpublic concepts. Eliminating raw experience allows us to explain the absence of a private language by the nonexistence of any private referents.

  C. Qualia nihilism offers a compelling diagnosis of where important skeptical arguments regarding the possibility of knowledge go wrong. The arguments—George Berkeley's are their prototype—reason that sense data, being indubitable intuitions of direct experience, are the source of our knowledge, which must, in consequence, be about raw experience rather than the "external world." If you accept the existence of raw experience, the argument is notoriously difficult to undermine logically, because concepts of "raw experience" truly can't be analogized to any concepts applying to the external world. Eliminating raw experience provides an effective demolition: rather than the other way around, our belief in raw experience depends on our knowledge of the external world, which is the source of the concepts we apply to fabricate qualia.

4. Relying on the brute force of an intuition is rationally specious.

Against these considerations, the only argument for retaining raw experience in our ontology is the sheer strength of everyone’s belief in its existence. How much weight should we attach to a strong belief whose validity we can't check? None. Beliefs ordinarily earn a presumption of truth from the absence of empirical challenge, but when empirical challenge is impossible in principle, the belief deserves no confidence.

Enjoy solving "impossible" problems? Group project!

-2 Epiphany 18 August 2012 12:20AM

In the Muehlhauser-Hibbard Dialogue on AGI, Hibbard states it will be "impossible to decelerate AI capabilities," but Luke counters with "Persuade key AGI researchers of the importance of safety ... If we can change the minds of a few key AGI scientists, it may be that key insights into AGI are delayed by years or decades." Before I read that dialogue, I had come up with three additional ideas on Heading off a near-term AGI arms race. Bill Hibbard may be right that "any effort expended on that goal could be better applied to the political and technical problems of AI safety," but I doubt he's right that it's impossible.

How do you prove something is impossible?  You might prove that a specific METHOD of reaching the goal does not work, but that doesn't mean there's no other method.  You might prove that all the methods you know about don't work; that doesn't prove there isn't some other option you don't see.  "I don't see an option, therefore it's impossible" is only an appeal to ignorance.  It's a common one, but it's incorrect reasoning regardless.  Think about it: can you think of a way to prove that a working method isn't out there waiting to be discovered, without saying the equivalent of "I don't see any evidence for this"?  We can say "I don't see it, I don't see it, I don't see it!" all day long.

I say: "Then Look!"

How often do we push past this feeling to keep thinking of ideas that might work?  For many, the answer is "never" or "only if it's needed."  The sense that something is impossible is subjective and fallible.  If we don't have a way of proving something is impossible, yet believe it to be impossible anyway, this is a belief.  What distinguishes it from bias?

I think there's a common fear of wasting your entire life on something that is, in fact, impossible.  This is valid, but it misses the obvious:  As soon as you think of a plan to do the impossible, you'll be able to guess whether it will work.  The hard part is THINKING of a plan to do the impossible.  I'm suggesting that if we put our heads together, we can think of a plan to make an impossible thing possible.  Not only that, I think we're capable of doing this on a worthwhile topic: an idea that will not only benefit humanity, but is good enough that the time, effort, and risk required to accomplish the task are worth it.

Here's how I am going to proceed: 

Step 1: Come up with a bunch of impossible project ideas. 

Step 2: Figure out which one appeals to the most people. 

Step 3: Invent the methodology by which we are going to accomplish said project. 

Step 4: Improve the method as needed until we're convinced it's likely to work.

Step 5: Get the project done.

 

Impossible Project Ideas

  • Decelerate AI Capabilities Research: If we develop AI before we've figured out the political and technical safety measures, we could have a disaster.  Luke's Ideas (Starts with "Persuade key AGI researchers of the importance of safety").  My ideas.
  • Solve Violent Crime: Testosterone may be the root cause of the vast majority of violent crime, but there are obstacles in treating it. 
  • Syntax/static Analysis Checker for Laws: Automatically look for conflicting/inconsistent definitions, logical conflicts, and other possible problems or ambiguities. 
  • Understand the psychology of money

  • Rational Agreement Software:  If rationalists should ideally always agree, why not make an organized information resource designed to get us all to agree?  This would track the arguments for and against ideas in such a way that each piece can be verified logically and challenged; make the entire collection of arguments available in an organized manner where none are repeated and no useless information is included; and be editable by anybody, like a wiki, with the most rational outcome displayed prominently at the top.  This is especially hard because it would be our responsibility to make something SO good that it convinces us to agree with one another, and it would have to be structured well enough that we actually manage to distinguish between opinions and facts. Also, Gwern mentions in a post about critical thinking that argument maps increase critical thinking skills.
  • Discover unrecognized bias:  This is especially hard since we'll be using our biased brains to try and detect it.  We'd have to hack our own way of imagining around the corners, peeking behind our own minds.
  • Logic checking AI: Build an AI that checks your logic for logical fallacies and other methods of poor reasoning.
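The argument-tracking idea above can be sketched in a few lines (all names and the scoring scheme are hypothetical illustrations, not a design proposal): each claim carries weighted pro and con arguments, and claims are displayed with the strongest net support first.

```python
from dataclasses import dataclass, field

# Minimal argument-map sketch; the weight/score scheme is illustrative only.
@dataclass
class Argument:
    text: str
    weight: int = 1  # e.g. editor votes for how strong the argument is

@dataclass
class Claim:
    text: str
    pro: list = field(default_factory=list)
    con: list = field(default_factory=list)

    def score(self) -> int:
        # Net support: total pro weight minus total con weight.
        return sum(a.weight for a in self.pro) - sum(a.weight for a in self.con)

claims = [
    Claim("We should build X", pro=[Argument("Benefit A", 3)], con=[Argument("Cost B", 1)]),
    Claim("We should build Y", pro=[Argument("Benefit C", 1)], con=[Argument("Cost D", 2)]),
]
# Display claims with the strongest net support at the top.
for c in sorted(claims, key=lambda c: c.score(), reverse=True):
    print(c.score(), c.text)  # prints: 2 We should build X / -1 We should build Y
```

The hard part, of course, is not this bookkeeping but getting editors to agree on the weights; a real system would need a far richer structure for challenging and verifying each argument.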

Add your own ideas below (one idea per comment, so we can vote them up and down) and make sure to describe your vision; then I'll list them here.

 

Figure out which one appeals to the most people.

Assuming each idea is put into a separate comment, we can vote them up or down.  If they begin with the word "Idea," I'll be able to find them and put them on the list.  Obviously, if your idea is getting enough attention, it will at some point make sense to create a new discussion for it.

 

Natural Laws Are Descriptions, not Rules

32 pragmatist 08 August 2012 04:27AM

Laws as Rules

We speak casually of the laws of nature determining the distribution of matter and energy, or governing the behavior of physical objects. Implicit in this rhetoric is a metaphysical picture: the laws are rules that constrain the temporal evolution of stuff in the universe. In some important sense, the laws are prior to the distribution of stuff. The physicist Paul Davies expresses this idea with a bit more flair: "[W]e have this image of really existing laws of physics ensconced in a transcendent aerie, lording it over lowly matter." The origins of this conception can be traced back to the beginnings of the scientific revolution, when Descartes and Newton established the discovery of laws as the central aim of physical inquiry. In a scientific culture immersed in theism, it was unproblematic, even natural, to think of physical laws as rules. They are rules laid down by God that drive the development of the universe in accord with His divine plan.

Does this prescriptive conception of law make sense in a secular context? Perhaps if we replace the divine creator of traditional religion with a more naturalist-friendly lawgiver, such as an ur-simulator. But what if there is no intentional agent at the root of it all? Ordinarily, when I think of a physical system as constrained by some rule, it is not the rule itself doing the constraining. The rule is just a piece of language; it is an expression of a constraint that is actually enforced by interaction with some other physical system -- a programmer, say, or a physical barrier, or a police force. In the sort of picture Davies presents, however, it is the rules themselves that enforce the constraint. The laws lord it over lowly matter. So on this view, the fact that all electrons repel one another is explained by the existence of some external entity, not an ordinary physical entity but a law of nature, that somehow forces electrons to repel one another, and this isn't just short-hand for God or the simulator forcing the behavior.

I put it to you that this account of natural law is utterly mysterious and borders on the nonsensical. How exactly are abstract, non-physical objects -- laws of nature, living in their "transcendent aerie" -- supposed to interact with physical stuff? What is the mechanism by which the constraint is applied? Could the laws of nature have been different, so that they forced electrons to attract one another? The view should also be anathema to any self-respecting empiricist, since the laws appear to be idle danglers in the metaphysical theory. What is the difference between a universe where all electrons, as a matter of contingent fact, attract one another, and a universe where they attract one another because they are compelled to do so by the really existing laws of physics? Is there any test that could distinguish between these states of affairs?

continue reading »

Self-skepticism: the first principle of rationality

36 aaronsw 06 August 2012 12:51AM

When Richard Feynman started investigating irrationality in the 1970s, he quickly began to realize the problem wasn't limited to the obvious irrationalists.

Uri Geller claimed he could bend keys with his mind. But was he really any different from the academics who insisted their special techniques could teach children to read? Both failed the crucial scientific test of skeptical experiment: Geller's keys failed to bend in Feynman's hands; outside tests showed the new techniques only caused reading scores to go down.

What mattered was not how smart the people were, or whether they wore lab coats or used long words, but whether they followed what he concluded was the crucial principle of truly scientific thought: "a kind of utter honesty--a kind of leaning over backwards" to prove yourself wrong. In a word: self-skepticism.

As Feynman wrote, "The first principle is that you must not fool yourself -- and you are the easiest person to fool." Our beliefs always seem correct to us -- after all, that's why they're our beliefs -- so we have to work extra-hard to try to prove them wrong. This means constantly looking for ways to test them against reality and to think of reasons our tests might be insufficient.

When I think of the most rational people I know, it's this quality of theirs that's most pronounced. They are constantly trying to prove themselves wrong -- they attack their beliefs with everything they can find and when they run out of weapons they go out and search for more. The result is that by the time I come around, they not only acknowledge all my criticisms but propose several more I hadn't even thought of.

And when I think of the least rational people I know, what's striking is how they do the exact opposite: instead of viciously attacking their beliefs, they try desperately to defend them. They too have responses to all my critiques, but instead of acknowledging and agreeing, they viciously attack my critique so it never touches their precious belief.

Since these two can be hard to distinguish, it's best to look at some examples. The Cochrane Collaboration argues that support from hospital nurses may be helpful in getting people to quit smoking. How do they know that? you might ask. Well, they found this was the result from doing a meta-analysis of 31 different studies. But maybe they chose a biased selection of studies? Well, they systematically searched "MEDLINE, EMBASE and PsycINFO [along with] hand searching of specialist journals, conference proceedings, and reference lists of previous trials and overviews." But did the studies they pick suffer from selection bias? Well, they searched for that -- along with three other kinds of systematic bias. And so on. But even after all this careful work, they are still only confident enough to conclude "the results…support a modest but positive effect…with caution … these meta-analysis findings need to be interpreted carefully in light of the methodological limitations".

Compare this to the Heritage Foundation's argument for the bipartisan Wyden–Ryan premium support plan. Their report also discusses lots of objections to the proposal, but confidently knocks down each one: "this analysis relies on two highly implausible assumptions ... All these predictions were dead wrong. ... this perspective completely ignores the history of Medicare" Their conclusion is similarly confident: "The arguments used by opponents of premium support are weak and flawed." Apparently there's just not a single reason to be cautious about their enormous government policy proposal!

Now, of course, the Cochrane authors might be secretly quite confident and the Heritage Foundation might be wringing their hands with self-skepticism behind the scenes. But let's imagine for a moment that these aren't just reports intended to persuade others of a belief, but are instead accurate portrayals of how these two different groups approached the question. Now ask: which style of thinking is more likely to lead the authors to the right answer? Which attitude seems more like Richard Feynman? Which seems more like Uri Geller?

What are the optimal biases to overcome?

60 aaronsw 04 August 2012 03:04PM

If you're interested in learning rationality, where should you start? Remember, instrumental rationality is about making decisions that get you what you want -- surely there are some lessons that will help you more than others.

You might start with the most famous ones, which tend to be the ones popularized by Kahneman and Tversky. But K&T were academics. They weren't trying to help people be more rational, they were trying to prove to other academics that people were irrational. The result is that they focused not on the most important biases, but the ones that were easiest to prove.

Take their famous anchoring experiment, in which they showed the spin of a roulette wheel affected people's estimates of the percentage of African countries in the UN. The idea wasn't that roulette wheels causing biased estimates was a huge social problem; it was that no academic could possibly argue that this behavior was somehow rational. They thereby scored a decisive blow for psychology against economists claiming we're just rational maximizers.

Most academic work on irrationality has followed in K&T's footsteps. And, in turn, much of the stuff done by LW and CFAR has followed in the footsteps of this academic work. So it's not hard to believe that LW types are good at avoiding these biases and thus do well on the psychology tests for them. (Indeed, many of the questions on these tests for rationality come straight from K&T experiments!)

But if you look at the average person and ask why they aren't getting what they want, very rarely do you conclude their biggest problem is that they're suffering from anchoring, framing effects, the planning fallacy, commitment bias, or any of the other stuff in the sequences. Usually their biggest problems are far more quotidian and commonsensical.

Take Eliezer. Surely he wanted SIAI to be a well-functioning organization. And he's admitted that lukeprog has done more to achieve that goal of his than he has. Why is lukeprog so much better at getting what Eliezer wants than Eliezer is? It's surely not because lukeprog is so much better at avoiding Sequence-style cognitive biases! lukeprog readily admits that he's constantly learning new rationality techniques from Eliezer.

No, it's because lukeprog did what seems like common sense: he bought a copy of Nonprofits for Dummies and did what it recommends. As lukeprog himself says, it wasn't lack of intelligence or resources or akrasia that kept Eliezer from doing these things, "it was a gap in general rationality."

So if you're interested in closing the gap, it seems like the skills to prioritize aren't things like the commitment effect and the sunk cost fallacy, but stuff like "figure out what your goals really are", "look at your situation objectively and list the biggest problems", "when you're trying something new and risky, read the For Dummies book about it first", etc. For lack of better terminology, let's call the K&T stuff "cognitive biases" and this stuff "practical biases" (even though it's all obviously both practical and cognitive, and "biases" is kind of a negative way of looking at it).

What are the best things you've found on tackling these "practical biases"? Post your suggestions in the comments.

A cynical explanation for why rationalists worry about FAI

25 aaronsw 04 August 2012 12:27PM

My friend, hearing me recount tales of LessWrong, recently asked me if I thought it was simply a coincidence that so many LessWrong rationality nerds cared so much about creating Friendly AI. "If Eliezer had simply been obsessed by saving the world from asteroids, would they all be focused on that?"

Obviously one possibility (the inside view) is simply that rationality compels you to focus on FAI. But if we take the outside view for a second, it does seem like FAI has a special attraction for armchair rationalists: it's the rare heroic act that can be accomplished without ever confronting reality.

After all, if you want to save the planet from an asteroid, you have to do a lot of work! You have to build stuff and test it and just generally solve a lot of gritty engineering problems. But if you want to save the planet from AI, you can conveniently do the whole thing without getting out of bed.

Indeed, as the Tool AI debate has shown, SIAI types have withdrawn from reality even further. There are a lot of AI researchers who spend a lot of time building models, analyzing data, and generally solving a lot of gritty engineering problems all day. But the SIAI view conveniently says this is all very dangerous and that one shouldn't even begin to try implementing anything like an AI until one has perfectly solved all of the theoretical problems first.

Obviously this isn't any sort of proof that working on FAI is irrational, but it does seem awfully suspicious that people who really like to spend their time thinking about ideas have managed to persuade themselves that they can save the entire species from certain doom just by thinking about ideas.
