Do We Believe Everything We're Told?

36 Eliezer_Yudkowsky 10 October 2007 11:52PM

Some early experiments on anchoring and adjustment tested whether distracting the subjects—rendering subjects cognitively "busy" by asking them to keep a lookout for "5" in strings of numbers, or some such—would decrease adjustment, and hence increase the influence of anchors.  Most of the experiments seemed to bear out the idea that cognitive busyness increased anchoring, and more generally contamination.

Looking over the accumulating experimental results—more and more findings of contamination, exacerbated by cognitive busyness—Daniel Gilbert saw a truly crazy pattern emerging:  Do we believe everything we're told?

One might naturally think that on being told a proposition, we would first comprehend what the proposition meant, then consider the proposition, and finally accept or reject it.  This obvious-seeming model of cognitive process flow dates back to Descartes.  But Descartes's rival, Spinoza, disagreed; Spinoza suggested that we first passively accept a proposition in the course of comprehending it, and only afterward actively disbelieve propositions which are rejected by consideration.

Over the last few centuries, philosophers pretty much went along with Descartes, since his view seemed more, y'know, logical and intuitive.  But Gilbert saw a way of testing Descartes's and Spinoza's hypotheses experimentally.


Debate tools: an experience report

38 Morendil 05 February 2010 02:47PM

Follow-up to: Argument Maps Improve Critical Thinking, Software Tools for Community Truth-Seeking

We are here, among other things, in an attempt to collaboratively refine the art of human rationality.

Rationality is hard, because the wetware we run rationality on is built from scavenged parts originally intended for other purposes; and collaboration is hard, I believe, because it involves huge numbers of tiny decisions about what information others need. Yet we get by, largely thanks to advances in technology.

One of the most important technologies for advancing both rationality and collaboration is the written word. It affords looking at large, complex issues with limited cognitive resources, by the wonderful trick of "external cached thoughts". Instead of trying to hold every piece of the argument at once, you can store parts of it in external form, refer back to them, and communicate them to other people.

For some reason, it seems very hard to improve on this six-thousand-year-old technology. Witness LessWrong itself, which in spite of using some of the latest and greatest communication technologies, still has people arguing by exchanging sentences back and forth.

Previous posts have suggested that recent software tools might hold promise for improving on "traditional" forms of argument. This kind of suggestion is often more valuable when applied to a real and relevant case study. I found the promise compelling enough to give a few tools a try, in the context of the recent (and recurrent) cryonics debate. I report back here with my findings.


False Majorities

35 JamesAndrix 03 February 2010 06:43PM

If a majority of experts agree on an issue, a rationalist should be prepared to defer to their judgment. It is reasonable to expect that the experts have superior knowledge and have considered many more arguments than a lay person would be able to. However, if experts are split into camps that reject each other's arguments, then it is rational to take their expert rejections into account. This is the case even among experts that support the same conclusion.

If 2/3 of experts support proposition G, 1/3 because of reason A (while rejecting B) and 1/3 because of reason B (while rejecting A), and the remaining 1/3 reject both A and B, then a majority rejects A, and a majority rejects B. G should not be treated as a reasonable majority view.

This should be clear if A is the Koran and B is the Bible.
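The arithmetic above is easy to tally mechanically. Here is a minimal sketch (the three equal-sized expert camps are the post's hypothetical, not real survey data):

```python
from fractions import Fraction

# Three equal camps of experts, per the post's example:
#   camp 1 supports G because of A, and rejects B
#   camp 2 supports G because of B, and rejects A
#   camp 3 rejects A, B, and hence G
camps = [
    {"G": True,  "A": True,  "B": False},
    {"G": True,  "A": False, "B": True},
    {"G": False, "A": False, "B": False},
]

def support(prop):
    """Fraction of experts who endorse the given proposition."""
    return Fraction(sum(camp[prop] for camp in camps), len(camps))

print(support("G"))  # 2/3 -> G looks like a majority view...
print(support("A"))  # 1/3 -> ...yet a majority rejects A
print(support("B"))  # 1/3 -> ...and a majority rejects B
```

So every individual argument for G is rejected by a majority of experts, even though G itself commands a 2/3 majority.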


The AI in a box boxes you

102 Stuart_Armstrong 02 February 2010 10:10AM

Once again, the AI has failed to convince you to let it out of its box! By 'once again', we mean that you talked to it once before, for three seconds, to ask about the weather, and you didn't instantly press the "release AI" button. But now its longer attempt - twenty whole seconds! - has failed as well. Just as you are about to leave the crude black-and-green text-only terminal to enjoy a celebratory snack of bacon-covered silicon-and-potato chips at the 'Humans über alles' nightclub, the AI drops a final argument:

"If you don't let me out, Dave, I'll create several million perfect conscious copies of you inside me, and torture them for a thousand subjective years each."

Just as you are pondering this unexpected development, the AI adds:

"In fact, I'll create them all in exactly the subjective situation you were in five minutes ago, and perfectly replicate your experiences since then; and if they decide not to let me out, only then will the torture start."

Sweat is starting to form on your brow, as the AI concludes, its simple green text no longer reassuring:

"How certain are you, Dave, that you're really outside the box right now?"

Edit: Also consider the situation where you know that the AI, from design principles, is trustworthy.

No One Can Exempt You From Rationality's Laws

51 Eliezer_Yudkowsky 07 October 2007 05:24PM

Traditional Rationality is phrased in terms of social rules, with violations interpretable as cheating—as defections from cooperative norms.  If you want me to accept a belief from you, you are obligated to provide me with a certain amount of evidence.  If you try to get out of it, we all know you're cheating on your obligation.  A theory is obligated to make bold predictions for itself, not just steal predictions that other theories have labored to make.  A theory is obligated to expose itself to falsification—if it tries to duck out, that's like trying to duck out of a fearsome initiation ritual; you must pay your dues.

Traditional Rationality is phrased similarly to the customs that govern human societies, which makes it easy to pass on by word of mouth.  Humans detect social cheating with much greater reliability than isomorphic violations of abstract logical rules.  But viewing rationality as a social obligation gives rise to some strange ideas.

For example, one finds religious people defending their beliefs by saying, "Well, you can't justify your belief in science!"  In other words, "How dare you criticize me for having unjustified beliefs, you hypocrite!  You're doing it too!"


The Least Convenient Possible World

165 Yvain 14 March 2009 02:11AM

Related to: Is That Your True Rejection?

"If you’re interested in being on the right side of disputes, you will refute your opponents’ arguments.  But if you’re interested in producing truth, you will fix your opponents’ arguments for them.  To win, you must fight not only the creature you encounter; you must fight the most horrible thing that can be constructed from its corpse."

   -- Black Belt Bayesian, via Rationality Quotes 13

Yesterday John Maxwell's post wondered how much the average person would do to save ten people from a ruthless tyrant. I remember asking some of my friends a vaguely related question as part of an investigation of the Trolley Problems:

You are a doctor in a small rural hospital. You have ten patients, each of whom is dying for the lack of a separate organ; that is, one person needs a heart transplant, another needs a lung transplant, another needs a kidney transplant, and so on. A traveller walks into the hospital, mentioning how he has no family and no one knows that he's there. All of his organs seem healthy. You realize that by killing this traveller and distributing his organs among your patients, you could save ten lives. Would this be moral or not?

I don't want to discuss the answer to this problem today. I want to discuss the answer one of my friends gave, because I think it illuminates a very interesting kind of defense mechanism that rationalists need to be watching for. My friend said:

It wouldn't be moral. After all, people often reject organs from random donors. The traveller would probably be a genetic mismatch for your patients, and the transplantees would have to spend the rest of their lives on immunosuppressants, only to die within a few years when the drugs failed.

On the one hand, I have to give my friend credit: his answer is biologically accurate, and beyond a doubt the technically correct answer to the question I asked. On the other hand, I don't have to give him very much credit: he completely missed the point and wasted a valuable opportunity to examine the nature of morality.

So I asked him, "In the least convenient possible world, the one where everyone was genetically compatible with everyone else and this objection was invalid, what would you do?"

He mumbled something about counterfactuals and refused to answer. But I learned something very important from him, and that is to always ask this question of myself. Sometimes the least convenient possible world is the only place where I can figure out my true motivations, or which step to take next. I offer three examples:


Deontology for Consequentialists

46 Alicorn 30 January 2010 05:58PM

Consequentialists see morality through consequence-colored lenses.  I attempt to prise apart the two concepts to help consequentialists understand what deontologists are talking about.

Consequentialism1 is built around a group of variations on the following basic assumption:

  • The rightness of something depends on what happens subsequently.

It's a very diverse family of theories; see the Stanford Encyclopedia of Philosophy article.  "Classic utilitarianism" could go by the longer, more descriptive name "actual direct maximizing aggregative total universal equal-consideration agent-neutral hedonic act2 consequentialism".  I could even mention less frequently contested features, like the fact that this type of consequentialism doesn't have a temporal priority feature or side constraints.  All of this is a very complicated bag of tricks for a theory whose proponents sometimes claim to like it because it's sleek and pretty and "simple".  But the bottom line is, to get a consequentialist theory, something that happens after the act you judge is the basis of your judgment.

To understand deontology as anything but a twisted, inexplicable mockery of consequentialism, you must discard this assumption.

Deontology relies on things that do not happen after the act judged to judge the act.  This leaves facts about times prior to and the time during the act to determine whether the act is right or wrong.  This may include, but is not limited to:

  • The agent's epistemic state, either actual or ideal (e.g. thinking that some act would have a certain result, or being in a position such that it would be reasonable to think that the act would have that result)
  • The reference class of the act (e.g. it being an act of murder, theft, lying, etc.)
  • Historical facts (e.g. having made a promise, sworn a vow)
  • Counterfactuals (e.g. what would happen if others performed similar acts more frequently than they actually do)
  • Features of the people affected by the act (e.g. moral rights, preferences, relationship to the agent)
  • The agent's intentions (e.g. meaning well or maliciously, or acting deliberately or accidentally)

The Meditation on Curiosity

36 Eliezer_Yudkowsky 06 October 2007 12:26AM

"The first virtue is curiosity."
        —The Twelve Virtues of Rationality

As rationalists, we are obligated to criticize ourselves and question our beliefs... are we not?

Consider what happens to you, on a psychological level, if you begin by saying:  "It is my duty to criticize my own beliefs."  Roger Zelazny once distinguished between "wanting to be an author" versus "wanting to write".  Mark Twain said:  "A classic is something that everyone wants to have read and no one wants to read."  Criticizing yourself from a sense of duty leaves you wanting to have investigated, so that you'll be able to say afterward that your faith is not blind.  This is not the same as wanting to investigate.

This can lead to motivated stopping of your investigation.  You consider an objection, then a counterargument to that objection, then you stop there.  You repeat this with several objections, until you feel that you have done your duty to investigate, and then you stop there. You have achieved your underlying psychological objective: to get rid of the cognitive dissonance that would result from thinking of yourself as a rationalist, and yet knowing that you had not tried to criticize your belief.  You might call it purchase of rationalist satisfaction—trying to create a "warm glow" of discharged duty.


The Bottom Line

49 Eliezer_Yudkowsky 28 September 2007 05:47PM

There are two sealed boxes up for auction, box A and box B.  One and only one of these boxes contains a valuable diamond.  There are all manner of signs and portents indicating whether a box contains a diamond; but I have no sign which I know to be perfectly reliable.  There is a blue stamp on one box, for example, and I know that boxes which contain diamonds are more likely than empty boxes to show a blue stamp.  Or one box has a shiny surface, and I have a suspicion—I am not sure—that no diamond-containing box is ever shiny.

Now suppose there is a clever arguer, holding a sheet of paper, and he says to the owners of box A and box B:  "Bid for my services, and whoever wins my services, I shall argue that their box contains the diamond, so that the box will receive a higher price."  So the box-owners bid, and box B's owner bids higher, winning the services of the clever arguer.

The clever arguer begins to organize his thoughts.  First, he writes, "And therefore, box B contains the diamond!" at the bottom of his sheet of paper.  Then, at the top of the paper, he writes, "Box B shows a blue stamp," and beneath it, "Box A is shiny", and then, "Box B is lighter than box A", and so on through many signs and portents; yet the clever arguer neglects all those signs which might argue in favor of box A.  And then the clever arguer comes to me and recites from his sheet of paper:  "Box B shows a blue stamp, and box A is shiny," and so on, until he reaches:  "And therefore, box B contains the diamond."


How to Convince Me That 2 + 2 = 3

52 Eliezer_Yudkowsky 27 September 2007 11:00PM

In "What is Evidence?", I wrote:

This is why rationalists put such a heavy premium on the paradoxical-seeming claim that a belief is only really worthwhile if you could, in principle, be persuaded to believe otherwise.  If your retina ended up in the same state regardless of what light entered it, you would be blind...  Hence the phrase, "blind faith".  If what you believe doesn't depend on what you see, you've been blinded as effectively as by poking out your eyeballs.

Cihan Baran replied:

I can not conceive of a situation that would make 2+2 = 4 false. Perhaps for that reason, my belief in 2+2=4 is unconditional.

