Less Wrong is a community blog devoted to refining the art of human rationality. Please visit our About page for more information.

37 Ways That Words Can Be Wrong

73 Eliezer_Yudkowsky 06 March 2008 05:09AM

Followup to:  Just about every post in February, and some in March

Some reader is bound to declare that a better title for this post would be "37 Ways That You Can Use Words Unwisely", or "37 Ways That Suboptimal Use Of Categories Can Have Negative Side Effects On Your Cognition".

But one of the primary lessons of this gigantic list is that saying "There's no way my choice of X can be 'wrong'" is nearly always an error in practice, whatever the theory.  You can always be wrong.  Even when it's theoretically impossible to be wrong, you can still be wrong.  There is never a Get-Out-Of-Jail-Free card for anything you do.  That's life.

Besides, I can define the word "wrong" to mean anything I like - it's not like a word can be wrong.

Personally, I think it quite justified to use the word "wrong" when:

  1. A word fails to connect to reality in the first place.  Is Socrates a framster?  Yes or no?  (The Parable of the Dagger.)
  2. Your argument, if it worked, could coerce reality to go a different way by choosing a different word definition.  Socrates is a human, and humans, by definition, are mortal.  So if you defined humans to not be mortal, would Socrates live forever?  (The Parable of Hemlock.)
  3. You try to establish any sort of empirical proposition as being true "by definition".  Socrates is a human, and humans, by definition, are mortal.  So is it a logical truth if we empirically predict that Socrates should keel over if he drinks hemlock?  It seems like there are logically possible, non-self-contradictory worlds where Socrates doesn't keel over - where he's immune to hemlock by a quirk of biochemistry, say.  Logical truths are true in all possible worlds, and so never tell you which possible world you live in - and anything you can establish "by definition" is a logical truth.  (The Parable of Hemlock.)
  4. You unconsciously slap the conventional label on something, without actually using the verbal definition you just gave.  You know perfectly well that Bob is "human", even though, on your definition, you can never call Bob "human" without first observing him to be mortal.  (The Parable of Hemlock.)
  5. The act of labeling something with a word disguises a challengeable inductive inference you are making.  If the last 11 egg-shaped objects drawn have been blue, and the last 8 cubes drawn have been red, it is a matter of induction to say this rule will hold in the future.  But if you call the blue eggs "bleggs" and the red cubes "rubes", you may reach into the barrel, feel an egg shape, and think "Oh, a blegg."  (Words as Hidden Inferences.)
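The hidden inference in item 5 can be made explicit. A minimal sketch (the draw counts follow the text; the function name and data encoding are mine):

```python
from collections import Counter

# Past draws from the barrel, matching the counts in the text:
# 11 blue egg-shaped objects, 8 red cubes.
draws = [("egg", "blue")] * 11 + [("cube", "red")] * 8

def infer_color(shape, history):
    """Predict the most common color among past objects of this shape.

    Labeling an object a "blegg" hides exactly this inductive step:
    the shape is observed, the color is inferred from past correlation.
    """
    colors = Counter(color for s, color in history if s == shape)
    if not colors:
        return None  # no evidence; the induction has nothing to go on
    return colors.most_common(1)[0][0]

print(infer_color("egg", draws))  # the inference hidden inside "Oh, a blegg."
```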
continue reading »

Categorizing Has Consequences

25 Eliezer_Yudkowsky 19 February 2008 01:40AM

Followup to:  Fallacies of Compression

Among the many genetic variations and mutations you carry in your genome, there are a very few alleles you probably know—including those determining your blood type: the presence or absence of the A, B, and + antigens.  If you receive a blood transfusion containing an antigen you don't have, it will trigger an allergic reaction.  It was Karl Landsteiner's discovery of this fact, and how to test for compatible blood types, that made it possible to transfuse blood without killing the patient.  (1930 Nobel Prize in Medicine.)  Also, if a mother with blood type A (for example) bears a child with blood type A+, the mother may acquire an allergic reaction to the + antigen; if she has another child with blood type A+, the child will be in danger, unless the mother takes an allergic suppressant during pregnancy.  Thus people learn their blood types before they marry.
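The compatibility rule the paragraph describes (a reaction is triggered by any antigen the recipient's own blood lacks) can be sketched as a set-inclusion check. This is a toy model covering only the antigens named in the text, ignoring antibodies and the many other blood-group systems:

```python
# A blood type is modeled as the set of antigens present on the red cells
# (illustrative simplification: just the A, B, and + antigens from the text).
TYPES = {
    "O-": set(),     "O+": {"+"},
    "A-": {"A"},     "A+": {"A", "+"},
    "B-": {"B"},     "B+": {"B", "+"},
    "AB-": {"A", "B"}, "AB+": {"A", "B", "+"},
}

def transfusion_safe(donor, recipient):
    """Safe iff the donor blood carries no antigen the recipient lacks."""
    return TYPES[donor] <= TYPES[recipient]

print(transfusion_safe("O-", "AB+"))  # True: O- carries none of the antigens
print(transfusion_safe("A+", "A-"))   # False: the + antigen triggers a reaction
```

Even this crude subset rule reproduces the standard compatibility chart, which is why testing for antigens sufficed to make transfusion survivable.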

Oh, and also: people with blood type A are earnest and creative, while people with blood type B are wild and cheerful.  People with type O are agreeable and sociable, while people with type AB are cool and controlled. (You would think that O would be the absence of A and B, while AB would just be A plus B, but no...)  All this, according to the Japanese blood type theory of personality.  It would seem that blood type plays the role in Japan that astrological signs play in the West, right down to blood type horoscopes in the daily newspaper.

This fad is especially odd because blood types have never been mysterious, not in Japan and not anywhere.  We only know blood types even exist thanks to Karl Landsteiner.  No mystic witch doctor, no venerable sorcerer, ever said a word about blood types; there are no ancient, dusty scrolls to shroud the error in the aura of antiquity.  If the medical profession claimed tomorrow that it had all been a colossal hoax, we layfolk would not have one scrap of evidence from our unaided senses to contradict them.

There's never been a war between blood types.  There's never even been a political conflict between blood types.  The stereotypes must have arisen strictly from the mere existence of the labels.

continue reading »

Empty Labels

16 Eliezer_Yudkowsky 14 February 2008 11:50PM

Followup to:  The Argument from Common Usage

Consider (yet again) the Aristotelian idea of categories.  Let's say that there's some object with properties A, B, C, D, and E, or at least it looks E-ish.

Fred:  "You mean that thing over there is blue, round, fuzzy, and—"
Me: "In Aristotelian logic, it's not supposed to make a difference what the properties are, or what I call them.  That's why I'm just using the letters."

Next, I invent the Aristotelian category "zawa", which describes those objects, all those objects, and only those objects, which have properties A, C, and D.

Me:  "Object 1 is zawa, B, and E."
Fred:  "And it's blue—I mean, A—too, right?"
Me:  "That's implied when I say it's zawa."
Fred:  "Still, I'd like you to say it explicitly."
Me:  "Okay.  Object 1 is A, B, zawa, and E."

continue reading »

Disputing Definitions

48 Eliezer_Yudkowsky 12 February 2008 12:15AM

Followup to:  How An Algorithm Feels From Inside

I have watched more than one conversation—even conversations supposedly about cognitive science—go the route of disputing over definitions.  Taking the classic example to be "If a tree falls in a forest, and no one hears it, does it make a sound?", the dispute often follows a course like this:

If a tree falls in the forest, and no one hears it, does it make a sound?

Albert:  "Of course it does.  What kind of silly question is that?  Every time I've listened to a tree fall, it made a sound, so I'll guess that other trees falling also make sounds.  I don't believe the world changes around when I'm not looking."

Barry:  "Wait a minute.  If no one hears it, how can it be a sound?"

In this example, Barry is arguing with Albert because of a genuinely different intuition about what constitutes a sound.  But there's more than one way the Standard Dispute can start.  Barry could have a motive for rejecting Albert's conclusion.  Or Barry could be a skeptic who, upon hearing Albert's argument, reflexively scrutinized it for possible logical flaws; and then, on finding a counterargument, automatically accepted it without applying a second layer of search for a counter-counterargument; thereby arguing himself into the opposite position.  This doesn't require that Barry's prior intuition—the intuition Barry would have had, if we'd asked him before Albert spoke—have differed from Albert's.

Well, if Barry didn't have a differing intuition before, he sure has one now.

continue reading »

How An Algorithm Feels From Inside

89 Eliezer_Yudkowsky 11 February 2008 02:35AM

Followup to:  Disputing Definitions

"If a tree falls in the forest, and no one hears it, does it make a sound?"  I remember seeing an actual argument get started on this subject—a fully naive argument that went nowhere near Berkeleyan subjectivism.  Just:

"It makes a sound, just like any other falling tree!"
"But how can there be a sound that no one hears?"

The standard rationalist view would be that the first person is speaking as if "sound" means acoustic vibrations in the air; the second person is speaking as if "sound" means an auditory experience in a brain.  If you ask "Are there acoustic vibrations?" or "Are there auditory experiences?", the answer is at once obvious.  And so the argument is really about the definition of the word "sound".
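The standard analysis can be made mechanical: once "sound" is split into two predicates, each question answers itself. A toy sketch (both function names are mine):

```python
def sound_as_vibrations(tree_falls, someone_listens):
    # "sound" = acoustic vibrations: produced whenever the tree falls
    return tree_falls

def sound_as_experience(tree_falls, someone_listens):
    # "sound" = auditory experience: requires vibrations AND a listener
    return tree_falls and someone_listens

# Each disambiguated question has an obvious answer; only the shared
# word "sound" leaves anything to argue about.
print(sound_as_vibrations(True, False))  # True
print(sound_as_experience(True, False))  # False
```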

I think the standard analysis is essentially correct.  So let's accept that as a premise, and ask:  Why do people get into such an argument?  What's the underlying psychology?

A key idea of the heuristics and biases program is that mistakes are often more revealing of cognition than correct answers.  Getting into a heated dispute about whether, if a tree falls in a deserted forest, it makes a sound, is traditionally considered a mistake.

So what kind of mind design corresponds to that error?

continue reading »

Neural Categories

24 Eliezer_Yudkowsky 10 February 2008 12:33AM

Followup to:  Disguised Queries

In Disguised Queries, I talked about a classification task of "bleggs" and "rubes".  The typical blegg is blue, egg-shaped, furred, flexible, opaque, glows in the dark, and contains vanadium.  The typical rube is red, cube-shaped, smooth, hard, translucent, unglowing, and contains palladium.  For the sake of simplicity, let us forget the characteristics of flexibility/hardness and opaqueness/translucency.  This leaves five dimensions in thingspace:  Color, shape, texture, luminance, and interior.

Suppose I want to create an Artificial Neural Network (ANN) to predict unobserved blegg characteristics from observed blegg characteristics.  And suppose I'm fairly naive about ANNs:  I've read excited popular science books about how neural networks are distributed, emergent, and parallel just like the human brain!! but I can't derive the differential equations for gradient descent in a non-recurrent multilayer network with sigmoid units (which is actually a lot easier than it sounds).

Then I might design a neural network that looks something like this:
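As a hedged stand-in for the diagram, here is one naive design of the kind described: five feature units, each laterally connected to every other, so that observed features pull unobserved ones toward the matching prototype. The Hopfield-style update rule and the weight settings are my assumptions, not the post's exact figure:

```python
import itertools

# Five binary features (+1 = the blegg-like value, -1 = the rube-like value):
FEATURES = ["color", "shape", "texture", "luminance", "interior"]

# Hebbian weights from the two uniform training patterns (all-blegg, all-rube):
# every pair of distinct features reinforces agreement.
W = {(i, j): 1.0 for i, j in itertools.permutations(range(5), 2)}

def complete(observed):
    """Fill in unobserved features from observed ones by weighted vote."""
    state = dict(observed)
    for k, name in enumerate(FEATURES):
        if name not in state:
            vote = sum(W[(i, k)] * state[FEATURES[i]]
                       for i in range(5) if FEATURES[i] in state)
            state[name] = 1 if vote >= 0 else -1
    return state

# Observe blue (+1) and egg-shaped (+1): the net fills in furred,
# glowing, and vanadium-containing for the remaining dimensions.
print(complete({"color": 1, "shape": 1}))
```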

continue reading »

Disguised Queries

57 Eliezer_Yudkowsky 09 February 2008 12:05AM

Followup to:  The Cluster Structure of Thingspace

Imagine that you have a peculiar job in a peculiar factory:  Your task is to take objects from a mysterious conveyor belt, and sort the objects into two bins.  When you first arrive, Susan the Senior Sorter explains to you that blue egg-shaped objects are called "bleggs" and go in the "blegg bin", while red cubes are called "rubes" and go in the "rube bin".

Once you start working, you notice that bleggs and rubes differ in ways besides color and shape.  Bleggs have fur on their surface, while rubes are smooth.  Bleggs flex slightly to the touch; rubes are hard.  Bleggs are opaque; a rube's surface is slightly translucent.

Soon after you begin working, you encounter a blegg shaded an unusually dark blue—in fact, on closer examination, the color proves to be purple, halfway between red and blue.

Yet wait!  Why are you calling this object a "blegg"?  A "blegg" was originally defined as blue and egg-shaped—the qualification of blueness appears in the very name "blegg", in fact.  This object is not blue.  One of the necessary qualifications is missing; you should call this a "purple egg-shaped object", not a "blegg".

But it so happens that, in addition to being purple and egg-shaped, the object is also furred, flexible, and opaque.  So when you saw the object, you thought, "Oh, a strangely colored blegg."  It certainly isn't a rube... right?
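The snap judgment "Oh, a strangely colored blegg" behaves like a nearest-prototype rule: count how many observed features match each typical exemplar. A toy sketch (the feature names and tally are my illustration):

```python
# Prototype feature values from the text.
BLEGG = {"color": "blue", "shape": "egg", "texture": "furred",
         "flex": "flexible", "opacity": "opaque"}
RUBE = {"color": "red", "shape": "cube", "texture": "smooth",
        "flex": "hard", "opacity": "translucent"}

def classify(obj):
    """Assign the label whose prototype matches more observed features."""
    b = sum(obj.get(k) == v for k, v in BLEGG.items())
    r = sum(obj.get(k) == v for k, v in RUBE.items())
    return "blegg" if b > r else "rube" if r > b else "unsure"

purple_egg = {"color": "purple", "shape": "egg", "texture": "furred",
              "flex": "flexible", "opacity": "opaque"}
print(classify(purple_egg))  # "blegg": 4 of 5 features match despite the color
```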

Still, you aren't quite sure what to do next.  So you call over Susan the Senior Sorter.

continue reading »

The Cluster Structure of Thingspace

41 Eliezer_Yudkowsky 08 February 2008 12:07AM

Followup to:  Typicality and Asymmetrical Similarity

The notion of a "configuration space" is a way of translating object descriptions into object positions.  It may seem like blue is "closer" to blue-green than to red, but how much closer?  It's hard to answer that question by just staring at the colors.  But it helps to know that the (proportional) color coordinates in RGB are 0:0:5, 0:3:2 and 5:0:0.  It would be even clearer if plotted on a 3D graph.
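With the coordinates from the text, the claim becomes a simple distance computation (the variable names are mine):

```python
import math

# Proportional RGB coordinates from the text, on a 0-5 scale:
blue       = (0, 0, 5)
blue_green = (0, 3, 2)
red        = (5, 0, 0)

def dist(p, q):
    """Euclidean distance in the 3-D color configuration space."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

print(dist(blue, blue_green))  # ~4.24
print(dist(blue, red))         # ~7.07: blue really is closer to blue-green
```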

In the same way, you can see a robin as a robin—brown tail, red breast, standard robin shape, maximum flying speed when unladen, its species-typical DNA and individual alleles.  Or you could see a robin as a single point in a configuration space whose dimensions described everything we knew, or could know, about the robin.

A robin is bigger than a virus, and smaller than an aircraft carrier—that might be the "volume" dimension.  Likewise a robin weighs more than a hydrogen atom, and less than a galaxy; that might be the "mass" dimension.  Different robins will have strong correlations between "volume" and "mass", so the robin-points will be lined up in a fairly linear string, in those two dimensions—but the correlation won't be exact, so we do need two separate dimensions.

This is the benefit of viewing robins as points in space:  You couldn't see the linear lineup as easily if you were just imagining the robins as cute little wing-flapping creatures.

continue reading »

Typicality and Asymmetrical Similarity

25 Eliezer_Yudkowsky 06 February 2008 09:20PM

Followup to:  Similarity Clusters

Birds fly.  Well, except ostriches don't.  But which is a more typical bird—a robin, or an ostrich?
Which is a more typical chair:  A desk chair, a rocking chair, or a beanbag chair?

Most people would say that a robin is a more typical bird, and a desk chair is a more typical chair.  The cognitive psychologists who study this sort of thing experimentally do so under the heading of "typicality effects" or "prototype effects" (Rosch and Lloyd 1978).  For example, if you ask subjects to press a button to indicate "true" or "false" in response to statements like "A robin is a bird" or "A penguin is a bird", reaction times are faster for more central examples.  (I'm still unpacking my books, but I'm reasonably sure my source on this is Lakoff 1986.)  Typicality measures correlate well using different investigative methods—reaction times are one example; you can also ask people to directly rate, on a scale of 1 to 10, how well an example (like a specific robin) fits a category (like "bird").

So we have a mental measure of typicality—which might, perhaps, function as a heuristic—but is there a corresponding bias we can use to pin it down?

Well, which of these statements strikes you as more natural:  "98 is approximately 100", or "100 is approximately 98"?  If you're like most people, the first statement seems to make more sense.  (Sadock 1977.)  For similar reasons, people asked to rate how similar Mexico is to the United States, gave consistently higher ratings than people asked to rate how similar the United States is to Mexico.  (Tversky and Gati 1978.)

And if that still seems harmless, a study by Rips (1975) showed that people were more likely to expect a disease would spread from robins to ducks on an island, than from ducks to robins.  Now this is not a logical impossibility, but in a pragmatic sense, whatever difference separates a duck from a robin and would make a disease less likely to spread from a duck to a robin, must also be a difference between a robin and a duck, and would make a disease less likely to spread from a robin to a duck.

continue reading »

Similarity Clusters

26 Eliezer_Yudkowsky 06 February 2008 03:34AM

Followup to:  Extensions and Intensions

Once upon a time, the philosophers of Plato's Academy claimed that the best definition of human was a "featherless biped".  Diogenes of Sinope, also called Diogenes the Cynic, is said to have promptly exhibited a plucked chicken and declared "Here is Plato's man."  The Platonists promptly changed their definition to "a featherless biped with broad nails".

No dictionary, no encyclopedia, has ever listed all the things that humans have in common.  We have red blood, five fingers on each of two hands, bony skulls, 23 pairs of chromosomes—but the same might be said of other animal species.  We make complex tools to make complex tools, we use syntactical combinatorial language, we harness critical fission reactions as a source of energy: these things may serve to single out only humans, but not all humans—many of us have never built a fission reactor.  With the right set of necessary-and-sufficient gene sequences you could single out all humans, and only humans—at least for now—but it would still be far from all that humans have in common.

But so long as you don't happen to be near a plucked chicken, saying "Look for featherless bipeds" may serve to pick out a few dozen of the particular things that are humans, as opposed to houses, vases, sandwiches, cats, colors, or mathematical theorems.
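The Platonic definition is easy to write down as a predicate, and Diogenes' counterexample satisfies it. A toy sketch (the attribute names are mine):

```python
def platonic_human(creature):
    """Plato's definition: a featherless biped."""
    return creature["legs"] == 2 and not creature["feathers"]

socrates        = {"legs": 2, "feathers": False}
sparrow         = {"legs": 2, "feathers": True}
plucked_chicken = {"legs": 2, "feathers": False}  # Diogenes' exhibit

print(platonic_human(socrates))         # True
print(platonic_human(sparrow))          # False
print(platonic_human(plucked_chicken))  # True: "Here is Plato's man."
```

The definition still works as a rough pointer, which is the paragraph's point: it picks out humans in most contexts even though it fails as a necessary-and-sufficient criterion.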

continue reading »
