Purchase Fuzzies and Utilons Separately
Previously in series: Money: The Unit of Caring
Yesterday:
There is this very, very old puzzle/observation in economics about the lawyer who spends an hour volunteering at the soup kitchen, instead of working an extra hour and donating the money to hire someone...
If the lawyer needs to work an hour at the soup kitchen to keep himself motivated and remind himself why he's doing what he's doing, that's fine. But he should also be donating some of the hours he worked at the office, because that is the power of professional specialization and it is how grownups really get things done. One might consider the check as buying the right to volunteer at the soup kitchen, or validating the time spent at the soup kitchen.
I hold open doors for little old ladies. I can't actually remember the last time this happened literally (though I'm sure it has, sometime in the last year or so). But within the last month, say, I was out on a walk and discovered a station wagon parked in a driveway with its trunk completely open, giving full access to the car's interior. I looked in to see if there were packages being taken out, but this was not so. I looked around to see if anyone was doing anything with the car. And finally I went up to the house and knocked, then rang the bell. And yes, the trunk had been accidentally left open.
Under other circumstances, this would be a simple act of altruism, which might signify true concern for another's welfare, or fear of guilt for inaction, or a desire to signal trustworthiness to oneself or others, or finding altruism pleasurable. I think that these are all perfectly legitimate motives, by the way; I might give bonus points for the first, but I wouldn't deduct any penalty points for the others. Just so long as people get helped.
But in my own case, since I already work in the nonprofit sector, the further question arises as to whether I could have better employed the same sixty seconds in a more specialized way, to bring greater benefit to others. That is: can I really defend this as the best use of my time, given the other things I claim to believe?
So You Think You're a Bayesian? The Natural Mode of Probabilistic Reasoning
Related to: The Conjunction Fallacy, Conjunction Controversy
The heuristics and biases research program in psychology has documented many different ways that humans fail to reason correctly under uncertainty. In experiment after experiment, it shows that we use heuristics to approximate probabilities rather than making the appropriate calculation, and that these heuristics are systematically biased. However, a tweak in the experimental protocols seems to remove the biases altogether and casts doubt on whether we are actually using heuristics. Instead, it appears that the errors are simply an artifact of how our brains internally store information about uncertainty. Theoretical considerations support this view.
EDIT: The view presented here is controversial in the heuristics and biases literature; see Unnamed's comment on this post below.
EDIT 2: The author no longer holds the views presented in this post. See this comment.
A common example of the failure of humans to reason correctly under uncertainty is the conjunction fallacy. Consider the following question:
Linda is 31 years old, single, outspoken and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in antinuclear demonstrations.
What is the probability that Linda is:
(a) a bank teller
(b) a bank teller and active in the feminist movement
In a replication by Gigerenzer, 91% of subjects rank (b) as more probable than (a), saying that it is more likely that Linda is active in the feminist movement AND a bank teller than that Linda is simply a bank teller (1993). The conjunction rule of probability states that the probability of two things being true is less than or equal to the probability of one of those things being true. Formally, P(A & B) ≤ P(A). So this experiment shows that people violate the conjunction rule, and thus fail to reason correctly under uncertainty. The representative heuristic has been proposed as an explanation for this phenomenon. To use this heuristic, you evaluate the probability of a hypothesis by comparing how "alike" it is to the data. Someone using the representative heuristic looks at the Linda question and sees that Linda's characteristics resemble those of a feminist bank teller much more closely than those of just a bank teller, and so they conclude that Linda is more likely to be a feminist bank teller than a bank teller.
This is the standard story, but are people really using the representative heuristic in the Linda problem? Consider the following rewording of the question:
Linda is 31 years old, single, outspoken and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in antinuclear demonstrations.
There are 100 people who fit the description above. How many of them are:
(a) bank tellers
(b) bank tellers and active in the feminist movement
Notice that the question is now strictly in terms of frequencies. Under this version, only 22% of subjects rank (b) as more probable than (a) (Gigerenzer, 1993). The only thing that changed is the question that is asked; the description of Linda (and the 100 people) remains unchanged, so the representativeness of the description for the two groups should remain unchanged. Thus people are not using the representative heuristic - at least not in general.
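The conjunction rule itself is easy to verify mechanically. The sketch below simulates a population with arbitrary, made-up base rates (the 5% and 30% figures are illustrative assumptions, not data from the studies) and shows why the count for the conjunction can never exceed the count for the single event:

```python
import random

random.seed(0)

# Hypothetical, illustrative base rates -- not taken from any study.
N = 100_000
p_teller, p_feminist = 0.05, 0.30

tellers = 0
feminist_tellers = 0
for _ in range(N):
    is_teller = random.random() < p_teller
    is_feminist = random.random() < p_feminist
    tellers += is_teller
    feminist_tellers += is_teller and is_feminist

# Every feminist bank teller is also a bank teller, so the conjunction
# count is at most the single-event count: P(A & B) <= P(A).
print(tellers, feminist_tellers)
```

The frequency framing of Gigerenzer's version makes this subset relationship visible, which may be why error rates drop so sharply.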
Money: The Unit of Caring
Previously in series: Helpless Individuals
Steve Omohundro has suggested a folk theorem to the effect that, within the interior of any approximately rational, self-modifying agent, the marginal benefit of investing additional resources in anything ought to be about equal. Or, to put it a bit more exactly, shifting a unit of resource between any two tasks should produce no increase in expected utility, relative to the agent's utility function and its probabilistic expectations about its own algorithms.
This resource balance principle implies that—over a very wide range of approximately rational systems, including even the interior of a self-modifying mind—there will exist some common currency of expected utilons, by which everything worth doing can be measured.
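The balance condition can be illustrated with a toy allocator. The task names, weights, and log-shaped utility curve below are all invented for the example; the point is only that greedily spending each unit of resource where its marginal benefit is highest drives the marginal benefits of all tasks toward equality:

```python
# Toy illustration of the resource balance principle. The tasks,
# weights, and log-style utility curve are hypothetical choices.
def marginal_utility(spent):
    # Derivative of log(1 + x): the value of the next unit of resource.
    return 1.0 / (1.0 + spent)

weights = {"research": 3.0, "outreach": 2.0, "ops": 1.0}
alloc = {task: 0 for task in weights}

# Give each unit of a 300-unit budget to the task whose next unit buys
# the most utility. With concave utilities this converges toward equal
# marginal benefit across tasks.
for _ in range(300):
    best = max(weights, key=lambda t: weights[t] * marginal_utility(alloc[t]))
    alloc[best] += 1

marginals = {t: weights[t] * marginal_utility(alloc[t]) for t in weights}
print(alloc)
print(marginals)  # roughly equal across all three tasks
```

At the optimum, no unit can be shifted between tasks for a gain, which is exactly the "no increase in expected utility" condition stated above.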
In our society, this common currency of expected utilons is called "money". It is the measure of how much society cares about something.
This is a brutal yet obvious point, which many are motivated to deny.
With this audience, I hope, I can simply state it and move on. It's not as if you thought "society" was intelligent, benevolent, and sane up until this point, right?
I say this to make a certain point held in common across many good causes. Any charitable institution you've ever had a kind word for, certainly wishes you would appreciate this point, whether or not they've ever said anything out loud. For I have listened to others in the nonprofit world, and I know that I am not speaking only for myself here...
Rationality: Common Interest of Many Causes
Previously in series: Church vs. Taskforce
It is a not-so-hidden agenda of this site, Less Wrong, that there are many causes which benefit from the spread of rationality—because it takes a little more rationality than usual to see their case, as a supporter, or even just a supportive bystander. Not just the obvious causes like atheism, but things like marijuana legalization—where you could wish that people were a bit more self-aware about their motives and the nature of signaling, and a bit more moved by inconvenient cold facts. The Institute Which May Not Be Named was merely an unusually extreme case of this, wherein it got to the point that after years of bogging down I threw up my hands and explicitly recursed on the job of creating rationalists.
But of course, not all the rationalists I create will be interested in my own project—and that's fine. You can't capture all the value you create, and trying can have poor side effects.
If the supporters of other causes are enlightened enough to think similarly...
Then all the causes which benefit from spreading rationality, can, perhaps, have something in the way of standardized material to which to point their supporters—a common task, centralized to save effort—and think of themselves as spreading a little rationality on the side. They won't capture all the value they create. And that's fine. They'll capture some of the value others create. Atheism has very little to do directly with marijuana legalization, but if both atheists and anti-Prohibitionists are willing to step back a bit and say a bit about the general, abstract principle of confronting a discomforting truth that interferes with a fine righteous tirade, then both atheism and marijuana legalization pick up some of the benefit from both efforts.
But this requires—I know I'm repeating myself here, but it's important—that you be willing not to capture all the value you create. It requires that, in the course of talking about rationality, you maintain an ability to temporarily shut up about your own cause even though it is the best cause ever. It requires that you don't regard those other causes, and they do not regard you, as competing for a limited supply of rationalists with a limited capacity for support; but, rather, creating more rationalists and increasing their capacity for support. You only reap some of your own efforts, but you reap some of others' efforts as well.
If you and they don't agree on everything—especially priorities—you have to be willing to agree to shut up about the disagreement. (Except possibly in specialized venues, out of the way of the mainstream discourse, where such disagreements are explicitly prosecuted.)
What is Evidence?
"The sentence 'snow is white' is true if and only if snow is white."
—Alfred Tarski
"To say of what is, that it is, or of what is not, that it is not, is true."
—Aristotle, Metaphysics IV
If these two quotes don't seem like a sufficient definition of "truth", read this. Today I'm going to talk about "evidence". (I also intend to discuss beliefs-of-fact, not emotions or morality, as distinguished here.)
Walking along the street, your shoelaces come untied. Shortly thereafter, for some odd reason, you start believing your shoelaces are untied. Light leaves the Sun and strikes your shoelaces and bounces off; some photons enter the pupils of your eyes and strike your retina; the energy of the photons triggers neural impulses; the neural impulses are transmitted to the visual-processing areas of the brain; and there the optical information is processed and reconstructed into a 3D model that is recognized as an untied shoelace. There is a sequence of events, a chain of cause and effect, within the world and your brain, by which you end up believing what you believe. The final outcome of the process is a state of mind which mirrors the state of your actual shoelaces.
Why truth? And...
Some of the comments in this blog have touched on the question of why we ought to seek truth. (Thankfully not many have questioned what truth is.) Our shaping motivation for configuring our thoughts to rationality, which determines whether a given configuration is "good" or "bad", comes from whyever we wanted to find truth in the first place.
It is written: "The first virtue is curiosity." Curiosity is one reason to seek truth, and it may not be the only one, but it has a special and admirable purity. If your motive is curiosity, you will assign priority to questions according to how the questions, themselves, tickle your personal aesthetic sense. A trickier challenge, with a greater probability of failure, may be worth more effort than a simpler one, just because it is more fun.
What Do We Mean By "Rationality"?
We mean:
- Epistemic rationality: believing, and updating on evidence, so as to systematically improve the correspondence between your map and the territory. The art of obtaining beliefs that correspond to reality as closely as possible. This correspondence is commonly termed "truth" or "accuracy", and we're happy to call it that.
- Instrumental rationality: achieving your values. Not necessarily "your values" in the sense of being selfish values or unshared values: "your values" means anything you care about. The art of choosing actions that steer the future toward outcomes ranked higher in your preferences. On LW we sometimes refer to this as "winning".
If that seems like a perfectly good definition, you can stop reading here; otherwise continue.
My Bayesian Enlightenment
Followup to: The Magnitude of His Own Folly
I remember (dimly, as human memories go) the first time I self-identified as a "Bayesian". Someone had just asked a malformed version of an old probability puzzle, saying:
If I meet a mathematician on the street, and she says, "I have two children, and at least one of them is a boy," what is the probability that they are both boys?
In the correct version of this story, the mathematician says "I have two children", and you ask, "Is at least one a boy?", and she answers "Yes". Then the probability is 1/3 that they are both boys.
But in the malformed version of the story—as I pointed out—one would common-sensically reason:
If the mathematician has one boy and one girl, then my prior probability for her saying 'at least one of them is a boy' is 1/2 and my prior probability for her saying 'at least one of them is a girl' is 1/2. There's no reason to believe, a priori, that the mathematician will only mention a girl if there is no possible alternative.
So I pointed this out, and worked the answer using Bayes's Rule, arriving at a probability of 1/2 that the children were both boys. I'm not sure whether or not I knew, at this point, that Bayes's rule was called that, but it's what I used.
And lo, someone said to me, "Well, what you just gave is the Bayesian answer, but in orthodox statistics the answer is 1/3. We just exclude the possibilities that are ruled out, and count the ones that are left, without trying to guess the probability that the mathematician will say this or that, since we have no way of really knowing that probability—it's too subjective."
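Both answers can be computed by direct enumeration. The sketch below works out the 1/3 from naive conditioning and the 1/2 from the Bayesian treatment, under the assumption stated above that a mixed-family mathematician mentions a boy or a girl with equal probability:

```python
from fractions import Fraction

# The four equally likely sibling pairs, in birth order.
families = [("B", "B"), ("B", "G"), ("G", "B"), ("G", "G")]

# "Orthodox" answer: exclude the ruled-out case and count what is left,
# ignoring how the statement was generated.
with_boy = [f for f in families if "B" in f]
both_boys = [f for f in with_boy if f == ("B", "B")]
p_conditioned = Fraction(len(both_boys), len(with_boy))  # 1/3

# Bayesian answer: weight each family by the probability it produces
# the utterance "at least one of them is a boy", assuming a mixed
# family says "boy" or "girl" with probability 1/2 each.
p_say_boy = {
    ("B", "B"): Fraction(1),
    ("B", "G"): Fraction(1, 2),
    ("G", "B"): Fraction(1, 2),
    ("G", "G"): Fraction(0),
}
total = sum(p_say_boy.values())
p_bayes = p_say_boy[("B", "B")] / total

print(p_conditioned, p_bayes)  # 1/3 1/2
```

The disagreement is entirely about whether the likelihood of the mathematician's statement belongs in the calculation, which is the crux of the exchange that follows.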
I responded—note that this was completely spontaneous—"What on Earth do you mean? You can't avoid assigning a probability to the mathematician making one statement or another. You're just assuming the probability is 1, and that's unjustified."
To which the one replied, "Yes, that's what the Bayesians say. But frequentists don't believe that."
And I said, astounded: "How can there possibly be such a thing as non-Bayesian statistics?"
Generalizing From One Example
Related to: The Psychological Unity of Humankind, Instrumental vs. Epistemic: A Bardic Perspective
"Everyone generalizes from one example. At least, I do."
-- Vlad Taltos (Issola, Steven Brust)
My old professor, David Berman, liked to talk about what he called the "typical mind fallacy", which he illustrated through the following example:
There was a debate, in the late 1800s, about whether "imagination" was simply a turn of phrase or a real phenomenon. That is, can people actually create images in their minds which they see vividly, or do they simply say "I saw it in my mind" as a metaphor for considering what it looked like?
Upon hearing this, my response was "How the stars was this actually a real debate? Of course we have mental imagery. Anyone who doesn't think we have mental imagery is either such a fanatical Behaviorist that she doubts the evidence of her own senses, or simply insane." Unfortunately, the professor was able to parade a long list of famous people who denied mental imagery, including some leading scientists of the era. And this was all before Behaviorism even existed.
The debate was resolved by Francis Galton, a fascinating man who among other achievements invented eugenics, the "wisdom of crowds", and standard deviation. Galton gave people some very detailed surveys, and found that some people did have mental imagery and others didn't. The ones who did had simply assumed everyone did, and the ones who didn't had simply assumed everyone didn't, to the point of coming up with absurd justifications for why they were lying or misunderstanding the question. There was a wide spectrum of imaging ability, from about five percent of people with perfect eidetic imagery to three percent of people completely unable to form mental images.
Dr. Berman dubbed this the Typical Mind Fallacy: the human tendency to believe that one's own mental structure can be generalized to apply to everyone else's.
Mental Crystallography
Brains organize things into familiar patterns, which are different for different people. This can make communication tricky, so it's useful to conceptualize these patterns and use them to help translation efforts.
Crystals are nifty things! The same sort of crystal will reliably organize in the same pattern, and always break the same way under stress.
Brains are also nifty things! The same person's brain will typically view everything through a favorite lens (or two), and will need to work hard to translate input that comes in through another channel or in different terms. When a brain acquires new concepts - even really vital ones - the new idea will result in recognizably-shaped brain-bits. Different brains, therefore, handle concepts differently, and this can make it hard for us to talk to each other.
This works on a number of levels, although perhaps the most obvious is the divide between styles of thought on the order of "visual thinker", "verbal thinker", etc. People who differ here have to constantly reinterpret everything they say to one another, moving from non-native mode to native mode and back with every bit of data exchanged. People also store and retrieve memories differently, form first-approximation hypotheses and models differently, prioritize sensory input differently, have different levels of introspective luminosity, and experience different affect around concepts and propositions. Over time, we accumulate different skills, knowledge, cognitive habits, shortcuts, and mental filing debris. Intuitions differ - appeals to intuition will only convert people who share the premises natively. We have lots in common, but high enough variance that it's impressive how much we do manage to communicate over not only inferential distances, but also fundamentally diverse brain plans. Basically, you can hit two crystals the same way with the same hammer, but they can still break along different cleavage planes.