Scope insensitivity in juries
Juries found to give harsher penalties to criminals who hurt few people than to those who hurt many people:
http://spp.sagepub.com/content/early/2010/08/24/1948550610382308.full.pdf+html
Intellectual Hipsters and Meta-Contrarianism
Related to: Why Real Men Wear Pink, That Other Kind of Status, Pretending to be Wise, The "Outside The Box" Box
WARNING: Beware of things that are fun to argue -- Eliezer Yudkowsky
Science has inexplicably failed to come up with a precise definition of "hipster", but from my limited understanding a hipster is a person who deliberately uses unpopular, obsolete, or obscure styles and preferences in an attempt to be "cooler" than the mainstream. But why would being deliberately uncool be cooler than being cool?
As previously discussed, in certain situations refusing to signal can be a sign of high status. Thorstein Veblen invented the term "conspicuous consumption" to refer to the showy spending habits of the nouveau riche, who unlike the established money of his day took great pains to signal their wealth by buying fast cars, expensive clothes, and shiny jewelry. Why was such flashiness common among new money but not old? Because the old money were so secure in their position that it never even occurred to them that they might be confused with poor people, whereas new money, with their lack of aristocratic breeding, worried they might be mistaken for poor people if they didn't make it blatantly obvious that they had expensive things.
The old money might have started off not buying flashy things for pragmatic reasons - they didn't need to, so why waste the money? But if F. Scott Fitzgerald is to be believed, the old money actively cultivated an air of superiority to the nouveau riche and their conspicuous consumption; not buying flashy objects becomes a matter of principle. This makes sense: the nouveau riche need to differentiate themselves from the poor, but the old money need to differentiate themselves from the nouveau riche.
This process is called countersignaling, and one can find its telltale patterns in many walks of life. Those who study human romantic attraction warn men not to "come on too strong", and this has similarities to the nouveau riche example. A total loser might come up to a woman without a hint of romance, promise her nothing, and demand sex. A more sophisticated man might buy roses for a woman, write her love poetry, hover on her every wish, et cetera; this signifies that he is not a total loser. But the most desirable men may deliberately avoid doing nice things for women in an attempt to signal they are so high status that they don't need to. The average man tries to differentiate himself from the total loser by being nice; the extremely attractive man tries to differentiate himself from the average man by not being especially nice.
In all three examples, people at the top of the pyramid end up displaying characteristics similar to those at the bottom. Hipsters deliberately wear the same clothes uncool people wear. Families with old money don't wear much more jewelry than the middle class. And very attractive men approach women with the same lack of subtlety a total loser would use.1
If politics, philosophy, and religion are really about signaling, we should expect to find countersignaling there as well.
Conflicts Between Mental Subagents: Expanding Wei Dai's Master-Slave Model
Related to: Alien Parasite Technical Guy, A Master-Slave Model of Human Preferences
In Alien Parasite Technical Guy, Phil Goetz argues that mental conflicts can be explained as a conscious mind (the "alien parasite") trying to take over from an unsuspecting unconscious.
Last year, Wei Dai presented a model (the master-slave model) with some major points of departure from Phil's: in particular, the conscious mind was a special-purpose subroutine and the unconscious had a pretty good idea what it was doing1. But Wei said at the beginning that his model ignored akrasia.
I want to propose an expansion and slight amendment of Wei's model so it includes akrasia and some other features of human behavior. Starting with the signaling theory implicit in Wei's writing, I'll move on to show why optimizing for signaling ability would produce behaviors like self-signaling and akrasia, speculate on why the same model would also promote some of the cognitive biases discussed here, and finish with even more speculative links between a wide range of conscious-unconscious conflicts.
The Signaling Theory of Consciousness
This model begins with the signaling theory of consciousness. In the signaling theory, the conscious mind is the psychological equivalent of a public relations agency. The mind-at-large (hereafter called U for “unconscious” and similar to Wei's “master”) has the socially unacceptable primate drives, like sex, status, and survival, that you would expect of a fitness-maximizing agent. These are unsuitable for polite society, where only socially admirable values like true love, compassion, and honor are likely to win you friends and supporters. U could lie and claim to support the admirable values, but most people are terrible liars and society would probably notice.
So you wall off a little area of your mind (hereafter called C for “conscious” and similar to Wei's “slave”) and convince it that it has only admirable goals. C is allowed access to the speech centers. Now if anyone asks you what you value, C answers "Only admirable things like compassion and honor, of course!" and no one detects a lie because the part of the mind that's moving your mouth isn't lying.
This is a useful model because it replicates three observed features of the real world: people say they have admirable goals, they honestly believe on introspection that they have admirable goals, but they tend to pursue more selfish goals. But so far, it doesn't explain the most important question: why do people sometimes pursue their admirable goals and sometimes not?
Book Review: The Root of Thought
Related to: Brain Breakthrough! It's Made of Neurons!
I can't really recommend Andrew Koob's The Root of Thought. It's poorly written, poorly proofread, contains little information beyond what's in the Scientific American review, and comes across as about one part neuroscience to three parts angry rant. But it does present an interesting hypothesis and an interesting case study on a major failure of rationality.
Only about ten percent of the brain is made of neurons; the rest is a diverse group of cells called "glia". "Glia" is Greek for glue, because the scientists who discovered them decided that, since they were in the brain and they weren't neurons, they must just be there to glue the neurons together. Since then, new discoveries have assigned glial cells functions like myelination, injury repair, immune defense, and regulation of blood flow: all important, but mostly things only a biologist could love. The Root of Thought argues that glial cells, especially a kind called astrocytes, are also important in some of the higher functions of thought, including memory, cognition, and maybe even creativity. This is interesting to neuroscientists, and the story of how it was discovered is also interesting to us as aspiring rationalists.
Diseased thinking: dissolving questions about disease
Related to: Disguised Queries, Words as Hidden Inferences, Dissolving the Question, Eight Short Studies on Excuses
Today's therapeutic ethos, which celebrates curing and disparages judging, expresses the liberal disposition to assume that crime and other problematic behaviors reflect social or biological causation. While this absolves the individual of responsibility, it also strips the individual of personhood, and moral dignity
-- George Will, townhall.com
Sandy is a morbidly obese woman looking for advice.
Her husband has no sympathy for her, and tells her she obviously needs to stop eating like a pig, and would it kill her to go to the gym once in a while?
Her doctor tells her that obesity is primarily genetic, and recommends the diet pill orlistat and a consultation with a surgeon about gastric bypass.
Her sister tells her that obesity is a perfectly valid lifestyle choice, and that fat-ism, equivalent to racism, is society's way of keeping her down.
When she tells each of her friends about the opinions of the others, things really start to heat up.
Her husband accuses her doctor and sister of absolving her of personal responsibility with feel-good platitudes that in the end will only prevent her from getting the willpower she needs to start a real diet.
Her doctor accuses her husband of ignorance of the real causes of obesity and of the most effective treatments, and accuses her sister of legitimizing a dangerous health risk that could end with Sandy in hospital or even dead.
Her sister accuses her husband of being a jerk, and her doctor of trying to medicalize her behavior in order to turn it into a "condition" that will keep her on pills for life and make lots of money for Big Pharma.
Sandy is fictional, but similar conversations happen every day, not only about obesity but about a host of other marginal conditions that some consider character flaws, others diseases, and still others normal variation in the human condition. Attention deficit disorder, internet addiction, social anxiety disorder (as one skeptic said, didn't we used to call this "shyness"?), alcoholism, chronic fatigue, oppositional defiant disorder ("didn't we used to call this being a teenager?"), compulsive gambling, homosexuality, Asperger's syndrome, antisocial personality, even depression have all been placed in two or more of these categories by different people.
Sandy's sister may have a point, but this post will concentrate on the debate between her husband and her doctor, with the understanding that the same techniques will apply to evaluating her sister's opinion. The disagreement between Sandy's husband and doctor centers around the idea of "disease". If obesity, depression, alcoholism, and the like are diseases, most people default to the doctor's point of view; if they are not diseases, they tend to agree with the husband.
The debate over such marginal conditions is in many ways a debate over whether or not they are "real" diseases. The usual surface level arguments trotted out in favor of or against the proposition are generally inconclusive, but this post will apply a host of techniques previously discussed on Less Wrong to illuminate the issue.
Blue- and Yellow-Tinted Choices
A man comes to the rabbi and complains about his life: "I have almost no money, my wife is a shrew, and we live in a small apartment with seven unruly kids. It's messy, it's noisy, it's smelly, and I don't want to live."
The rabbi says, "Buy a goat."
"What? I just told you there's hardly room for nine people, and it's messy as it is!"
"Look, you came for advice, so I'm giving you advice. Buy a goat and come back in a month."
In a month the man comes back and he is even more depressed: "It's gotten worse! The filthy goat breaks everything, and it stinks and makes more noise than my wife and seven kids! What should I do?"
The rabbi says, "Sell the goat."
A few days later the man returns to the rabbi, beaming with happiness: "Life is wonderful! We enjoy every minute of it now that there's no goat - only the nine of us. The kids are well-behaved, the wife is agreeable - and we even have some money!"
-- traditional Jewish joke
Related to: Anchoring and Adjustment
Biases are “cognitive illusions” that work on the same principle as optical illusions, and a knowledge of the latter can be profitably applied to the former. Take, for example, these two cubes (source: Lotto Lab, via Boing Boing):

The “blue” tiles on the top face of the left cube are the same color as the “yellow” tiles on the top face of the right cube; if you're skeptical you can prove it with the eyedropper tool in Photoshop (in which both shades come out a rather ugly gray).
The illusion works because visual perception is relative. Outdoor light on a sunny day can be ten thousand times brighter than a fluorescently lit indoor room. As one psychology book put it: "for a student reading this book outside, the black print will be objectively lighter than the white space will be for a student reading the book inside. Nevertheless, both students will perceive the white space as subjectively white and the black space as subjectively black, because the visual system returns to consciousness information about relative rather than absolute lightness." In the two cubes, the visual system takes the yellow or blue tint as a given and outputs to consciousness the colors of each pixel compared to that background.
So this optical illusion occurs when the brain judges quantities relative to their surroundings rather than based on some objective standard. What's the corresponding cognitive illusion?
Antagonizing Opioid Receptors for (Prevention of) Fun and Profit
Related to: Ugh Fields, Are Wireheads Happy?
In his post Ugh Fields, Roko discussed "temporal difference learning", the process by which the brain propagates positive or negative feedback to the closest cause it can find for the feedback. For example, if he forgets to pay his bills and gets in trouble, the trouble (negative feedback) propagates back to thoughts about bills. Next time he gets a bill, he might paradoxically have even more trouble paying it, because it's become associated with trouble and negative emotions, and his brain tends to unconsciously flinch away from it.
He links to the associated Wikipedia article:
The TD algorithm has also received attention in the field of neuroscience. Researchers discovered that the firing rate of dopamine neurons in the ventral tegmental area (VTA) and substantia nigra (SNc) appear to mimic the error function in the algorithm. The error function reports back the difference between the estimated reward at any given state or time step and the actual reward received. The larger the error function, the larger the difference between the expected and actual reward. When this is paired with a stimulus that accurately reflects a future reward, the error can be used to associate the stimulus with the future reward.
Dopamine cells appear to behave in a similar manner. In one experiment measurements of dopamine cells were made while training a monkey to associate a stimulus with the reward of juice. Initially the dopamine cells increased firing rates when exposed to the juice, indicating a difference in expected and actual rewards. Over time this increase in firing back propagated to the earliest reliable stimulus for the reward. Once the monkey was fully trained, there was no increase in firing rate upon presentation of the predicted reward. This mimics closely how the error function in TD is used for reinforcement learning.
So if I understand this right, the monkey hears a bell and is unimpressed, having no expectation of reward. Then the monkey gets some juice that tastes really good and activates (opioid dependent?) reward pathways. The dopamine system is pretty surprised, and broadcasts that surprise back to all the neurons that have been especially active recently, most notably the neurons that activated upon hearing the bell. These neurons are now more heavily associated with the dopamine system. So the next time the monkey hears a bell, it has a greater expectation of reward.
And in this case it doesn't matter, because the monkey can't do anything about it. But if it were a circus monkey, and its trainer was trying to teach it to do a backflip to get juice, the association between backflips and juice would be pretty useful. As long as the monkey wanted juice, merely entertaining the plan of doing a backflip would have motivational value that promotes the correct action.
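The back-propagation of prediction error described above can be sketched with a minimal TD(0) value-learning loop. The two-state "bell then juice" episode, the learning rate, and the reward value are illustrative assumptions, not the actual experiment; the point is that the value estimate migrates backward from the juice to the bell, after which the juice itself produces no surprise.

```python
# Minimal sketch of temporal-difference (TD) learning with made-up numbers.
# "bell" is followed by "juice", which delivers a reward of 1.

def td_learning(episodes=100, alpha=0.5, gamma=1.0):
    V = {"bell": 0.0, "juice": 0.0}  # learned value (expected reward) per state
    for _ in range(episodes):
        # At the bell: no immediate reward, but the bell inherits whatever
        # value the juice state currently predicts (this is the back-propagation).
        error = 0.0 + gamma * V["juice"] - V["bell"]
        V["bell"] += alpha * error
        # At the juice: reward arrives; the "surprise" is reward minus prediction.
        error = 1.0 - V["juice"]
        V["juice"] += alpha * error
    return V

values = td_learning()
# After training, both states predict the reward, so the prediction error
# ("surprise") at the juice has shrunk toward zero, just like the monkey's
# dopamine response once the bell reliably predicts the juice.
print(values)
```

After enough episodes, the value of the bell converges toward the value of the juice, mirroring how the dopamine burst migrates to the earliest reliable predictor.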
The Sinclair Method is a promising technique for treating alcoholics that elegantly demonstrates these pathways by sabotaging them.
Eight Short Studies On Excuses
The Clumsy Game-Player
You and a partner are playing an Iterated Prisoner's Dilemma. Both of you have publicly pre-committed to the tit-for-tat strategy. By iteration 5, you're going happily along, racking up the bonuses of cooperation, when your partner unexpectedly presses the "defect" button.
"Uh, sorry," says your partner. "My finger slipped."
"I still have to punish you just in case," you say. "I'm going to defect next turn, and we'll see how you like it."
"Well," says your partner, "knowing that, I guess I'll defect next turn too, and we'll both lose out. But hey, it was just a slipped finger. By not trusting me, you're costing us both the benefits of one turn of cooperation."
"True," you respond, "but if I don't do it, you'll feel free to defect whenever you feel like it, using the 'finger slipped' excuse."
"How about this?" proposes your partner. "I promise to take extra care that my finger won't slip again. You promise that if my finger does slip again, you will punish me terribly, defecting for a bunch of turns. That way, we trust each other again, and we can still get the benefits of cooperation next turn."
You don't believe that your partner's finger really slipped, not for an instant. But the plan still seems like a good one. You accept the deal, and you continue cooperating until the experimenter ends the game.
After the game, you wonder what went wrong, and whether you could have played better. You decide that there was no better way to deal with your partner's "finger-slip" - after all, the plan you enacted gave you maximum possible utility under the circumstances. But you wish that you'd pre-committed, at the beginning, to saying "and I will punish finger slips equally to deliberate defections, so make sure you're careful."
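The stakes of this bargaining can be sketched as a toy simulation. Everything here (the slip rate, the forgiveness rate, the payoff matrix) is an assumption invented for illustration; the sketch just shows why noise makes unforgiving tit-for-tat expensive, which is what the two players are negotiating about.

```python
import random

# Standard Prisoner's Dilemma payoffs, as (row player, column player).
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def play(forgiving, rounds=200, slip=0.02, forgive=0.2, seed=0):
    """Total joint payoff for two tit-for-tat players over `rounds` turns."""
    rng = random.Random(seed)
    last_a, last_b = "C", "C"
    total = 0
    for _ in range(rounds):
        # Tit-for-tat: copy the opponent's previous move. The forgiving
        # variant occasionally overlooks a defection and cooperates anyway.
        a = "C" if (forgiving and rng.random() < forgive) else last_b
        b = "C" if (forgiving and rng.random() < forgive) else last_a
        # The slipped finger: an intended cooperation comes out as a defection.
        if a == "C" and rng.random() < slip:
            a = "D"
        if b == "C" and rng.random() < slip:
            b = "D"
        total += sum(PAYOFF[(a, b)])
        last_a, last_b = a, b
    return total

# A single slip locks plain tit-for-tat into a cycle of retaliation;
# occasional forgiveness lets the players recover the cooperation bonus.
print("plain:    ", play(forgiving=False))
print("forgiving:", play(forgiving=True))
```

Averaged over many random seeds, the forgiving variant reliably outscores the unforgiving one, which is why a deal like the one in the dialogue (forgive once, but punish repeats harshly) can be worth striking.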
It's not like anything to be a bat
...at least not if you accept a certain line of anthropic argument.
Thomas Nagel famously challenged the philosophical world to come to terms with qualia in his essay "What is it Like to Be a Bat?". Bats, with sensory systems so completely different from those of humans, must have exotic bat qualia that we could never imagine. Even if we deduce all the physical principles behind echolocation, even if we could specify the movement of every atom in a bat's senses and nervous system that represents its knowledge of where an echolocated insect is, we still have no idea what it's like to feel a subjective echolocation quale.
Anthropic reasoning is the idea that you can reason conditioning on your own existence. For example, the Doomsday Argument says that you would be more likely to exist in the present day if the overall number of future humans was medium-sized instead of humongous, therefore since you exist in the present day, there must be only a medium-sized number of future humans, and the apocalypse must be nigh, for values of nigh equal to "within a few hundred years or so".
The Buddhists have a parable to motivate young seekers after enlightenment. They say - there are zillions upon zillions of insects, trillions upon trillions of lesser animals, and only a relative handful of human beings. For a reincarnating soul to be born as a human being, then, is a rare and precious gift, and an opportunity that should be seized with great enthusiasm, as it will be endless eons before it comes around again.
Whatever one thinks of reincarnation, the parable raises an interesting point. Considering the vast number of non-human animals compared to humans, the probability of being a human is vanishingly low. Therefore, chances are that if I could be an animal, I would be. This makes a strong anthropic argument that it is impossible for me to be an animal.
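The arithmetic behind this reversal can be made explicit. The population figures below are pure assumptions chosen for the sketch; the point is that, by Bayes' rule, observing "I am human" is overwhelming evidence against any hypothesis that puts animals in the anthropic reference class.

```python
from fractions import Fraction

# Invented population numbers for illustration only.
humans  = 10**10
animals = 10**18

# H1: the reference class includes all animals; H2: humans only.
# Likelihood of the observation "I am human" under each, with a uniform draw:
p_human_given_H1 = Fraction(humans, humans + animals)
p_human_given_H2 = Fraction(1)

# With equal priors, the posterior odds equal the likelihood ratio.
odds_H2_over_H1 = p_human_given_H2 / p_human_given_H1
print(odds_H2_over_H1)  # roughly a hundred-million-to-one in favor of H2
```

With these numbers the update is on the order of 10^8 to 1, which is the sense in which the parable's premise, taken seriously, argues that one could never have been an animal at all.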
More thoughts on assertions
Response to: The "show, don't tell" nature of argument
Morendil says not to trust simple assertions. He's right, for the particular class of simple assertions he's talking about. But in order to see why, let's look at different types of assertions and see how useful it is to believe them.
Summary:
- Hearing an assertion can be strong evidence if you know nothing else about the proposition in question.
- Hearing an assertion is not useful evidence if you already have a reasonable estimate of how many people do or don't believe the proposition.
- An assertion by a leading authority is stronger than an assertion by someone else.
- An assertion plus an assertion that there is evidence makes no factual difference, but is a valuable signal.
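The first two bullets can be illustrated with a one-line Bayes update. All the probabilities below are invented for illustration: when assertions track truth much better than chance, hearing one moves you a lot; when the rate of assertion merely mirrors a distribution of belief you already knew about, it moves you not at all.

```python
# How much should an assertion move you? A sketch via Bayes' rule,
# with made-up probabilities.

def posterior(prior, p_assert_if_true, p_assert_if_false):
    """P(proposition is true | someone asserted it)."""
    num = prior * p_assert_if_true
    return num / (num + (1 - prior) * p_assert_if_false)

# Knowing nothing else, an assertion is strong evidence, because people
# assert truths far more often than falsehoods about ordinary matters:
print(posterior(0.5, 0.9, 0.1))   # moves 0.5 up to about 0.9

# But if you already know how opinion is distributed, one more assertion
# drawn from that known pool is no surprise, and the update vanishes:
print(posterior(0.5, 0.5, 0.5))   # stays at 0.5
```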