Dangers of steelmanning / principle of charity
As far as I can tell, most people around these parts consider the principle of charity and its super saiyan form, steelmanning, to be Very Good Rationalist Virtues. I basically agree and I in fact operate under these principles more or less automatically now. HOWEVER, no matter how good the rule is, there are always exceptions, which I have found myself increasingly concerned about.
This blog post, which I found in the responses to Yvain's anti-reactionary FAQ, argues that even though the ancient Romans had welfare, the policy was motivated not by concern for the poor or a desire for equality, as our modern welfare policies are, but instead "the Roman dole was wrapped up in discourses about a) the might and wealth of Rome and b) goddess worship... The dole was there because it made the emperor more popular and demonstrated the wealth of Rome to the people. What’s more, the dole was personified as Annona, a goddess to be worshiped and thanked."
So let's assume this guy is right, and imagine that an ancient Roman travels through time to the present day. He reads an article by some progressive arguing (using the rationale one would typically use) that Obama should increase unemployment benefits. "This makes no sense," the Roman thinks to himself. "Why would you give money to someone who doesn't work for it? Why would you reward lack of virtue? Also, what's this about equality? Isn't it right that an upper class exists to rule over a lower class?" Etc.
But fortunately, between when he hopped out of the time machine and when he found this article, a rationalist found him and explained steelmanning and the principle of charity to him. "Ah, yes," he thinks. "Now I remember what the rationalist said. I was not being so charitable. I now realize that this position kind of makes sense, if you read between the lines. Giving more unemployment benefits would, now that I think about it, demonstrate the power of America to the people, and certainly Annona would approve. I don't know why whoever wrote this article didn't just come out and say that, though. Maybe they were confused."
Hopefully you can see what I'm getting at. When you regularly use the principle of charity and steelmanning, you run the risk of:
1. Sticking rigidly to a certain worldview/paradigm/established belief set, even as you find yourself willing to consider more and more concrete propositions. The Roman would have done better to really read the modern progressive's argument, think about it, and try to see where the author was coming from, rather than automatically filtering it through his own worldview. If he consistently filters everything this way, he will never find himself considering alternative ways of seeing the world that might be better.
2. Falsely developing the sense that your worldview/paradigm/established belief set is more popular than it is. Pretty much no one today holds the same values that an ancient Roman does, but if the Roman goes around being charitable all the time then he will probably see his own beliefs reflected back at him a fair amount.
3. Taking arguments more seriously than you possibly should. In rationalist communities I regularly see people say things like "this argument by A sort of makes sense, you just need to frame it in objective, consequentialist terms like blah blah blah blah blah" and then follow it with what looks to me like a completely original thought that I've never seen before. But why didn't A just frame her argument in objective, consequentialist terms? Do we assume that what she wrote was sort of a telephone-game approximation of what was originally a highly logical consequentialist argument? If so, where can I find that argument? And if not, why are we assuming that A is a crypto-consequentialist when she probably isn't? And if we're sure that objective, consequentialist logic is The Way To Go, then shouldn't we be very skeptical of arguments whose basis seems to lie in some other reasoning system entirely?
4. Just having a poor model of people's beliefs in general, which could lead to problems.
Hopefully this made sense, and I'm sorry if this is something that's been pointed out before.
Handshakes, Hi, and What's New: What's Going On With Small Talk?
This is an attempt to explicitly model what's going on in some small talk conversations. My hope is that at least one of these things will happen:
- There is a substantial flaw or missing element to my model that someone will point out.
- Many readers, who are bad at small talk because they don't see the point, will get better at it as a result of acquiring understanding.
Handshakes
I had some recent conversational failures online that went roughly like this:
“Hey.”
“Hey.”
“How are you?”
The end.
At first I got upset at the implicit rudeness of my conversation partner walking away and ignoring the question. But then I decided to get curious instead and posted a sample exchange (names omitted) on Facebook with a request for feedback. Unsurprisingly I learned more this way.
Some kind friends helped me troubleshoot the exchange, and in the process of figuring out how online conversation differs from in-person conversation, I realized what these things do in live conversation. They act as a kind of implicit communication protocol by which two parties negotiate how much interaction they’re willing to have.
Consider this live conversation:
“Hi.”
“Hi.”
The end.
No mystery here. Two people acknowledged one another’s physical presence, and then the interaction ended. This is bare-bones maintenance of your status as persons who can relate to one another socially. There is no intimacy, but at least there is acknowledgement of someone else’s existence. A day with “Hi” alone is less lonely than a day without it.
“Hi.”
“Hi, how’s it going?”
“Can’t complain. And you?”
“Life.”
This exchange establishes the parties as mutually sympathetic – the kind of people who would ask about each other’s emotional state – but still doesn’t get to real intimacy. It is basically just a drawn-out version of the example with just “Hi”. The exact wording of the third and fourth lines doesn’t matter much, as there is no real content. For this reason, it isn’t particularly rude to leave the question totally unanswered if you’re already rounding a corner – but if you’re in each other’s company for a longer period of time, you’re supposed to give at least a pro forma answer.
This kind of thing drives people who actually want to know how someone is crazy, because the question is usually assumed to be insincere. I’m one of the people driven crazy. But this kind of mutual “bidding up” is important because sometimes people don’t want to have a conversation, and if you just launch into your complaint or story or whatever it is, you may end up inadvertently cornering someone who doesn’t feel like listening to it.
You could ask them explicitly, but people sometimes feel uncomfortable turning down that kind of request. So the way to open a substantive topic of conversation is to leave a hint and let the other person decide whether to pick it up. So here are some examples of leaving a hint:
“Hi.”
“Hi.”
“Anything interesting this weekend?”
“Oh, did a few errands, caught up on some reading. See you later.”
This is a way to indicate interest in more than just a “Fine, how are you?” response. What happened here is that one party asked about the weekend, hoping to elicit specific information that could seed a conversation. The other gave a polite, technically responsive answer containing no real information, declining the opportunity to talk about their life.
“Hi.”
“Hi.”
“Anything interesting happen over the weekend?”
“Oh, did a few errands, caught up on some reading.”
“Ugh, I was going to go to a game, but my basement flooded and I had to take care of that instead.”
“That’s tough.”
“Yeah.”
“See you around.”
Here, the person who first asked about the weekend didn’t get an engaged response, but got enough of a pro forma response to provide cover for an otherwise out of context complaint and bid for sympathy. The other person offered perfunctory sympathy, and ended the conversation.
Here’s a way for the recipient of a “How are you?” to make a bid for more conversation:
“Hi.”
“Hi.”
“How are you?”
“Oh, my basement flooded over the weekend.”
“That’s tough.”
“Yeah.”
“See you around.”
So the person with the flooded basement provided a socially-appropriate snippet of information – enough to be a recognizable bid for sympathy, but little enough not to force the other person to choose between listening to a long complaint or rudely cutting off the conversation.
Here’s what it looks like if the other person accepts the bid:
“Hi.”
“Hi.”
“How are you?”
“Oh, my basement flooded over the weekend.”
“Wow, that’s tough. Is the upstairs okay?”
“Yeah, but it’s a finished basement so I’m going to have to get a bunch of it redone because of water damage.”
“Ooh, that’s tough. Hey, if you need a contractor, I had a good experience with mine when I had my kitchen done.”
“Thanks, that would be a big help, can you email me their contact info?”
By asking a specific follow-up question the other person indicated that they wanted to hear more about the problem – which gave the person with the flooded basement permission not just to answer the question directly, but to volunteer additional information / complaints.
You can do the same thing with happy events, of course:
“Hi.”
“Hi.”
“How are you?”
“I’m getting excited for my big California vacation.”
“Oh really, where are you going?”
“We’re flying out to Los Angeles, and then we’re going to spend a few days there but then drive up to San Francisco, spend a day or two in town, then go hiking in the area.”
“Cool. I used to live in LA, let me know if you need any recommendations.”
“Thanks, I’ll come by after lunch?”
So what went wrong online? Here’s the conversation again so you don’t have to scroll back up:
“Hey.”
“Hey.”
“How are you?”
The end.
Online, there are no external circumstances that demand a “Hi,” such as passing someone (especially someone you know) in the hallway or getting into an elevator.
If you import in-person conversational norms, the “Hi” is redundant – but instead online it can function as a query as to whether the other person is actually “present” and available for conversation. (You don’t want to start launching into a conversation just because someone’s status reads “available” only to find out they’re in the middle of something else and don’t have time to read what you wrote.)
Let’s say you’ve mutually said “Hi.” If you were conversing in person, the next thing to do would be to query for a basic status update, asking something like, “How are you?”. But “Hi” already did the work of “How are you?”. Somehow the norm of “How are you?” being a mostly insincere query doesn’t get erased, even though “Hi” does its work – so some people think you’re being bizarrely redundant. Others might actually tell you how they are.
To be safe, it’s best to open with a short question apropos of what you want to talk about – or, since it’s costless online and serves the same function as “Hi”, just start with “How are you?” as your opener.
What’s New?
I recently had occasion to explain to someone how to respond when someone asks “what’s new?”, and in the process, ended up explaining some stuff I hadn’t realized until the moment I tried to explain it. So I figured this might be a high-value thing to explain to others here on the blog.
Of course, sometimes “what’s new?” is just part of a passing handshake with no content – I covered that in the first section. But if you’re already in a context where you know you’re going to be having a conversation, you’re supposed to answer the question, otherwise you get conversations like this:
“Hi.”
“Hi.”
“What’s new?”
“Not much. How about you?”
“Can’t complain.”
Awkward silence.
So I’m talking about cases where you actually have to answer the question.
The problem is that some people, when asked “What’s New?”, will try to think about when they last met the person asking, and all the events in their life since then, sorted from most to least momentous. This is understandably an overwhelming task.
The trick to responding correctly is to think of your conversational partner’s likely motives for asking. They are very unlikely to want a complete list. Nor do they necessarily want to know the thing in your life that happened that’s objectively most notable. Think about it – when’s the last time you wanted to know those things?
Instead, what’s most likely the case is that they want to have a conversation about a topic you are comfortable with, are interested in, and have something to say about. “What’s New?” is an offer they are making, to let you pick the life event you most feel like discussing at that time. So for example, if the dog is sick but you’d rather talk about a new book you’re reading, you get to talk about the book and you can completely fail to mention the dog. You’re not lying, you’re answering the question as intended.
Cross-posted on my personal blog.
How the Grinch Ought to Have Stolen Christmas
On Dec. 24, 1957, a Mr. T. Grinch attempted to disrupt Christmas by stealing associated gifts and decorations. His plan failed, the occupants of Dr. Seuss' narrative remained festive, and Mr. Grinch himself succumbed to cardiac hypertrophy. To help others avoid repeating his mistakes, I've written a brief guide to properly disrupting holidays. Holiday-positive readers should read this with the orthogonality thesis in mind.
Fighting Christmas is tricky, because the obvious strategy - making a big demoralizing catastrophe - doesn't work. No matter what happens, the media will put the word Christmas in front of it and convert your scheme into even more free advertising for the holiday. It'll be a Christmas tragedy, a Christmas earthquake, a Christmas wave of foreclosures. That's no good; attacking Christmas takes more finesse.
The first thing to remember is that, whether you're stealing a holiday or a magical artifact of immense power, it's almost always a good idea to leave a decoy in its place. When people notice that something important is missing, they'll go looking to find or replace it. This rule generalizes from physical objects to abstractions like a sense of community. T. Grinch tried to prevent community gatherings by vandalizing the spaces where they would've taken place. A better strategy would've been to promise to organize a Christmas party, then skip the actual organizing and leave people to sit at home by themselves. Unfortunately, that approach doesn't scale, but someone came up with a very clever alternative: encouraging people to watch Christmas-themed films instead of talking to each other, which achieves almost as much erosion of community without the backlash.
I'd like to particularly applaud Raymond Arnold, for inventing a vaguely-Christmas-like holiday in December, with no gifts, and death (rather than cheer) as its central theme [1]. I really wish it didn't involve so much singing and community, though. I recommend raising the musical standards; people who can't sing at studio-recording quality should not be allowed to sing at all.
Gift-giving traditions are particularly important to stamp out, but stealing gifts is ineffective because they're usually cheap and replaceable. A better approach would've been to promote giving undesirable gifts, such as religious sculptures and fruitcake. Even better would be to convince the Mayor of Whoville to enact bad economic policies, and grind the Whos into a poverty that would make gift-giving difficult to sustain. Had Mr. Grinch pursued this strategy effectively, he could've stolen Christmas and Birthdays and gotten himself a Nobel Prize in Economics [2].
Finally, it's important to avoid rhyming. This is one of those things that should be completely obvious in hindsight, with a little bit of genre savvy; villains like us win much more often in prose and in life than we do in verse.
And with that, I'll leave you with a few closing thoughts. If you gave presents, your friends are disappointed with them. Any friends who didn't give you presents don't care about you, and any friends who did give you presents gave cheap and lame ones for the same reason. If you have a Christmas tree, it's ugly, and if it's snowing, the universe is trying to freeze you to death.
Merry Christmas!
[1] I was initially concerned that the Solstice would pattern-match and mutate into a less materialistic version of Christmas, but running a Kickstarter campaign seems to have addressed that problem.
[2] This is approximately the reason why Alfred Nobel specifically opposed the existence of that prize.
Luck I: Finding White Swans
Quoth the Master, great in Wisdom, to the Novice: "Ye, carry with thee all thy days a cheque folded up in thy wallet. For there may be many situations in which thou shalt have need of it."
And the Novice, of high intelligence but lesser wisdom, replied, saying unto the Master: "Of what situations dost thou speak?"
To which the Master replied: "Imagine that thou dost come upon a nice piece of land, and wish to make a down payment on it. The real estate market moveth quickly in these troubled economic times, and thou mayest soon find thy opportunity dried up like dead leaves in summer. What wouldst thou do?" The Master, you see, did dabble in real estate development a little, and his knowledge was deep in these matters.
The Novice thought for a moment, saying: "But always I carry with me a credit card. Surely this is sufficient for my purposes."
And the Master replied: "Thou knoweth not the ways of commerce. Thinketh thee that all dealings are conducted within feet of a machine that can read credit cards?!"
The Novice knew the ways of Traditional Rationality and Skepticism, and felt it his duty to take the opposite stance to the Master, lest he unthinkingly obey an authority figure. Undeterred, he replied, saying unto the Master: "But always I carry with me cash. Surely this is sufficient for my purposes."
Upon hearing this, the Master did reply, incredulously: "Would thee carry with thee always an amount of cash equal to the reasonable asking price of a down payment for a piece of land?!"
And lo, the Novice did understand, though he could not put it into these words, that the Master did speak of a certain stance with respect to the unknown. The swirling chaos of reality may be impossible to predict, but there are things an aspiring empirimancer can do to make it more likely that ve will have good fortune.
Verily, know that that which people call 'luck' is not the smile of a beneficent god, but the outcome of how some people interact with chance.
______________________________________________________________________________________________________________
Consider for a moment two real people, whom we will call "Martin" and "Brenda", who consider themselves lucky and unlucky, respectively. Both are part of the group of exceptionally lucky/unlucky people which psychologist Dr. Richard Wiseman assembled to try and scientifically study the phenomenon of luck.
(The following is taken from his book "The Luck Factor", and interested parties should go there for more information.)
As part of the research, both people were placed in identical, fortuitous circumstances, but both handled the situation very differently. The setting: a small coffee shop, arranged so that there were four tables with a confederate (someone who knows about the experiment) sitting at each table. One of these confederates was a wealthy businessman, the kind of person that, should you happen to meet him in real life and make a good impression, could set you up with a well-paying job. All the confederates were told to act the same way for both Brenda and Martin. On the street right outside the coffee shop, the researchers placed a £5 note.
Brenda and Martin were told to go to the coffee shop at different times, and their behavior was covertly filmed. Martin noticed the money sitting on the street and picked it up. When he went into the coffee shop he sat down next to the businessman and struck up a conversation, even offering to buy him a coffee. Brenda walked past the money, never noticing it, and sat quietly in the shop without talking to anyone.
Fortune favors the...?
There are obvious differences in Brenda and Martin's behavior, but are they indicative of more far-reaching differences in how lucky and unlucky people live their lives? First, let's discuss what doesn't differentiate lucky from unlucky people. Wiseman, having assembled his initial group of subjects, tested them on two traits which could have an impact on luck: intelligence and psychic ability. Determining that intelligence wasn't a factor was as easy as administering an intelligence test. Psychic ability was ruled out by having both lucky and unlucky people pick lottery numbers, with the result being that neither group was more successful than the other.
Wiseman further tested for differences in personality using the Five Factor Model of Personality, which you will recall breaks personality up into Openness, Conscientiousness, Extraversion, Agreeableness, and Neuroticism (the acronym OCEAN makes for easy recall). Lucky and unlucky people showed no differences in Conscientiousness or Agreeableness, but did show differences in Openness, Extraversion, and Neuroticism. It is here that an interesting picture began to emerge.
Ultimately, Wiseman was able to break luck down into four overarching principles and twelve subprinciples, summarized here:
Principle One: Maximize the number of chance opportunities you have in life.
- Sub-principle one: lucky people maintain a network of contacts with other people.
- Sub-principle two: lucky people are more relaxed and less neurotic than unlucky people.
- Sub-principle three: lucky people have a strong drive towards novelty, and strive to introduce variety into their routines.
Principle Two: Use your intuition to make important decisions.
- Sub-principle one: pay attention to your hunches.
- Sub-principle two: try to make your intuition more accurate.
Principle Three: Expect good fortune.
- Sub-principle one: lucky people believe their luck will continue.
- Sub-principle two: lucky people attempt to achieve their goals and persist through difficulty.
- Sub-principle three: lucky people think their interactions will be positive and successful.
Principle Four: Turn bad luck into good.
- Sub-principle one: lucky people see the silver lining in bad situations.
- Sub-principle two: lucky people believe that things will work out for them in the long run.
- Sub-principle three: lucky people spend less time brooding over bad luck.
- Sub-principle four: lucky people are more proactive in learning from their mistakes and preventing further bad luck.
I suspect that LWers will have a unique set of reactions to and problems with each of these principles, so let's take them one at a time. In this essay, I will examine the first two.
Facing up to randomness
First, how would you go about increasing the likelihood of positive chance encounters? Well, you could start spending more time talking to strangers and making friends with people. Indeed, one of the important differences between unlucky and lucky people is that lucky people are more outgoing, more friendly and open in their body language (lucky people smiled and made eye contact far, far more often), and keep in touch with people they meet longer. The age-old adage 'it's not what you know, but who you know' has more than a grain of truth in it, and a great way to get to know the right people is by simply getting to know more people, period. The chances of any given person being the contact you need are pretty slim, but the odds improve with every person you get to know.
This actually works on several levels. Since the complexity of the world greatly exceeds the cognitive abilities of any one person, cultivating a strong social network positions you to take advantage of the knowledge and experience of others. Even if you are so much smarter than person X that they can't compete with you along any dimension, they may still have information you don't, or they may know somebody who knows somebody who can help you out.
Moreover, I'm sure everyone is familiar with the experience of struggling with a problem, only to have a random conversation (with a stranger or a friend) shake loose a key insight. This can happen locally inside your own head when you have the necessary raw material lying around but haven't seen a certain connection; in that situation you would eventually have hit upon the insight yourself, but the conversation expedites the process. More valuable still is when two or more people enter a conversation that produces an insight that nobody had the necessary components to produce for themselves; I think this is part of what Matt Ridley means when he talks about ideas having sex.
So you're doing your best to meet more people and flex your extroversion muscles. Next, you might try and be more spontaneous and random in your life. Wiseman notes that many lucky people have a strong orientation towards variety and novel experiences. Some of them, facing an important decision like which car to buy, will do something like list their options on a piece of paper and then roll a die.
You don't need to go quite this far; it's also acceptable to shop different places, take different routes to work, or pick a new part of the city to explore every month. The takeaway here is that it's difficult to have positive chance encounters if you always do the same thing.
One of my favorite examples of someone positioning themselves to benefit from chance comes from HPMoR, when Harry and Hermione first read all the titles of the books in the library and then read all the tables of contents. From their point of view the books in the library are a vast store of unknown information, any bit of which they might need at a given time. Since reading every single book isn't an option, familiarizing themselves with the information in a systematic way means creating many potential sources of insight while simultaneously reducing the cost of doing future research. Hacker Eric Raymond made a related point in the context of winning table-top board games:
I made chance work for me. Pay attention, because I am about to reveal why there is a large class of games (notably pick-up-and-carry games like Empire Builder, network-building games like Power Grid, and more generally games with a large variety of paths to the win condition) at which I am extremely difficult to beat. The technique is replicable.
I have a rule: when in doubt, play to maximize the breadth of your option tree. Actually, you should often choose option-maximizing moves over moves with a slightly higher immediate payoff, especially early in the game and most especially if the effect of investing in options is cumulative.
What's the common thread between extroversion, skimming the library shelves, and beating your friends at boardgames? Certain actions and certain states of mind make it more likely you'll benefit from white swans.
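Raymond's rule is easy to caricature in a few lines of code. The toy game tree below is entirely made up for illustration: a greedy player always takes the move with the higher immediate payoff, while an option-maximizing player takes the move that leaves the most follow-up choices, and in trees like this one the option-rich branch is where the larger total payoff hides.

```python
# A made-up two-move game: each state maps to its available moves,
# given as (next_state, immediate_payoff) pairs.
GAME = {
    "start":  [("narrow", 3), ("broad", 1)],
    "narrow": [("end_a", 2)],                               # high payoff now, one option later
    "broad":  [("end_b", 0), ("end_c", 10), ("end_d", 1)],  # low payoff now, many options
    "end_a": [], "end_b": [], "end_c": [], "end_d": [],
}

def play(choose):
    """Play from 'start' to a terminal state, summing payoffs along the way."""
    state, total = "start", 0
    while GAME[state]:
        state, payoff = choose(GAME[state])
        total += payoff
    return total

# Greedy: take the move with the highest immediate payoff.
greedy = lambda moves: max(moves, key=lambda m: m[1])
# Option-maximizing: prefer the move leading to the most follow-up
# options, breaking ties by immediate payoff.
optiony = lambda moves: max(moves, key=lambda m: (len(GAME[m[0]]), m[1]))

print(play(greedy))   # narrow branch: 3 + 2
print(play(optiony))  # broad branch: 1 + 10
```

This is just a sketch of the heuristic, not a claim about any particular game; the point is that keeping your option tree broad can dominate grabbing the best-looking immediate payoff.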
(Clever readers may be saying to themselves: "okay, but doesn't all this also make the chances of encountering black swans higher as well?" We will address these concerns when we talk about principles three and four.)
Attitude matters
We've covered extraversion and openness, but the lucky people Dr. Wiseman interviewed were also more relaxed and less neurotic than the unlucky ones. This has obvious consequences for when you are trying to meet new people, but research also hints that being less anxious may make you more likely to notice things you aren't specifically looking for. This is probably why several of Dr. Wiseman's lucky participants remarked on how often they found money on the street, found great opportunities while listening to the radio or reading the newspaper, and in general stumbled over opportunities in places where other people simply failed to notice them.
This attitude undergirds and complements much of what I discussed in the previous section; while you are trying to maximize your pathways to victory, don't forget that constantly worrying and mentally spinning your tires will make you less likely to see a chance opportunity.
Pump your intuition
Lucky people tend to have strong intuitions, and they have a habit of paying careful attention to them. I'm sure you're skeptical of this advice, as I was when I first started reading this section. Given present company I don't think I need to reiterate all the billion ways intuition can be derailed and misleading. That said, placing intuition and rationality as orthogonal to one another is a good example of the straw vulcan of rationality. Intuitions are of course not always wrong, and in some cases may be the only source of information a person has to go off of.
Two things put a little nuance on the proposition that you should listen to your intuitions. The first is that, as far as I can tell, lucky people don't trust their intuitions immediately and absolutely. They don't stand at a busy intersection, blindfolded, and trust their gut to tell them when it's safe to cross. Rather, their hunches act more like yellow traffic lights, telling them that they should proceed with caution here or do a bit more research there. In other words, it sounds to me like lucky people treat their intuitions in a pretty rational manner, as data points, to be used but not relied upon in isolation unless there is just nothing else available.
The other thing is that many lucky people take steps to sharpen their intuitions, utilizing quiet solitude or meditation. Dr. Wiseman goes into precious little detail about this, including just a few anecdotal descriptions of people's efforts to clear their mind. The rationalist community will be familiar with more quantitative methods like predictionbook, and googling for 'improving your intuitions' turned up about as much garbage as you'd probably expect. If anyone has leads to legitimate research on improving intuition, I'd be happy to add an addendum.
Suggested exercises
Throughout the book Dr. Wiseman includes exercises which are meant to help people utilize the principles uncovered in his research to become luckier. Here are the suggested exercises for the topics discussed in this post:
-To enhance your extraversion, strike up a conversation with four people you either don't know or don't know well. Do this each week for a month. Additionally, every week make contact with a person you haven't spoken to in a while.
-To relax, find a quiet place and picture yourself in a beautiful, calming scene. Make sure to visualize each and every detail of the location, including whatever sounds and smells are around you. When you've got the scene in place, visualize the tension leaving your body in the form of a liquid flowing out of you, starting with your head. Once you feel sufficiently relaxed, slowly open your eyes.
-Inject some randomness in your life by making a list of 6 new experiences. These can be anything from trying a new type of food to taking a class on a subject you've always been interested in. Number them 1 to 6, roll a die, and then do whatever corresponds to the number you rolled.
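If you want to hold yourself to the die roll in that last exercise, a few lines of Python will do it. The experience list here is just a placeholder for whatever six items you come up with:

```python
import random

# Placeholder list -- substitute your own six new experiences.
experiences = [
    "try an unfamiliar cuisine",
    "take a class in a subject you've always been curious about",
    "visit a neighborhood you've never explored",
    "go to a meetup for a hobby you don't have yet",
    "cook a dish you've never attempted",
    "read a book from a genre you usually avoid",
]

roll = random.randint(1, 6)  # the die roll, 1 through 6 inclusive
print(f"You rolled {roll}: {experiences[roll - 1]}")
```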
This essay can also be found at Rulers To the Sky.
A Voting Puzzle, Some Political Science, and a Nerd Failure Mode
In grade school, I read a series of books titled Sideways Stories from Wayside School by Louis Sachar, who you may know as the author of the novel Holes which was made into a movie in 2003. The series included two books of math problems, Sideways Arithmetic from Wayside School and More Sideways Arithmetic from Wayside School, the latter of which included the following problem (paraphrased):
The students in Mrs. Jewls's class have been given the privilege of voting on the height of the school's new flagpole. She has each of them write down what they think would be the best height for the flagpole. The votes are distributed as follows:
- 1 student votes for 6 feet.
- 1 student votes for 10 feet.
- 7 students vote for 25 feet.
- 1 student votes for 30 feet.
- 2 students vote for 50 feet.
- 2 students vote for 60 feet.
- 1 student votes for 65 feet.
- 3 students vote for 75 feet.
- 1 student votes for 80 feet, 6 inches.
- 4 students vote for 85 feet.
- 1 student votes for 91 feet.
- 5 students vote for 100 feet.
At first, Mrs. Jewls declares 25 feet the winning answer, but one of the students who voted for 100 feet convinces her there should be a runoff between 25 feet and 100 feet. In the runoff, each student votes for the height closest to their original answer. But after that round of voting, one of the students who voted for 85 feet wants their turn, so 85 feet goes up against the winner of the previous round of voting, and the students vote the same way, with each student voting for the height closest to their original answer. Then the same thing happens again with the 50 foot option. And so on, with each number, again and again, "very much like a game of tether ball."
Question: if this process continues until it settles on an answer that can't be beaten by any other answer, how tall will the new flagpole be?
Answer (rot13'd): fvkgl-svir srrg, orpnhfr gung'f gur zrqvna inyhr bs gur bevtvany frg bs ibgrf. Naq abj lbh xabj gur fgbel bs zl svefg rapbhagre jvgu gur zrqvna ibgre gurberz.
Why am I telling you this? There's a minor reason and a major reason. The minor reason is that this shows it is possible to explain little-known academic concepts, at least certain ones, in a way that grade schoolers will understand. It's a data point that fits nicely with what Eliezer has written about how to explain things. The major reason, though, is that a month ago I finished my systematic read-through of the sequences, and while I generally agree that they're awesome (perhaps more so than most people; I didn't see the problem with the metaethics sequence), I thought the mini-discussion of political parties and voting was, on reflection, weak and indicative of a broader nerd failure mode.
TLDR (courtesy of lavalamp):
- Politicians probably conform to the median voter's views.
- Most voters are not the median, so most people usually dislike the winning politicians.
- But people dislike the politicians for different reasons.
- Nerds should avoid giving advice that boils down to "behave optimally". Instead, analyze the reasons for the current failure to behave optimally and give more targeted advice.
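The flagpole puzzle can also be checked by brute force. The sketch below simulates every possible head-to-head matchup, assuming (as one plausible tie rule, not stated in the book) that a student whose vote is equidistant from both options abstains. It confirms that exactly one proposed height beats every other in a pairwise vote, and that this unbeatable height is the median of the original ballots; it deliberately avoids printing the number, to preserve the rot13 spoiler above:

```python
from statistics import median

# (number of students, height in feet) from the puzzle above
ballots = [(1, 6), (1, 10), (7, 25), (1, 30), (2, 50), (2, 60),
           (1, 65), (3, 75), (1, 80.5), (4, 85), (1, 91), (5, 100)]
heights = [h for n, h in ballots for _ in range(n)]  # one entry per student

def beats(a, b):
    """True if height a gets strictly more votes than b when each student
    votes for whichever option is closer to their original answer
    (equidistant students abstain)."""
    a_votes = sum(1 for h in heights if abs(h - a) < abs(h - b))
    b_votes = sum(1 for h in heights if abs(h - b) < abs(h - a))
    return a_votes > b_votes

options = sorted(set(heights))
# The "tether ball" process settles on an option no other option can beat
winners = [a for a in options if all(beats(a, b) for b in options if b != a)]

assert winners == [median(heights)]
print("The pairwise-unbeatable height is the median of the original votes.")
```

This is the median voter theorem in miniature: with voters whose preferences fall off with distance along a single dimension, the median position wins every pairwise contest.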
How to Become a 1000 Year Old Vampire
This is based on a concept we developed at the Vancouver Rationalists meetup.
Different experiences level a person up at different rates. You could work some boring job all your life and be 60 and not be much more awesome than your average teenager. On the other hand, some people have such varied and extensive life experience that by 30 they are as awesome as a 1000 year old vampire.
This reminds me that it's possible to conduct your life with more or less efficiency, sometimes by orders of magnitude. Further, while we don't have actual life extension, it's content we care about, not run time. If you can change your habits such that you get 3 times as much done, that's like tripling your effective lifespan.
So how might one get a 100x speedup and become like a 1000 year old vampire in 10 years? This is absurdly ambitious, but we can try:
Do Hard Things
Some experiences catapult you forward in personal development. You can probably systematically collect these to build formidability as fast as possible.
Paul Graham says that many of the founders he sees (as head of YC) become much more awesome very quickly as need forces them to. This seems plausible and it seems backed up by other sources as well. Basically "learn to swim by jumping in the deep end"; people have a tendency to take the easy way that results in less development when given the chance, so removing the chance to slack off can be beneficial.
That has definitely been my personal experience as well. At work, the head engineer got brain cancer and I got de-facto promoted to head of two of the projects, which I then leveled up to be able to do. It felt pretty scary at first, but now I'm bored and wishing something further would challenge me. (addendum: not bored right now at all; crazy crunch time for the other team, which I am helping with) It seems really hard to just do better without such forcing; as far as I can tell I could work much harder than now, but willpower basically doesn't exist so I don't.
On that note, a friend of mine got big results from joining the Army and getting tear gassed in a trench while wet, cold, exhausted, sleep deprived, and hungry, which pushed him through stuff he wouldn't have thought he could deal with. Apparently it sort of re-calibrated his feelings about how well he should be doing and how hard things are, such that he is now a millionaire and awesome.
So the mechanism behind a lot of this seems to be recalibrating what seems hard or scary or beyond your normal sphere. I used to be afraid of phone calls and doing weird stuff like climbing trees in front of strangers, but not so much anymore; it feels like I just forget that they were scary. In the case of the phone, there were a few times where I didn't have time to be scared; I needed to just get things done. In the case of climbing trees, I did it on my own enough for it to become normalized, so that it didn't even come up that people would see me, because it didn't seem weird.
So tying that back in, there are experiences that you can put yourself into to force that normalization and acclimatization to hard stuff. For example, some people do this thing called "Rejection Therapy" or "Comfort Zone Expansion", basically going out and doing embarrassing or scary things deliberately to recalibrate your intuitions and teach your brain that they are not so scary.
On the failure end, self-improvement projects tend to fail when they require constant application of willpower. It's just a fact that you will fall off the wagon on those things. So you have to make it impossible to fall off the wagon. You have to make it scarier to fall off the wagon than it is to level up and just do it. This is the idea behind Beeminder, which takes your money if you don't do what your last-week self said you would.
I guess the thesis behind all this is that these level-ups are permanent, in that they make you more like a 1000 year old vampire, and you don't just go back to being your boring old mortal self. If this is true, the implication that you should seek out hard stuff seems pretty interesting and important.
Broadness of Experience
Think of a 1000 year old vampire; they would have done everything. Fought in battles, led armies, built great works, been in love, been everywhere, observed most aspects of the human experience, and generally seen it all.
Things you can do have sharply diminishing returns; the first few times you watch great movies deliver most of the benefit thereof, likewise with video games, 4chan, most jobs, and most experiences in general. Thus it's really important to switch around the things you do a lot so that you stay in that sharp initially growing part of the learning curve. You can get 90% of the vampire's experience with 10% of his time investment if you focus on those most enlightening parts of each experience.
So besides doing hard things that level you up, you can get big gains by doing many things and switching as soon as you get bored (which is hopefully calibrated to how challenged you are).
You may remember that early in the Arab Spring revolutions, an American student took the summer off from college to fight in the Libyan revolution. I bet he learned a lot. If you could do enough things like that, you'd be well on your way to matching the vampire.
This actually goes hand in hand with doing hard things; when you're not feeling challenged (you're on the flat part of that experience curve), it's probably best to throw yourself face first into some new project, both because it's new, and because it's hard.
Switching often has the additional benefit of normalizing strategic changes and practicing "what should I be doing"-type thoughts, which can't hurt if you intend to actually do useful stuff with your life.
There are probably many cases where full on switching is not best. For example, you don't become an expert in X by switching out of X as soon as you know the basics. It might be that you want to switch often on side-things but go deep on X. Alternatively, you probably want to do some kind of switch every now and then in X, maybe look at things from a different perspective, tackle a different problem, or something like that. This is the Deliberate Practice theory of expertise.
So don't forget the shape of that experience curve. As soon as you start to feel that leveling off, find a way to make it fresh again.
Do Things Quickly
Another big angle on this idea is that every hour is an opportunity, and you want to make the best of them. This seems totally obvious but I definitely "get it" a lot more having thought about it in terms of becoming a 1000 year old vampire.
A big example is procrastination. I have a lot of things that have been hanging around on my todo list for a long time, basically oppressing me by their presence. I can't relax and look to new things to do while there's still that one stupid thing on my todo list. The key insight is that if you process the stuff on your todo list now instead of slacking now and doing it later, you get it out of the way and then you can do something else later, and thereby become a 1000 year old vampire faster.
So a friend and I have internalized this a bit more and started really noticing those opportunity costs, and actually started knocking things off faster. I'm sure there's more where that came from; we are nowhere near optimal in Doing It Now, so it's probably good to meditate on this more.
As a concrete example, I'm writing tonight because I realized that I need to just get all my writing ideas out of the way to make room for more awesomeness.
The flipside of this idea is that a lot of things are complete wastes of time, in the sense that they just burn up lifespan and don't get you anything, or even weaken you.
Bad habits like reading crap on the Internet, watching TV, watching porn, playing video games, sleeping in, and so on are obvious losses. It's really hard to internalize that, but this 1000-year-old-vampire concept has been helpful for me by making the magnitude of the cost more salient. Do you want to wake up when you're 30 and realize you wasted your youth on meaningless crap, or do you want to get off your ass and write that thing you've been meaning to, right now, and be a fscking vampire in 10 years?
It's not just bad habits, though; a lot of it is your broader position in life that wastes time or doesn't. For example, repetitive wage work that doesn't challenge you is really just trading a huge chunk of your life for not even much money. Obviously sometimes you have to, but you have to realize that trading away half your life is a pretty raw deal that is to be avoided. You don't even really get anything for commuting and housework. Maybe I really should quit my job soon...
I have 168 hours a week, of which only 110 are feasible to use (sleep), and by the time we include all the chores, wage-work, bad habits, and procrastination, I probably only live 30 hours a week. That's bullshit; three quarters of my life pissed away. I could live four times as much if I could cut out that stuff.
So this is just the concept of time opportunity costs dressed up to be more salient. Basic economics concepts seem really quite valuable in this way.
Do it now so you can do something else later. Avoid crap work.
Social Environment and Stimulation
I notice that I'm most alive and do my best intellectual work when talking to other people who are smart and interested in having deep technical conversations. Other things like certain patterns of time pressure create this effect where I work many times harder and more effectively than otherwise. A great example is technical exams; I can blast out answers to hundreds of technical questions at quite a rate.
It seems like a good idea to induce this state where you are more alive (is it the "flow" state?) if you want to live more life. It also seems totally possible to do so more often by hanging out with the right people and exposing yourself to the right working conditions and whatnot.
One thing that will come up is that it's quite draining, in that I sometimes feel exhausted and can't get much done after a day of more intense work. Is this a real thing? Probably. Still, I'm nowhere near the limit even given the need to rest, in general.
I ought to do some research to learn more about this. If it's connected to "flow", there's been a lot of research, AFAIK.
I also ought to just hurry up and move to California where there is a proper intellectual community that will stimulate me much better than the meager group of brains I could scrape together in Vancouver.
The other benefit of a good intellectual community is that they can incentivize doing cooler things. When all your friends are starting companies or otherwise doing great work, sitting around on the couch feels like a really bad idea.
So if we want to live more life, finding more ways to enter that stimulated flow state seems like a prudent thing to do, whether that means just making way for it in your work habits, putting yourself in more challenging social and intellectual environments, or whatever.
Adding It Up
So how fast can we go overall if we do all of this?
By seeking many new experiences to keep learning, I think we can plausibly get 10x speedup over what you might do by default. Obviously this can be more or less, based on circumstances and things I'm not thinking of.
On top of that, it seems like I could do 4x as much by maintaining a habit of doing it now and avoiding crap work. How to do this, I don't know, but it's possible.
I don't know how to estimate the actual gains from a stimulating environment. It seems like it could be really really high, or just another incremental gain in efficiency, depending on how it goes down. Let's say that on top of the other things, we can realistically push ourselves 2x or 3x harder by social and environmental effects.
Doing hard things seems huge, but also quite related to the doing new things angle that we already accounted for. So explicitly remembering to do hard things on top of that? Maybe 5x? This again will vary a lot based on what opportunities you are able to find, and unknown factors, but 5x seems safe enough given mortal levels of ingenuity and willpower.
So all together, someone who:
-Often thinks about where they are on the experience curve for everything they do, and takes action on that when appropriate,
-Maintains a habit of doing stuff now and visualizing those opportunity costs,
-Puts themselves in a stimulating environment like the bay area intellectual community and surrounds themselves with stimulating people and events,
-Seeks out the hardest character-building experiences like getting tear gassed in a trench or building a company from scratch,
Can plausibly get 500x speedup and live 1000 normal years in 2. That seems pretty wild, but none of these things are particularly out there, and people like Elon Musk or Eliezer Yudkowsky do seem to do around that magnitude more than the average joe.
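The 500x figure is just the product of the individual estimates above. A quick sanity check, taking 2.5x as the midpoint of the 2-3x social-environment guess (all of these multipliers are, as stated, rough guesses):

```python
# Rough multipliers estimated above; 2.5x splits the 2-3x guess
new_experiences = 10   # broad, varied experience
do_it_now = 4          # avoiding procrastination and crap work
environment = 2.5      # stimulating social/intellectual environment
hard_things = 5        # explicitly seeking hard experiences

total = new_experiences * do_it_now * environment * hard_things
print(total)  # → 500.0
```

The multiplication only works out this neatly if the factors are independent, which, as noted below, they almost certainly aren't; they probably self-reinforce instead.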
Perhaps they don't multiply quite that conveniently, or there's some other gotcha, but the target seems reachable, and these things will help. On the other hand, they almost certainly self-reinforce; a 1000 year old vampire would have mastered the art of living life at ever higher efficiencies.
This does seem to be congruent with all this stuff being power-law distributed, which of course makes it difficult to summarize by a single number like 500.
The final question of course is what real speedup we can expect you or I to gain from writing or reading this. Getting more than a 2x or 3x speedup from a low-level insight or from reading a blog post seems like a stretch of the imagination, never mind 500x. But still, power laws happen. There's probably massive payoff to taking this idea seriously.
Probability, knowledge, and meta-probability
This article is the first in a sequence that will consider situations where probability estimates are not, by themselves, adequate to make rational decisions. This one introduces a "meta-probability" approach, borrowed from E. T. Jaynes, and uses it to analyze a gambling problem. This situation is one in which reasonably straightforward decision-theoretic methods suffice. Later articles introduce increasingly problematic cases.
Three ways CFAR has changed my view of rationality
The Center for Applied Rationality's perspective on rationality is quite similar to Less Wrong's. In particular, we share many of Less Wrong's differences from what's sometimes called "traditional" rationality, such as Less Wrong's inclusion of Bayesian probability theory and the science on heuristics and biases.
But after spending the last year and a half with CFAR as we've developed, tested, and attempted to teach hundreds of different versions of rationality techniques, I've noticed that my picture of what rationality looks like has shifted somewhat from what I perceive to be the most common picture of rationality on Less Wrong. Here are three ways I think CFAR has come to see the landscape of rationality differently than Less Wrong typically does – not disagreements per se, but differences in focus or approach. (Disclaimer: I'm not speaking for the rest of CFAR here; these are my own impressions.)
1. We think less in terms of epistemic versus instrumental rationality.
Formally, the methods of normative epistemic versus instrumental rationality are distinct: Bayesian inference and expected utility maximization. But methods like "use Bayes' Theorem" or "maximize expected utility" are usually too abstract and high-level to be helpful for a human being trying to take manageable steps towards improving her rationality. And when you zoom in from that high-level description of rationality down to the more concrete level of "What five-second mental habits should I be training?" the distinction between epistemic and instrumental rationality becomes less helpful.
Here's an analogy: epistemic rationality is like physics, where the goal is to figure out what's true about the world, and instrumental rationality is like engineering, where the goal is to accomplish something you want as efficiently and effectively as possible. You need physics to do engineering; or I suppose you could say that doing engineering is doing physics, but with a practical goal. However, there's plenty of physics that's done for its own sake, and doesn't have obvious practical applications, at least not yet. (String theory, for example.) Similarly, you need a fair amount of epistemic rationality in order to be instrumentally rational, though there are parts of epistemic rationality that many of us practice for their own sake, and not as a means to an end. (For example, I appreciate clarifying my thinking about free will even though I don't expect it to change any of my behavior.)
In this analogy, many skills we focus on at CFAR are akin to essential math, like linear algebra or differential equations, which compose the fabric of both physics and engineering. It would be foolish to expect someone who wasn't comfortable with math to successfully calculate a planet's trajectory or design a bridge. And it would be similarly foolish to expect you to successfully update like a Bayesian or maximize your utility if you lacked certain underlying skills. Like, for instance: Noticing your emotional reactions, and being able to shift them if it would be useful. Doing thought experiments. Noticing and overcoming learned helplessness. Visualizing in concrete detail. Preventing yourself from flinching away from a thought. Rewarding yourself for mental habits you want to reinforce.
These and other building blocks of rationality are essential both for reaching truer beliefs, and for getting what you value; they don't fall cleanly into either an "epistemic" or an "instrumental" category. Which is why, when I consider what pieces of rationality CFAR should be developing, I've been thinking less in terms of "How can we be more epistemically rational?" or "How can we be more instrumentally rational?" and instead using queries like, "How can we be more metacognitive?"
2. We think more in terms of a modular mind.
The human mind isn't one coordinated, unified agent, but rather a collection of different processes that often aren't working in sync, or even aware of what the others are up to. Less Wrong certainly knows this; see, for example, discussions of anticipations versus professions, aliefs, and metawanting. But in general we gloss over that fact, because it's so much simpler and more natural to talk about "what I believe" or "what I want," even if technically there is no single "I" doing the believing or wanting. And for many purposes that kind of approximation is fine.
But a rationality-for-humans usually can't rely on that shorthand. Any attempt to change what "I" believe, or optimize for what "I" want, forces a confrontation of the fact that there are multiple, contradictory things that could reasonably be called "beliefs," or "wants," coexisting in the same mind. So a large part of applied rationality turns out to be about noticing those contradictions and trying to achieve coherence, in some fashion, before you can even begin to update on evidence or plan an action.
Many of the techniques we're developing at CFAR fall roughly into the template of coordinating between your two systems of cognition: implicit-reasoning System 1 and explicit-reasoning System 2. For example, knowing when each system is more likely to be reliable. Or knowing how to get System 2 to convince System 1 of something ("We're not going to die if we go talk to that stranger"). Or knowing what kinds of questions System 2 should ask of System 1 to find out why it's uneasy about the conclusion at which System 2 has arrived.
This is all, of course, with the disclaimer that the anthropomorphizing of the systems of cognition, and imagining them talking to each other, is merely a useful metaphor. Even the classification of human cognition into Systems 1 and 2 is probably not strictly true, but it's true enough to be useful. And other metaphors prove useful as well – for example, some difficulties with what feels like akrasia become more tractable when you model your future selves as different entities, as we do in the current version of our "Delegating to yourself" class.
3. We're more focused on emotions.
There's relatively little discussion of emotions on Less Wrong, but they occupy a central place in CFAR's curriculum and organizational culture.
It used to frustrate me when people would say something that revealed they held a Straw Vulcan-esque belief that "rationalist = emotionless robot". But now when I encounter that misconception, it just makes me want to smile, because I'm thinking to myself: "If you had any idea how much time we spend at CFAR talking about our feelings…"
Being able to put yourself into particular emotional states seems to make a lot of pieces of rationality easier. For example, for most of us, it's instrumentally rational to explore a wider set of possible actions – different ways of studying, holding conversations, trying to be happy, and so on – beyond whatever our defaults happen to be. And for most of us, inertia and aversions get in the way of that exploration. But getting yourself into "playful" mode (one of the hypothesized primary emotional circuits common across mammals) can make it easier to branch out into a wider swath of Possible-Action Space. Similarly, being able to call up a feeling of curiosity or of "seeking" (another candidate for a primary emotional circuit) can help you conquer motivated cognition and learned blankness.
And simply being able to notice your emotional state is rarer and more valuable than most people realize. For example, if you're in fight-or-flight mode, you're going to feel more compelled to reject arguments that feel like a challenge to your identity. Being attuned to the signs of sympathetic nervous system activation – that you're tensing up, or that your heart rate is increasing – means you get cues to double-check your reasoning, or to coax yourself into another emotional state.
We also use emotions as sources of data. You can learn to tap into feelings of surprise or confusion to get a sense of how probable you implicitly expect some event to be. Or practice simulating hypotheticals ("What if I knew that my novel would never sell well?") and observing your resultant emotions, to get a clearer picture of your utility function.
And emotions-as-data can be a valuable check on your System 2's conclusions. One of our standard classes is "Goal Factoring," which entails finding some alternate set of actions through which you can purchase the goods you want more cheaply. So you might reason, "I'm doing martial arts for the exercise and self-defense benefits... but I could purchase both of those things for less time investment by jogging to work and carrying Mace." If you listened to your emotional reaction to that proposal, however, you might notice you still feel sad about giving up martial arts even if you were getting the same amount of exercise and self-defense benefits some other way.
Which probably means you've got other reasons for doing martial arts that you haven't yet explicitly acknowledged -- for example, maybe you just think it's cool. If so, that's important, and deserves a place in your decisionmaking. Listening for those emotional cues that your explicit reasoning has missed something is a crucial step, and to the extent that aspiring rationalists sometimes forget it, I suppose that's a Steel-Manned Straw Vulcan (Steel Vulcan?) that actually is worth worrying about.
Conclusion
I'll name one more trait that unites, rather than divides, CFAR and Less Wrong. We both diverge from "traditional" rationality in that we're concerned with determining which general methods systematically perform well, rather than defending some set of methods as "rational" on a priori criteria alone. So CFAR's picture of what rationality looks like, and how to become more rational, will and should change over the coming years as we learn more about the effects of our rationality training efforts.
To what degree do you model people as agents?
The idea for this post came out of a conversation during one of the Less Wrong Ottawa events. A joke about being solipsist turned into a genuine question–if you wanted to assume that people were figments of your imagination, how much of a problem would this be? (Being told "you would be problematic if I were a solipsist" is a surprising compliment.)
You can rephrase the question as "do you model people as agents versus complex systems?" or "do you model people as PCs versus NPCs?" (To me these seem like a reframing of the same question, with a different connotation/focus; to other people they might seem like different questions entirely). Almost everyone at the table immediately recognized what we were talking about and agreed that modelling some people as agents and some people as complex systems was a thing they did. However, pretty much everything else varied–how much they modelled people as agents overall, how much it varied in between different people they knew, and how much this impacted the moral value that they assigned to other people. I suspect that another variable is "how much you model yourself as an agent"; this probably varies between people and impacts how they model others.
What does it mean to model someone as an agent?
The conversation didn't go here in huge amounts of detail, but I expect that due to typical mind fallacy, it's a fascinating discussion to have–that the distinctions that seem clear and self-evident to me probably aren't what other people use at all. I'll explain mine here.
1. Reliability and responsibility. Agenty people are people I feel I can rely on, who I trust to take heroic responsibility. If I have an unsolved problem and no idea what to do, I can go to them in tears and say "fix this please!" And they will do it. They'll pull out a solution that surprises me and that works. If the first solution doesn't work, they will keep trying.
In this sense, I model my parents strongly as agents–I have close to 100% confidence that they will do whatever it takes to solve a problem for me. There are other people who I trust to execute a pre-defined solution for me, once I've thought of it, like "could you do me a huge favour and drive me to the bike shop tomorrow at noon?" but whom I wouldn't go to with "AAAAH my bike is broken, help!" There are other people who I wouldn't ask for help, period. Some of them are people I get along with well and like a lot, but they aren't reliable, and they're further down the mental gradient towards NPC.
The end result of this is that I'm more likely to model people as agents if I know them well and have some kind of relationship where I would expect them to want to help me. Of course, this is incomplete, because there are brilliant, original people who I respect hugely, but who I don't know well, and I wouldn't ask or expect them to solve a problem in my day-to-day life. So this isn't the only factor.
2. Intellectual formidability. To what extent someone comes up with ideas that surprise me and seem like things I would never have thought of on my own. This also includes people who have accomplished things that I can't imagine myself succeeding at, like startups. In this sense, there are a lot of bloggers, LW posters, and people on the CFAR mailing list who are major PCs in my mental classification system, but who I may not know personally at all.
3. Conventional "agentiness". The degree to which a person's behaviour can be described by "they wanted X, so they took action Y and got what they wanted", as opposed to "they did X kind of at random, and Y happened." When people seem highly agenty to me, I model their mental processes like this–my brother is one of them. I take the inside view, imagining that I wanted the thing they want and had their characteristics, i.e. relative intelligence, domain-specific expertise, social support, etc, and this gives better predictions than past behaviour. There are other people whose behaviour I predict based on how they've behaved in the past, using the outside view, while barely taking into account what they say they want in the future, and this is what gives useful predictions.
This category also includes the degree to which people have a growth mindset, which approximates how much they expect themselves to behave in an agenty way. My parents are a good example of people who are totally 100% reliable, but don't expect or want to change their attitudes or beliefs much in the next twenty years.
These three categories probably don't include all the subconscious criteria I use, but they're the main ones I can think of.
How does this affect relationships with people?
With people who I model as agents, I'm more likely to invoke phrases like "it was your fault that X happened" or "you said you would do Y, why didn't you?" The degree to which I feel blame or judgement towards people for not doing things they said they would do is almost directly proportional to how much I model them as agents. For people who I consider less agenty, whom I model more as complex systems, I'm more likely to skip the blaming step and jump right to "what are the things that made it hard for you to do Y? Can we fix them?"
On reflection, it seems like the latter is a healthier way to treat myself, and I know this (and consistently fail at doing this). However, I want to be treated like an agent by other people, not a complex system; I want people to give me the benefit of the doubt and assume that I know what I want and am capable of planning to get it. I'm not sure what this means for how I should treat other people.
How does this affect moral value judgements?
For me, not at all. My default, probably hammered in by years of nursing school, is to treat every human as worthy of dignity and respect. (On a gut level, it doesn't include animals, although it probably should. On an intellectual level, I don't think animals should be mistreated, but animal suffering doesn't upset me on the same visceral level that human suffering does. I think that on a gut level, my "circle of empathy" includes human dead bodies more than it includes animals).
One of my friends asked me recently if I got frustrated at work, taking care of people who had "brought their illness on themselves", i.e. by smoking, alcohol, drug use, eating junk food for 50 years, or whatever else people usually put in the category of "lifestyle choices." Honestly, I don't; it's not a distinction my brain makes. Some of my patients will recover, go home, and make heroic efforts to stay healthy; others won't, and will turn up back in the ICU at regular intervals. It doesn't affect how I feel about treating them; it feels meaningful either way. The one time I'm liable to get frustrated is when I have to spend hours of hard work on patients who are severely neurologically damaged and are, in a sense, dead already, or at least not people anymore. I hate this. But my default is still to talk to them, keep them looking tidy and comfortable, et cetera...
In that sense, I don't know whether modelling different people differently is, for me, a morally right or wrong thing to do. However, I spoke to someone whose default is not to assign people moral value unless he models them as agents. I can see this being problematic, since it's a high standard.
Conclusion
As usual for when I notice something new about my thinking, I expect to pay a lot of attention to this over the next few weeks, and probably notice some interesting things, and quite possibly change the way I think and behave. I think I've already succeeded in finding the source of some mysterious frustration with my roommate; I want to model her as an agent because of #1–she's my best friend and we've been through a lot together–but in the sense of #3, she's one of the least agenty people I know. So I consistently, predictably get mad at her for things like saying she'll do the dishes and then not doing them, and getting mad doesn't help either of us at all.
I'm curious to hear what other people think of this idea.
Why Eat Less Meat?
Previously, I wrote on LessWrong about the preliminary evidence in favor of using leaflets to promote veganism as a way of cost-effectively reducing suffering. In response, there was a large discussion with 530+ comments. In this discussion, I found that a lot of people wanted me to write about why I think nonhuman animals deserve our concern in the first place.
Therefore, I wrote this essay to defend the view that if one cares about suffering, one should also care about nonhuman animals, since (1) they are capable of suffering, (2) they do suffer quite a lot, and (3) we can prevent their suffering. I hope that we can have a sober, non-mind-killing discussion about this topic, since it's possibly quite important.
Introduction
For the past two years, the only place I ate meat was at home with my family. As of October 2012, I've finally stopped eating meat altogether and can't see a reason why I would want to go back. This kind of diet is commonly classified as "vegetarianism": one refrains from eating the flesh of all animals, including fish, but still consumes animal products like eggs and milk (though I try to avoid eggs as best I can).
Why might I want to do this? And why might I see it as a serious issue? It's because I'm very concerned about the reality of suffering done to our "food animals" in the process of making them into meat, because I see vegetarianism as a way to reduce this suffering by stopping the harmful process, and because vegetarianism has not been hard at all for me to accomplish.
Animals Can Suffer
Back in the 1600s, René Descartes thought nonhuman animals were soulless automatons that could respond to their environment and react to stimuli, but could not feel anything -- humans were the only truly conscious species. Descartes hit on an important point: since feelings are completely internal to the animal doing the feeling, it is impossible to demonstrate that anyone is truly conscious.
However, when it comes to humans, we don't let that stop us from assuming other people feel pain. When we jab a person with a needle, no matter who they are, where they come from, or what they look like, they share a rather universal reaction of what we consider to be evidence of pain. We also extend this assumption to our pets -- we go to great lengths to avoid harming kittens, puppies, and other companion animals, and no one would kick a puppy or light a kitten on fire just because its consciousness cannot be directly observed. We even go as far as passing laws against animal cruelty.
The animals we eat are no different. Pigs, chickens, cows, and fish all have strikingly similar responses to stimuli that we would normally agree cause pain to humans and pets. Jab a pig with a needle, kick a chicken, or light a cow on fire, and it will react aversively like any cat, dog, horse, or human.
The Science
But we don't need to rely on just our intuition -- instead, we can look at the science. Animal scientists Temple Grandin and Mark Deesing conclude that "[o]ur review of the literature on frontal cortex development enables us to conclude that all mammals, including rats, have a sufficiently developed prefrontal cortex to suffer from pain". An interview of seven different scientists concludes that animals can suffer.
Dr. Jane Goodall, famous for having studied animals, writes in her introduction to The Inner World of Farm Animals that "farm animals feel pleasure and sadness, excitement and resentment, depression, fear, and pain. They are far more aware and intelligent than we ever imagined…they are individuals in their own right." Farm Sanctuary, an animal welfare organization, has a good overview documenting this research on animal emotion.
Lastly, among much other evidence, in the "Cambridge Declaration On Consciousness", a prominent international group of cognitive neuroscientists, neuropharmacologists, neurophysiologists, neuroanatomists, and computational neuroscientists states:
Convergent evidence indicates that non-human animals have the neuroanatomical, neurochemical, and neurophysiological substrates of conscious states along with the capacity to exhibit intentional behaviors. Consequently, the weight of evidence indicates that humans are not unique in possessing the neurological substrates that generate consciousness. Nonhuman animals, including all mammals and birds, and many other creatures, including octopuses, also possess these neurological substrates.
Factory Farming Causes Considerable Suffering
However, the fact that animals can suffer is just one piece of the picture; we next have to establish that animals do suffer as a result of people eating meat. Honestly, this is easier shown than told -- there's an extremely harrowing and shocking 11-minute video documenting the cruelty. Watching that video is perhaps the easiest way to see the suffering of nonhuman animals in these "factory farms" first hand.
In making the case clear, Vegan Outreach writes "Many people believe that animals raised for food must be treated well because sick or dead animals would be of no use to agribusiness. This is not true."
They then go on to document, with sources, how virtually all birds raised for food are from factory farms where "resulting ammonia levels [from densely populated sheds and accumulated waste] commonly cause painful burns to the birds' skin, eyes, and respiratory tracts" and how hens "become immobilized and die of asphyxiation or dehydration", having been "[p]acked in cages (usually less than half a square foot of floor space per bird)". In fact, 137 million chickens suffer to death each year before they can even make it to slaughter -- more than the number of animals killed for fur, in shelters and in laboratories combined!
Farm Sanctuary also provides an excellent overview of the cruelty of factory farming, writing "Animals on factory farms are regarded as commodities to be exploited for profit. They undergo painful mutilations and are bred to grow unnaturally fast and large for the purpose of maximizing meat, egg, and milk production for the food industry."
It seems clear that factory farming practices are truly deplorable, and certainly are not worth the benefit of eating a slightly tastier meal. In "An Animal's Place", Michael Pollan writes:
To visit a modern CAFO (Confined Animal Feeding Operation) is to enter a world that, for all its technological sophistication, is still designed according to Cartesian principles: animals are machines incapable of feeling pain. Since no thinking person can possibly believe this any more, industrial animal agriculture depends on a suspension of disbelief on the part of the people who operate it and a willingness to avert your eyes on the part of everyone else.
Vegetarianism Can Make a Difference
Many people see the staggering amount of suffering in factory farms and, if they don't dismiss it outright, will say that there's no way they can make a difference by changing their eating habits. However, this is certainly not the case!
How Many Would Be Saved?
Drawing from the 2010 Livestock Slaughter Animal Summary and the Poultry Slaughter Animal Summary, 9.1 billion land animals are either grown in the US or imported (94% of which are chickens!), 1.6 billion are exported, and 631 million die before anyone can eat them, leaving 8.1 billion land animals for US consumption each year.
A naïve average would divide this total among the population of the US, which is 311 million, assigning 26 land animals to each person's annual consumption. Thus, by being vegetarian, you save roughly 26 land animals a year that you would otherwise have eaten. And this doesn't even count fish, whose numbers could be quite high given how many fish need to be grown just to be fed to bigger fish!
Yet, this is not quite true. It's important to note that supply and demand aren't perfectly linear. If you reduce your demand for meat, the suppliers will react by lowering the price of meat a little bit, making it so more people can buy it. Since chickens dominate the meat market, we'll adjust using the supply elasticity of chickens (0.22) and the demand elasticity of chickens (-0.52): the change in production per unit of reduced demand works out to 0.22 / (0.22 + 0.52), or about 0.3. Taking this multiplier, it's more accurate to say you're saving 7.8 land animals a year, or more. There are a lot of complex considerations in estimating elasticities, though, so we should treat this figure as having some uncertainty.
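For the curious, the arithmetic above can be sketched in a few lines. This is just a back-of-the-envelope check using the figures already quoted in this post; the elasticity values are the cited estimates, not settled numbers:

```python
# Back-of-the-envelope check of the per-person and elasticity-adjusted figures.
# All inputs come from the post itself; elasticities are rough estimates.

us_land_animals = 8.1e9   # land animals available for US consumption per year
us_population = 311e6     # approximate US population at the time

naive_per_person = us_land_animals / us_population  # ~26 animals per person

# If you stop buying, prices fall slightly and others buy a bit more, so
# production falls by less than one animal per animal you forgo.
# Standard adjustment: multiplier = e_supply / (e_supply - e_demand).
e_supply = 0.22
e_demand = -0.52
multiplier = e_supply / (e_supply - e_demand)  # ~0.30

animals_spared = naive_per_person * multiplier  # ~7.7, close to the post's 7.8
print(round(naive_per_person), round(multiplier, 2), round(animals_spared, 1))
```

The exact product comes out slightly under 7.8 because 26 and 0.3 are themselves rounded; the point is the order of magnitude, not the last decimal.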
Collective Action
One might object that since meat is often bought in bulk, reducing your meat consumption won't affect the amount of meat bought, and thus the suffering will stay the same, except with meat gone to waste. However, this ignores the effect of many different vegetarians acting together.
Imagine that your supermarket buys chicken wings in cases of 200. It would thus take 200 people together each buying one less wing for the supermarket to order one fewer case. However, you have no idea whether you're vegetarian #1 or vegetarian #56 or vegetarian #200, the one who tips the supermarket into buying 200 fewer wings. You can thus estimate that by buying one less wing you have a 1-in-200 chance of reducing supply by 200 wings, which in expectation is equivalent to reducing the supply by one wing. So the bulk-buying effect basically cancels out. See here or here for more.
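The tipping-point reasoning above is just an expected-value calculation, and can be sketched in a few lines (the case size of 200 is the post's hypothetical, not a real supermarket figure):

```python
# Expected-value sketch of the "tipping point" argument above.
# case_size is the post's hypothetical: wings are ordered in cases of 200.
case_size = 200

# Exactly one of the 200 abstaining buyers is the tipping point that causes
# one fewer case to be ordered. Not knowing your position, your chance of
# being that buyer is 1/200, and tipping saves a full case of 200 wings.
p_tipping = 1 / case_size
wings_saved_if_tipping = case_size

expected_wings_saved = p_tipping * wings_saved_if_tipping
print(expected_wings_saved)  # 1.0 -- as if supply tracked each purchase directly
```

Note that the case size drops out entirely: (1/N) * N = 1 for any N, which is why bulk purchasing doesn't weaken the argument.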
Every time you buy factory farmed meat, you are creating demand for that product, essentially saying "Thank you, I liked what you are doing and want to encourage you to do it more". By eating less meat, we can stop our support of this industry.
Vegetarianism Is Easier Than You Think
So nonhuman animals can suffer, and do suffer in factory farms, and we can help stop this suffering by eating less meat. I know people who get this far, but then stop and say that, as much as they would like to, there's no way they could be vegetarian because they like meat too much! However, such enjoyment of meat shouldn't count for much compared to the massive suffering each animal undergoes just to be farmed -- imagine someone refusing to stop eating your pet just because they enjoyed the taste so much!
This is less of a problem than you might think, because being a vegetarian is really easy. Most people only think about what they would have to give up and how good it tastes, and don't think about the tasty things they could eat instead that have no meat in them. When I first decided to be a vegetarian, I simply switched from tasty hamburgers to tasty veggieburgers and there was no problem at all.
A Challenge
To those who say that vegetarianism is too hard, I’d like to simply challenge you to just try it for a few days. Feel free to give up afterward if you find it too hard. But I imagine that you should do just fine, find great replacements, and be able to save animals from suffering in the process.
If reducing suffering is one of your goals, there's no reason why you must be either a die-hard meat eater or a die-hard vegetarian. Instead, feel free to explore some middle ground. You could be a vegetarian on weekdays but eat meat on weekends, or just try Meatless Mondays, or simply eat less meat. You could also eat bigger animals like cows instead of fish or chicken, getting the same amount of meat from far fewer animals and thus with significantly less suffering.
-
(This was also cross-posted on my blog.)