It's that time of year again.
If you are reading this post and self-identify as a LWer, then you are the target population for the Less Wrong Census/Survey. Please take it. Doesn't matter if you don't post much. Doesn't matter if you're a lurker. Take the survey.
This year's census contains a "main survey" that should take about ten or fifteen minutes, as well as a bunch of "extra credit questions". You may do the extra credit questions if you want. You may skip all the extra credit questions if you want. They're pretty long and not all of them are very interesting. But it is very important that you not put off doing the survey or not do the survey at all because you're intimidated by the extra credit questions.
It also contains a chance at winning a MONETARY REWARD at the bottom. You do not need to fill in all the extra credit questions to get the MONETARY REWARD, just make an honest stab at as much of the survey as you can.
Please make things easier for my computer and by extension me by reading all the instructions and by answering any text questions in the simplest and most obvious possible way. For example, if it asks you "What language do you speak?" please answer "English" instead of "I speak English" or "It's English" or "English since I live in Canada" or "English (US)" or anything else. This will help me sort responses quickly and easily. Likewise, if a question asks for a number, please answer with a number such as "4", rather than "four".
The planned closing date for the survey is Friday, November 14. Instead of putting the survey off and then forgetting to do it, why not fill it out right now?
Okay! Enough preliminaries! Time to take the...
[EDIT: SURVEY CLOSED, DO NOT TAKE!]
Thanks to everyone who suggested questions and ideas for the 2014 Less Wrong Census/Survey. I regret I was unable to take all of your suggestions into account, because of some limitations in Google Docs, concern about survey length, and contradictions/duplications among suggestions. The current survey is a mess and requires serious shortening and possibly a hard and fast rule that it will never get longer than it is right now.
By ancient tradition, if you take the survey you may comment saying you have done so here, and people will upvote you and you will get karma.
This is an essay describing some of my motivation to be an effective altruist. It is crossposted from my blog. Many of the ideas here are quite similar to others found in the sequences. I have a slightly different take, and after adjusting for the typical mind fallacy I expect that this post may contain insights that are new to many.
I'm not very good at feeling the size of large numbers. Once you start tossing around numbers larger than 1000 (or maybe even 100), the numbers just seem "big".
Consider Sirius, the brightest star in the night sky. If you told me that Sirius is as big as a million earths, I would feel like that's a lot of Earths. If, instead, you told me that you could fit a billion Earths inside Sirius… I would still just feel like that's a lot of Earths.
The feelings are almost identical. In context, my brain grudgingly admits that a billion is a lot larger than a million, and puts forth a token effort to feel like a billion-Earth-sized star is bigger than a million-Earth-sized star. But out of context — if I wasn't anchored at "a million" when I heard "a billion" — both these numbers just feel vaguely large.
I feel a little respect for the bigness of numbers, if you pick really really large numbers. If you say "one followed by a hundred zeroes", then this feels a lot bigger than a billion. But it certainly doesn't feel (in my gut) like it's 10 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 times bigger than a billion. Not in the way that four apples internally feels like twice as many as two apples. My brain can't even begin to wrap itself around this sort of magnitude differential.
This phenomenon is related to scope insensitivity, and it's important to me because I live in a world where sometimes the things I care about are really really numerous.
For example, billions of people live in squalor, with hundreds of millions of them deprived of basic needs and/or dying from disease. And though most of them are out of my sight, I still care about them.
The loss of a human life with all its joys and all its sorrows is tragic no matter what the cause, and the tragedy is not reduced simply because I was far away, or because I did not know of it, or because I did not know how to help, or because I was not personally responsible.
Knowing this, I care about every single individual on this planet. The problem is, my brain is simply incapable of taking the amount of caring I feel for a single person and scaling it up by a billion times. I lack the internal capacity to feel that much. My care-o-meter simply doesn't go up that far.
And this is a problem.
Followup to: Announcing the 2014 program equilibrium iterated PD tournament
In August, I announced an iterated prisoner's dilemma tournament in which bots can simulate each other before making a move. Eleven bots were submitted to the tournament. Today, I am pleased to announce the final standings and release the source code and full results.
All of the source code submitted by the competitors and the full results for each match are available here. See here for the full set of rules and tournament code.
Before we get to the final results, here's a quick rundown of the bots that competed:
AnderBot follows a simple tit-for-tat-like algorithm that eschews simulation:
- On the first turn, Cooperate.
- For the next 10 turns, play tit-for-tat.
- For the rest of the game, Defect with 10% probability or Defect if the opposing bot has defected more times than AnderBot.
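Since AnderBot's rules are so simple, they can be sketched in a few lines. The function below is my own illustrative reconstruction in Python, not the tournament's actual code: the names `anderbot_move`, `history_self`, and `history_opp` are assumptions, and the real submissions used the tournament's own bot interface.

```python
import random

COOPERATE, DEFECT = "C", "D"

def anderbot_move(history_self, history_opp):
    """A sketch of AnderBot's simulation-free strategy.

    history_self / history_opp: lists of prior moves ("C" or "D"),
    oldest first. The interface is illustrative, not the
    tournament's actual API.
    """
    turn = len(history_self)
    if turn == 0:
        return COOPERATE              # first turn: cooperate
    if turn <= 10:
        return history_opp[-1]        # next 10 turns: tit-for-tat
    # Afterwards: defect 10% of the time, or whenever the opponent
    # has defected more often than AnderBot has.
    if random.random() < 0.10:
        return DEFECT
    if history_opp.count(DEFECT) > history_self.count(DEFECT):
        return DEFECT
    return COOPERATE
```

Note that nothing here inspects the opponent's source code or simulates it, which is exactly the point: AnderBot competes in a simulation-enabled tournament while ignoring that capability entirely.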
We've recently published a guide to MIRI's research on MIRI's website. It overviews some of the major open problems in FAI research, and provides reading lists for those who want to get familiar with MIRI's technical agenda.
This guide updates and replaces the MIRI course list that started me on the path of becoming a MIRI researcher over a year ago. Many thanks to Louie Helm, who wrote the previous version.
This guide is a bit more focused than the old course list, and points you not only towards prerequisite textbooks but also towards a number of relevant papers and technical reports in something approximating the "appropriate order." By following this guide, you can get yourself pretty close to the cutting edge of our technical research (barring some results that we haven't written up yet). If you intend to embark on that quest, you are invited to let me know; I can provide both guidance and encouragement along the way.
I've reproduced the guide below. The canonical version is at intelligence.org/research-guide, and I intend to keep that version up to date. This post will not be kept current.
Finally, a note on content: the guide below discusses a number of FAI research subfields. The goal is to overview, rather than motivate, those subfields. These sketches are not intended to carry any arguments. Rather, they attempt to convey our current conclusions to readers who are already extending us significant charity. We're hard at work producing a number of documents describing why we think these particular subfields are important. (The first was released a few weeks ago; the rest should be published over the next two months.) In the meantime, please understand that the research guide is neither able nor intended to provide strong motivation for these particular problems.
Friendly AI theory currently isn't about implementation, it's about figuring out how to ask the right questions. Even if we had unlimited finite computing resources and a solid understanding of general intelligence, we still wouldn't know how to specify a system that would reliably have a positive impact during and after an intelligence explosion. Such is the state of our ignorance.
For now, MIRI's research program aims to develop solutions that assume access to unbounded finite computing power, not because unbounded solutions are feasible, but in the hope that these solutions will help us understand which questions need to be answered in order to lay the groundwork for the eventual specification of a Friendly AI. Hence, our current research is primarily in mathematics (as opposed to software engineering or machine learning, as many expect).
This guide outlines the topics that one can study to become able to contribute to one or more of MIRI’s active research areas.
Many people have an incorrect view of the Future of Humanity Institute's funding situation, so this is a brief note to correct that; think of it as a spiritual successor to this post. As John Maxwell puts it, FHI is "one of the three organizations co-sponsoring LW [and] a group within the University of Oxford's philosophy department that tackles important, large-scale problems for humanity like how to go about reducing existential risk." (If you're not familiar with our work, this article is a nice, readable introduction, and our director, Nick Bostrom, wrote Superintelligence.) Though we are a research institute in an ancient and venerable institution, this does not guarantee funding or long-term stability.
I've heard of the concept of "weirdness points" many times before, but after a bit of searching I can't find a definitive post describing the concept, so I've decided to make one. As a disclaimer, I don't think the evidence backing this post is all that strong and I am skeptical, but I do think it's strong enough to be worth considering, and I'm probably going to make some minor life changes based on it.
Chances are that if you're reading this post, you're probably a bit weird in some way.
No offense, of course. In fact, I actually mean it as a compliment. Weirdness is incredibly important. If people weren't willing to deviate from society and hold weird beliefs, we wouldn't have had the important social movements that ended slavery and pushed back against racism, that created democracy, that expanded social roles for women, and that made the world a better place in numerous other ways.
Many things we now take for granted as part of what makes our current society great were once... weird.
Joseph Overton theorized that policy develops through six stages: unthinkable, then radical, then acceptable, then sensible, then popular, then actual policy. We can see this happen with many policies -- currently same-sex marriage is making its way from popular to actual policy, but not too long ago it was merely acceptable, and not too long before that it was pretty radical.
Some good ideas are currently in the radical range. Effective altruism itself is a collection of beliefs that typical people would consider pretty radical. Many people think donating 3% of their income is a lot, let alone the 10% that Giving What We Can asks its members to pledge, or the 50%+ that some people in the community give.
And that's not all. Others suggest that everyone become vegetarian, advocate for open borders and/or universal basic income, call for the abolishment of gendered language, push for more resources going into mitigating existential risk, focus on research into Friendly AI, promote cryonics and curing death, etc.
While many of these ideas might make the world a better place if made into policy, all of these ideas are pretty weird.
Weirdness, of course, is a drawback. People take weird opinions less seriously.
The absurdity heuristic is a real bias that people -- even you -- have. If an idea sounds weird to you, you're less likely to try to believe it, even if there's overwhelming evidence. And social proof matters -- if fewer people believe something, people will be less likely to believe it. Lastly, don't forget the halo effect -- if one part of you seems weird, the rest of you will seem weird too!
...But we can use this knowledge to our advantage. The halo effect can work in reverse -- if we're normal in many ways, our weird beliefs will seem more normal too. If we have a notion of weirdness as a kind of currency that we have a limited supply of, we can spend it wisely, without looking like a crank.
All of this leads to the following actionable principles:
Recognize you only have a few "weirdness points" to spend. Trying to convince all your friends to donate 50% of their income to MIRI, become a vegan, get a cryonics plan, and demand open borders will be met with a lot of resistance. But -- I hypothesize -- if you pick one of these ideas and push it, you'll have a lot more success.
Spend your weirdness points effectively. Perhaps it's really important that people advocate for open borders. But, perhaps, getting people to donate to developing world health would overall do more good. In that case, I'd focus on moving donations to the developing world and leave open borders alone, even though it is really important. You should triage your weirdness effectively the same way you would triage your donations.
Clean up and look good. Lookism is a problem in society, and I wish people could look "weird" and still be socially acceptable. But if you're a guy wearing a dress in public, or some punk rocker vegan advocate, recognize that you're spending your weirdness points fighting lookism, which means fewer weirdness points to spend promoting veganism or something else.
Advocate for more "normal" policies that are almost as good. Of course, allocating your "weirdness points" on a few issues doesn't mean you have to stop advocating for other important issues -- just consider being less weird about it. Perhaps universal basic income truly would be a very effective policy to help the poor in the United States. But reforming the earned income tax credit and relaxing zoning laws would also both do a lot to help the poor in the US, and such suggestions aren't weird.
Use the foot-in-door technique and the door-in-face technique. The foot-in-door technique involves starting with a small ask and gradually building it up, such as suggesting people donate a little bit effectively, and then gradually getting them to take the Giving What We Can pledge. The door-in-face technique involves making a big ask (e.g., join Giving What We Can) and then retreating to a smaller one, like the Life You Can Save pledge or Try Out Giving.
Reconsider effective altruism's clustering of beliefs. Right now, effective altruism is associated strongly with donating a lot of money and donating effectively, and less strongly with impact in career choice, veganism, and existential risk. Of course, I'm not saying that we should drop some of these memes completely. But maybe EA should disconnect a bit more and compartmentalize -- leaving AI risk to MIRI, for example, and not talking about it much, say, on 80,000 Hours. And maybe instead of asking people to both give more AND give more effectively, we could focus more exclusively on asking people to donate what they already do more effectively.
Evaluate the above with more research. While I think the evidence base behind this is decent, it's not great and I haven't spent that much time developing it. I think we should look into this more with a review of the relevant literature and some careful, targeted, market research on the individual beliefs within effective altruism (how weird are they?) and how they should be connected or left disconnected. Maybe this has already been done some?
This is crossposted from my blog. In this post, I discuss how Newcomblike situations are common among humans in the real world. The intended audience of my blog is wider than the reader base of LW, so the tone might seem a bit off. Nevertheless, the points made here are likely new to many.
Last time we looked at Newcomblike problems, which cause trouble for Causal Decision Theory (CDT), the standard decision theory used in economics, statistics, narrow AI, and many other academic fields.
These Newcomblike problems may seem like strange edge case scenarios. In the Token Trade, a deterministic agent faces a perfect copy of themself, guaranteed to take the same action as they do. In Newcomb's original problem there is a perfect predictor Ω which knows exactly what the agent will do.
Both of these examples involve some form of "mind-reading" and assume that the agent can be perfectly copied or perfectly predicted. In a chaotic universe, these scenarios may seem unrealistic and even downright crazy. What does it matter that CDT fails when there are perfect mind-readers? There aren't perfect mind-readers. Why do we care?
The reason that we care is this: Newcomblike problems are the norm. Most problems that humans face in real life are "Newcomblike".
These problems aren't limited to the domain of perfect mind-readers; rather, problems with perfect mind-readers are the domain where these problems are easiest to see. However, they arise naturally whenever an agent is in a situation where others have knowledge about its decision process via some mechanism that is not under its direct control.
[Originally posted to my personal blog, reposted here with edits.]
"You could call it heroic responsibility, maybe," Harry Potter said. "Not like the usual sort. It means that whatever happens, no matter what, it's always your fault. Even if you tell Professor McGonagall, she's not responsible for what happens, you are. Following the school rules isn't an excuse, someone else being in charge isn't an excuse, even trying your best isn't an excuse. There just aren't any excuses, you've got to get the job done no matter what." Harry's face tightened. "That's why I say you're not thinking responsibly, Hermione. Thinking that your job is done when you tell Professor McGonagall—that isn't heroine thinking. Like Hannah being beat up is okay then, because it isn't your fault anymore. Being a heroine means your job isn't finished until you've done whatever it takes to protect the other girls, permanently." In Harry's voice was a touch of the steel he had acquired since the day Fawkes had been on his shoulder. "You can't think as if just following the rules means you've done your duty." –HPMOR, chapter 75.
Bold attempts aren't enough, roads can't be paved with intentions...
You probably don't even got what it takes,
But you better try anyway, for everyone's sake
And you won't find the answer until you escape from the
Labyrinth of your conventions.
It's time to just shut up, and do the impossible.
Can't walk away...
Gotta break off those shackles, and shake off those chains
Gotta make something impossible happen today...
The Well-Functioning Gear
I feel like maybe the hospital is an emergent system that has the property of patient-healing, but I'd be surprised if any one part of it does.

Suppose I see an unusual result on my patient. I don't know what it means, so I mention it to a specialist. The specialist, who doesn't know anything about the patient beyond what I've told him, says to order a technetium scan. He has no idea what a technetium scan is or how it is performed, except that it's the proper thing to do in this situation. A nurse is called to bring the patient to the scanner, but has no idea why. The scanning technician, who has only a vague idea why the scan is being done, does the scan and spits out a number, which ends up with me. I bring it to the specialist, who gives me a diagnosis and tells me to ask another specialist what the right medicine for that is. I ask the other specialist – who has only the sketchiest idea of the events leading up to the diagnosis – about the correct medicine, and she gives me a name and tells me to ask the pharmacist how to dose it. The pharmacist – who has only the vague outline of an idea who the patient is, what test he got, or what the diagnosis is – doses the medication. Then a nurse, who has no idea about any of this, gives the medication to the patient. Somehow, the system works and the patient improves.

Part of being an intern is adjusting to all of this, losing some of your delusions of heroism, getting used to the fact that you're not going to be Dr. House, that you are at best going to be a very well-functioning gear in a vast machine that does often tedious but always valuable work. –Scott Alexander
Recursive Heroic Responsibility
Heroic responsibility for average humans under average conditions
I can predict at least one thing that people will say in the comments, because I've heard it hundreds of times–that Swimmer963 is a clear example of someone who should leave nursing, take the meta-level responsibility, and do something higher impact, for the usual reasons. Because she's smart. Because she's rational. Whatever.
Fine. This post isn't about me. Whether I like it or not, the concept of heroic responsibility is now a part of my value system, and I probably am going to leave nursing.
But what about the other nurses on my unit, the ones who are competent and motivated and curious and really care? Would familiarity with the concept of heroic responsibility help or hinder them in their work? Honestly, I predict that they would feel alienated, that they would assume I held a low opinion of them (which I don't, and I really don't want them to think that I do), and that they would flinch away and go back to the things that they were doing anyway, the role where they were comfortable–or that, if they did accept it, it would cause them to burn out. So as a consequentialist, I'm not going to tell them.
And yeah, that bothers me. Because I'm not a special snowflake. Because I want to live in a world where rationality helps everyone. Because I feel like the reason they would react that way isn't because of anything about them as people, or because heroic responsibility is a bad thing, but because I'm not able to communicate to them what I mean. Maybe stupid reasons. Still bothers me.
Around 2009, I embarked on being a serious amateur historian. I wouldn't have called it that at the time, but since then, I've basically nonstop studied various histories.
The payoffs of history come slow at first, and then fast. History is often written as a series of isolated events, and events are rarely put in total context. You can easily draw a straight line from Napoleon's invasions of the fragmented German principalities to how Bismarck and Moltke were able to unify a German Confederation under Prussian rule a few decades later; from there, it's a straight line to World War I due to great power rivalry; the Treaty of Versailles is easily understood in retrospect by historical French/German enmity; this gives rise to World War II.
That series of events is hard enough to truly get one's mind around, not just in abstract academic terms, but in actually getting a feel of how and why the actors did what they did, which shaped the outcomes that built the world.
And that's only the start of it: once you can flesh out the rest of the map, history starts coming brilliantly alive.
Without Prime Minister Stolypin's assassination in 1911, likely the Bolsheviks don't succeed in Russia; without that, Stalin is not at the helm when the Nazis invade.
On the other side of the Black Sea, in 1918, the Ottoman Empire is having terms worse than the Treaty of Versailles imposed on it -- until Mustafa Kemal leads the Turkish War of Independence, building one of the most stable states in the Middle East. Turkey, following Kemal's skill at governance and diplomacy, is able to (with great difficulty) stay neutral in World War II, not be absorbed by the Soviets, and not have its government taken over by hard-line Muslims.
This was not-at-all an obvious course of events. Without Kemal, Turkey almost certainly becomes crippled under the Treaty of Sevres, and eventually likely winds up as a member of the Axis during World War II, or gets absorbed as another Soviet/Warsaw Pact satellite state.
The chain of events goes on and on. There is an eminently clear chain of events from Martin Luther at Worms in 1521 to the American Revolution. Meanwhile, the Lord Protectorate and Commonwealth of England turned out less promising than was hoped -- ironically, arguably predisposing England to be less sympathetic to greater democracy. But the colonies were shielded from this, and their original constitutions and charters were never amended by a now-more-disenchanted-with-democracy England. Following a lack of consistent colonial policy and a lot of vacillating by various British governments, the American Revolution happens, and Britain loses control of the land and people that would come to supplant it as the dominant world power a century and a half later.
Until you can start seeing the threads and chains of history across nations, interactions, and long stretches of time, history is a set of often-interesting stories -- but the larger picture remains blurry and out-of-focus. The lessons come once you can synthesize it all.
Hideyoshi Toyotomi's 1588 sword hunt was designed to take away the weapons, and thus the chances, of rebellious factions overthrowing his unified government of Japan. The policy was continued by his successor after the Toyotomi/Tokugawa civil war, which led to the Tokugawa forces losing to the Imperial Restoration in 1868: their skill at warfare had atrophied, and common soldiers with Western artillery were able to out-fight samurai with obsolete weapons.
Nurhaci founded the Qing Dynasty around the time Japan was being unified, with a mix of better command structures and tactics. But the dynasty hardened into traditionalism and was backwards-looking when Western technology and imperialists came with greater frequency in the late 1800s. The Japanese foreign minister Ito Hirobumi offered to help the Qing modernize along the lines Imperial Japan had, while looking for a greater alliance with the Chinese. But Empress Dowager Cixi arrested and executed the reform-minded ministers of Emperor Guangxu and later, most likely, poisoned the Emperor himself. (He died of arsenic poisoning while Cixi was on her deathbed; someone poisoned him, and Cixi or someone acting under her orders is the most likely culprit.)
The weak Qing Dynasty starts dealing with ever-more-frequent invasions, diplomatic extortions, rebellions, and revolutions, and entirely falls apart; the Japanese invade China a generation after Hirobumi was rebuffed. After the Japanese unconditional surrender, the Chinese Civil War starts; the Communists win.
From this, we can start drawing lessons and tracing histories, seeing patterns. We start to see how things could have broken differently. Perhaps Germany and France were doomed to constant warfare due to geopolitics; maybe this is true.
But certainly, it's not at all obvious that Mustafa Kemal would lead the ruins of the Ottoman Empire into modern Turkey, and (seemingly against overwhelming odds) keep neutrality during World War II, rebuff Stalin and stay removed from Soviet conquest, and maintain a country with secular and modern laws that honors Muslim culture without giving way to warlordism as happened to much of the rest of the Middle East.
Likewise, we can clearly see how the policies of Empress Dowager Cixi ended the chance for a pan-East-Asian alliance, trade bloc, or federation; it's not inconceivable to imagine a world today where China and Japan are incredibly close allies, and much of the world's centers of commerce, finance, and power are consolidated in a Tokyo-Beijing-Seoul alliance. Sure, it seems inconceivable with hindsight, but Japan in 1910 and Japan in 1930 are very different countries; and the struggling late Qing Dynasty is different than the fledgling competing factions in China after the fall of the Qing.
We can see, observing historical events in broad strokes, the huge differences individuals can make at leveraged points; the eventual outcomes in Turkey and East Asia were not at all foreordained by geography, demographics, or trends.
Originally, I was sketching out some of these trends of history to make a larger point about how modern minds have a hard time understanding older governments -- in a world where "personal rule" is entirely rebuffed in the more developed countries, it is hard to imagine how the Qing Dynasty or Ottoman Empire actually functioned. The world after the Treaty of Westphalia is incredibly different than the world before it, and the world before strict border controls pre-WWI is largely unrecognizable to us.
That was the piece I was going to write, about how we project modern institutions and understandings backwards, and how that means we can't understand what actually happened. The Ottomans and Qing were founded before modern nationalism had emerged, and the way their subjects related to them is so alien to us that it's almost impossible to conceive of how their culture and governance actually ran.
I might still pen that piece, if there's interest in it -- but my attempt at a brief introduction turned into this very different piece, focused on a different particular point: the threshold effect in learning history.
I would say there are broadly three thresholds:
The first looks at a series of isolated events. You wind up with some witty quips, like: Astor saying, "Sir, if you were my husband, I would poison your drink." Churchill: "If I were married to you, I'd drink it."
Or moments of great drama: "And so the die is cast." "Don't fire until you see the whites of their eyes." "There is nothing to fear except fear itself."
These aren't so bad to learn; they're an okay jumping-off place. Certainly, Caesar's decision to march on Rome, Nobunaga's speech before the Battle of Okehazama, or understanding why Washington made the desperate gamble to cross the Delaware all offer lessons.
But seeing how the Marian military reforms, Sulla's purges, and the Gracchi brothers created the immediate situation before Julius Caesar's fateful crossing is more interesting, and tracing the lines backwards, seeing how Rome's generations-long combat with Hannibal's Carthage turned the city-state into a fully militarized conquest machine, and then following the lines onwards to see how the Romans relied on unit cohesion which, once learned by German adversaries, led to the fall of Rome -- this is much more interesting.
That's the second threshold of history to me: when isolated events start becoming regional chains; that's tracing Napoleon's invasion of Germany to Bismarck to World War I to the Treaty of Versailles to WWII.
Some people get to this level of history, and it quickly makes you an expert on a particular country.
But I think that's a poor place to stop learning: if you can truly get your mind around a long stretch of time in a nation, it's time to start coloring the map. When you can broadly know how Korea is developing simultaneously with Japan; how the Portuguese/Spanish rivalry and Vatican compromises are affecting Asia's interactions with the Age of Sail Westerners; how Protestantism is creating rivals to Catholic power, two of which later equip Japan's Imperial Faction, which kicks off the Asian side of World War II -- this is when history starts really paying dividends and teaching worthwhile lessons.
The more you get into it, the more there is to learn. Regions that don't get much historical interest from Americans, like Tito's Yugoslavia, become fascinating when you look at how Yugoslavia stayed out of Soviet control and played the Western and Eastern blocs against each other; the chain of events takes a sad turn when Tito's successors can't keep the country together, the Yugoslav Wars follow, and its successor states still don't have the levels of relative prosperity and influence that Yugoslavia had in its heyday.
Yugoslavia is hard to get one's mind around by itself, but it's easy to color the map in with a decent understanding of Turkey, Germany, and Russia. Suddenly, figures and policies and conflicts and economics and culture start coming alive; lessons and patterns are everywhere.
I don't read much fiction any more, because most fiction can't compete with the sheer weight, drama, and insightfulness of history. Apparently some Kuomintang soldiers held out against the Chinese Communists and fought irregular warfare while funding their operations with heroin production in the border regions of Burma and Thailand -- I just got a book on it, further coloring in the map of the aftermath of the Chinese Civil War, and how that played out against the backdrop of the Cold War and containment, and how the Sino/Soviet split led to America normalizing relations with China, and...
...it never ends, and it's been one of the most insightful areas of study across my life.
History in that first threshold -- isolated battles, quotes, the occasional drama -- frankly, it offers only a slight glimmer of what's possible to learn.
Likewise, the second level -- knowing a particular country's rise and fall over time -- can be insightful, but I would encourage anyone who has delved into history that much not to stop there: you're not far from unlocking the gates to vast wellsprings of knowledge, a nearly infinite source of ideas, inspiration, case studies, and very practical guidance.
“Your instinct is to talk your way out of the situation, but that is an instinct born of prior interactions with reasonable people of good faith, and inapplicable to this interaction…” – Ken White
One of the Less Wrong Study Hall denizens has been having a bit of an issue recently. He became an atheist some time ago. His family was in denial about it for a while, but in recent days they have 1. stopped with the denial bit, and 2. been less than understanding about it. In the course of discussing the issue during break, this line jumped out at me:
“I can defend my views fine enough, just not to my parents.”
And I thought: Well, of course you can’t, because they’re not interested in your views. At all.
I never had to deal with the Religion Argument with my parents, but I did spend my fair share of time failing to argumentatively defend myself. I think I have some useful things to say to those younger and less the-hell-out-of-the-house than me.
A clever arguer is someone who has already decided on their conclusion and is making the best case they possibly can for it. A clever arguer is not necessarily interested in what you currently believe; they are arguing for proposition A and against proposition B. But there is a specific sort of clever arguer, one that I have difficulty defining explicitly but can characterize fairly easily. I call it, as of today, the Hostile Arguer.
It looks something like this:
When your theist parents ask you, “What? Why would you believe that?! We should talk about this,” they do not actually want to know why you believe anything, despite the form of the question. There is no genuine curiosity there. They are instead looking for ammunition. Which, if they are cleverer arguers than you, you are likely to provide. Unless you are epistemically perfect, you believe things that you cannot come up with an explicit defense for on demand. Even important things.
In accepting that the onus is solely on you to defend your position – which is what you are implicitly doing, in engaging the question – you are putting yourself at a disadvantage. That is the real point of the question: to bait you into an argument that your interlocutor knows you will lose, whereupon they will expect you to acknowledge defeat and toe the line they define.
Someone in the chat compared this to politics, which makes sense, but I don’t think it’s the best comparison. Politicians usually meet each other as equals. So do debate teams. This is more like a cop asking a suspect where they were on the night of X, or an employer asking a job candidate how much they made at their last job. Answering can hurt you, but can never help you. The question is inherently a trap.
The central characteristic of a hostile arguer is the insincere question. “Why do you believe there is/isn’t a God?” may be genuine curiosity from an impartial friend, or righteous fury from a zealous authority, even though the words themselves are the same. What separates them is the response to answers. The curious friend updates their model of you with your answers; the Hostile Arguer instead updates their battle plan.
So, what do you do about it?
Advice often fails to generalize, so take this with a grain of salt. It seems to me that argument in this sense has at least some of the characteristics of the Prisoner’s Dilemma. Cooperation represents the pursuit of mutual understanding; defection represents the pursuit of victory in debate. Once you are aware that they are defecting, cooperating in return is highly non-optimal. On the other hand, mutual defection – a flamewar online, perhaps, or a big fight in real life in which neither party learns much of anything except how to be pissed off – kind of sucks, too. Especially if you have reason to care, on a personal level, about your opponent. If they’re family, you probably do.
It seems to me that getting out of the game is the way to go, if you can do it.
Never try to defend a proposition against a hostile arguer. They do not care. Your best arguments will fall on deaf ears. Your worst will be picked apart by people who are much better at this than you. Your insecurities will be exploited. If they have direct power over you, it will be abused.
This is especially true for parents, where obstinate disagreement can be viewed as disrespect, and where their power over you is close to absolute. I’m sort of of the opinion that all parents should be considered epistemically hostile until one moves out, as a practical application of the SNAFU Principle. If you find yourself wanting to acknowledge defeat in order to avoid imminent punishment, this is what is going on.
If you have some disagreement important enough for this advice to be relevant, you probably genuinely care about what you believe, and you probably genuinely want to be understood. On some level, you want the other party to “see things your way.” So my second piece of advice is this: Accept that they won’t, and especially accept that it will not happen as a result of anything you say in an argument. If you must explain yourself, write a blog or something and point them to it a few years later. If it’s a religious argument, maybe write the Atheist Sequences. Or the Theist Sequences, if that’s your bent. But don’t let them make you defend yourself on the spot.
The previous point, incidentally, was my personal failure through most of my teenage years (although my difficulties stemmed from school, not religion). I really want to be understood, and by default I approach discussion as a search for mutual understanding rather than an attempt at persuasion. I expect most here do the same, which is one reason I feel so at home here. The failure mode I’m warning against is adopting this approach with people who will not respect it and will, in fact, punish your use of it.
It takes two to have an argument, so don’t be the second party, ever, and they will eventually get tired of talking to a wall. You are not morally obliged to justify yourself to people who have pre-judged your justifications. You are not morally obliged to convince the unconvinceable. Silence is always an option. “No comment” also works well, if repeated enough times.
There is the possibility that the other party is able and willing to punish you for refusing to engage. Aside from promoting them from “treat as Hostile Arguer” to “treat as hostile, period”, I’m not sure what to do about this. Someone in the Hall suggested supplying random, irrelevant justifications, as requiring minimal cognitive load while still subverting the argument. I’m not certain how well that will work. It sounds plausible, but I suspect that if someone is running the algorithm “punish all responses that are not ‘yes, I agree and I am sorry and I will do or believe as you say’”, then you’re probably screwed (and should get out sooner rather than later if at all possible).
None of the above advice implies that you are right and they are wrong. You may still be incorrect on whatever factual matter the argument is about. The point I’m trying to make is that, in arguments of this form, the argument is not really about correctness. So if you care about correctness, don’t have the argument.
Above all, remember this: Tapping out is not just for Less Wrong.
(thanks to all LWSH people who offered suggestions on this post)
After reading the comments and thinking some more about this, I think I need to revise my position a bit. I’m really talking about three different characteristics here:
- People who have already made up their mind.
- People who are personally invested in making you believe as they do.
- People who have power over you.
For all three together, I think my advice still holds. MrMind puts it very concisely in the comments. In the absence of 3, though, JoshuaZ notes some good reasons one might argue anyway; to which I think one ought to add everything mentioned under the Fifth Virtue of Argument.
But one thing that ought not to be added to it is the hope of convincing the other party – either of your position, or of the proposition that you are not stupid or insane for holding it. These are cases where you are personally invested in what they believe, and all I can really say is “don’t do that; it will hurt.” Even if you are correct, you will fail for the reasons given above and more besides. It’s very much a case of Just Lose Hope Already.
I’m using religious authorities harshing on atheists as the example here because that was the immediate cause of this post, but atheists take caution: If you’re asking someone “why do you believe in God?” with the primary intent of cutting their answer down, you’re guilty of this, too. ↩
Someone commenting on a draft of this post asked how to tell when you’re dealing with a Hostile Arguer. This is the sort of micro-social question that I’m not very good at and probably shouldn’t opine on. Suggestions requested in the comments. ↩
It occurs to me that the Gay Talk might have a lot in common with this as well. For those who’ve been on the wrong side of that: Did that also feel like a mismatched battle, with you trying to be understood, and them trying to break you down? ↩