Conspiracy Theories as Agency Fictions
Related to: Consider Conspiracies, What causes people to believe in conspiracy theories?
Here I consider in some detail a failure mode that classical rationality often recognizes. Unfortunately nearly all heuristics normally used to detect it seem remarkably vulnerable to misfiring or being exploited by others. I advocate an approach where we try our best to account for the key bias, seeing agency where there is none, while trying to minimize the risk of being tricked into dismissing claims because of boo lights.
What does calling something a "conspiracy theory" tell us?
What is a conspiracy theory? Explanations that invoke plots orchestrated by covert groups are easily labelled or thought of as such. In a more legal sense, a conspiracy is an agreement between persons to mislead or defraud others. This simple story gets complicated because people aren't very clear on what they consider a conspiracy.
To give an example, is explicit negotiation or agreement really necessary to call something a conspiracy? Does silent cooperation on Prisoner's Dilemma count? What if the players are deceiving themselves that they are really following a different goal and the resulting cooperation is just a side effect? How could we tell the difference and would it matter? The latter is especially interesting if one applies the anthropic principle to social attitudes and norms.
The phrase is also a convenient tool for marking an opponent's tale as low status and unworthy of further investigation: a boo light easily applied to anything that has people acting in something that can be framed as self-interest and happens to be a few inferential jumps away from the audience. Not only is its use in this way well known, this is arguably the primary meaning of calling an argument a conspiracy theory.
We have plenty of historical examples of high-stakes conspiracies, so we know they can be the right answer. Noting this, and putting aside the misuse of the label, people do engage in crafting conspiracy theories when they just aren't needed. Entire communities can fixate on them or fail to call such bad thinking out. Why does this happen? Humans being the social animals that we are, the group dynamics at work probably need an article or sequence of their own. It should suffice for now to point to belief as attire, the bandwagon effect and Robin Hanson's take on status. Let's rather consider the question of why individuals may be biased towards such explanations. Why do they privilege the hypothesis?
When do they seem more likely than they are?
First off, we have a hard time understanding that coordination is hard. Seeing a large payoff available and thinking it easily within reach if "we could just get along" seems like a classic failing. Our pro-social sentiments lead us to downplay such barriers in our future plans. Motivated cognition in assessing the threat potential of perceived enemies or strangers likely shares this problem. Even if we avoid this, we may still be lost, since the second big relevant thing is our tendency to anthropomorphize things that had better not be anthropomorphized. Ours is a paranoid brain, seeing agency in every shadow or strange sound. The cost of a false positive was once reasonably low, while the cost of a false negative was very high.
Our minds are also just plain lazy. We are pretty good at modelling other human minds, and considering just how hard the task really is, we do a remarkable job of it. If you are stuck in relative ignorance on a subject, say the weather, dancing to appease the sky spirits makes sense. After all, the weather is pretty capricious, and angry sky spirits is a model that makes as much or more sense than any other model you know. Unlike some other models, this one is at least cheap to run on your brain! The modern world is remarkably complex. Do we see ghosts in it?
Our Dunbarian minds probably just plain can't get how a society can be that complex and unpredictable without it being "planned" by a cabal of Satan or Heterosexual White Males or the Illuminati (but I repeat myself twice) scheming to make weird things happen in our oblivious small stone age tribe. Learning about useful models helps people escape anthropomorphizing human society or the economy or government. The latter is particularly salient. I think most people slip up occasionally in assuming that say something like the United States government can be successfully modelled as a single agent to explain most of its "actions". To make matters worse it is a common literary device used by pundits.
A mysterious malignant agency, or someone keeping a secret, playing the role of the villain makes a good story. Humans love stories. It's fun to think in stories. Any real conspiracy revealed will probably be widely publicized. Peter Knight, in his 2003 book, cites historians who have put forward the idea that the United States is something of a home for popular conspiracy theories because so many high-level ones have been undertaken and uncovered since the 1960s. We are more likely to hear about real confirmed conspiracies today than ever before.
Wishful thinking also plays a role. A universe where bad things happen because bad people make them so is appealing. Getting rid of bad people, even very bad people, is easy compared to all the different things one has to do to make sure bad things don't happen in a universe that doesn't care about us and where really bad things are allowed to happen. Finding bad people whether or not they exist is a problematic tendency. The sad thing is that this may also be how we often manage to coordinate. Do all theories of legitimacy also perhaps rest on the same cognitive failings that conspiracy theories do? The difference between a shadowy cabal we need to get rid of and an institution worthy of respect may be just some bad luck.
How this misleads us
Putting aside such wild speculation, what should we take away from this? When do conspiracy theories seem more likely than they are?
- The phenomenon is unpredictable or can't be modelled very well
- Models used by others are hard to understand or are very counter-intuitive
- Thinking about the subject significantly strains cognitive resources
- The theory explains why bad things happen or why something went wrong
- The theory requires coordination
When you see these features you probably find the theory more plausible than it is.
But how many here are likely to accept "conspiracy theories"? To do so with claims that actually get called conspiracy theories doesn't fit our tribal attire. Reverse stupidity may be particularly problematic for us on this topic. Being open to thinking conspiracy is recommended; just remember to compare how probable it is relative to other explanations. It is important to call out people who misuse the tag for rhetorical gain.
This applies to debunking as well. Don't go wildly contrarian. But remember that even things that are tagged conspiracy theories are surprisingly popular. How popular might false theories that avoid that tag be? History shows us we don't have the luxury of hoping that kind of thing just doesn't happen in human societies. When assessing an explanation sharing the key features that make conspiracy theories seem more plausible than they are, compensate as you would with a conspiracy theory.
But don't listen to me, I'm talking conspiracy theories.
Note: This article started out as a public draft, feedback to other such drafts is always welcomed. Special thanks to user Villiam_Bur for his commentary and user copt for proofreading and suggestions. Also thanks to the LessWrong IRC chatroom for last minute corrections and stylistic tips.
[Link] SMBC on choosing your simulations carefully
I'm increasingly impressed by the power of Zach Weiner's comic to demonstrate in a few images why hard problems are hard. It would be a vast task, but perhaps it would be useful to create an index of such problem-demonstrating comics to add to the Wiki, giving us something to point newbies at which would be less intimidating than formal Sequence postings. I get the impression that a common hurdle is just to get people to accept that problems of AI (and simulation, ethics, what have you) are actually difficult.
Focus on rationality
(This is my view in the recent debate about posts giving a "rational" discussion of some random topic. It was originally at comment level but I've extended it and posted it in discussion because I want to know if and where people disagree with me, and for what reasons.)
I come to Less Wrong to learn about how to think and how to act effectively. I care about general algorithms that are useful for many problems, like "Hold off on proposing solutions" or "Habits are ingrained faster when you pay conscious attention to your thoughts when you perform the action". These posts have very high value to me because they improve my effectiveness across a wide range of areas.
Another such technique is "Dissolving the question". Yvain's "Diseased thinking: dissolving questions about disease" is valuable as an exemplary performance of this technique. It adds to Eliezer's description of question-dissolving by giving a demonstration of its use on a real question. Its main value comes from this; anything I learnt about disease whilst reading it is just a bonus.
To quote badger in the recent thread "Rational Toothpaste: A Case Study"
I claim a post on "rational toothpaste buying" could be on-topic and useful, if correctly written to illustrate determining goals, assessing tradeoffs, and implementing the final conclusions. A post detailing the pros and cons of various toothpaste brands is for a dentistry or personal hygiene forum; a post about algorithms for how to determine the best brands or whether to do so at all is for a rationality forum.
But we don't need more than one or two such examples! Yvain's post about question-dissolving was the only such post I ever need to read.
Posts about toothpaste, house-buying, room-decoration, fashion, shaving or computer hardware only tell me about that particular thing. As good as many of them are they'll never be as useful as a post that teaches me a general method of thought applicable on many problems. And if I want to know about some particular topic I'll just look it up on Google, or go to a library.
It's not possible for LessWrong to give a rational treatment of every subject. There are just too many of them. Even if we did I wouldn't be able to carry all that info around in my head. That's why I need to learn general algorithms for producing rational decisions.
Even though badger makes it clear in the quote I gave that the post is supposed to be about the algorithms used, in the rest of the post almost all the discussion is on the object level (although the conclusion is good). That is, even though badger talks about which methods he's using and why, the focus is still on "What can these methods teach us about toothpaste?" and not "What can optimising toothpaste teach us about our methods?". I'd prefer it if posts tried to answer questions more like the latter. The comments exhibit the same phenomenon. Only one of the comments (kilobug's) is talking about the methods used. Most of the rest are actually talking about toothpaste.
So what I'm suggesting is that LessWrong posts (don't forget there's a whole internet to post things on) should focus on rationality. They can talk about other things too, but the question should always be "What can X teach us about rationality?" and not "What can rationality teach us about X?"
Suggestion: Less Wrong Writing Circle?
This community has a recurring interest in "rationalist fiction," and several members who are writers. I wonder if it would be useful to create a space where Less Wrong members could provide each other constructive criticism and encouragement on in-progress original writing projects?
Disclosure: I'm working on a sci-fi novel right now, and my regular circle of "beta readers" are fantasy fans and aren't providing much feedback on the new project. I am much, much more productive as a writer when I get steady feedback, so I have a personal interest in looking for something like this. Less Wrong came to mind as a community of intelligent, creative, forward-looking types who are likely to enjoy sci-fi.
Son of Shit Rationalists Say
A long time ago, in the colder seasons, I asked for suggestions for a Shit Rationalists Say video. Due to other concerns it took me this long to put it together, and the meme has long since passed. However, here it is.
It is my first time in front of a camera, so I'm shaky. But I learned, and there it is.
Is this rule of thumb useful for gauging low probabilities?
Does something like this seem to you to be a reasonable rule of thumb, for helping handle scope insensitivity to low probabilities?
There's roughly a 30-to-35-in-a-million chance that you will die on any given day; so if I'm dealing with a probability of one in a million, then I 'should' spend about one thirtieth as much time on it as I spend preparing for my imminent death within the next 24 hours. If it's not worth spending 30 seconds preparing for dying within the next day, then I should spend less than one second dealing with that one-in-a-million shot.
Relatedly, can you think of a way to improve it, such as to make it more memorable? Are there any pre-existing references - not just to micromorts, but to comparing them to other probabilities - which I've missed?
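The rule of thumb above can be sketched as a tiny calculation. This is only an illustration of the proposed heuristic; the 30-per-million daily death figure and the "scale attention by probability ratio" rule are taken from the post, and the function name is my own invention.

```python
# Rule-of-thumb sketch: weigh a low-probability event against the
# everyday baseline risk of dying within the next 24 hours.
DAILY_DEATH_RISK = 30e-6  # ~30 in a million, per the post

def relative_attention(p_event, baseline=DAILY_DEATH_RISK):
    """Attention the event 'deserves', as a fraction of the attention
    you would give to preparing for death within the next day."""
    return p_event / baseline

# A one-in-a-million shot deserves ~1/30 of that attention,
# i.e. 30 seconds of death-prep scales to about 1 second here.
ratio = relative_attention(1e-6)
print(round(30 * ratio, 2))  # seconds, if death-prep gets 30 seconds
```

So if 30 seconds of preparation for imminent death isn't worthwhile, the one-in-a-million shot merits about one second at most, matching the post's conclusion.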
Only say 'rational' when you can't eliminate the word
Almost all instances of the word "true" can be eliminated from the sentences in which they appear by applying Tarski's formula. For example, if you say, "I believe the sky is blue, and that's true!" then this can be rephrased as the statement, "I believe the sky is blue, and the sky is blue." For every "The sentence 'X' is true" you can just say X and convey the same information about what you believe - just talk about the territory the map allegedly corresponds to, instead of talking about the map.
When can't you eliminate the word "true"? When you're generalizing over map-territory correspondences, e.g., "True theories are more likely to make correct experimental predictions." There's no way to take the word 'true' out of that sentence because it's talking about a feature of map-territory correspondences in general.
Similarly, you can eliminate the word 'rational' from almost any sentence in which it appears. "It's rational to believe the sky is blue", "It's true that the sky is blue", and "The sky is blue", all convey exactly the same information about what color you think the sky is - no more, no less.
When can't you eliminate the word "rational" from a sentence?
When you're generalizing over cognitive algorithms for producing map-territory correspondences (epistemic rationality) or steering the future where you want it to go (instrumental rationality). So while you can eliminate the word 'rational' from "It's rational to believe the sky is blue", you can't eliminate the concept 'rational' from the sentence "It's epistemically rational to increase belief in hypotheses that make successful experimental predictions." You can Taboo the word, of course, but then the sentence just becomes, "To increase map-territory correspondences, follow the cognitive algorithm of increasing belief in hypotheses that make successful experimental predictions." You can eliminate the word, but you can't eliminate the concept without changing the meaning of the sentence, because the primary subject of discussion is, in fact, general cognitive algorithms with the property of producing map-territory correspondences.
The word 'rational' should never be used on any occasion except when it is necessary, i.e., when we are discussing cognitive algorithms as algorithms.
If you want to talk about how to buy a great car by applying rationality, but you're primarily talking about the car rather than considering the question of which cognitive algorithms are best, then title your post Optimal Car-Buying, not Rational Car-Buying.
Thank you for observing all safety precautions.
A Premature Word on AI
Followup to: A.I. Old-Timers, Do Scientists Already Know This Stuff?
In response to Robin Hanson's post on the disillusionment of old-time AI researchers such as Roger Schank, I thought I'd post a few premature words on AI, even though I'm not really ready to do so:
Anyway:
I never expected AI to be easy. I went into the AI field because I thought it was world-crackingly important, and I was willing to work on it if it took the rest of my whole life, even though it looked incredibly difficult.
I've noticed that folks who actively work on Artificial General Intelligence seem to have started out thinking the problem was much easier than it first appeared to me.
In retrospect, if I had not thought that the AGI problem was worth a hundred and fifty thousand human lives per day - that's what I thought in the beginning - then I would not have challenged it; I would have run away and hid like a scared rabbit. Everything I now know about how to not panic in the face of difficult problems, I learned from tackling AGI, and later, the superproblem of Friendly AI, because running away wasn't an option.
Try telling one of these AGI folks about Friendly AI, and they reel back, surprised, and immediately say, "But that would be too difficult!" In short, they have the same run-away reflex as anyone else, but AGI has not activated it. (FAI does.)
Roger Schank is not necessarily in this class, please note. Most of the people currently wandering around in the AGI Dungeon are those too blind to see the warning signs, the skulls on spikes, the flaming pits. But e.g. John McCarthy is a warrior of a different sort; he ventured into the AI Dungeon before it was known to be difficult. I find that in terms of raw formidability, the warriors who first stumbled across the Dungeon, impress me rather more than most of the modern explorers - the first explorers were not self-selected for folly. But alas, their weapons tend to be extremely obsolete.
When is Winning not Winning?
Lately I'd gotten jaded enough that I simply accepted that different rules apply to the elite class. As Hanson would say, most rules are there specifically to curtail those who don't have the ability to avoid them and to be side-stepped by those who do - it's why we evolved such big, manipulative brains. So when this video recently made the rounds it shocked me to realize how far my values had drifted over the past several years.
(The video is not about politics; it is about status. My politics are far from those of Penn.)
http://www.youtube.com/watch?v=wWWOJGYZYpk&feature=sharek
It's good we have people like Penn around to remind us what it was like to be teenagers and still expect the world to be fair, so our brains can be used for more productive things.
By the measure our society currently uses, Obama was winning. Penn was not. Yet Penn’s approach is the winning strategy for society. Brain power is wasted on status games and social manipulation when it could be used for actually making things better. The machinations of the elite class are a huge drain of resources that could be better used in almost any other pursuit. And yet the elites are admired high-status individuals who are viewed as “winning” at life. They sit atop huge piles of utility. Idealists like Penn are regarded as immature for insisting on things as low-status as “the rules should be fair and apply identically to everyone, from the inner-city crack-dealer to the Harvard post-grad.”
The “Rationalists Should Win” meme is a good one, but it risks corrupting our goals. If we focus too much on “Rationalists Should Win” we risk going for near-term gains that benefit us. Status, wealth, power, sex. Basically hedonism – things that feel good because we’ve evolved to feel good when we get them. Thus we feel we are winning, and we’re even told we are winning by our peers and by society. But these things aren’t of any use to society. A society of such “rationalists” would make only feeble and halting progress toward grasping the dream of defeating death and colonizing the stars.
It is important to not let one’s concept of “winning” be corrupted by Azathoth.
ADDED 5/23:
It seems the majority of comments on this post are people who disagree on the basis of rationality being a tool for achieving ends, but not for telling you what ends are worth achieving.
I disagree. As is written "The Choice between Good and Bad is not a matter of saying 'Good!' It is about deciding which is which." And rationality can help to decide which is which. In fact without rationality you are much more likely to be partially or fully mistaken when you decide.
I Stand by the Sequences
Edit, May 21, 2012: Read this comment by Yvain.
Forming your own opinion is no more necessary than building your own furniture.
There's been a lot of talk here lately about how we need better contrarians. I don't agree. I think the Sequences got everything right and I agree with them completely. (This of course makes me a deranged, non-thinking, Eliezer-worshiping fanatic for whom the singularity is a substitute religion. Now that I have admitted this, you don't have to point it out a dozen times in the comments.) Even the controversial things, like:
- I think the many-worlds interpretation of quantum mechanics is the closest to correct and you're dreaming if you think the true answer will have no splitting (or I simply do not know enough physics to know why Eliezer is wrong, which I think is pretty unlikely but not totally discountable).
- I think cryonics is a swell idea and an obvious thing to sign up for if you value staying alive and have enough money and can tolerate the social costs.
- I think mainstream science is too slow and we mere mortals can do better with Bayes.
- I am a utilitarian consequentialist and think that if you allow someone to die through inaction, you're just as culpable as a murderer.
- I completely accept the conclusion that it is worse to put dust specks in 3^^^3 people's eyes than to torture one person for fifty years. I came up with it independently, so maybe it doesn't count; whatever.
- I tentatively accept Eliezer's metaethics, considering how unlikely it is that there will be a better one (maybe morality is in the gluons?)
- "People are crazy, the world is mad," is sufficient for explaining most human failure, even to curious people, so long as they know the heuristics and biases literature.
- Edit, May 27, 2012: You know what? I forgot one: Gödel, Escher, Bach is the best.
There are two tiny notes of discord on which I disagree with Eliezer Yudkowsky. One is that I'm not so sure as he is that a rationalist is only made when a person breaks with the world and starts seeing everybody else as crazy, and two is that I don't share his objection to creating conscious entities in the form of an FAI or within an FAI. I could explain, but no one ever discusses these things, and they don't affect any important conclusions. I also think the sequences are badly-organized and you should just read them chronologically instead of trying to lump them into categories and sub-categories, but I digress.
Furthermore, I agree with every essay I've ever read by Yvain, I use "believe whatever gwern believes" as a heuristic/algorithm for generating true beliefs, and don't disagree with anything I've ever seen written by Vladimir Nesov, Kaj Sotala, Luke Muehlhauser, komponisto, or even Wei Dai; policy debates should not appear one-sided, so it's good that they don't.
I write this because I'm feeling more and more lonely, in this regard. If you also stand by the sequences, feel free to say that. If you don't, feel free to say that too, but please don't substantiate it. I don't want this thread to be a low-level rehash of tired debates, though it will surely have some of that in spite of my sincerest wishes.
Holden Karnofsky said:
I believe I have read the vast majority of the Sequences, including the AI-foom debate, and that this content - while interesting and enjoyable - does not have much relevance for the arguments I've made.
I can't understand this. How could the sequences not be relevant? Half of them were created when Eliezer was thinking about AI problems.
So I say this, hoping others will as well:
I stand by the sequences.
And with that, I tap out. I have found the answer, so I am leaving the conversation.
Even though I am not important here, I don't want you to interpret my silence from now on as indicating compliance.
After some degree of thought and nearly 200 comment replies on this article, I regret writing it. I was insufficiently careful, didn't think enough about how it might alter the social dynamics here, and didn't spend enough time clarifying, especially regarding the third bullet point. I also dearly hope that I have not entrenched anyone's positions, turning them into allied soldiers to be defended, especially not my own. I'm sorry.