If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
At Reason Rally a couple of months ago, we noticed that a lot of atheists there seemed to be there for mutual support - because their own communities rejected atheists, because they felt outnumbered and threatened by their peers, and the rally was a way for them to feel part of an in-group.
There seem to be differing concentrations of people who have had this sort of experience on LessWrong. Some of us felt ostracized by our local communities while growing up, others have felt pretty much free to express atheist or utilitarian views for their whole lives. Does anyone else think this would be worth doing a poll on / have experiences they want to share?
I get a ridiculous amount of benefit by abusing store return deadlines. I've tested and returned an iPhone, $400 Cole Haan bag, multiple coats, jeans, software, video games, and much more. It's surprising how long many return periods are, and it's a fantastic way to try new stuff and make sure you like it.
Because there is an unspoken understanding, that michaelcurzi is clearly aware of, that a no-questions-asked returns policy is intended for cases where the buyer found the item unsuitable in some way, rather than to provide free temporary use of their stuff.
Re-reading my own post on the 10,000 year explosion, a thought struck me. There's evidence that the humans populations in various regions have adapted to their local environment and diet, with e.g. lactose tolerance being more common in people of central and northern European descent. At the same time, there are studies that try to look at the diet, living habits etc. of various exceptionally long-lived populations, and occasionally people suggest that we should try to mimic the diet of such populations in order to be healthier (e.g. the Okinawa diet).
That made me wonder. How generalizable can we consider any findings from such studies? What odds should one assign to the hypothesis that any health benefits such long-lived populations get from their diet are mostly due to local adaptation for that diet, and would not benefit people with different ancestry?
I feel as though this cobbled-together essay from '03 has a lot of untapped potential.
An economics question:
Which economic school of thought most resembles "the standard picture" of cogsci rationality? In other words, which economists understand probability theory, heuristics & biases, reductionism, evolutionary psychology, etc. and properly incorporate it into their work? If these economists aren't of the neo-classical school, how closely does neo-classical economics resemble the standard picture, if at all?
Unnecessary Background Information:
Feel free to not read this. It's just an explanation of why I'm asking these questions.
I'm somewhat at a loss when it comes to economics. When I was younger (maybe 15 or so?) I began reading Austrian economics. The works of Murray Rothbard, Ludwig von Mises, etc., served as my first rigorous introduction to economics. I self-identified as an Austrian for several years, up until a few months ago.
For the past year, I have learned a lot about cogsci rationality through the LW sequences and related works. I think I have a decent grasp of what cogsci rationality is, why it is correct, and how it conflicts with the method of the Austrian school. (For those who aren't aware, Austrians use an a priori method and claim absolu...
Econ grad student here (and someone else converted away from Austrian econ in part from Caplan's article + debate with Block). Most of economics just chugs right along with the standard rationality (instrumental rationality, not epistemic) assumptions. Not because economists actually believe humans are rational - well some do, but I digress - but largely because we can actually get answers to real world problems out of the rationality assumptions, and sometimes (though not always) these answers correspond to reality. In short, rationality is a model and economists treat it as such - it's false, but it's an often useful approximation of reality. The same goes for always assuming we're in equilibrium. The trick is finding when and where the approximation isn't good enough and what your criterion for "good enough" is.
Now, this doesn't mean mainstream economists aren't interested in cogsci rationality. An entire subfield of economics - behavioral economics - rose up in tandem with the rise of the cogsci approach to studying human decision making. In fact, Kahneman won the Nobel Prize in economics. AFAICT there's a large market for economic research that applies behavioral econ...
Far from being batshit crazy, Mises was an eminently reasonable thinker. It's just that he didn't do a very good job communicating his epistemological insights (which was understandable, given the insanely difficult nature of explaining what he was trying to get at), but did fine with enough of the economic theory, and thus ended up with a couple generations of followers who extended his economics rather well in plenty of ways, but systematically butchered their interpretation of his epistemological insights.
People compartmentalize, they operate under obstructive identity issues, their beliefs in one area don't propagate to all others, much of what they say or write is signaling that's incompatible with epistemic rationality, etc. Many of these are tangled together. Yeah, it's more than possible for people to say batshit insane things and then turn around and make a bunch of useful insights. The epistemological commentary could almost be seen as signaling team affiliation before actually getting to the useful stuff.
Just consider the kind of people who are bound to become Austrian economists. Anti-authority etc. They have no qualms with breaking from the mainstream in any way whatso...
Your proposed synthesis of Mises and Yudkowsky(?) is moderately interesting, although your claims for the power and importance of such a synthesis suggest naivete. You say that "what's going so wrong in society" can be understood given two ingredients, one of which can be obtained by distilling the essence of the Austrian school, the other of which can be found here on LW but you don't say what it is. As usual, the idea that the meaning of life or the solution to the world-problem or even just the explanation of the contemporary world can be found in a simple juxtaposition of ideas will sound naive and unbelievable to anyone with some breadth of life experience (or just a little historical awareness). I give friendly AI an exemption from such a judgement because by definition it's about superhuman AI and the decoding of the human utility function, apocalyptic developments that would be, not just a line drawn in history, but an evolutionary transition; and an evolutionary transition is a change big enough to genuinely transform or replace the "human condition". But just running together a few cool ideas is not a big enough development to do that. The human condition would continue to contain phenomena which are unbearable and yet inevitable, and that in turn guarantees that whatever intellectual and cultural permutations occur, there will always be enough dissatisfaction to cause social dysfunction. Nonetheless, I do urge you to go into more detail regarding what you're talking about and what the two magic insights are.
Sure. He wrote about it a lot. Here is a concise quote:
The concepts of chance and contingency, if properly analyzed, do not refer ultimately to the course of events in the universe. They refer to human knowledge, prevision, and action. They have a praxeological [relating to human knowledge and action], not an ontological connotation.
Also:
...Calling an event contingent is not to deny that it is the necessary outcome of the preceding state of affairs. It means that we mortal men do not know whether or not it will happen. The present epistemological situation in the field of quantum mechanics would be correctly described by the statement: We know the various patterns according to which atoms behave and we know the proportion in which each of these patterns becomes actual. This would describe the state of our knowledge as an instance of class probability: We know all about the behavior of the whole class; about the behavior of the individual members of the class we know only that they are members. A statement is probable if our knowledge concerning its content is deficient. We do not know everything which would be required for a definite decision between true and not true. But, on t
Claiming Ludwig for the Bayesian camp is really strange and wrong. His mathematician brother Richard, from whom he takes his philosophy of probability, is literally the arch-frequentist of the 20th century.
And Ludwig and Richard themselves were arch enemies. Well only sort of, but they certainly didn't agree on everything, and the idea that Ludwig simply took his philosophy of probability from his brother couldn't be further from the truth. Ludwig devoted an entire chapter in his Magnum Opus to uncertainty and probability theory, and I've seen it mentioned many times that this chapter could be seen as his response to his brother's philosophy of probability.
I see what you're saying in your post, but the confusion stems from the fact that Ludwig did in fact believe that frequency probability, logical positivism, etc., were useful epistemologies in the natural sciences, and led to plenty of advancements etc., but that they were strictly incorrect when extended to "the sciences of human action" (economics and others). "Class probability" is what he called the instances where frequency worked, and "case probability" where it didn't.
The most concise quote ...
In many artificial rule systems used in games there often turn out to be severe loopholes that allow an appropriate character to drastically increase their abilities and power. Examples include how in Morrowind you can use a series of intelligence potions to drastically increase your intelligence and make yourself effectively invincible or how in Dungeons and Dragons 3.5 a low level character can using the right tricks ascend to effective godhood in minutes.
So, two questions which sort of pull against each other. First: is this evidence that randomized rule systems complicated enough to be interesting are also likely to allow some sort of drastic increase in effective abilities through loopholes (essentially going FOOM in a general sense)? Second, and in almost the exact opposite direction: such aspects are common in games, and quite a few science fiction and fantasy novels feature a character (generally evil) who tries to do something similar. Less Wrong does have a large cadre of people involved in nerd literature and the like. Is this aspect of such literature and games acting as fictional evidence in our backgrounds, improperly making such scenarios seem likely or plausible?
I found this person's anecdotes and analogies helpful for thinking about self-optimization in more concrete terms than I had been previously.
...A common mental model for performance is what I'll call the "error model." In the error model, a person's performance of a musical piece (or performance on a test) is a perfect performance plus some random error. You can literally think of each note, or each answer, as x + c*epsilon_i, where x is the correct note/answer, and epsilon_i is a random variable, iid Gaussian or something. Better performers have a lower error rate c. Improvement is a matter of lowering your error rate. This, or something like it, is the model that underlies school grades and test scores. Your grade is based on the percent you get correct. Your performance is defined by a single continuous parameter, your accuracy.
But we could also consider the "bug model" of errors. A person taking a test or playing a piece of music is executing a program, a deterministic procedure. If your program has a bug, then you'll get a whole class of problems wrong, consistently. Bugs, unlike error rates, can't be quantified along a single axis as less or more s
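The contrast between the two quoted models can be made concrete in a tiny simulation. The scores, error rates, and "topic" structure below are illustrative assumptions of mine, not numbers from the quoted essay:

```python
import random

random.seed(0)

N_QUESTIONS = 100

def error_model_score(error_rate):
    """Error model: each question is independently wrong with some probability."""
    return sum(1 for _ in range(N_QUESTIONS) if random.random() > error_rate)

def bug_model_score(buggy_topics, questions_per_topic=10):
    """Bug model: a 'bug' deterministically wipes out an entire class of questions."""
    topics = N_QUESTIONS // questions_per_topic
    return sum(questions_per_topic for t in range(topics) if t not in buggy_topics)

# Error model: retaking the test gives a slightly different score each time.
print([error_model_score(0.1) for _ in range(3)])

# Bug model: the same two bugs produce the same score every single time.
print([bug_model_score({2, 7}) for _ in range(3)])  # always [80, 80, 80]
```

The practical difference: under the error model you improve by drilling everything a little; under the bug model you improve by finding and fixing specific bugs, which changes your score in discrete jumps.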
I've just uploaded an updated version of my comment scroller. Download here. This update makes the script work correctly when hidden comments are loaded (e.g. via the "load all comments" link). Thanks to Oscar Cunningham for prompting me to finally fix it!
Note: Upgrading on Chrome is likely to cause a "Downgrading extension error" (I'd made a mistake with the version numbers previously), the fix is to uninstall and then reinstall the new version. (Uninstall via Tools > Extensions)
For others who aren't using it: I wrote a small user...
Yesterday I was lying in bed thinking about the LW community and had a little epiphany about why discussions of gender relations and the traditional and new practices of inter-gender choice and manipulation (or "seduction", more narrowly) around here consistently "fail", as people say - that is, produce genuine disquiet and anger on all sides of the discussion.
The reason is that both opponents and proponents of controversial things in this sphere - be it a technical approach to romantic relations ("PUA") o...
Just posted today: a small rant about hagiographic biographers who switch off their critical thinking in the presence of literary effect and a cool story. A case study in smart people being stupid.
Has anybody actually followed through on Louie's post about optimal employment (i.e. worked a hospitality job in Australia on a work visa)? How did you go about it? Did you just go there without a job lined up like he suggests? That seems risky. And even if you get a job, what if you get fired after a couple of weeks?
I really like the idea, but I'd also like a few more data points.
Argument for Friendly Universe:
Pleasure/pain is one of the simplest control mechanisms, thus it seems probable that it would be discovered by any sufficiently-advanced evolutionary processes anywhere.
Once general intelligence arises as a result of an evolutionary process, it will apply itself to optimizing the (unnecessary) pain away.
Generally, it will succeed. (General intelligence = power of general-purpose optimization.)
Although in a big universe there would exist worlds where unnecessary suffering does not decrease to zero, it would only happen via a lo...
I'm trying to put together an aesthetically pleasing thought experiment / narrative, and am struggling to come up with a way of framing it that won't attract nitpickers.
In a nutshell, the premise is "what similarities and differences are there between real-world human history and culture, and those of a different human history and culture that diverged from ours at some prehistoric point but developed to a similar level of cultural and technological sophistication?"
As such, I need some semi-plausible way for the human population to be physically ...
It's not clear to me why you don't just appeal to Many Worlds, or more generally to alternate histories. These are fairly well-understood concepts among the sort of people who'd be interested in such a thought experiment. Why not simply say "Imagine Carthage had won the Punic Wars" and go from there?
What costs/benefits are there to pursuing a research career in psychology, both from a personal perspective and in terms of societal benefit?
When assessing societal benefit, consider: are you likely to increase the total number of research psychologists, or just increase the size of the pool from which they are drawn? See Just what is ‘making a difference’? - Counterfactuals and career choice on 80000hours.org.
The decision of what career to pursue is one of the largest you will ever take. The value of information here is very high, and I recommend putting a very large amount of work and thought into it - much more than most people do. In particular there is a great deal of valuable stuff on 80000hours.org - it's worth reading it all!
John Derbyshire on Ridding Myself of the Day
...I used to console myself with the thought that at least I’d been reading masses of news and informed opinion, making myself wiser and better equipped to add my own few cents to the pile. This is getting harder and harder to believe. There’s something fleeting, something trivializing about the Internet. I think what I have actually done is wasted five or six perfectly good hours when I could have been working up a book proposal, fixing a side door in the garage, doing bench presses, or…or…reading a novel.
...
I sh
The idea that a stone falls because it is 'going home' brings it no nearer to us than a homing pigeon, but the notion that 'the stone falls because it is obeying a law' makes it like a man, and even a citizen.
--C. S. Lewis
Is it a problem to think of matter/energy as obeying laws which are outside of itself? Is it a problem to think of it as obeying more than one law? Is mathematics a way of getting away from the idea of laws of nature? Is there a way of expressing behaviors as intrinsic to matter/energy in English? Is there anything in the Sequences or elsewhere on the subject?
A discussion on the LessWrong IRC channel about how to provide an incentive for the math-phobic aspiring rationalists on LW to learn the basic mathematics of cool stuff (here is the link to a discussion of that idea) gave us another one.
The Sequences are long
Longer than The Lord of the Rings. There is a reason RationalWiki translates our common phrase "read the sequences" as "f##k you". I have been here for nearly two years and I still haven't read all of them systematically. And even among people who have read them, how much of them will they rec...
......the outstanding feature of any famous and accomplished person, especially a reputed genius, such as Feynman, is never their level of g (or their IQ), but some special talent and some other traits (e.g., zeal, persistence). Outstanding achievement(s) depend on these other qualities besides high intelligence. The special talents, such as mathematical, musical, artistic, literary, or any other of the various “multiple intelligences” that have been mentioned by Howard Gardner and others are more salient in the achievements of geniuses than is their typical
How much is this statistically correct? I agree with the fact that most high-IQ people are not outstanding geniuses, but neither are most non-high-IQ people. This only proves that high IQ alone is not a guarantee for great achievements.
I suspect a statistical error: ignoring the low prior probability that a human has a very high IQ. Let me explain it by analogy -- you have 1000 white boxes and 10 black boxes. The probability that a white box contains a diamond is 1%. The probability that a black box contains a diamond is 10%. Is it better to choose a black box? Well, let's look at the results: there are 10 white boxes with a diamond and only 1 black box with a diamond... so perhaps choosing a black box is not such a great idea; perhaps there is some other mysterious factor that explains why most diamonds end up in the white boxes? No, the important factor is that a random box has only about a 0.01 prior probability of being black, so even the 10:1 per-box ratio is not enough to make the black boxes contain the majority of diamonds.
The higher the IQ, the less people have it, especially for very high values. So even if these people were on average more successful, we would still see more total success achieved by people with not so high IQ.
(Disclaimer: I am not saying that IQ has a monotonic impact on success. I'm just saying that seeing most success achieved by people with not-so-high IQ does not disprove this hypothesis.)
I think the Ship of Theseus problem is good reductionism practice. Anyone else think similarly?
Sure. Relatedly, the Mona Lisa currently hanging in the Louvre isn't the original... that only existed in the early 1500s. All we have now is the 500-year-old descendent of the original Mona Lisa, which is not the same, it is merely a descendent.
Fortunately for art collectors, human biases are such that the 500-year-old descendent is more valuable in most people's minds than the original would be.
You're off by a couple months. (should read "May 1-15," instead of "March 1-15").
Edit: It's fixed now
I won't be wasting any more time on TVTropes. The reason is that I've become so goddamn angry at the recent shocking triumph of hypocrisy, opportunism, idiocy and moral panic that I literally start growling after looking at any page for more than five seconds. Never again will I become "trapped" in that holier-than-thou fascist little place. Every cloud has a silver lining, I guess. Still, I'm kinda sad that this utter madness happened.
(One particular thing I'm mad about is their perverted treatment of Sengoku Rance, an excellent and engaging vid...
I'm worried I'm too much of a jerk. I used to think I had solved this problem, but I've recently encountered (or, more accurately, stopped ignoring) evidence that my tone is usually too hostile, negative, mean, horrible &c.
Could some people go through my comment history, and point out where I could improve? Sometimes think I'm exactly enough of a jerk, but other times I bet I cross the line.
Anonymous feedback can go here. Else reply to this comment or send a private message.
I think I see a problem in Robin Hanson's I'm a Sim, or You're Not. He argues:
...Today, small-scale coarse simulations are far cheaper than large-scale detailed simulations, and so we run far more of the first type than the second. I expect the same to hold for posthuman simulations of humans – most simulation resources will be allocated to simulations far smaller than an entire human history, and so most simulated humans would be found in such smaller simulations.
Furthermore I expect simulations to be quite unequal in who they simulate in great detail –
From the fact that all of Shadowzerg's comments in this thread have at least three upvotes, I can only assume that the karma sockpuppets are out in force.
http://lesswrong.com/lw/c4k/why_is_it_that_i_need_to_create_an_article_to/
I registered the domains maxipok.com and maxipok.org and set them up to redirect to http://www.existential-risk.org/concept.html .
I know there are many programmers on LW, and thought they might appreciate word of the following Kickstarter project. I don't code myself, but from my understanding it's like Scrivener for programmers:
http://www.kickstarter.com/projects/ibdknox/light-table?ref=discover_pop
So mstevens, Konkvistador and I had an IRC discussion which sparked an interesting idea.
The basic idea is to get people to read and understand the sequences. As a reward for doing this, there could either be some sort of "medals" or "badges" for users or a karma reward. The "badges" solution would require that changes are made to the site design, but the karma rewards could work without changes, by getting upvotes from the lw crowd.
To confirm that the person actually understood what is written in the sequences, "instruct...
An interesting read I stumbled upon in gwern's Google+ feed.
I need advice on proof reading. Specifically:
How can I effectively read through 10-20 page reports, searching for spelling, formatting and similar mistakes?
and, more importantly, how can I effectively check calculations and tables done in excel for errors?
What I'm looking for is some kind of method to do those tasks. Currently, I try to check my results, but it is hard for me not just to glaze over the finished work - I'm familiar with it and it is hard for me to read a familiar text/table/calculation thoroughly.
Does anybody know how one can improve in this respect?
I've recently spent a lot of time thinking about and researching the sociology and the history of ethics. Based on this I'm going to make a prediction that may be unsettling for some. At least it was unsettling for me when I made it less abstract, shifted to near mode thinking and imagined actually living in such a world. If my model proves accurate I probably eventually will.
"Between consenting adults" as the core of modern sexual morality and the limit of law is going to prove to be a poor Schelling fence.
An interesting debate has surfaced after a small group of people have claimed to have success inducing hallucinations through autosuggestive techniques.
http://www.tulpa.info/index.xhtml
http://louderthanthunder.net/tulpa/
Maths is great.
Many of us don't know as much maths as we might like.
Khan Academy has many educational videos and exercises for learning maths. Many people might enjoy and benefit from working through them, but suffer from Akrasia that means they won't actually do this without external stimulus.
I propose we have a KA Competition - people compete to do the maths videos and exercises on that site, and post the results in terms of KA badges and karma (they can link to their profiles there so this can be verified).
The community here will vote up impressive achi...
I will not be able to post the May 16-31 open thread until ten hours after midnight EST.
Edit: Circumstances have changed. I will be able to post it on time. (Thus, comment retracted.)
Requesting Help on Applying Instrumental Rationality
I'm faced with a dilemma and need a big dose of instrumental rationality. I'll describe the situation:
I'm entering my first semester of college this fall. I'm aiming to graduate in 3-4 years with a Mathematics B.S. In order for my course progression to go smoothly, I need to take Calculus I Honors this fall and Calc II in the spring. These two courses serve as a prerequisite bottleneck. They prevent me from taking higher level math courses.
My SAT scores have exempted me from all placement tests, includin...
There was a thread a while ago where somebody converted probabilities to numbers via logarithms, so it's easier to work with conditional probabilities. Unfortunately, I didn't bookmark it. Does anybody know which thread I'm talking about?
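I don't have the link either, but the technique being described is presumably log-odds ("decibels of evidence"), under which a Bayesian update becomes simple addition. A minimal sketch, with the 1% prior and 20:1 likelihood ratio chosen purely for illustration:

```python
import math

def to_log_odds(p):
    """Probability -> log10 odds, scaled by 10 ('decibels')."""
    return 10 * math.log10(p / (1 - p))

def from_log_odds(db):
    odds = 10 ** (db / 10)
    return odds / (1 + odds)

prior = 0.01             # illustrative prior probability
likelihood_ratio = 20    # illustrative evidence strength, 20:1

# In log-odds, updating on the evidence is just addition:
posterior_db = to_log_odds(prior) + 10 * math.log10(likelihood_ratio)
posterior = from_log_odds(posterior_db)

# Same answer via Bayes' rule applied directly:
bayes = prior * likelihood_ratio / (prior * likelihood_ratio + (1 - prior))
assert abs(posterior - bayes) < 1e-12
print(round(posterior, 4))  # ~0.1681
```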
A poem about getting out of the box....
...Siren Song
This is the one song everyone
would like to learn: the song
that is irresistible:

the song that forces men
to leap overboard in squadrons
even though they see beached skulls

the song nobody knows
because anyone who had heard it
is dead, and the others can’t remember.

Shall I tell you the secret
and if I do, will you get me
out of this bird suit?

I don’t enjoy it here
squatting on this island
looking picturesque and mythical

with these two feathery maniacs,
I don’t enjoy singing
this trio, fatal and valuable.

I wi
Update on the accountability system started about a month ago: it worked for about three weeks with everyone regularly turning in work; now I'm the only one still doing it. Lessons learnt: it seems that the half-life of a motivational technique is about two weeks, and not breaking the chain matters (I suspect it's no coincidence that I'm the only one still going and also the only one who hasn't had unavoidable missed days from travelling). Alternatively, I'm very good at committing to commitment devices, and they're not.
How can I improve my ability to manipulate mental images?
When I try to visualize a scene in my mind I find that edges of the visualization fade away until I only have a tiny image in the center of my visual field or lose the visualization entirely.
Here are some things I have noticed:
In any decision involving an Omega like entity that can run perfect simulations of you, there wouldn't be a way to tell if you were inside the simulation or in the real universe. Therefore, in situations where the outcome depends on the results of the simulation, you should act as though you are in the simulation. For example, in counterfactual mugging, you should take the lesser amount because if you're in Omega's simulation you guarantee your "real life" counterpart the larger sum.
Of course this only applies if the entity you're dealing with happens to be able to run perfect simulations of reality.
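The counterfactual-mugging comparison above can be sketched as a policy-level expected value calculation. The stakes ($100 vs. $10,000) are the usual illustrative ones from the thought experiment, not numbers from the comment:

```python
# Counterfactual mugging: Omega flips a fair coin.
# Tails: Omega asks you to hand over $100.
# Heads: Omega pays you $10,000 iff its (perfect) simulation of you
#        would have paid up on tails.
# Because the simulation is perfect, choosing a *policy* fixes both branches.

def expected_value(pays_when_asked):
    p_heads = 0.5
    win = 10_000 if pays_when_asked else 0   # heads branch payoff
    loss = -100 if pays_when_asked else 0    # tails branch payoff
    return p_heads * win + (1 - p_heads) * loss

print(expected_value(True))   # policy "pay": 4950.0
print(expected_value(False))  # policy "refuse": 0.0
```

Evaluated before the coin flip, the "pay" policy dominates; the comment's point is that being unable to tell simulation from reality is what makes the policy-level view the right one.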
I've been reading up on working memory training (the general consensus is that training is useless or very nearly so). However, what I find interesting is how strongly working memory is correlated with performance on a wide variety of intelligence tests. While it seems that you can't train working memory, does anyone know what would stand in the way of artificial enhancements to working memory? (If there are no major problems aside from BCIs not yet being at that point, I know what I will be researching over the next few months. If there is something that would prevent this from working, it would be best to know now.)
Why doesn't someone like Jaan Tallinn or Peter Thiel donate a lot more to SIAI? I don't intend this to mean that I think they should or that I know better than them, I just am not sure what their reasoning is. They have both already donated $100k+ each, but they could easily afford much more (well, I know Peter Thiel could. I don't know exactly how much money Jaan Tallinn actually has). I am just imagining myself in their positions, and I can't easily imagine myself considering an organization like SIAI to be worth donating $100k to, but not to be worth do...
I may try emailing Jaan Tallinn to ask him myself, depending on how others react to this post
The Singularity Institute is in regular contact with its largest donors. Please do not bother them.
It occurred to me that I have no idea what people mean by the word "observer". Rather, I don't know if a solid reductionist definition for observation exists. The best I can come up with is "an optimization process that models its environment". This is vague enough to include everything we associate with the word, but it would also include non-conscious systems. Is that okay? I don't really know.
Related to: List of public drafts on LessWrong
Article based on this draft: Conspiracy Theories as Agency Fictions
I was recently thinking about a failure mode that classical rationality often recognizes and even challenges reasonably competently, yet nearly all the heuristics it uses to detect it seem remarkably easy to misuse. More than that, they seem easily hackable to win a debate. How much has the topic been discussed on LW? Wondering about this, I sketched out my thoughts in the following paragraphs.
On conspiracy theories
What does the phrase even mean? Conspiracy theories are generally used to explain events or trends as the results of plots orchestrated by covert groups. Sometimes people use the term for theories that important events are the products of secret plots largely unknown to the general public. Conspiracy in a somewhat more legal sense is also used to describe an agreement between persons to deceive, mislead, or defraud others of their legal rights, or to gain an unfair advantage in some endeavour. And finally, it is a convenient tool to paint something, clearly and in vivid colours, as low status: a boo light applied to any explanation that has people acting in anything that can be described as self-interest and is a few inferential jumps away. One could argue this is the primary meaning of calling an argument a conspiracy theory in on-line debates.
But putting aside the misuse of the label and the associated cached thoughts, people do engage in constructing conspiracy theories when they just aren't needed. Note that we have plenty of historical examples of real conspiracies with pretty high stakes, so we know they can be the right answer. Sometimes entire on-line communities fixate on them, or just don't call such bad thinking out. Why does this happen? Groups are complicated, since we are social monkeys. This is something I don't feel like going into right now, since plenty of fancy phrases like "tribal attire" or "bandwagon effect" would abound, not to mention the obligatory Hansonian status-based explanations, all packed in an even bigger wall of text. Let us then first take a look at why individuals may be biased towards such explanations.
First off, we have a hard time understanding that coordination is hard. Seeing a large pay-off available and thinking it easily within reach if "we could just get along" seems like a classic failing; our pro-social sentiments lead us to downplay such barriers in our future plans. Motivated cognition when assessing the threat potential of perceived enemies or strangers likely shares this problem.

Even if we avoid this, we may still be lost, since the second big relevant thing is our tendency to anthropomorphize things that had better not be anthropomorphized. A paranoid brain that sees agency in every shadow or strange sound seems like something evolution would favour over one that fails to see it every now and then; in other words, the cost of false positives was reasonably low. Also, our brains are just plain lazy. The general population is pretty good at modelling other human minds, and considering just how hard the task is, we do a pretty remarkable job of it. So when you want rain, you do a rain dance to appease the sky spirits, since the weather is pretty capricious, and angry sky spirits is a model that makes as much sense as any other (when you are stuck in relative ignorance) and is cheap to run on your brain.

The modern world is remarkably complex. Our Dunbarian minds probably just plain can't grasp how a society can be that complex and unpredictable without being "planned" by a cabal of Satan or Heterosexual White Males or the Illuminati (but I repeat myself twice) scheming to make weird things happen in the small stone-age tribe. Learning about and gaining confidence in some models helps people escape anthropomorphizing human society (this might sound strange, but here on LW we are wary of doing this even to people, ha, beat that!) or the economy or government.
The latter is particularly salient, since the idea that something like the United States government can be successfully modelled as a single agent to explain most of its actions is something I dare say most people slip up on occasionally. And lastly... a naughty secret conspiracy and malignant agency just plain make a good story.
Humans loooove stories.
Related: Reversed stupidity is not intelligence; Knowing About Biases Can Hurt People.
An argument against a conspiracy theory is probabilistic, because we don't deny that conspiracies exist, only that in this specific case, a non-conspiracy explanation is more probable than a conspiracy explanation, therefore focusing on the conspiracy explanation is privileging a hypothesis.
People are not very good at probabilistic reasoning. So some of them prefer an in...
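The privileging-the-hypothesis point above can be made quantitative with a toy Bayes calculation (all numbers illustrative): even when the evidence fits a conspiracy perfectly, a low prior can keep it the less probable explanation.

```python
# Illustrative numbers only.
prior_conspiracy = 0.001
prior_mundane = 0.999

# Suppose the observed event is certain under a conspiracy,
# but still fairly likely under the mundane explanation.
p_evidence_given_conspiracy = 1.0
p_evidence_given_mundane = 0.3

# Bayes' rule: P(conspiracy | evidence)
posterior_conspiracy = (prior_conspiracy * p_evidence_given_conspiracy) / (
    prior_conspiracy * p_evidence_given_conspiracy
    + prior_mundane * p_evidence_given_mundane
)
print(round(posterior_conspiracy, 4))  # ~0.0033: still very improbable
```

"The evidence fits" raises the conspiracy's probability threefold here, yet it remains far below the mundane explanation; focusing on it anyway is exactly the privileged hypothesis.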