I feel that a lot of what's on LW (written by Eliezer or others) should be in mainstream academia. Not necessarily the most controversial views (the insistence on the many-worlds hypothesis, cryonics, FAI ...), but much of the work on overcoming biases should be there, be criticized there, and be improved there.
For example, a few debiasing methods and a more formal explanation of LW's peculiar solution to free will (and more, these are only examples).
I don't really get why LW's content isn't in mainstream academia to be honest.
I get that peer review is far from perfect (though it's still the best we have, and post-publication peer review is improving too; see PubPeer), and that some reviewers would too readily dismiss LW's content, but not all would. Many would play by the rules and provide genuine criticism during peer review (which would of course lead to alterations of the content), along with further criticism after publication. In my opinion this is something that has to happen.
LW, Eliezer, etc., can't stay at the "crank" level, not playing by the rules, publishing books but no papers. Blogs are indeed faster and reach more people, but I'm not arguing for publishing only in academia. Blogs can (and should) continue.
Tell me what you think, as I seem to have missed something with this topic.
Disclaimer: this is entirely a personal viewpoint, formed by a few years of publication in a few academic fields. EDIT: Many of the comments are very worth reading as well.
Having recently finished a very rushed submission (turns out you can write a novel paper in a day and a half, if you're willing to sacrifice quality and sanity), I've been thinking about how academic papers are structured - and more importantly, how they should be structured.
It seems to me that the key is to consider the audience. Or, more precisely, to consider the audiences - because different people will read your paper to different depths, and you should cater to all of them. An example of this is the "inverted pyramid" structure of many news articles - start with the salient facts, then the most important details, then fill in the other details. The idea is to ensure that a reader who stops reading at any point (which happens often) will nevertheless have got the most complete impression that it was possible to convey in the bit that they did read.
So, with that model in mind, let's consider the different levels of audience for a general academic paper (of course, some papers just can't fit into this mould, but many can):
This would count toward my major, and if I weren't going to take it, the likely replacement would be a course in experimental/"folk" philosophy. But I'd also like to hear your thoughts on the virtues of academic rationality courses in general.
(The main counterargument, I'd imagine, is that the Sequences cover most of the same material in a more fluid and comprehensible fashion.)
Here is the syllabus: http://www.yale.edu/darwall/PHIL+333+Syllabus.pdf
Other information: I sampled one lecture from the course last year. It was a noncommittal discussion of Newcomb's problem, which I found somewhat interesting despite having read most of the LW material on the subject.
When I asked what Omega would do if we activated a random number generator with a 50.01% chance of one-boxing us, the professors didn't dismiss the question as irrelevant, but they also didn't offer any particular answer.
I help run a rationality meetup at Yale, and this seems like a good place to meet interested students. On the other hand, I could just as easily leave flyers around before the class begins.
Related question: Could someone quickly sum up what might be meant by the "feminist critique" of rationality, as would be discussed in the course? I've read a few abstracts, but I'm still not sure I know the most important points of these critiques.
What can we learn about science from the divide during the Cold War?
I have one example in mind: America held that coal and oil were fossil fuels, the stored energy of the sun, while the Soviets held that they were the result of geologic forces applied to primordial methane.
At least one side is thoroughly wrong. This isn't a politically charged topic like sociology, or even biology, but a physical science where people are supposed to agree on the answers. This isn't a matter of research priorities, where one side doesn't care enough to figure things out, but a topic that both sides saw to be of great importance, and where they both claimed to apply their theories. On the other hand, Lysenkoism seems to have resulted from the practical importance of crop breeding.
First of all, this example supports the claim that there really was a divide, that science was disconnected into two poorly communicating camps. It suggests that when the two sides reached the same results on other topics, they did so independently. Even if we cannot learn from this example, it suggests that we may be able to learn from other consequences of dividing the scientific community.
My understanding is that although some Russian language research papers were available in America, they were completely ignored and the scientists failed to even acknowledge that there was a community with divergent opinions. I don't know about the other direction.
- Are there other topics, ideally in the physical sciences, on which such a substantial disagreement persisted for decades, not necessarily between these two parties?
- Did the Soviet scientists know that their American counterparts disagreed?
- Did Warsaw Pact (e.g., Polish) scientists generally agree with the Soviets about the origin of coal and oil? Were they aware of the American position? Did other Western countries agree with America? How about other countries, such as China and Japan?
- What are the current Russian beliefs about coal and oil? I tried running Russian Wikipedia through google translate and it seemed to support the biogenic theory. (right?) Has there been a reversal among Russian scientists? When? Or does Wikipedia represent foreign opinion? If a divide remains, does it follow the Iron Curtain, or some new line?
- Have I missed some detail that would make me not classify this as an honest disagreement between two scientific establishments?
- Finally, the original question: what can we learn about the institution of science?
Related: The Real End of Science
From the Economist.
“I SEE a train wreck looming,” warned Daniel Kahneman, an eminent psychologist, in an open letter last year. The premonition concerned research on a phenomenon known as “priming”. Priming studies suggest that decisions can be influenced by apparently irrelevant actions or events that took place just before the cusp of choice. They have been a boom area in psychology over the past decade, and some of their insights have already made it out of the lab and into the toolkits of policy wonks keen on “nudging” the populace.
Dr Kahneman and a growing number of his colleagues fear that a lot of this priming research is poorly founded. Over the past few years various researchers have made systematic attempts to replicate some of the more widely cited priming experiments. Many of these replications have failed. In April, for instance, a paper in PLoS ONE, a journal, reported that nine separate experiments had not managed to reproduce the results of a famous study from 1998 purporting to show that thinking about a professor before taking an intelligence test leads to a higher score than imagining a football hooligan.
The idea that the same experiments always get the same results, no matter who performs them, is one of the cornerstones of science’s claim to objective truth. If a systematic campaign of replication does not lead to the same results, then either the original research is flawed (as the replicators claim) or the replications are (as many of the original researchers on priming contend). Either way, something is awry.
I recommend reading the whole thing.
Last time Christopher Lee and I described some problems with scholarly publishing. The big problems are expensive journals and ineffective peer review. But we argued that solving these problems requires new methods of
• selection—assessing papers
• endorsement—making the quality of papers known, thus giving scholars the prestige they need to get jobs and promotions.
The Selected Papers Network is an infrastructure for doing both these jobs in an open, distributed way. It’s not yet the solution to the big visible problems—just a framework upon which we can build those solutions. It’s just getting started, and it can use your help.
LessWrong has twice discussed the PhilPapers Survey of professional philosophers' views on thirty controversies in their fields — in early 2011 and, more intensively, in late 2012. We've also been having some lively debates, prompted by LukeProg, about the general value of contemporary philosophical assumptions and methods. It would be swell to test some of our intuitions about how philosophers go wrong (and right) by looking closely at the aggregate output and conduct of philosophers, but relevant data is hard to come by.
Fortunately, Davids Chalmers and Bourget have done a lot of the work for us. They released a paper summarizing the PhilPapers Survey results two days ago, identifying, by factor analysis, seven major components consolidating correlations between philosophical positions, influences, areas of expertise, etc.
1. Anti-Naturalists: Philosophers of this stripe tend (more strongly than most) to assert libertarian free will (correlation with factor .66), theism (.63), the metaphysical possibility of zombies (.47), and A theories of time (.28), and to reject physicalism (.63), naturalism (.57), personal identity reductionism (.48), and liberal egalitarianism (.32).
Anti-Naturalists tend to work in philosophy of religion (.3) or Greek philosophy (.11). They avoid philosophy of mind (-.17) and cognitive science (-.18) like the plague. They hate Hume (-.14), Lewis (-.13), Quine (-.12), analytic philosophy (-.14), and being from Australasia (-.11). They love Plato (.13), Aristotle (.12), and Leibniz (.1).
2. Objectivists: They tend to accept 'objective' moral values (.72), aesthetic values (.66), abstract objects (.38), laws of nature (.28), and scientific posits (.28). Note 'Objectivism' is being used here to pick out a tendency to treat value as objectively binding and metaphysical posits as objectively real; it isn't connected to Ayn Rand.
A disproportionate number of objectivists work in normative ethics (.12), Greek philosophy (.1), or philosophy of religion (.1). They don't work in philosophy of science (-.13) or biology (-.13), and aren't continentalists (-.12) or Europeans (-.14). Their favorite philosopher is Plato (.1), least favorites Hume (-.2) and Carnap (-.12).
3. Rationalists: They tend to self-identify as 'rationalists' (.57) and 'non-naturalists' (.33), to accept that some knowledge is a priori (.79), and to assert that some truths are analytic, i.e., 'true by definition' or 'true in virtue of meaning' (.72). They also tend to posit metaphysical laws of nature (.34) and abstracta (.28). 'Rationalist' here clearly isn't being used in the LW or freethought sense; philosophical rationalists as a whole in fact tend to be theists.
Rationalists are wont to work in metaphysics (.14), and to avoid thinking about the sciences of life (-.14) or cognition (-.1). They are extremely male (.15), inordinately British (.12), and prize Frege (.18) and Kant (.12). They absolutely despise Quine (-.28, the largest correlation for a philosopher), and aren't fond of Hume (-.12) or Mill (-.11) either.
4. Anti-Realists: They tend to define truth in terms of our cognitive and epistemic faculties (.65) and to reject scientific realism (.6), a mind-independent and knowable external world (.53), metaphysical laws of nature (.43), and the notion that proper names have no meaning beyond their referent (.35).
They are extremely female (.17) and young (.15 correlation coefficient for year of birth). They work in ethics (.16), social/political philosophy (.16), and 17th-19th century philosophy (.11), avoiding metaphysics (-.2) and the philosophies of mind (-.15) and language (-.14). Their heroes are Kant (.23), Rawls (.14), and, interestingly, Hume (.11). They avoid analytic philosophy even more than the anti-naturalists do (-.17), and aren't fond of Russell (-.11).
5. Externalists: Really, they just like everything that anyone calls 'externalism'. They think the content of our mental lives in general (.66) and perception in particular (.55), and the justification for our beliefs (.64), all depend significantly on the world outside our heads. They also think that you can fully understand a moral imperative without being at all motivated to obey it (.5).
6. Star Trek Haters: This group is less clearly defined than the above ones. The main thing uniting them is that they're thoroughly convinced that teleportation would mean death (.69). Beyond that, Trekophobes tend to be deontologists (.52) who don't switch on trolley dilemmas (.47) and like A theories of time (.41).
Trekophobes are relatively old (-.1) and American (.13 affiliation). They are quite rare in Australia and Asia (-.18 affiliation). They're fairly evenly distributed across philosophical fields, and tend to avoid weirdo intuitions-violating naturalists — Lewis (-.13), Hume (-.12), analytic philosophers generally (-.11).
7. Logical Conventionalists: They two-box on Newcomb's Problem (.58), reject nonclassical logics (.48), and reject epistemic relativism and contextualism (.48). So they love causal decision theory, think all propositions/facts are generally well-behaved (always either true or false and never both or neither), and think there are always facts about which things you know, independent of who's evaluating you. Suspiciously normal.
They're also fond of a wide variety of relatively uncontroversial, middle-of-the-road views most philosophers agree about or treat as 'the default' — political egalitarianism (.33), abstract object realism (.3), and atheism (.27). They tend to think zombies are metaphysically possible (.26) and to reject personal identity reductionism (.26) — which aren't metaphysically innocent or uncontroversial positions, but, again, do seem to be remarkably straightforward and banal approaches to all these problems. Notice that a lot of these positions are intuitive and 'obvious' in isolation, but that they don't converge upon any coherent world-view or consistent methodology. They clearly aren't hard-nosed philosophical conservatives like the Anti-Naturalists, Objectivists, Rationalists, and Trekophobes, but they also clearly aren't upstart radicals like the Externalists (on the analytic side) or the Anti-Realists (on the continental side). They're just kind of, well... obvious.
Conventionalists are the only identified group that are strongly analytic in orientation (.19). They tend to work in epistemology (.16) or philosophy of language (.12), and are rarely found in 17th-19th century (-.12) or continental (-.11) philosophy. They're influenced by notorious two-boxer and modal realist David Lewis (.1), and show an aversion to Hegel (-.12), Aristotle (-.11), and Wittgenstein (-.1).
An observation: Different philosophers rely on — and fall victim to — substantially different groups of methods and intuitions. A few simple heuristics, like 'don't believe weird things until someone conclusively demonstrates them' and 'believe things that seem to be important metaphysical correlates for basic human institutions' and 'fall in love with any views starting with "ext"', explain a surprising amount of diversity. And there are clear common tendencies to either trust one's own rationality or to distrust it in partial (Externalism) or pathological (Anti-Realism, Anti-Naturalism) ways. But the heuristics don't hang together in a single Philosophical World-View or Way Of Doing Things, or even in two or three such world-views.
There is no large, coherent, consolidated group that's particularly attractive to LWers across the board, but philosophers seem to fall short of LW expectations for some quite distinct reasons. So attempting to criticize, persuade, shame, praise, or even speak of or address philosophers as a whole may be a bad idea. I'd expect it to be more productive to target specific 'load-bearing' doctrines on dimensions like the above than to treat the group as a monolith, for many of the same reasons we don't want to treat 'scientists' or 'mathematicians' as monoliths.
Another important result: Something is going seriously wrong with the high-level training and enculturation of professional philosophers. Or fields are just attracting thinkers who are disproportionately bad at critically assessing a number of the basic claims their field is predicated on or exists to assess.
Philosophers working in decision theory are drastically worse at Newcomb than are other philosophers, two-boxing 70.38% of the time where non-specialists two-box 59.07% of the time (normalized after getting rid of 'Other' answers). Philosophers of religion are the most likely to get questions about religion wrong — 79.13% are theists (compared to 13.22% of non-specialists), and they tend strongly toward the Anti-Naturalism dimension. Non-aestheticians think aesthetic value is objective 53.64% of the time; aestheticians think it's objective 73.88% of the time. Working in epistemology tends to make you an internalist, philosophy of science tends to make you a Humean, metaphysics a Platonist, ethics a deontologist. This isn't always the case; but it's genuinely troubling to see non-expertise emerge as a predictor of getting any important question in an academic field right.
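The "normalized after getting rid of 'Other' answers" figures are just the two remaining answer shares rescaled to sum to 100%; a minimal sketch of the arithmetic (the counts below are hypothetical, chosen only for illustration):

```python
def normalized_share(a, b):
    """Share of answer `a` among respondents who picked either `a` or `b`,
    i.e. the fraction after discarding 'Other' responses."""
    return a / (a + b)

# Hypothetical counts: 120 two-boxers, 83 one-boxers ('Other' dropped).
print(round(100 * normalized_share(120, 83), 2))  # 59.11
```

The survey percentages quoted above are this same computation applied to the real response counts.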
EDIT: I've replaced "cluster" talk above with "dimension" talk. I had in mind gjm's "clusters in philosophical idea-space", not distinct groups of philosophers. gjm makes this especially clear:
The claim about these positions being made by the authors of the paper is not, not even a little bit, "most philosophers fall into one of these seven categories". It is "you can generally tell most of what there is to know about a philosopher's opinions if you know how well they fit or don't fit each of these seven categories". Not "philosopher-space is mostly made up of these seven pieces" but "philosopher-space is approximately seven-dimensional".
I'm particularly guilty of promoting this misunderstanding (including in portions of my own brain) by not noting that the dimensions can be flipped to speak of (anti-anti-)naturalists, anti-rationalists, etc. My apologies. As Douglas_Knight notes below, "If there are clusters [of philosophers], PCA might find them, but PCA might tell you something interesting even if there are no clusters. But if there are clusters, the factors that PCA finds won't be the clusters, but the differences between them. [...] Actually, factor analysis pretty much assumes that there aren't clusters. If factor 1 put you in a cluster, that would tell pretty much all there is to say and would pin down your factor 2, but the idea in factor analysis is that your factor 2 is designed to be as free as possible, despite knowing factor 1."
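Douglas_Knight's distinction (factors are axes of variation, not group labels) can be illustrated with a toy simulation. This is a hypothetical sketch using plain NumPy PCA via SVD, not the paper's actual analysis: two continuous latent traits generate correlated answers to six questions, and the recovered components are directions in question-space rather than cluster assignments.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate 500 'philosophers' answering 6 questions. Two latent traits
# drive the answers; individuals vary continuously, so there are no
# discrete clusters, only directions of correlated variation.
latent = rng.normal(size=(500, 2))
loadings = np.array([[0.9, 0.0], [0.8, 0.1], [0.7, -0.1],
                     [0.0, 0.9], [0.1, 0.8], [-0.1, 0.7]])
answers = latent @ loadings.T + 0.3 * rng.normal(size=(500, 6))

# PCA via SVD of the centered data: the principal components are the
# directions of maximal variance, and every respondent gets a continuous
# score on each one; nobody is assigned to a group.
centered = answers - answers.mean(axis=0)
_, s, vt = np.linalg.svd(centered, full_matrices=False)
explained = s**2 / np.sum(s**2)

print(vt.shape)                   # (6, 6): each row is a direction in question-space
print(explained[:2].sum() > 0.8)  # True: the two latent traits dominate the variance
```

Flipping the sign of a row of `vt` changes nothing substantive, which is why the same factor can equally be read as "anti-naturalist" or "anti-anti-naturalist".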
I had an interesting recent conversation with a fellow academic that I think worth a blog post. It started with my commenting that I thought support for "diversity" in the sense in which the term is usually used in the academic context—having students or faculty from particular groups, in particular blacks but also, in some contexts, gays, perhaps hispanics, perhaps women—in practice anticorrelated with support for the sort of diversity, diversity of ideas, that ought to matter to a university.

I offered my standard example. Imagine that a university department has an opening and is down to two or three well qualified candidates. They learn that one of them is an articulate supporter of South African Apartheid. Does the chance of hiring him go up or down? If the university is actually committed to intellectual diversity, the chance should go up—it is, after all, a position that neither faculty nor students are likely to have been exposed to. In fact, in any university I am familiar with, it would go sharply down.

The response was that he considered himself very open minded, getting along with people across the political spectrum, but that that position was so obviously beyond the bounds of reasonable discourse that refusing to hire the candidate was the correct response.

The question I should have asked and didn't was whether he had ever been exposed to an intelligent and articulate defense of apartheid. Having spent my life in the same general environment—American academia—as he spent his, I think the odds are pretty high that he had not been. If so, he was in the position of a judge who, having heard the case for the prosecution, convicted the defendant without bothering to hear the defense.

Worse still, he was not only concluding that the position was wrong—we all have limited time and energy, and so must often reach such conclusions on an inadequate basis—he was concluding it with a level of certainty so high that he was willing to rule out the possibility that the argument on the other side might be worth listening to.

An alternative question I might have put to him was whether he could make the argument for apartheid about as well as a competent defender of that system could. That, I think, is a pretty good test of whether one has an adequate basis to reject a position—if you don't know the arguments for it, you probably don't know whether those arguments are wrong, although there might be exceptions. I doubt that he could have. At least, in the case of political controversies where I have been a supporter of the less popular side, my experience is that those on the other side considerably overestimate their knowledge of the arguments they reject.

Which reminds me of something that happened to me almost fifty years ago—in 1964, when Barry Goldwater was running for President. I got into a friendly conversation with a stranger, probably set off by my wearing a Goldwater pin and his curiosity as to how someone could possibly support that position.

We ran through a series of issues. In each case, it was clear that he had never heard the arguments I was offering in defense of Goldwater's position and had no immediate rebuttal. At the end he asked me, in a don't-want-to-offend-you tone of voice, whether I was taking all of these positions as a joke.

I interpreted it, and still do, as the intellectual equivalent of "what is a nice girl like you doing in a place like this?" How could I be intelligent enough to make what seemed like convincing arguments for positions he knew were wrong, and yet stupid enough to believe them?
"Hjernevask", a documentary series well known (in Norway at least) that I am sure will interest rationalists here, is now available online with English subtitles. Produced by Ole Martin Ihle and Harald Eia, a Norwegian documentarian and comedian respectively, it casts light both on ways in which we know people to be different and on the culture of academia in that Nordic country, and probably elsewhere as well.
- The Gender Equality Paradox - Why do girls tend to go into empathizing professions and boys into systemizing professions? Why does the labor market become more gender segregated the more economic prosperity a country has?
- The Parental Effect - How much influence do parents really have on their children? To what degree is intelligence inherited?
- Gay/Straight - To what extent is sexual preference innate? Are there differences between heterosexual and homosexual brains? Is homosexuality a result of a choice or is it innate?
- Violence - Are people from some cultures more aggressive than others?
- Sex - Are there biological reasons men have a greater tendency than women to want sex without obligation?
- Race - Are there significant genetic differences between different peoples?
- Nature or Nurture - Is personality acquired or inherited?
The links go to the YouTube videos with English subtitles. Because linkrot sucks, I'm providing another source for the videos.
There was very little in the series that I found new, though I disagreed with some of the presentations. But this is not surprising given my eccentric interest in humans. (^_^) I found the interviews with the scientists and academics interesting, and think that overall the series presents a good overview and is well worth watching, especially considering some of the debates I've seen take place here recently. (;_;)
I'm somewhat frustrated by the frequent posts warning us about the dangers of Ev. Psych reasoning. (It seems like we average at least one of these per month).
It seems like a lot of this widespread hostility (the reaction to Harald Eia's Hjernevask is a good example of it) stems from the fact that ev. psych is new. New ideas are held to a much higher standard than old ones. The early reaction to ev. psych within psychology was characteristic of this effect. Behaviorists, Freudians, and social psychologists had all created their own theories of "ultimate causation" for human behaviour. None of those theories would have stood up to the strenuous demands for experimental validation that ev. psych endured.
But science started to suffer. With so much easy money, few wanted to study the hard sciences. And the social sciences suffered in another way: the ties with the government became too tight, and created a culture where controversial issues and tough discussions were avoided. Too critical, and you could risk getting no more money.
It was in this culture that Harald Eia started his studies, in sociology, early in the nineties. He made it as far as becoming a junior researcher, but then dropped out and started a career as a comedian instead. He has said that he suddenly, after reading some books which were not on the syllabus, discovered that he had been cheated. What he was taught in his sociology classes was not up to date with international research, and was based more on ideology than on science.
The latter wrote that in a 2010 article on the documentary series, which I would also recommend reading. HT to iSteve, where it is quoted in full.
I recommend reading the piece, but below are some excerpts and commentary.
Power of Suggestion
By Tom Bartlett
Along with personal upheaval, including a lengthy child-custody battle, [Yale social psychologist John Bargh] has coped with what amounts to an assault on his life's work, the research that pushed him into prominence, the studies that Malcolm Gladwell called "fascinating" and Daniel Kahneman deemed "classic."
What was once widely praised is now being pilloried in some quarters as emblematic of the shoddiness and shallowness of social psychology. When Bargh responded to one such salvo with a couple of sarcastic blog posts, he was ridiculed as going on a "one-man rampage." He took the posts down and regrets writing them, but his frustration and sadness at how he's been treated remain.
Psychology may be simultaneously at the highest and lowest point in its history. Right now its niftiest findings are routinely simplified and repackaged for a mass audience; if you wish to publish a best seller sans bloodsucking or light bondage, you would be well advised to match a few dozen psychological papers with relatable anecdotes and a grabby, one-word title. That isn't true across the board. ... But a social psychologist with a sexy theory has star potential. In the last decade or so, researchers have made astonishing discoveries about the role of consciousness, the reasons for human behavior, the motivations for why we do what we do. This stuff is anything but incremental.
At the same time, psychology has been beset with scandal and doubt. Formerly high-flying researchers like Diederik Stapel, Marc Hauser, and Dirk Smeesters saw their careers implode after allegations that they had cooked their results and managed to slip them past the supposedly watchful eyes of peer reviewers.
Psychology isn't the only field with fakers, but it has its share. Plus there's the so-called file-drawer problem, that is, the tendency for researchers to publish their singular successes and ignore their multiple failures, making a fluke look like a breakthrough. Fairly or not, social psychologists are perceived to be less rigorous in their methods, generally not replicating their own or one another's work, instead pressing on toward the next headline-making outcome.
Much of the criticism has been directed at priming. The definitions get dicey here because the term can refer to a range of phenomena, some of which are grounded in decades of solid evidence—like the "anchoring effect," which happens, for instance, when a store lists a competitor's inflated price next to its own to make you think you're getting a bargain. That works. The studies that raise eyebrows are mostly in an area known as behavioral or goal priming, research that demonstrates how subliminal prompts can make you do all manner of crazy things. A warm mug makes you friendlier. The American flag makes you vote Republican. Fast-food logos make you impatient.
A small group of skeptical psychologists—let's call them the Replicators—have been trying to reproduce some of the most popular priming effects in their own labs.
What have they found? Mostly that they can't get those results. The studies don't check out. Something is wrong. And because he is undoubtedly the biggest name in the field, the Replicators have paid special attention to John Bargh and the study that started it all.
... When the walking times of the two groups were compared, the Florida-knits-alone subjects walked, on average, more slowly than the control group. Words on a page made them act old.
It's a cute finding. But the more you think about it, the more serious it starts to seem. What if we are constantly being influenced by subtle, unnoticed cues? If "Florida" makes you sluggish, could "cheetah" make you fleet of foot? Forget walking speeds. Is our environment making us meaner or more creative or stupider without our realizing it? We like to think we're steering the ship of self, but what if we're actually getting blown about by ghostly gusts?
Steve Sailer comments on this:
Advertisers, from John Wanamaker onward, sure as heck hope they are blowing you about by ghostly gusts.
Not only advertisers, the industry in which he worked, but indeed our little community probably loves any results confirming such a picture. We need to be careful about that. Bartlett continues:
John Bargh and his co-authors, Mark Chen and Lara Burrows, performed that experiment in 1990 or 1991. They didn't publish it until 1996. Why sit on such a fascinating result? For starters, they wanted to do it again, which they did. They also wanted to perform similar experiments with different cues. One of those other experiments tested subjects to see if they were more hostile when primed with an African-American face. They were. (The subjects were not African-American.) In the other experiment, the subjects were primed with rude words to see if that would make them more likely to interrupt a conversation. It did.
The researchers waited to publish until other labs had found the same type of results. They knew their finding would be controversial. They knew many people wouldn't believe it. They were willing to stick their necks out, but they didn't want to be the only ones.
Since that study was published in the Journal of Personality and Social Psychology, it has been cited more than 2,000 times. Though other researchers did similar work at around the same time, and even before, it was that paper that sparked the priming era. Its authors knew, even before it was published, that the paper was likely to catch fire. They wrote: "The implications for many social psychological phenomena ... would appear to be considerable." Translation: This is a huge deal.
The last year has been tough for Bargh. Professionally, the nadir probably came in January, when a failed replication of the famous elderly-walking study was published in the journal PLoS ONE. It was not the first failed replication, but this one stung. In the experiment, the researchers had tried to mirror Bargh's methods with an important exception: Rather than stopwatches, they used automatic timing devices with infrared sensors to eliminate any potential bias. The words didn't make subjects act old. They tried the experiment again with stopwatches and added a twist: They told those operating the stopwatches which subjects were expected to walk slowly. Then it worked. The title of their paper tells the story: "Behavioral Priming: It's All in the Mind, but Whose Mind?"
The paper annoyed Bargh. He thought the researchers didn't faithfully follow his methods section, despite their claims that they did. But what really set him off was a blog post that explained the results. The post, on the blog Not Exactly Rocket Science, compared what happened in the experiment to the notorious case of Clever Hans, the horse that could supposedly count. It was thought that Hans was a whiz with figures, stomping a hoof in response to mathematical queries. In reality, the horse was picking up on body language from its handler. Bargh was the deluded horse handler in this scenario. That didn't sit well with him. If the PLoS ONE paper is correct, the significance of his experiment largely dissipates. What's more, he looks like a fool, tricked by a fairly obvious flaw in the setup.
Pashler, a professor of psychology at the University of California at San Diego, is the most prolific of the Replicators. He started trying priming experiments about four years ago because, he says, "I wanted to see these effects for myself." That's a diplomatic way of saying he thought they were fishy. He's tried more than a dozen so far, including the elderly-walking study. He's never been able to achieve the same results. Not once.
This fall, Daniel Kahneman, the Nobel Prize-winning psychologist, sent an e-mail to a small group of psychologists, including Bargh, warning of a "train wreck looming" in the field because of doubts surrounding priming research. He was blunt: "I believe that you should collectively do something about this mess. To deal effectively with the doubts you should acknowledge their existence and confront them straight on, because a posture of defiant denial is self-defeating," he wrote.
Strongly worded e-mails from Nobel laureates tend to get noticed, and this one did. He sent it after conversations with Bargh about the relentless attacks on priming research. Kahneman cast himself as a mediator, a sort of senior statesman, endeavoring to bring together believers and skeptics. He does have a dog in the fight, though: Kahneman believes in these effects and has written admiringly of Bargh, including in his best seller Thinking, Fast and Slow.
On the heels of that message from on high, an e-mail dialogue began between the two camps. The vibe was more conciliatory than what you hear when researchers are speaking off the cuff and off the record. There was talk of the type of collaboration that Kahneman had floated, researchers from opposing sides combining their efforts in the name of truth. It was very civil, and it didn't lead anywhere.
In one of those e-mails, Pashler issued a challenge masquerading as a gentle query: "Would you be able to suggest one or two goal priming effects that you think are especially strong and robust, even if they are not particularly well-known?" In other words, put up or shut up. Point me to the stuff you're certain of and I'll try to replicate it. This was intended to counter the charge that he and others were cherry-picking the weakest work and then doing a victory dance after demolishing it. He didn't get the straightforward answer he wanted. "Some suggestions emerged but none were pointing to a concrete example," he says.
One possible explanation for why these studies continually and bewilderingly fail to replicate is that they have hidden moderators, sensitive conditions that make them a challenge to pull off. Pashler argues that the studies never suggest that. He wrote in that same e-mail: "So from our reading of the literature, it is not clear why the results should be subtle or fragile."
Bargh contends that we know more about these effects than we did in the 1990s, that they're more complicated than researchers had originally assumed. That's not a problem, it's progress. And if you aren't familiar with the literature in social psychology, with the numerous experiments that have modified and sharpened those early conclusions, you're unlikely to successfully replicate them. Then you will trot out your failure as evidence that the study is bogus when really what you've proved is that you're no good at social psychology.
Pashler can't quite disguise his disdain for such a defense. "That doesn't make sense to me," he says. "You published it. That must mean you think it is a repeatable piece of work. Why can't we do it just the way you did it?"
That's how David Shanks sees things. He, too, has been trying to replicate well-known priming studies, and he, too, has been unable to do so. In a forthcoming paper, Shanks, a professor of psychology at University College London, recounts his and his several co-authors' attempts to replicate one of the most intriguing effects, the so-called professor prime. In the study, one group was told to imagine a professor's life and then list the traits it brought to mind. Another group was told to do the same except with a soccer hooligan rather than a professor.
The groups were then asked questions selected from the board game Trivial Pursuit, questions like "Who painted 'Guernica'?" and "What is the capital of Bangladesh?" (Picasso and Dhaka, for those playing at home.) Their scores were then tallied. The subjects who imagined the professor scored above a control group that wasn't primed. The subjects who imagined soccer hooligans scored below the professor group and below the control. Thinking about a professor makes you smart while thinking about a hooligan makes you dumb. The study has been replicated a number of times, including once on Dutch television.
Shanks can't get the result. And, boy, has he tried. Not once or twice, but nine times.
The skepticism about priming, says Shanks, isn't limited to those who have committed themselves to reperforming these experiments. It's not only the Replicators. "I think more people in academic psychology than you would imagine appreciate the historical implausibility of these findings, and it's just that those are the opinions that they have over the water fountain," he says. "They're not the opinions that get into the journals."
Like all the skeptics I spoke with, Shanks believes the worst is yet to come for priming, predicting that "over the next two or three years you're going to see an avalanche of failed replications published." The avalanche may come sooner than that. There are failed replications in press at the moment and many more that have been completed (Shanks's paper on the professor prime is in press at PLoS ONE). A couple of researchers I spoke with didn't want to talk about their results until they had been peer reviewed, but their preliminary results are not encouraging.
Ap Dijksterhuis is the author of the professor-prime paper. At first, Dijksterhuis, a professor of psychology at Radboud University Nijmegen, in the Netherlands, wasn't sure he wanted to be interviewed for this article. That study is ancient news—it was published in 1998, and he's moved away from studying unconscious processes in the last couple of years, in part because he wanted to move on to new research on happiness and in part because of the rancor and suspicion that now accompany such work. He's tired of it.
The outing of Diederik Stapel made the atmosphere worse. Stapel was a social psychologist at Tilburg University, also in the Netherlands, who was found to have committed scientific misconduct in scores of papers. The scope and the depth of the fraud were jaw-dropping, and it changed the conversation. "It wasn't about research practices that could have been better. It was about fraud," Dijksterhuis says of the Stapel scandal. "I think that's playing in the background. It now almost feels as if people who do find significant data are making mistakes, are doing bad research, and maybe even doing fraudulent things."
Here is a link to the wiki article on the mentioned misconduct. I recall some of the drama that unfolded around the outing and the papers themselves... looking at the kinds of results Stapel wanted to fake or thought would advance his career reminds me of some other older examples of scientific misconduct.
In the e-mail discussion spurred by Kahneman's call to action, Dijksterhuis laid out a number of possible explanations for why skeptics were coming up empty when they attempted priming studies. Cultural differences, for example. Studying prejudice in the Netherlands is different from studying it in the United States. Certain subjects are not susceptible to certain primes, particularly a subject who is unusually self-aware. In an interview, he offered another, less charitable possibility. "It could be that they are bad experimenters," he says. "They may turn out failures to replicate that have been shown by 15 or 20 people already. It basically shows that it's something with them, and it's something going on in their labs."
Joseph Cesario is somewhere between a believer and a skeptic, though these days he's leaning more skeptic. Cesario is a social psychologist at Michigan State University, and he's successfully replicated Bargh's elderly-walking study, discovering in the course of the experiment that the attitude of a subject toward the elderly determined whether the effect worked or not. If you hate old people, you won't slow down. He is sympathetic to the argument that moderators exist that make these studies hard to replicate, lots of little monkey wrenches ready to ruin the works. But that argument only goes so far. "At some point, it becomes excuse-making," he says. "We have to have some threshold where we say that it doesn't exist. It can't be the case that some small group of people keep hitting on the right moderators over and over again."
Cesario has been trying to replicate a recent finding of Bargh's. In that study, published last year in the journal Emotion, Bargh and his co-author, Idit Shalev, asked subjects about their personal hygiene habits—how often they showered and bathed, for how long, how warm they liked the water. They also had subjects take a standard test to determine their degree of social isolation, whether they were lonely or not. What they found is that lonely people took longer and warmer baths and showers, perhaps substituting the warmth of the water for the warmth of regular human interaction.
That isn't priming, exactly, though it is a related unconscious phenomenon often called embodied cognition. As in the elderly-walking study, the subjects didn't realize what they were doing, didn't know they were bathing longer because they were lonely. Can warm water alleviate feelings of isolation? This was a result with real-world applications, and reporters jumped on it. "Wash the loneliness away with a long, hot bath," read an NBC News headline.
Bargh's study had 92 subjects. So far Cesario has run more than 2,500 through the same experiment. He's found absolutely no relationship between bathing and loneliness. Zero. "It's very worrisome if you have people thinking they can take a shower and they can cure their depression," he says. And he says Bargh's data are troublesome. "Extremely small samples, extremely large effects—that's a red flag," he says. "It's not a red flag for people publishing those studies, but it should be."
Even though he is, in a sense, taking aim at Bargh, Cesario thinks it's a shame that the debate over priming has become so personal, as if it's a referendum on one man. "He has the most eye-catching findings. He always has," Cesario says. "To the extent that some of his effects don't replicate, because he's identified as priming, it casts doubt on the entire body of research. He is priming."
I'll admit that took me a few seconds too long to parse. (~_^)
That has been the narrative. Bargh's research is crumbling under scrutiny and, along with it, perhaps priming as a whole. Maybe the most exciting aspect of social psychology over the last couple of decades, these almost magical experiments in which people are prompted to be smarter or slower without them even knowing it, will end up as an embarrassing footnote rather than a landmark achievement.
Well yes, dear journalist, that has been the narrative; you've just presented it to us readers.
Then along comes Gary Latham.
How entertaining a plot twist! Or maybe a journalist is making a story out of a confusing process in which academia tries to take account of a confusing array of new evidence. Of course, that's me telling a story right there. Agggh bad brain bad!
Latham, an organizational psychologist in the management school at the University of Toronto, thought the research Bargh and others did was crap. That's the word he used. He told one of his graduate students, Amanda Shantz, that if she tried to apply Bargh's principles it would be a win-win. If it failed, they could publish a useful takedown. If it succeeded ... well, that would be interesting.
They performed a pilot study, which involved showing subjects a photo of a woman winning a race before the subjects took part in a brainstorming task. As Bargh's research would predict, the photo made them perform better at the brainstorming task. Or seemed to. Latham performed the experiment again in cooperation with another lab. This time the study involved employees in a university fund-raising call center. They were divided into three groups. Each group was given a fact sheet that would be visible while they made phone calls. In the upper left-hand corner of the fact sheet was either a photo of a woman winning a race, a generic photo of employees at a call center, or no photo. Again, consistent with Bargh, the subjects who were primed raised more money. Those with the photo of call-center employees raised the most, while those with the race-winner photo came in second, both outpacing the photo-less control. This was true even though, when questioned afterward, the subjects said they had been too busy to notice the photos.
Latham didn't want Bargh to be right. "I couldn't have been more skeptical or more disbelieving when I started the research," he says. "I nearly fell off my chair when my data" supported Bargh's findings.
That experiment has changed Latham's opinion of priming and has him wondering now about the applications for unconscious primes in our daily lives. Are there photos that would make people be safer at work? Are there photos that undermine performance? How should we be fine-tuning the images that surround us? "It's almost scary in lots of ways that these primes in these environments can affect us without us being aware," he says. Latham hasn't stopped there. He's continued to try experiments using Bargh's ideas, and those results have only strengthened his confidence in priming. "I've got two more that are just mind-blowing," he says. "And I know John Bargh doesn't know about them, but he'll be a happy guy when he sees them."
Latham doesn't know why others have had trouble. He only knows what he's found, and he's certain about his own data. In the end, Latham thinks Bargh will be vindicated as a pioneer in understanding unconscious motivations. "I'm like a converted Christian," he says. "I started out as a devout atheist, and now I'm a believer."
Following his come-to-Jesus transformation, Latham sent an e-mail to Bargh to let him know about the call-center experiment. When I brought this up with Bargh, his face brightened slightly for the first time in our conversation. "You can imagine how that helped me," he says. He had been feeling isolated, under siege, worried that his legacy was becoming a cautionary tale. "You feel like you're on an island," he says.
Though Latham is now a believer, he remains the exception. With more failed replications in the pipeline, Dijksterhuis believes that Kahneman's looming-train-wreck letter, though well meaning, may become a self-fulfilling prophecy, helping to sink the field rather than save it. Perhaps the perception has already become so negative that further replications, regardless of what they find, won't matter much. For his part, Bargh is trying to take the long view. "We have to think about 50 or 100 years from now—are people going to believe the same theories?" he says. "Maybe it's not true. Let's see if it is or isn't."
Admirable that he's come to the latter attitude after the early angry blog posts prompted by what he was going through. That wasn't sarcasm; scientists are only human, after all, and there are easier things to do than this.
From pg812-1020 of Chapter 8 “Sufficiency, Ancillarity, And All That” of Probability Theory: The Logic of Science by E.T. Jaynes:
The classical example showing the error of this kind of reasoning is the fable about the height of the Emperor of China. Supposing that each person in China surely knows the height of the Emperor to an accuracy of at least ±1 meter, if there are N = 1,000,000,000 inhabitants, then it seems that we could determine his height to an accuracy at least as good as

1 m / √N = 10^−4.5 m ≈ 0.03 mm

merely by asking each person’s opinion and averaging the results.
The absurdity of the conclusion tells us rather forcefully that the rule is not always valid, even when the separate data values are causally independent; it requires them to be logically independent. In this case, we know that the vast majority of the inhabitants of China have never seen the Emperor; yet they have been discussing the Emperor among themselves and some kind of mental image of him has evolved as folklore. Then knowledge of the answer given by one does tell us something about the answer likely to be given by another, so they are not logically independent. Indeed, folklore has almost surely generated a systematic error, which survives the averaging; thus the above estimate would tell us something about the folklore, but almost nothing about the Emperor.
We could put it roughly as follows:
error in estimate = √(S² + R²/N) → S as N → ∞    (8-50)
where S is the common systematic error in each datum, R is the RMS ‘random’ error in the individual data values. Uninformed opinions, even though they may agree well among themselves, are nearly worthless as evidence. Therefore sound scientific inference demands that, when this is a possibility, we use a form of probability theory (i.e. a probabilistic model) which is sophisticated enough to detect this situation and make allowances for it.
As a start on this, equation (8-50) gives us a crude but useful rule of thumb; it shows that, unless we know that the systematic error is less than about one-third of the random error, we cannot be sure that the average of a million data values is any more accurate or reliable than the average of ten.¹ As Henri Poincaré put it: “The physicist is persuaded that one good measurement is worth many bad ones.” This has been well recognized by experimental physicists for generations; but warnings about it are conspicuously missing in the “soft” sciences whose practitioners are educated from those textbooks.
Or from pg1019-1020 of Chapter 10, “Physics of ‘Random Experiments’”:
…Nevertheless, the existence of such a strong connection is clearly only an ideal limiting case unlikely to be realized in any real application. For this reason, the law of large numbers and limit theorems of probability theory can be grossly misleading to a scientist or engineer who naively supposes them to be experimental facts, and tries to interpret them literally in his problems. Here are two simple examples:
- Suppose there is some random experiment in which you assign a probability p for some particular outcome A. It is important to estimate accurately the fraction f of times A will be true in the next million trials. If you try to use the law of large numbers, it will tell you various things about f; for example, that it is quite likely to differ from p by less than a tenth of one percent, and enormously unlikely to differ from p by more than one percent. But now, imagine that in the first hundred trials, the observed frequency of A turned out to be entirely different from p. Would this lead you to suspect that something was wrong, and revise your probability assignment for the 101st trial? If it would, then your state of knowledge is different from that required for the validity of the law of large numbers. You are not sure of the independence of different trials, and/or you are not sure of the correctness of the numerical value of p. Your prediction of f for a million trials is probably no more reliable than for a hundred.
- The common sense of a good experimental scientist tells him the same thing without any probability theory. Suppose someone is measuring the velocity of light. After making allowances for the known systematic errors, he could calculate a probability distribution for the various other errors, based on the noise level in his electronics, vibration amplitudes, etc. At this point, a naive application of the law of large numbers might lead him to think that he can add three significant figures to his measurement merely by repeating it a million times and averaging the results. But, of course, what he would actually do is to repeat some unknown systematic error a million times. It is idle to repeat a physical measurement an enormous number of times in the hope that “good statistics” will average out your errors, because we cannot know the full systematic error. This is the old “Emperor of China” fallacy…
Indeed, unless we know that all sources of systematic error - recognized or unrecognized - contribute less than about one-third the total error, we cannot be sure that the average of a million measurements is any more reliable than the average of ten. Our time is much better spent in designing a new experiment which will give a lower probable error per trial. As Poincaré put it, “The physicist is persuaded that one good measurement is worth many bad ones.”² In other words, the common sense of a scientist tells him that the probabilities he assigns to various errors do not have a strong connection with frequencies, and that methods of inference which presuppose such a connection could be disastrously misleading in his problems.
I excerpted & typed up these quotes for use in my DNB FAQ appendix on systematic problems; the applicability of Jaynes’s observations to things like publication bias is obvious. See also http://lesswrong.com/lw/g13/against_nhst/
If I am understanding this right, Jaynes’s point here is that the random error shrinks towards zero as N increases, but this error is added onto the “common systematic error” S, so the total error approaches S no matter how many observations you make and this can force the total error up as well as down (variability, in this case, actually being helpful for once). So for example, the total error √(S² + R²/N): with N=100, it’s 0.43; with N=1,000,000 it’s 0.334; and with still larger N it equals 0.333365 etc., never going below the original systematic error S. This leads to the unfortunate consequence that the likely error of N=10 is 0.017<x<0.64956 while for N=1,000,000 it is the similar range 0.017<x<0.33433 - so it is possible that the estimate could be exactly as good (or bad) for the tiny sample as compared with the enormous sample, since neither can do better than 0.017!↩
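Under this reading, the footnote's arithmetic amounts to tabulating √(S² + R²/N) directly. A minimal sketch; S = 1/3 and R = 1 are illustrative values I've assumed here, and may differ slightly from whatever constants produced the exact figures above:

```python
def total_error(n, s=1/3, r=1.0):
    """Jaynes-style total error of an n-point average: the random
    component shrinks as 1/sqrt(n), but the shared systematic
    component s survives averaging untouched."""
    return (s**2 + r**2 / n) ** 0.5

for n in (10, 100, 10_000, 1_000_000):
    print(f"N={n:>9,}  error={total_error(n):.6f}")
# The error falls from ~0.459 toward 1/3 but can never drop below s.
```

The key design point is the `s**2` term sitting outside the `1/n` factor: no amount of data touches it, which is exactly why the million-sample average is barely better than the ten-sample one.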
Possibly this is what Lord Rutherford meant when he said, “If your experiment needs statistics you ought to have done a better experiment”.↩
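A quick Monte Carlo makes the same systematic-error point empirically. This is a sketch of my own, not Jaynes's (the constants S = 0.3 and R = 1 are arbitrary): every datum shares one unknown offset, so piling on data stops helping once the random noise has been averaged away.

```python
import random

def rms_error_of_mean(n, systematic=0.3, noise_sd=1.0, trials=2000):
    """RMS error of the mean of n measurements of a true value of 0,
    where every measurement carries the same systematic offset plus
    independent Gaussian noise."""
    sq = 0.0
    for _ in range(trials):
        mean = sum(systematic + random.gauss(0.0, noise_sd)
                   for _ in range(n)) / n
        sq += mean ** 2
    return (sq / trials) ** 0.5

small = rms_error_of_mean(10)                 # ~ sqrt(0.3**2 + 1/10) ~ 0.44
big = rms_error_of_mean(10_000, trials=200)   # ~ 0.30: barely better
```

A thousandfold increase in sample size moves the error only from roughly 0.44 to the systematic floor of 0.3, mirroring the "million no better than ten" rule of thumb.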
I wish to transfer to a university in Europe to complete my engineering education. I thought this might be an opportunity to start a discussion on the merits of European technical schools, given how many people here have a STEM background and have experienced them first-hand.
Which ones do you think are best at teaching? Which provide the best professional starting point? Which have the most productive, idealistic mood among the student body? If you've attended several such schools, how do they compare to each other?
The floor is yours.
A paper on the psychology of religious belief, Paranormal and Religious Believers Are More Prone to Illusory Face Perception than Skeptics and Non-believers, came onto my radar recently. I used to talk a lot about the theory of religious cognitive psychology years ago, but the interest kind of faded when it seemed that empirical results were relatively thin in relation to the system building (Ara Norenzayan’s work being an exception to this generality). The theory is rather straightforward: religious belief is a naturally evoked consequence of the general architecture of our minds. For example, gods are simply extensions of persons, and make natural sense in light of our tendency to anthropomorphize the world around us (this may have had evolutionary benefit, in that false positives for detection of other agents were far less costly than false negatives; think an ambush by a rival clan).*
But enough theory. Are religious people cognitively different from those who are atheists? I suspect so. I speak as someone who never really believed in God, despite being inculcated with religious ideas from childhood. By the time I was seven years of age I realized that I was an atheist, and that my prior “beliefs” about God were basically analogous to Spinozan Deism. I had simply never believed in a personal God, but for many of my earliest years it was less a matter of disbelief than that I did not even comprehend, or cogently elaborate in my mind, the idea of this entity, which others took for granted as self-evidently obvious. From talking to many other atheists I have come to the conclusion that atheism is a mental deviance. This does not mean that mental peculiarities are necessary or sufficient for atheism, but they increase the odds.
And yet after reading the above paper my confidence in that theory is reduced. The authors used ~50 individuals and attempted to correct for demographic confounds. Additionally, the results were statistically significant. But to my mind the above theory should make powerful predictions in terms of effect size, and the differences between non-believers, the religious, and those who accepted the paranormal were just not striking enough.
Because of theoretical commitments my prejudiced impulse was to accept these findings. But looking deeply within, they just aren’t persuasive in light of my prior expectations. This is a fundamental problem in much of social science. Statistical significance is powerful when you have a preference for the hypothesis forwarded. In contrast, the knives of skepticism come out when research is published which goes against your preconceptions.
So a question for psychologists: which results are robust and real, to the point where you would be willing to make a serious monetary bet on it being the orthodoxy in 10 years? My primary interest is cognitive psychology, but I am curious about other fields too.
Considering the community's heavy reliance on such results, I think we should try to answer the question as well.
Fifteen years ago John Horgan wrote The End Of Science: Facing The Limits Of Knowledge In The Twilight Of The Scientific Age. I remain skeptical as to the specific details of this book, but Carl’s write-up in The New York Times of a new paper in PNAS on the relative commonness of scientific misconduct in cases of retraction makes me mull over the genuine possibility of the end of science as we know it. This sounds ridiculous on the face of it, but you have to understand my model of and framework for what science is. In short: science is people. I accept the reality that science existed in some form among strands of pre-Socratic thought, or among late antique and medieval Muslims and Christians (not to mention among some Chinese as well). Additionally, I can accept the cognitive model whereby science and scientific curiosity is rooted in our psychology in a very deep sense, so that even small children engage in theory-building.
That is all well and good. The basic building blocks for many inventions and institutions existed long before their instantiation. But nevertheless the creation of institutions and inventions at a given moment is deeply contingent. Between 1600 and 1800 the culture of science as we know it emerged in the West. In the 19th and 20th centuries this culture became professionalized, but despite the explicit institutions and formal titles it is bound together by a common set of norms, an ethos if you will. Scientists work long hours for modest remuneration for the vain hope that they will grasp onto one fragment of reality, and pull it out of the darkness and declare to all, “behold!” That’s a rather flowery way of putting the reality that the game is about fun & fame. Most will not gain fame, but hopefully the fun will continue. Even if others may find one’s interests abstruse or esoteric, it is a special thing to be paid to reflect upon and explore what one is interested in.
Obviously this is an idealization. Science is a highly social and political enterprise, and injustice does occur. Merit and effort are not always rewarded, and on occasion machination truly pays. But overall the culture and enterprise muddle along, and are better at yielding a sense of reality as it is than their competitors. And yet all great things can end, and free-riders can destroy a system. If your rivals and competitors cheat and get ahead, what's to stop you but your own conscience? People will flinch from violating norms initially, even if those actions are in their own self-interest, but eventually they will break. And once they break, the norms have shifted; once a few break, the rest will follow. This is the logic which drives a vicious positive feedback loop, as individuals in their rational self-interest begin to cannibalize the components of the institutions which ideally would allow all to flourish. No one wants to be the last one in a collapsing building, the sucker who asserts that the structure will hold despite all evidence to the contrary.
Deluded as most graduate students are, they by and large are driven by an ideal. Once the ideal, the illusion, is ripped apart and eaten away from within, one can't rebuild it in a day. Trust evolves and accumulates organically; one cannot will it into existence. Centuries of capital are at stake, and it would be best to learn the lessons of history. We may declare that history has ended, but we can't unilaterally abolish eternal laws.
Link to original post.
When lies sound better than truth, people tend to lie. That's Social Desirability Bias for you. Take the truth, "Half the population is below the 50th percentile of intelligence." It's unequivocally true - and sounds awful. Nice people don't call others stupid - even privately.
The 2000 American National Election Study elegantly confirms this claim. One of the interviewers' tasks was to rate respondents' "apparent intelligence." Possible answers (reverse coded by me for clarity):
0 = Very Low
1 = Fairly Low
2 = Average
3 = Fairly High
4 = Very High
Objectively measured intelligence famously fits a bell curve. Subjectively assessed intelligence does not. At all. Check out the ANES distribution.
The ANES is supposed to be a representative national sample. Yet according to interviewers, only 6.1% of respondents are "below average"! The median respondent is "fairly high." Over 20% are "very high." Social Desirability Bias - interviewers' reluctance to impugn anyone's intelligence - practically has to be the explanation.
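The size of the gap is easy to quantify. A minimal sketch, using the 6.1% figure from the post and taking "average" to mean the median (so accurate ratings of a representative sample would place half the respondents below it):

```python
# Sketch: how far the ANES interviewer ratings deviate from what accurate
# ratings of a representative sample would look like. The 6.1% figure is
# from the post; the expected 50% follows from the definition of the median.
observed_below_average = 0.061
expected_below_average = 0.50  # half the population is below the median

# Interviewers called respondents "below average" roughly 8x less often
# than accurate assessment would require.
understatement = expected_below_average / observed_below_average
print(round(understatement, 1))  # → 8.2
```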
You could just file this away as an amusing curiosity and move on. But wait. Stare at the ANES results for a minute. Savor the data. Question: Are you starting to see the true face of widespread hostility to intelligence research? I sure think I do.
Suppose intelligence research were impeccable. How would psychologically normal humans react? Probably just as they do in the ANES: With denial. How can stupidity be a major cause of personal failure and social ills? Only if the world is full of stupid people. What kind of a person believes the world is full of stupid people? "A realist"? No! A jerk. A big meanie.
My point is not that intelligence research is impeccable. My point, rather, is that hostility to intelligence research is all out of proportion to its flaws - and Social Desirability Bias is the best explanation. Intelligence research tells the world what it doesn't want to hear. It says what people aren't supposed to say. On reflection, the amazing thing isn't that intelligence research has failed to vanquish its angry critics. The amazing thing is that the angry critics have failed to vanquish intelligence research. Everything we've learned about human intelligence is a triumph of mankind's rationality over mankind's Social Desirability Bias.
Due in part to Eliezer's writing style (e.g. not many citations), and in part to his scholarship preferences (e.g. his preference to figure out much of philosophy on his own), Eliezer's Sequences make it hard to see how closely their content agrees with work previously done in mainstream academia.
I predict several effects from this:
- Some readers will mistakenly think that common Less Wrong views are more parochial than they really are.
- Some readers will mistakenly think Eliezer's Sequences are more original than they really are.
- If readers want to know more about the topic of a given article, it will be more difficult for them to find the related works in academia than if those works had been cited in Eliezer's article.
I'd like to counteract these effects by connecting the Sequences to the professional literature. (Note: I sort of doubt it would have been a good idea for Eliezer to spend his time tracking down more references and so on, but I realized a few weeks ago that it wouldn't take me much effort to list some of those references.)
I don't mean to minimize the awesomeness of the Sequences. There is much original content in them (edit: probably most of their content is original), they are engagingly written, and they often have a more transformative effect on readers than the corresponding academic literature.
I'll break my list of references into sections based on how likely I think it is that a reader will have missed the agreement between Eliezer's articles and mainstream academic work.
(This is only a preliminary list of connections.)
First, a short personal note so you understand why this is important to me. To make a long story short, the son of a friend has an atypical form of autism and language troubles. That kid matters a lot to me, so I want to become stronger in helping him: to be able to interact with him better and help him overcome his troubles.
But I don't know much about psychology. I'm a computer scientist, with a general background of maths and physics. I'm kind of a nerd, social skills aren't my strength. I did read some of the basic books advised on Less Wrong, like Cialdini, Wright or Wiseman, but those just give me a very small background on which to build.
And psychology in general, and autism/language troubles in particular, are fields with a lot of pseudo-science. I'm very sceptical of Freud and psychoanalysis, for example, which I consider (though maybe I am wrong?) to be more like alchemy than like chemistry. There is a lot of mysticism around autism, too, along with sect-like gurus.
So I'm a bit unsure how, from my position of having a general scientific and rationality background, I can dive into a completely unrelated field. Research papers are probably above my current level in psychology, so I think books (textbooks or popular science) are the way to go. But how do I find which books, of the hundreds written on the topic, I should buy and read? Books that are evidence-based science, not pseudo-science, I mean. What is a general method for selecting which books to start with in a field you don't really know? I would welcome any advice from the community.
Disclaimer: this is a personal "call for help", but since I think the answers/advice may matter outside my own personal case, I hope you don't mind.
Summary: Current social psychology research is probably, on average, compromised by leftward political bias. Conservative researchers likely face discrimination in at least this field. More importantly, papers and research that do not fit a liberal perspective face greater barriers and burdens.
An article in the online publication Inside Higher Ed on a survey of anti-conservative bias among social psychologists.
Numerous surveys have found that professors, especially those in some disciplines, are to the left of the general public. But those same -- and other -- surveys have rarely found evidence that left-leaning academics discriminate on the basis of politics. So to many academics, the question of ideological bias is not a big deal. Investment bankers may lean to the right, but that doesn't mean they don't provide good service (or as best the economy will permit) to clients of all political stripes, the argument goes.
And professors should be assumed to have the same professionalism.
A new study, however, challenges that assumption -- at least in the field of social psychology. The study isn't due to be published until next month (in Perspectives on Psychological Science), and the authors and others are noting limitations to the study. But its findings of bias by social psychologists (even if just a decent-sized minority of them) are already getting considerable buzz in conservative circles. Just over 37 percent of those surveyed said that, given equally qualified candidates for a job, they would support the hiring of a liberal candidate over a conservative candidate. Smaller percentages agreed that a "conservative perspective" would negatively influence their odds of supporting a paper for inclusion in a journal or a proposal for a grant. (The final version of the paper is not yet available, but an early version may be found on the website of the Social Science Research Network.)
To some on the right, such findings are hardly surprising. But the authors, who expected to find lopsided political leanings but not bias, were surprised by the results.
"The questions were pretty blatant. We didn't expect people would give those answers," said Yoel Inbar, a co-author, who is a visiting assistant professor at the Wharton School of the University of Pennsylvania, and an assistant professor of social psychology at Tilburg University, in the Netherlands.
He said that the findings should concern academics. Of the bias he and a co-author found, he said, "I don't think it's O.K."
Discussion of faculty politics extends well beyond social psychology, and humanities professors are frequently accused of being "tenured radicals" (a label some wear with pride). But social psychology has had an intense debate over the issue in the last year.
At the 2011 meeting of the Society for Personality and Social Psychology, Jonathan Haidt of the University of Virginia polled the audience of some 1,000 in a convention center ballroom to ask how many were liberals (the vast majority of hands went up), how many were centrists or libertarians (he counted a couple dozen or so), and how many were conservatives (three hands went up). In his talk, he said that the conference reflected "a statistically impossible lack of diversity,” in a country where 40 percent of Americans are conservative and only 20 percent are liberal. He said he worried about the discipline becoming a "tribal-moral community" in ways that hurt the field's credibility.
The link above is worth following. The problems that arise remind me of the situation with academic ethics, and our own, in light of this paper.
That speech prompted the research that is about to be published. Members of a social psychologists' e-mail list were surveyed twice. (The group is not limited to American social scientists or faculty members, but about 90 percent are academics, including grad students, and more than 80 percent are Americans.) Not surprisingly, the overwhelming majority of those surveyed identified as liberal on social, foreign and economic policy, with the strongest conservative presence on economic policy. Only 6 percent described themselves as conservative over all.
The questions on willingness to discriminate against conservatives were asked in two ways: what the respondents thought they would do, and what they thought their colleagues would do. The pool included conservatives (who presumably aren't discriminating against conservatives) so the liberal response rates may be a bit higher, Inbar said.
The percentages below reflect those who gave a score of 4 or higher on a 7-point scale on how likely they would be to do something (with 4 being "somewhat" likely).
Percentages of Social Psychologists Who Would Be Biased in Various Ways
- A "politically conservative perspective" by the author would have a negative influence on evaluation of a paper: Self 18.6%, Colleagues 34.2%
- A "politically conservative perspective" by the author would have a negative influence on evaluation of a grant proposal: Self 23.8%, Colleagues 36.9%
- Would be reluctant to extend a symposium invitation to a colleague who is "politically quite conservative": Self 14.0%, Colleagues 29.6%
- Would vote for a liberal over a conservative job candidate if they were equally qualified: Self 37.5%, Colleagues 44.1%
I can't help but think that self-assessments are probably too generous. When the behaviour in question is undesirable, I'm more inclined to trust respondents' estimates of how their "colleagues" behave, as a predictor of how individuals actually behave, than their estimates of their own behaviour.
The more liberal the survey respondents identified as being, the more likely they were to say that they would discriminate.
The paper notes surveys and statements by conservatives in the field saying that they are reluctant to speak out and says that "they are right to do so," given the numbers of individuals who indicate they might be biased or that their colleagues might be biased in various ways.
Inbar said that he has no idea if other fields would have similar results. And he stressed that the questions were hypothetical; the survey did not ask participants if they had actually done these things.
He said that the study also collected free responses from participants, and that conservative responses were consistent with the idea that there is bias out there. "The responses included really egregious stuff, people being belittled by their advisers publicly for voting Republican."
This shouldn't be surprising to hear; to quote CharlieSheen: "we even have LW posters who have in academia personally experienced discrimination and harassment because of their right wing politics."
Neil Gross, a professor of sociology at the University of British Columbia, urged caution about the results. Gross has written extensively on faculty political issues. He is the co-author of a 2007 report that found that while professors may lean left, they do so less than is imagined and less uniformly across institution type than is imagined.
Gross said it was important to remember that the percentages saying they would discriminate in various ways are answering yes to a relatively low bar of "somewhat." He also said that the numbers would have been "more meaningful" if they had asked about actual behavior by respondents in the last year, not the more general question of whether they might do these things.
At the same time, he said that the numbers "are higher than I would have expected." One theory Gross has is that the questions are "picking up general political animosity as much as anything else."
If you are wondering about the political leanings of the social psychologists who conducted the study, they are on the left. Inbar said he describes himself as "a pretty doctrinaire liberal," who volunteered for the Obama campaign in 2008 and who votes Democrat. His co-author, Joris Lammers of Tilburg, is to Inbar's left, he said.
What most impressed him about the issues raised by the study, Inbar said, is the need to think about "basic fairness."
While I can see Lammers' point that this is disturbing from a fairness perspective for people grinding their way through academia, and that it should serve as a warning for right-wing LessWrong readers working through the system, I find a different issue much more concerning: our heavy reliance on academia for our map of reality might lead us to inherit these distortions of the map. In light of this, if a widely accepted conclusion from social psychology favours a "right wing" perspective, it is more likely to be correct than it would be if no such biases against such perspectives existed. Conversely, conclusions that favour a "left wing" perspective are somewhat less likely to be true than they would be if no such biases existed. We should update accordingly.
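The updating claim above can be made concrete with a toy Bayes calculation. All numbers here are invented for illustration; the only structural assumption is that biased gatekeeping raises the publication bar for unwelcome findings, and a higher bar filters out false findings more aggressively than true ones:

```python
# Toy Bayes update illustrating why conclusions that survive hostile
# gatekeeping carry extra evidential weight. Numbers are made up.
prior_true = 0.5
p_pub_given_true = 0.4   # assumed: true but unwelcome findings often blocked
p_pub_given_false = 0.1  # assumed: false unwelcome findings blocked even harder

# P(true | published) by Bayes' theorem:
posterior = (p_pub_given_true * prior_true) / (
    p_pub_given_true * prior_true + p_pub_given_false * (1 - prior_true)
)
print(round(posterior, 2))  # → 0.8
```

Under these (invented) numbers, a published unwelcome conclusion moves you from 50% to 80% confidence; a welcome conclusion, facing a lower bar, would move you less.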
I also think there are reasons to think we may have similar problems on this site.
When I read a book with new and interesting ideas, I usually want to know if there are major flaws that any knowledgeable scholar in the field would point out immediately. (Two recent examples are Pinker's "The Better Angels of Our Nature" and Harris's "The Nurture Assumption".) Here's what I usually do:
- Look at reviews on Amazon (especially the negative ones)
- Google with keywords like "criticism", "review", "problem" (and whatever major issues I seem to have run into), etc.
- Search Google Scholar for the same thing
- Ask in some communities (LessWrong, reddit AskHistorians) if anybody read it
One problem is that I end up spending a lot of time reading stuff of no interest - either reviewers explaining the book to people who haven't read it (and sometimes even misrepresenting its arguments, or framing them in terms of their pet controversy), or bloggers/posters who haven't read the book, go off a summary, and come up with arguments that are already well-addressed in it.
So, what tips and strategies do you have for finding solid scholarly criticism?
I'm an undergraduate studying molecular biology, and I am thinking of going into science. In Timothy Gowers's "The Importance of Mathematics", he says that many mathematicians just do whatever interests them, regardless of social benefit. I'd rather do something of some interest or technological benefit to people outside of a small group with a very specific education.
Does anybody have any thoughts or links on judging the impact of the work on a research topic?
Clearly, the pursuit of a research topic must be producing truth to be helpful, and I've read Vladimir_M's heuristics regarding this.
Here's something I've tried. My current lab work is on the structure of membrane proteins in bacteria, so this is something I did to see where all this work on protein structure goes. I took a paper that I had found to be a very useful reference for my own work, about a protein that forms a pore in the bacterial membrane with a flexible loop, experimenting with the influence of this loop on the protein's structure. I used the Web of Science database to find a list of about two thousand papers that cited papers that cited this loop paper. I looked through this two-steps-away list for the ones that were not about molecules. Without too much effort, I found a few. The farthest from molecules that I found was a paper on a bacterium that sometimes causes meningitis, discussing about a particular stage in its colonization of the human body. A few of the two-steps-away articles were about antibiotics discovery; though molecular, this is a topic that has a great deal of impact outside of the world of research on biomolecules.
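The procedure above amounts to a two-hop walk on the citation graph. Here's a minimal sketch on a toy in-memory graph; real databases like Web of Science expose citation links through their own interfaces, so the data structure and paper names here are purely illustrative:

```python
# Toy citation graph: maps each paper to the papers that cite it.
# Names are hypothetical stand-ins for the papers described in the post.
cites = {
    "loop_paper": ["struct_A", "struct_B"],
    "struct_A": ["antibiotics_1", "meningitis_study"],
    "struct_B": ["struct_C", "antibiotics_2"],
}

def two_steps_away(graph, start):
    """Papers that cite papers that cite `start`."""
    one_step = graph.get(start, [])
    second = set()
    for paper in one_step:
        second.update(graph.get(paper, []))  # leaf papers cite nothing further
    return second

print(sorted(two_steps_away(cites, "loop_paper")))
# → ['antibiotics_1', 'antibiotics_2', 'meningitis_study', 'struct_C']
```

Scanning the resulting set for papers outside the starting subfield is then a manual (or keyword-filtering) step, as described above.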
Though it occurs to me that it might be more fruitful to look the other way around: to identify some social benefits or interests people have, and see what scientific research is contributing the most to them.
Link to ACM press release.
In addition to their impact on probabilistic reasoning, Bayesian networks completely changed the way causality is treated in the empirical sciences, which are based on experiment and observation. Pearl's work on causality is crucial to the understanding of both daily activity and scientific discovery. It has enabled scientists across many disciplines to articulate causal statements formally, combine them with data, and evaluate them rigorously. His 2000 book Causality: Models, Reasoning, and Inference is among the most influential works in shaping the theory and practice of knowledge-based systems. His contributions to causal reasoning have had a major impact on the way causality is understood and measured in many scientific disciplines, most notably philosophy, psychology, statistics, econometrics, epidemiology and social science.
While that "major impact" still seems to me to be in the early stages of propagating through the various sciences, hopefully this award will inspire more people to study causality and Bayesian statistics in general.
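For readers unfamiliar with why Pearl's distinction between observing and intervening matters, here is a minimal sketch on a three-node network Z → X, Z → Y, where Z confounds X and Y. All probabilities are made up for illustration:

```python
# Z -> X and Z -> Y; X has no causal effect on Y at all.
p_z = 0.5
p_x_given_z = {0: 0.1, 1: 0.9}   # Z strongly drives X
p_y_given_z = {0: 0.2, 1: 0.8}   # ...and also drives Y

def p_y_observe_x(x):
    # P(Y=1 | X=x): conditioning on X shifts our beliefs about Z.
    num = den = 0.0
    for z, pz in ((0, 1 - p_z), (1, p_z)):
        px = p_x_given_z[z] if x == 1 else 1 - p_x_given_z[z]
        num += pz * px * p_y_given_z[z]
        den += pz * px
    return num / den

def p_y_do_x(x):
    # P(Y=1 | do(X=x)): setting X by force cuts the Z -> X edge,
    # so the distribution of Z is unchanged (and x is irrelevant here,
    # since X does not cause Y).
    return (1 - p_z) * p_y_given_z[0] + p_z * p_y_given_z[1]

print(round(p_y_observe_x(1), 2), round(p_y_do_x(1), 2))  # → 0.74 0.5
```

Observation makes X look like it raises Y (0.74 versus a base rate of 0.5), while the intervention calculation correctly reports no effect; Pearl's do-calculus is the general machinery for computing the latter from the former.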
Luke's recent post mentioned that The Lancet has a policy encouraging the advance registration of clinical trials, while mine examined an apparent case study of data-peeking and on-the-fly transformation of studies. But how much variation is there across journals on such dimensions? Are there journals that buck the standards of their fields (demanding registration, p=0.01 rather than p=0.05 where the latter is typical in the field, advance specification of statistical analyses and subject numbers, etc)? What are some of the standouts? Are there fields without any such?
I wonder if there is a niche for a new open-access journal, along the lines of PLoS, with standards strict enough to reliably exclude false-positives. Some possible titles:
- The Journal of Real Effects
- (Settled) Science
- Probably True
- Journal of Non-Null Results, Really
- Too Good to Be False
So I'm applying for grad schools right now, and am visiting Yale, Brown, and UChicago this month (I already got accepted into UChicago, and also got invited to expenses-paid visits to both Yale and Brown). I'm visiting Yale in just 2 days.
So what are some cool things a LWer can do at those places? And which professors do research that a LWer could potentially find very interesting? Which universities would a LWer find himself/herself most at home at?
Also, is there anything else I need to know about those places?
I'm still waiting for decisions from Columbia and MIT (and got rejected by Caltech).
Split from "Against Utilitarianism: Sobel's attack on judging lives' goodness" for length.
Robert K. Shope, back in his 1978 paper "The Conditional Fallacy in Contemporary Philosophy", identified a kind of argument that us transhumanists will find painfully familiar: you propose idea X, the other person says bad thing Y is a possible counterexample if X were true, so X can't be true - ignoring that Y may not happen, and X can just be modified to deal with Y if it's really that important.
("If we augment our brains, we may forget how to love!" "So don't remove love when you're augmenting, sheesh." "But it might not be possible!" "But wouldn't you agree that augmentation without loss of love would be better than the status quo?")
I managed to turn an essay assignment into an opportunity to write about the Singularity, and I thought I'd turn to LW for feedback on the paper. The paper is about Thomas Pogge, a German philosopher who works on institutional efforts to end poverty and is a pledger for Giving What We Can.
I offer a basic argument that he and other poverty activists should work on creating a positive Singularity, sampling liberally from well-known Less Wrong arguments. It's more academic than I would prefer, and it includes some loose talk of 'duties' (which bothers me), but for its goals, these things shouldn't be a huge problem. But maybe they are - I want to know that too.
I've already turned the assignment in, but when I make a better version, I'll send the paper to Pogge himself. I'd like to see if I can successfully introduce him to these ideas. My one conversation with him indicates that he would be open to actually changing his mind. He's clearly thought deeply about how to do good, and may simply have not been exposed to the idea of the Singularity yet.
I want feedback on all aspects of the paper - style, argumentation, clarity. Be as constructively cruel as I know only you can.
If anyone's up for it, feel free to add feedback using Track Changes and email me a copy - mjcurzi[at]wustl.edu. I obviously welcome comments on the thread as well.
You can read the paper here in various formats.
Upvotes for all. Thank you!
JSTOR is a massive online archive of academic journals, virtually all of which were behind a subscription wall. (JSTOR's been quite arbitrary about locking up its content; some of the papers it hosts are available at no cost elsewhere. For example, these papers from the Proceedings of the National Academy of Sciences USA are freely available on the PNAS website but not on JSTOR.) Two days ago JSTOR opened up part of its database by giving free access to nearly 500,000 old articles. From JSTOR's announcement:
I am writing to share exciting news: today, we are making journal content on JSTOR published prior to 1923 in the United States and prior to 1870 elsewhere, freely available to the public for reading and downloading. This includes nearly 500,000 articles from more than 200 journals, representing approximately 6% of the total content on JSTOR.
The announcement also refers obliquely to two related recent events — Greg Maxwell releasing a slice of old JSTOR material on the Pirate Bay (previously discussed on Less Wrong) and Aaron Swartz being charged for illicitly downloading papers from JSTOR en masse:
I realize that some people may speculate that making the Early Journal Content free to the public today is a direct response to widely-publicized events over the summer involving an individual who was indicted for downloading a substantial portion of content from JSTOR, allegedly for the purpose of posting it to file sharing sites. While we had been working on releasing the pre-1923/pre-1870 content before the incident took place, it would be inaccurate to say that these events have had no impact on our planning.
Haven't posted in quite a while.
Suppose you have a big, complicated question that you're not sure of the answer to, and you want to seek an adviser to guide you. One kind of adviser is someone whose opinion, by your lights, constitutes strong evidence regarding the answer; on the basis of that opinion alone you are prepared to substantially update your beliefs. Of course you may profit from further discussion beyond just hearing the adviser's opinion on the big question: since the question is complicated, hearing his or her reasoning or evidence on different elements of the big question may be valuable, but the point is that there are some advisers for whom just knowing their ultimate judgment moves the needle a lot for you. Such people might be termed "good guides."
But there may be other potential advisers whose ultimate opinion on the big question you don't credit much at all, but who you think might still have valuable insight into some important element of the question. A good example for me is "Chicago School" Industrial Organization Economics. Its members had some insights that are absolutely true and important ("one monopoly profit" and related ideas), insights that the people I would have regarded as my "good guides", had I been around at the time, did not have before them. No analyst who does not understand those insights can be a good analyst, and no analysis that ignores them can be correct. But simply knowing what an orthodox Chicago School economist thinks about some big question would move me very little. They are a valuable part of my "intellectual portfolio" (to use a phrase favored by Brad DeLong) and I would be a fool to dismiss them. But they are only providers of valuable input, not good guides.
I think the distinction between these two types of advisers is often missed. If you believe my example (if not, substitute one of your own, the point of this post is not to debate IO), there are a bunch of expert economists (Chicago School types) who should have fancy prestigious professorships, and whose arguments should be given careful consideration; and there are another bunch of expert economists who should have fancy prestigious professorships, whose arguments should be given careful consideration, and whose advice should be heeded. Leave aside the practical difficulty of knowing which is which if you are, say, a reporter or a policy-maker. The point is that there should be two buckets for two different types of prestigious advice-giver, but we only really have one.
At the end of June, I asked Less Wrong to vote for "What topic[s] would be best for an investigation and brief post?" in order to direct a search for topics to examine here. My thanks to everyone that participated (especially since the comments hint that the poll format was not well-liked). The most-wanted topics follow, and the complete list can be found on Google Docs -- maps and graphs related to the poll are also available on All Our Ideas. A score for a topic in the results below is an "estimated [percent] chance that it will win against a randomly chosen idea."
- Systems theory -- 71.6
- Leadership -- 70.7
- Linguistics (general) -- 70.7
- Finance -- 67.0
- Bayesian approach to business -- 60.7
- Lisp (Programming language) -- 59.7
- Anthropology (general) -- 59.4
- Sociology (general) -- 59.2
- Political Science (general) -- 58.5
- Historiography (the methods of history) -- 58.3
- Logistics -- 56.8
- Sociology of Political Organizations -- 56.0
- Military Theory -- 52.1
- Diplomacy -- 51.1
Systems theory, in first place, is a topic that I found while rummaging through online sources, including Wikipedia, for items to add to the poll; it's described there as the "study of systems in general, with the goal of elucidating principles that can be applied to all types of systems in all fields of research. [....] In this context the word systems is used to refer specifically to self-regulating systems, i.e. that are self-correcting through feedback." Leadership seems to fall into both the social and "being effective" categories of interest, but has only lightly been touched on in previous discussion here despite a lot of ink spilled on the topic elsewhere -- the top Google results for "leadership" on this site are currently Calcsam's post on community roles and a book review for the Arbinger Institute's Leadership and Self Deception. "To Lead, You Must Stand Up" also comes to mind.
The spreadsheet includes columns for "Currently Investigated By" and "Writeup URLs" -- feel free to add your name or writeup links. If you already know a thing or two about one of the above topics, share your knowledge in a comment below or in a discussion post as appropriate, similar to the earlier "What can you teach us?" If you want to survey what currently exists on a topic, grab a few books, investigate, and then let us know what you found. When a related post rather than just a comment is appropriate, I recommend the tag "topic_search". As mentioned previously, even investigations that end in a comment on this post saying that a topic isn't useful for LW are themselves useful for the search.
This post is in a constant state of revision, similar to this post. This is mainly because I do not have a beta and this is based on many personal experiences that are unclear at times.
My name is Matthew Baker and I want to save the world.
I think most people share the feeling that the world should be saved and that only true sociopaths can discount the value of all sentient life. This is so important because the majority of people aren't able to defeat their innate Akrasic reasoning, ugh fields, and other factors that prevent them from functioning in a way that aligns with their beliefs. I think that if you believe in something, and you wish to be more rational towards the world then you should either push your beliefs towards the current state of reality or push reality towards your current state of beliefs.
When I was younger and sought something that I could devote effort to that would change the world for the better, I was quite disillusioned by the fact that nearly every cause relied on their innate biases to deal with the problems facing them. From political struggles to moral tribulation humanity is very good at ignoring things that don't coincide with their worldview. I always sought to surpass that but for a long time I failed to find anything to believe in that coincided with reality. Now that my skepticism is satisfied I have to logically take a look at what things are preventing me from promoting my beliefs. Akrasia is the most dangerous foe of any true follower of rationality. I've personally experienced Akrasia as the feeling when you know you could be amazing but you find yourself unable to change due to the havoc that feelings can play with your thoughts. I am beginning a journey to fight Akrasia directly in all its forms. I've attempted this in the past without making much progress; I'm hoping a different approach will help me succeed (or at least make new and different mistakes). In this mini-sequence of posts I plan to document my fight to push past the depressing weight of Akrasia. As a tool to keep me on the path, I will also provide some anti-Akrasia reports on my progress with different techniques.
My goals for this quest are varied yet connected. I don't intend to take them all on at once, but instead to phase them in over the upcoming month and see if I can find the limit of my ability to avoid wasting time.
My goal is to make myself more fit and transition to eating healthier food. Right now I'm fairly skinny, and I want to build some muscle to match my height (6'1") - enough that I don't have trouble picking things up and carrying them without much outward signalling of effort. I'm not looking to become a bodybuilder or anything; I just want to optimize the vessel carrying my consciousness with better food and habits.
My goal is to become more skilled socially. I rested on my social laurels for a long time and focused on associating with people who fit my views on set issues. For maximum success I will focus on general social group construction as I advance into my second year of college. I want to see how much fun and rationality I can spread if I focus on being skilled at gathering smart and interesting people into the fun vortex I can create around me.
My third goal is to get a substantially higher GPA than I did last semester. I spent very little time on school but still managed a 3.1, which was lower than my first-semester GPA; I want to reverse that trend by spending more focused time on school and actually studying for the first time in my life.
The things that prevent me from achieving my goals are mostly random web browsing and gaming, lots of ugh fields I've only recently been able to write down and start purging from my thought process, negative emotions that sap my willpower, and other, currently unknown factors. Hopefully I will be able to surpass these problems with the power of self-reflection and sharing, classical conditioning, and positive substance use.
My goals for the upcoming week involve some social and fitness targets, since school doesn't start until the 20th. Hopefully I can get these partially phased in and then focus more on academics once I'm back up at school. For specific milestones: I want to dance closely with at least one girl at a rave I'm going to tonight up in LA, and I want to start working on pull-ups so I can get back to my previous total (3) and build from there.
I expect I'll have to deal with some social anxiety at the rave and some ugh fields around the fitness, but hopefully this form of specific goal-setting and reflection will work well. I will also have substances available as backup in case I fail to perform to my personal expectations. Combined, this should allow me to surpass my Akrasic reasoning of the past for the sake of our combined future.
What can you, my fellow rationalists, gain from my efforts? Hopefully, once I've completed my journey I'll be able to explain my mental state well enough that you can learn from it and apply it to your own goals. When my mental state is low, reading about how someone else pushed back up from a similarly bad state can be amazingly helpful, and I hope I can provide that for others.
Tsuyoku Naritai, my friends!
P.S. If luck exists, I wish to gain more of it and believe in it, so wish me luck with my first top-level post. :) Edit: It's now in Discussion until I see a surge of excitement toward the idea of this mini-sequence.
Followup to: Systematic Search for Useful Ideas
I've set up a pairwise poll for this question; additional suggestions are welcome. My original proposal was to examine only topics that haven't already been covered here, but instead I'd now like to ask people to weigh a topic's existing level of discussion here when evaluating which would be "best."
ETA: There are currently over 500 pairs. You don't have to go through all of them -- answer as many or as few as you like.
LessWrong is a font of good ideas, but the topics and interests usually expressed and explored here tend to cluster around a few areas. High-value topics for the community may therefore still exist in other fields, which can be explored systematically rather than waiting for a random encounter. Additionally, there seems to be interest here in examining a wider variety of topics. To do this, I suggest creating a community list of areas to look into (besides the usual AI, Cog Sci, Comp Sci, Econ, Math, Philosophy, Psych, Statistics, etc.) and then reading a bit on the basics of those fields. In addition to potentially uncovering useful ideas per se, this might also offer the opportunity to populate the textbooks resource list and engage in not-random acts of scholarship.
Everyone Split Up, There's a Lot of Ideosphere to Cover

A rough sketch of how I think the project will work follows. I'll proceed with this and tackle at least one or two subjects myself, as long as at least a few other people are interested in working on it too.
Step 1, Community Evaluation: Using All Our Ideas or similar, generate a list of fields to investigate.
Step 2, Sign-Up: People have the best sense of what they already know and their abilities, so at this point anyone that wants to can pick a subject that’s best for them to look into.
Step 3, Study: I imagine this will mostly involve self-directed reading of a handful of texts, watching some online videos, and maybe calling up one or two people -- in other words, nothing too dramatic. If a vein of something interesting is found, it’s probably better that it’s “marked” for further follow-up rather than further examined alone.
Step 4, Post: Some of these investigations will not reveal anything -- that's actually a good thing (explained below); for these, a short "Looked into it, nothing here" sort of comment should suffice. Subjects with bigger findings should get bigger, more detailed comments/posts.
Evaluation of Proposal

As a first step, I'll use a variation of the Heilmeier questions: an (admittedly idiosyncratic) mix of the original version and gregv's enhanced version.
- What are you trying to do? Articulate your objectives using absolutely no jargon.
Produce comments or posts providing very brief overviews of fields of knowledge, not previously discussed here, with notes pertaining to Less Wrong topics and interests.
- Who cares? How many people will benefit?
This post is partially an attempt to determine that, but there seems to be at least some interest in more variety on the site (see above). Additionally, the posts should be a good general resource for anyone that stumbles across them, and might even make good content for search purposes.
- Why hasn't someone already solved this problem? What makes you think what stopped them won't stop you?
The idea is roughly book club meets Wikipedia, but with an emphasis on creating a small evaluative body of knowledge rather than a massive descriptive encyclopedia, and with a LessWrong twist. The sharper focus should make the results more useful to go through than just hitting “random page” in yon encyclopedia.
- How much have projects like this cost (time equivalent)?
Some people have the ability to take on "whole fields of knowledge in mere weeks," but that's not typical. Investigating a subject here is roughly comparable in effort to taking an introductory class or two, which people without any previous training normally complete over three to four months at a pace that is not especially strenuous, with fairly light monetary costs beyond tuition/fees (which aren't applicable here).
- What are the midterm and final "exams" to check for success?
For each individual investigation, a good "midterm" check would be for the person looking into a field to have a list of resources or texts they're working from. The final "exam" is a post indicating whether anything useful or interesting was found, and if so, what.
- If y [this community search] fails to solve x [uncover useful knowledge in fields previously under-examined on LessWrong], what would that teach you that you (hopefully) didn't know at the beginning?
Quite possibly, this could be a good thing: it would indicate that the mix of topics on LessWrong is approximately right, and things can continue on. In that case, we'd end up with a bunch of short "nothing interesting here" comments and could rest more or less assured that further investigation in even more minute detail is unnecessary. This is conditional on not-terrible scholarship and a reasonably good priority list from step 1.
In Defense of Objective Bayesianism by Jon Williamson was mentioned recently in a post by lukeprog as the sort of book that people on Less Wrong should be reading. I have been reading it, and found some of it quite bizarre. One point in particular seems obviously false. If it's just me, I'll be glad to be enlightened as to what was meant. If collectively we don't understand it, that would be pretty strong evidence that we should read more academic Bayesian material.
Williamson advocates use of the Maximum Entropy Principle: take account of the limits that the empirical evidence places on your probabilities, and then choose the probability distribution closest to uniform that satisfies those constraints.
So, if asked to assign a probability to an arbitrary proposition A, you'd say p = 0.5. But if you were given evidence in the form of a constraint on p, say p ≥ 0.8, you'd set p = 0.8, as that is the new entropy-maximising level. Constraints are restricted to affine constraints. I found this somewhat counter-intuitive already, but I do follow what he means.
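For a single proposition this maximisation is easy to sketch numerically: entropy peaks at p = 0.5, so under an interval constraint the maxent choice is just 0.5 clipped into the allowed interval. A minimal illustration (function names are mine, not Williamson's):

```python
import math

def bernoulli_entropy(p):
    """Shannon entropy (in bits) of believing a proposition to degree p."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def maxent_prob(lower=0.0, upper=1.0):
    """Entropy-maximising p subject to lower <= p <= upper.

    Bernoulli entropy is concave with its peak at p = 0.5, so the
    maximiser over an interval is simply 0.5 clipped into [lower, upper].
    """
    return min(max(0.5, lower), upper)

# No evidence: the maxent assignment is the uniform one, p = 0.5.
assert maxent_prob() == 0.5
# Evidence constrains p >= 0.8: maxent picks the boundary, p = 0.8.
assert maxent_prob(lower=0.8) == 0.8
# Sanity check: 0.8 has higher entropy than other allowed values, e.g. 0.9.
assert bernoulli_entropy(0.8) > bernoulli_entropy(0.9)
```

This is only the one-proposition case; Williamson's full principle applies the same idea to a joint distribution over many propositions.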
But now for the confusing bit. I quote directly:
“Suppose A is ‘Peterson is a Swede’, B is ‘Peterson is a Norwegian’, C is ‘Peterson is a Scandinavian’, and ε is ‘80% of all Scandinavians are Swedes’. Initially, the agent sets P(A) = 0.2, P(B) = 0.8, P(C) = 1, P(ε) = 0.2, P(A & ε) = P(B & ε) = 0.1. All these degrees of belief satisfy the norms of subjectivism. Updating by maxent on learning ε, the agent believes Peterson is a Swede to degree 0.8, which seems quite right. On the other hand, updating by conditionalizing on ε leads to a degree of belief of 0.5 that Peterson is a Swede, which is quite wrong. Thus, we see that maxent is to be preferred to conditionalization in this kind of example because the conditionalization update does not satisfy the new constraints X’, while the maxent update does.”
p80, 2010 edition. Note that this example is actually from Bacchus et al (1990), but Williamson quotes it approvingly.
His calculation of the Bayesian update is correct; you do get 0.5. What's more, this seems intuitively to be the right answer: the update causes you to "zoom in" on the probability mass assigned to ε while maintaining the relative proportions inside it.
As far as I can see, you get 0.8 only if we assume that Peterson is a randomly chosen Scandinavian. But if that were true, the prior given is bizarre: for a randomly chosen individual, the prior should have been something like P(A & ε) = 0.16 and P(B & ε) = 0.04. The only way I can make sense of the prior is if constraints simply "don't apply" until they have p = 1.
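The arithmetic behind both answers is easy to check directly. A quick sketch (variable names are mine):

```python
# Priors from the quoted passage.
P_eps = 0.2          # P(epsilon): "80% of all Scandinavians are Swedes"
P_A_and_eps = 0.1    # P(A & epsilon) as given in the book's example

# Conditionalization: P(A | eps) = P(A & eps) / P(eps).
P_A_given_eps = P_A_and_eps / P_eps
assert P_A_given_eps == 0.5   # the answer Williamson calls "quite wrong"

# If Peterson were instead a randomly chosen Scandinavian, the prior
# would have to satisfy P(A & eps) = P(A | eps) * P(eps) = 0.8 * 0.2.
alt_P_A_and_eps = 0.8 * P_eps
assert abs(alt_P_A_and_eps - 0.16) < 1e-12
# ...and conditionalizing on that alternative prior recovers 0.8 directly.
assert abs(alt_P_A_and_eps / P_eps - 0.8) < 1e-12
```

So on the prior actually given, conditionalization yields 0.5, and 0.8 only falls out if the joint prior is changed to the "randomly chosen Scandinavian" one.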
Can anyone explain the reasoning behind a posterior probability of 0.8?