The importance of cognitive psychology, neuroscience, and evolutionary psychology to rationalist projects is no secret. The Sequences begin with an introduction to cognitive biases and heuristics, for example. I am an amateur researcher of theoretical and philosophical psychology, currently preparing a series of blog posts intended to introduce the rationality community to some of the key insights from that field. As part of that project, I'm hoping to use a couple of studies as case studies to examine the processes by which the knowledge gained from them was produced and the philosophical and theoretical assumptions at play in the interpretation of their findings. In choosing which studies to use as case studies, I'm hoping to optimize for the following criteria.

1. Rationalist/science-approved methodologies were followed. The more confident the community would be in relying on the findings of the studies, the better.

2. Relevance to rationalist projects.

3. The "less likely to fall to the replication crisis" the better

4. The more important to the rationalist worldview, the better.

I don't know if I'll do all three yet (no promises), but I'm interested in choosing one study or finding from each of these three categories, each optimized for the above criteria.

1. Cognitive Experimental Psychology

2. Neuroscience

3. Evolutionary Psychology

Meta-analyses are alright, but I'm going to need specific studies for this project. 

 


I don't think that's a good model of how knowledge acquisition works. Reliable knowledge is built on the synthesis of multiple sources, not on a single study.

When it comes to biases and heuristics, there's still no good overview, written up within the rationalist community, of which findings are actually reliable.

That's great, but synthesis of multiple sources begins with single sources, and I'm trying to start at that level and build up. I'm not asking for a definitive source, just studies that rationalists think are important, supportive of their worldview, and reliable as far as single studies go.

If philosophy (and psychology) is supposed to be based on the findings of cognitive science (and not the other way around),
https://www.lesswrong.com/posts/vzLrQaGPa9DNCpuZz/against-modal-logics

if recursive justification hits rock bottom with a reflective process that relies on cognitive science,
https://www.lesswrong.com/posts/C8nEXTcjZb9oauTCW/where-recursive-justification-hits-bottom

and if reflection on the ways human brains are predictably irrational is the lens that sees its own flaws,
https://www.lesswrong.com/s/5g5TkQTe9rmPS5vvM/p/46qnWRSR7L2eyNbMA

then which cognitive science, neuroscience, and evolutionary psychology is that, and what studies exist to support it?

The Sequences were written before the replication crisis and uncritically repeat the claims that Kahneman made. Post-replication crisis, nobody in the rationality community cared enough about biases to go through the literature and write a summary of what we should believe based on the changing academic evidence about biases.

One thing you might look at are the papers cited in the CFAR handbook.

While a summary would be ideal, I'm really looking for any specific source from any of those fields that the community would deem important and relatively trustworthy. I'm going to be at least mildly critical in my analysis, so I want to be sure that I'm not just strawmanning. Whatever the strongest arguments and sources rationalists can provide from these fields, those are the ones I want to spend my time working with. I'm trying to narrow this down to single sources because I want to do a critical analysis of a single source as an example of the kind of critical analysis I'm going to be explaining how to do. The CFAR handbook is a good idea and I'll be looking through it for sources tomorrow. Thanks!

PS. Considering the importance of bias research to the rationalist worldview, can you try to help me understand why no one would care enough about it post replication crisis to get clear on what, specifically, the replication findings were calling into question? If bias research is less "foundational" to the rationalist worldview than I've been thinking, then what research is foundational, and what studies is that research informed by? Thank you for your help.

Considering the importance of bias research to the rationalist worldview, can you try to help me understand why no one would care enough about it post replication crisis to get clear on what, specifically, the replication findings were calling into question?

Laziness.

Never overestimate the degree to which pure, disinterested honesty can motivate people to do anything. A project to re-examine the entirety of the Sequences, with an eye toward pinpointing exactly which of the cited science holds up, is one which I have suggested and even built basic infrastructure for making progress on, but (unsurprisingly) no one has ever expressed any interest in contributing to something like this. It would, after all, be thankless, selfless work, the end result of which would be—what? Improved accuracy of your beliefs, and the beliefs of everyone in the community? Becoming less wrong about a lot of ostensibly-important matters? Such things do not motivate people to action.

EDIT: By the way, even setting aside the replication crisis, some (perhaps many? who knows!) of the citations in the Sequences are quite problematic.

People focus their research on the areas that are interesting to them, and this doesn't seem to be one of them.

I'm not sure that such a project would be thankless. If someone would want to do such a project and invest serious energy into it, I think there's a good chance that they could get a grant from https://www.lesswrong.com/posts/WAkvnzxvNfeTJL4BT/funds-are-available-to-support-lesswrong-groups-among-others or other sources for it. 

People focus their research on the areas that are interesting to them, and this doesn't seem to be one of them.

Well, yes, obviously, but just as obvious is the question: why? Why isn’t anyone interested in this, when it sure seems like it should be extremely important? (Or do you disagree? Should we not expect that the epistemic status of the work that we still base much of our reasoning on should be of great interest?)

As for the grant thing—fair point, that could make it worth someone’s time to do. Although the sort of money being offered seems rather low for the value I, at least, would naively expect this community to place on work like this. (Which is no slight against the grantmakers—after all, it’s not like they had this particular project in mind. I am only saying that the grants in question don’t quite measure up to what seems to me like the value of a project like this, and thus aren’t quite an ideal match for it.)

Although the sort of money being offered seems rather low for the value I, at least, would naively expect this community to place on work like this. 

If there were someone who's clearly capable of doing a good job at the task, I would expect that the project could find reasonable funding.

Well, yes, obviously, but just as obvious is the question: why? Why isn’t anyone interested in this, when it sure seems like it should be extremely important? (Or do you disagree? Should we not expect that the epistemic status of the work that we still base much of our reasoning on should be of great interest?)

I don't think I or most of the rationality community bases most of our reasoning on knowledge that we believe because someone made a claim in an academic paper. 

Take Julia Galef's recent work on the "Scout mindset". It's basically about the thesis that, whether or not those cognitive biases exist, teaching people about them won't make them more rational as long as they stay in "Soldier mindset".

There's CFAR, which built their curriculum by iterating a lot and looking at the effects of what they were doing, not primarily by believing that the academic knowledge is trustworthy. They used papers for inspiration, but they tested whether ideas actually work in practice in the workshop context.

Over at the Good Judgment Project, Tetlock didn't find that what makes good superforecasters is their knowledge of logical fallacies or cognitive biases, but rather a series of other heuristics.

There's a sense that fake frameworks are okay, so if some of what's in the Sequences is a fake framework, that's not inherently problematic. When doing research, it's generally good to have a theory of change and then focus on what's required.

It seems awfully convenient that Eliezer made all these claims, in the Sequences, that were definitely and unquestionably factual, and based on empirical findings; that he based his conclusions on them; that he described many of these claims as surprising, such that they shifted his views, and ought to shift ours (that is, the reader’s)… but then, when many (but how many? we don’t know, and haven’t checked) of the findings in question failed to replicate, now we decide that it’s okay if they’re “fake frameworks”.

Does it not seem to you like this is precisely the sort of attitude toward the truth that the Sequences go to heroic lengths to warn against?

(As for CFAR, that seems to me to be a rather poor example. As far as I’m aware, CFAR has never empirically validated their techniques in any serious way, and indeed stopped trying to do so a long time ago, after initial attempts at such validation failed.)

CFAR generally taught classes and then followed up with people. They didn't do that in a scientifically rigorous manner, and had no interest in collaborating with academics like Falk Lieder to run a rigorous inquiry, but that doesn't mean that their approach wasn't empirical. There were plenty of classes they ran in the beginning where, after doing them and looking at empirical feedback, they learned that they weren't a bad idea.

Does it not seem to you like this is precisely the sort of attitude toward the truth that the Sequences go to heroic lengths to warn against?

You might argue that the view of the Sequences is opposed to the one that's expressed in "fake frameworks," but the latter still seems to me to be popular right now.

I don't deny that there would be some value in someone going through and fact-checking all the Sequences, but at the same time I understand why that's nobody's Hemming problem.

Hemming problem?

I misspelled it and it should be "Hamming problem". See https://www.lesswrong.com/posts/P5k3PGzebd5yYrYqd/the-hamming-question


It looks like there is even less interest in checking the Sequences for philosophical correctness.

It's always seemed bizarre to me how disconnected from the philosophical discourse the Sequences are. They're a series of philosophical positions articulated in ways that make naming them, and thus exposing oneself to the counter-arguments to them and the ongoing discussions they are a part of, EXTREMELY difficult. If someone would just go through the Sequences, label the ideas with their philosophical names, and cite some of the people they are associated with in the larger philosophical discourse, it seems like a lot of the discussion here could be short-cut by simply exposing the community to the people who have already talked about this stuff.


If the technology is there, the motivation is presumably missing.

In the days of High Rationalism, the very idea that the Sequences would need fixing, or could be fixed by ordinary PhDs, would have been laughable.

I am not sure how one would do this, or what this would even mean. It’s not like anyone can agree on what “philosophical correctness” is; the Sequences, after all, contain various philosophical arguments, with which one may certainly disagree, but “checking” them for “correctness” seems like a dubious suggestion.

In contrast, checking to see if some study has replicated (or failed to do so), or whether some cited source even says what it’s claimed to say, etc., are tasks that can yield uncontroversial improvements in correctness.


If the correct metaphilosophy is that there is no way of assessing object level philosophical arguments, then any confident assertion of a philosophical claim is metaphilosophically wrong. And there are plenty of confident assertions of philosophical claims in the sequences. In Torture versus Dust Specks and the Free Will sequence, for instance, there is supposed to be a 100% correct answer, not an argument for your consideration.

What if we disagree on whether “the correct metaphilosophy is that there is no way of assessing object level philosophical arguments”?

Look, I’m not saying that the Sequences are, philosophically speaking, pure and without sin (I give the opposite of Eliezer’s answer in “Torture vs. Dust Specks”, and consider the free will sequence to be unpersuasive and confused). But suppose some other Less Wrong commenter disagrees with me; what then? We just get mired in philosophical arguments, right? Because that’s the only way to “resolve” these disputes: arguments. There’s nothing else to appeal to.

It’s just a fundamentally different situation, totally unlike the question of study replication or “does paper X actually mention topic Y at all” or anything else along these lines.


What if we disagree on whether “the correct metaphilosophy is that there is no way of assessing object level philosophical arguments”?

If the correct metaphilosophy is that you can assess object-level arguments... then you can assess the object-level arguments in the Sequences... contra your original claim.

But suppose some other Less Wrong commenter disagrees with me; what then?

Then you can't assess object level correctness. But you can still fix the overconfidence.

You have the option of rewriting the sequences to withdraw or tone down the disputed claims.

I think there's a general sense that descriptive frameworks alone don't help you improve your thinking. As a result, CFAR doesn't teach people a bunch of concepts about cognitive biases, but rather techniques for how to think. CFAR does list the academic research relevant to their exercises in their workbook, but those exercises don't just rest on reading the literature; they also rest on practical application.

Despite there now being academic work on boosting decision-making, the papers are not widely read in the rationality community.

Valentine's "In Praise of Fake Frameworks" seems to me widely accepted, and plenty of rationalists prefer useful frameworks and are okay with them not being supported at a ground level.

I just read "In Praise of Fake Frameworks" and I still feel like I'm being interpreted as asking a different question than the one I'm asking. If the frameworks are ultimately fake, then that's fine. I just want to know what the frameworks are and where they come from. I'm asking "Why do you believe what you believe?" and was expecting the answer to take the form of citations of cognitive/experimental psychology, neuroscience, and evolutionary psychology. Is that not the kind of answer I should be expecting?