Robin Hanson tells us to have fewer opinions on things, to specialize and be agnostic about everything outside our field of expertise. This may be good advice, but most of us won't take it. We're too obsessed with being right about stuff, including things we didn't study. Whether or not being correct about something has any external use, we usually care either way.

Most of our beliefs are based on what other people say, so the key skill seems obvious: identifying which people's views to consider strong evidence. Mastering that means being on par with the greatest experts in every field – not in understanding of the field itself, but in accuracy of views on controversial topics.

Why is there so little talk about this? Maybe because it's controversial, or it could be a status thing. I know that, if I had to share a short list of the people who have a sizeable effect on my world-view, I'd be uncomfortable with at least one entry. But even if that's enough of a reason to avoid talking about conclusions, about which particular people are or aren't trustworthy, it shouldn't stop us from at least going meta.

So the second half of the post will be my first suggestion, a list of cues which I, on reflection, appear to follow, to determine whether or not to take another person's views seriously. I'm almost certainly missing important ones, most likely even some that I use myself but am not conscious of. Items are phrased as actions: the person to be evaluated does X. Cues 1-8 are positive and improve trustworthiness, cues 9-12 are negative – though in reality most of them represent a spectrum and could also be phrased in the opposite way.


1. Is internally consistent

2. Is aligned with things I'm already confident in

3. Brings up points that aren't obvious but make sense

4. Has high IQ

5. Uses implicitly or explicitly consequentialist arguments

6. Uses sentences that in isolation sound as sophisticated as necessary to make the point and not more

7. Performs an unusually low amount of visible signaling

8. Cites people I consider to be highly trustworthy


9. Assumes a high baseline of trustworthiness of other people based on their social status or academic degrees

10. Ascribes well-thought-out views to groups of people that haven't been selected through a plausible filter

11. Bases her judgment of other people largely on the degree to which their views agree with her own

12. Ascribes malicious intent to groups of people that haven't been selected through a plausible filter

And of course there is a prior of trustworthiness based on past actions which may dominate any other cue.
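The scheme described here, positive cues that push trust up, negative cues that push it down, and a prior that can dominate everything else, can be sketched as a toy log-odds model. All cue names and numeric weights below are made up for illustration; the post does not propose any quantitative scheme:

```python
# Toy sketch (hypothetical weights): combining trust cues in log-odds space.
import math

def trust_probability(prior_log_odds, cue_weights, observed_cues):
    """Add the weights of observed cues to a prior (in log-odds),
    then convert back to a probability with the logistic function."""
    log_odds = prior_log_odds + sum(cue_weights[c] for c in observed_cues)
    return 1 / (1 + math.exp(-log_odds))

# Illustrative weights: positive for cues 1-8, negative for cues 9-12.
cue_weights = {
    "internally_consistent": +0.5,    # cue 1
    "low_visible_signaling": +0.7,    # cue 7
    "trusts_status_blindly": -0.8,    # cue 9
    "ascribes_malice_broadly": -1.0,  # cue 12
}

# One positive and one negative cue observed, neutral prior:
p = trust_probability(0.0, cue_weights,
                      ["internally_consistent", "ascribes_malice_broadly"])
```

A strong prior from past actions corresponds to a large `prior_log_odds` term, which, as the post notes, can dominate any individual cue.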


Dumb question -- when you append "(bad)" to the end of an entry, do you mean that you don't endorse using that heuristic, even though you do in fact use it (which is why it's on the list)?

No, it means they make me update downward rather than upward. That's what I meant by saying 'cues are phrased positively unless stated otherwise'. Like, if someone doesn't seem to signal, that makes me think they're more trustworthy; if someone explains a phenomenon by lots of people being deliberately malicious, that makes me think they're less trustworthy.

Got it. Two more questions:

10. Overwhelming correlation between views about other people's beliefs and their intellectual honesty (bad)

1) What does #10 mean? (Whose intellectual honesty is being considered -- the expert you're evaluating, or other people whose beliefs the expert has views about?) Can you give an example of this one?

12. Prescribing malicious intent to groups of people that haven't been selected through a plausible filter (bad)

2) For #12, what do you mean about the filter -- do you mean: A) "Ascribing malicious intent to people, even though those people haven't been selected for maliciousness", or B) "Ascribing malicious intent to people because those people haven't been selected for non-maliciousness"?

(Also, I think you mean ascribing rather than prescribing for both #9 and #12.)


Yes, I do. Non-native speaker. Clearly there are more issues with clarity than I had thought, so thank you for this comment.

@12 I mean A). It seems clear to me that very few people are actually malicious, and that not being aware of that on some level is a signal for incompetence and low trustworthiness. An example here is believing that anyone who wants stronger borders must dislike certain cultures.

@10: other people the expert has views about. This is about doubting the sincerity of people who disagree with you on emotionally charged topics. Say X and Y are public people who disagree on the minimum wage, and say person A declares X, who happens to agree with her position, to be more honest. If A does this too much, that makes me update downward on her trustworthiness.

I have the word overwhelming in there because I think there can be a real correlation between holding one position on a question and being more honest, so if A just does this occasionally, that can be fine. But particularly if there are no exceptions, if people on A's team are consistently the good guys, that's a bad signal.

Intellectual honesty is a bad term here because it's too narrow. I'll look for something better.

Thanks for the clarification, makes sense to me now!

I liked this list more than I guessed I would. Check out Judgmental Bootstrapping and Schema Induction.

Can you give me a link to the Schema Induction Paper? I'm not sure which one you're referring to.

https://deepblue.lib.umich.edu/bitstream/handle/2027.42/25331/0000776.pdf;jsessionid=9FC75F4772E9B9EC75E78B42D66C9E16?sequence=1

One heuristic I learned is not to just adopt the opinion of one expert in the field, but to find out the consensus position of experts on that topic. Another is to take meta-analyses more seriously than individual scientific publications. These are both good heuristics, but the heuristic I learned in order to learn them was just to follow around people who collect good heuristics for matching the map to the territory. This is the rationality community. There are pieces of advice for scientific literacy which fall out of common sense, and which skeptics and science communicators tell the public, like not taking how the news reports the results of a scientific study at face value. But I haven't completed a university degree. If I hadn't found the rationality community, I'd never have known "initially anchor on expert consensus" or "look up meta-analyses over individual studies" were good heuristics for matching the map to the territory.

And this gets me thinking it's possible much of the rationality community is just people picking up heuristics for matching the map to the territory in an endless game of follow-the-follower. Our approach to improving our own epistemologies is to act like a school of fish. For all we know we could be a bunch of sophists who could never expect to independently recreate the reasoning which develops good heuristics for matching the map to the territory. It's certainly not *all* or maybe even most community members, but it certainly could be a lot of us. I fear I'd be in that group.

Of course I wouldn't discourage people from using the heuristics even if they don't fully understand them. If rationality is systematized winning, and we're a community, we're going to share systems for winning with each other. Some of us will be able to design systems from scratch, or have mastered the use of existing ones. This is instrumentally rational. But for those of us who feel like we're constantly borrowing others' systems without understanding them, individually developing our own epistemic rationality might be necessary for instrumental rationality and goal achievement later. If we won't always have a community of masters and designers willing to share their systems for winning, eventually we'll need to figure out how to systematically win from scratch. How do we do that?

I've been struggling a bit with a reply. I have a suspicion we're not quite talking about the same thing, have different underlying assumptions, or maybe you're just generalizing.

I consider what I describe here to be of pretty limited 'practical' value (where by 'practical' I mean having a benefit not directly based on feelings). I care about knowing whether the minimum wage is a good policy, to pick one example of the kind of question I had in mind with this post, but pretty much only for intellectual curiosity, and the same is true for most similar questions. For me, there's only really one entry in this category where having more accurate views has significant practical implications, and that's the question about AI risk. Here the implications are massive, but I don't think that one's difficult to get right.

If we go one level higher, to what I'll call abstract epistemology for the sake of this post – general mental skills for being more rational – I'd agree that those have much more practical implications, because you can probably optimize wasteful behaviors in your own life. That's where I'm on board with talking about systematic winning; but this post really isn't that, this is specifically about whether, if public person X says something about the minimum wage, that should change your view on the topic or not.

So you're raising an interesting point but it seems like it goes beyond what I've been talking about, right? Which isn't bad, I'm just trying to get clarity.

Similarly, it seems odd to me to describe LW as primarily, or to any significant degree, being about looking for heuristics to match m&t. Like, as I was saying in the post, it seems to me like there is barely any talk about that, people are either more abstract (-> sequences) or less abstract (-> particular theoretical arguments, mostly about AI or about charity; or practical advice about instrumental rationality).

Going by that, one obvious explanation for why there isn't talk about matching m&t in this way is the lack of practical value, but, given how much people seem to care, I don't buy that.

I'm not sure if the question you asked at the end was how to come up with the kind of cues on my list, but I'll describe how I arrived at #8, which should be a fine example. It seems pretty clear to me that the institution of academia is highly flawed, to the point that people can have successful careers while mostly saying things which, to a rational person, are obviously false. Experts disagree on basic questions, the process is inefficient, there is tons of wasteful signaling, and the world should look different if academia were working really well. Failure to recognize that seems to be a fairly reliable signal of incompetence, because it's something that's not often talked about and not mainstream, so you have to realize it yourself, and the most likely explanation for why you don't is that you're not significantly more competent than most people who constitute academia. My own very limited experience of working for people doing their PhDs confirms this. So I don't have a better reply than essentially picking up random valuable observations such as the above, which is what this list consists of.

If the question was how to have accurate views without such a system, I think the two heuristics you mentioned are solid (though, how do you go about figuring out what the scientific consensus on something is?). I'd also say, look at polls among really smart people. Like this and the SSC surveys. And insofar as they are applicable, prediction markets. But I wouldn't label either as systematic winning. On that front, I think the sequences are the most powerful tool, along with the books Inadequate Equilibria and The Elephant in the Brain. Those don't need to be re-invented.

What I was getting at was that coming up with systems for deciding whom to take seriously isn't a replacement for knowing, or learning, how to match the map to the territory independently. It was a tangent; I should've pointed that out. As you pointed out, it doesn't cause serious problems except in uncommon cases like working on AI alignment.

I consider what I describe here to be of pretty limited 'practical' value (where by 'practical' I mean having a benefit not directly based on feelings). I care about knowing whether the minimum wage is a good policy, to pick one example of the kind of question I had in mind with this post, but pretty much only for intellectual curiosity, and the same is true for most similar questions.

This feels like an indicator that we need to get more specific.

There are a few distinct things I can match to, "Picking an opinion on a topic".

There's the Social Instruction Manual Version. You want to be able to do the symbolic thing of Having a Conversation About Minimum Wage. Which side do you root for, what do you boo and yay, what sorts of words should you say to which people, etc.

There's adopting specific predictions with X amount of certainty. This would be listening to someone talk about minimum wage and going, "Okay, I now expect that if a minimum wage was put in place in Examplestan, XYZ consequences would happen with probability P." This could be "practical" (maybe you need to make decisions based off of this prediction), but it doesn't have to be.

Then there's adopting an attitude. I see an attitude as a large set of under-specified rules for making predictions about the subject of the attitude. (quote taken from the first page of googling "why should we raise the minimum wage?")

To be sure, increasing the minimum wage alone won’t solve the broader problems of wage stagnation and income inequality. We need to make greater investments in job training and strengthen labor protections, among other policies. But a higher minimum wage can provide an important lift to the 2.2 million Americans currently earning minimum wage and help tens of millions of other workers who earn a few dollars more than $7.25.

This doesn't really tell me how to make predictions about minimum wage related issues, but it does point me in a direction.

Roughly, I see Social Instruction Manuals as not super important, specific predictions as useful based on how much I'm interested in the topic, and attitudes as often too risky to adopt without a lot more thought. It seems important to make clear which one I'm after, because I'd use different rules for picking people to adopt different kinds of "opinions" from.

Which of these did you have in mind when writing this post? Or were you thinking of something different from my three options?

The map and the territory can't match by definition; they are qualitatively different. There are also many cases where you actually want a workable model rather than one with maximum accuracy.