Robin Hanson tells us to have fewer opinions: to specialize, and to be agnostic about everything outside our field of expertise. This may be good advice, but most of us won't take it. We're too obsessed with being right about stuff, including things we didn't study. Whether or not being correct about something has any external use, we usually care anyway.
Most of our beliefs are based on what other people say, so the key skill seems obvious: identifying which people's views to treat as strong evidence. Mastering that would put you on par with the greatest experts in every field – not in understanding of the field itself, but in the accuracy of your views on its controversial questions.
Why is there so little talk about this? Maybe because it's controversial, or maybe it's a status thing. I know that if I had to share a short list of the people who have a sizeable effect on my world-view, I'd be uncomfortable with at least one entry. But even if that's reason enough to avoid talking about conclusions – about which particular people are or aren't trustworthy – it shouldn't stop us from at least going meta.
So the second half of this post is my first suggestion: a list of cues which, on reflection, I appear to follow when deciding whether or not to take another person's views seriously. I'm almost certainly missing important ones, most likely even some that I use myself without being conscious of them. Items are phrased as actions: the person to be evaluated does X. Cues 1-8 are positive and increase trustworthiness; cues 9-12 are negative – though in reality most of them represent a spectrum and could also be phrased the opposite way.
1. Is internally consistent
2. Is aligned with things I'm already confident in
3. Brings up points that aren't obvious but make sense
4. Has high IQ
5. Uses implicitly or explicitly consequentialist arguments
6. Uses sentences that in isolation sound as sophisticated as necessary to make the point and not more
7. Performs an unusually low amount of visible signaling
8. Cites people I consider to be highly trustworthy
9. Assumes a high baseline of trustworthiness of other people based on their social status or academic degrees
10. Ascribes well-thought-out views to groups of people that haven't been selected through a plausible filter
11. Bases her judgment of other people largely on the degree of agreement between her views and theirs
12. Ascribes malicious intent to groups of people that haven't been selected through a plausible filter
And of course there is a prior of trustworthiness based on past actions, which may dominate any other cue; a toy sketch of how these pieces might combine follows below.
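For concreteness, here is a toy sketch of how such cues might be combined into a single judgment – positive cues adding evidence, negative cues subtracting it, and a strong prior from past actions able to dominate everything else. The cue names, strengths, and weights are invented for the example; this is only an illustration, not a method anyone actually runs.

```python
# Toy illustration only: cue names, strengths, and weights are invented for
# this sketch; nothing here is a validated or formal scoring method.

POSITIVE_CUES = [
    "internally_consistent",                   # cue 1
    "aligned_with_confident_beliefs",          # cue 2
    "non_obvious_points_that_make_sense",      # cue 3
    "high_iq",                                 # cue 4
    "consequentialist_arguments",              # cue 5
    "appropriately_sophisticated_sentences",   # cue 6
    "low_visible_signaling",                   # cue 7
    "cites_trustworthy_people",                # cue 8
]

NEGATIVE_CUES = [
    "trusts_status_or_degrees",                # cue 9
    "ascribes_views_to_unfiltered_groups",     # cue 10
    "judges_others_by_agreement",              # cue 11
    "ascribes_malice_to_unfiltered_groups",    # cue 12
]


def trust_score(observed, prior=0.0, prior_weight=3.0):
    """Combine cue observations into a rough trust score.

    observed: dict mapping cue name -> strength in [0, 1]
    prior: prior trustworthiness from past actions, in [-1, 1]
    prior_weight: how strongly the prior can dominate the cues
    """
    score = prior_weight * prior
    score += sum(observed.get(cue, 0.0) for cue in POSITIVE_CUES)
    score -= sum(observed.get(cue, 0.0) for cue in NEGATIVE_CUES)
    return score


# Example: one positive cue, one negative cue, and a fairly strong prior.
print(trust_score(
    {"internally_consistent": 0.8, "judges_others_by_agreement": 0.4},
    prior=0.6,
))
```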
I've been struggling a bit with a reply. I suspect we're not quite talking about the same thing, that we have different underlying assumptions, or maybe that you're just generalizing.
I consider what I describe here to be of pretty limited 'practical' value (where by 'practical' I mean having a benefit not directly based on feelings). I care about knowing whether the minimum wage is a good policy – to pick one example of the kind of question I had in mind with this post – but pretty much only out of intellectual curiosity, and the same is true for most similar questions. For me there's really only one entry in this category where having more accurate views has significant practical implications, and that's the question of AI risk. There the implications are massive, but I don't think that one is difficult to get right.
If we go one level higher, to what I'll call abstract epistemology for the sake of this post – general mental skills for being more rational – I'd agree that those have much more practical import, because you can probably optimize wasteful behaviors in your own life. That's where I'm on board with talking about systematic winning. But this post really isn't that; it is specifically about whether, if public person X says something about the minimum wage, that should change your view on the topic or not.
So you're raising an interesting point, but it seems like it goes beyond what I've been talking about, right? Which isn't bad; I'm just trying to get clarity.
Similarly, it seems odd to me to describe LW as primarily, or to any significant degree, being about looking for heuristics to match map and territory (m&t). As I was saying in the post, it seems to me like there is barely any talk about that; people are either more abstract (-> sequences) or less abstract (-> particular theoretical arguments, mostly about AI or about charity; or practical advice about instrumental rationality).
Going by that, one obvious explanation for why there isn't talk about matching m&t in this way is the lack of practical value – but given how much people seem to care, I don't buy that.
I'm not sure whether the question you asked at the end was how to come up with the kind of cues on my list, but I'll describe how I arrived at #8, which should be a fine example. It seems pretty clear to me that the institution of academia is highly flawed, to the point that people can have successful careers while mostly saying things which, to a rational person, are obviously false. Experts disagree on basic questions, the process is inefficient, there is a ton of wasteful signaling, and the world would look different if academia were working really well. Failure to recognize this seems to be a fairly reliable signal of incompetence: it's not often talked about and not mainstream, so you have to realize it yourself, and the most likely explanation for why you don't is that you're not significantly more competent than most of the people who make up academia. My own very limited experience of working for people doing their PhDs confirms this. So I don't have a better reply than: essentially pick up random valuable observations such as the above, which is what this list consists of.
If the question was how to have accurate views without such a system, I think the two heuristics you mentioned are solid (though how do you go about figuring out what the scientific consensus on something is?). I'd also say: look at polls among really smart people, like this and the SSC surveys. And, insofar as they are applicable, prediction markets. But I wouldn't label either as systematic winning. On that front, I think the sequences are the most powerful tool, along with the books Inadequate Equilibria and The Elephant in the Brain. Those don't need to be re-invented.