Expert opinion should be discounted when it could be predicted solely from information not relevant to the truth of the claims.
I can't see how to usefully determine that a given piece of information is not relevant to the truth of the claims. In some sense, everything is: I can predict someone's opinion on homeopathy by observing that they're a doctor. You could say that being a doctor is relevant to the truth of the claims (people who choose to become doctors rather than homeopaths make that choice because medicine works and homeopathy doesn't), but it's a rather indirect relevance.
- The heuristic only applies to non-moral fields.
- The heuristic assumes the field is sound. In an upcoming post, I'll talk about signs a field may have unsound bases, and what to expect there.
My initial reaction is: are there significant fields for which the advice is necessary (i.e., it's not obvious to most readers that experts are on the right track) and for which either of these caveats, let alone both, holds? A few examples of cases where you think your readers are incorrectly down-weighting expert opinion would help a lot.
Your examples aren't about fields but individuals, and the first two seem like morally relevant fields. That's fine, but specifying which questions and which levels of expertise differential you have in mind (and where the boundary lies at which it's neutral whether to defer or to think originally) would go a long way.
A good example: economics. While some of the work is tainted by ideology, most of it isn't, and it supports a few conclusions:
- Capitalism, combined with a government, is the best economic system in practice.
- Tariffs aren't good things, contra Trump.
One important part of the search for knowledge is knowing what, and whom, to defer to, since we must take much of what we know on trust in expertise. But how should you defer on a given issue, when entire fields could be badly wrong?
Here, I'll introduce some heuristics from Chris Hallquist that might help you defer better.
They can be used in the following ways:
- When an EA defers to a non-EA expert, or the movement as a whole defers to non-EA expertise.
- When a less knowledgeable EA defers to a more knowledgeable EA on something.
- When someone outside a field defers to an insider expert.
Now, before I begin, I want to list some caveats:
- The heuristic only applies to non-moral fields.
- The heuristic assumes the field is sound. In an upcoming post, I'll talk about signs a field may have unsound bases, and what to expect there.
- It's not a replacement for expected-value (EV) calculations.
- If you're in a field, or plan to work in a cause area, it's best to replace this heuristic with Emrik's post "The underappreciated value of original thinking below the frontier":
https://www.lesswrong.com/posts/KmkZriGwkn2vDx8gB/the-underappreciated-value-of-original-thinking-below-the
But let's begin.
Conclusion
What about selection bias?
Emrik raised a concern about deferring to experts: the most informed people in a field are also selected for believing that their field is sound. He makes the case in "The Paradox of Expert Opinion", linked below.
https://www.lesswrong.com/posts/S6Qcf5EgX5zAozTAa/the-paradox-of-expert-opinion
This is why it's so rare for the first, strongest condition to hold in practice. It's not impossible, but unless selection effects are controlled for, deferring to the most informed people is going to produce wrong results.
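To make the selection effect concrete, here's a toy simulation. The model and all its numbers are illustrative assumptions of mine, not taken from Emrik's post: a field is truly sound half the time, each person gets a noisy private read on it, and only people whose read says "sound" become experts. Surveying experts then overstates how often fields are sound.

```python
import random

random.seed(0)

# Toy model of the selection effect (all parameters are illustrative
# assumptions). A field is truly sound with probability P_SOUND; each
# person gets a noisy private read on it, and only people whose read
# says "sound" become experts in the field.
P_SOUND = 0.5
SIGNAL_ACCURACY = 0.7  # chance a person's read matches the truth
TRIALS = 100_000

experts = 0
experts_in_sound_fields = 0

for _ in range(TRIALS):
    field_is_sound = random.random() < P_SOUND
    read_is_correct = random.random() < SIGNAL_ACCURACY
    believes_sound = read_is_correct == field_is_sound
    if believes_sound:  # selection step: only believers join the field
        experts += 1
        experts_in_sound_fields += field_is_sound

# By construction, 100% of experts report their field is sound,
# but the share of experts whose field really is sound is lower.
print("Experts reporting 'sound': 100%")
print(f"Share of experts in actually-sound fields: "
      f"{experts_in_sound_fields / experts:.2f}")
```

With these numbers, the second share works out to roughly P(sound | read says sound), noticeably below the 100% a survey of experts would report; the gap only closes if you control for the selection step.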
So what's next? Hopefully this post serves as a useful resource, so that you can defer quite a bit better, and for better reasons, than you did before.