An existing social norm essentially says that everyone has the right to an opinion on anything, no matter how little they happen to know about the subject.
But what if we had a social norm saying that by default, people do not have the right to an opinion on anything? To earn such a right, they would have to have familiarized themselves with the topic. The familiarization wouldn't necessarily have to be anything very deep, but on controversial political issues, say, they'd have to have read at least a few books' worth of material discussing the question (preferably material from both sides of the political fence). On scientific questions requiring more advanced knowledge, they ought to at least have studied the field somewhat. Extensive personal experience with a subject would also be a way to become qualified, even without having studied the issue academically.
The purpose of this would be to enforce epistemic hygiene. Conversations on things such as public policy are frequently overwhelmed by loud declarations of opinion from people who, quite honestly, know nothing about the subject they have a strong opinion on. If we had in place a social norm demanding an adequate amount of background knowledge before anyone voiced an opinion they expected to be taken seriously, the signal-to-noise ratio might improve considerably. This kind of social norm does already seem to be somewhat in place in many scientific communities, but it would do good to spread it to the general public.
At the same time, there are several caveats. As a strong advocate of freedom of speech, I find it important to note that this must remain a *social* norm, not a government-promoted one or anything codified into law. Also, the standards must not be set *too* high - even amateurs should be able to engage in the conversation, provided they know at least the basics. Likewise, one must be careful that the principle isn't abused, with "you don't have a right to an opinion on this" becoming a generic argument for dismissing any opposing claim.
This is something I’ve thought a lot about. I’m worried about the consequences of certain negative ideologies present here on Less Wrong, but, actually, I feel that x-rationality, combined with greater self-awareness, would be the best weapon against them. X-rationality -- identifying facts that are true and strategies that work -- is inherently neutral. The way you interpret those facts (and what you use your strategies for) is the result of your other values.
Consider, to begin with, the near-tautology that 99.7% of the population is less intelligent than the top 0.3%, by some well-defined, arbitrary metric of intelligence. Suppose also that someone determined they were in that top 0.3%. They could feel any number of ways about this fact: completely neutral, for example, or loftily superior, or weightily responsible. Seen in this way, feeling contempt for "less intelligent" people is clearly the result of a worldview biased in some negative way, not a conclusion the fact itself forces.
Generally, humanity is so complex that however anyone feels about humanity says more about them than it does about humanity. Various forces (skepticism and despair; humanism and a sense of purpose) have been vying throughout history: rationality isn’t going to settle it now. We need to pick our side and move on … and notice which sides other people have picked when we evaluate their POV.
I always find it ironic, when 'rationalists' here on Less Wrong are especially misanthropic, that Eliezer wants to develop a friendly AI. Implicit in this goal -- built right in -- is the awareness that rationality alone would not induce the machine to be friendly. So why would we expect that a single-minded pursuit of rationality would not leave us vulnerable to misanthropic forces? Just as we would build friendliness into a perfectly logical, intelligent machine, we must build friendliness into our ideology before we let go of "intuition" and other irrational ways of "feeling" what is right, because they contain our humanism, which lies outside rationality.
We should not aim to be merely rational, because rationality by itself is neutral. Becoming more neutral without also becoming perfectly rational would leave us vulnerable to negative forces, and in any case, we want to be a positive force.