Occasionally, concerns have been expressed within Less Wrong that the community is too homogeneous. The observation of homogeneity is certainly true to the extent that the community shares common views that are minority views in the general population.
Maintaining a High Signal-to-Noise Ratio
The Less Wrong community shares an ideology that it calls ‘rationality’ (despite some attempts to rename it, this is what it is). A burgeoning ideology needs a great deal of faithful support in order to develop true to itself. By this, I mean that the ideology needs a chance to define itself as it would define itself, without a lot of competing influences watering it down, adding impure elements, or distorting it. In other words, you want to cultivate a high signal-to-noise ratio.
For the most part, Less Wrong is remarkably successful at cultivating this high signal-to-noise ratio. A common ideology attracts people to Less Wrong, and karma is then used to maintain fidelity. It protects Less Wrong from the influence of outsiders who just don't "get it". It is also used to guide and teach people who are reasonably near the ideology but need some training in rationality. Thus, karma is awarded for views that align especially well with the ideology, that align reasonably well, or that align with one of the directions in which the ideology is reasonably evolving.
Rationality is not a religion – Or is it?
Therefore, on Less Wrong, a person earns karma by expressing views from within the ideology. Wayward comments are discouraged with downvotes. Sometimes, even, an ideological toe is stepped on, and the disapproval is more explicit. I’ve been told, here and there, one way or another, that expressing extremely dissenting views is stomping on flowers, showing disrespect, not playing along, being inconsiderate.
So it turns out: the conditions necessary for the faithful support of an ideology are not that different from the conditions sufficient for developing a cult.
But Less Wrong isn't a religion or a cult. It wants to identify and uproot illusion, not create a safe place to cultivate it. Somewhere, Less Wrong must be able to challenge its basic assumptions and see how they hold up against any and all evidence. You have to allow brave dissent.
- Outsiders who insist on hanging around can help by pointing to assumptions that are thought to be self-evident by those who "get it", but that aren’t obviously true, and which may be wrong.
- It’s not necessarily the case that someone challenging a significant assumption doesn’t get it and doesn’t belong here. Occasionally, someone with a dissenting view may represent the ideology better than the status quo does.
Shouldn’t there be a place where people who think they are more rational (or better than rational) can say, “hey, this is wrong!”?
A Solution
I am creating this top-level post for people to express dissenting views that are simply too far from the main ideology to be expressed in other posts. If successful, it would serve two purposes. First, it would move extreme dissent away from the other posts, thus maintaining fidelity there. People who want to play at “rationality” ideology can play without other, irrelevant points of view spoiling the fun. Second, it would allow dissent for those in the community who are interested in not being a cult: challenging first assumptions and suggesting ideas for improving Less Wrong without being traitorous. (By the way, karma must still work the same, or the discussion loses its value relative to the rest of Less Wrong. Be prepared to lose karma.)
Thus I encourage anyone (outsiders and insiders) to use this post “Dissenting Views” to answer the question: Where do you think Less Wrong is most wrong?
Certainly, the main ideological tenet of Less Wrong is rationality. However, other tenets slip in, some more justified than others. One tenet that I think needs a much more critical view is something I call "reductionism" (perhaps closer to Daniel Dennett's "greedy reductionism" than to what you may think of as reductionism). The denial of morality is perhaps one of the best examples of the fallacy of reductionism. Human morality exists and is not arbitrary. Intuitively, the denial of morality is absurd, and ideologies that lead to intuitively absurd conclusions should require extraordinary justification before we keep believing in them. In other words, you must be a rationalist first and a reductionist second.
First, science is not reductionist. Science doesn’t claim that everything can be understood by what we already understand. Science makes hypotheses and if the hypotheses don’t explain everything, it looks for other hypotheses. So far, we don’t understand how morality works. It is a conclusion of reductionism – not science – that morality is meaningless or doesn’t exist. Science is silent on the nature of morality: science is busy observing, not making pronouncements by fiat (or faith).
We believe that everything in the world makes sense. That everything is explainable, if only we knew enough and understood enough, by the laws of the universe, whatever they are. (As rationalists, this is the single fundamental axiom we should take on faith.) Everything we observe is structured by these laws, resulting in the order of the universe. Thus a falsehood may be arbitrary, but a truth, and any observation, will never be arbitrary, because it must follow these laws. In particular, we observe that there are laws, and order, over all spatial and temporal scales. At the atomic/molecular scale, we have the laws of physics (quantum mechanics). At the organism level, we have certain laws of biology (mutation, natural selection, evolution, etc.). At the meta-cognitive level, there will be order, even if we don’t perceive or understand that order.
Obviously, morality is a natural emergent property of sapience. (Since we observe it.) Perhaps it is not necessary… concluding necessity would require a model of morality that I don’t have. But imagine the space of all sapient beings over all time in the universe. Imagine the patterning of their respective moralities. Certainly, their moralities will be different from each other. (This I can conclude, because I observe differences even among human moralities.) However, it is no leap of faith, but just an application of the most important assumption, to expect that their moralities will also have certain features in common; they will obey certain laws and will evidence order, even if this is not readily demonstrated in a single realization. Our morality – whatever it is – is meant to be, is natural, and is without question obeying the laws of the universe.
By analogy with evolution (here I am departing from science and reverting to reductionism, trying to understand something in the context of what I do understand – the analogy doesn’t necessarily hold, and one must use one's intuition to estimate whether the analogy is reasonable), there may not be a unique emergent “best” morality, but it may be the case that certain moralities are better than others, just as some species are more fit than others. So instead of taking the existence of different moralities within humanity as evidence that morality is “relative” and arbitrary or meaningless, I see the variations as evidence that morality is something that is evolving, competing, striving even among humans to fit an idealized meta-pattern of morality, whatever it may be. Like all idealized abstractions, the meta-morality would be physically unattainable. The meta-morality itself could only be deduced by looking at the pattern of moralities (across sapient life forms would be most useful) and abstracting what is essential, what is meaningful.
It is a constant feature of life to want to live. Every species has an imperative to do its utmost to live; each species contributes itself to the experiment, participating in the demonstration of which aspects of life are most fit. Paradoxically, every species has an imperative to do its utmost to live even if it means changing from what it was. There is a trade-off between fighting to stay the same (and “winning” for your realization of life) and changing (a possible win for the life of the next species). Morality might be the same: we fight for our idea of morality (with a greater drive, not less, than our drive for life), but we will forfeit our own morality, willingly, for a foreign morality that our own morality recognizes as better. Morality wants to achieve this ideal morality that we see only the vaguest features of. (In complex systems, “wanting” means inexorably moving toward something, globally.)
I’m not always sure when one moral position is better than another – there seems to be plenty of gray at this local level of my understanding. However, some comparisons are quite clear. Affirming that morality exists is a more moral position than denying that it exists. Also, morality is not just doing what’s best for the community by facilitating cooperation: that explanation is needlessly reductionist. We can see this in the (abstract) willingness of moral people to sacrifice themselves – even in a total-loss situation – for a higher moral ideal. Morality is not transcendent, however; “transcendent” is an old word that has lost its usefulness. We can just say that morality is an emergent property. An emergent property of something. A month ago, I would have said intelligence, but I’m not sure. A certain kind of intelligence, surely. Social intelligence, perhaps – something that even ants possess, but not a paperclip AI.
[Later edit: I've convinced myself that a paperclip AI does have a morality, though a really different one. Perhaps morality is an emergent property of having a goal. Could you convince a paperclip AI not to make any paperclips if the universe would have more "paperclipness" without them? Maybe it would decide that everything being paperclips results in an arbitrary number, and that it would be a stronger statement to eradicate all paperclips...]
No, reductionism doesn't lead to the denial of morality. Reductionism only denies high-level entities the magical ability to directly influence reality, independently of the underlying quarks. It insists only that morality be implemented in quarks, not that it doesn't exist.