Occasionally, concerns have been expressed from within Less Wrong that the community is too homogeneous. The observation of homogeneity is certainly true to the extent that the community shares views that are minority views in the general population.
Maintaining a High Signal-to-Noise Ratio
The Less Wrong community shares an ideology that it calls ‘rationality’ (despite some attempts to rename it, this is what it is). A burgeoning ideology needs a lot of faithful support in order to develop true to itself. By this, I mean that the ideology needs a chance to define itself as it would define itself, without a lot of competing influences watering it down, adding impure elements, or distorting it. In other words, you want to cultivate a high signal-to-noise ratio.
For the most part, Less Wrong is remarkably successful at cultivating this high signal-to-noise ratio. A common ideology attracts people to Less Wrong, and karma is then used to maintain fidelity. It protects Less Wrong from the influence of outsiders who just don't "get it". It is also used to guide and teach people who are reasonably near the ideology but need some training in rationality. Thus, karma is awarded for views that align especially well with the ideology, that align reasonably well, or that align with one of the directions in which the ideology is reasonably evolving.
Rationality is not a religion – Or is it?
Therefore, on Less Wrong, a person earns karma by expressing views from within the ideology. Wayward comments are discouraged with down-votes. Sometimes, even, an ideological toe is stepped on, and the disapproval is more explicit. I’ve been told, here and there, one way or another, that expressing extremely dissenting views amounts to stomping on flowers, showing disrespect, not playing along, being inconsiderate.
So it turns out: the conditions necessary for the faithful support of an ideology are not that different from the conditions sufficient for developing a cult.
But Less Wrong isn't a religion or a cult. It wants to identify and uproot illusion, not create a safe place to cultivate it. Somewhere, Less Wrong must be able to challenge its basic assumptions and see how they hold up against all the evidence, new and old. You have to allow brave dissent.
- Outsiders who insist on hanging around can help by pointing to assumptions that are thought to be self-evident by those who "get it", but that aren’t obviously true, and which may be wrong.
- It’s not necessarily the case that someone challenging a significant assumption doesn’t get it and doesn’t belong here. Occasionally, someone with a dissenting view may represent the ideology better than the status quo does.
Shouldn’t there be a place where people who think they are more rational (or better than rational) can say, “hey, this is wrong!”?
A Solution
I am creating this top-level post for people to express dissenting views that are simply too far from the main ideology to be expressed in other posts. If successful, it would serve two purposes. First, it would move extreme dissent away from the other posts, thus maintaining fidelity there. People who want to play at the “rationality” ideology can play without other, irrelevant points of view spoiling the fun. Second, it would allow dissent for those in the community who are interested in not being a cult, who want to challenge first assumptions and suggest ideas for improving Less Wrong without being traitorous. (By the way, karma must still work the same, or the discussion loses its value relative to the rest of Less Wrong. Be prepared to lose karma.)
Thus I encourage anyone (outsiders and insiders) to use this post “Dissenting Views” to answer the question: Where do you think Less Wrong is most wrong?
And if you cross the street, you might get hit by a car. This sort of reasoning goes on and on but never gets you anywhere. If you don't want to do something, you can always find a reason.
Sure, that doesn't mean that ANY reason is valid, or that EVERY objection is invalid. However, that too is another step in a chain of reasoning that never ends, and never results in taking action. Instead, you will simply wait and wait and wait for someone else to validate your taking action. So if you don't take status quo bias into consideration, then you have no protection against your own confabulation, because the only way to break the confabulation deadlock is to actually DO something.
Even if you don't know what the hell you're doing and try things randomly, you'll improve as long as there's some empirical way to measure your results, and the costs of your inevitable mistakes and failures are acceptable. Hell, look at how far evolution got, doing basically that. An intelligent human being can certainly do better... but ONLY by doing something besides thinking.
After all, the platform human thinking runs on is not reliable. In principle, LWers and I agree on this. But in practice, LWers argue as if their brains were reliable reasoning machines, instead of arguing machines!
I learned the hard way that my brain's confabulation -- "reasoning" -- is not reliable when it is not subjected to empirical testing. It works well for reducing evidence to pattern and playing with explanatory models, but it's lousy at coming up with ideas in the first place, and even worse at noticing whether those ideas are any good for anything but sounding convincing.
One of my pet sayings is that "amateurs guess, professionals test". But "test" in the instrumental world does not mean a review of statistics from double-blind experiments. It means actually testing the thing you are trying to build or fix, in as close to the actual use environment as practical. If my mechanic said statistics show it's 27% likely that the problem with my car is in the spark plugs, but didn't actually test them, I'd best get another mechanic!
The best that statistics can do for the mechanic is to mildly optimize what tests should be done first... but you could get almost as much optimization by testing in easiest-first order.
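To make the mechanic example concrete, here is a minimal sketch in Python, with entirely made-up fault names, probabilities, and costs. It compares the expected cost of two test schedules: the statistics-informed one (highest probability per unit cost first, the standard greedy rule for sequential independent tests) and plain cheapest-first.

```python
# Hypothetical fault-diagnosis sketch. Each candidate fault has a prior
# probability and a cost to test for it directly (all numbers invented).
faults = {
    "spark plugs": (0.27, 12),   # (prior probability, test cost)
    "battery":     (0.03, 2),
    "alternator":  (0.45, 25),
    "fuel pump":   (0.25, 50),
}

def expected_cost(order):
    """Expected total cost of testing in this order until the fault is found."""
    total, p_unfound = 0.0, 1.0
    for name in order:
        p, cost = faults[name]
        total += cost * p_unfound   # we pay for this test only if no
        p_unfound -= p              # earlier test already found the fault
    return total

# Statistics-informed order: highest probability per unit cost first.
by_ratio = sorted(faults, key=lambda n: faults[n][0] / faults[n][1], reverse=True)

# "Easiest-first" order: just run the cheapest tests first.
by_cost = sorted(faults, key=lambda n: faults[n][1])

print(by_ratio, expected_cost(by_ratio))   # ~43.3
print(by_cost, expected_cost(by_cost))     # ~43.6
```

With these invented numbers, cheapest-first lands within about one percent of the probability-per-cost schedule, which is the point: statistics can mildly optimize the order of the tests, but the information comes from actually running them.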
And where, pray tell, is this information going to come from if nobody tries anything? If the Extreme Rationalist position is only to try things that have been validated on other people first, what happens when everybody is an extreme rationalist? Doesn't sound very scalable to me.
The fact that one might get hit by a car when crossing the street does not prevent people from crossing the street; it causes them to actually look to see if cars are coming before crossing the street. Did you miss the part where I talked about how the evidence might convince someone the risk is acceptable in their case? Or how it might help them to compare it to anoth...