Byrnema, you talk extensively in this post about the LW community having a (dominant) ideology, without ever really explicitly stating what you think this ideology consists of.
I'd be interested to know what, from your perspective, the key aspects of this ideology are. I think this would have two benefits:
(More generally, I think this is a great idea.)
that we seem more interested in esoteric situations than in the obvious improvements that would have the biggest impact if adopted on a wide scale.
Overall I think my views are pretty orthodox for LW/OB. But (and this is just my own impression) the LW/OB community seems to consider utilitarian values to be fundamentally rational. My own view is that goal values are truly subjective, so there isn't a set of objectively rational goal values, although I personally prefer utilitarianism.
I have two proposals (which happen to be somewhat contradictory) so I will make them in separate posts.
The second is that many participants here seem to see LW as being about more than helping each other eliminate errors in our thinking. Rather, they see a material probability that LW could become the core of a world-changing rationalist movement. This then motivates a higher degree of participation than would be justified without the prospect of such influence.
To the extent that this (perhaps false) hope may be underlying the motivations of community members, it would be good if we discussed it openly and tried to realistically assess its probability.
Where do you think Less Wrong is most wrong?
That it's not aimed at being "more right" -- which is not at all the same as being less wrong.
To be more right often requires you to first be more wrong. Whether you try something new or try to formulate a model or hypothesis, you must at minimum be prepared for the result to be more wrong at first.
In contrast, you can be "less wrong" just by doing nothing, or by being a critic of those who do something. But in the real world (and even in science), you can never win BIG -- and it's often hard to win at all -- if you never place any bets.
This is perhaps a useful distinction:
When it comes to knowledge of the world you want to be more right.
But when it comes to reasoning I do think it is more about being less wrong... there are so many traps you can fall into, and learning how to avoid them is so much of being able to reason effectively.
For a while I tutored middle school students in algebra. Very frequently, I heard things like this from my students:
"I'm terrible at math."
"I hate math class."
"I'm just dumb."
That attitude had to go. All of my students successfully learned algebra; not one of them learned algebra before she came to believe herself good at math. One strategy I used to convince them otherwise was giving out easy homework assignments--very small inferential gaps, no "trick questions".
Now, the "I'm terrible at math" attitude was, in some sense, correct. You could look at their grades and their standardized test scores and see that they were in the lowest quartile of their class. But when my students started seeing A's on their homework papers--when they started to believe that maybe they were good at math, after all--the difference in their confidence and effort was night and day. It was the false belief that enabled them to "take the first steps."
Hypothetical (and I may expand on this in another post):
You've been shot. Fortunately, there's a well-equipped doctor on hand who can remove the bullet and stitch you up. Unfortunately, he's got everything he needs except any kind of painkiller. The only effect of the painkiller would be on your (subjective) experience of pain.
A. He can say: "Look, I don't have any painkiller, but I'm going to have to operate anyhow."
B. He can hook up an opaque saline (or otherwise totally inert) IV, tell you it's morphine, and administer it to you.
Which do you prefer he does? Knowing what I know about the placebo effect, I'd have to admit I'd rather be deceived. Is this unwise? Why?
Admittedly, I haven't attained a false conclusion via my epistemology. It's probably wise to generally trust doctors when they tell you what they're administering. So it seems possible to want to have false belief, even while wanting to maintain efficient epistemology. This might not generalize to Pjeby's various theories, but it seems that we can think of at least one case where we would desire having a false belief. Admittedly, this might not be a decision we could make, i.e. "Lie to me about what's in that IV!" might not help. (Though there is some evidence of placebos working even when people were made fully aware they were placebos.)
On the other hand, I'm not sure I can think of an example of where we desire to have a belief that we know to be false, which may be the real issue.
The word "ideology" sounds wrong. One of the aims of x-rationality is building a general power to recognize correct ideas, as opposed to autonomously adhering to a certain fixed set of ideas.
It's the difference between an atheist fanatic who has a blind conviction in the nonexistence of God and participates in anti-theistic color politics, and a person who has a solid understanding of the natural world, and from this understanding concludes that a certain set of beliefs is ridiculous.
I have two proposals (which happen to be somewhat contradictory) so I will make them in separate posts.
The first is that the real purpose of this site is to create minions and funding for Eliezer's mad scheme to take over the world. There should be more recognition and consciousness of this underlying agenda.
This is an interesting and worthwhile idea, though TBH I'm not sure I agree with the premise.
The whole "rationality" thing provides more of a framework than a status quo. People who make posts like "Well, I'm a rationalist and a theist, so there! Ha!" do tend to get voted down (when they lack evidence/argument), but I hardly see a problem with this. This community strongly encourages people to provide supporting evidence or argumentation and (interestingly) seems to have no objections to extremely long posts/replies. I have yet to see a ...
I don't know if this actually counts as a dissenting opinion, since there seems to be a conclusion around here that a little irrationality is okay. But I published a post about the virtues of irrationality (modeled after Yudkowsky's twelve virtues of rationality), found here:
http://antisingularity.wordpress.com/2009/06/05/twelve-virtues-of-irrationality/
I suppose my attempt is to provide a more rational view by including irrationality but that is merely my opinion. I believe that there are good irrational things in the universe and I think that is a dissent...
I would say the direction I most dissent from Less Wrong is that I don't think 'rationality' is inherently anything worth having. It's not that I doubt its relevance for developing more accurate information, nor its potential efficacy in solving various problems, but if I have a rationalistic bent that is mainly because I'm just that sort of person - being irrational isn't 'bad', it's just - irrational.
I would say the sort of terms and arguments I most reject are those with normative-moral content, since (depending on your definition) I either do not beli...
I'm continually surprised that so many people here take various ideas about morality seriously. For me, rationality is very closely associated with moral skepticism, and this view seems to be shared by almost all the rationalist type people I meet IRL here in northern Europe. Perhaps it has something to do with secularization having come further in Europe than in the US?
The rise of rationality in history has undermined not only religion, but at the same time and for the same reasons, all forms of morality. As I see it, one of the main challenges for people...
One thing that came to mind just this morning: Why is expected utility maximization the most rational thing to do? As I understand it (and I'm a CS major, not an Econ major), prospect theory and the utility-function weighting it uses are usually accepted as how most "irrational" people make their decisions. But this might not be because they are irrational but rather because our utility functions do actually behave that way, in which case we should abandon EU and just try to maximize well-being with all the quirks PT introduces (such as loss being more...
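The contrast the comment is gesturing at can be sketched in a few lines. This is a minimal illustration, not a full prospect-theory model: it uses only Tversky and Kahneman's (1992) value function with their fitted parameters (alpha = 0.88, lambda = 2.25), omits the probability-weighting component, and the 50/50 gamble is made up for the example.

```python
def expected_value(outcomes):
    """Risk-neutral expected utility with u(x) = x.

    outcomes: list of (probability, payoff) pairs."""
    return sum(p * x for p, x in outcomes)

def pt_value(outcomes, alpha=0.88, lam=2.25):
    """Prospect-theory-style value: payoffs are coded as gains or losses
    relative to a reference point (here 0), gains are diminished by the
    power alpha, and losses loom larger by the factor lam."""
    total = 0.0
    for p, x in outcomes:
        v = x ** alpha if x >= 0 else -lam * ((-x) ** alpha)
        total += p * v
    return total

# A 50/50 gamble: win $100 or lose $100.
gamble = [(0.5, 100.0), (0.5, -100.0)]

print(expected_value(gamble))  # 0.0 -- an EU maximizer with linear utility is indifferent
print(pt_value(gamble))        # negative -- loss aversion makes the same gamble unattractive
```

The point of the sketch is the comment's question in miniature: the two valuations disagree about the same gamble, and whether that disagreement is a bias to be corrected or a preference to be respected is exactly what's at issue.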
"Where do you think Less Wrong is most wrong?"
I don't know where Less Wrong is most "wrong" - I don't have a reliable conclusion about this, and moreover I don't think the Less Wrong community uniformly accepts any single set of statements - but I can certainly say this: some posts (and sometimes comments) introduce jargon (e.g. Kullback-Leibler distance, utility functions, priors, etc.) for not very substantial reasons. I think people sometimes have a little urge to show off and show the world how smart they are. Just relax, okay? We all kno...
I think the group focuses too much on epistemic rationality - and not enough on reason.
Epistemic rationality is one type of short-term goal among many - whereas reason is the foundation-stone of rationality. So: I would like to see less about the former and more about the latter.
I like this idea. I don't really have anything to contribute to this thread at the moment, though.
Seems along the same lines as the "closet thread" but better.
I read LW for a few months but I haven't commented yet. This looks like a good place to start.
There are two points in LW community that seem to gravitate towards ideology IMHO:
Anti-religion. Some people hold quite rational religious beliefs, which seem to be a big no-no here.
Pro-singularity. Some other people consider Singularity merely a "sci-fi fantasy" and I have an impression that such views, if expressed here, would make this community irrationally defensive.
I may be completely wrong though :)
But conducting a controlled experiment and quantifying the result, instead of just going by anecdotal evidence about what worked for whom, really is necessary.
Necessary for determining true theories, yes. Necessary for one individual to improve their own condition, no. If a mechanic uses the controlled experiment in place of his or her own observation and testing, that is a major fail.
"Just try my things!" you say,
I've been saying to try something. Anything. Just test something. Yes, I've suggested some ways for testing things, and some things to test. But most of them are not MY things, as I've said over and over and over.
At this point I've pretty much come to the conclusion that it's impossible for me to discuss anything related to this topic on LW without this pervasive frame that I am trying to convince people to "try my things"... when in fact I've bent over backwards to point as much as possible to other people's things. Believe it or not, I didn't come here to promote my work or business.
I don't care if you test my things. They're not "my" things anyway. I'm annoyed that you think I don't understand science, because it shows you're rounding to the nearest cliche.
I actually advocate using a much higher standard of empirical testing of change techniques than is normally used in measuring psychological processes: observation of somatic markers (see Wikipedia re: the "somatic marker hypothesis", if you haven't previously).
Unlike self-reporting via questionnaire, many somatic markers can be treated as objective measures of results, because they are externally visible (facial expressions, posture change, etc.) and thus can be observed and measured by third parties. We can all agree whether someone flinches or grimaces or hangs their head in response to a statement -- we are not dependent on the person themselves to tell us their internal reaction, nor do we have to sort through their conscious attempts to make their initial reaction look better.
True, I do not have a quantified scale for these markers, but it is nonetheless quantifiable -- and it's a direct outgrowth of a promising current neuroscience hypothesis. We can certainly observe whether a response is repeatable, and whether it is changed by any given intervention.
If someone wanted to turn that into controlled science, they'd have a lot of work ahead of them, but it could be done, and it would be a good idea. The catch, of course, is that you'd need to validate a somatic marker scale against some other, more subjective scale that's already accepted, possibly in the context of some therapy that's also relatively-validated. It seems to me that there are some chicken-and-egg problems there, but nothing that can't be done in principle.
When I advocate that people try things, I mean that they should employ more-objective means of measurement -- and on far-shorter timescales -- than are traditionally used in the self-help field.
When I test some newfangled self-help modality (e.g. EFT, Sedona, etc.) it usually doesn't take more than 30 minutes after learning the technique to know if it's any good or not, because I have a way of measuring it that doesn't depend on me doing any guessing. Either I still flinch or I don't. Either I get a sinking feeling in my gut or I don't. I know right then, in less time than it would take to list all the holes in their crazy pseudoscience theories about how the technique is supposed to work. (EFT, for example, works for certain things but its theory is on a par with Anton Mesmer's theory of animal magnetism.)
I don't know how you can get any more objective than that, at the level of individual testing. So if there is anything that I've consistently advocated here, it's that it's possible to test self-help techniques by way of empirical observation of somatic marker responses both "before" and "after". But even this is not "my" idea.
The somatic marker hypothesis is cutting-edge neuroscience -- it still has a long way to go to reach the status of accepted theory. That makes using it as a testing method a bit more bleeding edge.
But for individual use, it has the advantage of being eminently testable.
Regarding the rest of your comment, I don't see how I can respond, since as far as I can tell, you're attacking things I never said... and if I had said them, I would agree with your impeccable critique of them. But since I didn't say them... I don't see what else I can possibly say.
Occasionally, concerns have been expressed from within Less Wrong that the community is too homogeneous. Certainly the observation of homogeneity is true to the extent that the community shares common views that are minority views in the general population.
Maintaining a High Signal to Noise Ratio
The Less Wrong community shares an ideology that it calls 'rationality' (despite some attempts to rename it, this is what it is). A burgeoning ideology needs a lot of faithful support in order to develop true to itself. By this, I mean that the ideology needs a chance to define itself as it would define itself, without a lot of competing influences watering it down, adding impure elements, distorting it. In other words, you want to cultivate a high signal to noise ratio.
For the most part, Less Wrong is remarkably successful at cultivating this high signal to noise ratio. A common ideology attracts people to Less Wrong, and then karma is used to maintain fidelity. It protects Less Wrong from the influence of outsiders who just don't "get it". It is also used to guide and teach people who are reasonably near the ideology but need some training in rationality. Thus, karma is awarded for views that align especially well with the ideology, align reasonably well, or align with one of the directions in which the ideology is reasonably evolving.
Rationality is not a religion – Or is it?
Therefore, on Less Wrong, a person earns karma by expressing views from within the ideology. Wayward comments are discouraged with down-votes. Sometimes, even, an ideological toe is stepped on, and the disapproval is more explicit. I’ve been told, here and there, one way or another, that expressing extremely dissenting views is: stomping on flowers, showing disrespect, not playing along, being inconsiderate.
So it turns out: the conditions necessary for the faithful support of an ideology are not that different from the conditions sufficient for developing a cult.
But Less Wrong isn't a religion or a cult. It wants to identify and uproot illusion, not create a safe place to cultivate it. Somewhere, Less Wrong must be able to challenge its basic assumptions, and see how they hold up to any and all evidence. You have to allow brave dissent.
Outsiders who insist on hanging around can help by pointing to assumptions that are thought to be self-evident by those who "get it", but that aren’t obviously true. And which may be wrong.
It’s not necessarily the case that someone challenging a significant assumption doesn’t get it and doesn’t belong here. Maybe, occasionally, someone with a dissenting view may be representing the ideology more than the status quo.
Shouldn’t there be a place where people who think they are more rational (or better than rational), can say, “hey, this is wrong!”?
A Solution
I am creating this top-level post for people to express dissenting views that are simply too far from the main ideology to be expressed in other posts. If successful, it would serve two purposes. First, it would move extreme dissent away from the other posts, thus maintaining fidelity there. People who want to play at "rationality" ideology can play without other, irrelevant points of view spoiling the fun. Second, it would allow dissent from those in the community who are interested in not being a cult: challenging first assumptions and suggesting ideas for improving Less Wrong without being traitorous. (By the way, karma must still work the same, or the discussion loses its value relative to the rest of Less Wrong. Be prepared to lose karma.)
Thus I encourage anyone (outsiders and insiders) to use this post “Dissenting Views” to answer the question: Where do you think Less Wrong is most wrong?