Sarokrae comments on Do people think Less Wrong rationality is parochial? - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
I have no grounding in cogsci/popular rationality, but my initial impression of LW was along the lines of "hmm, this seems interesting, but nothing seems that new to me..." I stuck around for a while and eventually found the parts that interested me (hitting rocky ground around the time I reached the /weird/ parts), but for a long while my impression was that this site had too high a ratio of rhetoric to actual content, and presented itself as more revolutionary than its content justified.
My (better at rationality than me) OH had a more extreme first impression, approximately: "These people are telling me nothing new, or vaguely new things that aren't actually useful, in a tone that suggests it's going to change my life. They sound like a bunch of pompous idiots." He also stuck around, though, and enjoyed reading the Sequences, which consolidated his existing ideas into concrete lumps of usefulness.
From these two limited points of evidence, I timidly suggest that although LW is pitched at generic rational folk, and contains lots of good ideas about rationality, the way things are written over-represents the novelty and importance of some of the ideas here, and may actively put off people who have good ideas about philosophy and rationality but treat them as "nothing big".
Another note: jumping straight into the articles helped neither of us, so it's probably a good idea to simplify navigation, as has already been mentioned, and to make the "About" page more prominent, since that gives a newcomer a good idea of what actually happens on this site - something that is quite non-obvious.
I think you hit the nail on the head. It seems to me that LW exhibits bracketing by rationality: there's a lower limit below which you don't find the site interesting, a range in which you see it as a rationality community, and an upper limit above which you would see its members as self-important pompous fools who are very wrong on a few topics and uninteresting on the rest.
Dangerously wrong, even. Progress in computing technology leads to new cures for diseases, and misguided advocacy of the great harms of such progress, by people with no understanding of the limitations of computational processes in general (let alone 'intelligent' processes), is not unlike anti-vaccination campaigning by people with no solid background in biochemistry. Donating to vaccine-safety research performed by someone without a solid background in biochemistry is not only stupid; it will kill people. Computer science is no different, now that it is used for biochemical research.

No honest, moral individual can go ahead and speak of the great harms of medically relevant technologies without first obtaining a very, very solid background, with a solid understanding of the boring fundamentals, and with independent testing of oneself (to avoid self-delusion) by doing something competitive in the field. Especially so when those concerns are not shared by more educated, knowledgeable, or accomplished individuals. The only way it could be honest is if one honestly believes oneself to be a lot, lot, lot smarter than the smartest people on Earth, and one can't honestly believe such a thing without either accomplishing something impressive that great numbers of the smartest people failed to accomplish, or being a fool.
Are you aware of another online community where people more rational than LWers gather? If not, any ideas about how to create such a community?
Also, if someone was worried about the possibility of a bad singularity, but didn't think that supporting SIAI was a good way to address that concern, what should they do instead?
Instrumental rationality, i.e. "winning"? Lots...
Epistemic rationality? None...
Tell SIAI why you don't support them, and thereby provide an incentive to change.
How did Eliezer create LW?
Popularizing ideas from contemporary cognitive science and naturalized philosophy seems like a pretty worthy goal in and of itself. I wonder to what extent the "Less Wrong" identity helps this (by providing a convenient label and reference point), and to what extent it hurts (by providing an opportunity to dismiss ideas as "that Less Wrong phyg"). I suspect the former dominates, but the latter might become more common.
Popularization is better without novel jargon though.
Unless there are especially important concepts that lack labels (or lack adequate labels).
For my part, when I discovered the LW material (via MoR, as it turns out, though I had independently found "Creating Friendly AI" years earlier and thought it "pretty neat"), I was thrilled to see a community that apparently understood, and took seriously, what I had long felt were uncommonly known but in truth rather intuitive facts about the human condition. I secretly hoped to find a site full of Nietzschean ubermenschen/frauen/etc. who were kind and witty and charming and effortlessly accomplished and vastly intelligent.
It wasn't quite like that, but it's still pretty neat.