Long time lurker, first time poster here.
My general impression of the general impression of LW is that it's an Eliezer Yudkowsky fanclub. Essentially, if you ask anyone what they think of Eliezer Yudkowsky, you'll know what they think of LW - which is unfortunate, because lots of people seem to think EY is "full of hot air" or "full of himself" or "a self-important wanker", and this maps on to their attitude about LW.
I am a counterexample. I think Eliezer is a self-important wanker, but I have a favorable view of LW as a whole. I agree that I might be rare. I also wouldn't describe myself as a "part of the LW community." I think I attended a total of 1 meetup.
honest people can't stay self-deluded for very long.
This is surely not true. Lots of wrong ideas last a long time beyond when they are, in theory, recognizably wrong. Humans have tremendous inertia to stick with familiar delusions rather than replace them with new notions.
Consider any long-lived superstition, pseudoscience, etc. To pick an uncontroversial example, astrology. There were very powerful arguments against it going back to antiquity, and there are believers down to the present. There are certainly also conscious con artists propping up these belief structures -- but they are necessarily the minority of purported believers. You need more victims than con artists for the system to be stable.
People like Newton and Kepler -- and many eminent scientists since -- were serious, sincere believers in all sorts of mystical nonsense -- alchemy, numerology, and so forth. It's possible for smart, careful people to persistently delude themselves -- even when the same people, in other contexts, are able to evaluate evidence accurately and form correct conclusions.
For what it's worth, I think Eliezer is a very bright person who has built a serious fanclub that reinforces all of his existing views, and has thus cemented a worldview that can casually brush off all negative feedback because "my fanclub says I'm right / I'm smarter than them."
This maps quite well to my view of LessWrong as a whole - there's a strong bias to accept affirmations of belief and reject contrary viewpoints. My gut reaction is that the standards of evidence are significantly different for the two categories.
The Sequences strike me as a reasonably large accomplishment, which is why I consider Eliezer bright. I haven't seen another example of someone successfully cultivating a large body of easily accessed rationality information. It's written in such a way that you don't need a ton of grounding to get involved in it, it's available for free online, and it covers a wide variety of topics.
If I'm missing something, please point me towards it, of course!! :)
I've tried to get a family member to read parts of the Sequences in hopes of getting to a point where we could resolve a long-standing disagreement, but they don't show much interest in it.
In my experience, it works far better to internalize the message of a text and then communicate the pieces of that message that are relevant to my discussions with people as they come up, than to point people to the text.
Of course, it's also a lot more work, and not all discussions (or relationships) are worth it.
I don't think "parochial" is the right word here -- a more accurate term for what you're describing would be "contrarian."
In any case, insofar as there exists some coherent body of insight that can be named "Less Wrong rationality," one of its main problems is that it lacks any really useful methods for separating truth from nonsense when it comes to the output of contemporary academia and other high-status intellectual institutions. I find this rather puzzling: on the one hand, I see people here who seem seriously interested in forming a more accurate view of the world -- but at the same time, living in a society that has vast, powerful, influential, and super-high-status official intellectual institutions that deal with all imaginable topics, they show little or no interest in the question of what systematic biases and perverse incentives might be influencing their output.
Now, the point of your post seems to be that LW is good because its opinion is in line with that of these high-status institutions. (Presumably thanks to the fact that both sides have accurately converged onto the truth.) But then what exactly makes LW useful or worthwhile in a...
Vladimir_M, what makes you think that elite universities have the desire and money/power to proselytize their "output"?
Mencius Moldbug has convincingly argued on his blog that intellectual fashion among the ruling class follows intellectual fashion at Harvard by an offset of about one generation. A generation after that, the judicial and journalist class exiles any opposition to such thought from public discourse, and most educated people move significantly towards it. A generation after that, through public schools and by-then-decades-long exposure to media issuing normative statements on the subject, such beliefs are marginalized even among the uneducated, making any populist opposition to society-wide stated value or policy changes a futile gesture destined to live only one season.
It is indeed a wonderful machine for generating political power through opinion in Western-type societies. While I generally have no qualms about Harvard being an acceptable truth-generation machine when it comes to, say, physics, in areas where it has a conflict of interest -- like, say, economics or sociology, let alone political science or ethics -- it is not a reliable truth-generating mac...
I've spent so much time in the cogsci literature that I know the LW approach to rationality is basically the mainstream cogsci approach to rationality.
Is this the opinion of the cogsci experts, as well? If not, then either it is not true, or you have a communication problem.
(My personal feeling, as a complete non-expert, is that, once you shed the FAI/cryonics/MWI fluff (and possibly something about TDT/UDT, though I know next to nothing about that), and align the terminology with the mainstream, there is nothing parochial about "modern rationality". If anything, there is probably enough novel stuff in the sequences for a graduate degree or two.)
once you shed the FAI/cryonics/MWI fluff
I would amputate exactly these. I doubt, however, that the site community as it stands could survive with those gone.
There is at least a non-negligible minority (or a silent majority?) of those who would retroactively call it an improvement if your wish were granted by some magic measure.
Even though I do think decoherence-based MWI is a better model than Copenhagen non-interpretation, it doesn't look like there are any new arguments in support or against it on LW anyway.
But given that LW is run mostly by SingInst people, and they do believe in the possibility of FOOM, there is no reason for FAI to become off-topic on LW. Most of the time it is easy to recognize from the thread title, so it is easy for us to participate only in those discussions that interest us.
I have no grounding in cogsci/popular rationality, but my initial impression of LW was along the lines of "hmm, this seems interesting, but nothing seems that new to me..." I stuck around for a while and eventually found the parts that interested me (hitting rocky ground around the time I reached the /weird/ parts), but for a long while my impression was that this site had too high a rhetoric-to-content ratio, and presented itself as more revolutionary than its content justified.
My (better at rationality than me) OH had a more extreme first impression of approximately "These people are telling me nothing new, or vaguely new things that aren't actually useful, in a tone that suggests that it's going to change my life. They sound like a bunch of pompous idiots." He also stuck around though, and enjoyed reading the sequences as consolidating his existing ideas into concrete lumps of usefulness.
From these two limited points of evidence, I timidly suggest that although LW is pitched at generic rational folk, and contains lots of good ideas about rationality, the way things are written over-represents the novelty and importance of some of the ideas here, and may...
Are you aware of another online community where people more rational than LWers gather? If not, any ideas about how to create such a community?
Also, if someone was worried about the possibility of a bad singularity, but didn't think that supporting SIAI was a good way to address that concern, what should they do instead?
That depends a lot on what "Less Wrong rationality" is understood to denote.
There's a lot of stuff here I recognized as mainstream cogsci when I read it.
There's other stuff that I don't consider mainstream cogsci (e.g. cryonics advocacy, MWI advocacy, confident predictions of FOOMing AI).
There's other stuff that drifts in between (e.g., the meta-ethics stuff is embedded in a fairly conventional framework, but comes to conclusions that are not clearly conventional.... though at times this seems more a fact about presentation than content).
I can accept the idea that some of that stuff is central to "LW rationality" and some of it isn't, but it's not at all obvious where one would draw the line.
My first impression of lesswrong was of a community devoted to pop science, sci-fi, and futurism. Also, around that time singularitarianism was getting a bad name for good reasons (though that was the Kurzweil kind, d'oh), and so I closed the tab thinking I wasn't missing anything interesting. It wasn't until years later, when I was getting tired of the ignorance and arrogance of the skeptic community, that I found my way back to lesswrong via a linked post that showed careful, honest thinking.
It would be a good idea to put up a highly visible link on the front page addressing new visitors' immediate criticisms. For example:
Another thing: the layout of this site will take people more than ten seconds to grok, which is enough to make most people just leave. For instance, I'd rename 'discussion' to 'forum' and 'main' to 'rationality blog' or just 'blog'.
I'd rename 'discussion' to 'forum' and 'main' to 'rationality blog' or just 'blog'.
This is a great idea.
Be who you are
Keep your (status quo) identity small, don't be who you are, strive to be who you should be.
IME, skeptics seem to like the stuff on cognitive biases and how not to be stupid. The other local tropes, they take or leave, mostly leave. (Based on anecdotal observation of RationalWiki and elsewhere in the skepticsphere.)
I thought Less Wrong-style rationality was parochial, or basically "Eliezer's standard". I might have done better to apply this quote from the Quantum Physics Sequence elsewhere:
Many of these ideas are surprisingly conventional, and being floated around by other thinkers. I'm a good deal less of a lonely iconoclast than I seem; maybe it's just the way I talk.
Only how was I to know it is more general?
When I read this, my thinking goes off in a couple of different directions. On the one hand, my impression is that there's been a bit of a dustup in the literature between the heuristics-and-biases tradition and the evo-psych folks, and that furthermore LW tends to go with the heuristics-and-biases tradition, whereas I find what Steven Pinker, along with Samuels and Stich, have written about that issue more persuasive.
But that may be more specific than what you have in mind. Because I've also been thinking lately that there's way too little reflection about ...
When I first encountered Less Wrong, two or three years ago, I would have agreed with the Oaksford & Chater quotation and would have found it completely mainstream. The intellectual paradigm of my social circles was that one needed to be self-consistent in one's worldview, and beyond that there was room for variation, especially as people would have a lot of different experiences swaying them one way or another.
I thought Less Wrong was extremely, iconoclastically, over-confident in its assertion that people should or must be atheists to be rational. S...
My experience is that plenty of people view the Less Wrong approach to rationality as parochial, but I suspect that if most of these same people were told that it's largely the same as the mainstream cogsci approach to rationality, they would conclude that the mainstream cogsci approach to rationality is parochial.
How wide an audience are you concerned with here?
Much of the rationality content may, to a significant extent, be a subset of the standard material, but it has important omissions -- in areas like game theory, for instance -- and, much more importantly, significant misapplications, such as treating the theoretically ideal approaches given infinite computing power as the ideal, and regarding as the best attempt the approximations to them which are grossly sub-optimal on limited hardware, where different algorithms have to be employed instead. One also has to understand that in practice computations have cost, and any form of fuzzy reasoni...
I would turn this around: what core part of Less Wrong is actually novel? The sequences seem to be popularizations of various people's work. The only thing unique to the site seems to be the eccentricity of its choice of topics/examples (most cog sci people probably don't think many-worlds quantum mechanics is pedagogically useful for teaching rationality).
There also appears to be an unspoken contempt for creating novel work. Lots of conjecture that such-and-such behavior may be signaling, and such-and-such belief is a result of such-and-such bias, with little discussion of how to formalize and test the idea.
I sometimes think a quote I've heard in reference to Wolfram's "A New Kind of Science" might apply equally well to the sequences:
Much that is new, much that is true, and very little overlap between the two.
This sort of far-mode thinking is usually [1] evidence of an attempt to signal not-"Straw Vulcan Rationality" while simultaneously earning warm fuzzies in those possible worlds in which [DELETED] (ed. Explaining the reason for this edit would either reveal excessive information about the deleted content or require mentioning of true ideas which are considered abhorrent by mainstream society.) and is ultimately the result of having a brain which evolved to have hypocritical akrasia regarding skepticism and to guess the teacher's password [2].
[1] p(parent post is mere signalling | p-zombie Mary in a Chinese room would claim that "semantic stop-signs are red" is a map-territory-map-mapitory confusion) = .7863, but I may have performed an Aumann update with a counterfactual-me who generalized from fictional fictional-evidence.
[2] The password is Y355JE0AT15A0GNPHYG.
Before coming across Less Wrong, I wasn't really aware of rationality as a community or a lifestyle - I don't think there's anything like that where I live, though there are small but reasonably strong skeptic and atheist communities - so I don't think I'm necessarily able to answer the question you're asking. I will say that some of the local beliefs - Singularity, transhumanism, cryonics - are certainly a bit alien to some people, and may undermine people's first impression of the site.
Can anyone share a copy of Oxford Handbook of Thinking and Reasoning? I'd like to read more about how "LW rationality" compares to "mainstream cogsci rationality."
No one LW position is all that parochial. It's taking them all seriously simultaneously that is considered weird. You aren't supposed to really believe in all these words. Words are for status, what are you a bunch of nerds?
Self-aggrandizing tribalistic straw-manning. Currently upvoted to +5. If the upvotes are meant to be amusingly ironic, come home, all is forgiven.
Could anyone who is familiar with the modern cogsci literature comment on the current relevance of Bermudez's Introduction to Cognitive Science? I mean this one: http://www.amazon.com/Cognitive-Science-An-Introduction-Mind/dp/0521708370 There was a fascinating mega-post on it at commonsenseatheism.
Why doesn't Yudkowsky publish the sequences--at least as an e-book? One likely reason: it would require extensive editing, not only to eliminate the inconsistencies that have arisen but, more so, to eliminate the prolix prose and filler that might make the postings entertaining to read on a blog (or which he just didn't take the time to cut), but which make for tedious sustained reading. A thorough rewrite would make a real contribution; Yudkowsky has a lot to say--but not that much.
Contradiction much?
No. I dislike repeating myself:
I am fairly certain the reason creationism is still around as a political force in some US states is because creationism is not a serious threat to The Cathedral.
But the following part of your response amused me and furthermore provoked some thought on the topic of conspiracy theories, so have a warm fuzzy.
Let's at least be consistent about our conspiracy theories ...
I am not quite sure what you mean with that phrase. Can you please clarify?
And finally, it is a convenient tool for painting something, clearly and in vivid colours, as low status; it is a boo light applied to any explanation that has people acting in anything that can be described as self-interest and is a few inferential jumps away. One could argue this is the primary meaning of calling an argument a "conspiracy theory" in online debates.
I'm going to be generous and assume that this last meaning wasn't the primary intended one since you have since edited the line out of your reply.
Tying the content of the linked post back to our topic, I will admit Moldbug shows off his smarts and knowledge with silly, interesting, and probably wrong ideas when he talks about his proposals for a neocameralist state. He can be a bit crankish talking about it, but hey, show me a man who made a new ideology and wasn't a bit crankish about it! But no, I think when he talks about recent history, politics, and sociology he is a most excellent map maker and not a "conspiracy nut" (though the pattern match is an understandable one to make in ignorance).
First, there is a reason I talked about a "power machine" and not a sinister cabal. If you have a trusted authority to which people outsource their thinking, and from which they download their favoured memeplexes, then allowing even for some very limited memetic evolution you will see the thing (all else being equal) try to settle. Those structures that aren't by happenstance built so that the memeplexes they emit increase trust in the source will tend to be out-competed by those that are. Don't we have a working demonstration of this in organized religion? Notice how this does not require a centuries-spanning conspiracy of Christian authorities consistently and consciously working to enhance their own status and nothing else while lying to the masses; nope, I'm pretty sure most of them honestly believed in their stated map of reality. Yet the Church did end up working as such a belief pump, and it even told us it was a belief pump whose truth and necessity could be derived from pure reason. Funny how that worked out.

Also recall the massive pay-offs in a system where the sillies in the brains of the public or the experts directly matter for who the government allots resources to. Not much coordination is needed for those peddling their particular wares to individually exploit this, or for them to realize which soap box is the best one to be standing on. If anything like a trusted soap box exists, there will be great demand to stand on it; are we sure the winner of such a fight is actually someone who will not abuse the soap box's truth-providing credentials? Maybe the soap box comes equipped with some mechanisms to make it so; still, they had better be marvellously strong, since they will probably be heavily strained.
Secondly, it is not a model that anthropomorphizes society or groups needlessly; indeed, it might do well to incorporate more of that, since large chunks of our civilization were redecorated by the schemes of hundreds of petty and ambitious historically important figures who wanted to mess with... eh, I mean optimize, power distribution.
On the story thing, well, I do admit that component is present, biasing me and others on LW towards finding it more plausible. MM is a good, if verbose, writer. Speaking of verbosity, you should consider my current take an incomplete and abridged version, not the full argument; it is also possible I plain misremember some details, so I hope other posters also familiar with MM will correct me. I have the impression you simply aren't familiar with his thinking, since you seem to attack a very weak and mangled form of his argument, seemingly gleaned only from an ungenerous reading of the parent posts.

I strongly recommend, even if you judge the value of the additional information gained from reading his writings to be low, doing a search on LW for other discussions of these ideas in various comment sections and so on, since a lot has been written on the subject. Browsing the comment history of people who often explicitly talk about such topics also seems like a good idea. Remember, this is just some dude on the internet, but it is a dude on the internet whom Robin Hanson considered worth debating and engaging, and whom many LWers read and think about (note I didn't say agree with). Discussions debating his ideas are also often upvoted. You will also see respected and much more formidable rationalists than myself occasionally name-drop or reference him. If you have some trust in the LessWrong rationalist community, you probably need to update on how seriously you should take this particular on-line hobo distributing photocopied essays.
Note: This reply was written before edits of parent. I will respond to the added edited material in a separate post.
Edit: Abridged text by storing the analysis of the conspiracy-theory failure mode in an open discussion post.
I've spent so much time in the cogsci literature that I know the LW approach to rationality is basically the mainstream cogsci approach to rationality (plus some extra stuff about, e.g., language), but... do other people not know this? Do people one step removed from LessWrong — say, in the 'atheist' and 'skeptic' communities — not know this? If this is causing credibility problems in our broader community, it'd be relatively easy to show people that Less Wrong is not, in fact, a "fringe" approach to rationality.
For example, here's Oaksford & Chater in the second chapter to the (excellent) new Oxford Handbook of Thinking and Reasoning, the one on normative systems of rationality:
Is it meaningful to attempt to develop a general theory of rationality at all? We might tentatively suggest that it is a prima facie sign of irrationality to believe in alien abduction, or to will a sports team to win in order to increase their chance of victory. But these views or actions might be entirely rational, given suitably nonstandard background beliefs about other alien activity and the general efficacy of psychic powers. Irrationality may, though, be ascribed if there is a clash between a particular belief or behavior and such background assumptions. Thus, a thorough-going physicalist may, perhaps, be accused of irrationality if she simultaneously believes in psychic powers. A theory of rationality cannot, therefore, be viewed as clarifying either what people should believe or how people should act—but it can determine whether beliefs and behaviors are compatible. Similarly, a theory of rational choice cannot determine whether it is rational to smoke or to exercise daily; but it might clarify whether a particular choice is compatible with other beliefs and choices.
From this viewpoint, normative theories can be viewed as clarifying conditions of consistency… Logic can be viewed as studying the notion of consistency over beliefs. Probability… studies consistency over degrees of belief. Rational choice theory studies the consistency of beliefs and values with choices.
They go on to clarify that by probability they mean Bayesian probability theory, and by rational choice theory they mean Bayesian decision theory. You'll get the same account in the textbooks on the cogsci of rationality, e.g. Thinking and Deciding or Rational Choice in an Uncertain World.
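To make the "consistency over degrees of belief" idea concrete, here is a minimal sketch (my own illustration, not from the Handbook): on the Bayesian account, credences over a mutually exclusive and exhaustive set of outcomes are coherent only if each lies in [0, 1] and they sum to 1; otherwise a bookie can construct a Dutch book against the believer. The function name and example credences are hypothetical.

```python
# Minimal sketch of probabilistic coherence as consistency over
# degrees of belief. Credences over a mutually exclusive, exhaustive
# partition of outcomes are coherent iff each is in [0, 1] and they
# sum to 1; otherwise a Dutch book (guaranteed-loss bet) exists.

def is_coherent(credences, tol=1e-9):
    """Return True if a dict of outcome -> credence is probabilistically
    coherent over an exhaustive partition of outcomes."""
    values = credences.values()
    if any(p < 0 or p > 1 for p in values):
        return False
    return abs(sum(values) - 1.0) <= tol

# Coherent: credences about a fair die coming up odd vs. even.
print(is_coherent({"odd": 0.5, "even": 0.5}))      # True

# Incoherent: these credences sum to 1.1, so a bookie selling bets
# priced at these credences profits no matter what happens.
print(is_coherent({"rain": 0.6, "no rain": 0.5}))  # False
```

Note that, exactly as the quotation says, this check says nothing about *which* credences to hold, only whether a given set of them hangs together.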