Tl;dr: Unless you check them yourself, articles on LW heavily distort a useful view (yours) of what matters.

 

[This is, though only in part, a five-year update to Patrissimo’s article Self-Improvement or Shiny Distraction: Why Less Wrong is anti-Instrumental Rationality. However, I wrote most of this article before I became aware of its predecessor. If anything, that reinforces both articles' main critique.]

 

I claim that rational discussions in person, at conferences, and on forums, social media, and blogs suffer from adverse selection and promote unwished-for phenomena such as the availability heuristic. Bluntly stated, they have (as all other discussions do) a tendency to support ever worse, unimportant, or wrong opinions and articles. More importantly, articles of high relevancy regarding some topics are conspicuously missing. This can also be observed on Less Wrong. It is not the purpose of this article to determine the exact extent of this problem; it shall merely bring to attention that “what you get is not what you should see." However, I am afraid this effect is largely undervalued.

 

This result is by design and therefore to be expected. A rational agent will, by definition, post incorrectly, incompletely, or not at all in the following instances:

  • Cost-benefit analysis: A rational agent will not post information that reduces his utility by enabling others to compete better and, more importantly, by causing him any effort unless some gain (status, monetary, happiness, …) offsets the former effect (a toy model of this calculation follows after this list). Example: Have you seen articles by Mark Zuckerberg? But I also argue that for a random John Doe, the personal cost-benefit analysis of posting an article is negative. Even more, the value of your time should approach infinity if you really drink the LW Kool-Aid; however, that shall be the topic of a subsequent article. I suspect the theme of this article may also be restated as a free-riding problem, as it postulates the non-production or under-production of valuable articles and other contributions.
  • Conflicting with law: Topics like drugs (in the western world) and perhaps politics or sexuality in other parts of the world are biased due to the risk of persecution, punishment, extortion, etc. And many topics, such as those in the spheres of rationality, transhumanism, and effective altruism, are at least highly sensitive, especially when you keep arguing until you reach their moral extremes.
  • Inconvenience of disagreement: Due to the effort of posting truly anonymously (which currently requires a truly anonymous e-mail address and so forth), disagreeing posts will be avoided, particularly when the original poster is of high status and the risk of the disagreement rubbing off on one’s other articles is thus increased. This is obviously even truer for personal interactions. Side note: the reverse may also apply: high status attracts more agreement (likes).
  • Dark knowledge: Even if I knew how to acquire a sniper rifle that cannot be traced, I would not share this knowledge (as with all the other reasons, there are substantially better examples, but I do not want to make spreading dark knowledge a focus of this article).
  • Signaling: Seriously, would you discuss your affiliation with LW in a job interview?! Or tell your friends that you are afraid we live in a simulation? (If you don’t see my point, your rationality is totally off base; see the next point.) LW user “Timtyler” commented before: “I also found myself wondering why people remained puzzled about the high observed levels of disagreement. It seems obvious to me that people are poor approximations of truth-seeking agents—and instead promote their own interests. If you understand that, then the existence of many real-world disagreements is explained: people disagree in order to manipulate the opinions and actions of others for their own benefit.”
  • WEIRD-M-LW: It is a known problem that articles on LW are written by authors who are in the overwhelming majority western, educated, industrialized, rich, democratic, and male. The LW surveys show distinctly that there are most likely many further attributes in which the population on LW differs from the rest of the world. LW user “Jpet” argued in a comment very nicely: “But assuming that the other party is in fact totally rational is just silly. We know we're talking to other flawed human beings, and either or both of us might just be totally off base, even if we're hanging around on a rationality discussion board.” LW could certainly use more diversity. Personal anecdote: I was dumbfounded by the current discussion around LW T-shirts sporting slogans such as "Growing Mentally Stronger" which seemed to me intuitively highly counterproductive. I then asked my wife, who is far more into fashion and not at all into LW. Her comment (Crocker's warning): “They are great! You should definitely buy one for your son if you want him to go to high school and be all by himself for the next couple of years; that is, except for the bullying, maybe.”
  • Genes, minds, hormones & personal history: (Even) rational agents are highly influenced by those factors. This fact seems underappreciated. Think of SSC's "What universal human experiences are you missing without realizing it?" Think of inferential distances and the typical mind fallacy. Think of the slight changes in your beliefs after drinking coffee, after working out, when deeply in love for the first time or seeing your child born, when extremely hungry, or when standing on top of the mountain you wanted to climb (especially Mt. Everest). Russell pointed out the interesting and strong effect of Schopenhauer’s and Nietzsche’s personal histories on their misogyny. However, it would be a stretch to simply call them irrational. In every discussion you have to start somewhere, but finding a starting point is much more difficult when the discussion partners are more diverse. These factors may not result in direct misinformation on LW, but they certainly shape the conversation (see also the next point).
  • Priorities: Specific “darlings” of the LW sphere, such as Newcomb’s paradox or many-worlds, are regularly discussed. One unguarded moment of not paying attention to this bias, and you may assume they are really relevant. For those of us not currently programming FAI, they aren’t, and they steal attention from more important issues.
  • Other beliefs/goals: Close to selfishness, but not quite the same. If an agent’s beliefs and goals differ from most others', the discussion would benefit from his post. Even so, that by itself may not be a sufficient reason for the agent to post. Example: imagine somebody like Ben Goertzel. His beliefs on AI, for instance, differed from the mainstream on LW. This did not necessarily result in him posting an article on LW, and to my knowledge he won’t, at least not directly. Plus, LW may try to slow him down, as he seems less concerned about the F in FAI.
  • Vanity: Considering the number of self-help threads, nerdiness, and the like on LW, it may be suspected that some refrain from posting out of self-respect. E.g.: I do not want to signal to myself that I belong to this tribe. This may sound outlandish, but then again, have a look at the Facebook groups of LW and other rationalists, where people frequently ask how they can be more interesting, or how “they can train how to pause for two seconds before they speak to increase their charisma." Again, if this sounds perfectly fine to you, that may be bad news.
  • Barriers to entry: Your first post requires creating an account. Karma that would signal the quality of your posts is still absent. An aspiring author may question the relative importance of his opinion (especially on highly complex topics), his understanding of the problem, the quality of his writing, and whether his research on the chosen topic is sufficient.
  • Nothing new under the sun: Writing an article requires the bold assumption that its marginal utility is significantly above zero. The likelihood of this probably decreases with the number of existing posts, which is, as of now, quite impressive. Patrissimo‘s article (footnote [10]) addresses the same point; others mention being afraid of "reinventing the wheel."
  • Error: I should point out that most of the reasons brought forward in this list concern deliberate misinformation. In many cases, an article will simply be wrong without the author realizing it. Examples: facts (the earth is flat), predictions (planes cannot fly), and, seriously underestimated, horizon effects (given more information, the rational agent realizes that his action did not yield the desired outcome, e.g. a ban on plastic bags).
  • Protection of the group: Opinions, though important, may not be discussed in order to protect the group or its image to outsiders. See “is LW a c***” and Roko’s ***. This argument can also be brought forward much more subtly: an agent may, for example, hold the opinion that rationality concepts are information hazards by nature if they reduce the happiness of the otherwise blissfully unaware.
  • Topicality: This is a problem specific to LW. Many of the great posts, as well as the sequences, originated about five to ten years ago. While interest in AI has now reached mainstream awareness, the solid intellectual basis (centered around a few individuals) that LW offered seems to be breaking away gradually, and rationality topics are experiencing their diaspora. What remains is a less balanced account of important topics in the sphere of rationality, and new authors are discouraged from entering the conversation.
  • Russell’s antinomy: Is the contribution that states its futility ever expressed? Random example article title: “Writing articles on LW is useless because only nerds will read them."
  • +Redundancy: If any of the above reasons apply, I may choose not to post. However, I also expect a rational agent with sufficiently close knowledge to attain the same knowledge himself, so it is at the same time not absolutely necessary to post. An article will “only” speed up the time required to understand a new concept and reduce the likelihood of rationalists diverging due to disagreement (if Aumann is ignored) or faulty argumentation.
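
As promised under "Cost-benefit analysis," here is a minimal toy sketch of that calculation in Python. All names and numbers are illustrative assumptions of mine, not measurements; utility is in arbitrary units.

```python
# Toy model of the posting decision from the "Cost-benefit analysis" point.
# All values are made-up assumptions; utility is in arbitrary units.

def net_utility_of_posting(status_gain, monetary_gain, happiness_gain,
                           effort_cost, competition_loss):
    """A rational agent posts only if the gains offset the effort
    and the edge handed to competitors."""
    gains = status_gain + monetary_gain + happiness_gain
    costs = effort_cost + competition_loss
    return gains - costs

# A hypothetical "random John Doe": small status gain, hours of effort.
john_doe = net_utility_of_posting(status_gain=1.0, monetary_gain=0.0,
                                  happiness_gain=0.5, effort_cost=3.0,
                                  competition_loss=0.5)
print(john_doe)  # -2.0: negative, so the article never gets written
```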

This list is not exhaustive. If you find that a factor accounting for much of the effect is missing from this list, I will appreciate a hint in the comments.

 

There are a few outstanding examples pointing in the opposite direction: authors who appear to provide uncensored accounts of their way of thinking and take arguments to their logical extremes when necessary. Most notably Bostrom and Gwern; but then again, feel free to read the latter’s posts on the extortion attempts he has endured.

 

A somewhat flippant conclusion (more in an FB than an LW voice): After reading the article from 2010, I cannot expect this article (or the ones possibly following, which have already been written) to have a serious impact. It can thus be concluded that it should not have been written. Then again, observing our own thinking patterns, we can identify the influence of many thinkers who may have suspected the same (hubris not intended). And step by step, we will come to stand on the shoulders of giants. At the same time, keep in mind that articles from LW won’t get you there. They represent only a small piece of the jigsaw. You may want to read some, observe how instrumental rationality works in the “real world," and, finally, draw the critical conclusions for yourself. Nobody truly rational will lay them out for you. LW is great if you have an IQ of 140 and are tired of superficial discussions with the hairstylist in your village X. But keep in mind that the instrumental rationality of your hairstylist may still surpass yours, and I don’t even need to say much about that of your president, business leader, or club Casanova. And yet, they may be literally dead wrong, because they have overlooked AI and SENS.

 

A final personal note: Kudos to the giants for building this great website, a starting point for rationalists, and for the real-life progress of the last couple of years! This is a rather skeptical article to start with, but it has the specific purpose of laying out why I, and I suspect many others, almost refrained from posting.

 

 

43 comments

I am not sure of the point here. I read it as "I can imagine a perfect world and LW is not it". Well, duh.

There are also a lot of words (like "wrong") that the OP knows the meaning of, but I do not. For example, I have no idea what are "wrong opinions" which, apparently, rational discussions have a tendency to support. Or what is that "high relevancy" of missing articles -- relevancy to whom?

And, um, do you believe that your postings will be free from that laundry list of misfeatures you catalogued?

The article mixes together examples of imperfection of the world with specific problems of LW.

God, grant me the serenity to accept the things I cannot change,
The courage to change the things I can,
And the wisdom to know the difference.

We could try to be more specific about things related to us, and how we could solve them. For example:

Cost-benefit analysis

Applies to our celebrities, too. As a reader, I would love to read a new Sequence written by Eliezer or other impressive people in our community. However, it may not be the best use of their time. Writing an article can take a lot of time and energy.

Gwern's comment suggests the solution: have someone else write the article. A person who is sufficiently rational and good at writing, and lives in the Bay Area, could take the role of a "rationalist community journalist". I imagine their work could be spending time with important rationalists, making notes, writing the articles, having them reviewed by the relevant people, and publishing them on LW.

WEIRD

Relevant article: "Black People Less Likely". Also, as far as I know, the traditional advice for groups with overwhelmingly western educated rich white male membership is to put a clique of western educated rich white feminists in the positions of power. Which creates its own problems, namely that the people with newly gained power usually give zero fucks about the group's survival or its original mission, and focus on spreading their political memes.

This said, I support the goal of bringing more people from different backgrounds into the rationalist community (as long as the people are rational, of course). I object to the traditional methods of doing it, because those methods often fail to reach the goal.

I suspect that the fact that LessWrong is an online community using the English language already contributes heavily to readers more likely being western, educated, and rich (you need to be good at English, have a lot of free time, and have a good internet connection). Whiteness correlates with being western and rich. There is a gender imbalance in STEM in the general society outside LessWrong. -- All these filters are applied before any content is even written. Which of course doesn't mean the content couldn't add another filter in the same direction.

Here are some quick ideas that could help: Create rationality materials in paper form (for people who can't afford to spend hundreds of hours online). Translate those materials in other languages (for people not fluent in English). Maybe create materials for different audiences; e.g. a reduced version for people without good mathematical education.

Funny thing, age wasn't mentioned in the original list of complaints, and I believe it can play an important role. Specifically the fact that many rationalists have spent their whole lives at school. -- For example, it's ridiculous to see how many people self-identify as "effective altruists" while saying "okay I'm still at school without an income, so I actually didn't send a penny yet, but when I grow up and get a job I totally will give as much as possible". Nice story, bro! Maybe I could tell you how I imagined my future while I was at school; then we can laugh together. So far, you are merely a fan of effective altruism. When you start spending the rest of your life making money only to donate it to someone poorer than you, then you become an effective altruist. If you can keep doing it while feeding your children and paying for the roof above your heads, then you are hardcore. Right now, you just dilute the meaning of the word.

"rationalist community journalist"

We already have a reddit-style forum. Why would you want to go back to the old model where a few people (journalists) only provide content and the dumb masses only consume it?

traditional advice for groups with overwhelmingly western educated rich white male membership is to put a clique of western educated rich white feminists in the positions of power

...traditional?? We, um, come from different traditions, I guess X-/ I know that's what feminists want, but that doesn't make it a tradition.

So far, you are merely a fan of effective altruism.

Yep, true. Though, to be fair, EA isn't about how much you give, it's about what you give to.

I am not sure of the point here. I read it as "I can imagine a perfect world and LW is not it". Well, duh.

No. I think all the points indicate that a perfect world is difficult to achieve, as rationalist forums are in part self-defeating (maybe not impossible, though; most also would not have expected Wikipedia to work out as well as it does). At the moment, Less Wrong may be the worst form of forum, except for all the others. My point in other words: I was fascinated by LW and thought it possible to make great leaps towards some form of truth. I now consider that unwarranted exuberance. I met a few people whom I highly respect and whom I consider aspiring rationalists. They were not interested in forums, congresses, etc. I now suspect that many of our fellow rationalists are, and have an advantage by being, somewhat lone wolves, and that the ones we see are curious exceptions.

There are also a lot of words (like "wrong") that the OP knows the meaning of, but I do not. For example, I have no idea what are "wrong opinions" which, apparently, rational discussions have a tendency to support. Or what is that "high relevancy" of missing articles -- relevancy to whom?

High relevancy to the reader who is an aspiring rationalist. The discussions of AI mostly end where they become interesting. Assuming that AI is an existential risk, shall we enforce a police state? Shall we invest in surveillance? Some may even suggest seeking a Terminator-like solution, trying to stop scientific research (which I did not say is feasible). Those are the kinds of questions that inevitably come up, and I have seen them discussed nowhere but in the last chapter of Superintelligence, in about three sentences, and somewhat in SSC's Moloch (maybe you can find more sources, but it's surely not mainstream). In summary: if Musk's $10M constitute a significant share of humanity's effort to reduce the risk of AI, some may view that as evidence of progress and some as evidence for the necessity of other, and maybe more radical, approaches. The same in EA: if you truly think there is an Animal Holocaust (which Singer does), the answer may not be donating $50 to some animal charity. Wrong opinions: if, as just argued, not all the relevant evidence and conclusions are discussed, it follows that opinions are more likely to be less than perfect. There are some examples in the article.

And, um, do you believe that your postings will be free from that laundry list of misfeatures you catalogued?

No. Nash probably wouldn't cooperate either, even though he understood game theory, and I wouldn't blame him. I may simply stop posting (which sounds like a cop-out or a threat, but I just see it as one logical conclusion).

a perfect world is difficult to achieve ... most also would not have expected Wikipedia to work out as well as it does

A perfect world is, of course, impossible to achieve (not to mention that what's perfect to you is probably not so for other people), and as for Wikipedia, there are longer lists than yours of its shortcomings and problems. Is it highly useful? Of course. Will it ever get close to perfect? Of course not.

I was fascinated by LW and thought it possible to make great leaps towards some form of truth. I now consider that unwarranted exuberance.

Sure. But this is an observation about your mind, not about LW.

High relevancy to the reader who is an aspiring rationalist.

"Aspiring rationalist" is a content-free expression. It tells me nothing about what you consider "wrong" or "relevant".

The discussions of AI mostly end where they become interesting.

Heed the typical mind fallacy. Other people are not you. What you find interesting is not necessarily what others find interesting. Your dilemmas or existential issues are not their dilemmas or existential issues.

For example, I don't find the question of "shall we enforce a police state" interesting. The answer is "No", case closed, we're done. Notice that I'm speaking about myself -- you, being a different person, might well be highly interested in extended discussion of the topic.

if you truly think there is an Animal Holocaust (which Singer does), the answer may not be donating $50 to some animal charity.

Yeah, sure, you go join an Animal Liberation Front of some sorts, but what's particularly interesting or rational about it? It's a straightforward consequences of the values you hold.

Heed the typical mind fallacy. Other people are not you. What you find interesting is not necessarily what others find interesting. Your dilemmas or existential issues are not their dilemmas or existential issues. For example, I don't find the question of "shall we enforce a police state" interesting. The answer is "No", case closed, we're done. Notice that I'm speaking about myself -- you, being a different person, might well be highly interested in extended discussion of the topic.

I strongly disagree and think it is unrelated to the typical mind fallacy. OK, the word "interesting" was too imprecise. However, the argument deserves a deeper look in my opinion. Let me rephrase: "Discussions of AI sometimes end where they have serious implications for real life." Especially if you do not enjoy entertaining the thought of a police state and increased surveillance, you should be worried if respected rational essayists come to conclusions that include them as an option. Closing your case when confronted with possible results from a chain of argumentation won't make them disappear. And a police state, to stay with the example, is either an issue for almost everybody (if it comes into existence) or for nobody. Hence, this is detached from, and not about, my personal values.

Let me rephrase: "Discussions of AI sometimes end where they have serious implications for real life."

I agree, that would be a bad thing.

Closing your case when confronted with possible results from a chain of argumentation won't make them disappear.

Of course not, but given my values and my estimates of how likely certain future scenarios are, I have already come to certain conclusions. For them to change, either the values or the probabilities have to change. I find it unlikely that my values will change as the result of eschatological discussions on the 'net, and the discussions about the probability of Skynet FOOMing can be had (and probably should be had) without throwing the police state into the mix.

In general, I don't find talking about very specific scenarios in the presence of large Knightian uncertainty to be terribly useful.

None of us could "enforce a police state". It's barely possible even in principle, since it would need to include all industrialized nations (at a minimum) to have much payoff against AGI risk in particular. Worrying about "respected rational essayists" endorsing this plan also seems foolish.

"Surveillance" has similar problems, and your next sentence sounds like something we banned from the site for a reason. You do not seem competent for crime.

I'm trying to be charitable about your post as a whole to avoid anti-disjunction bias. While it's common to reject conclusions if weak arguments are added in support of them, this isn't actually fair. But I see nothing to justify your summary.

Everything Lumifer said, plus this: all this marketing/anti-marketing drama seems to be predicated on the notion that there exists a perfectly rational world / community / person. No such thing, though: LW itself shows that even rationalist attire is better than witch hunting (the presupposition, of course, being that LWers have rationality as their tribal flag and are not especially more rational than average people).

I do not think that there exists a perfectly rational world. My next article will emphasize that. I do think that there is a rational attire which is on average more consistent than the average one presented on LW, and one should strive for it. I did not get the point of your presupposition, though it seems obvious to you: LWers are not more rational?

Selfishness: A rational agent will not post information that reduces his utility by enabling others to compete better and, more importantly, by causing him any effort unless some gain (status, monetary, happiness,…) offsets the former effect. Example: Dating advice. Better example: Have you seen articles by Mark Zuckerberg or Elon Musk?

Do you honestly think that Musk, who gives his competitors his patents for free, holds something back in the way of writing articles because he fears people will compete with him?

Musk doesn't advise other people to copy him, because he lives a life that includes 80 hours of work per week that he isn't really enjoying. It also doesn't leave him time to write articles.

I was going to say, I have seen articles by Elon Musk and we have been discussing them recently, it's just that for some reason he's written them under a silly pseudonym like 'Wait But Why'...

Disintermediation from social media shaming. Also it's higher status to dictate stuff, and saves time.

When does a ghostwritten book or article deserve the name of the person who types it?

Fair enough. Maybe I should take Elon Musk out; with WBW, he has found a way to push the value of the advertising beyond the cost of his time spent. If Zuckerberg posts too, I will be fully falsified. To compensate, I introduce typical person X, whose personal cost-benefit analysis of posting an article is negative. I still argue that this is the standard case.

I think you're missing the broader point I was making: writing your own articles is like changing the oil in your own car. It's what you do when you are poor, unimportant, have low value of time, or it's your hobby.

Once you become important, you start outsourcing work to research assistants, personal assistants, secretaries, PR employees, vice presidents, grad students, etc. Musk is a billionaire and a very busy one at that and doesn't write his own books because it makes more sense for him to bring in someone like WBW to talk to for a few hours and have his staff show them around and brief them, and then they go off and a while later ghostwrite what Musk wanted to say. Zuckerberg is a billionaire and busy and he doesn't write all his own stuff either, he tells his PR people 'I want to predictably waste $100m on a splashy donation; write up the press release and a package for the press etc and send me a final draft'. Jobs didn't write his own biography, that's what Isaacson was for. Memoirs or books by famous politicians or pundits - well, if they're retired they may have written most or all of it themselves, but if they're active...? Less famously, superstar academics will often have written little or none of the papers or books published under their names; examples here would be invidious, but I will say I've sometimes looked at acknowledgements sections and wondered how much of the book the author could have written themselves. (If you wonder how it's possible for a single person to write scores of high-quality papers and books and opeds, sometimes the answer is that they are a freak of nature blessed with shortsleeping genes & endless willpower; and sometimes the answer is simply that it's not a single person.) And this is just the written channels; if you have access to the corridors of power, your time may well better be spent networking and having in-person meetings and dinners. (See the Clinton Foundation for an example of the rhizomatic nature of power.)

I'm not trying to pass judgment on whether these are appropriate ways for the rich and powerful to express their views and influence society, but it is very naive to say that just because you cannot go to the bookstore and buy a book with Musk's name on it as author, that he must not be actively spreading his views and trying to influence people.

Yeah, this highlights my overall issue with the OP.

Elon Musk's path to success is well-known and not replicable. His story relies too much on (1) luck and (2) high-IQ-plus-ultra-high-conscientiousness, in that order of importance. Elon Musk is a red herring in these discussions.

More to the point, there is already an absurd overabundance of available information about how to be quite successful in business. It is not the comparative advantage of LW to try to replicate this type of content. Likewise, the Internet hosts an absurd overabundance of practical, useful advice on

  • how to exercise, with the aim of producing any given physical result
  • how to succeed at dating, to whatever end desired
  • how to manage one's personal finances
  • etc.

It is not the role of LW to comprehensively answer all these questions. LW has always leaned more toward rationality qua rationality. More strategy, less tactics.

Also, I think the OP is attacking a straw man to a large degree. Nobody here thinks that LW has already immanentized the eschaton. Nobody here thinks that LW has already solved rationality. We're just a group of people interested in thinking about and discussing these types of considerations.


All that said, when I first discovered LW (and particularly the Sequences), it was such a cognitive bombshell that I did genuinely expect that my life and mind would be completely changed. And that expectation was sort of borne out, but in ways that only make sense in a sort of post hoc fashion. As in, I used LW-inspired cognition for a lot of major life choices, but it's impossible to do A/B testing and determine if those were the right choices, because I don't have access to the world where I made the opposite choice. (People elsewhere in this very comment thread repeat the meme that "LWers are not more rational than average." Well, how would you know if they were? What does that even mean?)

but it's impossible to do A/B testing and determine if those were the right choices

You could find some people similar to you, and mumble certain incantations.

Just because the example wasn't well chosen doesn't invalidate the argument per se.

I do agree. The point was originally "selfishness or effort," which would have avoided the misunderstanding. I think for Musk, the competitive aspect is definitely less important than the effort aspect (he is surely one of those persons for whom "the value of time approaches infinity"). However, I doubt that Musk would give away patents if he didn't see an advantage in doing that.

I doubt that Musk would give away patents if he didn't see an advantage in doing that.

He sees that it advances his goal of a society that doesn't burn fossil fuels. But it doesn't give Tesla an advantage at making profits.

Nobody truly rational will lay them out for you.

?

"Truly rational" people don't share information on instrumental rationality with others?

A rational agent will not post information that reduces his utility by enabling others to compete better and, more importantly, by causing him any effort unless some gain (status, monetary, happiness,…) offsets the former effect.

Perhaps people are finding that benefit. How long have you spent considering what those benefits could be? How many people have you asked what benefits they think they receive in return for their effort?

LW is great if you have an IQ of 140 and are tired of superficial discussions with the hairstylist in your village X.

LW is great! Yay!

But keep in mind that the instrumental rationality of your hairstylist may still surpass yours

Perhaps. But how is that relevant? I'm not my hairstylist. Different attitudes, preferences, habits, and aptitudes. Do you think that what works for "your hairstylist" is likely to work for LW readers?

Yes, LW did not make me omniscient and omnipotent yesterday. Did you think I thought otherwise?

LW sucks, compared to what?

I mostly agree with what you are saying here, but you should replace the formatting of the post (which you probably wrote in some other editor) with the standard LW formatting. The discrepant formatting is distracting and adds an "outsider" signal, which will bias people against a reception of what you are saying.

Definitely. I am slightly irritated that I missed that. The line spacing and paragraph spacing still seem a bit off compared to other articles. Is there anything I am doing wrong?

Cost-benefit analysis

I think Patri's whole post was pretty much this.

Conflicting with law

If you feel free speech is threatened, then you have bigger problems to worry about.

Inconvenience of disagreement

Only weak-willed people are afraid of disagreement. In a self-respecting community, you can say "you're wrong, here's 11 reasons why: [1] [2] [3]".

Dark knowledge

Unless you're running the simulation, I doubt you'd be the only one to know that. I'd actually advise you to tell others about it so it can be properly dealt with.

Signaling: Seriously, would you discuss your affiliation with LW in a job interview?!

Terrible example, it has no relevance to a job interview.

Or tell your friends that you are afraid we live in a simulation? (If you don’t see my point, your rationality is totally off base; see the next point.)

That's what friends are for.

LW user “Timtyler” commented before: “I also found myself wondering why people remained puzzled about the high observed levels of disagreement. It seems obvious to me that people are poor approximations of truth-seeking agents—and instead promote their own interests. If you understand that, then the existence of many real-world disagreements is explained: people disagree in order to manipulate the opinions and actions of others for their own benefit.”

Zero-sum.

WEIRD-M-LW: It is a known problem that articles on LW are written by authors who are in the overwhelming majority western,[1] educated,[2] industrialized,[3] rich,[4] democratic,[5] and male.[6]

  1. So? (What about being western is of importance?)
  2. No fools in my garden. (Well-kept gardens die by pacifism)
  3. Sorry folks, internet only. No "The LessWrong Times" available. (By no fault of our own)
  4. By third-world comparisons, yes. Otherwise, I doubt it. Provide an example. (Or pledge 50% of your riches to GiveWell)
  5. I've never seen a discussion about this, so no comment. (Mind linking to one?)
  6. Men are far more interested in stuff like this. This is no real category, and in fact I doubt women would be excluded should they want to join. (If women are 'turned off' by the discussion here, and therefore choose not to participate, then they both don't have to, and the inverse would be true for me and probably a significant number of men too)

The LW surveys show distinctly that there are most likely many further attributes in which the population on LW differs from the rest of the world.

Well, that's pretty much a given. That's not a bad thing, and whether it's a good thing is debatable.

LW user “Jpet” argued in a comment very nicely: “But assuming that the other party is in fact totally rational is just silly. We know we're talking to other flawed human beings, and either or both of us might just be totally off base, even if we're hanging around on a rationality discussion board.”

In case the user is inactive: I have no idea what he meant. Not everyone is rational or 100% effective or whatever. The last sentence feels like an LW-complete sanity test, and a very scary one in its implication that the userbase is completely off base with reality.

LW could certainly use more diversity.

I'm sure people would oppose more people like me. Leaving me aside, "diversity" seems like an ideal whose actual implications I'm not sure of. Let's add women, and people of colour, and some monkeys and jackdaws. Those are just my silly recommendations, though. What do you mean by the "diversity" that LW is lacking, and why is it important to include it?

Personal anecdote: I was dumbfounded by the current discussion around LW T-shirts sporting slogans such as "Growing Mentally Stronger" which seemed to me intuitively highly counterproductive.

Me too. I think they're silly.

(Crocker's warning)

You mean trigger warning.

Genes, minds, hormones & personal history: (Even) rational agents are highly influenced by those factors.

Correct, but you still need to infinitely recurse.

Priorities

Agreed. I'd put other stuff on the list, but it would derail this post well past oblivion.

Other beliefs/goals

Then what is the point of the previously mentioned diversity? To me it looks like a contradiction and an admission that it's not a very utility-generating ideal.

Vanity: Considering the number of self-help threads, nerdiness, and the like on LW, it may be suspected that some refrain from posting out of self-respect.

Yeah, the high school jock cliche won't like it. Can't disagree with you about the cheerleaders, though.

E.g.: I do not want to signal to myself that I belong to this tribe.

You've already made a point I agreed with on rationality T-shirts being silly; there's no reason to restate a mildly different form of it that accomplishes the same thing.

This may sound outlandish, but then again, have a look at the Facebook groups of LW and other rationalists, where people frequently ask how they can be more interesting, or how "they can train how to pause for two seconds before they speak to increase their charisma." Again, if this sounds perfectly fine to you, that may be bad news.

No, it's only bad news to you. People who recognize their weak aspects and try to self-improve should be applauded. You are, as far as I am concerned, dragging humanity down. Now tell me where you keep those untraceable rifles. (The examples are admittedly silly, but they're mere examples.)

(A note of importance to me is what they consider 'interesting', and why. Are they trying to appeal to a different group?)

Barriers to entry

Agreed, but on the other hand, those talking about that are probably fifty books or so ahead of you. I don't participate in the AI department and don't plan to. On the other hand, there are plenty of topics where LW could theoretically help, but they appear less commonly and there are fewer people who can help with them.

There's also the issue of specialization: the more specialized a topic, the more you need to know about it. Highly specialized topics shouldn't be confused with a high entry barrier.

Nothing new under the sun

Too many places suffer from this to one degree or another, and that won't change unless the community bands together (LW wiki?) and makes the "already posted" stuff easy to access so it won't be reposted.

Maybe a bunch of AI researchers can make something that goes through text and tells the user "this might have been already posted". And hopefully it won't destroy the world while it's at it, too.

Error

Once again, infinite recursion.

Protection of the group: Opinions, though important, may not be discussed in order to protect the group or its image to outsiders.

Such as? You don't need to publicly discuss EVERYTHING, either.

See “is LW a *” and * *."

  1. Unless a community has no merit, you can take the good stuff with you and leave the rest.
  2. I heard that's bad for you and you shouldn't mention it. That correlates with the first one, amusingly enough.

This argument can also be brought forward much more subtly: an agent may, for example, hold the opinion that rationality concepts are information hazards by nature if they reduce the happiness of the otherwise blissfully unaware.

Live your life as you see fit.

However, said agent must first research happiness thoroughly before making such a statement. There's also individual reactions, but that's getting too precise for my calculations.

Topicality

That happens to everything, eventually. Overlaps with my aforementioned specialization.

This is a community-only thing, though. People can develop and have different experiences, and the next best thing to do is to see what we can take from LW and how we can apply it in our lives.

Russell’s antinomy: Is the contribution that states its futility ever expressed? Random example article title: “Writing articles on LW is useless because only nerds will read them."

Best thing I can say is: maybe people like it? Maybe they want to write something. Why not let them? So what if only nerds read it.

There's an insulting "what-if" in that title: it assumes it is not only correct but also unquestionable, and that any deviation from it should be punished with a smack on the head.

+Redundancy:

If we've become redundant on the topic of rationality, then it's time to stop milking the cow and start using it in our life. This is the real rationality test; the real freakin' deal.


Everything below that list is excellent and I don't regret taking a reading break for this just because of that.

By third-world comparisons, yes. Otherwise, I doubt it. Provide an example. (Or pledge 50% of your riches to GiveWell)

Unless the third world includes the United States outside of the Bay Area and New England (which, judging by the term "fly-over country", it probably does in lots of minds): yes, LWers talking about attending CFAR's $3000 workshops, traveling all over the place, and how they're already working for a big software giant and talked their bosses into giving them a raise are signs of being toward the higher end of the American middle class, if not higher. Just having so many programmers and the occasional psychiatrist is enough to put LW into the "rich even by first world standards" category.

This has come up before. Some LWer who is not rich points out that LWers are on average pretty dang rich, and most everyone goes "surely not! Just abandon everything you have and move to Silicon Valley with the money you don't have and surely you'll get a programming job, and realize how not-rich we are!" *

I am not trying to signal tribal affiliation when I say that LW unintentionally taught me to appreciate the whole "check your privilege" concept.

Having said all that, there are a few people who aren't financially successful STEM lords around here. It's just that they are decidedly not the majority of dominant voices.

* The first and last phrases might be a bit uncharitable, but the reaction is generally disbelief, in spite of the fact that LWers do seem to have thousands of dollars whenever they need them. Just a couple days ago, someone on Facebook was trying to get someone to go with him on a trip to Indiana, so they could split the gas money, but he realized he really needed to spend that money elsewhere. I've had reasonably middle-class people on Facebook trying to come up with someplace to stay, asking for donations for emergencies, saying how they wish they could justify spending money on things far cheaper than a new computer... and all of them are financially and socially way better off than me.

I conclude from the discussion that the term "rich" is too vague. My own definition is this: I should be surprised to find many LWers who are not in the top percentile of the Global Richlist and who could not afford cryonics if they made it their life's goal.

It looks like MOST of the descriptions people are using for "richness" are pretty vague, which I find weird since we actually have readily available numbers.

Median US individual income is $26,695. Unless you want to claim that half of Americans are "poor" or lower class, you should probably start your middle class no lower than that.

(ETA: For a full-time worker over the age of 25 it's $39k, so you could maybe push it up to that, but it disregards a lot of people who are stuck in part time jobs, or are kept working at just below full time so that they don't have to be given benefits)

10%ers start at $82k. I think it would be silly to say someone in the top 10% of US earners isn't rich. Almost all STEM LWers I've met either make well above this, or work for a non-profit.
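
As a minimal sketch, those cutoffs could be written down like this (the thresholds are just the figures quoted in this comment; the bucket labels are my own shorthand, not an official classification):

```python
# Bucketing annual US individual incomes using the figures quoted above:
# $26,695 median, $82,000 as the floor of the top 10% of earners.

def income_bucket(annual_income):
    if annual_income >= 82_000:
        return "top 10% (rich by this comment's standard)"
    if annual_income >= 26_695:
        return "above median (middle class or better)"
    return "below median"

print(income_bucket(100_000))  # top 10% (rich by this comment's standard)
```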

I conclude from the discussion that the term "rich" is too vague.

Not that I suggest that everyone adopt these definitions, but I usually use these words in the following meaning:

  • Rich -- "financially independent", you don't have to work if you don't want to and still have at least upper-middle-class lifestyle.

  • Upper-middle -- not worry about money too much, it's sufficient for comfortable and socially adequate lifestyle, but you need a high-paying job and can't really afford expensive extravagances.

  • Middle -- money is kinda OK, you can afford all the necessities and some (but not many) luxuries.

  • Lower-middle -- money is tight, you can afford most necessities, but few if any luxuries.

  • Lower -- Paycheck to paycheck (if you have a job), no reserves, any crisis can thoroughly screw you up.

I just split people into "spends less than half of what I do," "reasonable," and "spends more than twice what I do." [/joke]

Yeah, these are also known as "poor bastards", "regular people", and "rich bastards" :-D

There are three different variables: income, consumption, and wealth, which confuse any discussion of economic class. Someone who is high-income, high-consumption, and low-wealth is probably working >40 hours a week at a professional job and worried about money, but also might be driving a fancy car and living in an expensive house.

In terms of life satisfaction, I get the sense that the primary variable that matters is wealth, but in terms of social status (for most groups), the primary variable that matters is consumption.

All true, but I wasn't trying to construct some sort of a comprehensive social stratification scheme. It's really just a quick list of what I mean when I'm using certain words.

I'm with you. I believe we use different meanings of "rich." What do you mean by your own "rich"?

I can't really give a reason for this, but 100k dollars would be "rich" for me, and that includes a lot of luxuries, too.


The context was the famous observation that subjects of psychological studies etc. tend to be WEIRD (Western, educated, from industrialized, rich, and democratic countries). So "rich in comparison with most of the world's population" is probably the relevant criterion, and actually individual wealth as opposed to the wealth of the country you're in isn't really the point.

Although, if LW is mostly read by the WEIRD, having its content mostly written by and targeted at the WEIRD isn't necessarily a bad thing.

Just out of curiosity, do you mean $100k/year income or $100k assets or what?

(Looking again at the definition of WEIRD, it occurs to me that it's immensely redundant, which isn't terribly surprising since the items in the list were presumably designed to make the WEIRD acronym possible. Industrialized nations and rich nations are more or less the same thing. Both correlate highly with being democratic. "Western" more or less implies all three. Most people in Western nations are highly educated by global standards, though I guess it's also true that subjects of psychology studies are better-educated than average even among Westerners. But "Psychology is WE" wouldn't have sounded as good as "Psychology is WEIRD".)

do you mean $100k/year income or $100k assets or what?

100k saved somewhere is the baseline; I don't know enough about American prices to say how much extra stuff should be worth. The most reasonable addition I can come up with is no debt or payments.

On a side note, I never really understood the whole "$x a year" thing. There are so many expenses during the year itself that what you made over the whole year is pretty much irrelevant.


*shrug* I could afford cryonics if I made it my life's goal.

Is "what you focus on is not what you would benefit most from focusing on" a special case of "the map is not the territory"?

Is the contribution that states its futility ever expressed?

Sometimes. I remember thinking about that when writing my last post, but I'm not sure how much that made it into the post.


...another point of 'suckiness' is, I think, treating rationality as a person's trait, a quality instead of just a kind of internal calculator which everyone has but few bother to tune. There was this Discussion thread about how your life was changed by LW, and people wrote much about how they themselves changed - it's relevant, but not the answer.

I used to think I should be 'more rational', but I kept forgetting about it when Stuff Happened and then beating myself up and making hole-plugging rules like 'NEVER get annoyed at my kid if he's had less than 7 hours of sleep'. Then we started treating him for hyperactivity, and just like that life bloomed anew! (= I get occasions to be annoyed much more seldom and less severely.) So I kinda shrugged and told myself, I can be rational, and Nature can still kill me; which was something I did learn on LW.

I guess what I wanted to say is, life is what matters, but training sometimes makes it easier to bear.


promote unwished-for phenomena such as availability heuristic.

Do you mean that the AH is promoted within the community as a whole (-> consensus achieved without weighing all the evidence) or in individual members (-> mindkills in unrelated areas of real life)? This should be testable.

I meant especially in individual members, as described in the point "priorities." Somewhat along the lines that the topics on LW are not a representative sample of the topics and conclusions relevant to the individual. In other words: the imaginary guide "how to be rational" that I would write for my children differs very much from the guide that LW is providing.


Of course! Guides should teach, not discuss. (Personally, I don't see LW as a guide to anything, but rather as a hivemind-critique-giving life form.)

Sure. And if you ever write that guide, you're still going to borrow heavily from LW. And someone else, writing "how to use your musical talent rationally" or "how to pick your career rationally" or whatever, can do the same.

That is the true value of LW in my book: it presents useful ideas in a sufficiently abstract fashion to make them obviously applicable across a wide range of topics, and it does so in a way a smart high school student can understand. Sure, it sacrifices brevity in order to do so, its lack of infographics and other multimodality is a huge drawback, and it spends time on topics most people won't care about; I'm sure there are other valid concerns. But still, if you want to do better, LW isn't a competitor, it's a shoulder to stand on.