Comment author: IlyaShpitser 24 December 2015 05:48:55PM *  -2 points [-]

But you do have time to drop by and criticize content people do produce, right?

Comment author: Mirzhan_Irkegulov 24 December 2015 08:21:37PM 3 points [-]

Criticism from unqualified people should be encouraged. People dislike criticism from those who don't do what they propose others do, for social rather than rational reasons. “You think this band's music is rubbish? Well, write your own music then” is a fallacy. If I go to a restaurant and get terrible food, there's no reason I should have to become a cook before being allowed to complain about it.

In response to LessWrong 2.0
Comment author: MaximumLiberty 23 December 2015 12:32:00AM 2 points [-]

This is a proposal to replace (or supplement) the tagging system with a classification system for content that would be based on three elements: subject, type, and organization.

For me, one of the problems with current LessWrong is that it has too many interesting distractions in it. Ideally, I would want to follow just a few things, with highly groomed content. For example, I'd like to see a section devoted to summaries of recent behavioral psychology articles by someone who understands them better than I do. I suspect that other people would like to see other things that I'd prefer to filter out. Examples: artificial intelligence research, effective altruism, personal productivity. I'm not knocking these subjects; but when I allocate time, I'd like to be able to allocate 100% to what I want to see and 0% to what I don't.

That suggests that one area where Less Wrong could be improved is at the top level of organization. I'd suggest that content be organized in subjects, like Behavioral Psychology, Effective Altruism, Personal Productivity, and Artificial Intelligence. Now you might say that the tagging system does this. It kind of does, but it is insufficiently prescriptive. An article on effective altruism could have no tags, or many, or not the ones I think of.

Currently, the content is also classified by type, in Main and Discussion. Frankly, the difference between the two makes little sense to me. But I think there is another classification that would be helpful when combined with prescriptive subjects. I'd classify content type more like this:

  • Research, used for summarizing a publication elsewhere, with the summary provided by someone who knows something about it
  • Link, used for identifying some information that might be of use to the community
  • Commentary, used for the normal kind of stuff that shows up in discussion
  • Sequence, assigned by moderators to the original stuff that made this site what it is, or at least was
  • Reading, used for reading groups for specific books
  • Meetups, used only to announce meetups
  • Organization, used to announce and promote organized action

Then the third classification of content is by organization, which could be empty. The community needs to remain connected to the organizations it spawned. Possible initial values would be MIRI, CFAR, FLI, etc. I'd hope that those organizations would ensure that at least their own research got into the relevant subject under a Research classification, and that their own blog posts got thrown over into the relevant subject under a Link classification.
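
To make this concrete, here is a minimal sketch of what such a three-part classification could look like as a data model. Everything here is illustrative only (the subject, type, and organization names are just the examples above, not an actual LessWrong schema), but it shows how prescriptive categories would let a reader filter to exactly the subjects they want:

    from dataclasses import dataclass
    from enum import Enum
    from typing import List, Optional, Set

    class Subject(Enum):
        BEHAVIORAL_PSYCHOLOGY = "Behavioral Psychology"
        EFFECTIVE_ALTRUISM = "Effective Altruism"
        PERSONAL_PRODUCTIVITY = "Personal Productivity"
        ARTIFICIAL_INTELLIGENCE = "Artificial Intelligence"

    class ContentType(Enum):
        RESEARCH = "Research"
        LINK = "Link"
        COMMENTARY = "Commentary"
        SEQUENCE = "Sequence"
        READING = "Reading"
        MEETUP = "Meetup"
        ORGANIZATION = "Organization"

    class Org(Enum):
        MIRI = "MIRI"
        CFAR = "CFAR"
        FLI = "FLI"

    @dataclass
    class Post:
        title: str
        subject: Subject            # exactly one prescriptive subject
        content_type: ContentType   # exactly one content type
        org: Optional[Org] = None   # organization tag, may be empty

    def front_page(posts: List[Post], wanted: Set[Subject]) -> List[Post]:
        """Return only posts in the subjects a reader opted into
        (the 100%/0% filtering described above)."""
        return [p for p in posts if p.subject in wanted]

The point of using closed enumerations rather than free-form tags is that an article on effective altruism cannot end up with no subject, or several, or one I wouldn't think to search for.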

This would make it easier for me to justify coming back to read Less Wrong daily, because I wouldn't have to expose myself to wonderful distractions in order to find the things I'd like to keep up on.

Comment author: Mirzhan_Irkegulov 24 December 2015 10:54:21AM 0 points [-]

I somewhat support what you're saying, but I also believe that 100% filtering would lead to a filter bubble. Suppose a smarter version of you would, upon reflection, realize that Effective Altruism is super-duper important. But you've filtered out EA-related articles on LW, so you will never be exposed to it.

In response to comment by gjm on LessWrong 2.0
Comment author: Vaniver 04 December 2015 01:36:05AM 6 points [-]

I agree with you that the motivational bits (wanting to acculturate to LW in order to be around the cool people) rely on the cool people being here.

The main reason I'm uncertain about the forum as the right model is that I don't see it in many other educational contexts and I think there are weird dynamics around the asymmetry between questioners and answerers and levels of competence/experience. (The cool people want some, but not too much, interaction with not-yet-cool people.) Perhaps the Slack and IRC channels and similar venues deserve some more of my attention as potential solutions here.

Vaniver's other suggestion for something that would serve this need better than a Redditalike is Stack Overflow. That's a better fit, but the SO model works best where what people need is answers to specific questions that have clear-cut answers.

Agreed. This dynamic gets even worse when the problems are psychological. If someone goes to Stack Overflow and posts "hey, this code doesn't do what I expect. What's going wrong?" we can copy the code and run it on our machines and find the issue. If someone goes to Sanity Overflow and posts "hey, I'm akratic. What's going wrong?" we... have a much harder time.


One of the things that comes up every now and then is the idea of rewriting the Sequences, and I think the main goal there would be to make them with as little of Eliezer's personality shining through as possible. (I like his personality well enough, but it's clear that many don't, and a more communal central repository would reduce some of the idiosyncrasy concerns.)

Some think that the Sequences could be significantly shortened, but I suspect that's optimism speaking instead of experience. There are only a handful of sections in the Sequences where Eliezer actually repeats himself, and even then it's likely one of those cases where it really is worth giving readers three slightly different versions of the same thing to make sure they get it.

In response to comment by Vaniver on LessWrong 2.0
Comment author: Mirzhan_Irkegulov 06 December 2015 06:26:07AM 4 points [-]

rewriting the Sequences

Not just rewriting them. My biggest problem with LW-rationality is that I haven't internalized it on a very deep, systematic level, and probably can't, no matter how many times I re-read the articles. Instead of a long chain of blog posts about everything on Earth, there should be a very focused rationality textbook with exercises, with spaced repetition and all that science of teaching and learning baked in. Luke Muehlhauser argued LW is a philosophy blog. Yet after reading RAZ I don't feel like I understand LW epistemology on a deep level. I still don't feel confident arguing with philosophers, even if I intuitively understand they are full of shit.

While I have many intuitions about how to be rational, and I'm ridiculously more sane and productive than I was a year ago, thanks to LW, my understanding of LW maths, science and philosophy is vague and not at all transparent.
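
To illustrate what “spaced repetition baked in” could mean in practice, here is a minimal sketch of an SM-2-style review scheduler, the kind of algorithm behind Anki-like tools. The numbers and names are illustrative only, not code from any existing LW or CFAR project:

    from dataclasses import dataclass

    @dataclass
    class Card:
        easiness: float = 2.5     # SM-2 easiness factor
        interval_days: int = 0    # days until the next review
        repetitions: int = 0      # consecutive successful reviews

    def review(card: Card, quality: int) -> Card:
        """Update a card after a review graded 0 (blackout) to 5 (perfect).

        Simplified SM-2: a failed recall resets the schedule, while each
        successful recall stretches the next interval by the easiness factor.
        """
        if quality < 3:
            card.repetitions = 0
            card.interval_days = 1
        else:
            if card.repetitions == 0:
                card.interval_days = 1
            elif card.repetitions == 1:
                card.interval_days = 6
            else:
                card.interval_days = round(card.interval_days * card.easiness)
            card.repetitions += 1
        # Easiness drifts up after good recall and down after poor recall,
        # but never below the SM-2 floor of 1.3.
        card.easiness = max(1.3, card.easiness
                            + (0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02)))
        return card

A rationality textbook with exercises could schedule which exercise a reader sees next with exactly this kind of rule, instead of leaving review entirely to the reader's willpower.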

In response to comment by shminux on LessWrong 2.0
Comment author: philh 03 December 2015 11:21:50AM 6 points [-]

(I haven't RTFA or most of the comments yet.)

Or maybe it should be a rationality-related aggregator/hub, where all relevant links get posted and discussed.

When ESRogs started /r/RationalistDiaspora, that's what I was hoping it would become.

I think the main thing that went wrong is that not enough people saw it. It didn't pick up critical mass, or even self-sustaining mass. Now ESRogs seems to be the only submitter, and there's not enough voting to act as a filter or to make links show up on my reddit front page.

But there's nothing stopping it from becoming that thing, if a bunch of people did start using it all at once. To that end, I just submitted something.

In response to comment by philh on LessWrong 2.0
Comment author: Mirzhan_Irkegulov 06 December 2015 06:11:23AM 9 points [-]

I would love there to be a single, canonical rationality-related link aggregator (with tags and other ways of categorizing!), but I don't want it to be on Reddit. Reddit has an implicit culture of transience. You can't discuss posts that are too old. Links are ordered chronologically. Links can't be grouped or categorized. It's hard to search for old or obscure links.

OTOH maybe a link aggregator should be transient, because the nature of blogs, news sites, Facebook feeds, and tumblr posts is transient too. Today Qiaochu Yuan or Scott Alexander finds a particular article interesting; in a year's time that article will be irrelevant.

There's also link rot: many old links to interesting material are now 404s.

Maybe we shouldn't aggregate links at all, but aggregate the knowledge itself, which suggests something similar to the LW wiki. But I strongly believe the wiki is an overrated model for aggregating knowledge, and it wouldn't work for aggregating rationality-related knowledge.

There should be a rationality knowledge base, something that transcends wikis, FAQs, tutorials, blog posts, link posts, Stack Overflow, Wikipedia outlines. Maybe it would require thinking intensely for 5 minutes (like how CFAR teaches), maybe it would require coming up with completely new concepts and code.

But this knowledge base would have to heavily incentivize people to contribute to it, otherwise the actual knowledge is never going to be written. Counter-intuitively, wikis are terrible at incentivizing contribution.

In response to LessWrong 2.0
Comment author: gjm 03 December 2015 12:26:59PM 13 points [-]

Something seems wrong to me about the "Welcoming Committee / Rationality Materials" section in the OP. I mean, if we imagine someone arriving in Rationalistan as a result of a link in the HPMOR author's notes or something of the kind and getting intimidated by how much stuff there is and/or how little they feel they know ... whyever would what they then need look like Wikipedia? Wikipedia is terrific and I am a huge fan, but it's not great at providing "social reinforcement" and "other people to ask questions of".

Vaniver's other suggestion for something that would serve this need better than a Redditalike is Stack Overflow. That's a better fit, but the SO model works best where what people need is answers to specific questions that have clear-cut answers. Surely that's not the situation of someone newly arriving in Rationalistan. Their questions are going to be more like "WTF is all this?" or "I think I need to reevaluate how I think about lots of important questions but I barely know where to start; what shall I do?". Stack Overflow itself (and I think most of its offshoots) strongly discourages that sort of open-ended question on the grounds that that's not what SO is good at.

The biggest weakness of Less Wrong as a welcoming committee isn't that it's a forum rather than an encyclopaedia or a Q&A site; I think a forum is the Right Thing for that purpose. The biggest weakness is the whole "ghost town of unquiet spirits" thing -- which I think is an unkind exaggeration but it's hard to deny has some truth to it. LW would make a better welcoming committee if it were livelier and more impressive, and it won't gain those attributes as a welcoming committee.

Having said that, I agree that there's a place for something Wikipedia-like. The LW wiki was meant to be that, but it's never had a great deal of participation. I have no idea what could be done to change that. People contribute to things to benefit others, or to benefit themselves. Editing a wiki is never going to bring much personal gain, and when all the material that would go into the wiki is already out there in other forms (e.g., the Sequences) it's hard to see that the benefit is going to be big enough to excite people doing it altruistically.

In response to comment by gjm on LessWrong 2.0
Comment author: Mirzhan_Irkegulov 06 December 2015 05:53:15AM 16 points [-]

While I'm not against the LW wiki itself (it already exists, for starters), I'm very much against making LW “something like a wiki”, because I'm >50% confident it would fail. I flinched when I read “community-maintained wiki pages with explanations and links” in the original post, because community-maintained wikis are almost universally dead before reaching maturity.

Michael Snoyman wrote a small article on why people are willing to contribute to free software documentation via pull requests, but not via wiki edits. I wholeheartedly recommend that everyone read the article, but the gist is as follows.

For a wiki:

  • maintainers think they are encouraging the community to write documentation
  • contributors are intimidated by the wiki, because they are afraid they aren't justified in editing it
  • readers rightly expect incomplete, unstructured, and messy information.

For documentation that is improved through pull requests:

  • maintainers deal with documentation in atomic fashion using tools they know
  • contributors don't worry about inadvertently doing harm, because their contributions are checked by the maintainers
  • readers know that the information is canonical, because somebody reviewed the contribution before publishing it.

Why would LW-as-a-wiki discourage contribution (writing wiki-like articles)? Of all the wikis I remember, the only successful ones are Wikipedia and very narrowly focused wikis (e.g. UESP for The Elder Scrolls, or Ring of Brodgar for the video game Haven and Hearth). In both cases they are thriving because there are very clear expectations of what a final article is supposed to look like.

LW is far from being a definitive canonical reference, which is good. Every rationality-relevant topic could be explained from different perspectives, so I would hate for there to be the one definitive article on, say, control theory.

Then you'd have all the Wikipedia problems: edit wars, deletionism, constant arguing over rules and article layout, the slowly corrupting power of wiki moderators, censorship. On a wiki everything is supposed to be canonical, so much effort will be wasted on arguing over canonical definitions and phrasings, or on referencing more and more rules and guidelines. The wiki model has bad incentives: whoever is more stubborn wins.

LW-as-a-wiki would stagnate very quickly, as there would be huge psychological and social obstacles to contributing. I will go into these obstacles in greater detail in follow-up comments. For now I want to say that we should analyze what is wrong with the wiki model from the perspective of cognitive psychology and the science of human motivation, and see how we can do better.

The most important revolutionary idea behind LW (and more specifically lukeprog-era LW) is that science is a superweapon: if you diligently learn the relevant science and then try to fix the problem, you can outdo your competitors by a large margin (see also: beating the averages). So maybe we should figure out the psychology of motivation, incentives for contributing, that kind of thing, before patching the LW codebase. Maybe LW should be a community blog, a Reddit-style site, a wiki; or maybe it should be something completely different.

Comment author: Error 03 February 2014 10:14:50PM 1 point [-]

I'm somewhat curious what the reaction was. Did they notice the contradiction between wanting the kid to go to church and not wanting him to actually act on what he learned there?

In response to comment by Error on On saving the world
Comment author: Mirzhan_Irkegulov 18 October 2015 10:44:37PM 1 point [-]

One hypothesis I have is that when you have a very bad epistemology and your beliefs consist of memorized atomic propositions handed down from an authority figure, you eagerly want more people to agree with you, but not people who agree with you on everything except one important atomic belief.

Religious parents and priests probably have this subconscious fear that the kid might go astray with their own theology. It's like you're a member of The People's Front of Judea and the person you'd really want to join you joins The Judean People's Front instead.

The bit from The Life of Brian above was inspired by actual Marxist groups, which constantly splintered over all kinds of issues. Marxist epistemology is nonsense; any explanation of it is just word salad. Dialectical materialism is a mysterious answer to a mysterious question. Therefore, if you're a member of Socialist Appeal, it's impossible to argue a member of a Socialist Party into switching allegiance, because there's no real epistemology, and therefore no possible inference.

Members of these organizations join for incidental reasons, not through independent inference. I suspect that the members know this, at least in a corner of their minds. That's why at least some of them might not be as enthusiastic about recruiting new members: every time you give your friend a book by Marx, you're afraid that they might acquire beliefs that are not entirely compatible with your own set of Marxist beliefs.

This hypothesis might not apply to this particular situation, though. Maybe religious parents want their kid to be devout, but not so devout that the kid becomes a celibate monk or nun instead of the patriarch or housewife of a traditional Christian family.

Comment author: MichaelAnissimov 08 August 2009 12:38:55AM 4 points [-]

My guess is that akrasia could be more effectively fought if this community called it what it is -- laziness -- instead of using a fancy Greek name.

Comment author: Mirzhan_Irkegulov 15 October 2015 06:40:47PM 3 points [-]

Everywhere I see the term “akrasia” used, people mean procrastination or laziness. Indeed, it makes no sense to just add a fancy Greek name to signal sophistication or belonging to the rationality tribe. But akrasia can succinctly denote a whole cluster of things “you do against your better judgement”: procrastination, depression, anxiety, jealousy, envy, alcoholism, tobacco smoking, drug addiction, compulsive lying, sex addiction, insecurities, self-harm, self-hatred, bitter hatred for something, constant arguing, seeking external validation, etc.

These are things that people do, realize on reflection are harmful or wasteful, but can't stop doing anyway, again and again. Any destructive feeling or behaviour that people want to control but can't fits the definition of akrasia. CBT claims that many of these are related, have the same origin, and can be treated similarly. It's nice to have an umbrella term.

Comment author: Mirzhan_Irkegulov 24 September 2015 03:06:49PM 2 points [-]

Your writing is good, much better than mine, even though I came up with the same idea before. Please continue writing.

There is one thing I disliked: the mention of “liberals” and “conservatives” as the only two possible political positions, two “sides”. You already understand the package-deal fallacy: someone who identifies as a “liberal” and supports most stereotypically “liberal” policies might still not support all of them, or might support some of the “conservative” policies.

But there are policies that you can't pigeonhole into “liberal” and “conservative”. Many policies and ideas are not even binary: there are not two contradictory positions, but ten. Therefore even the idea of a one-dimensional left-centre-right continuum is fallacious.

It's also important to note that not all English-speaking Internet users are from the US. There are well-educated people who live in countries you don't even know exist. So mentioning US political trends or personalities without a brief explanation is an additional obstacle to understanding. If you want your texts to be widely read by non-Americans, you should make them more general, so that they don't require local American intuitions and background knowledge.

Finally, there is a point that is rarely brought up on LW but is very important. The reason Luke Muehlhauser is so cool is that he drew attention to the neglected virtue of scholarship, the fact that, in some sense, there is no “royal road to science”. Strictly speaking, this is not true: some roads are definitely more effective than others. A good textbook or good study methods might accelerate your learning tenfold, or a hundredfold.

But at the end of the day, once you have your rationality, your effective study methods, your time-management system, your Pomodoro, your Bayesian epistemology and whatnot, the only way you can actually understand something on a deep level is to sit your ass on the chair and read a freaking book. Sometimes more than once, with exercises, a whiteboard, and discussions.

You can optimize learning and thinking, for sure, but you can't skip the dumb “hard work” part. And this leads us to the fact that we have 24 hours a day. Most people have stupid jobs, little money, kids, commitments, social status to maintain. The little free time they have they can't spend efficiently on self-education, because they are tired after work, stressed, anxious, and full of psychological barriers that make them think “nah, I'm too stupid to ever learn physics on my own”. And I'm not even talking about the billions of people living in complete squalor; I'm talking about people with access to the Internet and books.

A middle-aged US conservative has enough time to read one book a month. Not because he's a “stupid redneck”, but because all his other time is spent raising kids, fixing his car, going to work, inviting friends for a meal, etc. So when he goes to the bookstore, he sees a nice-seeming book by Ann Coulter. He has no training in political science, so how would he ever know that the book is a waste of time? He has no point of reference, no background knowledge. One might say he buys this book to assuage his tribal evo-psych desires, or because he's biased. I say he simply can't make the huge inferential jumps that would lead him to conclude that reading Ann Coulter is a waste of time.

You see, saying that someone is biased, or a product of evolution in the ancestral environment, or prone to signaling and status-oriented behavior, and so on, is just a very elaborate way of saying that someone is stupid. And stupid is a grave insult. People near-universally hate stupid people, and treat them with either condescension or hostility by default. It's as if we are constantly trying to prove that their stupidity is their “fault”. You'd think that if somebody is stupid, that is, lacks the knowledge or mental skills to come to the right conclusion, then surely the results of their stupidity are not their fault? But that's not how people automatically think, and it takes conscious effort to rewrite this default attitude towards stupid people.

Suppose the above-mentioned US conservative somehow magically decides to buy Bill Maher's book instead. He read somewhere that there's no harm in sometimes reading what your political enemies write, that even the worst propaganda might contain a grain of truth, and that it's virtuous to try to learn a position you dislike and evaluate it on your own. Somehow, his cognitive, social, and evo-psych forces didn't stop him from buying that book. Would he now be able to support views that are closer to the truth? Most likely not, except slightly or by epistemic luck.

Some political positions, like gay marriage, might be a “no-brainer”. But open borders is not a “no-brainer”, and neither are negative income tax, raising the minimum wage, military interventions, increasing inflation, or decreasing inflation. You need to read books for that, and think with concentration for hours, and consult various sources, and maybe write down equations, and maybe even use statistical methods. Most people literally don't have time for that.

Teaching everyone the dangers of death spirals and how politics can be a mind-killer would make the world a much better place. But I don't think even a next-door fundamentalist Christian, trained in LW rationality, would necessarily accept evolution. Because accepting evolution is not about rational thinking; it's about actually understanding how it works. Yes, you can also accept it without understanding, and that's what happens most of the time. I'm pretty sure most /r/atheism subscribers don't actually understand evolution's mechanism and thus aren't justified in their belief.

In response to Original Seeing
Comment author: roland 23 April 2011 10:22:20PM 3 points [-]

I think this video says it all: http://www.youtube.com/watch?v=pIAoJsS9Ix8

In response to comment by roland on Original Seeing
Comment author: Mirzhan_Irkegulov 21 September 2015 06:56:27PM 0 points [-]

It says:

The video is private.

Comment author: Mirzhan_Irkegulov 10 September 2015 12:09:46PM 0 points [-]

Please correct this typo:

⟨Tv, v⟩
