LessWrong lionizes empiricism and science, but then never seems to produce any.
I think you hit on something really important. More things in the vein of "So, I had this idea on how to do better at X, and I tried it out and (it didn't work at all)/(it seems promising)/(I'm still uncertain about...)" could make this a place where ideas are grown and forged.
Yes, this would help. BUT. What we have tried to bring into the culture is things like steelmanning: encouraging the reader to make sense of the writing. This would help too.
There is a general theme to a lot of the flashpoints I’ve witnessed so far on LW 2.0. We're having the usual disagreements and debates, like “what is and isn’t allowed here” or “is it worse to be infantile or exclusive?”. I think underlying a lot of these arguments are mistaken notions about how ideas get made. There is also a lack of self-awareness about how the open nature of LessWrong fundamentally caps how much trust can be expected among participants. These are systemic issues; they're not something Oliver and Ben can easily fix.
Let's start with the bit everyone knows. Changing your mind, really changing your mind, is hard. It's stupid hard. A large part of why LW 2.0 might be useful is that it holds the promise of helping people think better. But in addition to its native difficulty, people are usually in a state of active defense against others getting them to do it. Even on LessWrong, people are generally more interested in reading things that come with the pleasant feeling of insight than in doing homework. In one of his recent posts Eliot expressed nothing less than outright contempt for such people. The problem is that this behavior makes sense.
See, not only is changing your mind hard, it's dangerous. There are people who make their living off getting others to change their minds in dangerous ways, and a lot of our reluctance to perform weird mental gymnastics is totally reasonable in the face of that. Strange formal systems of interaction, weird fake frameworks that are supposed to work well anyway, bizarre ideas about what the real threats facing humanity are: look, I'm not saying this stuff is bad, but you have to be acutely aware of what you're really asking of people. If you dress in rags and look vaguely threatening, people won't want to follow you into the dark alley of the human psyche.
Combine these two things, and it's fairly obvious why CFAR finds that their methods translate well in person but are hard to teach over the internet. They seem convinced that the problem is in the explanation, specifically the operative explanation. They're giving advice at the level of 'use this bolt', which entirely misses the point. What's missing isn't instructions but trust. This is a high-effort, weird thing being pushed by secretive private-sector psychologists. Double Crux probably doesn't need another drop of ink spilled on explanation until the mission and strategy levels of explanation are covered. Not 'how this' but 'why this'.
A necessary consequence of this is that in-person training camps, where instructors have the full use of their charisma and physicality for establishing trust, and where people have explicitly set aside time to focus on this thing, paid good money, and are willing to tolerate cramped conditions for it, are going to see better results than you will as a semi-anonymous internet stranger. Another necessary consequence is that you're just not going to get the high-trust buy-in you need for this on an open forum with loose incentives for it. The kind of work that needs to be done to develop the early stages of an idea looks completely different from the kind needed to develop an idea in its adolescence. Taking these principles seriously suggests the following.
When Ideas Are In Their Infancy
Before an idea has really been developed, it's very vulnerable. The author(s) won't have smooth answers to everyone's probing questions, they won't have perfect rigor, some stuff will be just flat wrong, and sometimes the seed is good but the execution is stupid. If you have to get it perfect on the first iteration, then you're not going to try to have very many ideas. When people write a blog post, you don't see all the effort they put into developing ideas before putting hands to keyboard. To me the poster child for this sort of thing is Dragon Army. Dragon Army was an idea that Duncan explicitly wanted workshop-style feedback on, but the first draft was so horrifying to some people that it just became a disaster. There are not only serious epistemic but also public relations benefits to adopting sane norms around baby ideas. These need:
When Ideas Are In Adolescence
Once an idea hits the blog post format, it's open to incremental improvement at best. Further, because the crowd is so large, it will think of many things the author did not, and ruthlessly point them out. That's not necessarily malicious; it's just a natural consequence of scale and the desire/need to improve. Therefore authors should be prepared for criticism by the time it comes, and readers need to not have their time wasted. In the service of that:
(This post was originally published at https://namespace.obormot.net/Main/GuidedMentalChangeRequiresHighTrust; thanks to the friends who gave feedback.)