Despite all the years we've talked about practical rationality on this site and others, it still seems like we're a community that loves chess, talks about chess constantly, reads dense theoretical books about chess (which usually contain few tactical examples), and then individually we each play roughly one chess game per month and don't pay particular attention to how we performed in the game. This is how you end up thinking you know a lot about chess without actually improving at chess, and in fact losing to street hustlers who play chess all day but never read about its theoretical aspects.
In fact, we don't even have the most basic framework for determining who is doing better or worse at rationality, beyond "do they seem happy? does it seem like they're achieving their values?" There can be no gaming, ranking, or hierarchy without such a framework. You can't be a black belt in rationality unless you have some way of showing that you're better at rationality than a white belt, and, for the same reason, white belts have no reason to listen to black belts.
I have tried countless times to make "games" or "tools" to structure various aspects of my own decision-making, to bootstrap my rationality, or to outright train my abilities. These projects usually either fail or end up being so tailor-made for the problem I'm facing that I never touch them again.
Some of the tools that I'm aware of, which have proven consistently useful in one sense or another, include the following:
Anki
PredictionBook
Beeminder
Google Sheets, for tracking various things that don't fit into Beeminder
Also, practical solutions that implement something like GTD (Getting Things Done) are useful:
Nozbe or other GTD apps
Evernote for quick capture and sorting of information and notes
One hypothesis for the feeling around rationality improvement is that rationality does not have well-defined core frameworks to serve as guideposts. Improvements therefore feel like grab bags of domain-specific tricks rather than something with a clearly defined corpus, feedback loops/measures of progress, and, most importantly, a sense that everyone who works on the thing is pushing meaningfully in the same direction. The Rationality Quotient begins to take steps in this direction, and obviously CFAR wants to be a testbed to try to develop such a thing if possible.
I think this is a result of focusing on the wrong level of abstraction. Specifically, material in this domain winds up looking like 'share things that look like checklists of best practices.' Which is great, but not the thing. The thing is more like figuring out which knobs exist, which can be turned, and what to set them to in order to become a person who can generate checklists of best practices on the fly.
The turn towards things like Focusing and TAPs has been a huge step in the right direction, AFAICT. The thing that is missing is what I will label a sense of collaboration. It could be that much of the material to be explored is better explored in high-bandwidth interactions rather than in text, and that this is causing some of the problem.
Yes, using best practices is (in some situations) a rational decision, but it is not rationality.
It is also rational to have some division of labor: people who produce the checklists, and people who use them, because developing good checklists requires time and resources.
Rationality itself is more like the art of creating such checklists, or evaluating the existing ones.
The way to winning includes both using and creating checklists. (I guess the optimal ratio depends on the quality of the existing checklists, on the resources we can spend on our own research, on how well the environment allows us to acquire more resources if we increase our skills, etc.)
Several people have suggested to me that perhaps the reason LessWrong has gone mostly silent these days is that there's only so much to be said on the subject of rationality, and the important things have been thoroughly covered.
Really? It seems to me that by far the most obvious hypothesis is that all of the most interesting people left to do something else. (And I think I can credibly claim to be one of those people - before I left LW I was consistently one of the top contributors if not the top on any given day, and I left because the comments on my posts were terrible, not because I didn't have anything else to say.)
Anyway, step one to writing something good is to have something worth saying, so to that end I think we should strongly encourage everyone to just slam themselves into reality a lot more and pay attention to what happens.
And I think I can credibly claim to be one of those people
I definitely regarded you as one, for what it's worth.
I would like to read you in Main or elsewhere.
Preferably not about "rationality", but certainly about DT, AI, or something like that. Or, if you can sway the term "rationality" in your direction, so much the better.
I'll try and taboo out the term "rationality" when convenient. I think LW has created a strong illusion in me that all of the things I listed and more are "the same subject" -- or rather, not just the same subject, but so similar that they can all be explained at once with a short argument. I spent a lot of time trying to articulate what that short argument would be, because that was the only way to dispel the illusion -- writing out all the myriad associations which my brain insisted were "the same thing". These days, that makes me much more biased toward splitting as opposed to lumping. But, I suspect I'm still too biased toward lumping overall. So tabooing out rationality is probably a good way to avoid spurious lumping for me.
A comment by AnnaSalamon on her recent article:
I wouldn't presume to write "How To Write Good LessWrong Articles", but perhaps I'm up to the task of starting a thread on it.
To the point: feel encouraged to skip my thoughts and comment with your own ideas.
The thoughts I ended up writing are, perhaps, more of an argument that it's still possible to write good new articles and only a little on how to do so:
Several people have suggested to me that perhaps the reason LessWrong has gone mostly silent these days is that there's only so much to be said on the subject of rationality, and the important things have been thoroughly covered. I think this is easily seen to be false, if you go and look at the mountain of literature related to subjects in the sequences. There is a lot left to be sifted through, synthesized, and explained clearly. Really, there are a lot of things which have only been dealt with in a fairly shallow way on LessWrong and could be given a more thorough treatment. A reasonable algorithm is to dive into academic papers on a subject of interest and write summaries of what you find. I expect there are a lot of interesting things to be uncovered in the existing literature on cognitive biases, economics, game theory, mechanism design, artificial intelligence, algorithms, operations research, public policy, and so on -- and that this community would have an interesting spin on those things.
Moreover, I think that "rationality isn't solved" (simply put). Perhaps you can read a bunch of stuff on here and think that all the answers have been laid out -- you form rational beliefs in accord with the laws of probability theory, and make rational decisions by choosing the policy with maximum expected utility; what else is there to know? Or maybe you admit that there are some holes in that story, like the details of TDT vs UDT and the question of logical uncertainty and so on; but you can't do anything meaningful about that. To such an attitude, I would say: do you know how to put it all into practice? Do you know how to explain it to other people clearly, succinctly, and convincingly? If you try to spell it all out, are there any holes in your understanding? If so, are you deferring to the understanding of the group, or are you deferring to an illusion of group understanding which doesn't really exist? If something is not quite clear to you, there's a decent chance that it's not quite clear to a lot of people; don't make the mistake of thinking everyone understands but you. And don't make the mistake of thinking you understand something that you haven't tried to explain from the start.
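As a concrete illustration of the "probability theory plus maximum expected utility" story referenced above, here is a minimal Python sketch. The scenario, the numbers, and the two-option decision are entirely made up for illustration; nothing here comes from the original comment:

```python
# Minimal sketch: a Bayesian belief update followed by an expected-utility choice.
# All numbers are invented for illustration.

def bayes_update(prior: float, likelihood_if_true: float, likelihood_if_false: float) -> float:
    """Posterior probability of a hypothesis after observing one piece of evidence."""
    numerator = prior * likelihood_if_true
    denominator = numerator + (1 - prior) * likelihood_if_false
    return numerator / denominator

# Belief: "this project will succeed."
prior = 0.30
# Evidence: a pilot test went well (more likely if the project is sound).
posterior = bayes_update(prior, likelihood_if_true=0.8, likelihood_if_false=0.3)

# Decision: invest more, or stop. Utilities are hypothetical payoffs.
utilities = {
    "invest": posterior * 100 + (1 - posterior) * -40,  # big win if it succeeds, loss if not
    "stop": 0.0,                                        # walk away with nothing
}
best = max(utilities, key=utilities.get)
print(f"posterior = {posterior:.2f}, expected utilities = {utilities}, choose: {best}")
```

The point of the sketch is only that the "laws" are easy to state in a toy setting; the comment's questions are about whether you can actually carry this out in messy real situations.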
I'd encourage a certain kind of pluralistic view of rationality. We don't have one big equation explaining what a rational agent would look like -- there are some good candidates for such an equation, but they have caveats such as requiring unrealistic processing power and dropping anvils on their own heads if offered $10 to do so. The project of specifying one big algorithm -- one unifying decision theory -- is a worthy one, and such concepts can organize our thinking. But what if we thought of practical rationality as consisting more of a big collection of useful algorithms? I'm thinking along the lines of the book Algorithms to Live By, which gives dozens of algorithms which apply to different aspects of life. Like decision theory, such algorithms give a kind of "rational principle" which we can attempt to follow -- to the extent that it applies to our real-life situation. In theory, every one of them would follow from decision theory (or else, would do worse than a decision-theoretic calculation). But as finite beings, we can't work it all out from decision theory alone -- and anyway, as I've been harping on, decision theory itself is just a rag-tag collection of proposed algorithms upon closer inspection. So, we could take a more open-ended view of rationality as an attempt to collect useful algorithms, rather than a project that could be finished.
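To make the "collection of useful algorithms" framing concrete, here is a small sketch of one algorithm discussed in Algorithms to Live By: the 37% optimal-stopping rule for sequential choice (the "secretary problem" heuristic). The simulation setup and parameters below are my own choices, not taken from the comment or the book:

```python
# Sketch of the 37% optimal-stopping rule: look at roughly the first 1/e (~37%)
# of options without committing, then take the first option better than
# everything seen so far. Simulation parameters are arbitrary, for illustration.
import math
import random

def stop_at_best(options: list[float], look_fraction: float = 1 / math.e) -> float:
    """Return the value chosen by the look-then-leap strategy."""
    cutoff = int(len(options) * look_fraction)
    benchmark = max(options[:cutoff], default=float("-inf"))
    for value in options[cutoff:]:
        if value > benchmark:
            return value
    return options[-1]  # forced to take the last option if nothing beat the benchmark

def success_rate(n_options: int = 100, trials: int = 10_000) -> float:
    """How often the strategy ends up with the single best option."""
    hits = 0
    for _ in range(trials):
        options = [random.random() for _ in range(n_options)]
        if stop_at_best(options) == max(options):
            hits += 1
    return hits / trials

print(f"picked the best option in ~{success_rate():.0%} of trials")  # roughly 37%
```

Each such algorithm is a local "rational principle" for a particular class of situations, which is the sense in which practical rationality might look more like a growing library than a single finished equation.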
A second, more introspective way of writing LessWrong articles (my first being "dive into the literature"), which I think has a good track record: take a close look at something you see happening in your life or the world and try to make a model of it, try to explain it at a more algorithmic level. I'm thinking of posts like Intellectual Hipsters and Meta-Contrarianism and Slaves to Fashion Signalling.