The following is motivated by:

I've been a long-time lurker on Less Wrong, and I've noticed the recurring criticism that despite its focus on rationality, the community lacks structured training to develop practical rationality skills. Eliezer Yudkowsky talks about rationality as a martial art because it's something that can be trained and refined through deliberate practice. But where is our dojo?

A model that comes to mind is a website like LeetCode, where programmers can solve coding challenges, share solutions, and see how others approach the same problems. LeetCode can sometimes encourage overfitting to specific problem types, so it's not a perfect analogy, but the community-driven aspect is what interests me: you can see how other people approach the same problem. Could something similar be adapted for rationality?

Imagine a platform where, instead of solving coding puzzles, users engage with problems designed to train rational thinking. Here are a few types of problems that might fit:

  1. Cognitive Bias Detection: Users could review novel, real-world scenarios and try to identify what cognitive bias or logical fallacy is present. The goal would be to train pattern recognition for biases without simply memorizing common examples. For instance, a scenario might subtly include a case of confirmation bias or anchoring, and users would need to spot it.
  2. Calibration Training: One of the most important skills in rationality is aligning your confidence with reality. For each problem or scenario, users could submit a confidence interval along with their answer. This serves as a double-training: users practice assessing their certainty, and over time, they get feedback on how well-calibrated they are. (A minimal sketch of this feedback loop follows the list.)
  3. Bite-Sized, Practical Challenges: The focus should be on small, actionable exercises rather than lengthy theoretical discussions. For example, a problem might ask users to predict an outcome based on limited data, forcing them to confront the common pitfalls of overconfidence or representativeness heuristics.
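
To make the calibration feedback concrete, here's a minimal sketch in Python. It uses a single probability per answer rather than a full confidence interval, and every name and data shape is a placeholder rather than a finished design: the platform keeps each answer with its stated confidence and reports observed accuracy per confidence bucket.

```python
from collections import defaultdict

def calibration_report(responses):
    """Compare stated confidence with observed accuracy, per decile bucket.

    `responses` is a list of (confidence, was_correct) pairs, where
    confidence is a probability in [0, 1] and was_correct is a bool.
    """
    buckets = defaultdict(list)
    for confidence, was_correct in responses:
        idx = min(int(confidence * 10), 9)  # 0.0-0.1 -> bucket 0, ..., 0.9-1.0 -> bucket 9
        buckets[idx].append((confidence, was_correct))

    report = []
    for idx in sorted(buckets):
        items = buckets[idx]
        stated = sum(c for c, _ in items) / len(items)
        observed = sum(correct for _, correct in items) / len(items)
        report.append({
            "bucket": f"{idx / 10:.1f}-{(idx + 1) / 10:.1f}",
            "stated_confidence": round(stated, 2),
            "observed_accuracy": round(observed, 2),
            "n": len(items),
        })
    return report

# Example: a user who is overconfident at the high end.
history = [(0.9, True), (0.9, False), (0.9, False), (0.6, True), (0.6, False)]
for row in calibration_report(history):
    print(row)
```

A user whose 0.9 bucket shows accuracy near 0.6 is overconfident at the high end; surfacing that gap over time is the feedback loop described above.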

This kind of platform could be a place where people practice and refine their skills, not just absorb entertaining ideas in a way that some say is only weakly applicable.

"identify the bias" type problem for a prototype i'm working on

I have a few years of experience in Software Engineering (backend and ML) and have been thinking about building a tool like this for my own use. However, if others would find it valuable, I'd be open to expanding it into something that the wider community could use as well. It could even present an opportunity to create a sustainable project with some potential financial benefits along the way. I'd love to hear if there’s interest in such a platform and what features might be most helpful to include.

8 Answers

abstractapplic


I am extremely interested in this, and all similar efforts in this space. I agree our community should be doing much more along these lines.

Regarding your specific ideas:

Cognitive Bias Detection

Something about training people to categorize errors - instead of just making good decisions - rubs me the wrong way. Also, there's a lot of pre-existing work (I found out about this earlier today).

Calibration Training

The Credence Calibration Game exists. So does my variation on the same idea (see also the associated lesson plan). So do play-money and real-money prediction markets. That said, I do think there's a valuable and unfilled niche for something which doesn't require a download and has a nice user interface and has a four-digit number of questions and lets you check your answers immediately (. . . though I don't know how many people other than me would consider it valuable).

Bite-Sized, Practical Challenges

I am very much in favor of this, to the point where I'm already (tentatively) planning to (eventually) build some games with a similar motivation. Relatedly, the "ask users to predict an outcome based on limited data" example sounds like a description of that genre I invented (though "Bite-Sized" suggests you're thinking in terms of something much more polished/generally-accessible).

(Side note: A subtle benefit of the "Practical Challenges" approach is that it can correct for biases you weren't aiming for. A large part of my motivation for making D&D.Sci was "forcing them to confront the common pitfalls of overconfidence or representativeness heuristics"; I found that a Lesswronger working in a Data Science context will more often be insufficiently confident, and place too little weight on surface appearances; my endeavor 'failed' gracefully and people got a chance to notice those errors instead (plus various other problems I didn't even consider).)

-

I look forward to seeing what comes of this. If you want anything playtested, please let me know.

I appreciate the reply! 

Something about training people to categorize errors - instead of just making good decisions - rubs me the wrong way

Are you able to pinpoint exactly what gives you this feeling? The goal of this problem type would be to train the ability to recognize bias to the point where it becomes second nature, with the hope that this same developed skill would also trigger in your own thought processes. I believe it’s generally easier to evaluate the truthfulness of a statement than to come up with one initially, so this training would he... (read more)

abstractapplic
  Less a single sharp pinpoint, more a death of a thousand six cuts:
  • The emphasis on learning the names of biases is kinda guessing-the-teacher's-password-y.
  • You'd need to put forth an unusual effort to make sure you're communicating the subset of psychological research which actually replicates reliably.
  • Any given bias might not be present in the student or their social/business circle.
  • The suggested approach implies that the set of joints psychologists currently carve at is the 'best' one; what if I happen to see Bias A and Bias B as manifestations of Bias C?
  • I worry some students would round this off to "here's how to pathologize people who disagree with me!" training.
  • Like I said, this is the kind of fruit that's low-hanging enough that it's mostly already picked.
  All that said, I still think this is potentially worthwhile and would still playtest it if you wanted. But I'm much more excited about literally every other idea you mentioned.

Raemon


FYI I'm working on an angle on this. One of my dreams is to make a proper website, but for now it's been more efficient to assemble a collection of puzzles and exercises that various other people have built, and layer rationality training exercises on top of them.

My agenda is written up in the Feedbackloop-first Rationality sequence. The basic idea is that rationality is bottlenecked on inventing better feedbackloops that train the actually important skills. (You can look over the "exercises" section)

My general strategy has been to take existing puzzles/exercises that have a fair amount of depth, such that in order to solve them you're going to need to:

  • make a plan for gaining more information about the puzzle
  • make a plan for acting on that information

Which naturally lends itself well to practicing the skills of:

Thanks for sharing this! I've read Feedbackloop-first Rationality, and it's definitely contributed to why I want to build something like this. I've even been looking for Thinking Physics-style problems that might be free to use online. Getting a diverse, high-quality set of interesting problems will, I think, be difficult, whether it's aggregated, crowdsourced, or possibly AI-generated.

My agenda is written up in the Feedbackloop-first Rationality sequence. The basic idea is that rationality is bottlenecked on inventing better feedbackloops that train the actually important skills.

... (read more)
Raemon
Yeah, I'm basically using the lens of my cognitive bootcamp series to iron out the pedagogy here. I try to write up LW posts for all the key takeaways and exercises, although it takes a while.

RomanHauksson


I would be interested in this!

Related: an organization called Sage maintains a variety of calibration training tools.

Julius


Another place that's doing something similar is clearerthinking.org

Cole Wyeth


I would be very interested to see what you come up with!

Julius


I like this idea and have wanted to do something similar, especially something that we could do at a meetup. For what it's worth, I made a calibration trivia site to help with calibration. The San Diego group has played it a couple times during meetups. Feel free to copy anything from it. https://calibrationtrivia.com/

Thanks! I have seen a similar tool like this before and enjoyed it quite a bit. I’d love to know where you source the trivia data, especially if it is available for open use. Also could be interesting to tailor to some functionality for meetups as well.

scarcegreengrass


I think this sounds fun! The versions of this i'd be most likely to use would be:

  • Puzzling over scenarios of satisfying complexity. There could be numerical details, selection bias, unreliable narrator obstacles, cases where users with different values might disagree, etc. Even if the scenario-poster is arguably wrong about the right answer, that could still be interesting.
  • Scenarios that you puzzle over & then read a comment section about. Scenarios that you remember & talk about with friends later.
  • User-submitted anecdotes from their real lives. This is oddly similar to Reddit's 'Am I the Asshole' threads, but with a focus on becoming more clearheaded & unbiased. Users could sometimes ask for testable predictions about what will happen next, then report back later. So if the pictured scenario came from real life, Maria might ask users how many times Jake will be late in the next 6 months.
  • Philosophy-esque thought experiments.
  • Scenarios that do indeed benefit my thinking or expand my perspective. Perhaps by improving my mental statistics skills, or exposing me to perspectives of people with very different lives, or demonstrating little-known math subtleties like Simpson's paradox. One failure mode for this would be scenarios like the more boring HR-training courses, where the story doesn't contain any knowledge you don't already know.

ideasthete


I like the LeetCode-style aspect of the idea. Identifying the "Blind 75" of cognitive biases might be a good start. Or take practice problems from this course: https://callingbullshit.org/. Once you've identified which problems you want to train on, you could use an LLM to continuously rewrite them, preventing users from simply memorizing an answer and forcing them to think critically. There are several ways to implement this sort of thing, and I can easily imagine the high-school version of myself falling down this rabbit hole.
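
A rough sketch of that rewrite idea, just to illustrate the shape of it: the `llm` argument stands in for whatever completion API is used (anything that maps a prompt string to a response string), and `BiasProblem` plus the prompt wording are made-up placeholders, not a finished design.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class BiasProblem:
    bias: str          # ground-truth label, e.g. "anchoring"
    scenario: str      # the text shown to the user

def rewrite_problem(problem: BiasProblem, llm: Callable[[str], str]) -> BiasProblem:
    """Ask an LLM for a fresh surface form of the same underlying problem.

    The bias label is kept fixed; only the story around it changes, so a
    user who has merely memorized the old scenario still has to reason.
    """
    prompt = (
        "Rewrite the following scenario so that it still illustrates "
        f"{problem.bias}, but with different characters, setting, and numbers. "
        "Do not name the bias in the text.\n\n" + problem.scenario
    )
    return BiasProblem(bias=problem.bias, scenario=llm(prompt))

# Usage with any completion function, e.g.:
#   fresh = rewrite_problem(stale_problem, llm=my_completion_fn)
```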

Not to self-promote too much, but I created a demo of a similarly inspired idea that I call newsbetting at https://www.rashomonnews.com. The idea was to use betting mechanisms to create skin in the game and help news consumers identify their own biases when reading the news. Maybe you can include this betting mechanism as a proxy for your confidence interval.

Regardless, I would very much like to see the outcome of this sort of project and I wish you the best!

3 comments

I've thought about this for a long time and I think one of the big issues is lack of labelled training data in many domains. E.g. people made calibration toys and that helped a lot for that particular dimension. Ditto the tests on which studies replicated. In many cases we'd want more complex blinded data for people to practice on, and that requires, like in games, someone to set up all the non-fun backend for them.

What is an example of a type of complex blinded data that you'd be imagining here?

Like the calibration game, but for a variety of decision problems, where the person has to assign probabilities to things at different stages based on what information is available. Afterwards they get an example Brier score based on the average of what people with good prediction track records set at each phase.
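
One way that scoring could look, as a rough sketch only (the data shapes are assumptions): at each stage, the player's probability and the reference probability (the average set by people with good track records) are both scored against the eventual outcome with the Brier score, so the player can see at which stage their judgment diverged.

```python
def staged_brier_vs_reference(user_probs, reference_probs, outcome):
    """Per-stage Brier scores for the user and for a reference forecast.

    `user_probs` and `reference_probs` each hold one probability per stage,
    assigned to the event; `reference_probs` would be the average of what
    well-calibrated players assigned at the same stage. `outcome` is 1 if
    the event happened, 0 otherwise. Lower Brier scores are better.
    """
    rows = []
    for stage, (p_user, p_ref) in enumerate(zip(user_probs, reference_probs), start=1):
        rows.append({
            "stage": stage,
            "user_brier": (p_user - outcome) ** 2,
            "reference_brier": (p_ref - outcome) ** 2,
        })
    return rows

# Example: the user updated toward the truth more slowly than the reference group.
print(staged_brier_vs_reference([0.5, 0.55, 0.6], [0.5, 0.7, 0.9], outcome=1))
```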