On seeing the title of this post again, I'm reminded of an obvious answer: teach people how to decide what to learn for themselves. Sort of like the give-a-man-a-fish vs. teach-a-man-to-fish thing.
I don't think there's a more useful meta thing to learn since that's what you need to figure out everything else for yourself.
Having an Anki deck is close to useless in my view, because engaging with the ideas is not the path of least resistance. There's a tendency to just go "oh, that's useful" and then do nothing with it, because Anki/SuperMemo are built for memorisation. Using them for learning, or for creating, is possible with the right mental habits. But for an irrational person, those habits are exactly what you're trying to instill in the first place! No, you need a system which fundamentally encourages those good habits.
Which is why I'm bearish about including cards in Anki that tell you to drill certain topics: the act of drilling is itself a good mental habit that many lack, and a card instructing you to drill won't instill it. Something like a curated selection of problems, each requiring a certain aspect of rationality and spaced out to aid retention, would be a good start.
Unfortunately, there's a trade-off between making the drills thorough and reducing overhead on the designer's part. If you're thinking about an empirically excellent, "no cut corners" implementation of teaching total newbs mental models, I'd suggest looking at DARPA's Digital Tutor. As for how you'd replicate such a thing, the field of research described here seems a good place to start.
Could you rewrite some of the first paragraph? I read it 2-3 times and was still kind of confused.
Funny you linked commoncog, was about to link that too. Great blog.
I think one crux between us is the degree to which "memory is the foundation of cognition", as Michael Nielsen once put it. Coming from the perspective that this is true, it seems to me that a natural consequence of a person memorizing even a simple sentence, and maintaining that memory with SRS, is that the sentence needs to be compressed in the mind to ensure that it has high stability, and can be recalled even after having not been used for many months, or even years.
In order to achieve this compression, it is inevitable that the ideas represented by t...
Like you said, reading isn't enough. I think two of the key challenges for such software would be limiting inferential distance for any particular user, and giving practice examples/problems that they actually care about. That's much easier with a skilled mentor than with software, but I suspect it would be very helpful to have many different types of contexts and framings for whatever you try to have such software teach.
In my first-semester college physics class, the first homework set was all Fermi problems, just training us to make plausible assumptions and see where they lead. Things like "How many words are there in all the books in the main campus library?" or "How many feathers are there on all the birds in the world?" Even though this was years before the sequences were even written, let alone before I read them, it definitely helped me learn to think more expansively about what kinds of things count as "evidence" and how to use them. It also encourages playfulness with ideas, and counters the sense of learned helplessness a lot of us develop about knowledge in the course of our formal schooling.
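To make the method concrete, here's a minimal sketch of the library-words Fermi problem. Every input is a made-up, order-of-magnitude assumption (not data about any real library); the point is decomposing the question into estimable factors, not the final number.

```python
# Fermi estimate: roughly how many words are in all the books
# in a large campus library? All inputs below are illustrative
# order-of-magnitude guesses, not measured values.

def fermi_library_words(
    num_books=2_000_000,    # guess: a big campus library holds a few million volumes
    pages_per_book=300,     # guess: typical book length
    words_per_page=300,     # guess: typical printed page
):
    return num_books * pages_per_book * words_per_page

estimate = fermi_library_words()
print(f"~{estimate:.0e} words")  # on these assumptions, ~2e+11
```

Changing any single input by a factor of two or three barely moves the order of magnitude, which is why crude assumptions are good enough for this kind of exercise.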
Actually - beyond specific skills, it might be helpful to think about trying to foster the 12 virtues. Not just exercises, but anecdotes to motivate and show what's possible in interesting and real contexts, games that are fun to experiment with, things like that.
Inferential-distance-based knowledge systems would be super cool. There are lots of stats ideas I'd like to engage with, but figuring out the ordering is too much of a pain.
The mentor point is also true, I think, for math in particular. Math/physics are the only subjects where I'd hesitate to learn by myself.
A LessWrong deck exists now, though it seems incomplete, missing things like Inferential Distance.
Aside from memorizing declarative knowledge, the question of how to acquire tacit knowledge is very interesting.
I don’t have any great ideas at the moment (other than adding Hammertime-style practical tests into things), but I think commoncog’s blog is very interesting, especially the stuff about naturalistic decision making. https://commoncog.com/blog/the-tacit-knowledge-series/ (Can’t link more specifically, on mobile)
An Anki deck is a bad idea because, as you said: a. formulation, b. poor coherence (when you’re stuffing things other people thought were cool into your brain, they won’t connect with other things in your brain as well as if you’d made the deck yourself).
I think incremental reading with SuperMemo is a decent option. I’ve taught SuperMemo to a few rat-adjacent people, and the ones that have spent time on the sequences inside it have said it’s useful. I’m not sure how to summarize it well, but basically: Anki lets you memorize stuff algorithmically, while incremental reading lets you learn (algorithmically) and then memorize.
I’d be surprised if, after a year of using IR on the sequences, you weren’t at least a fair bit more instrumental.
(If you want to give it a try I’ll gladly teach you. I don’t think there’s any more efficient way to process declarative information)
We're already drowning in inert content; I don't see how adding more would help. We've had a way to get something like the martial art of rationality since ancient Athens: structured interaction with an actual human mentor who knows how to engage with the surrounding world and can teach and train other people face to face. This thing isn't mechanizable the way arithmetic or algebra is, so simple interactive programs are not going to be much better than a regular book. Nor is it a non-mechanizable but still clearly delimited topic like wood-carving or playing tennis, where you can at least say you're unquestionably doing the thing when going it alone, even if you'd do better with some professional training. What you're trying to teach is the human ability to observe an unexpected situation, make sense of it, and respond sensibly to it at a level above baseline adult competency, and the one way we know how to teach that is to have someone you can interact with who is competent in the thing you're trying to learn.
Like, yeah, maybe this will help, but I can't help feeling that people are compulsively eating ice, and this is planning an ice-shavings machine for your kitchen instead of making an appointment to get your blood work done.
While I agree with you that face-to-face interaction with a skilled mentor is the most effective way to learn complex skills such as rationality, that approach will always be limited by the supply of humans who are sufficiently skilled in the art, are sufficiently good teachers, and also have nothing better to do with their time.
So we really shouldn't look at this as either/or. We should, on the one hand, make sure there's good availability of the best option (face-to-face time with skilled mentors), but for the vast majority of learners for whom skilled human guidance isn't feasible, we need to provide the highest-quality content that can easily be scaled. There are flaws I see in the current best scalable solution (primarily stemming from a lack of interactivity), and I'm currently in a better position to attempt to address that issue than to improve the availability of human mentors.
Have you used Syntorial, the synth-learning/tutoring software? I think it makes great use of adaptive interactivity, which I feel tools like Brilliant or explorable explanations, although great in terms of UX, severely lack. In fact, I have also found Syntorial to be very effective for memory-related things like remembering patches. It has that neat quality of helping with both learning/doing and remembering what you learn. Maybe you could look into it for some inspiration.
I agree that Syntorial has better interactivity than many of the "explorables" that have become popular lately, and I agree that high interactivity is vital for maximizing learning.
As for Syntorial's actual implementation, beyond the fact that it succeeds at high interactivity, I find the user experience lacks flow and is fairly unengaging. In particular, the videos slow down the pace; I generally want to skip them but often don't, because I worry about missing important information. That's something I hope to do better at in any software I may produce. I think the game Exapunks, even though the system it teaches is fictional and made up for the game, is a good example of a fairly high-flow, high-interactivity way of teaching skills.
I also think of the edutainment games I played as a kid. It's hard for me to highlight which ones were particularly good, since I haven't used them in a very long time, but I know they did a good job of using interactivity to force me to understand the concepts they taught. And I played them voluntarily, so they must have had at least decent flow.
This isn't a direct answer, but seems related.
I've been thinking about how I came to learn the concepts around here, and I realized that the most helpful thing was probably the seminars at Open Phil. I think MOOCs which people work through together (like on some sort of schedule, maybe in groups coordinated through LW) would replicate that experience fairly well. I was specifically thinking of an intro to AI risk course - because even though I'd been in the community for years at that point and had read all of the standard intros (including Superintelligence), I didn't really internalize the arguments or have an understanding of what the field of AI safety looked like until that seminar.
Making a MOOC seems like a lot of work, but there's already a lot of good content out there - obviously there's tons of writing on the topic, and for 'lectures' some of Rob Miles' stuff could probably work (with his consent). So the major remaining hurdle after corralling all of that material into a manageable syllabus would be developing discussion questions and/or setting up small discussion groups that would meet regularly over Zoom.
In any case, I think a lot of people have noticed this problem - one of the main things I hear when I ask people what they'd like to see from LW is some sort of more structured learning thing - and there have already been lots of attempts to solve it, e.g. by developing teaching modules for local groups, writing stuff on the LW wiki, inventing Arbital, etc. etc. etc. (Notably all of these projects have basically been abandoned.) Maybe it's just fine to have a lot of people throwing themselves at the problem from different angles, I'm not sure. I'd love a bigger discussion on this topic.
I've been noticing some complaints (such as this post by Richard Ngo) lately about the quality of the modern LW community's contribution to the big picture of humanity's knowledge.
Ideally, if reading something automatically made you deeply internalize everything it said, then just by having a group of people who have read The Sequences, you'd have a superteam of intellectuals. And while I do think LW is a pretty cool group of smart thinkers, that isn't fully the case: just reading The Sequences isn't enough. To really internalize the lessons, one must apply the principles, push against the problem, and see where one's understanding needs improvement and where it is good enough.
The simplest form of this is a high-quality Anki deck that tests users on the principles, both by testing recall of the stated principle itself and, even more importantly, by giving them test cases where they can apply the principles (in the same vein as Ankifying medium-difficulty multiplication problems). I have seen some rationality-themed Anki decks, but many of the cards are poorly formatted (both aesthetically and in terms of learnability) and poorly curated. Ideally, such a deck would be well formatted, with cards carefully chosen to maximize the quality of the information.
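For concreteness, the "already implemented SRS" benefit comes down to expanding review intervals. Here is a minimal sketch of SM-2, the published SuperMemo algorithm that Anki's scheduler descends from; this is the classic rule set, not Anki's actual implementation:

```python
# SM-2 spaced-repetition scheduling, minimal sketch.
# quality: 0-5 self-grade of recall; ease starts at 2.5 for a new card.

def sm2_review(quality, repetitions, interval, ease):
    """Return updated (repetitions, interval_in_days, ease) after one review."""
    if quality < 3:
        # Failed recall: restart the repetition sequence for this card.
        return 0, 1, ease
    if repetitions == 0:
        interval = 1
    elif repetitions == 1:
        interval = 6
    else:
        # Intervals grow multiplicatively with the card's ease factor.
        interval = round(interval * ease)
    # Ease factor drifts with how easy the recall felt, floored at 1.3.
    ease = max(1.3, ease + 0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02))
    return repetitions + 1, interval, ease

# A card graded 4 ("correct, after hesitation") three reviews in a row:
reps, interval, ease = 0, 0, 2.5
for _ in range(3):
    reps, interval, ease = sm2_review(4, reps, interval, ease)
# intervals run 1 -> 6 -> 15 days on these grades
```

The relevant point for deck design is that a single failed grade resets the card to a one-day interval, so poorly formulated cards that users keep failing dominate review time; that is part of why curation and formatting matter so much.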
Another idea that I've been thinking about is making explorables, a la Nicky Case, that would introduce important rationality concepts. This would have the advantage of providing more flexibility in experience than Anki, but also would sacrifice the benefits of having already implemented SRS.
My question is: if there were to be either an Anki deck or an explorable teaching concepts from The Sequences, targeted primarily as an aide for current LW users, but also as an introduction aimed at the public at large, what concepts from The Sequences would you most want to see covered?