It might be astonishing, but this is fundamentally how word embedding works: by modelling the co-distribution of words/expressions. You know the "nudge, nudge, you know what I mean" Monty Python sketch? Try appending "if you know what I mean" to the end of random sentences.
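To make the "co-distribution" point concrete, here is a minimal sketch of the co-occurrence counting that underlies (count-based) embeddings. The corpus and window size are made up for illustration; real embedding methods work on far more data and factorize or learn from these counts rather than use them raw.

```python
from collections import defaultdict

def cooccurrence_counts(sentences, window=2):
    """Count how often each word pair appears within `window` positions."""
    counts = defaultdict(int)
    for sent in sentences:
        words = sent.lower().split()
        for i, w in enumerate(words):
            for j in range(i + 1, min(i + 1 + window, len(words))):
                pair = tuple(sorted((w, words[j])))
                counts[pair] += 1
    return counts

# Toy corpus: words that keep appearing in the same contexts
# end up with similar co-occurrence profiles.
corpus = [
    "you know what I mean",
    "nudge nudge you know what I mean",
    "say no more you know what I mean",
]
counts = cooccurrence_counts(corpus)
print(counts[("know", "what")])  # -> 3: the pair occurs in every sentence
```

Words with similar rows in this (sparse) pair-count table are "distributionally similar", which is the raw signal embedding models compress into dense vectors.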
Funny. I once used triumphant LotR music to overcome my terrible fear of heights. I was climbing Mount Katahdin with friends (including a pass along the "Knife Edge"), and humming/singing this music out loud (+imagining a chopper-camera shooting from above) completely effaced my fear. Possibly being called "Legolas" during middle school and high school helped, too.
It was to be expected -- someone had already created a "hierarchy Tags" add-on: https://ankiweb.net/shared/info/1089921461
I haven't used it myself, but a comment there said "Simple, nice, and easy."
This is an idea I have only toyed with and have yet to try in practice, but one can create meta-cards for non-data learning. Instead of creating cards that demand an answer, create cards that demand a drill, or a drill with a specific success outcome. I find it a bit hard to pick "the best example" for this, perhaps because the spectrum of learnable skills is so broad, but just for the sake of illustration: if you're learning to paint, you can have "draw a still object", "draw a portrait", "practice color", "practice composition", "practice perspective" &c. cards. After you finish your card-prompted drill, you move on to the next card. Or if you're practicing to go pro at a game (one with existing computer AIs), you can have cards like "Play AI X in game situation S and achieve A", "Practice the game opening against the AI until able to reach a certain state", "Practice a disadvantaged end-game situation against the AI and bring the game to a draw", and so on. Of course, reviewing the cards would take longer, but they are only meant as scaffolding to harness the Anki spacing algorithm. The numeric parameters of the algorithm might need adjustment for that (which is easy to do in Anki), but I think that qualitatively it should work, at least for specific skills. Of course this set-up, especially if it needs a major parametric overhauling[1], is an investment, but every human breakthrough required its avant-garde.
[1] Which is not a given: perhaps the algorithm is only problematic at the beginning of the "learning", scheduling reviews too frequently, in which case you can just "cheat" carefully and "pass" every other review for a while, which is not a major disturbance. Or, on the contrary, perhaps "well-learned cards" (interval > 3 months, or even 1 month, for example) should be swapped for more challenging ones (e.g., "beat the expert AI" replacing "beat the beginner AI", or "juggle 5 balls while riding a unicycle on a mid-air rope" replacing "juggle 4 balls"), which is even less of a problem, as you should immediately recognize well-learned skills (e.g. "practice counting up to 20").
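For concreteness about which "numeric parameters" could be overhauled, here is a rough sketch of the SM-2 rule that Anki's scheduler descends from. The constants (1-day and 6-day first intervals, the 1.3 ease floor, the quality penalty) are SM-2's published defaults, not necessarily Anki's exact values; those constants are precisely the knobs a drill-card variant might retune.

```python
def sm2_update(quality, reps, interval, ease):
    """One SM-2 review step.
    quality: 0-5 self-grade; reps: successes in a row;
    interval: days until next review; ease: easiness factor (floor 1.3)."""
    if quality < 3:  # lapse: the card starts over
        return 0, 1, ease
    if reps == 0:
        interval = 1
    elif reps == 1:
        interval = 6
    else:
        interval = round(interval * ease)
    # Ease drifts down when reviews feel hard, up when they feel easy.
    ease = max(1.3, ease + 0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02))
    return reps + 1, interval, ease

# A card graded "4" three times in a row:
state = (0, 0, 2.5)
for _ in range(3):
    state = sm2_update(4, *state)
print(state)  # intervals go 1 -> 6 -> 15 days
```

If drills turn out to need, say, a slower ramp-up, it is the `1`/`6` initial intervals and the ease formula above that one would adjust.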
This is not quite a "tech-tree" dependency structure, but you can use tags to stratify your cards and always review them in sequence from basic to dependent (i.e., first clear out the "basic" cards, then "intermediate", then "expert"). Even if the grouping is arbitrary, I think you can go a long way with it. If your data is expected to be very large and/or have a predictable structure, you can always go for a "multiple-pyramid" structure, e.g., have "fruits basic" < "fruits advanced" < "fruits expert" and "veggies basics" < "veggies pro" tags &c., and perhaps even an "edibles advanced" tag, above both veggies and fruits, for very dependent cards.
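The stratified review order above amounts to sorting cards by a rank you assign to each tag. A toy sketch (the cards and tag ranks are invented; in real Anki the tags live on notes and you'd pull them through its API or use filtered decks per tag):

```python
# Hypothetical due cards, each tagged with a stratum.
cards = [
    {"front": "Is a tomato a fruit?",      "tag": "veggies pro"},
    {"front": "Name three fruits",         "tag": "fruits basic"},
    {"front": "What is a nightshade?",     "tag": "edibles advanced"},
    {"front": "Name three vegetables",     "tag": "veggies basics"},
    {"front": "Which fruits are berries?", "tag": "fruits advanced"},
]

# The "pyramids": lower rank = more basic = reviewed first.
rank = {
    "fruits basic": 0, "veggies basics": 0,
    "fruits advanced": 1, "veggies pro": 1,
    "edibles advanced": 2,  # sits above both pyramids
}

review_order = sorted(cards, key=lambda c: rank[c["tag"]])
for card in review_order:
    print(card["tag"], "->", card["front"])
```

Since the sort is stable, cards within a stratum keep their original (due-date) order, so this layers the pyramid structure on top of whatever ordering the scheduler already produces.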
On the assumption that the Anki algorithm works, just reviewing each tag down to an empty deck and proceeding sequentially from tag to tag should work too. Even if by some Sunday you had forgotten the (basic) "What is an American president?" fact, it might still be profitable to rehearse the "Washington was the first president" card that day, despite the "20 rules" mentioned somewhere above. Presumably, if you had forgotten what a president is, the appropriate card is going to appear for review within the next few days, so with consistent (or even semi-consistent) use of Anki it would probably turn out alright. Purely for anecdote's sake, this reminds me of a time I burst out laughing out loud at the dictionary. I was reading "Three Men in a Boat" at the time, and there was one sentence in which I didn't know 2-3 of the words; the punchline clicked as I read the definition of the last of them.
Either way, somewhere higher up this comment thread I also considered the possibility (or rather, the lack thereof) of creating dependencies in Anki. I'm actually thinking of creating an add-on/plugin to enable that --- I'm learning Python these days (which Anki is written in), and I'm just about to start grad school (if I get admitted), so it seems like just the right time to make this (possibly major) meta-learning investment.*
* Not to mention that, since I'm learning Python, it's also a (non-meta) learning investment. Win-win.
Just to comment on the last bit: it seems odd to me that you stress the "3 weeks BARE minimum" and the "crossing point at 3 to 6 months" as a con when you have used SRS for three years. Given that SRS is used for retention, and assuming that 6 months is the "crossing point", one would think that after three years of consistent SRS use you'd reap a very nice yield.
I know it's metaphorical language, but it seems additionally ironic that the "BARE minimum" you stress equals your exam frequency, while you disfavor cloze deletion's tendency to teach "guessing the teacher's password".
Is the advice perhaps against using SRS to learn/cram complex knowledge in a very limited time?
Being new to this whole area, I can't say I have a preference for anything, and I cannot imagine how a programming paradigm relates to a language's capabilities and potential. Where I stand, I'd rather be given a (paradigmatic, if you will) direction than be recommended a specific programming language given a programming paradigm of choice. But as I understand it, you're saying that if one opts for Haskell, one would be better off going for F# instead?
I was thinking in a similar direction. From a biological perspective, computation seems to be a costly activity --- just think of the metabolic demand the brain puts on the human body. I assumed it is very different with computers, however. I thought that the main cost of computation for computers, nowadays, is in size rather than energy. I might be wrong, but I assumed that even with laptops the monitor is a significant battery drain compared to the actual computing. (Sorry, mainly thinking out loud. I'd better read this and related posts more carefully. I'm glad to see the restriction on computations per amount of time, which I had thought was unbounded here.)
Regarding the bar charts: understanding that 100 nokens were sampled at each radius, and supposing that at least some of the output was mutually exclusive, how come both themes, "group membership" and "group nonmembership", have full bars at the low radii?