charlesoblack


This seems like a fun idea. I imagine there would be some high-level streamers willing to try this live (maybe chessbrah?).

What kind of lessons do you envision us learning from Deception Chess that could be applied to alignment work? In my head the situation is slightly different: we (or at least I) currently assume an AI tool isn't actively trying to deceive us, whereas in Deception Chess it's known from the start that there's a malicious actor.

I feel like that's part of the point. I'm not going to lie - the thought definitely crossed my mind to press the button and see if anything would happen even without launch codes. There's a sort of... allure to it. But knowing that just pressing the button could potentially bring down the EA Forum? That was enough to discourage me from trying it out.

Our stakes are much, much smaller, of course, but I still feel some of the weight of that responsibility.

Yes, I used to be a daily guy. During my graduate degree it got much more difficult to keep that up, so I came out of it with a backlog, but I'm caught up now.

I do think it's partly my settings, which I haven't touched much, but fixing those doesn't really help me right now of course - just me in a few years. It also mostly just pushes the problem further into the future.

Some advice I've seen thrown around is that at some point one should just retire cards and rely on encountering the information naturally in the real world rather than in SRS. That sounds like a risky thing to do to me, but when I looked back at my backlog and what my accuracy was there, I estimate I had ~50-70% retention even after nearly 2 years of barely any reviews. (There are a lot of issues with estimating that: Anki doesn't tell you something was overdue, so I had to calculate it myself, some cards are double-counted, etc.) So overall I think that might be a viable option: at some point, filter out cards whose intervals are greater than a certain length, as well as cards you spend too much time on or lapse too often on. I haven't found any good anecdotal reports of this approach, though.
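For reference, here's a minimal sketch of the kind of calculation involved, reading Anki's revlog table directly (the collection path and the 1.5x overdue threshold are placeholder assumptions, and this ignores the double-counting caveat above):

```python
import sqlite3

# Estimate retention on overdue reviews from Anki's review log.
# In the standard revlog schema, id is the review timestamp in ms and
# lastIvl is the previously scheduled interval (days if positive,
# seconds if negative, i.e. a learning step).
OVERDUE_FACTOR = 1.5   # "overdue" = actual gap > 1.5x the scheduled interval
MS_PER_DAY = 86_400_000

con = sqlite3.connect("collection.anki2")  # placeholder path
rows = con.execute(
    "SELECT cid, id, ease, lastIvl FROM revlog ORDER BY cid, id"
).fetchall()

last_seen = {}  # card id -> timestamp of its previous review
passed = failed = 0
for cid, ts, ease, last_ivl in rows:
    prev = last_seen.get(cid)
    last_seen[cid] = ts
    if prev is None or last_ivl <= 0:  # first review, or a learning step
        continue
    gap_days = (ts - prev) / MS_PER_DAY
    if gap_days > last_ivl * OVERDUE_FACTOR:
        if ease > 1:  # ease 1 = "Again", i.e. a lapse
            passed += 1
        else:
            failed += 1

print(f"retention on overdue reviews: {passed / (passed + failed):.1%} "
      f"(n={passed + failed})")
```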

Sure. Caveat: I haven't actually done any cards in the past 8 days (finding it hard to motivate myself...), so this is likely low on young cards but accurate on mature cards.

First image is desktop Anki, second is AnkiDroid simulations (which in my experience have proven pretty close to the truth). https://imgur.com/a/Swb6UjH

The second graph has a large spike in the first week because of the backlog from the past 8 days. I'm also not sure what new cards AnkiDroid thinks it's seeing, since I don't have any new cards being added.

The number of reviews drops in about 5 months, but even a year from now, my load still won't have dropped to 2/3 of what I'm currently doing (~160 cards). It's a little unsatisfying, since it assumes I'm performing adequately that whole time, and my reward is being able to add maybe 8-10 new cards a day after a year of strictly review.

I have to be honest: your tone is coming off a little condescending. I'm sure you don't mean it that way, but I wanted to flag it explicitly.

These aren't new cards that I'm studying. Like I said, I've been using Anki for 4 years now; I have learned almost 20k cards and have about 465k reviews. I have done my due diligence and read the "20 rules of formulating knowledge" several times. Perhaps I'm just not being clear.

My current problem is that, out of the ~250 cards I do each day, ~200 are mature, and that number doesn't seem to be going down. Right now I have about 18.8k mature cards and only 850 young cards. My time is increasingly taken up by the mature cards, and the more I study, the larger that corpus becomes. So how does one deal with that? The cards seem to accumulate over time rather than spreading out toward infinity in a way where you'd eventually do only a handful of cards a day and still remember everything.

I don't quite think this is it. What I'm learning is a language (specifically, vocabulary), so there isn't a lot to understand before putting a card into SRS, and the card can't get much clearer than "biblioteca -> library".

What I mean about ever-increasing workloads is that at some point, even without adding new cards, you have long-tail cards that you have to review, and they give you a pretty consistent workload for a long time (because they have long intervals and are spread out). Right now, without adding any new cards, I do ~250 cards/day; this is barely less than what I was doing when I was learning new material 2 years ago (~300 cards/day).
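To illustrate, here's a toy simulation (made-up parameters, not Anki's actual scheduler): new cards stop after a year, yet the daily review count decays only slowly afterwards, because the long-interval cards keep coming due.

```python
import random
from collections import defaultdict

# Toy model: a card passes with fixed probability; on a pass its interval
# multiplies by EASE, on a lapse it resets to 1 day. All parameters are
# illustrative assumptions, not values from my actual deck.
RETENTION = 0.90    # per-review pass rate
EASE = 2.5          # interval multiplier on a pass
NEW_PER_DAY = 20    # new cards added daily, but only for the first year
DAYS = 3 * 365

due = defaultdict(list)  # day -> intervals of the cards due that day
for day in range(365):
    due[day + 1].extend([1] * NEW_PER_DAY)

for day in range(DAYS):
    reviews = due.pop(day, [])
    if day % 180 == 0:
        print(f"day {day:4d}: {len(reviews)} reviews")
    for ivl in reviews:
        if random.random() < RETENTION:
            new_ivl = max(ivl + 1, round(ivl * EASE))  # pass: grow interval
        else:
            new_ivl = 1                                # lapse: start over
        due[day + new_ivl].append(new_ivl)
```

The exact numbers depend on the assumed retention and ease, but the qualitative shape - a long, slowly decaying plateau after you stop adding cards - is the thing I'm describing.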

On the broader topic of SRS: how do you deal with ever-increasing workloads? I've been a user for 4 years now and have been struggling with my current workload, unable to add any more cards.

Here is what I did (n=524):

  • calculated a performance rating for my bullet games for each day
  • calculated Anki accuracy (measured as 1 - again%) for each day
  • adjusted the performance rating for time by fitting an OLS model predicting perf rating from days since the beginning of the dataset, then subtracting the model's prediction (the residuals should be roughly normal; this model had an R^2 of 0.4)
  • fit an OLS model to predict Anki accuracy from the adjusted performance rating

This has an R^2 of 0.016, and the coefficient is ~5.5e-05 (though it is statistically significant). So a performance rating 1000 points higher than predicted yields only ~5% additional Anki accuracy. Since the adjusted performance rating has a standard deviation of 208 points, a "top cognition" day that's 2 standard deviations above average means only ~2% higher Anki accuracy. Not a lot.
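For concreteness, here's a minimal sketch of the analysis (the daily_stats.csv file and its column names are placeholders, not my actual pipeline):

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical input: one row per day, with columns
#   day        - days since the beginning of the dataset
#   perf       - that day's bullet performance rating
#   again_pct  - fraction of Anki reviews answered "Again" that day
df = pd.read_csv("daily_stats.csv")
df["anki_acc"] = 1 - df["again_pct"]

# Step 1: detrend the performance rating against time; the residual
# is the "adjusted" performance rating.
X_time = sm.add_constant(df["day"])
trend = sm.OLS(df["perf"], X_time).fit()             # R^2 ~0.4 for my data
df["perf_adj"] = df["perf"] - trend.predict(X_time)

# Step 2: regress Anki accuracy on the adjusted rating.
model = sm.OLS(df["anki_acc"], sm.add_constant(df["perf_adj"])).fit()
print(model.rsquared)             # ~0.016
print(model.params["perf_adj"])   # ~5.5e-05
print(model.pvalues["perf_adj"])
```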

Of note: using a "locally smoothed" performance rating (where I smoothed the perf rating, then subtracted the smoothed value from the raw rating to get a residual) yielded no significant correlation between Anki accuracy and perf rating. Arguably this is a stronger bit of evidence: the above (naïvely) assumes the perf rating goes up linearly with time, while this version can handle plateaus and different slopes in rising/falling rating.
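The smoothed variant is a small change to the detrending step above (the 30-day window here is an arbitrary choice for illustration):

```python
# Replace the linear trend with a centered rolling mean; the residual
# now measures deviation from recent form rather than from a global line.
df["perf_adj"] = df["perf"] - df["perf"].rolling(30, center=True).mean()
model = sm.OLS(df["anki_acc"], sm.add_constant(df["perf_adj"]),
               missing="drop").fit()  # rolling mean leaves NaNs at the edges
```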

I'm open to code/analysis review if anyone wants to double-check my work.

I have a pretty large (big-n) dataset of Anki flashcard reviews (and associated performance) and chess game performance, which I could use to test whether there's a predictive effect on long-term memory.

In what world is giving the second dose to the same person, raising them from 87% to 96% protected, a higher priority than vaccinating a second person? 
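(To spell out the arithmetic in that quote: a second dose buys one person an extra 9 percentage points of protection (96 - 87), while the same dose given as someone else's first dose buys them 87 points.)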

I'm not sure I agree with this point. There's no hard evidence that the second dose is unnecessary: nobody was vaccinated only once in the trials (as far as I'm aware). Of course, we do have a prior for immunity continuing, but we also have examples of other vaccines that require booster shots (HPV, meningitis, hep A/B). So I'd say we should absolutely explore the one-dose option, but in the meantime continue vaccinating people twice.

I don't think this would affect the overall outcome either, since - as you said - vaccine distribution scales exponentially. If we start a one-dose trial now, I'm sure we'd have results soon enough for it still to be massively useful to switch gears and start doing one-dose vaccinations.

Is there a further argument for believing one-dose is sufficient? I may have missed it.
