duck_master

mathematics (and increasingly CS) enthusiast | MIT class of 2026 | Wikimedian | vaguely Grey Triber | personal website: https://duck-master.github.io

Is this only open to EU-based organizations, or is it global? Are there other eligibility criteria?

When and where is the event on Saturday, July 19th, if I may ask?


  1. 20 cards a day — Having too many cards and a staggering review backlog is the main reason no one ever sticks with Anki. Setting your review count to 20 per day (in the deck settings) is the single most important thing you can do to stick with Anki long-term.


I think "20 cards a day" might be too aggressive, but after taking a shower and realizing that learning thirty new concepts a day is extreme, I decided to lower the presets for my two decks (French and Mandarin Chinese) to 5 new cards and 50 review cards per day; previously it was 15 new cards and 150 reviews. I even finished one of the decks, which is kind of crazy.

Since you wrote the original, machine translation (which is pretty decent these days) should be fine, because it's not really generating the English version from scratch. Even Google Translate is okayish.

It might be dangerous to always follow Claude, though. In a 2023 article I once read, a Vice reporter tried using ChatGPT to control his life, and it failed miserably. Contrived decision theory scenarios are one thing; real life is another.

This isn't just limited to websites, I think. In my experience, a lot of companies and organizations that charge money (e.g. hospitals, cinemas, psychologists, some physical stores) intentionally hide, or at least downplay, how much they charge. My guess is that this is meant to blunt the price elasticity of demand: if you don't even know what the price is, you can't flinch away from a high price to begin with. Interestingly, most restaurants are upfront about their prices, which I'm guessing is because the restaurant world is far more competitive.

No, it was rescheduled to June 15 instead.

Sorry, I'm arriving late.

Make base models great again. I'm still nostalgic for GPT-2 and GPT-3. I can understand why RLHF was invented in the first place, but it seems to me that you could still train a base model so that, if it's about to say something dangerous, it simply cuts the generation off early by emitting the <endoftext> token.
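A minimal toy sketch of what I mean (everything here is invented for illustration: the `next_token` sampler and the `danger_score` judgment are stand-ins for what a real base model would have to learn during pretraining, not bolted-on filters):

```python
# Toy generation loop: the "model" ends its own output early by
# emitting an end-of-text token when the continuation looks dangerous.

END_OF_TEXT = "<endoftext>"

def danger_score(context: list[str]) -> float:
    # Hypothetical stand-in: in the proposal, this judgment would be
    # baked into the model's own distribution during training.
    return 1.0 if "secret" in context else 0.0

def next_token(context: list[str]) -> str:
    # Stand-in for sampling from a base model; returns canned tokens.
    canned = ["the", "recipe", "is", "secret", "and", "dangerous"]
    return canned[len(context)] if len(context) < len(canned) else END_OF_TEXT

def generate(max_tokens: int = 10) -> list[str]:
    context: list[str] = []
    for _ in range(max_tokens):
        tok = next_token(context)
        context.append(tok)
        if tok == END_OF_TEXT:
            break
        if danger_score(context) > 0.5:
            # The model "chooses" to stop rather than continue.
            context.append(END_OF_TEXT)
            break
    return context

print(generate())  # stops right after "secret"
```

The point is just that early termination is already expressible in a base model's vocabulary; no separate RLHF stage is needed to represent "stop here."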

Alternatively, make models natively emit structured data. LLMs in their current form emit free-form, arbitrary text, which has to be parsed in all sorts of annoying ways before it's useful for downstream applications anyway. Structured output could also help prevent misaligned behavior.

(I'm less confident in this idea than the previous one.)
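To illustrate the contrast, here is a toy sketch (the model outputs and field names are made up for illustration): parsing free-form text is brittle string surgery, while structured output is one well-defined parse that can be validated.

```python
import json

# Hypothetical raw outputs from two styles of model.
free_form = "Sure! The answer is Paris. Hope that helps!"
structured = '{"answer": "Paris", "confidence": 0.9}'

def parse_free_form(text: str) -> str:
    # Brittle: relies on the phrase "The answer is ..." appearing verbatim.
    marker = "The answer is "
    start = text.index(marker) + len(marker)
    end = text.index(".", start)
    return text[start:end]

def parse_structured(text: str) -> str:
    # Robust: a single well-defined parse, easy to check for required fields.
    record = json.loads(text)
    if "answer" not in record:
        raise ValueError("missing required field: answer")
    return record["answer"]

print(parse_free_form(free_form))    # Paris
print(parse_structured(structured))  # Paris
```

The free-form parser silently breaks as soon as the model rephrases; the structured parser either succeeds or fails loudly, which is the property downstream code actually wants.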

Try to wean people off excessive reliance on LLMs. This is probably the biggest source of AI-related negative effects today. I am trying to do this myself (I formerly alternated between Claude, ChatGPT, and lmarena.ai several times a day), but it is hard.

(By AI-related negative effects I mean things like people losing their ability to write originally or think independently.)
