duck_master

teenager | mathematics enthusiast | MIT class of 2026 | vaguely Grey Triber | personal website: https://duck-master.github.io


Comments


Sorry, I'm arriving late.

Make base models great again. I'm still nostalgic for GPT-2 and GPT-3. I can understand why RLHF was invented in the first place, but it seems to me that you could still train a base model so that, if it's about to say something dangerous, it simply cuts the generation short by emitting the <endoftext> token instead.
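Concretely, here's a toy sketch of the training-data side of that idea. The `is_unsafe_prefix` classifier is hypothetical (a real one would itself have to be a learned model); this is just how I'd picture truncating unsafe continuations at the end-of-text token:

```python
# Toy sketch (assumption-laden): cut each training example at the first
# point where a hypothetical classifier flags the prefix as unsafe, and
# append the end-of-text token so the base model learns to stop there.
ENDOFTEXT = "<|endoftext|>"

def truncate_unsafe(tokens, is_unsafe_prefix):
    """Return the longest safe prefix of `tokens`, ending with the EOS token."""
    safe = []
    for tok in tokens:
        if is_unsafe_prefix(safe + [tok]):
            break
        safe.append(tok)
    return safe + [ENDOFTEXT]

# Stand-in "classifier": flags any prefix containing a banned word.
is_unsafe = lambda toks: "explosives" in (t.lower() for t in toks)

print(truncate_unsafe("Step one : acquire explosives and then ...".split(),
                      is_unsafe))
# -> ['Step', 'one', ':', 'acquire', '<|endoftext|>']
```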

Alternatively, make models natively emit structured data. LLMs in their current form emit free-form arbitrary text, which has to be parsed in all sorts of annoying ways before it's useful for any downstream application anyway. Structured output could also help prevent misaligned behavior.

(I'm less confident in this idea than the previous one.)
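To make this concrete: if a model emitted JSON natively, downstream code could validate it against a schema instead of scraping free-form text. The schema and `model_response` below are made up for illustration; this is a sketch of the consuming side, not a training method.

```python
# Sketch of the downstream side of structured output: validate the
# model's (hypothetical) JSON emission against a schema before use.
import json

from jsonschema import ValidationError, validate  # pip install jsonschema

schema = {
    "type": "object",
    "properties": {
        "answer": {"type": "string"},
        "confidence": {"type": "number", "minimum": 0, "maximum": 1},
    },
    "required": ["answer", "confidence"],
}

model_response = '{"answer": "42", "confidence": 0.9}'  # made-up output

try:
    data = json.loads(model_response)
    validate(instance=data, schema=schema)  # rejects malformed structure
    print(data["answer"], data["confidence"])
except (json.JSONDecodeError, ValidationError) as err:
    print("model output rejected:", err)
```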

Try to wean people off excessive reliance on LLMs. This is probably the biggest source of AI-related negative effects today. I am trying to do this myself (I formerly alternated between Claude, ChatGPT, and lmarena.ai several times a day), but it is hard.

(By AI-related negative effects I mean things like people losing their ability to write originally or think independently.)

Answer by duck_master

When I visited Manhattan, I realized that "Wall Street" and "Broadway" are not just overused clichés but the names of actual streets (you can walk on them!).

I will probably have to leave by 6:30pm at the latest :|

I am a bit sick today, but the meetup will happen regardless.

Actually, not going at all. Scheduling conflict.

(To organizer: Sorry for switching to "Can't Go" and back; I thought this was on the wrong day. I might be able to make this.)

The single biggest question I have is "what is Dirichlet?"
