Pretty much drove me away from wanting to post non alignment stuff here.
That seems unhelpful then? Probably best to express that frustration to a friend or someone who'd sympathize.
Thank you for continuing this very important work.
ok, options.
- Review of 108 AI alignment plans
- write-up of Beyond Distribution - a planned benchmark for alignment evals beyond a model's distribution; send to the quant who just joined the team and wants to build it
- get familiar with the TPUs I just got access to
- run HHH and its variants, testing the idea behind Beyond Distribution, maybe make a guide on it
- continue improving site design
- fill out the form I said I'd fill out and send today
- make progress on crosscoders - would probably need to get familiar with those TPUs
- writeup of ai-plans, the goal, the team, what we're doing, what we've done, etc
- writeup of the karma/voting system
- the video on how to do backprop by hand
- tutorial on how to train an SAE
I think the Beyond Distribution write-up. He's waiting and I feel bad.
I think the Conclusion could serve well as an abstract
An abstract which is easier to understand and a couple sentences at each section that explain their general meaning and significance would make this much more accessible
I plan to send the winning proposals from this to as many governing bodies and lawmaking institutions as possible - one country is lined up at the moment.
Let me know if you have any questions!
I think this is a really good opportunity to work on a topic you might not normally work on, with people you might not normally work with, and have a big impact: https://lu.ma/sjd7r89v
I'm running the event because I think this is something really valuable and underdone.