Alex_Altair

Indeed, we know about those posts! Let me know if you have a recommendation for a better textbook-level treatment of any of it (modern papers, etc.). So far the grey book feels pretty standard in terms of pedagogical quality.

Has anyone started accumulating errata for the SLT grey book? (I.e., Algebraic Geometry and Statistical Learning Theory by Sumio Watanabe.) This page on Watanabe's website seems to just be about the Japanese version of the book.

Some small corrections/additions to my section ("Altair agent foundations"). I'm currently calling it "Dovetail research". That's not publicly written anywhere yet, but if it were listed as that here, it might help people who are searching for it later this year.

> Which orthodox alignment problems could it help with?: 9. Humans cannot be first-class parties to a superintelligent value handshake

I wouldn't put number 9. It's not intended to "solve" most of these problems, but it is intended to help make progress on understanding the nature of the problems through formalization, so that they can be avoided or postponed, or more effectively solved by other research agendas.

> Target case: worst-case

Definitely not worst-case; more like pessimistic-case.

> Some names: Alex Altair, Alfred Harwood, Daniel C, Dalcy K

Add "José Pedro Faustino".

> Estimated # FTEs: 1-10

I'd call it 2, averaged throughout 2024.

> Some outputs in 2024: mostly exposition, but it’s early days

Basically right; I'd add this post and this post.

FWIW, I can't really tell what this website is supposed to be/do by looking at the landing page and menu.

The title reads as ambiguous to me; I can't tell if you mean "learn to [write well] before" or "learn to write [well before]".

DM me if you're interested.

I, too, am quite interested in trialing more people for roles on this spectrum.

Thanks. Is "pass@1" some kind of lingo? (It seems like an ungoogleable term.)

I guess one thing I want to know is... how exactly does the scoring work? I can imagine something like: they ran the model a zillion times on each question, and if any one of the answers was right, that got counted in the light blue bar. Something that plainly silly probably isn't what happened, but it could be something similar.

If it actually just submitted one answer to each question and got a quarter of them right, then I think it doesn't particularly matter to me how much compute it used.
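
For reference, "pass@k" is standard LLM-evaluation shorthand: sample k answers per question and count the question as solved if any of the k is correct, so "pass@1" means the model gets exactly one attempt, and the "zillion tries, any one counts" scenario above is essentially pass@k for large k. A minimal sketch of the unbiased pass@k estimator from Chen et al. (2021), "Evaluating Large Language Models Trained on Code"; the sample counts in the example are made up for illustration:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator (Chen et al., 2021).

    n: total samples drawn for a problem
    c: how many of those samples were correct
    k: sample budget being scored (k=1 gives pass@1)
    """
    if n - c < k:
        # Fewer than k incorrect samples exist, so every
        # size-k subset contains at least one correct answer.
        return 1.0
    # Probability that a random size-k subset of the n samples
    # contains at least one correct answer.
    return 1.0 - comb(n - c, k) / comb(n, k)

# Hypothetical numbers: 200 samples per question, 30 of them correct.
print(pass_at_k(200, 30, 1))    # 0.15 -- plain per-sample accuracy
print(pass_at_k(200, 30, 100))  # ~1.0 -- "any of 100 tries" is much easier
```

The gap between those two numbers is why the distinction matters when judging a headline score.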
