Mathematician, agent foundations researcher, doctor. A strange primordial spirit left over from the early dreamtime, the conditions for the creation of which no longer exist; a creature who was once told to eat math and grow vast and who took that to heart; an escaped feral academic.
Reach out to me on Discord and tell me you found my profile on LW if you've got something interesting to say; you have my explicit permission to try to guess my Discord handle if so. You can't find my old abandoned-for-being-mildly-infohazardously-named LW account but it's from 2011 and has 280 karma.
A Lorxus Favor is worth (approximately) one labor-day's worth of above-replacement-value specialty labor, given and received in good faith, and used for a goal approximately orthogonal to one's desires, and I like LessWrong because people here will understand me if I say as much.
Apart from that, and the fact that I am under no NDAs, including NDAs whose existence I would have to keep secret or lie about, you'll have to find the rest out yourself.
I tried, but never really figured out how, to get them to understand that they were engaging in a process of downloading a consensus to defer to, and why it matters that that's what they're doing.
Here's the thing, as one such person you mentored and whom you counseled not to defer: you can realize you're deferring, and recognize that it's not particularly endorsed, and then do it anyway, because all the social pressures point that way, and so do all the grantmaking pressures, and all the selection effects. It totally matters that that's what you're doing!

But being openly non-deferential is a very good way to paint a target on your back, to take on huge costs that you would never agree to up front. Even being quietly, privately non-deferential can and will show up in the kinds of research-flavored conversations you have, and will cause you to miss connections and get excluded. You'll wind up being held to the epistemic standards of the deferrees - much higher than those of your peers, and not a standard most other people ever get held to. You'd better be willing and able to lay out your entire argument, from scratch, for anyone who asks you, or you look worse than merely stupid - you look like a waste of time.
So... the biggest savings here, by far, is the rent. At a guess it's bigger than everything else combined. If you don't have enough friends, or your friends all live in full houses, guess you're screwed here. Hope we're OK with the tyranny of structurelessness?
I'd like to see a breakdown by "years since doing MATS". What's the retention like, basically? Another breakdown I'd like to see, either for the displayed data or the years-since-MATS one: what's the split within AI safety between

- "(co)founded an org",
- "joined a new org",
- "joined an established org / (research/policy) group" (technical vs governance), and
- "on a grant, no real org",

along with the existing "academic" (split by core academic vs alt-academia?), "government" (which country?), and "for-profit" (maybe a breakdown of product type?) categories. In any case, thanks for posting this!
First off, TFTP. I marked some stuff I thought was most relevant. This is helping remind me of some things I think about LLM confabulation and lack of binding/reasoning... I don't have my thoughts fully formed but there's something here about global inconsistency despite local compatibility, and how that cashes out in Problems. Something a little like an inability to define a sheaf, or homology detection, or something like that? I might say more better words later about it.
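To gesture at what I mean, here's a purely illustrative textbook sketch (my toy example, the classical rank-one local system on the circle, and emphatically not a claim that this is the right formalism for LLM confabulation):

```latex
% Toy picture of "locally compatible, globally inconsistent":
% cover the circle S^1 by two open arcs U and V whose overlap U \cap V has two components A and B,
% and take the rank-one local system \mathcal{L} on S^1 with monodromy -1 (the Mobius-style twist).
% Nonzero sections exist over U and over V, and can be matched on either overlap component alone,
% but matching them on A forces a sign mismatch on B, so there is no nonzero global section:
\[
  H^0(U;\mathcal{L}) \cong \mathbb{Z}, \qquad
  H^0(V;\mathcal{L}) \cong \mathbb{Z}, \qquad
  H^0(S^1;\mathcal{L}) = 0, \qquad
  H^1(S^1;\mathcal{L}) \cong \mathbb{Z}/2,
\]
% with the obstruction to gluing recorded by the nonzero class in H^1.
```

That's just the standard Čech picture, of course; whether anything like it actually applies to model internals is exactly the not-fully-formed part.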
Thanks for writing this. I've noticed something in the same vein as well - for the last few months I've felt increasingly like a lot of the pieces and systems agent foundations types use have some kind of important commonality to them, though I'm not yet sure what form that could take. Condensation and natural latents and some of Francis Rhys Ward's work in II-MAIDS; ontology mismatch and Bayes nets and imprecise probability and category theory showing up repeatedly. There's something there to construct, but what?
I'm doing Budget Inkhaven: https://tiled-with-pentagons.blogspot.com/search/label/budget%20inkhaven
Similar to my earlier writing regimen, but shorter, faster, and slightly more daring. I'll crosspost anything I like especially well here.
Hm. So it's some larger problem of which the Curse of It From Bit is a subcluster, then? If I've understood you right, this is also something I've spent time thinking about, mostly by way of finding instances of it with my face. Something like... some unfeeling machine/system/structure that fails to live up to its own spec, or that has a bad spec or no spec at all, and that makes that state of affairs your problem, you silly goose who wants to actually do things, you. There's the flavor of that thing from Zen and the Art of Motorcycle Maintenance, where some tiny trivial-feeling problem humiliates you by being trivially small while nonetheless demanding close attention and deep understanding, because it turns out to be a critical blocker to whatever problem you're actually trying to solve. I think the yakshaving bit is downstream of the spec-failure piece - it seems like a common failure mode of degraded systems/designs.
Oh! I have thoughts about this one!
On my model, it's a matter of basically anything that turns Bits into Its being kinda cursed, especially if whatever process lacks human actuation or even oversight and isn't done to truly exacting standards. Printing is an example of this; 3D printing and general CNC is another; graphic design (and the depths of what a color is) is a third. I'm thinking of the kind of thing you might find on r/shittyrobots as well; a minimal non-example might be polypeptide/oligonucleotide preparation, though I'm not totally clear on the workflow there or how much human labor is involved.
My gears-level model here is most starkly illustrated by 3D printing: something about the physical object creation might change, or be underspecified, or rely on shoddily-made connectors of some kind, and in any case, the actual machine that does the actuating isn't set up to check for failures during the workflow, or indeed to do much more than the simple actuation and an initial calibration step. On top of that, the thing goes from being data - which is everywhere and wherever you want it - to being a specific thing in a specific place; often you don't even get the affordance of knowing in advance which place that is. Possibly even the yakshaving you notice is a matter of some kind of associated ugh field.
Spoons are for doing ordinary things, doing things at all. [Blood] is for doing difficult things, doing them more or harder, doing them at significant internal cost.
This resonates with something in me. Something kinda malnourished and shaky and weirdly shaped, but it resonates all the same. Most of the things I like best about myself - the ones I'm proudest of having done, or that have caused other people to be impressed with me, or that nourished me in the doing - were things that had this pattern. I'm trying to do more of that now. Thanks for painting it so clearly.