Mathematician, agent foundations researcher, doctor. A strange primordial spirit left over from the early dreamtime, the conditions for the creation of which no longer exist; a creature who was once told to eat math and grow vast and who took that to heart; an escaped feral academic.
Reach out to me on Discord and tell me you found my profile on LW if you've got something interesting to say; you have my explicit permission to try to guess my Discord handle if so. You can't find my old abandoned-for-being-mildly-infohazardously-named LW account but it's from 2011 and has 280 karma.
A Lorxus Favor is worth (approximately) one labor-day's worth of above-replacement-value specialty labor, given and received in good faith, and used for a goal approximately orthogonal to one's desires, and I like LessWrong because people here will understand me if I say as much.
Apart from that, and the fact that I am under no NDAs, including NDAs whose existence I would have to keep secret or lie about, you'll have to find the rest out yourself.
I'm doing Budget Inkhaven: https://tiled-with-pentagons.blogspot.com/search/label/budget%20inkhaven
Similar to my earlier writing regimen, but shorter, faster, and slightly more daring. I'll crosspost anything I like especially well here.
Hm. So it's some larger problem of which the Curse of It From Bit is a subcluster, then? If I've understood you right, this is also something I've spent time thinking about, mostly by way of finding it with my face. Something like... some unfeeling machine/system/structure that fails to live up to its own spec, or which has a bad spec or no spec at all, and which makes that state of affairs your problem, you silly goose who wants to actually do things, you. There's the flavor of that thing from Zen and the Art of Motorcycle Maintenance, where a tiny trivial-feeling problem humiliates you by feeling trivially small while nonetheless demanding close attention and deep understanding, by way of being a critical blocker to whatever problem you were actually trying to solve. I think the yak-shaving bit is downstream of the spec-failure piece - it seems like a common failure mode of degraded systems/designs.
Oh! I have thoughts about this one!
On my model, it's a matter of basically anything that turns Bits into Its being kinda cursed, especially if whatever process lacks human actuation or even oversight and isn't done to truly exacting standards. Printing is an example of this; 3D printing and general CNC is another; graphic design (and the depths of what a color is) is a third. I'm thinking of the kind of thing you might find on r/shittyrobots as well; a minimal non-example might be polypeptide/oligonucleotide preparation, though I'm not totally clear on the workflow there or how much human labor is involved.
My gears-level model here is most starkly illustrated by 3D printing - something about the physical object's creation might change, or be underspecified, or rely on shoddily-made connectors of some kind, and in any case the actual machine that does the actuating isn't set up to check for failures mid-workflow, or indeed to do much more than the simple actuation and an initial calibration step. On top of that, the thing goes from being data - which is everywhere and wherever you want it - to being a specific object in a specific place; often you don't even get the affordance of knowing in advance which place that is. Possibly even the yak-shaving you notice is a matter of some kind of associated ugh field.
Spoons are for doing ordinary things, doing things at all. [Blood] is for doing difficult things, doing them more or harder, doing them at significant internal cost.
Several years on from this post and @Raemon's comment, where do we stand? From my own starkly limited ground-level perspective, it looks like pretty much everyone who's tried to figure out the secrets of training has smashed into the same flavor of mixed success, shattered, and given it up as a bad job. This seems sad and desperately misguided, but what do I know? I haven't exactly tried of my own accord, and I showed up too late to the party for the height of CFAR and the like. I know a few people are still poking at questions like this one from various angles, but everyone I've talked to about the topic (5 or so of them?) seems extremely burned out on the whole enterprise, so maybe it's all for the best that I never went.
What would it look like to actually try again in these latter days? If the model in this post is correct - and I think that it is, that training is vastly more valuable than selection - then there are gains to be had. It'd start - I think - by figuring out what went wrong previously, and by noticing and carefully attending to all the skulls without drowning in despair about them. The next step might be some careful trials - maybe randomized ones, maybe ones thoughtfully tailored to the student - with tests after the fact at a few time intervals. Something about the zone of proximal development is tickling the back of my mind here; so is something about crystallization and application of the skill, especially in a way that psychologically registers that you Have The Skill, which probably calls for some kind of ordeal? Hard to say. I only have scattered thoughts about this. If anyone wants to try to elicit more of them from me, I welcome the attempt.
I didn't realize it when I posted this, but the anvil problem points more sharply at what I want to argue about when I say that making the NAS blind to its own existence will make it give wrong answers; I don't think that the wrong answers would be limited to just such narrow questions, either.
How did this turn out?
Flip a coin if you are struggling to decide between options in a situation where the stakes are relatively low. This exposes your gut instinct to you immediately, which is more than good enough most of the time, and it is far faster than reasoning your way to an answer.
Better yet, if you subscribe to many-worlds and you do actually care about trying both options, use a quantum coin. Don't take one option - take both of them.
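A minimal sketch of the classical low-stakes version in Python - the option names are just placeholders, and the many-worlds variant would need an actual quantum randomness source, which I'm not specifying here:

```python
import secrets

def coin_flip_decision(option_a: str, option_b: str) -> str:
    """Pick one of two low-stakes options at random.

    The randomness isn't really the point: notice how you feel the
    instant the result comes up. Relief or disappointment means your
    gut already had an answer.
    """
    return option_a if secrets.randbelow(2) == 0 else option_b

# Hypothetical example options:
print(coin_flip_decision("thai food", "leftovers"))
```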
...benign scenarios in which AIs get legal rights and get hired to run our society fair and square. A peaceful AI takeover would be good, IMO.
...humans willingly transfer power to AIs through legal and economic processes. I think this second type will likely be morally good, or at least morally neutral.
Why do you believe this? For my part, one of the major ruinous scenarios on my mind is one where humans delegate control to AIs that then goal-misgeneralize, breaking complex systems in the process; another is one where AIs outcompete ~all human economic efforts "fair and square" and end up owning everything, including (e.g.) rights to all water, partially because no one felt strongly enough about ensuring an adequate minimum baseline existence for humans. What makes those possibilities so unlikely to you?
Thanks for writing this. I've noticed something in the same vein as well - for the last few months I've felt increasingly like a lot of the pieces and systems agent foundations types use have some kind of important commonality to them, though I'm not yet sure what form that could take. Condensation and natural latents and some of Francis Rhys Ward's work in II-MAIDS; ontology mismatch and Bayes nets and imprecise probability and category theory showing up repeatedly. There's something there to construct, but what?