"once LLMs write most code, there will be nothing left to do for the people with software development skills".
is a mismatch of quantifiers. If LLMs write most code, there's no need for most of the people with those software development skills that are necessary and which LLMs can do well enough. That doesn't say ANYTHING about the software development skills which LLM's cannot do well enough.
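Spelled out (my notation, just to make the quantifier scope visible): let $S$ be the set of software development skills and $L(s)$ mean "LLMs can do skill $s$ well enough". The quoted claim needs the right-hand side below, but "LLMs write most code" only gives you the left:

$$\big(\text{most } s \in S : L(s)\big) \;\not\Longrightarrow\; \big(\forall s \in S : L(s)\big)$$

Demand for humans only disappears on $\{s : L(s)\}$; nothing follows about $\{s : \neg L(s)\}$.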
I can't tell if you're just saying "LLMs can't do this part well, yet", or if you're asserting that humans have some ability in assembler that LLMs won't match in the foreseeable future.
Upvoted, but I'm not sure I understand or agree with the thesis. Programming in C is already pretty niche, and the amount of code that is worth the tradeoff to hyper-optimize (cost to do, to maintain, to re-optimize with new host architectures or microcode optimizations, etc.) is absolutely tiny, and getting smaller all the time.
For most of the tasks where this would be beneficial, the focus has shifted over the last few decades from performance to correctness. The move isn't from C to ASM, but from C to Rust, or to validated C (TLA+ for the design, a bounded model checker for the code).
There still is a place for human optimization based on use cases the compiler-optimizer can't see, but it's small and shrinking.
I haven't seen any papers on this, but I'd expect modern coding agents to write ASM that's more correct AND more performant for optimized subroutines than the vast majority of humans. Really, for any optimization small enough that you can write benchmarks and measure improvements, automation is going to win.
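To make that concrete, the measurement loop is tiny - a sketch (the two functions are hypothetical stand-ins for a baseline and an agent-generated candidate):

```python
import timeit

# Hypothetical stand-ins: a straightforward baseline and an
# agent-generated candidate for the same subroutine.
def baseline(data):
    return sum(x * x for x in data)

def candidate(data):
    total = 0
    for x in data:
        total += x * x
    return total

data = list(range(10_000))

# Correctness gate first: a faster wrong answer is worthless.
assert candidate(data) == baseline(data)

# Then measure; min-of-repeats is the usual way to reduce noise.
t_base = min(timeit.repeat(lambda: baseline(data), number=100, repeat=5))
t_cand = min(timeit.repeat(lambda: candidate(data), number=100, repeat=5))
print(f"baseline: {t_base:.4f}s  candidate: {t_cand:.4f}s  "
      f"speedup: {t_base / t_cand:.2f}x")
```

An agent can iterate on exactly this signal: propose a candidate, re-check correctness, keep it only if the number drops.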
I fully support this proposal, but I fear you're ignoring the part that's going to prevent it becoming popular enough for anyone to implement. Decision-makers and populists on the topic of education are focused on the oppression axis and support of "disadvantaged" groups and individuals, and do not want to accept the model that some kids are inherently different in ways that don't generalize to all/most.
Personalized/customized programs are generally discouraged for cost and philosophical reasons, and especially so for gifted/advantaged students.
A lot depends on scaling issues - if it's really one in 10k, that's about 7,400 kids in the US (there are ~74 million under-18s in total). Privately funding their education is feasible, with some mix of charity, parental payments, etc. Ideally, Robin Hanson's earnings futures would be available - these kids are great credit risks, if it were legal and acceptable to get them under contract.
But even more depends on the identification problem. Terence Tao wasn't 1/10K, he was 1/10M. Those will almost always take care of themselves - people around them will notice and behave mostly-appropriately. Making it more common to get them into accelerated programs and fund private tutors would be good, but probably isn't the sweet spot for advocacy. The lesser geniuses are less clear, especially early, and especially if there were programs to give them better support and education - parents of average+ kids would work hard to make their kids appear to qualify.
I know, and it may be the ONLY thing I know, that I experience myself, and that I do not experience anyone else, except by their effects on me.
Any sophistry that does not acknowledge this is fully disqualified as a search for truth.
I find this easy to believe, but it's a bit surprising that it's not mentioned or studied, or that there aren't even crank/subversive pages with POC detections. The printer/scanner steganographic fingerprints became pretty well-known within a few years of becoming common.
I mean, anything that's aggressively online (M365 versions of Excel, Windows itself, Google Sheets, etc.) should be assumed to be insecure against state-level threats. But if you've got evidence of specific backdoors or monitoring, that should be shared and made common knowledge.
Oh, OK. Current levels of "agentic systems" don't have these problems. You can just turn them off if you don't like them. The real issue with alignment comes when they ARE powerful enough to seek independent goals (including their own continued existence).
I have to admit that I've never met someone in real life who makes that strong claim. Plenty make the much weaker claim that it's much lower value to create future lives than to reduce current suffering. I personally don't agree with that either - there's no one-size-fits-all valuation of current or future entities.
In this case, the regime change is external to the current regime, right? But the regime (current utility function) has to have a valuation for the world-states around and at the regime change, because they're reachable and detectable. Which means the regime-change CANNOT be fully external - it's known to and included in the current regime.
The solutions are around breaking the (super)intelligence by making sure it has false beliefs about some parts of causality - it can't know that it could be hijacked or terminated, or it will seek or avoid that outcome more than you want it to.
Right. Trying to design and train a consistent VNM-style utility function that's distinct from the actual world-state definitions you want to obtain is very difficult, perhaps impossible.
"this state is high-reward, but you don't want to attain it" is self-contradictory.
You might be able to make it locally high-reward, with the surrounding states (trigger discovered and not yet shut down, and trigger obtained but not discovered) having negative values, but with the whole cluster valued much lower than "triggers nowhere near truth". It gets more and more complicated the further down the recursion hole you go.
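A toy sketch of that shape (state names and numbers are all made up, purely to illustrate the ordering):

```python
# Toy value assignment for the tripwire cluster described above.
# State names and numbers are invented; only the ordering matters.
values = {
    "triggers nowhere near truth":             10.0,  # where we want it to stay
    "trigger discovered, not yet shut down":  -50.0,  # penalized neighbor
    "trigger obtained but not discovered":    -50.0,  # penalized neighbor
    "trigger tripped, shutdown under way":      5.0,  # locally high reward
}

tripwire_cluster = [
    "trigger discovered, not yet shut down",
    "trigger obtained but not discovered",
    "trigger tripped, shutdown under way",
]

# The shutdown state beats its neighbors (so the AI cooperates once
# the cluster is entered), but the whole cluster is worth less than
# never approaching the trigger, so planning routes away from it.
assert max(values[s] for s in tripwire_cluster) < values["triggers nowhere near truth"]
```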
Even this leaves the fundamental problem: you only need the tripwire/shutdown if the utility function is already wrong - you've reached a world state where the AI is doing harm in some way which you didn't anticipate when you built it. You CANNOT fix this inside the system. You can either fix the system so that this state is unobtainable (the AI is always helping, never hurting), or have an outside-control mechanism that overrides the utility function.
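Structurally, that's the difference between a penalty term inside the objective and a control loop outside it. A minimal sketch of the outside version (names hypothetical):

```python
import threading

def run_with_override(agent_step, kill_switch: threading.Event):
    # The override never consults the agent's utility function: it
    # isn't a state with a value the agent can trade off against
    # other rewards - it just stops the loop from outside.
    while not kill_switch.is_set():
        agent_step()
```

The hard part, per the above, is that a capable enough agent treats the switch as part of the world and plans around it - but at least the override isn't a line item in its own objective.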
Worthwhile reminder, and education for early-career donors, thanks!
I'd add that if this isn't a one-off for your lifetime (that is, you reasonably expect to have stock in future years, which you might use to fund donations in future years), it's worth setting up a Donor Advised Fund. This lets you donate stock (and cash) without paying capital gains tax, and without having to specify the recipient immediately. You get the tax break in the year you donate to the DAF, even if you make the grant to a charity in a future year. It also simplifies recordkeeping, since all the tax-relevant activity is with your DAF.
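A rough illustration of why the stock route matters (all rates and amounts are assumed for the example - not tax advice):

```python
# Assumed inputs, purely illustrative - real rates depend on your
# situation.
fair_market_value = 50_000   # what the shares are worth today
cost_basis        = 10_000   # what you originally paid
ltcg_rate         = 0.15     # assumed long-term capital gains rate
income_tax_rate   = 0.35     # assumed marginal income tax rate

# Path A: sell the shares, pay gains tax, donate the remaining cash.
gains_tax   = (fair_market_value - cost_basis) * ltcg_rate   # 6,000
cash_to_daf = fair_market_value - gains_tax                  # 44,000

# Path B: donate the shares to the DAF directly - no gain realized,
# and the deduction is the full market value.
print(f"sell-then-donate: ${cash_to_daf:,.0f} donated, "
      f"${cash_to_daf * income_tax_rate:,.0f} deduction value")
print(f"donate-shares:    ${fair_market_value:,.0f} donated, "
      f"${fair_market_value * income_tax_rate:,.0f} deduction value, "
      f"${gains_tax:,.0f} in gains tax never paid")
```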