Nathan Helm-Burger

AI alignment researcher, ML engineer. Master's in Neuroscience.

I believe that cheap and broadly competent AGI is attainable and will be built soon; this gives me timelines of roughly 2024-2027. Here's an interview I gave recently about my current research agenda. I think the best path forward to alignment is safe, contained testing of models designed from the ground up for alignability and trained on censored data (simulations with no mention of humans or computer technology). I think current mainstream ML technology is close to a threshold of competence beyond which it will be capable of recursive self-improvement, that this automated process will mine neuroscience for insights, and that it will quickly become far more effective and efficient. It would be quite bad for humanity if this happened in an uncontrolled, uncensored, un-sandboxed situation, so I am trying to warn the world about this possibility.

See my prediction markets here:

 https://manifold.markets/NathanHelmBurger/will-gpt5-be-capable-of-recursive-s?r=TmF0aGFuSGVsbUJ1cmdlcg 

I also think that current AI models pose misuse risks, which may continue to get worse as models get more capable, and that this could potentially result in catastrophic suffering if we fail to regulate this.

I now work for SecureBio on AI-Evals.

Relevant quotes:

"There is a powerful effect to making a goal into someone’s full-time job: it becomes their identity. Safety engineering became its own subdiscipline, and these engineers saw it as their professional duty to reduce injury rates. They bristled at the suggestion that accidents were largely unavoidable, coming to suspect the opposite: that almost all accidents were avoidable, given the right tools, environment, and training." https://www.lesswrong.com/posts/DQKgYhEYP86PLW7tZ/how-factories-were-made-safe 

 

"The prospect for the human race is sombre beyond all precedent. Mankind are faced with a clear-cut alternative: either we shall all perish, or we shall have to acquire some slight degree of common sense. A great deal of new political thinking will be necessary if utter disaster is to be averted." - Bertrand Russel, The Bomb and Civilization 1945.08.18

 

"For progress, there is no cure. Any attempt to find automatically safe channels for the present explosive variety of progress must lead to frustration. The only safety possible is relative, and it lies in an intelligent exercise of day-to-day judgment." - John von Neumann

 

"I believe that the creation of greater than human intelligence will occur during the next thirty years.  (Charles Platt has pointed out the AI enthusiasts have been making claims like this for the last thirty years. Just so I'm not guilty of a         relative-time ambiguity, let me more specific: I'll be surprised if this event occurs before 2005 or after 2030.)" - Vernor Vinge, Singularity

Comments

My personal take is that projects where the funder is actively excited about the work, understands it, and wants frequent reports tend to get things done faster... And considering the circumstances, faster seems good. So I'd recommend supporting something you find interesting and inspiring, and then keeping on top of it.

In terms of groups which have their eyes on a variety of unusual and underfunded projects, I recommend both the Foresight Institute and AE Studio.

In terms of specific individuals/projects that are doing novel and interesting things and are low on funding (disproportionately representing ones I'm involved with, since those are the ones I know about):

Self-Other Overlap (AE studio)

Brain-like AI safety (Steven Byrnes, or me; my agenda is very different from Steven's, focusing on modularity for interpretability rather than on reproducing human empathy circuits)

Deep exploration of the nature and potential of LLMs (Upward Spiral Research, particularly Janus aka repligate)

Decentralized AI Governance for mutual safety compacts (me, and ??? surely someone else is working on this)

Pre-training on rigorous ethical rulesets, plus better cleaning of pretraining data (Erik Passoja, Sean Pan, and me)

  • This one, I feel, would best be tackled in the context of a large lab that can afford many experimental pre-training runs on smallish models, but there seems to be a disconnect between safety researchers at big labs, who focus on post-training work, and this agenda, which focuses more on pre-training.

Oh, for sure mammals have emotions much like ours. Fruit flies and shrimp? Not so much. Wrong architecture, missing key pieces.

I call this phenomenon a "moral illusion". You are engaging empathy circuits on behalf of an imagined other who doesn't exist. Category error. The only unhappiness is in the imaginer, not in the anthropomorphized object. I think this is likely what's going on with the shrimp welfare people as well. Maybe shrimp feel something, but I doubt very much that they feel anything like what the worried people project onto them. It's a thorny problem to be sure, since those empathy circuits are pretty important for helping humans not be cruel to other humans.

Answer by Nathan Helm-Burger

Update: Claude Code and s3.7 have been a significant step up for me. Previously, s3.6 was giving me about a 1.5x speedup, and s3.5 more like 1.2x. CC+s3.7 is solidly over 2x, with periods of more than that when working on easy, well-represented tasks in areas I don't know well myself (e.g. Node.js).

Here's someone who seems to be getting a lot more out of Claude Code though: xjdr

i have upgraded to 4 claude code sessions working in parallel in a single tmux session, each on their own feature branch and then another tmux window with yet another claude in charge of merging and resolving merge conflicts

"Good morning Claude! Please take a look at the project board, the issues you've been assigned and the open PR's for this repo. Lets develop a plan to assign each of the relevant tasks to claude workers 1 - 5 and LETS GET TO WORK BUDDY!"

https://x.com/_xjdr/status/1899200866646933535

Been in Monk mode and missed the MCP and Manus TL barrage. i am averaging about 10k LoC a day per project on 3 projects simultaneously and id say 90% no slop. when slop happens i have to go in and deslop by hand / completely rewrite but so far its a reasonable tradeoff. this is still so wild to me that this works at all. this is also the first time ive done something like a version of TDD where we (claude and i) agonize over the tests and docs and then team claude goes and hill climbs them. same with benchmarks and perf targets. Code is always well documented and follows google style guides / rust best practices as enforced by linters and specs. we follow professional software development practices with issues and feature branches and PRs. i've still got a lot of work to do to understand how to make the best use of this and there are still a ton of very sharp edges but i am completely convinced this workflow / approach is the future in a way cursor / windsurf never made me feel or believe (i stopped using them after being bitten by bugs and slop too often). this is a power user's tool and would absolutely ruin a codebase if you weren't a very experienced dev and tech lead on large codebases already. ok, going back into the monk mode cave now
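For concreteness, here is a rough sketch of how a parallel setup like the one xjdr describes might be scripted. Everything in it is my own assumption rather than a description of his actual setup: the `claude` CLI invocation, the session and branch names, and the use of git worktrees (so each worker can keep its own branch checked out) are illustrative choices.

```python
# Hypothetical sketch (not xjdr's actual setup): launch several Claude Code
# workers in one tmux session, each in its own git worktree/feature branch,
# plus a final window for merging. Assumes `tmux`, `git`, and the `claude`
# CLI are installed and that this script is run from the main repo checkout.
import subprocess

SESSION = "claude-team"  # assumed tmux session name
BRANCHES = ["feature/a", "feature/b", "feature/c", "feature/d"]  # placeholders

def run(*args):
    """Run a command, raising if it fails."""
    subprocess.run(args, check=True)

# Detached tmux session to hold all the worker windows.
run("tmux", "new-session", "-d", "-s", SESSION)

for i, branch in enumerate(BRANCHES, start=1):
    worktree = f"../worker-{i}"
    # A separate worktree lets each worker keep its own branch checked out.
    run("git", "worktree", "add", "-b", branch, worktree)
    run("tmux", "new-window", "-t", SESSION, "-n", f"worker-{i}")
    # Start Claude Code inside that worktree.
    run("tmux", "send-keys", "-t", f"{SESSION}:worker-{i}",
        f"cd {worktree} && claude", "C-m")

# One more window whose job is merging the feature branches and
# resolving any conflicts.
run("tmux", "new-window", "-t", SESSION, "-n", "merger")
run("tmux", "send-keys", "-t", f"{SESSION}:merger", "claude", "C-m")
```

Each worker window would then get its own version of the "Good morning Claude" prompt quoted above, with the merger window handling integration.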

This is a big deal. I keep bringing this up, and people keep saying, "Well, if that's the case, then everything is hopeless. I can't even begin to imagine how to handle a situation like that."

I do not find this an adequate response. Defeatism is not the answer here.

If what the bad actor is trying to do with the AI is just to get a clear set of instructions for a dangerous weapon, and a bit of help debugging lab errors... that costs only a trivial amount of inference compute.

Finally got some time to try this. I made a few changes (with my own Claude Code), and now it's working great! Thanks!

This seems quite technologically feasible now, and I expect the outcome would mostly depend on the quality and care that went into the specific implementation. I am even more confident that if the bot's comments get further tuned via feedback, so that initial flaws get corrected, then the bot would quickly (after a few hundred such pieces of feedback) become 'good enough' to pass most people's bar for inclusion.
