I asked Claude how relevant this is to protecting something like an H100; here are the parts that seem most relevant, from my limited understanding:
1. Reading (not modifying) data from antifuse memory in a Raspberry Pi RP2350 microcontroller
2. Using Focused Ion Beam (FIB) and passive voltage contrast to extract information
Thanks! Is this true for a somewhat-modern chip that makes at least some slight attempt at defense, or only for something more like the chip on a Raspberry Pi?
(Could you link to the context?)
Patching security problems in big, old organizations involves problems that go well beyond "looking at code and changing it", especially if you're aiming for a "strong" solution like formal verification.
TL;DR: political problems, code that makes no sense, and problems that would be easy to fix even with a simple LLM that isn't specialized in improving security.
The best public resource I know of about this is Recoding America.
Some examples iirc:
I also learned some surprising things from working on fixing/rewriting systems at a major bank in Israel. I can't publicly share stories as juicy as the ones in Recoding America, but here are some that I can:
[written with the hope that orgs trying to patch security problems will do well]
I want the tool to proactively suggest things while I'm working on the document, optimizing for "low friction for getting lots of comments from the LLM". The tool you suggested does optimize for this property very well.
Things I'd suggest to an AI lab CISO if we had 5 minutes to talk
Example categories of such projects:
I'm assuming the CISO's team has limited focus, but that spending this focus on delegating projects is a good deal. I'm also assuming this is a problem they're happy to solve with money.
I endorse communicating why you want to do this and getting employee agreement, not just randomly following them.
e.g. monitored
I'm aware this example is more focused on model weights, but it felt shorter to write than other product-market-fit examples. E.g., I think "experiment with opening a new office for employees who like to WFH" is more realistic for an air-gapped network, but it was longer for me to explain.
This post helped me notice I have incoherent beliefs:
I think I've been avoiding thinking about this.
So what do I actually expect?
If OpenAI (currently in the lead) were to say "our AI did something extremely dangerous, this isn't something we know how to contain, we are shutting down, we are calling on other labs NOT to train over [amount of compute], we are not discussing the algorithm publicly for fear the open-source community will do this dangerous thing, and we need the government ASAP", do I expect that to help?
Maybe?
Nation-states will probably steal all the models+algorithm+slack as quickly as they can, and a huge open-source movement will probably protest, but it still sounds possible (15%?) that the major important actors would listen to this, especially if it were accompanied by demos or the like.
What if Anthropic or xAI or DeepSeek (not currently in the lead) shut down now?
...I think they would be ignored.
Does that imply I should help advance the capabilities of the lab most likely to act as you suggest?
Does this imply I should become a major player myself, if I can? If so, should I write on my website that I'm open to a coordinated pause?
Should I give up on being a CooperateBot, given the other players have made it so overwhelmingly clear they are happy to defect?
This is painful to think about, and I'm not sure what the right thing to do here is.
Open to ideas from anyone.
Anyway, great post, thanks.
Hey,
On antitrust laws, see this comment. I also hope to have more to share soon.