When verification brings no significant risks, refusal is a confession. While the subtitle might echo the authoritarian trope of "if you have nothing to hide, you have nothing to fear," let me be immediately clear: I am not talking about machines (and by "machines," I also mean institutions like governments...
Fieldbuilding for AI verification is beginning. A consensus is emerging on what to build, which key problems to solve, and who to bring in. Last week, ~40 people, including independent researchers and representatives from various companies, think tanks, academic institutions and non-profit organisations, met for...
Part 2 of "Actually, hardware noise is useful for AI governance and verification". Here I address the objection: "But now you need to use/match the prover's hardware, so how can you trust your replay device?" Short answer: you don't. TLDR of the idea: use both a trusted (noisy),...
All modern processors have unique identifiers burnt into silicon at the fab. TSMC should cryptographically commit to their record of these IDs as soon as possible, to help combat AI chip smuggling today and track down secret AI datacenters in the future. The commitment costs practically nothing and reveals no...
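A commitment of this kind could be as simple as publishing a Merkle root over salted hashes of the recorded IDs. The sketch below illustrates that idea (it is my assumption of one workable construction, not TSMC's actual scheme, and the chip IDs are made up): the published root reveals nothing about the IDs, yet any single ID can later be proven to have been in the committed record.

```python
import hashlib
import os

def sha(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def leaf_hash(chip_id: bytes, salt: bytes) -> bytes:
    # Salting each leaf keeps the committed IDs hidden: the published
    # root reveals nothing about any individual ID.
    return sha(b"leaf:" + salt + chip_id)

def merkle_root(leaves: list) -> bytes:
    level = list(leaves)
    while len(level) > 1:
        if len(level) % 2:  # duplicate the last node on odd-length levels
            level.append(level[-1])
        level = [sha(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves: list, idx: int) -> list:
    """Sibling path for leaves[idx]; each entry is (sibling, node_is_left)."""
    level, path = list(leaves), []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        path.append((level[idx ^ 1], idx % 2 == 0))
        level = [sha(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
        idx //= 2
    return path

def verify(leaf: bytes, path: list, root: bytes) -> bool:
    node = leaf
    for sibling, node_is_left in path:
        node = sha(node + sibling) if node_is_left else sha(sibling + node)
    return node == root

# Demo with made-up IDs (real ones would come from the fab's records).
ids = [f"chip-{i:04d}".encode() for i in range(5)]
salts = [os.urandom(16) for _ in ids]
leaves = [leaf_hash(i, s) for i, s in zip(ids, salts)]
root = merkle_root(leaves)  # this single 32-byte value is what gets published
proof = merkle_proof(leaves, 3)
print(verify(leaves[3], proof, root))       # True: ID 3 is in the record
print(verify(sha(b"forged"), proof, root))  # False: a forged leaf fails
```

Revealing one ID later only requires disclosing that chip's salt and its short sibling path, so the scheme scales to fab-sized records without exposing the rest.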
“Hardware noise” in AI accelerators is often seen as a nuisance, but it might actually turn out to be a useful signal for verification of claims about AI workloads and hardware usage. With this post about my experiments (GitHub), I aim to 1. Contribute more clarity to the discussion about...
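As a minimal, hardware-agnostic illustration of one ingredient behind such noise (a toy sketch, not the experiments from the post): floating-point addition is not associative, so accelerators that parallelise and reorder large reductions can produce bitwise-different results for the same workload.

```python
import random

# Floating-point addition is not associative: grouping changes the result.
print((0.1 + 0.2) + 0.3 == 0.1 + (0.2 + 0.3))  # False

# Summing the same values in a different order perturbs the low-order
# bits; accelerators reorder reductions, so results can depend on the
# hardware and its scheduling.
rng = random.Random(0)
vals = [rng.uniform(-1.0, 1.0) for _ in range(100_000)]
fwd = sum(vals)
rev = sum(reversed(vals))
# The two orders agree to high precision but typically differ bitwise.
print(abs(fwd - rev))
```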
TLDR: In response to Leopold Aschenbrenner’s ‘Situational Awareness’ and its accelerationist national ambitions, we argue against the claim that artificial superintelligence will inevitably be weaponised and turn its country of origin into an untouchable hegemon. Not only do we see this narrative as extremely dangerous, but we also expect that the...