I have just seen this in curated, but I had previously commented on Zvi's reporting on it.
Obviously, any nation state aware of the escalation ladder that wanted to be the first to develop ASI would put their AI cluster deep underground and air-gap it. We must not allow a mine shaft gap and all that. Good luck to their peer superpowers trying to actually conquer the hardened bunker.
Also, to MAIM, you have to know that you are in imminent danger. But with ASI nobody is sure when the point of fast takeoff -- if there is any -- might start. Is that cluster in that mine still trying to catch up to ChatGPT, or has it reached the point where it can do useful AI research and find algorithmic gains far beyond what humans would have discovered in a millennium? Would be hard to tell from the outside.
"Emphasize terrorist-proof security over superpower-proof security. Though there are benefits to state-proof security (SL5), this is a remarkably daunting task that is arguably much less crucial than reaching security against non-state actors and insider threats (SL3 or SL4)."
This does not seem to have anything to do with superintelligence. Daesh is not going to be the first group to build ASI, not in a world where US AI companies burn through billions to get there as soon as possible.
The Superintelligence Strategy paper mentions the 1995 Tokyo subway sarin attack, which killed 13 people. If anything, that attack highlights how utterly impractical nerve gas is for terrorist attacks. That particular group of crazies spent a lot of time on synthesizing a nerve gas (as well as on a few other flashy plans), only for their death toll to end up similar to that of a lone wolf school shooter or someone driving a truck into a crowd. Even if their death toll had been increased by an order of magnitude due to an AI going "Sure, here are some easy ways to disperse sarin in a subway carriage", their attacks would still be pretty ineffective compared to more mundane attacks such as bombs or knives.
Basically, when DeepSeek released their weights (so terrorist groups can run the model locally instead of foolishly relying on company-hosted AI services, where any question about the production of "WMD" would raise a giant red flag), I did not expect that this would be a significant boon for terrorists, and so far I have not seen anything convincing me of the opposite.
But then again, that paper seems to be clearly targeted at the state security apparatus, and terrorists have been the bogeyman of that apparatus since GWB, so it seems obvious to emphasize the dangers of AI with "but what if terrorists use it" instead of talking about x-risks or the like.
The reason why MAD works (sometimes) is because a nuclear first strike is unambiguous: early warning systems detect the launches, so the attacked side knows it is under attack and can retaliate while the missiles are still in the air.
By contrast, there is no fire alarm for ASI. Nobody knows how many nodes a neural net needs to start a self-improvement cascade which will end in the singularity, or if such a thing is even possible. Also, nobody knows if an ASI can jump by a few hundred IQ points just through algorithmic gains or if it will need to design new chip fabs first.
--
Some more nitpicks about specifics:
To protect against 'cyber' attacks, the obvious defense is to air-gap your cluster. Granted, there have been attacks on air-gapped systems such as Stuxnet. But that took years of careful planning and an offensive budget which was likely a few OOM higher than what the Iranians were spending on IT security, and it worked exactly once.
Geolocation in chips: Data centers generally have poor GPS reception. You could add circuitry which requires the chip to connect to its manufacturer and measure the network delay, though. I will note that DRM, TPM, security enclaves and the like have been a wet dream of the content industry for as long as I have been alive, and that their success is debatable -- more often than not, they are cracked sooner rather than later. If your adversary can open the chips and modify the circuitry (at a larger feature size -- otherwise they would just build their own AI chips), protecting against all possible attacks seems hard. Also, individual chips likely do not have the big-picture context of what they are working on, e.g. whether they are training a large LLM or a small one. To extend the nuclear weapon analogy: sure, give your nukes anti-tampering devices, but the primary security should be that it is really hard to steal your nukes, not that removing the anti-tampering device is impossible.
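To make the network-delay idea concrete, here is a minimal sketch (my own toy model, not something from the paper) of the distance bound a round-trip measurement can give; the function and the assumption that signals travel at roughly two thirds of the speed of light in fiber are mine:

```python
# Toy sketch: how coarse a location bound a network-delay handshake gives.
# Signals in fiber travel at roughly 2/3 c, i.e. about 200 km per millisecond,
# so a measured round-trip time only *upper-bounds* the chip's distance.

C_FIBER_KM_PER_MS = 200  # assumed propagation speed in fiber

def max_distance_km(round_trip_ms: float, processing_ms: float = 0.0) -> float:
    """Upper bound on the chip's distance from the verification server.

    processing_ms is the time the chip is assumed to need to compute the
    response; an attacker can only make it smaller, so subtracting it keeps
    the bound conservative.
    """
    one_way_ms = max(round_trip_ms - processing_ms, 0.0) / 2
    return one_way_ms * C_FIBER_KM_PER_MS

# A 30 ms round trip already allows the chip to be anywhere within ~3000 km,
# i.e. most of a continent -- not a very precise geolocation primitive.
print(max_distance_km(30))  # 3000.0
```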
Drones and pre-existing military: An AI which works by running more effective agents in human-designed drones is not an ASI. An ASI would do something much more grand and deadly -- grey goo, engineered plagues, hacking the enemy's ICBMs, or something else, perhaps something which no human has even thought of yet. Pretending that our current weapon systems will be anything more than sticks and stones in the face of ASI is pure pandering to the military-industrial complex.
Likewise, I do not think it is wise to "carefully integrate AI into military command and control". As much as I distrust human decision making with regard to ICBMs, I will take them over ChatGPT any day of the week.
--
If we end up with MAIM, here is how I think it might work:
Of course, there are some problems with that approach:
Unlike Word, the human genome is self-hosting. That means that it is paying fair and square for any complexity advantage it might have -- if Microsoft found that x86 was not expressive enough to encode Word space-efficiently, they could likewise implement more complex machinery to host it.
Of course, the core fact is that the DNA of eukaryotes looks memory-efficient compared to the bloat of Word.
There was a time when Word was shipped on floppy disks. From what I recall, it came on multiple floppies, but on the order of ten, not a thousand. With these modern CD-ROMs and DVDs, there is simply less incentive to optimize for size. People are not going to switch away from Word to LibreOffice just because the latter was only a gigabyte.
I think formally, the Kolmogorov complexity would have to be stated as the length of a description of a Turing Machine (not that this gets completely rid of any wiggle room).
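For reference, the standard formalization fixes a universal Turing machine U and defines the complexity of a string as the length of its shortest program; the invariance theorem says that switching to another universal machine V only changes this by a machine-dependent constant, which is precisely the remaining wiggle room:

$$K_U(x) = \min\{\, |p| \;:\; U(p) = x \,\}, \qquad K_U(x) \le K_V(x) + c_{U,V}$$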
Of course, TMs do not offer a great gaming experience.
"The operating system and the hardware" is certainly an upper bound, but also quite certainly to be overkill.
Your floating point unit or your network stack are not going to be very busy while you play tetris.
If you cut it down to the essentials (getting rid of things like scores which have to be displayed as characters, or background graphics or music), you have a 2D grid in which you need to toggle fields, which is isomorphic to a matrix display. I don't think that having access to Boost or JCL or the Python ecosystem is going to help you much in terms of writing a shorter program than you would need for a bit-serial processor. And these things can be crazy small -- this one takes about 200 LUTs and FFs. If we can agree that a universal logic gate is a reasonable primitive which would be understandable to any technological civilization, then we are talking on the order of 1k or 2k logic gates here. Specifying that on a circuit diagram level is not going to set you back by more than 10 kB.
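As a sanity check of that last number, here is a back-of-the-envelope calculation; the gate count, the choice of 2-input NAND as the universal gate, and the encoding are all my assumptions:

```python
# Back-of-the-envelope check of the "not more than 10 kB" claim.
# Assumption: the circuit is a netlist of 2-input NAND gates (a universal
# gate), and each gate is specified just by the indices of its two inputs.
import math

n_gates = 2048                    # generous, given ~200 LUTs/FFs for the core
n_inputs = 16                     # assumed external inputs (clock, buttons, ...)
n_signals = n_gates + n_inputs    # every gate output or external input is a signal

bits_per_index = math.ceil(math.log2(n_signals))  # 12 bits here
bits_per_gate = 2 * bits_per_index                # two input indices per NAND
total_bytes = n_gates * bits_per_gate / 8

print(bits_per_index, total_bytes)  # 12 bits per index, ~6 kB for the whole netlist
```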
So while you are technically correct that there is some overhead, I think directionally Malmesbury is correct in that the binary file makes for a reasonable estimate of the information content, while adding the size of the OS (sometimes multiple floppy disks, these days!) will lead to a much worse estimate.
Agreed.
If the authors claim that adding randomness to the territory in classical mechanics requires making it more complex, they should also notice that for quantum mechanics, removing the probability from the territory (as Bohmian mechanics does) tends to make the theories more complex.
Also, QM is not a weird edge case to be discarded at leisure; it is, to the best of our knowledge, a fundamental aspect of what we call reality. Sidelining it is like arguing "any substance can be divided into arbitrarily small portions" -- sure, as far as everyday objects such as a bottle of water are concerned, this is true to some very good approximation, but it will not convince anyone.
Also, I am not sure that for the many-worlds interpretation, the probability of observing spin-up when looking at a mixed state is something which firmly lies in the map. From what I can tell, what happens in MWI is that the observer becomes entangled with the mixed state. From the point of view of the observer, they find themselves either in the state where they observed spin-up or in the one where they observed spin-down. But describing their world model before the observation as "I will find myself either in spin-up-world or spin-down-world, and my uncertainty about which of these it will be is subjective" seems to grossly misrepresent that model. They would say: "One copy of myself will find itself in spin-down-world and one in spin-up-world, and if I were to repeat this experiment to establish a frequentist probability, I would find that the probability of each outcome is given by the squared modulus of the coefficient of that part of the wave function."
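To spell out the bookkeeping (standard Born-rule notation, nothing specific to my argument): for a pre-measurement spin state

$$|\psi\rangle = \alpha\,|\uparrow\rangle + \beta\,|\downarrow\rangle, \qquad |\alpha|^2 + |\beta|^2 = 1,$$

the frequency which each copy of the observer records over many repetitions is $|\alpha|^2$ for spin-up and $|\beta|^2$ for spin-down.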
So, in my opinion, these probabilities are not purely a feature of the map in MWI either.
Okay. So from what I understand, you want to use a magnetic effect observed in plasma as a primary energy source.
Generally, a source of energy works by taking a fuel which contains energy and turning it into a less energetic waste product. For example, carbon and oxygen can be burned to form CO2. Or one can split some uranium nucleus into two fragments which are more stable and reap the energy difference as heat.
Likewise, a wind turbine will consume some of the kinetic energy of the air, and a solar panel will take energy from photons. For a fusion reactor, you gain energy because you turn lighter nuclei (hydrogen isotopes or helium-3) into helium-4, which is extraordinarily stable.
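For concreteness, the textbook numbers for those two examples (standard values, not tied to the proposal under discussion):

$$\mathrm{C} + \mathrm{O_2} \rightarrow \mathrm{CO_2} + 394\ \mathrm{kJ/mol}, \qquad {}^{2}\mathrm{H} + {}^{3}\mathrm{H} \rightarrow {}^{4}\mathrm{He} + n + 17.6\ \mathrm{MeV}$$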
Thus, my simple question: where does the energy for your invention come from? "The plasma" is not a sufficient answer, because on Earth we encounter plasma too rarely to exploit it; in fusion reactor designs, it is generated painstakingly at a huge energy cost.
Something goes into your reactor, and something comes out of it. If what comes out is the same as what went in, then it can hardly have given up energy in your reactor.
Seconded. Also, in the second picture, that line is missing, so it seems that it is just Zvi complaining about the "win probability"?
My guess is that the numbers (sans the weird negative sign) might indicate the returns in percent for betting on either team. Then, if the odds were really 50:50 and the bookmaker was not taking a cut, they should be 200 each? So 160 would be fair if the first team had a win probability of 0.625, while 125 would be fair if the other team had a win probability of 0.8. Of course, these add up to more than one, which is to be expected: the bookmaker wants to make money. If they added up to 1.1, that would (from my gut feeling) be a ten percent cut for the bookie. Here, it looks like the bookie is taking almost a third of the money? Why would anyone play at those odds? I find it hard to imagine that anyone can outperform the wisdom of the crowds by a third. The only reason to bet here would be if you knew the outcome beforehand because you had rigged the game.
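Here is that arithmetic spelled out (under my reading of 160 and 125 as the total return in percent of the stake, which the edit below notes was not quite right):

```python
# Implied probabilities and bookmaker take, under my (possibly wrong, see the
# edit below) reading of 160 and 125 as the total return in percent of stake.

returns_pct = [160, 125]

# A bet is fair if p * return == stake, so the implied probability is 100/return.
implied_p = [100 / r for r in returns_pct]   # [0.625, 0.8]
overround = sum(implied_p)                   # 1.425, i.e. well above 1
bookie_take = 1 - 1 / overround              # ~0.298, almost a third

print(implied_p, overround, round(bookie_take, 3))
```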
This is all hypothetical; for all I know, the odds in sports betting are customarily stated as the returns on a $70 bet or whatever.
Edit: seems I was not very correct in my guess.
There is also a quantum version of that puzzle.
I have two identical particles of non-zero spin in identical states (except possibly for the spin direction). One of them is spin up. What is the probability that both of them are spin up?
For fermions, that probability is zero, of course. Pauli exclusion principle.
For bosons, ...
... the key insight is that you cannot distinguish them. The possible wave functions are either (spin-up, spin-up) or (spin-up, spin-down) = (spin-down, spin-up). Hence, you get p = 1/2. (From this, we can conclude that boys (p = 1/3) are made up of 2/3 bosons and 1/3 fermions.)
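Spelled out (treating the two allowed symmetric states as equally likely, which is the same modelling assumption as in the classical boys-and-girls version):

$$\text{bosons:}\quad |\uparrow\uparrow\rangle \ \text{ or } \ \tfrac{1}{\sqrt{2}}\left(|\uparrow\downarrow\rangle + |\downarrow\uparrow\rangle\right) \;\Rightarrow\; p = \tfrac{1}{2}, \qquad \text{mix:}\quad x\cdot\tfrac{1}{2} + (1-x)\cdot 0 = \tfrac{1}{3} \;\Rightarrow\; x = \tfrac{2}{3}$$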
Let us assume that the utility of personal wealth is logarithmic, which is intuitive enough: $10k matters a lot more to you if you are broke than if your net worth is $1M.
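As a quick illustration of that assumption (the wealth figures are made up; utility is in arbitrary log-dollar units):

```python
# Marginal value of an extra $10k under log utility, for a nearly-broke person
# vs. a millionaire. Wealth figures are made up for illustration.
from math import log

def utility_gain(wealth: float, windfall: float = 10_000) -> float:
    return log(wealth + windfall) - log(wealth)

print(utility_gain(1_000))      # ~2.40: life-changing
print(utility_gain(1_000_000))  # ~0.00995: barely noticeable
```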
Then by your definition of exploitation, every transaction where a poor person pays a rich person and increases the rich person's wealth in the process is exploitative. The worker surely needs the rent money more than the landlord, so the landlord should cut the rent to the point where he does not make a profit. Likewise the physician providing aid to the poor, or the CEO selling smartphones to the middle class.
Classifying most of capitalism as "exploitative" (per se) is of course not helpful. You can add epicycles to your definition by considering money rather than its subjective utilitarian value, but under that definition all financial transactions would be 'fair': the poor person values $1k more than his kidney, while the rich person values the kidney more than $1k, so they trade and everyone is better off (even though we would likely consider this trade exploitative).
More generally, to have a definition of fairness, we need some unit of inter-personal utility. Consider the Ultimatum game. If both participants are of similar wealth and with similar needs and have worked a similar amount for the gains, then a 50:50 split seems fair. But if we don't have a common frame of utility, perhaps because one of them is a peasant and the other is a feudal king, then objectively determining what is a fair split seems impossible.
I am not sure what you are suggesting. Are you calling for sanctioning private "hack-backs" against foreign organizations and individuals suspected of ransomware attacks, or, more generally, for sanctioning attacks against any organization residing in a state which is suspected of fostering ransomware groups? The analogy to letters of marque would suggest the latter -- it did not matter if you were a peaceful merchant; if you were flying the flag of the enemy, you were fair game.
Either one seems to be a terrible idea.
First, attribution of IT attacks is notoriously hard. Sure, in many cases you can see which IP addresses the attacker used, but chances are that they did not use their personal DSL to attack you directly, but instead went through another victim whose only crime is that it, too, had insufficient network security.
Second, I do not really see how hacking ransomware groups backed by nation state actors would lead to their arrests. Redirecting plane flights to your own sovereign soil so you can arrest someone is a tactic which was used by several states, but I do not think that it is either feasible or wise to try this purely through software exploits. Nor do I think it is wise to normalize this behavior.
Third, ransomware groups have a much smaller attack surface than their targets. Your Texan municipality likely ran a bunch of services on a shoestring budget, probably based on a Microsoft stack, possibly end-of-life. By contrast, a successful ransomware group will have a security budget per employee that is orders of magnitude higher. They will likely not use Outlook+Office+AD in their network, and might not even use email for internal communication. They will also not run a bunch of half-baked online services to get an appointment at the DMV or whatever. Sure, the NSA might get into their network, but they would also be reluctant to burn through their precious hoarded 0-days to do so.
--
My counter-proposal would be to just criminalize paying the ransom. At the end of the day, ransomware is a coordination problem. If nobody ever paid, it would not be a thing. But once you have been hit with it, it is generally cheaper to pay than to accept an extended outage (at least if the attacker calculated their ransom correctly). By making the expected cost of paying the ransom higher (e.g. through a corporate death penalty or punitive taxes if caught), one can adjust the incentives to account for the negative externalities.
More fundamentally, there is the victims' narrative that IT security is impossible in the face of ransomware groups. "We bought Advanced Endpoint Snake Oil 2025 Enterprise Edition and sprinkled it all over our AD, and still we got hacked. This proves that when facing an adversary with a lot of criminal energy, even state-of-the-art cybersecurity is insufficient." Bollocks. If you had spent beforehand the kind of money you suddenly found when it came to paying the ransom, you likely would not have been hacked.