Richard Hollerith. 15 miles north of San Francisco. hruvulum@gmail.com
My probability that AI research will end all human life is .92. It went up drastically when Eliezer started going public with his pessimistic assessment in April 2022. Until then, my confidence in MIRI (and my knowledge that MIRI has enough funding to employ many researchers) had been keeping my probability down to about .4. (I am glad I found out about Eliezer's assessment.)
Currently I am willing to meet with almost anyone on the subject of AI extinction risk.
Last updated 26 Sep 2023.
As far as I can tell, Eliezer and Nate rely on no results of experiments done on AI models (REAIMs) to conclude that superintelligence is dangerous. And if some clever young person (or, more realistically, a series of clever young people, each building on the work of their predecessors) comes up with a good plan for creating an aligned superintelligence (which probably won't happen any year soon), that plan will probably also rely on no REAIMs, nor will Eliezer and Nate require any REAIMs to conclude that the plan is safe.
Experiments and tests are very useful; human engineers and designers facing sufficiently difficult challenges will usually choose to use them; but sufficiently capable people are far from helpless even in domains in which they cannot do experiments (because the experiments would be too risky or because they would rely on GPUs and data centers that have been banned by international agreement).
For more information, a person could do worse than read Eliezer's "Einstein's Arrogance".
Before Christianity was discredited, it acted as a sort of shared lens through which the value of any proposed course of action could be evaluated. (I'm limiting my universe of discourse to Western society here.) I'm tempted to call such a lens an "ideological commitment" (where the "commitment" is a commitment to view everything that happens to you through the lens of the ideology -- or at least a habit of doing so).
Committing to an ideology is one of the most powerful things a person can do to free himself from anxiety (because the commitment shifts his focus from his impotent, vulnerable self to something much less vulnerable and much longer-lived). Also, people who share a commitment to the same ideology tend to work together effectively: many times, a small fraction of an organization's employees who share a commitment to the same ideology have taken the organization over by using loyalty to the ideology to decide whom to hire and whom to promote. They've also taken over whole countries in a few cases.
The trouble with reducing the prestige and the influence of Christianity even now in 2025 is that the ideologies that have rushed in to fill the void (in the availability of ways to reduce personal anxiety and of ways to coordinate large groups of people) have had IMHO much worse effects than Christianity.
You, Ben, tend to think that society should "eat the costs and spend the decades/centuries required to build better things" than Christianity. The huge problem with that is the extreme deadliness of one of the ideologies that have rushed in to fill the void caused by the discrediting of Christianity: namely, the one (usually referred to vaguely as "progress" or "innovation") that views every personal, organizational, and political decision through the lens of which decision best advances or accelerates science and technology.
In trying to get frontier AI research stopped or paused for a few decades, we are facing off against not only trillions of dollars in economic / profit incentives, but also an ideology, and ideologies (including older ideologies like Christianity) have proven to be fierce opponents in the past.
Reducing the prestige and influence of Christianity will tend to increase the prestige and influence of all the other ideologies, including the one (already much more popular than I would prefer) that we can expect to offer determined, sustained opposition to anyone trying to stop or long-pause AI.
the importance of drones, I think, is not going to go down thanks to AI.
I agree. What I tried to say, though, is that my guess is that for drones to stay as effective as they currently are for another 5 years would require AI capable enough to transform so many aspects of society that trying to project out that far from 2025 becomes futile.
When you imagine a force overwhelming a position using very many drones, are you imagining one human per drone or are you imagining most of the drones (or more precisely, most of the drone flying time in "drone hours") being flown by AI?
I would imagine that by the time a drone doesn't need a human operator for most of the time it is in combat, lots of other things about war will have changed, e.g., whether an infantry soldier is obsolete.
I agree that multirotor helicopter drones have fundamentally transformed the war in Ukraine.
I am willing to believe that 70-90% of Russian casualties are caused by these weapons, but the fraction of Ukrainian casualties caused by them will be significantly lower, because Russia is less constrained in its supply of artillery shells (even though Russia has been innovating furiously with drones). "While estimates vary, a common figure cited is a 5-to-1 or even higher ratio of shells fired by Russian forces compared to Ukrainian forces in some areas" (Google Gemini, which has a knowledge cutoff date of Jan 2025).
Relevant to your original question (many levels of indentation ago), particularly the part about Israel's vulnerability: I still think the crux is that countermeasures will probably very significantly reduce the effectiveness of this class of drones over the next 3 years, at least in areas protected by well-funded militaries that are not in a state of civil war. The tech does not seem to have as much potential to stay very important as, for example, the fuzed artillery shell, which has remained very important for over 100 years because it is relatively difficult to develop countermeasures against it.
Aside from the sound issue already discussed, weapons makers will probably be unable to make the class of weapon we are discussing fly much faster than it already does (namely 50 to 120 km per hour), and if they do manage to increase the speed significantly, that will probably make the sound problem worse. In contrast, according to Gemini, during the terminal portion of its trajectory, a large artillery shell travels at 1080 to 2160 kilometers per hour.
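To give a rough sense of what that speed difference means for a defender's reaction time, here is a back-of-the-envelope calculation in Python. The 2 km final-approach distance is my own assumption for illustration; the speeds are the figures quoted above.

```python
# Back-of-the-envelope reaction-time comparison. The 2 km final-approach
# distance is an assumption for illustration; the speeds are the figures
# quoted above (drone: 50-120 km/h; artillery shell, terminal: 1080-2160 km/h).

APPROACH_KM = 2.0

def seconds_to_cover(speed_kmh, distance_km=APPROACH_KM):
    """Time in seconds to cover the final-approach distance at a given speed."""
    return distance_km / speed_kmh * 3600

for label, kmh in [("FPV drone, slow end", 50),
                   ("FPV drone, fast end", 120),
                   ("artillery shell, slow end", 1080),
                   ("artillery shell, fast end", 2160)]:
    print(f"{label:28s} {kmh:5d} km/h -> {seconds_to_cover(kmh):6.1f} s over {APPROACH_KM} km")
```

On these assumptions a defender gets on the order of a minute or two of warning against the drone but only a handful of seconds against the shell, which is the intuition behind the comparison.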
You might think that, because LLMs are grown without much understanding and trained only to predict human text, they cannot do anything except regurgitate human utterances. But that would be incorrect. [...]
Furthermore, AIs nowadays are not trained only to predict human-generated text. An AI-grower might give their AI sixteen tries at solving a math problem, thinking aloud in words about how to solve it; then, the “chain-of-thought” for whichever of the sixteen tries went best would get further reinforced by gradient descent, yielding what’s called a reasoning model. That’s a sort of training that can push AIs to think thoughts no human could think.
How does that conclusion follow? If a base model can only regurgitate human utterances, how does generating sixteen utterances and then reinforcing some of them lead to it… not regurgitating human utterances?
In the first sentence, Eliezer and Nate are (explicitly) stating that LLMs can say things that are not just regurgitations of human utterances.
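To make the structure of the procedure the passage describes concrete, here is a toy sketch in Python. It is not how any real lab trains a reasoning model: the "model" is just a noisy guesser and the "gradient descent" step is a stand-in. It only shows the sample-N, score, reinforce-the-best control flow.

```python
import random

# Toy sketch of the "sixteen tries, reinforce the best chain of thought"
# procedure described in the quoted passage. Everything here is a
# hypothetical stand-in: `toy_model` makes noisy numeric guesses, and
# `reinforce` shrinks the noise as a cartoon of gradient descent making
# the successful reasoning more likely.

N_TRIES = 16  # "sixteen tries at solving a math problem"

def toy_model(problem, noise):
    """Stand-in for an LLM: returns (chain_of_thought, answer)."""
    answer = problem["true_answer"] + random.gauss(0, noise)
    chain = f"Let me think... my best estimate is {answer:.2f}."
    return chain, answer

def reward(problem, answer):
    """Higher reward for answers closer to the known solution."""
    return -abs(answer - problem["true_answer"])

def reinforce(noise, best_chain):
    """Stand-in for a gradient step on the best chain of thought."""
    return noise * 0.9  # the model becomes more likely to repeat what worked

problem = {"statement": "7 * 6 = ?", "true_answer": 42.0}
noise = 10.0

for step in range(25):
    # Sample N_TRIES chains of thought, keep whichever scored best,
    # and "reinforce" it.
    tries = [toy_model(problem, noise) for _ in range(N_TRIES)]
    best_chain, best_answer = max(tries, key=lambda t: reward(problem, t[1]))
    noise = reinforce(noise, best_chain)

print(f"answer after the training-style loop: {best_answer:.2f}")
```

The point of the sketch is that the selection-then-reinforcement step optimizes for whatever the reward measures, not for matching any particular human-written text, which is one way to see why the output need not remain a regurgitation of human utterances.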
Consider though that if a defense contractor were able to reduce the noise of a helicopter enough to matter militarily, the Pentagon would have poured many billions into that contractor, especially during the Vietnam War, during which helicopters were relied on very extensively. Also, the main constraint on the use of civilian helicopters is probably complaints about the noise. And the fans at the front of an airliner's engines are responsible for producing most of the airliner's thrust, and there have been large economic incentives to make them quiet (to eliminate the copious restrictions on airliners designed to limit noise); yet although airliners have gotten quieter, they remain quite loud -- loud enough to detect and triangulate with arrays of microphones from many miles away. (The reason they are called "fans" and not "propellers" is merely the number of blades.)
Remotely-piloted gliders or glide bombs of course don't have much of a noise signature, which is why I have tried to be careful in my comments to restrict the scope of my statements to multirotor helicopter-style drones.
The host definitely says, during the first 3 minutes, that the guest (the soldier) was a drone operator or worked on a team whose purpose is to operate drones: I re-listened to that much before I wrote my description. The word "drone" was definitely used.
I want to revise my statement that "I listen to defense experts talk as a weird form of relaxation." Actually what I listen to are geopolitics experts, who often have hours-long conversations specifically about military matters. Here are some suggestions:
https://www.youtube.com/@DecodingGeopoliticsPodcast
https://www.youtube.com/@GeopoliticalFuturesGPF
https://www.youtube.com/@DAlperovitch
the interviews with John Mearsheimer on https://www.youtube.com/@DanielDavisDeepDive
This soldier spent 2 years fighting for Ukraine, including 6 months recently as an operator of FPV drones, and he is also skeptical that drones will revolutionize military affairs during the next few years. I don't recall the details of his arguments, but my recollection is that he does provide some argumentation in this interview.