If anyone wants to have a voice chat with me about a topic that I'm interested in (see my recent post/comment history to get a sense), please contact me via PM.
My main "claims to fame":
Isn't living in cities itself driven at least in part by memetics (e.g., glamour/appeal of city living shown on TV/movies)? Certainly memes can cause people to not live in cities, e.g., the Amish or the meme of moving out to the suburbs to raise kids.
Oops, thought I could trust a "reasoning" AI (Gemini 3 Pro) for such a simple-seeming question. Had it redo the estimate taking your comment into account, and it came up with 1m assuming N(90,15) globally, which still felt wrong, so I had it redo the estimate using country-level data, and it ended up with 7.5m total: 6.1m in East Asia, 1.1m in the West, and 0.3m in RoW. This assumed N(105,15) for East Asia (so not quite country-level data), which Opus and GPT point out might be an overestimate, since China is a bit lower than this. Had them redo the EA estimate using country-level data, and they came up with 4.5m and 5.5m for EA respectively (using N(103,15) and N(104,15) for China).
This is actually a significant update for my mental world model, as I didn't previously realize that China had more than half of the world's population of IQ>145 people.
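For concreteness, here's a minimal sketch of the normal-tail arithmetic behind these estimates (the population figures are rough assumptions on my part, not necessarily the models' exact inputs):

```python
# Sketch of the tail-probability arithmetic; population figures are
# rough assumptions for illustration, not the models' exact inputs.
from math import erfc, sqrt

def frac_above(threshold: float, mean: float, sd: float = 15.0) -> float:
    """P(X > threshold) for X ~ N(mean, sd)."""
    return 0.5 * erfc((threshold - mean) / (sd * sqrt(2)))

WORLD_POP = 8.1e9      # assumed world population
EAST_ASIA_POP = 1.6e9  # assumed East Asian population

print(WORLD_POP * frac_above(145, 100))      # naive N(100,15): ~11 million
print(WORLD_POP * frac_above(145, 90))       # global N(90,15): ~1 million
print(EAST_ASIA_POP * frac_above(145, 105))  # East Asia N(105,15): ~6 million
```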
It looks like Part 1 was never cross-posted to LW. Please fix this?
Assuming I have an IQ of 145, there are ~11 million people on Earth smarter than me (i.e., with a higher IQ), but almost none of them, including e.g. Terence Tao, are trying to do something about AI x-risk even at this late date. Updating on this has to move one directionally away from HIA, right, relative to the prior one would have had 10 years ago?
10 years ago you could say that those ~11m had just never thought about AI, but today the conclusion seemingly has to be that strategic competence is surprisingly little correlated with, or not much scaled by, intelligence. If true, this would mean that HIA wouldn't do much for the key bottleneck of humanity's strategic incompetence[1], but could easily make things worse by creating more and harder strategic problems.
Yeah, so I think this is probably my most salient crux at this point: what does the "strategic competence landscape" look like after significant HIA has occurred?
A less-spooky solution might involve the principal simply asking the agent to write a comprehensive guide to building a truly friendly AGI which would be aligned with human values in a way that was robustly good, then follow that guide (with the corrigible agent’s help) to produce an aligned, superintelligent sovereign.
Please take a look at A Conflict Between AI Alignment and Philosophical Competence (especially the last paragraph, about corrigibility), which is in part a reaction to this.
I'm not surprised by this; my sense is that it's usually young people and outsiders who pioneer new fields. Older people are just so much more shaped by existing paradigms, and have so much more to lose, that it outweighs the benefits of their expertise and resources.
All of the fields that come to my mind (cryptography, theory of computation, algorithmic information theory, decision theory, game theory) were founded by much more established researchers. (But on reflection these all differ from AI safety by being fairly narrow and technical/mathematical, at least at their founding.) Which fields are you thinking of, that were founded by younger people and outsiders?
Perplexity AI Pro (with GPT-5.1-Thinking)'s answer to "Who were the founders of academic cryptography research as a field and what were their jobs at the time?"
There isn’t a single universally agreed-on “founder” of academic cryptography. Instead, a small group of researchers in the 1940s–1970s are usually credited with turning cryptography into an open, university-based research field.
Histories of the subject generally describe a progression: Claude Shannon's mathematical theory of secrecy in the 1940s, followed by the public-key revolution of the 1970s and early 1980s that created today's academic cryptography community. Shannon's work was foundational, but it did not yet create an academic field in the modern sense; that came later with Whitfield Diffie, Martin Hellman, Ralph Merkle, and the inventors of RSA, whose work is often described as pioneering "modern" cryptography and has been recognized by ACM Turing Awards for cryptography pioneers.
Claude Shannon is widely regarded as the founder of mathematical cryptography; in the 1940s he worked at Bell Labs as a researcher, where he developed the information-theoretic framework for secrecy systems that later influenced public-key cryptography. At roughly the same time and into the 1960s, cryptography research also existed in industry—most notably at IBM, where Horst Feistel headed an internal cryptography research group that designed ciphers such as Lucifer, which evolved into the Data Encryption Standard (DES), but this work was largely not yet an open academic discipline.
Most accounts of "academic cryptography as a field" point first to the group around Stanford in the 1970s, whose work on public-key ideas made cryptography a mainstream research topic in universities. In that period, the key people and their roles were approximately:
- Whitfield Diffie: graduate student and research assistant working with Hellman at Stanford.
- Martin Hellman: professor of electrical engineering at Stanford University.
- Ralph Merkle: graduate student at UC Berkeley (later a doctoral student under Hellman at Stanford).
These three are often singled out as the core founders of modern, open, university-based cryptography research because their work shifted cryptography from a mostly classified or industrial activity into a widely studied academic topic.
Immediately after the Diffie–Hellman–Merkle work, Ronald Rivest, Adi Shamir, and Leonard Adleman introduced RSA while they were all at MIT, with Rivest and Shamir as faculty members and Adleman as a researcher/graduate student; this solidified public‑key cryptography as a central academic topic and led to sustained university research groups in cryptography. Their roles at MIT and the impact of RSA meant that, alongside the Stanford group, they are commonly treated as co‑founders of modern academic cryptography.
In the 1980s, researchers such as Manuel Blum, Shafi Goldwasser, and Silvio Micali (then faculty and graduate students at Berkeley and MIT) developed the complexity-theoretic foundations of cryptography—zero-knowledge proofs, formal security definitions, and provable security—which are why they are frequently listed among the leading theoreticians and foundational figures of the field in modern cryptographer lists.
"What about theory of computation, algorithmic information theory, decision theory, game theory? You can be more concise with these."
Each of these fields has several widely recognized “founders,” mostly mathematicians, logicians, and economists working in universities or research institutes. Below is a concise mapping from field → key founders → what they were professionally doing at the time.
Modern theory of computation is usually traced to Alonzo Church and Alan Turing in the 1930s, along with related work by Kurt Gödel. Church was a logician and professor of mathematics at Princeton University when he introduced the lambda calculus and the Church–Turing thesis–style view of effective calculability. Turing was a young mathematician at Cambridge University when he published his 1936 paper defining Turing machines and formalizing computation, later becoming widely regarded as a founder of theoretical computer science.
Algorithmic information theory was founded by Ray Solomonoff, an American mathematician who, working largely as an independent researcher on algorithmic probability and inductive inference, first presented the core ideas at a Caltech conference and in a 1960 report on a "General Theory of Inductive Inference." It was then independently and almost simultaneously redeveloped by Andrey Kolmogorov, a leading Soviet mathematician, and Gregory Chaitin, a young mathematician working in industrial research, both of whom formalized similar notions of algorithmic complexity.
Modern game theory is generally credited to John von Neumann and Oskar Morgenstern. Von Neumann, a Hungarian mathematician working in pure and applied mathematics (including at Princeton and later the Institute for Advanced Study), introduced the foundational minimax theorem for zero-sum games and then co-authored Theory of Games and Economic Behavior in 1944 with Morgenstern, who was at the time an economist and professor (later at Princeton), thereby establishing game theory as a major branch of economics and social science.
Normative decision theory in economics is typically traced to the von Neumann–Morgenstern expected-utility framework, developed in the same 1944 book that founded game theory. In that work, von Neumann (mathematician) and Morgenstern (economist) provided an axiomatic treatment of rational choice under uncertainty, which is widely treated as the foundational formulation of modern decision theory in economics and statistics.
Though I guess the thing I'm pointing at could also be summarized as "why hasn't someone created a new paradigm of AI safety in the last decade?"
Creating a new paradigm within an existing field seems different enough from creating a new field that the important factors might differ a lot. Also, by asking this question you seem to be assuming that someone should have created a new paradigm of AI safety in the last decade, which a lot of people would presumably disagree with (because they either think the existing paradigms are good enough, or that this is just too hard technically). (Basically I'm suggesting it may be hard to interest people in this question until someone has created such a paradigm, at which point you can go back and ask "why didn't someone do this earlier?")
Do you believe in a quantum multiverse, or a spatially infinite universe (beyond the observable universe)? You can get a similar conclusion with either of these (which are Tegmark Levels 3 and 1, respectively).
Try applying:
Also recall that we're in a tiny tiny corner of Reality (whatever Tegmark level it is, it's probably much larger than what we can see), and it's pretty unclear how to update EU(Reality | human history).
After reading this article by a human historian (Bill Black), I think there are a number of inaccuracies in Claude's account above, but the key point I wanted to verify is that Truman's reaction happened after just that one sentence by Oppenheimer (which in my mind seems like an appropriate expression of reflection/remorse, not being a drama queen, assuming he didn't do or say anything else "dramatic"), and that does seem to be true.
The author's conclusions, which seem right to me:
He, the president, dropped the bomb, not Oppenheimer. How dare this scientist — this government employee — assume the guilt for the greatest weapon ever used in human history? How dare he make himself the hero, albeit a tragic one?
I think Nolan got this right — this was what really annoyed Truman about Oppenheimer’s comment. By assuming guilt for the bomb, Oppenheimer was taking credit for it. And Truman resented this. He wanted the credit for dropping the bomb and saving American lives, whatever bloodguilt that may have entailed.
"Utility" literally means usefulness, in other words instrumental value, but in decision theory and related fields like economics and AI alignment, it (as part of "utility function") is now associated with terminal/intrinsic value, almost the opposite thing (apparently through some quite convoluted history). Somehow this irony only occurred to me ~3 decades after learning about utility functions.