Long-time lurker (c. 2013), recent poster. I also write on the EA Forum.
I'm reminded of this chart I first came across in this AI Impacts page. Caption:
Figure 2: Illustration of the phenomenon in which the first entirely ‘human-level’ system is substantially superhuman on most axes. (Image from Superintelligence Reading Group)
I'll also echo sunwillrise's comment in being partial to Steven Byrnes's take on AGI.
I wonder to what extent these impressions by Some Guy over at Extelligence are shared by others:
Some of this may be biased by my own background, but basically I’ve always found mathematicians and physicists to be the most humble and approachable people in all of the sciences. Second are chemists and material scientists, who can be somewhat cold but are always honest and straightforward because they have nothing to prove. But biologists? Man. The descriptive sciences have a chip on their shoulders, and while social sciences are usually full of people who make up flowery language to cover up for that, biology is close enough to the harder sciences that it has a chip on its shoulder. Once you move away from the necessary honesty of mathematical and atomic mechanism, people can become savage assholes. The rudest people I have ever met in my life were biologists.
So, there are my biases laid out on the table. Scientists who aren’t very good at math tend to be dicks because they’re self-conscious about it.
(I can think of plenty of personal counterexamples.)
Anyway, when I picture what the world would look like if I moved up the intelligence scale, the thoughts that exercise outputs sound like your posts. Most people are basically cats; if you expect to be treated like an adult, you have to be trying to have a counterfactual impact.
This tangentially reminded me of this quote about John von Neumann by Edward Teller, himself a bright chap (father of the hydrogen bomb and all that):
von Neumann would carry on a conversation with my 3-year-old son, and the two of them would talk as equals, and I sometimes wondered if he used the same principle when he talked to the rest of us.
That said, in John Wentworth's case moral agency/ambition/tsuyoku naritai seems more key than intelligence; cf. what he said earlier:
What made it hurt wasn’t that they were stupid; this was a college where the median student got a perfect score on their math SATs, they were plenty smart. They just… hadn’t put in the effort. ... The disappointment came from seeing what they could have been, and seeing that they didn’t even try for it. ...
I think a core factor here is something like ambition or growth mindset. When I have shortcomings, I view them as shortcomings to be fixed or at least mitigated, not as part of my identity or as a subject for sympathy. On the positive side, I have goals and am constantly growing to better achieve them. Tsuyoku naritai. I see people who lack that attitude, who don’t even really want to grow stronger, and when empathy causes the suspension of disbelief to drop… that’s when I feel disgust or disappointment in my so-called fellow humans. Because if I were in their shoes, I would feel disgust or disappointment in myself.
So I think you're misdiagnosing.
Your comment reminded me tangentially of Mario Gabriele's now-paywalled essay Compassion is the enemy, published in mid-2020 during the height of BLM, which had this passage I saved:
Do you remember Alan Kurdi? In September 2015, a harrowing photograph circulated. A three-year-old boy, Syrian, drowned and face down on a Turkish beach. He and his family had been trying to reach Greece.
For a brief moment, the world turned their full attention to the toddler and the atrocities he represented. Never mind that 250K had died already by then, now was the time to pay real attention and take action. The hashtag #RefugeesWelcome circulated, donations to the Red Cross spiked, and both the German and Austrian governments decided to open their borders.
And then? Then interest faded, donations dwindled, and governments turned their attention elsewhere. In short, our compassion was redirected or exhausted. The latter is almost the inevitable result of social media with its heightened emotion and endless feed. Meanwhile, bombs continued to fall on Aleppo, and Syrians continued to flee, and boats still floundered, and children drowned in the same sea. Just a year after his death, Alan's father said, "My Alan died for nothing.”
Compassion is the enemy. In its vividness and high-color, in the delight it gives its exponent, in its brevity, compassion impedes enduring change. It is the least we should be able to provide as humans, the bare minimum, and as such, unworthy of celebration or mention.
The entirety of the corporate world was awash with sentiments of contrition... What meaning can be gleaned from such platitudes? Without accompanying action — enduring commitments that go beyond one-time donations or temporary offers to meet with black founders — this is rhetoric with the nutritional value of paper. The most charitable appraisal would be that these expressions are well-intentioned banalities. At worst, it is craven opportunism, marketing masquerading as conscience. Do we believe that capital allocators will make meaningful changes to their practices without additional pressure, without building systems that run without the intervention of compassion?
As we think of how we can move beyond empty sentiment, that, I would posit, is the true challenge. Not feeling more or undertaking some token, episodic deed, but constructing systems such that we don't need to rely on the vagaries of compassion to do good. As the author, Chinua Achebe, said, "While we do our good works let us not forget that the real solution lies in a world in which charity will have become unnecessary.”
Just as technology has had a role to play in magnifying our flaws, so too should it be part of correcting them. Payment processors allow consumers to donate to charities on a recurring basis, avoiding the mental overhead required to recommit to a cause each month. Video games and virtual reality can reduce prejudice. Racist policing algorithms, like those previously used by the LAPD, can be amended or junked. Just as W.E.B. Du Bois once did, data visualizations can be used to convey the scale of a problem in new, evocative ways.
Social media is a reflection of who we are, flaws included. But technology can be so much more, correcting for the bugs in our human software. My hope is that we may use it to find a road beyond compassion, eschewing the narrowness of Mother Teresa's words. Kierkegaard described a poet as someone whose mouth was so formed that when they cried in pain, the world heard only music. We must recognize what it is to be a human: to have a pair of ears shaped such that we can hear the cries of a single person, yet be deaf to the cacophony of the many, still suffering.
I just learned about the idea of "effectual thinking" from Cedric Chin's recent newsletter issue. He notes, counterintuitively to me, that it's the opposite of causal thinking, and yet it's the one thing in common in all the successful case studies he could find in business:
The only answer that fits with the twenty-seven odd cases that we’ve published on the Idea Maze is a thing called effectual thinking.
Effectual thinking is a concept proposed by Professor Saras D Sarasvathy in a 2001 paper — a paper that was also, hilariously, described as “the first good paper I’ve seen” by Vinod Khosla, the renowned venture capitalist and cofounder of Sun Microsystems.
Sarasvathy didn’t pull this theory out of thin air: she hunted down a list of entrepreneurs who, at the time of her paper, represented a near-complete list of experienced founders (of enduring companies) in the US from the period of 1960 to 1985.
These folk were:
- Repeat entrepreneurs (minimum three ventures, though the average number of companies started in her sample was seven new ventures)
- Successful and serious businesspeople (at least one of those ventures had gone public, and they had spent a minimum of 10 years with that company)
- Filtered from a list of the ‘top 100 most successful’ entrepreneurs compiled by VC David Silver in 1985, and a list of Entrepreneur of the Year awards compiled by Ernst & Young.
We can be quite sure these entrepreneurs are ‘expert’, by most definitions of the term.
Sarasvathy found that all of them demonstrated a similar style of thinking when given a new venture brainstorming task. She named this style of thinking ‘effectual thinking’.
So what is effectual thinking? In short, effectual thinking is the opposite of causal thinking.
Causal thinking is what you learn in business school. It starts with a fixed goal and works backwards to figure out how to get there — like deciding to cook carbonara for dinner, looking at your pantry, then working backwards to figure out the ingredients needed, and shopping for those missing ingredients.
Effectual thinking does the opposite: it starts with opening your fridge and asking, “What can I make with what I find here?” You work forwards from available resources, not backwards from a predetermined plan.
Entrepreneurs who practice this effectual thinking follow three principles, and these are the rules that are actually useful for navigating the Idea Maze:
- You structure your life to make survivable bets.
  Successful entrepreneurs don’t go all-in on a single idea. They set themselves up — financially, emotionally, and logistically — to take repeatable, reasonable risks. They aim to stay in the game long enough for something to work.
- You take action instead of getting stuck analyzing.
  In the earliest stages of an idea, competitive analysis is misleading. If a market gap could be spotted through research alone, it’s probably not that valuable (and would likely be quickly exploited by an established competitor). Instead, experienced entrepreneurs take action and attempt to cut deals with relevant people — customers, partners, collaborators. Action generates real information.
- You treat entrepreneurship as improvisation.
  There is no master plan. There’s no strategy that will guarantee success. You take action, learn from what happens, and adapt. This is a game that rewards curiosity, flexibility, and sheer staying power.
Jack Clark's most recent issue of Import AI mentioned AI security startup XBOW's "fully autonomous AI-driven penetration tester" (also called XBOW), which topped HackerOne:
AI pentesting systems out-compete humans:
…Automated pentesting…
AI security startup XBOW recently obtained the top rank on HackerOne with an autonomous penetration tester - a world first. "XBOW is a fully autonomous AI-driven penetration tester," the company writes. "It requires no human input, operates much like a human pentester, but can scale rapidly, completing comprehensive penetration tests in just a few hours."
What they did: As part of its R&D process, XBOW deployed its automated pen tester onto the HackerOne platform, which is a kind of bug bounty for hire system. "Competing alongside thousands of human researchers, XBOW climbed to the top position in the US ranking," the company writes. "XBOW identified a full spectrum of vulnerabilities including: Remote Code Execution, SQL Injection, XML External Entities (XXE), Path Traversal, Server-Side Request Forgery (SSRF), Cross-Site Scripting, Information Disclosures, Cache Poisoning, Secret exposure, and more."
Why this matters - automated security for the cat and mouse world: Over the coming years the offense-defense balance in cybersecurity might change due to the arrival of highly capable AI hacking agents as well as AI defending agents. This early XBOW result is a sign that we can already develop helpful pentesting systems which are competitive with economically incentivized humans.
Read more: The road to Top 1: How XBOW did it (Xbow, blog).
I have no knowledge of pentesting at all, but was immediately skeptical of XBOW's real-world utility on account of your post.
You reminded me of this part of Rudolf's story:
Big Tech headcounts grow, as they hire more people both to flatter the egos of managers—they are drowning in cash anyway—and in particular many product managers to oversee the AI codegen agents that are unleashing a massive series of new products now that they're mostly no longer constrained by development taking lots of time. Internal company office politics becomes even more of a rate-limiter: if teams are functional, the AI codegen boost means more products shipped, whereas if teams are not, the gains are eaten up by employees working less or by factional fights within companies.
Kishore Mahbubani, Singaporean diplomat and former president of the UN Security Council, studied philosophy full-time as an undergraduate in the late 60s. Recounting that period in his autobiography Living the Asian Century, he wrote:
For the final examinations, which I took at the end of my fourth year, our degree was determined by how well we did in eight three-hour examinations. In one of the papers, we had to answer a single question. The one question I chose to answer over three hours was “Can a stone feel pain?”
From my exam results, I gained a first-class honours degree, which was rare in the Department of Philosophy. Since our final examination papers were also sent to Peter Winch, one of the leading scholars on Wittgenstein in the world, I felt honoured that my first-class honours had been endorsed by him.
Wittgenstein was Mahbubani's favorite philosopher; back then, “like all other philosophy departments in the Anglo-Saxon world, our department had been captured by the linguistic-analytic school of philosophy that Wittgenstein had launched with his Philosophical Investigations”.
At risk of revealing possible narrow-mindedness, a three-hour free-response exam on the question “Can a stone feel pain?” makes me think of Luke's “Philosophy: A Diseased Discipline”. The questions Richard Ngo answered in his All Souls Fellowship exam got wacky at times, but never “can a stone feel pain?”-wacky.
Mahbubani continued:
... I could write eight pages over three hours in response to the question “Can a stone feel pain?” because Wittgenstein’s ideas allowed me to deconstruct the meanings of the words in this apparently simple question.
The process of focusing on the language we use came in very handy when I joined the Ministry of Foreign Affairs (MFA) in April 1971 and embarked on my long career in the study of geopolitics. Our understanding of “objective reality” is clearly conditioned by the language we use. The first major war that I had to analyse as a Foreign Service officer was the Vietnam War. The “facts” were clear: soldiers from North Vietnam were fighting soldiers from the United States. We could see this. But what were they fighting about? The US leaders, Johnson and Nixon, had no doubt: they were fighting against a global push by the Soviet Union and China to expand communism. But the North Vietnamese soldiers also had no doubt: they were fighting for “national liberation” from the “imperialist” US forces. So who was right? What is the truth here? Adding to the elusiveness of an absolute “truth” is the fact that fifty years after the United States withdrew ignominiously from Vietnam, one of the best friends of the United States in Southeast Asia will be the Communist Party of North Vietnam—the United States wants to upgrade its ties with Vietnam to a strategic partnership.
I find myself completely unpersuaded by his applied example here, but I suppose I'm just the wrong audience...
The discord invite has expired, can we get an updated one?