I will admit I'm not an expert here. The intuition behind this is that if you grant extreme performance at mathsy things very soon, it doesn't seem unreasonable that the AIs will make some radical breakthrough in the hard sciences surprisingly soon, while still being bad at many other things. In the scenario, note that it's a "mathematical framework" (implicitly a sufficiently big advance in what we currently have such that it wins a Nobel) but not the final theory of everything, and it's explicitly mentioned that empirical data bottlenecks it.
Thanks for these speculations on the longer-term future!
while I do think Mars will be exploited eventually, I expect the moon to be first for serious robotics effort
Maybe! My vague Claude-given sense is that the Moon is surprisingly poor in important elements though.
not being the fastest amongst them all (because replicating a little better will usually only get a little advantage, not an utterly dominant one), combined with a lot of values being compatible with replicating fast, so value alignment/intent alignment matters more than you think
This is a good point! However, more intelligence in the world also means we should expect competition to be tighter, reducing the amount of slack by which you can deviate from the optimal. In general, I can see plausible abstract arguments for the long-run equilibrium being either Hansonian zero-slack Malthusian competition or absolute unalterable lock-in.
Given no nationalization of the companies has happened, and they still have large freedoms of action, it's likely that Google Deepmind, OpenAI and Anthropic have essentially supplanted the US as the legitimate government, given their monopolies on violence via robots.
I expect the US government to be competent enough to avoid being supplanted by the companies. I think politicians, for all their flaws, are pretty good at recognising a serious threat to their power. There's also only one government but several competing labs.
(Note that the scenario doesn't mention companies in the mid and late 2030s)
the fact that EA types got hired to some of the most critical positions on AI was probably fairly critical in this timeline for preventing the worst outcomes from the intelligence curse from occurring.
In this timeline, a far more important thing is the sense among the American political elite that they are a freedom-loving people and that they should act in accordance with that, and a similar sense among the Chinese political elite that they are a civilised people and that Chinese civilisational continuity is important. A few EAs in government, while good, will find it difficult to match the impact of the cultural norms that a country's leaders inherit and that circumscribe their actions.
For example: I've been reading Christopher Brown's Moral Capital recently, which looks at how opposition to slavery rose to political prominence in 1700s Britain. It claims that early strong anti-slavery attitudes were more driven by a sense that slavery was insulting to Britons' sense of themselves as a uniquely liberal people, than by arguments about slave welfare. At least in that example, the major constraint on the treatment of a powerless group of people seems to have been in large part the political elite managing its own self-image.
I built this a few months ago: https://github.com/LRudL/devcon
Definitely not production-ready and might require some "minimal configuration and tweaking" to get working.
Includes a "device constitution" that you set; if you visit a website, Claude will judge whether the page follows that written document, and if not it will block you, and the only way past it is winning a debate with it about why your website visit is in-line with your device constitution.
I found it too annoying but some of my friends liked it.
However, I think there is a group of people who over-optimize for Direction and neglect the Magnitude. Increasing Magnitude often comes with the risk of corrupting the Direction. For example, scaling fast often makes it difficult to hire only mission-aligned people, and it requires you to give voting power to investors that prioritize profit. To increase Magnitude can therefore feel risky: what if I end up working at something that is net-negative for the world? Therefore it might be easier for one's personal sanity to optimize for Direction, to do something that is unquestionably net-positive. But this is the easy way out, and if you want to have the highest expected value of your Impact, you cannot disregard Magnitude.
You talk here about an impact/direction v ambition/profit tradeoff. I've heard many other people talking about this tradeoff too. I think it's overrated; in particular, if you're constantly having to think about it, that's a bad sign.
Instead, I think the real value of doing things that are startup-like comes from:
Thanks for the heads-up, that looks very convenient. I've updated the post to link to this instead of the scraper repo on GitHub.
As far as I know, my post started the recent trend you complain about.
Several commenters on this thread (e.g. @Lucius Bushnaq here and @MondSemmel here) mention LessWrong's growth and the resulting influx of uninformed new users as the likely cause. Any such new users may benefit from reading my recently-curated review of Planecrash, the bulk of which is about summarising Yudkowsky's worldview.
i continue to feel so confused at what continuity led to some users of this forum asking questions like, "what effect will superintelligence have on the economy?" or otherwise expecting an economic ecosystem of superintelligences
If there's decision-making about scarce resources, you will have an economy. Even superintelligence does not necessarily imply infinite abundance of everything, starting with the reason that our universe only has so many atoms. Multipolar outcomes seem plausible under continuous takeoff, which the consensus view in AI safety (as I understand it) sees as more likely than fast takeoff. I admit that there are strong reasons for thinking that the aggregate of a bunch of sufficiently smart things is agentic, but this isn't directly relevant for the concerns about humans within the system in my post.
a value-aligned superintelligence directly creates utopia
In his review of Peter Singer's commentary on Marx, Scott Alexander writes:
[...] Marx was philosophically opposed, as a matter of principle, to any planning about the structure of communist governments or economies. He would come out and say it was irresponsible to talk about how communist governments and economies will work. He believed it was a scientific law, analogous to the laws of physics, that once capitalism was removed, a perfect communist government would form of its own accord. There might be some very light planning, a couple of discussions, but these would just be epiphenomena of the governing historical laws working themselves out.
Peter Thiel might call this "indefinite optimism": delay all planning or visualisation because there's some later point where it's trusted things will all sort themselves out. Now, if you think that takeoff will definitely be extremely hard and the resulting superintelligence will effortlessly take over the world, then obviously it makes sense to focus on what that superintelligence will want to do. But what if takeoff lasts months or years or decades? (Note that there can be lots of change even within months if the stakes look extreme to powerful actors!) Aren't you curious about what an aligned superintelligence will end up deciding about society and humans? Are you so sure about the transition period being so short and the superintelligence being so unitary and multipolar outcomes being so unlikely that we'll never have to worry about problems downstream of the incentive issues and competitive pressures that I discuss (which Beren recently had an excellent post on)? Are you so sure that there is not a single interesting, a priori deducible fact about the superintelligent economy beyond "a singleton is in charge and everything is utopia"?
- The bottlenecks to compute production are constructing chip fabs; electricity; the availability of rare earth minerals.
Chip fabs and electricity generation are capital!
Right now, both companies have an interest in a growing population with growing wealth and are on the same side. If the population and its buying power begins to shrink, they will be in an existential fight over the remainder, yielding AI-insider/AI-outsider division.
Yep, AI buying power winning over human buying power in setting the direction of the economy is an important dynamic that I'm thinking about.
I also think the AI labor replacement is initially on the side of equality. [...] Now, any single person who is a competent user of Claude can feasibly match the output of any traditional legal team, [...]. The exclusive access to this labor is fundamental to the power imbalance of wealth inequality, so its replacement is an equalizing force.
Yep, this is an important point, and a big positive effect of AI! I write about this here. We shouldn't lose track of all the positive effects.
Great post! I'm also a big (though biased) fan of Owain's research agenda, and share your concerns with mech interp.
I'm therefore coining the term "prosaic interpretability" - an approach to understanding model internals [...]
Concretely, I've been really impressed by work like Owain Evans' research on the Reversal Curse, Two-Hop Curse, and Connecting the Dots[3]. These feel like they're telling us something real, general, and fundamental about how language models think. Despite being primarily empirical, such work is well-formulated conceptually, and yields gearsy mental models of neural nets, independently of existing paradigms.
[emphasis added]
I don't understand how the papers mentioned are about understanding model internals, and as a result I find the term "prosaic interpretability" confusing.
Some points that are relevant in my thinking (stealing a diagram from an unpublished draft of mine):
So overall, I don't think the type of work you mention is really focused on internals or interpretability at all, except incidentally in minor ways. (There's perhaps a similar vibe difference here to category theory v set theory: the focus being relations between (black-boxed) objects, versus the focus being the internals/contents of objects, with relations and operations defined by what they do to those internals)
I think thinking about internals can be useful—see here for a Neel Nanda tweet arguing the reversal curse is obvious if you understand mech interp—but the blackbox research often has a different conceptual frame, and is often powerful specifically when it can skip all theorising about internals while still bringing true, generalising statements about models to the table.
And therefore I'd suggest a different name than "prosaic interpretability". "LLM behavioural science"? "Science of evals"? "Model psychology"? (Though I don't particularly like any of these terms)
If takeoff is more continuous than hard, why is it so obvious that there exists exactly one superintelligence rather than multiple? Or are you assuming hard takeoff?
Also, your post is about "labor-replacing AGI", but it writes as if the near-term world that such AGI might cause lasts eternally
If things go well, human individuals continue existing (and humans continue making new humans, whether digitally or not). Also, it seems more likely than not that fairly strong property rights continue (if property rights aren't strong, and humans aren't augmented to be competitive with the superintelligences, then prospects for human survival seem weak since humans' main advantage is that they start out owning a lot of the stuff—and yes, that they can shape the values of the AGI, but I tentatively think CEV-type solutions are neither plausible nor necessarily desirable). The simplest scenario is that there is continuity between current and post-singularity property ownership (especially if takeoff is slow and there isn't a clear "reset" point). The AI stuff might get crazy and the world might change a lot as a result, but these guesses, if correct, seem to pin down a lot of what the human situation looks like.
The scenario does not say that AI progress slows down. What I imagined to be happening is that after 2028 or so, there is AI research being done by AIs at unprecedented speeds, and this drives raw intelligence forward more and more, but (1) the AIs still need to run expensive experiments to make progress sometimes, and (2) basically nothing is bottlenecked by raw intelligence anymore so you don't really notice it getting even better.