Gordon Seidoh Worley

I'm writing a book about epistemology. It's about The Problem of the Criterion, why it's important, and what it has to tell us about how we approach knowing the truth.

I've also written a lot about AI safety. Some of the more interesting stuff can be found at the site of my currently dormant AI safety org, PAISRI.

Sequences

Advice to My Younger Self
Fundamental Uncertainty: A Book
Zen and Rationality
Filk
Formal Alignment
Map and Territory Cross-Posts
Phenomenological AI Alignment

Comments

The Illustrated Petrov Day Ceremony
Gordon Seidoh Worley · 7d · 41

Some feedback:

> The ritual is over.
>
> Your lit candles no longer symbolize anything.

Reading the words "no longer symbolize anything" really irked me. Like, what the hell? We went through all that, and now you yank away their meaning like it's nothing?

This was a really abrupt way to end it and it didn't feel good. The candles don't lose their significance after going through the ceremony, even if the ritual is over. Maybe we let them burn out. Maybe we snuff them and store them to be used again next year. Maybe they get used for other purposes, and when that happens, we're reminded of the time we celebrated Petrov Day. Anything other than having them suddenly lose all their symbolism.

The real AI deploys itself
Gordon Seidoh Worley · 9d · 90

Is there a reason you say "real AI" instead of "AGI"? Do you see some gap between what we would call AGI and AI that deploys itself?

The Autofac Era
Gordon Seidoh Worley · 9d · 20

I think it's likely that people will be enough of a threat to prevent the kind of outcome you're proposing, which is why I think this model is interesting. You seem to disagree. Would you agree that that's the crux of why we disagree?

(I don't disagree that what you say is possible. I just think it's not very likely to happen and hence not the "happy" path of my model.)

The Autofac Era
Gordon Seidoh Worley · 9d · 20

> And it wouldn't need the masses to provide market demand: anything it could get from the masses in exchange, it could instead produce on its own at lower resource cost.

I think this still leaves us with the problem of where the market demand that creates growth comes from. This small handful of people at the top will quickly reach maximally comfortable lives and then not need anything else. Humans usually want more, but if that "more" is just a number going up, we're back to either quick stagnation or a bubble.

One good that might be offered is dominance. I didn't think of this before, but we could imagine a world where the "underclass" receive UBI in exchange for fealty, and the folks at the top compete to see who can have the most vassals, with intense competition to attract and keep vassals driving economic growth.

The Autofac Era
Gordon Seidoh Worley · 9d · 20

Yes, such an outcome is possible. I think it's unlikely, though, conditional on winding up in an Autofac-like world, because it requires a Terminator-style sudden takeover; if humans can see it coming, or if it happens gradually, they can react. And since we're supposing here that there's no AGI yet, (1) the humans directing economic activity have very little reason to want this, because it would also mean their own elimination from the economy (even if they do something obvious like fully automate the money-making machine that is their automated business, including the consumption side, they themselves will still want to consume, so there will remain demand for humans to consume within the economy, even if it becomes a small part of it), and (2) there's very little chance of AI coordinating a takeover like this on its own because, again, AI doesn't have its own motivations in this world.

> The new balance of power will be more similar to what we had before firearms, when the powerful were free to treat most people really badly.

I expect the mediating force here to be the need for that mass of humans to be consumers driving the economy. Without them, growth would quickly stagnate or have to rely on bubbles, which aren't sustainable.

The Autofac Era
Gordon Seidoh Worley · 9d · 20

@FlorianH I see you reacted that you think I missed your point, but I'm not so sure I did. You seem to be arguing that an economy can still function even if some actors leave it, so long as some actors remain. That's of course true, but my broader point is about sustaining the level of consumption necessary for growth: a fully automated economy could quickly reach the limits of its capacity to produce (and of the remaining consumers' wealth) if there are very few consumers. I expect a large base of consumers is needed for there to be sufficient growth to justify the high costs of accelerating automation.

The Autofac Era
Gordon Seidoh Worley · 9d · 20

I think I'm confused about what your position is, then. It's true that economic competition is generally unstable in the sense that the specific balance of power between competitors shifts, but competition itself is often quite stable and doesn't collapse into either chaos or monopoly unless something weird about the competitive environment allows it (e.g. no rule of law, protective regulations, etc.).

I also expect the whole Autofac Era to feel quite unstable to people, because things will be changing quickly. And I don't expect it to last very long: I think it's a short period of a few years between its start and the development of AGI (or, if AGI is for some reason impossible, Hansonian EMs).

The Autofac Era
Gordon Seidoh Worley · 10d* · 10

One of the assumptions I'm making is that if AI dispossesses billions of people, that's billions of people who can rebel by attacking automation infrastructure. There might be a way to pull off dispossession so gently that by the time anyone thinks to rebel it's already too late, but I expect less well-coordinated action, and instead sudden shocks that will have to be responded to. The only way to prevent violence that threatens the wealth of capital owners will be to find a way to placate the mass of would-be rebels (doing something like killing everyone who doesn't have a job or own capital is, and will remain, morally reprehensible, and so not a real option), and I expect UBI to be the solution.

The Autofac Era
Gordon Seidoh Worley · 10d · 20

> In such an economy, a decisive resource advantage becomes equivalent to long-term global dominance. If this consolidation completes before ASI arrives, the first ASI will likely be built by an actor facing zero constraints on its deployment, which is a direct path to x-risk. This makes the prospect of a stable, "happy" pre-ASI autofac period seem highly doubtful.

It's unclear to me that such a decisive resource advantage could be held unless we start from an already decisively unipolar world. So long as there are powers who can credibly threaten total dominance, state and business actors will have strategic reasons to prevent complete consolidation, and if desperate enough they would use destructive force (or the threat of it) to ensure no one actor becomes dominant.

The Autofac Era
Gordon Seidoh Worley · 10d* · 50

I think there's simply not a good reason to fully automate consumption. It's one of those ideas that sounds intuitive in the abstract, but in practice it means automating away the very reason anything was being done at all, and historically, when a part of the economy becomes completely self-serving like this, it collapses and we call it a bubble.

There is an open question about what happens if literally the entire economy becomes a bubble. Could it self-sustain? Maybe yes, though I'm not sure how we get there without the incremental bubbles collapsing before they combine into a single big bubble that encompasses all economic activity; if that happened, I'd consider it a paperclip-maximizer scenario. If no, then I think we get an Autofac-like world.

Posts

14 · G Gordon Worley III's Shortform · Ω · 6y · 155 comments
11 · Uncertain Updates: September 2025 · 3d · 0 comments
29 · The Autofac Era · 10d · 18 comments
65 · Software Engineering Leadership in Flux · 17d · 6 comments
16 · All Exponentials are Eventually S-Curves · 1mo · 43 comments
11 · Uncertain Updates August 2025 · 1mo · 1 comment
11 · What is "Meaningness" · 1mo · 0 comments
15 · The trouble with "enlightenment" · 1mo · 4 comments
1 · Good Faith Arguments · 2mo · 0 comments
9 · My Mistake, Your Problem · 2mo · 0 comments
8 · Uncertain Updates: July 2025 · 2mo · 0 comments

Wikitag Contributions

The Problem of the Criterion · 3 years ago · (+1/-7)
Occam's Razor · 4 years ago · (+58)
The Problem of the Criterion · 4 years ago · (+80)
The Problem of the Criterion · 4 years ago · (+570)
Dark Arts · 4 years ago · (-11)
Transformative AI · 4 years ago · (+15/-13)
Transformative AI · 4 years ago · (+348)
Internal Family Systems · 5 years ago · (+59)
Internal Family Systems · 5 years ago · (+321)
Buddhism · 5 years ago · (+321)