all of this has become possible thanks to the dominance of offensive technology
I used to think that way too, but now I think it's the other way around. The strong can always hurt the weak somehow; that's just a fact of life, and the offense/defense ratio doesn't change it much either way. But for the strong to hurt the weak with impunity, the weak must be unable to hurt the strong right back. In other words, it depends mainly on the strong's defensive tech, or on the weak's lack of offensive tech.
I've been making a similar point in a distributed fashion for many months, spread across many comments: the key factor in world niceness is decentralization - but not decentralization of economic power or productivity (a slave in a diamond mine can be very productive). Rather, the key is decentralization of military potential, and more specifically of offensive potential. Not defensive. The spreading-out of threat. Democracy is downstream of "it's easy to teach a peasant to shoot a gun and kill a knight". You reach that point by the end of the post, and I think it's the most important thing.
On a related note, I'd like to push back against your idealism about cryptography, and defensive tech in general. In my eyes, defensive tech simply isn't as strong a force for good. Cryptography in particular suffers, up to a point, from the five-dollar-wrench problem: it doesn't actually let you keep a secret from someone stronger, because they'll just beat the secret out of you. You could say citizens can hide the fact that they have a secret at all - but to be immune to things like statistical analysis of behavior, a citizen needs a truly ridiculous amount of opsec. Can you install Tor on your machine and say confidently that nobody knows you did it? Have you thought through what that truly takes? And at the upper levels we run into the ultimate problem with defensive tech: it can defend bad things too. Imagine mass torture of simulated beings happening under homomorphic encryption, and the vision of a cryptographically happy world won't look so happy.
To break through such things, what we really need is dominance of offensive tech, which makes it militarily useful to co-opt the little guys instead of oppressing them. But then we run smack into the problem that future AI weapons aren't such a tech; they're the opposite, empowering the bigger guy beyond all reason. And there the wheels fall off: I have no idea how to continue thinking optimistically past that point. The future just looks like tyranny no matter what.
I mean, yeah. For a while now I've thought that the "takeoff" will consist not of a rogue AI making war on humanity, but of AI-empowered companies and governments becoming stronger and more callous (at least toward most people, who will no longer be needed for success). After all, AIs and companies/governments share the same convergent instrumental goal, namely to grow in power, so an alliance or merger between them makes perfect sense. The end result is just as bad though.
People working on alignment aren't ensuring we're safe :-(
The owners of an AI company know how much risk they can stomach. If alignment folks make AI a bit safer, the owners will step on the gas a little more, to stay at a similar level of risk but a higher return. And since many AI-caused risks apply much less to owners and more to people on the outside (like, oh, disempowerment), the net result of working on alignment is that people on the outside see more AI-driven disruptive change and face more risk. Some of the most famous examples of alignment work, like RLHF or the "helpful, harmless, honest assistant", ended up hugely increasing risk by exactly this mechanism. In short, people working on alignment at big AI companies are enablers of bad things.
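To make the mechanism concrete, here's a toy model in Python (all numbers are hypothetical, picked only to illustrate the dynamic): the owners choose whatever deployment pace keeps their own risk at a fixed budget, so halving their per-unit risk just doubles the pace, and the risk that falls on outsiders doubles with it.

```python
# Toy model of the risk-compensation mechanism above.
# All numbers are hypothetical, chosen only to make the dynamic concrete.

RISK_BUDGET = 0.10  # the level of risk the owners are willing to stomach

def chosen_pace(owner_risk_per_pace: float) -> float:
    """Owners push the pace until their own risk hits the budget."""
    return RISK_BUDGET / owner_risk_per_pace

def outsider_risk(pace: float, outsider_risk_per_pace: float = 0.03) -> float:
    """Risk borne by outsiders scales with pace, not with the owners' budget."""
    return pace * outsider_risk_per_pace

for label, owner_risk_per_pace in [("before alignment work", 0.02),
                                   ("after alignment work ", 0.01)]:
    pace = chosen_pace(owner_risk_per_pace)
    print(f"{label}: pace = {pace:.0f}, outsider risk = {outsider_risk(pace):.2f}")

# before alignment work: pace = 5,  outsider risk = 0.15
# after alignment work : pace = 10, outsider risk = 0.30
```

Halving the owners' risk per unit of pace leaves their total risk exactly at budget, while the risk faced by everyone outside doubles.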
I do have a hard time finding people I want to date.
This seems like the root problem, so it's maybe worth narrowing down a bit. Do you think there are just very few people in the world who you'd want to date? Or do you think there are many such people, but just not in the places you look?
I'm also not saying let's ban it. It's a thought experiment. The intended conclusion (though maybe my comment was too cryptic) was that if banning X in general (where X = "entering the market with a slightly better service") is obviously wealth-reducing, then that means allowing X is wealth-increasing, so a random individual instance of X is probably wealth-increasing as well.
And the example in your post looks to me like a fairly typical instance of X. It's not unusually bad. Most instances of X will look like stealing customers, putting incumbents out of business and so on. I'm saying that's all right: the benefit over time is bigger than the harm.
Here's the thing though. What Tom is doing (if we set aside distractions like paying salaries to employees, paying rent for the location, etc.) is entering the market with a slightly better service and outcompeting the incumbent. Let's say we ban this activity! Now nobody's allowed to enter a market with a slightly better service and outcompete the incumbent. Enact this ban and wait a few years: to me it's obvious that society will be much poorer as a result. You'll have gotten rid of the small improvements that add up to large ones.
And so it seems likely to me that one individual act - Tom starting a car repair shop in a slightly nicer location - will also turn out to increase the total wealth of society, all things considered. Statistically at least.
The use of the word "extractive" in this post confused me a lot.
Looking at the example with Tom and Fred - nicer location or not, Tom is repairing cars. That's the value he's providing to customers, and it's enough to account for all the money he's making. And customers are better off too. The only one losing out is Fred.
All right. The question becomes: as a society, should we be sad about the losses of companies that get outcompeted? Say Fred has a software company making some expensive software to do a task. Then Tom, a hobbyist, releases a small piece of open source software that does the same task just as well. He doesn't make any profit from it, but everyone switches to using his software for free. Fred's company goes out of business, the investment is lost, and so on. Was Tom's action "extractive"? Should we be sad?
It's from the recent book "There Is No Antimemetics Division" by Sam Hughes. (An earlier version of the story can be read for free, and I think it's actually better than the book version.) In short, U-3125 (or SCP-3125 in the original story) is a kind of abstract monster that exists everywhere and wants to eat the world.
Your argument seems to be that it'll be hard for the CEO to align the AI to themselves and screw the rest of the company. Sure, maybe. But will it be equally hard for the company as a whole to align the AI to its money interests and screw the rest of the world? That's less outlandish, isn't it? But equally catastrophic. After all, companies have been known to do very bad things when they had impunity; and if you say "but the spec is published to the world", recall that companies have been known to lie when it benefited them, too.