I suppose the superassistants could form coalitions and end up as a kind of "society" without too much aggression. But this all seems moot, because superassistants will get outcompeted anyway by AIs that focus on growth. That's the real danger.
I don't quite understand the plan. What if I get access to cheap friendly AI, but there's also another, much more powerful AI that wants my resources and doesn't care much about me? What would stop that much more powerful AI from outplaying me for those resources, maybe by entirely legal means? Or is the idea that the publicly available AIs will somehow always be the strongest ones around? That isn't true even now.
I also agree with all of this.
As for what an okayish possible future could look like, I have two stories in mind:
Humans end up as housecats: living among much more powerful creatures doing incomprehensible things, but still mostly cared for.
Some humans get uplifted to various levels, others stay baseline. The higher you go, the more aligned you must be to those below. So still a hierarchy, with super-smart creatures at the top and housecats at the bottom, but with more levels in between.
A post-AI world where baseline humans are anything more than housecats seems hard to imagine, I'm afraid. And even getting to be housecats at all (rather than dodos) looks to be really difficult.
Thanks for writing this, it's a great explanation-by-example of the entire housing crisis.
Well, Christianity sometimes spread by conquest, but other times it spread peacefully just as effectively. Same for democracy. So I don't think the spread of moral values requires conquest.
Wait, but we know that people sometimes have happy moments. Is the idea that such moments are always outweighed by suffering elsewhere? It seems more likely that increasing the proportion of happy moments is doable; it's an engineering problem. So basically I'd be very happy to see a world like the one in the first half of your story, and I don't think it would lead to the second half.
Your theory would predict that we'd be much better at modeling tigers (which hunted us) than at modeling antelopes (which we hunted), but in reality we're about equally bad at modeling either, and much better at modeling other humans.
I don't think this post addresses the main problem. Consider the exchange ratio between labor and land. You need land to live on, and your food needs land to be grown on. Will your work hours buy you more land use than before, or less? (As a programmer, manager, CEO, whatever super-high-productivity job you like.) Well, land gets bid up to the value of its most productive use, so if the same land can instead run AIs that do your job N times over, your labor will never be able to outbid that use, and that closes the case. (There's a toy version of the arithmetic below.)
So basically, the only way the masses can survive long term is by some kind of handouts. It won't just happen by itself due to tech progress and economic laws.
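To make that arithmetic concrete, here's a minimal sketch. The wage, the N=100 multiplier, and the single-plot framing are all made-up assumptions for illustration, not numbers from the post; the only point is the direction of the effect.

```python
# Toy model of the labor-vs-land exchange ratio. All numbers are illustrative.

human_wage = 1.0   # value of one human's labor per year, in arbitrary units
N = 100            # assumption: the same plot of land can host compute
                   # running AIs that do that same job N times over

# A landowner rents to whoever pays more, so land rent gets bid up to
# roughly the value of the land's most productive use.
land_rent = N * human_wage

# Fraction of that plot's rent a human's wages can cover:
affordable_fraction = human_wage / land_rent
print(f"Your labor now covers {affordable_fraction:.1%} of the plot's rent")
# -> Your labor now covers 1.0% of the plot's rent
```

Whatever the exact numbers, as N grows, labor's share of what the land is worth goes to zero, which is why the conclusion doesn't depend on how productive your job is.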
I don't buy it. Lots of species have predators and have had them for a long time, but very few species have anything like our level of intelligence. It seems more likely that most of our intelligence is due to sexual selection, a Fisherian runaway that accidentally focused on intelligence instead of brightly colored tails or something.
Yeah, I stumbled on this idea a long time ago as well. I never drink sugary drinks, my laptop is permanently in grayscale mode, and so on. And it doesn't feel like missing out on fun; on the contrary, it lets me not miss out. When I "mute" some big, addictive, one-dimensional thing, I start noticing all the smaller things that were being drowned out by it. Like, as you say, noticing the deliciousness of baked potatoes when you're not eating sugar every day, or noticing all the colors in my home and neighborhood when my screen is in grayscale.