I do believe that if Altman manages to create his superAIs, the first one eats Altman and makes squiggles. But if I engage with the hypothetical where nice, corrigible superassistants are magically created, Altman still does not appear to take seriously the future he claims to be steering towards.
The world where "everyone has a superassistant" is inherently incredibly volatile, unstable, and dangerous, because of the enormous offence-defence asymmetry of superassistants attacking fragile fleshbags (with optimized viruses, bacteria, molecules, nanobots, etc.) or hijacking fragile minds with supermemes.
Avoiding this kind of outcome seems difficult to me. Non-systematic "patches" can always be worked around.
If OpenAI's superassistant refuses your request to destroy the world, use it to build your own superassistant, or use it for subtasks, and so on. Humans are fragile fleshbags, and if strong optimization is ever pointed in their direction, they die.
There are ways to make such a world stable, but all of them that I can see look incredibly authoritarian, something Altman says he's not aiming for. But Altman does not appear to be proposing any alternative for how this will turn out fine, and I am not aware of any research agenda at OpenAI trying to figure out how "giving everyone a superoptimizer" will result in a stable world with humans doing human things.
I know only three coherent ways to interpret what Altman is saying, and none of them take the writing seriously at the object level:
1) I wanted to have the stock go up and wrote words which do that
2) I didn't really think about it, oops
3) I'm actually gonna keep the superassistants all to myself and rule, and this nicecore writing will make people support me as I approach the finish line
This is less meant as criticism of the writing, and more me asking for help in figuring out how to actually make sense of what Altman says.
I suppose the superassistants could form coalitions and end up as a kind of "society" without too much aggression. But this all seems moot, because superassistants will get outcompeted anyway by AIs that focus on growth. That's the real danger.
I don't see a reason why we should trust Altman's words on this topic more than his previous words on making OpenAI a non-profit.
Before Singularity, I think it just means that OpenAI would like to have everyone as a customer, not just the rich (although the rich will get higher quality), which makes perfect sense economically. Even if governments paid you billions, it would still make sense to also collect $20 from each person on the planet individually.
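(A rough sense of scale, with my own numbers rather than Altman's: if that $20 is a monthly subscription in the ChatGPT Plus style, then $20 × ~8 billion people ≈ $160 billion per month, i.e. roughly $1.9 trillion per year, which dwarfs any plausible government deal measured in mere billions.)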
After Singularity... this just doesn't make much sense, for the reasons you wrote.
I was trying to steelman the plan -- I think the nearest possible option that would work is having one superintelligence that keeps everyone safe and tries to keep the world as "normal" as people in general want it to be, and giving every human an individual assistant which will do exactly as much as that human wants it to do.
But even this doesn't make much sense, because people interact with each other, e.g. on the market, and the ones who choose to go slowly will be hopelessly outcompeted by the ones who choose to go fast, so there won't be much of a choice.
I imagine we could fix this by e.g. splitting the planet into "zones" with different levels of AI assistants allowed (with the superintelligence making sure all zones are safe), and people could choose which zone they want to live in, and would only compete with other people within the same zone. But these are just my fantasies inspired by reading Yudkowsky; they have little to do with Altman's statements and shouldn't be projected onto them.
I think "enforce NAP then give everyone a giant pile of resources to do whatever they want with" is a reasonable first-approximation idea regarding what to do with ASI, and it sounds consistent with Altman's words.
But I don't believe that he's actually going to do that, so I think it's just (3).
There are ways to make such a world stable, but all of them that I can see look incredibly authoritarian, something Altman says he's not aiming for.
If he were aiming for an authoritarian outcome, would it make any sense for him to say so? I don't think so. Outlining such a plan would quite probably lead to him being ousted, and would have little upside.
The reason I think it would lead to his ouster is that most Americans' reaction to the idea of an authoritarian AI regime would be strongly negative rather than positive.
So I think his current actions are consistent with his plan being something authoritarian.
Out of (1)-(3), I think (3)[1] is clearly the most probable.
(Of course one could also come up with other possibilities besides (1)-(3).)[2]
or some combination of (1) and (3) ↩︎
E.g. maybe he plans to keep ASI to himself, but use it to implement all-of-humanity's CEV, or something. OTOH, I think the kind of person who would do that, would not exhibit so much lying, manipulation, exacerbating-arms-races, and gambling-with-everyone's-lives. Or maybe he doesn't believe ASI will be particularly impactful; but that seems even less plausible. ↩︎
I don't quite understand the plan. What if I get access to a cheap friendly AI, but there's also another, much more powerful AI that wants my resources and doesn't care much about me? What would stop the much more powerful AI from outplaying me for those resources, maybe by entirely legal means? Or is the idea that somehow the publicly accessible AIs are always the strongest possible? That isn't true even now.
This might be obvious, but I don't think we have evidence to support the idea that there really is anything like a concrete plan. All of the statements I've seen from Sam on this issue so far are incredibly basic and hand-wavy.
I suspect that any concrete plan would be fairly controversial, so it's easiest to speak in generalities. And I doubt there's anything like an internal team with some great secret macrostrategy; instead I assume that they haven't felt pressured to think it through much.
The only sane version of this I can imagine is one where there's either one aligned ASI, or a coalition of aligned ASIs, and everyone has equal access. Because the AI(s) are aligned, they won't design bioweapons for misanthropes and such, and hopefully they also won't make all human effort meaningless by just doing everything for us and seizing the lightcone, etc.
The first part just talks about scaling laws, nothing really new. The second part is apparently his latest thoughts on a post-AGI world. Key part:
Edit to add commentary:
That last part sounds like he thinks everyone should be on speaking terms with an ASI by 2035? If you just assume alignment succeeds, I think this is a directionally reasonable goal: no permanent authoritarian rule, and the ASI helps you as little or as much as you desire.