I agree; I'm probably not as sure about sufficient alignment, but yes.
I suppose this also assumes a kind of orderly world where preserving humans actually is within the means of humanity and of AGIs (within their Molochian frames), and within the trivial means of later superintelligences. (US office construction spending and data center spending are about to cross: https://x.com/LanceRoberts/status/1953042283709768078 .)
Thanks for the reply. I have gripes with
analogy doesn't by itself seem compelling, given that humanity as a whole (rather than particular groups within it or individuals) is a sufficiently salient thing in the world
etc., because don't you think that humanity, from the point of view of an ASI at the 'branch point' of deciding whether humanity continues to exist, may well be on the order of importance of an individual to a billionaire?
Yes, and my reply to that (above) is that humanity has a bad track record at this, so why would AIs trained on human data be better? Think also of indigenous peoples, extinct species humans didn't care enough about, etc. Also, the point of the Dyson sphere parable is not wanting something, it's wanting something enough that it actually happens.
since the necessary superintelligent infrastructure would only take a fraction of the resources allocated to the future of humanity.
I'm not sure about that and the surrounding argument. I find Eliezer's analogy compelling here: when constructing a Dyson sphere around the sun, leaving a tiny sliver of light, enough for Earth, would correspond to a couple of dollars of a contemporary billionaire's wealth. Yet you don't get those couple of dollars.
(This analogy has caveats, like Jeff Bezos lifting the Apollo 11 rocket engines from the ocean floor and giving them to the Smithsonian, which should be worth something to you. Alas, it kinda means you don't get to choose what you get. Maybe it's storage space for your brain scan, as in AI 2027.)
Plus, spelling out the Dyson sphere point: the superintelligent infrastructure will very likely, by default, get in the way of humanity's existence at some point. At that point the AIs would have to consciously decide to avoid that, at some cost to themselves. Humanity has a bad track record at making that kind of decision (not completely sure here, but thinking of e.g. Meta's effect on the wellbeing of teenage girls). So why would AIs be more willing to do it?
They released the new models and updated apps in tranches.