I've been trying my best to think of something AGI could do that I would really love and deeply endorse.
I can think of some nice things. New kinds of animals. Non-habit-forming heroin. A stupid-pointless-disagreements-between-family-members fixomatic, maybe. Turn everyone hot.
None of this makes me joyful or hopeful; it just sounds neat and good. Humans seem pretty damn good at inventing tech and the like ourselves anyway.
I think I might have assumed or copy-pasted the "AI is truly wonderful if it goes truly well" pillar of my worldview. Or maybe I've forgotten the original reasons I believed it.
What exactly did that great AI future involve again?
I would say value preservation and alignment of the human population. I think these are the hardest problems the human race faces, and the ones that would make the biggest difference if solved. You're right that humanity is great at developing technology, but we're badly unaligned with respect to each other and are constantly losing value one way or another.
If we could solve this problem without AGI, we wouldn't need AGI; we could just develop whatever we want. But so far AGI seems like the only path to reliable alignment and to avoiding Molochian coordination failures.
I agree deeply with the first paragraph. I was going to list coordination as the only great thing I know of where AI might help us do something we really couldn't do otherwise, but I removed it because it occurred to me that I have no plausible story for how that would actually happen. How do you imagine that going down? All I've got is "some rogue benevolent actor runs CEV or executes a pivotal act," which I don't think is very likely.