From the outside, depending on how detailed your understanding is, any franchise could look that way. Avon and Tupperware look a bit that way. Some MLM companies are more legitimate than others.
From a more abstract point of view, I could argue that "cities" are an example. "Hey, send your kids to live here, let some king and his warriors be in charge, and give up your independence, and you'll all get richer!" It wasn't at all clear in the beginning how "Pay taxes and die of diseases!" was going to be good for anyone but the rulers, but the societies that did it more and better thrived and won.
To the latter: my point is that, except to the extent we're resource constrained, I'm not sure why anyone (and I'm not saying you necessarily are) would argue against any safe line of research, even if they thought it was unlikely to work.
To the former: I think one of the things we can usefully bestow on future researchers (in any field) is a pile of lines of inquiry, including ones that failed, ones we realized we couldn't properly investigate yet, and ones where we made even a tiny bit of headway.
I'd say their value is instrumental, not terminal. The sun and stars are beautiful, but only when there are minds to appreciate them. They make everything else of value possible, through their light and heat and their production of all the various elements beyond helium.
But a dead universe full of stars, and a sun surrounded by lifeless planets, have no value as far as I'm concerned, except insofar as there is remaining potential for new life to arise that would itself have value. Suppose you gave me a choice between two worlds: a permanently dead universe of infinite extent, full of stars, or a single planet full of life (of a form I'm capable of finding value in, so a planet of nothing but bacteria doesn't cut it) under a bland and starless sky, surviving only on artificial light and heat (assume they've mastered controlled fusion and indoor agriculture). I'd say the latter is more valuable.
Unfortunately, in the world I live in, the same people who would accept "This is obviously a Ponzi scheme" (but who don't understand AI x-risk well) also have to contend with the fact that most people they hear talking about AI are indistinguishable (to them) from people talking about crypto as an investment, or about how transformative AI will drop GDP doubling times to years, months, or weeks. So the same argument could be used to get (some of) them to dismiss the notion that AI could become that powerful at all, and with even less apparent weirdness.
Arguments that something has the form of a Ponzi scheme are, fortunately and unfortunately, not always correct. Some changes really do enable permanently (at least on the timescales the person thinks of as permanent) faster growth.
Yes on the overall gist, and I feel like most of the rest of the post is trying to define the word "things" more precisely. The Spokesperson thinks "past annual returns of a specific investment opportunity" are a "thing." The Scientist thinks this is not unreasonable, but that "extrapolations from established physical theories I'm familiar with" are more of a "thing." The Epistemologist says only the most basic low-level facts we have, taken as a whole set, are a "thing," and that we would ideally reason from all of them without drawing these other boundaries with too sharp and rigid a line. Or at least, that in places where we disagree about the nature of the "things," that's the direction in which we should move to settle the disagreement.
I often think about this as "it's hard to compete with future AI researchers on moving beyond this early regime". (That said, we should of course have some research bets for what to do if control doesn't work for the weakest AIs which are very useful.)
I see this kind of argument a lot, but to my thinking, the next generation of AI researchers will only have the tools today's researchers build for them. You're not trying to compete with them; you're trying to empower them. If James Watt (and his peers) had been born a century later, the industrial revolution wouldn't have involved much faster growth rates; they would just have gotten a later start at figuring out how to build steam engines that worked well. (Or at least, growth rates might have been faster for various reasons, but at no point would the state of steam engines in that counterfactual world have been farther along than it was in ours at the same date.)
(I hesitate to even write this next bit for fear of hijacking in a direction I don't want to go, and I'd put it in a spoiler tag if I knew how. But, I think it's the same form of disagreement I see in discussions of whether we can 'have free will' in a deterministic world, which to my viewpoint hinges on whether the future state can be predicted without going through the process itself.)
Who are these future AI researchers, and how did they get here and get better if not by the efforts of today's AI researchers? And in a world where Sam Altman is asking for $7 trillion and not being immediately and universally ridiculed, are we so resource constrained that putting more effort into whatever alignment research we can try today is actually net-negative?
Ok, then that I understand. I do not think it follows that I should be indifferent between those two ways of me dying. Both are bad, but only one of them necessarily destroys everything I value.
In any case, I think it's much more likely that a group using an aligned-as-defined-here AGI would kill (almost) everyone by accident rather than intentionally.
Before anything else, I would note that your proposed scenario has winners. In other words, it's a horrible outcome, but in this world where alignment as you've defined it is easy but a group of humans uses it to destroy most of the population, that group of humans likely survives, repopulates the Earth, and continues on into the future.
This, to put it mildly, is a far, far better outcome than we'd get from an AI that wants to kill everyone of its own agency, or that doesn't care about us enough to avoid doing so as a side effect of other actions.
I don't remember where or when, but IIRC EY once wrote that if an evil person or group used ASI to take over the world, his reaction would be to 'weep for joy and declare victory' that any human-level agents whatsoever continue to exist and retain enough control to be said to "use" the AI at all.
That said, yes, if we do figure out how to make an AI that actually does what humans want it to do, or CEV-would-want it to do, when they ask it to do something, then preventing misuse becomes the next major problem we need to solve, or ideally to have solved before building such an AI. And it's not an easy one.
Each of you, go and make a list of all the assumptions you made in setting up and carrying out the experiment, along with your priors for the likelihood of each.
Compare lists, make sure you're in agreement on what the set of potential failure points is for where you went wrong.
Then figure out how to update your probabilities based on this result, and what experiment to perform next.
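To make that last step concrete, here's a minimal sketch (entirely my own illustration, with made-up hypotheses and numbers, not anything from the exchange above) of what "update your probabilities based on this result" could look like: treat each assumption as a candidate failure point, assign priors, and apply Bayes' rule given how likely the surprising result would be under each.

```python
# A toy Bayesian update over candidate failure points.
# All hypotheses and numbers here are invented for illustration.

priors = {
    "sample was contaminated": 0.10,
    "instrument was miscalibrated": 0.15,
    "the theory's approximation breaks in this regime": 0.05,
    "procedural error by the experimenter": 0.20,
    "nothing failed; the result is real": 0.50,
}

# P(observing this surprising result | hypothesis) -- also invented.
likelihoods = {
    "sample was contaminated": 0.8,
    "instrument was miscalibrated": 0.7,
    "the theory's approximation breaks in this regime": 0.9,
    "procedural error by the experimenter": 0.6,
    "nothing failed; the result is real": 0.05,
}

# Bayes' rule: posterior is proportional to prior * likelihood, then normalize.
unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
total = sum(unnormalized.values())
posteriors = {h: p / total for h, p in unnormalized.items()}

for hypothesis, p in sorted(posteriors.items(), key=lambda kv: -kv[1]):
    print(f"{p:.2f}  {hypothesis}")
```

The particular numbers don't matter; the point is that the disagreement gets localized to whichever assumptions were both doubtful going in and likely to produce the anomaly.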
(Basically, the students' world models are too narrow to even realize how many things they're assuming away, so they point a finger at the only thing they knew to think of as a variable. And the professor (and all their past science teachers) failed to articulate what it was they were actually teaching and why.)
Yeah, you're right, but for most of history they were net population sinks that generated outsized investment returns. Today they're not population sinks because of sanitation etc. etc.
I know I'm being imprecise and handwavy, so feel free to ignore me, but really my thought was just that lots of things look vaguely like Ponzi schemes unless you get into more details than most people are going to pay attention to.