The scenario I am most concerned about is a strongly multipolar Malthusian one. There is some chance (maybe even a fair one) that a singleton ASI decides, or an oligopoly of ASIs rigorously coordinates, to preserve the biosphere - including humans - at an adequate or superlative level of comfort or fulfillment, or to help them ascend themselves, due to ethical considerations, research purposes, or simulation/karma-type considerations.
In a multipolar scenario of gazillions of AIs at Malthusian subsistence levels, none of that matters in the default scenario...
It's not at all insane IMO. If AGI is "dangerous" x timelines are "short" x anthropic reasoning is valid...
... Then WW3 will probably happen "soon" (2020s).
https://twitter.com/powerfultakes/status/1713451023610634348
I'll develop this into a post soonish.
It's ultimately a question of probabilities, isn't it? If the risk is ~1%, we mostly all agree Yudkowsky's proposals are deranged. If 50%+, we all become Butlerian Jihadists.
My point is I and people like me need to be convinced it's closer to 50% than to 1%, or failing that we at least need to be "bribed" in a really big way.
I'm somewhat more pessimistic than you on civilizational prospects without AI. As you point out, bioethicists and various ideologues have some chance of tabooing technological eugenics. (I don't understand your point about assortative ...
It's ultimately a question of probabilities, isn't it? If the risk is ~1%, we mostly all agree Yudkowsky's proposals are deranged. If 50%+, we all become Butlerian Jihadists.
Uhh... No, we don't? 1% of 8 billion people is 80 million people, and AI risk involves more at stake if you loop in the whole "no more new children" thing. I'm not saying that "it's a small chance of a very bad thing happening so we should work on it anyways" is a good argument, but if we're taking as a premise that the chance of failure is 1%, that'd be sufficient to justify sev...
I disagree with AI doomers, not in the sense that I consider it a non-issue, but in that my assessment of the risk of ruin is something like 1%, not 10%, let alone the 50%+ that Yudkowsky et al. believe. Moreover, restrictive AI regimes threaten to produce a lot of bad outcomes, possibly including the devolution of AI control into a cult (we have a close analogue in post-1950s public opinion towards civilian applications of nuclear power and explosions, which robbed us of Orion Drives amongst other things), and what may well be a delay in life extension timeli...
I totally get where you're coming from, and if I thought the chance of doom was 1% I'd say "full speed ahead!"
As it is, at fifty-three years old, I'm one of the corpses I'm prepared to throw on the pile to stop AI.
The "bribe" I require is several OOMs more money invested into radical life extension research
Hell yes. That's been needed rather urgently for a while now.
Couple of points:
Frontier LLM performance on offline IQ tests is improving at perhaps 1 S.D. per year, and might have recently become even faster. These tests are a good measure of human general intelligence. One more such jump and there will be PhD-tier assistants for $20/month. At that point, I expect any lingering problems with invoking autonomy to be quickly fixed as human AI research acquires a vast multiplier through these assistants, and a few months later AI research becomes fully automated.