The risk that it simply ends up being owned by the few who create it, thus leading to a total concentration of humanity's productive power, isn't immaterial; in fact, it looks like the default outcome.
Yes, this is why I've been frustrated (and honestly aghast, given timelines) at the popular focus on AI doom and paperclips rather than on the fact that this is the default (if not nigh-unavoidable) outcome of AGI/ASI, even if "alignment" gets solved. Comparisons with industrialization and other technological developments are specious, because none of them had the potential to do anything close to this.
Wouldn't an important invention such as the machine gun, or obviously fission weapons, fit your argument pattern? You could make a reasonable case that, like a world with technological unemployment, a world where humans are cheap to slaughter is overall worse, and that if you could have coordinated with the world powers of the time to agree to an "automatic weapon moratorium," it would have resulted in a better world.
...The problem is that Kaiser Wilhelm and other historical leaders are going to say "suuurrrreee," agree to the deal, and you already know the nasty surprise any pow...
I very much agree with you here and in your "AGI deployment as an act of aggression" post; the overwhelming majority of humans do not want AGI/ASI and its straightforward consequences (total human technological unemployment and the concomitant abyssal social/economic disempowerment), regardless of what paradisaical promises are made to them (promises for which there is no recourse if they are not kept: economically useless humans can't go on strike, etc.).
The value (synonymous here with "scarcity") of human intelligence and labor output has been a foundation of e...
I have read your comments on the EA Forum, and the points do resonate with me.
As a layman, I have a personal distrust of the (what I'd call) anti-human ideologies driving the actors you refer to, and agree that a majority of people do as well. It is hard to feel much joy in going extinct and being replaced by synthetic beings, probably in a way most would characterize as dumb (Clippy being the extreme case).
I also believe that fundamentally changing human subjective experience (radical bioengineering, or uploading to an extent) in order to erase the abilit...
AGI is potentially far more useful and powerful than nuclear weapons ever were, and it also provides a possible route to breaking the global nuclear stalemate.
If this is true -- or is perceived to be true among nuclear strategy planners and those with the authority to issue a lawful launch order -- it might create disturbingly (or delightfully, if you see this as a way to prevent the creation of AGI altogether) strong first-strike incentives for nuclear powers which don't have AGI, don't want to see their nuclear deterrent turned to dust, and don't want to be put under the sword of an adversary's AGI.
Re "they should also pop out in a sane person's mind within like 15 seconds of actually hearing that statement" I agree with that in the abstract; few people will say that a state of high physiological alertness/vigilance is Actually A Good Idea to cultivate for threats/risks not usefully countered by the effects of high physiological alertness.
Being able to reason about that in the abstract doesn't necessarily transfer to actually stopping doing it, though. Like, personally, I feel like being told something along the lines of "you're working yourself up into a co...
I think there's a case to be made for AGI/ASI development and deployment as a "hostis humani generis" act, and others have made the case as well. I am confused (and, let's be honest, increasingly aghast) as to why AI doomers rarely ...