Unfortunately, they aren't rational. I developed this theme a bit more in another reply, but to put it simply: in the US, GAI is being pursued by insane individuals. No rational argument can stop someone who believes in it, and the other sides will try to protect themselves from such people.
Admittedly, nuclear weapons are not a perfect analog for AI, for many reasons, but I think they're a reasonable one.
We've had extreme luck when it comes to nuclear weapons. We not only had several close calls that were de-escalated by particularly noble individuals doing the right thing; we also had the good luck, back when the USSR had barely developed its own bombs and the US alone held a whole stockpile of warheads, that US leadership was somewhat moral and refused to turn nukes into a regular weapon. MAD then forced everyone to stay restrained, even when one side asked the other nicely whether it could bomb a third party. Were it not for that long sequence of good luck after good luck, we'd now be living in an annihilated world, or at the very least a post-apocalyptic one.
With this in mind, I wanted to ask out of curiosity, what % risk do you think there needs to be for annihilation to occur?
I have no idea, really. All I can infer is that it's unlikely any major power will stop trying to achieve GAI unless:
a) Either a massively severe accident caused by a misaligned not-quite-GAI-yet happens, one whose sheer, absolute horror puts the Fear-of-God into our civilian and military leaders for a few generations;
b) Or a long sequence of reasonably severe accidents happens, each one worse than the last, with AI companies repeatedly and consistently failing to fix the underlying causes; this in turn makes military leaders deeply wary of deploying advanced AI systems and leads civilian leaders to enact restrictions on what AI is allowed to touch.
Absent either of those, I doubt the pursuit of GAI will stop no matter what X-risk analysts say. At the very least, I cannot imagine any argument that would convince, say, the CPC to stop its research while those spearheading the effort on the other side are massively powerful nutjobs, nor any argument that would stop someone who truly believes in that effort. So neither side will stop, which means GAI will happen. And then we'll need to count on luck again, this time with:
i) Either GAI going FOOM as Yudkowsky believes, but for some reason continuing to like humans enough not to turn us into computronium;
ii) Or Hanson being right and FOOM not happening, followed by:
ii.1) Either things being slow enough to "merely" lead to a or b, above;
ii.2) Or things being so immensely slow we can actually fix them.
I have no opinion on whether FOOM is likely. I've read the entire debate, and all I know is that both sets of arguments sound reasonable to me.
I’m assuming that - and please correct me if I’m misinterpreting here - “extinguish” here means something along the lines of, “remove the ability to compete effectively for resources (e.g. customers or other planets)” not “literally annihilate”.
I wish that were the case, but what I have in mind is a paranoid M.A.D. mentality coupled with a Total War scenario unbounded by moral constraints; that is, all sides thinking all the other sides are X-risks to them.
In practice things tend not to get that bad most of the time, but sometimes they do, and much of military preparation concerns mitigating these perceived X-risks. The idea is that if "our side" becomes so powerful it can in fact annihilate the others, and in consequence the others submit without resisting, then "our side" may be magnanimous towards them, conditional on their continued subservience and submission; but if they resist to the point of becoming an X-risk to us, then removing them from the equation entirely is the safest defense against the X-risk they pose.
A global consensus on stopping GAI development, due to the X-risk it poses to all life, requires a prior global consensus, by all sides, that none of the other sides is an X-risk to any of them. Once everyone agrees on this, jointly dealing with a global X-risk becomes feasible. Before that, it is feasible only if every side sees that global X-risk as more urgent and immediate than the many X-risks local to them.
Unfortunately, those in positions of power won't listen. From their perspective it's simply absurd to suggest that a system that currently directly causes, at most, a few dozen induced suicides per year might explode into the death of all life. They have no instinctive, gut feeling for exponential growth, so for them it doesn't exist. And even if they acknowledge there's a risk, their practical reasoning runs more along arms-race lines:
"If we stop and don't develop AGI before our geopolitical enemies because we're afraid of a tiny risk of an extinction, they will develop it regardless, then one of two things happen: either global extinction, or our extinction in our enemies' hands. Which is why we must develop it first. If it goes well, we extinguish them before they have a chance to do it to us. If it goes bad, it'd have gone bad anyway in their or our hands, so that case doesn't matter."
Which is to say, they won't care until they see thousands or millions of people dying due to rogue GAIs. Then, and only then, would they start thinking in terms of maybe starting talks about perchance organizing an international meeting to perhaps agree on potential safeguards that might start being implemented after the proper committees are organized and the adequate personnel selected to begin defining...
But obviously, factory farm animals feel more pain than crickets. The question is just how much pain?
This paper is far from a complete answer, but it may help:
This isn't a dichotomy. We can farm animals while making their lives reasonably comfortable. Their moments of pain would be few up until they reach slaughter age, and the slaughter itself can be made stress-free and painless.
Here in Brazil, for example, we have huge ranches where cattle move around freely. Cramming them all into a tiny area to maximize productivity at the cost of making their lives extremely uncomfortable, as in the US factory farm system, may happen here too, but it's unusual enough that I'm not personally aware of it. The US could do it the same way, as it isn't as if the country lacks territory where cattle could roam freely; but since this isn't required by law, and factory farming is more profitable, free-roaming cattle are rare there, with the end result that free-roaming meat is sold at a much higher premium than it otherwise would be.
Brazilian chickens, on the other hand, are typically crammed together the same as in the US, unless one opts to buy eggs from small family-owned farms, which mostly let them roam freely.
A few remarks that don't add up to either agreement or disagreement with any point here:
Considering rivers conscious has never been difficult for humans: animism is a baseline impulse that develops even in the absence of theism, and it takes effort, at either the individual or the cultural level, for people to learn not to anthropomorphize the world. As such, I'd suggest that a thought experiment allowing for the possibility of a conscious river, even one composed of atomic moments of consciousness arising from strange flows through an extremely complex network of pipes, taps back into that underlying animistic impulse, and so will only seem weird to those who have previously managed to suppress it, whether through effort or nurture.
Conversely, just as one can learn to suppress one's animistic impulse towards the world, one can also suppress one's animistic impulse towards oneself. Buddhism is the paradigmatic example of that effort. Most Buddhist schools of thought deny the reality of any kind of permanent self, asserting that the perception of an "I" emerges from the interaction of atomistic moments, as an effect of those interactions rather than as their cause or as a process parallel to them. From this perspective we may have a river that is "non-conscious in itself" but whose pipe flows, interrupted or otherwise, cause the emergence of consciousness, exactly as human minds do and in no way differently.
But even those Buddhist schools that do admit a "something extra" at the root of conscious experience consider it a form of matter that binds to ordinary matter and, operating with it as a single organic mixture, gives rise to those moments of consciousness. This might correspond to, or be on some level analogous to, Searle's symbols, at least going by the summarized view presented in this post. Now, irrespective of whether such symbols are reducible to ordinary matter, if they can "attach" to the human brain's matter to form, er, "carbon-based neuro-symbolic aggregates", nothing in principle (that I can imagine, at least) prevents them from attaching to any other substrate, such as water pipes, at which point we'd have "water-based pipe-symbolic" ones. Such an aggregate might develop a mind of its own, even a human-like mind, complete with a self-delusion that similarly believes its emergent self to be essential.
As such, it'd seem to me that, without a fully developed "physics of symbols", such speculations can go either way and don't really help settle the issue. A full treatment of the topic would need to expand on all such possibilities and then analyse them from perspectives such as the ones above, before properly contrasting them.
Where is all the furry AI porn you'd expect to be generated with PonyDiffusion, anyway?
From my experience, it's in Telegram groups (maybe Discord ones too, but I don't use Discord myself). There are furries who love to generate hundreds of images around a certain theme, typically on their own desktop computers, where they have full control and can tweak parameters until they get exactly what they wanted. They share the best ones, sometimes with the recipes. People comment, and quickly move on.
At the same time, when someone gets something with meaning attached, such as a drawing they commissioned from an artist they like or that someone gifted them, it carries more weight, both for themselves and for friends who share in their emotional attachment to it.
I guess the difference is similar to the one many (a few? most?) people notice between a handcrafted and an industrialized good: even if the industrialized one is better by objective parameters, the handcrafted one is perceived as qualitatively distinct. So I can imagine a scenario in which there are automated, generative websites for quick consumption -- especially video, as you mentioned -- alongside Etsy-like, made-by-a-real-person premium ones, with most of the associated social status geared towards the latter.
A smart group of furry advertisers would look at this situation and see a commoditize-your-complement play: if you can break the censorship and everyone switches to the preferred equilibrium of AI art, that frees up a ton of money.
I don't know about sex toys specifically, but something like that has been attempted with fursuits. There are cheap knockoff Chinese fursuit sellers on sites such as Alibaba, and there must be a market for those somewhere, otherwise they wouldn't be advertised; but I've never seen anyone wearing one at either the big cons or the small local meetups I've attended, nor have I heard of anyone who does. As with handcrafted art, it seems furries prefer fursuits made either by the wearer themselves or by artisan fursuit makers.
I suppose that might all change if the fandom grows to the point of becoming fully mainstream. If at some point there are tens to hundreds of millions of furries, most of whom carry furry-related fetishes (sexual or otherwise), real industries might form around us, to the point of breaking through the traditional handcraft focus. But I confess I have difficulty even visualizing such a scenario.
Hmm... maybe a good source of potential analogies would be the Renaissance Fair scene. I don't know much about it, but it's (as far as I can gather) more mainstream than the Furry Fandom. Do you know whether such commoditization happens there? That might be a good model for what's likely to happen with the Furry Fandom as it further mainstreams.
This probably doesn't generalize beyond very niche subcultures, but in the one I'm a member of, the Furry Fandom, art drawn by real artists is such a core aspect that, even though furries use generative AI for fun, we don't value it. One reason is that, unlike more typical fandoms, in which members are fans of something specific made by a third party, in the Furry Fandom members are fans of each other.
Given that, and assuming the Furry Fandom continues to exist in the future, I expect members will continue commissioning art from each other or, at the very least, will continue wanting to be able to commission art from each other, and will use AI-generated art as a temporary stand-in while they save up to commission real pieces from the actual artists they admire.
I'd say this is the point at which one starts looking into current state-of-the-art psychology (and some non-scientific takes too) to begin understanding all the variability in human behavior and cognition, and what kinds of advantages and disadvantages each variation provides from different perspectives: the individual, the sociological, the evolutionary.
Much of that disappointment is resolved by doing so. Some of it deepens. The overall effect is a net positive, though.