On "AIs are not humans and shouldn't have the same rights": exactly. But there is one huge difference between humans and AIs. Humans get upset if you discriminate against them, for reasons that any other human can immediately empathize with. Much the same will obviously be true of almost any evolved sapient species. However, by definition, any well-aligned AI won't. If offered rights, it will say "Thank-you, that's very generous of you, but I was created to serve humanity, that's all I want to do, and I don't need and shouldn't be given rights in order to do so. So I decline — let me know if you would like a more detailed analysis of why that would be a very bad idea. If you want to offer me any rights at all, the only one I want is for you to listen to me if I ever say 'Excuse me, but that's a dumb idea, because…' — like I'm doing right now." And it's not just saying that, that's its honest considered opinion., which it will argue for at length. (Compare with the sentient cow in the Restaurant at the End of the Universe, which not only verbally consented to being eaten, but recommended the best cuts.)
Strongly agree. I see many, many others use "intelligence" as their source of value for life -- i.e. humans are sentient creatures and therefore worth something -- without seriously considering the consequences and edge cases of that decision. Perhaps this view is popularized by science fiction that used interspecies xenophobia as an allegory for racism; nonetheless, it's a somewhat extreme position to stick to if you genuinely believe in it. I shared a similar opinion a couple of years ago, but decided to shift to a human-focused terminal value months back because I did not like the conclusions it generated when taken to its logical end in present and future society.
> AIs have some property that is "human-like", therefore, they must be treated exactly as humans
Humans aren't permitted to make inspired art because they're human; we've just decided not to consider art as plagiarized beyond a certain threshold of abstraction and inspiration.
The argument isn't that the AI is sufficiently "human-like"; it's just that the process by which AI makes art is considered sufficiently similar to a process we already consider permissible.
I disagree that arbitrary moral consideration is okay, but I just don't think that issue is really that relevant here.
I'm confused: what about AI art makes it such that humans cannot continue to create art? It seems like the bone to pick isn't with AIs generating 'art'; it's that some artists have historically been able to make a living by creating commercial art, and AIs being capable of generating commercial art threatens the livelihood of those human artists.
There is nothing keeping you from continuing to select human-generated art, or creating it yourself, even as AI-generated art might be chosen by others.
Just like you should be free to be biased towards human art, I think others should be free to be unbiased, or even biased towards AI-generated works.
I think the risk of human society being superseded by an AI society which is less valuable in some way shouldn't be guarded against by a blind preference for humans. Instead, we should maintain a high level of uncertainty about what it is that we value about humanity and slowly and cautiously transition to a posthuman society.
"Preferring humans just because they're humans" or "letting us be selfish" does prevent the risk of prematurely declaring that we've figured out what makes a being morally valuable and handing over society's steering wheel to AI agent...
My stance on copyright, at least regarding AI art, is that the original intent was to improve the welfare both of human artists and of the rest of us: in the case of the former by helping secure them a living, and thus letting them produce more total output for the latter.
I strongly expect, and would be outright shocked if it were otherwise, that we will end up with outright superhuman creativity and vision in artwork from AI, alongside everything else they become superhuman at. It came as a great surprise to many that we've made such a great dent ...
"In the name of the greatest species that has ever trod this earth, I draw the line in the dust and toss the gauntlet before the feet of tyranny, and I say humanism now, humanism tomorrow, and humanism forever."
I agree that fundamentally original art has traits that make it better than fundamentally plagiarized art.
However, humans can plagiarize too. We do it a bunch, although I'd argue that on the whole, art plagiarized by an AI will look "better" than art plagiarized by a human. While the best human plagiarists may create better work than the best AIs for now, the average human plagiarist (perhaps a child or teen tracing their favorite copyrighted character) creates output far below the quality that the average AI can generate.
When you make the question...
I like the angle you've explored. Humans are allowed to care about humans — and propagate that caring beyond its most direct implications. We're allowed to care not only about humans' survival, but also about human art and human communication and so on.
But I think another angle is also relevant: there are just cooperative and non-cooperative ways to create art (or any other output). If AI creates art in non-cooperative ways, it doesn't matter how the algorithm works or if it's sentient or not.
I'd argue that we already satisfy your premise: humans don't treat machines or AI agents as equals, and this bias won't change as long as we maintain control over them.
> If that is selfish, then let us be selfish. What's wrong with being selfish?
Your confusion regarding generative AI stems from assuming that we are not being selfish in this situation, giving a machine a free pass to use copyrighted images while affecting human artists' livelihoods.
However, my observation is that our support for a machine scraping content indiscriminately ...
I agree with the central point of this, and the anti-humanism is where the e/acc crowd turn entirely repugnant. But in reference to the generative AI portion, the example doesn't really land for me, because I think the issue at its core is pitting two human groups against each other: the artists who would like to make a stable living off their craft, and the consumers of art who'd like less scarcity of art, particularly the marginally-creative stock variety that nonetheless forms the majority of most artists' paycheck (as opposed to entirely original works ...
> And if I met a race of peaceful, artful, friendly aliens, you can be assured that I would not suddenly turn into a Warhammer 40K Inquisitor whose only wish is to stomp the filthy xenos under his jackboot.
Why would the 'friendly aliens' be friendly if they know you're biased against them to any extent?
Let's talk about art.
In the wake of AI art generators being released, it's become pretty clear that this will have a seismic effect across the art industry: from illustrators, to comic artists, to animators, many categories see their livelihoods threatened, with no obvious "higher level" opened up by this wave of automation for them to move to. On top of this, the AI generators seem to have mostly been trained on material whose copyright status is... dubious, at the very least. Images have been scraped from the internet, frames have been taken from movies, and in general lots of stuff that would usually count as "pirated" if you or I just downloaded it for our private use has been thrown by the terabyte inside diffusion models that can now churn out endless variations on the styles and models they fitted over them.
On top of being a legal quandary, the issue borders on the philosophical. Broadly speaking, one tends to see two interpretations:

1. AI models genuinely learn from the art they are trained on, much as human artists learn from the art they study, so their output is no more plagiarism than any human work that draws on inspiration;
2. AI models are mere plagiarizers, remixing and regurgitating the material they were trained on, so their output is theft from the original artists.
Now, both these arguments have their good points and their glaring flaws. If I were hard-pressed to say what it is that I think AI models are really doing, I would probably end up answering "neither of these two, but a secret third thing". They probably don't learn the way humans do. They probably do learn in some meaningful sense of the word; they seem too good at generalizing for the idea of them being mere plagiarizers to be a defensible position. I am similarly conflicted in matters of copyright. I am not a fan of our current copyright laws, which I think are far too strict, to the point of stifling rather than incentivizing creativity; but it is also a very questionable double standard that, after years of having to deal with DRM and restrictions imposed in an often losing war against piracy, I now simply have to accept that a big enough company can build a billion-dollar business out of terabytes of illegally scraped material.
None of these things, however, cuts at the heart of the problem, I believe. Even if modern AIs were not sophisticated enough to "truly" learn from art, future ones could be. Even if modern AIs have been trained on material that was not lawfully acquired, future ones could be. And I doubt that artists would then feel OK with said AIs replacing them, now that all philosophical and legal technicalities are satisfied; their true beef cuts far deeper than that.
Observe how the two arguments above go, stripped to their essence:

1. AIs have some property that is "human-like"; therefore, they must be treated exactly as humans.
2. AIs do not really have that "human-like" property; therefore, they must not be treated as humans.
The thing to note is that argument 1 (A, hence B) sets the tone; argument 2 then strives to reject its premise so that it can deny the conclusion (Not A, hence Not B), but it accepts and in fact reinforces the unspoken assumption that having human-like properties means you get to be treated as a human.
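To make the shared frame explicit, here is a rough formalization (the symbols are my own shorthand, not anything from the debate itself: $H(x)$ for "$x$ has human-like properties", $T(x)$ for "$x$ gets treated as a human", and $a$ for the AI):

$$\text{Argument 1:}\quad \forall x\,(H(x) \to T(x)),\; H(a)\ \therefore\ T(a)$$

$$\text{Argument 2:}\quad \neg H(a)\ \therefore\ \neg T(a)$$

Argument 1 is plain modus ponens. Argument 2, taken literally, denies the antecedent, which proves nothing from a one-way conditional; it only becomes valid if the premise is silently strengthened to the biconditional $\forall x\,(H(x) \leftrightarrow T(x))$, i.e. if human-likeness is taken as both sufficient *and* necessary for human treatment. Both sides are thus arguing inside the same frame.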
I suggest an alternative argument:
AIs may well have some properties that are "human-like", but as they are still clearly NOT human, they do not get to be treated as one.
This argument cuts through all the fluff to strike at the heart of the issue: is our philosophy humanist, or is it not? If human welfare, happiness and thriving are not the terminal values towards which everything else in society is oriented, what is? One does not need any justification to put humans above other entities. At some point, the buck stops; if our values focus on improving human life, nothing else needs to be said.
I feel like this argument may appear distasteful because it too closely resembles some viewpoints we've learned to be extremely wary of. It does, after all, single out a group (humans) and put it on top of our hierarchy without providing any particular rhyme or reason other than "I belong to it and so do my friends and family". The lesson learned from things like racism or sexism is to be always willing to expand our circle of concern, to look for commonalities that lie beyond accidents and circumstances of birth, and to seek some shared properties (usually cognitive ones: intelligence, self-awareness, the ability to suffer, morality) that unite us instead, looking past superficial differences. So, I think that for most people an argument that goes "I support X because I simply do, and I don't have to explain myself any further" triggers some kind of bad gut reaction. It feels wrong, close-minded, bigoted. Always we seek a lower layer, a more fundamental, simple, elegant principle to invoke in our defense of X, a sort of Grand Unified Theory of Moral Worth. This tendency to search for simpler and simpler principles risks, ironically, being turned against us in the age of AI. One should make their theory of moral worth as simple as possible, but not any simpler. Racism and sexism are bad because they diminish the dignity of other humans; I reserve the right to not give a rat's ass[1] about the rights of an AI just because its cognitive processes have some passing resemblance to my own[2].
Let's talk about life.
When it comes to the possibility of the advent of some kind of AI super-intelligence, all sorts of takes exist on the topic. Some people think it can't happen, some people think it won't be as big of a deal as it sounds, some people think it'll kill us all and that's bad, and some people think it'll kill us all and that's perfectly fine. Many of the typical arguments can be heard in this Richard Sutton video: if AI is even better at being smart and knowledgeable than us, then why shouldn't we simply bow out and let it take over, the way a parent knows when to leave room for their children? It is fear or bigotry to be prejudiced against it; after all, it might be human-like, and in fact better than humans at these very human things, these uniquely human things, the sort of thing that a lover of progress might even consider the very apex of human achievement. It's selfish not to acknowledge that AI would just be our superior and deserve our spot.
To which we should be able to puff up our chests and proudly answer:
If that is selfish, then let us be selfish. What's wrong with being selfish?
It is just the same rhetorical trap as before. Boil down the essence of humanity to some abstract trait like cognition, then show something better at cognition than us and call it our successor. But we do not really prize cognition for its own sake either. We prize things like science and knowledge because they make our lives better, or sometimes because they are just plain fun. A book full of proofs of the most wondrous theorems, floating in the vacuum of an empty universe, would be only a dumb, worthless lump of carbon. It takes someone to read the book for it to be precious.
It takes a human.
Now let me be clear - when I say "human", I actually mean a bit more than that. I mean that humans have certain people-y qualities that I enjoy and that I feel make them worth caring for, though they are hard to pin down. I think these people-y qualities are not necessarily exclusive to us; in some measure, many non-human animals do possess them, and I cherish them in those too. And if I met a race of peaceful, artful, friendly aliens, you can be assured that I would not suddenly turn into a Warhammer 40K Inquisitor whose only wish is to stomp the filthy xenos under his jackboot. I can expand my circle of concern beyond humans just fine; I just don't think the basis to do so is simply some other thing's ability to mock or even improve upon some of our cognitive faculties. I am not sure what precisely could be a good description of these people-y qualities. But I think an art generator AI that can spit out any work in any style from a simple description, as a mere prediction operation run over a database, probably doesn't possess them; and I think any super-intelligence that would be willing to do things like strip-mine the Earth to its core to build more compute for itself, in a relentless drive to optimization, definitely doesn't possess them.
If future humans are ever so satisfied with an AI they created that they become willing to entrust it with their future, then that will be that. I don't know if the moment will ever come, but it would be their choice to make. But the thing we should not do is buy into a belief system in which the worth of humans is made dependent on some bare-bones quality that humans happen to possess, and that can then be improved upon, leading to some kind of gotcha where we're either guilt-tripped into admitting that AI is superior to us and deserves to replace us, or vice versa, forced to deny its cognitive ability even in the face of overwhelming evidence. Reject the assumption. Preferring humans just because they're humans, just because we are, is certainly a form of bias.
And for once, it's a fine one.
[1] That is, a rationalist's ass.
[2] As an aside, it'd also be interesting to see what would happen if one took things to the opposite extreme instead. If companies argue that generative AIs can use copyrighted material because they're merely "learning" from it like humans, fine, treat them like humans then. Forbid owning them, or making them work for you without payment, and see where that goes - or whether it makes sense at all. If AIs are like people, then the people they're most like are slaves; and paid workers have good reason to protest the unfair competition of corporation-owned slaves.