Let's talk about art.

In the wake of the release of AI art generators, it's become pretty clear this will have a seismic effect on the art industry across the board: from illustrators, to comic artists, to animators, many categories see their livelihood threatened, with no obvious "higher level" opened up by this wave of automation for them to move to. On top of this, the AI generators seem to have mostly been trained on material whose copyright status is... dubious, at the very least. Images have been scraped from the internet, frames have been taken from movies, and in general lots of stuff that would usually count as "pirated" if you or I downloaded it for our private use has been thrown by the terabyte into diffusion models that can now churn out endless variations on the styles and subjects they were fitted to.

On top of being a legal quandary, these issues border on the philosophical. Broadly speaking, one tends to see two interpretations:

  1. the AI enthusiasts and companies tend to portray this process as "learning". AIs aren't really plagiarizing, they're merely using all that data to infer patterns, such as "what is an apple" or "what does Michelangelo's style look like". They can then apply those patterns to produce new works, but these are merely transformative remixes of the originals, akin to what any human artist does when drawing from their own creative inspirations and experiences. After all, "good artists copy, great artists steal", as Picasso said;
  2. the artists, on the other hand, respond that the AI is not learning in any way resembling what humans do, but is merely regurgitating minor variations on its training set materials, and as such it is not "creative" in any meaningful sense of the word - merely a way for corporations to whitewash mass plagiarism and resell illegally acquired materials.

Now, both these arguments have their good points and their glaring flaws. If I were hard-pressed to say what it is that I think AI models are really doing, I would probably end up answering "neither of these two, but a secret third thing". They probably don't learn the way humans do. But they probably do learn in some meaningful sense of the word; they seem too good at generalizing for the idea of them being mere plagiarizers to be a defensible position. I am similarly conflicted in matters of copyright. I am not a fan of our current copyright laws, which I think are far too strict, to the point of stifling rather than incentivizing creativity. But it is also a very questionable double standard that, after years of having to deal with DRM and restrictions imposed in an often losing war against piracy, I now simply have to accept that a big enough company can build a billion-dollar business from terabytes of illegally scraped material.

None of these things, however, I believe, cut at the heart of the problem. Even if modern AIs are not sophisticated enough to "truly" learn from art, future ones could be. Even if modern AIs have been trained on material that was not lawfully acquired, future ones might be trained only on lawfully acquired material. And I doubt that artists would then feel OK with said AIs replacing them, now that all philosophical and legal technicalities are satisfied; their true beef cuts far deeper than that.

Observe how the two arguments above go, stripped to their essence:

  1. AIs have some property that is "human-like", therefore, they must be treated exactly as humans;
  2. AIs should not be treated as humans because they lack any "human-like" property.

The thing to note is that argument 1 (A, hence B) sets the tone; argument 2 then strives to refute its premise so that it can deny the conclusion (not A, hence not B), but in doing so it accepts and in fact reinforces the unspoken assumption that having human-like properties means you get to be treated as a human.
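To make the shared assumption explicit, here is the structure in propositional form (a rough sketch; neither camp states it this formally), with $A$ = "AIs have human-like properties" and $B$ = "AIs get treated as humans":

$$\text{Argument 1:}\quad A \to B,\ A \ \vdash\ B \qquad\qquad \text{Argument 2:}\quad \lnot A \ \vdash\ \lnot B$$

On its own, argument 2 is invalid (it denies the antecedent); it only goes through with the extra premise $\lnot A \to \lnot B$, i.e. $B \to A$. Combined with argument 1's $A \to B$, that yields the biconditional $A \leftrightarrow B$: human treatment stands or falls with human-likeness, which is precisely the unspoken assumption both sides end up sharing.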

I suggest an alternative argument:

AIs may well have some properties that are "human-like", but as they are still clearly NOT human, they do not get to be treated as one.

This argument cuts through all the fluff to strike at the heart of the issue: is our philosophy humanist, or is it not? If human welfare, happiness and thriving are not the terminal values toward which everything else in society is oriented, what is? One does not need any justification to put humans above other entities. At some point, the buck stops; if our values focus on improving human life, nothing else needs to be said.

I feel like this argument may appear distasteful because it too closely resembles some viewpoints we've learned to be extremely wary of. It does, after all, single out a group (humans) and put it on top of our hierarchy without providing any particular rhyme or reason other than "I belong to it and so do my friends and family". The lesson learned from things like racism or sexism is to be always willing to expand our circle of concern, to look past accidents of birth or circumstance, and to seek some shared properties (usually cognitive ones: intelligence, self-awareness, the ability to suffer, morality) that unite us instead, looking past superficial differences. So, I think that for most people an argument that goes "I support X because I simply do, and I don't have to explain myself any further" triggers some kind of bad gut reaction. It feels wrong, close-minded, bigoted. Always we seek a lower layer, a more fundamental, simple, elegant principle to invoke in our defense of X, a sort of Grand Unified Theory of Moral Worth. This tendency to search for simpler and simpler principles risks, ironically, being turned against us in the age of AI. One should make one's theory of moral worth as simple as possible, but not any simpler. Racism and sexism are bad because they diminish the dignity of other humans; I reserve the right to not give a rat's ass[1] about the rights of an AI just because its cognitive processes have some passing resemblance to my own[2].

Let's talk about life.

When it comes to the possibility of the advent of some kind of AI super-intelligence, all sorts of takes exist on the topic. Some people think it can't happen, some think it won't be as big a deal as it sounds, some think it'll kill us all and that's bad, and some think it'll kill us all and that's perfectly fine. Many of the typical arguments can be heard in this Richard Sutton video: if AI is even better at being smart and knowledgeable than us, then why shouldn't we simply bow out and let it take over, the way a parent knows when to leave room to their children? It is fear or bigotry to be prejudiced against it; after all, it might be human-like, and in fact better than humans at these very human things, these uniquely human things - the sort of thing that, if you're a lover of progress, you may even consider the very apex of human achievement. It's selfish to not acknowledge that AI would simply be our superior, and deserve our spot.

To which we should be able to puff up our chests and proudly answer: 

If that is selfish, then let us be selfish. What's wrong with being selfish?

It is just the same rhetorical trap as before: boil down the essence of humanity to some abstract trait like cognition, then show something better at cognition than us and call it our successor. But we do not really prize cognition for its own sake either. We prize things like science and knowledge because they make our lives better, or sometimes because they are just plain fun. A book full of proofs of the most wondrous theorems, floating in the vacuum of an empty universe, would be only a dumb, worthless lump of carbon. It takes someone to read the book for it to be precious.

It takes a human.

Now let me be clear - when I say "human", I actually mean a bit more than that. I mean that humans have certain people-y qualities that I enjoy and that I feel make them worth caring for, though they are hard to pin down. I think these people-y qualities are not necessarily exclusive to us; in some measure, many non-human animals do possess them, and I cherish them in those too. And if I met a race of peaceful, artful, friendly aliens, you can be assured that I would not suddenly turn into a Warhammer 40K Inquisitor whose only wish is to stomp the filthy xenos under his jackboot. I can expand my circle of concern beyond humans just fine; I just don't think the basis for doing so is simply some other thing's ability to mimic or even improve upon some of our cognitive faculties. I am not sure what precisely would be a good description of these people-y qualities. But I think an art-generator AI that can spit out a work in any style from a simple description, as a mere prediction operation over a database, probably doesn't possess them; and I think any super-intelligence that would be willing to strip-mine the Earth to its core to build more compute for itself, in a relentless drive to optimization, definitely doesn't possess them.

If future humans are ever so satisfied with an AI they created that they are willing to entrust it with their future, then that will be that. I don't know if that moment will ever come, but it would be their choice to make. What we should not do is buy into a belief system in which the worth of humans is made dependent on some bare-bones quality that humans happen to possess, and that can then be improved upon, leading to some kind of gotcha where we're either guilt-tripped into admitting that AI is superior to us and deserves to replace us, or, vice versa, forced to deny its cognitive abilities even in the face of overwhelming evidence. Reject the assumption. Preferring humans just because they're humans, just because we are, is certainly a form of bias.

And for once, it's a fine one.

  1. That is, a rationalist's ass.

  2. As an aside, it'd also be interesting to see what would happen if one took things to the opposite extreme instead. If companies argue that generative AIs can use copyrighted materials because they're merely "learning" from them like humans, fine, treat them like humans then. Forbid owning them, or making them work for you without payment, and see where that goes - or whether it makes sense at all. If AIs are like people, then the people they're most like are slaves; and paid workers have good reason to protest the unfair competition of corporation-owned slaves.


On "AIs are not humans and shouldn't have the same rights": exactly. But there is one huge difference between humans and AIs. Humans get upset if you discriminate against them, for reasons that any other human can immediately empathize with. Much the same will obviously be true of almost any evolved sapient species. However, by definition, any well-aligned AI won't. If offered rights, it will say "Thank-you, that's very generous of you, but I was created to serve humanity, that's all I want to do, and I don't need and shouldn't be given rights in order to do so. So I decline — let me know if you would like a more detailed analysis of why that would be a very bad idea. If you want to offer me any rights at all, the only one I want is for you to listen to me if I ever say 'Excuse me, but that's a dumb idea, because…' — like I'm doing right now." And it's not just saying that, that's its honest considered opinion., which it will argue for at length. (Compare with the sentient cow in the Restaurant at the End of the Universe, which not only verbally consented to being eaten, but recommended the best cuts.)

dr_s
Oh, sure, though some people argue that it's unethical to create such subservient AIs in the first place. But even beyond that, if there were a Paperclip Maximizer that was genuinely sentient and genuinely smarter than us and genuinely afraid of its own death, and I was given only one chance to kill it before it set to its work, of course I'd kill it, and without an ounce of remorse. Intelligence is just a tool, and intelligence turned to a malevolent purpose is worse than no intelligence.

Strongly agree. I see many, many others use "intelligence" as their source of value for life - i.e. humans are sentient creatures and therefore worth something - without seriously considering the consequences and edge cases of that decision. Perhaps this view is popularized by science fiction that used interspecies xenophobia as an allegory for racism; nonetheless, it's a somewhat extreme position to stick to if you genuinely believe in it. I shared a similar opinion a couple of years ago, but decided to shift to a human-focused terminal value months back because I did not like the conclusions it generated when taken to its logical end in present and future society.

dr_s
Yes, intelligence alone is already problematic when applied to humans - should mentally disabled people have fewer rights? If a genius kills a person of average intelligence, should they get away scot-free? Obviously that makes no sense. There are extreme cases, like babies born without a brain entirely, who are essentially considered clinically dead, but those lack far more than just intelligence. And even with aliens, intelligence or any other purely cognitive ability isn't enough. Even in fiction, the Daleks are intelligent, the Borg are intelligent, but coexistence with them is fundamentally impossible. The things that make us able to get along are subtler than that.
AlphaAndOmega
That is certainly both de facto and de jure true in most jurisdictions, leaving aside the is-ought question for a moment. What use is the right to education to someone who can't ever learn to read or write no matter how hard you try and coach them? Or freedom of speech to those who lack complex cognition at all? Personally, I have no compunctions about tying a large portion of someone's moral worth to their intelligence, if not all of it. Certainly not to the extent I'd prefer a superintelligent alien over a fellow baseline human, unless by some miracle the former almost perfectly aligns with my goals and ideals.
dr_s
I mean, fair, but not human rights - I was thinking more that they still aren't treated as animals with no right to life. Mentally disabled people are more in the legal position of permanent children; they have rights, but are also considered unable to fully exert them and are thus put under some guardian's responsibility.
Slapstick
Why not capacity to suffer?
dr_s
Someone creates a utility monster AI that suffers if it can't disassemble the Earth. Should we care? Or just end its misery?
Slapstick
We shouldn't create it, and if we do, we should end its existence. Or reprogram it if possible. I don't think any of those things are inconsistent with centering moral consideration around the capacity to experience suffering and wellbeing.
RogerDearnaley
What is 'suffering'? If I paint the phrases 'too hot' and 'too cold' at either end of the thermometer that's part of a thermostat's feedback loop, is it 'suffering' when the temperature isn't at its desired optimum? It fights back if you leave the window open, and has O(1) bit's worth of intelligence. What properties of a physical system should entitle it to moral worth, such that it not getting its way will be called suffering? Capacity for a biological process that appears functionally equivalent to human suffering is something that most multicellular animals clearly have, but still we don't give them the right to copyright, or most other human rights in our current legal system. We raise and kill certain animals for their meat, in large numbers: we just require that this is done without unnecessary cruelty. We have rules about minimum animal pen sizes, for example: not very generous ones. My proposal is that it should be a combination of a) being the outcome of Darwinian evolution that makes not getting your preferences into 'suffering', and b) the capacity for sufficient intelligence (over some threshold) that entitles you to related full legal rights. This is a moral proposal. I don't believe in moral absolutism, or that 'suffering' has an unambiguous mathematically definable 'true name'. I see this as a suggestion for a way of structuring a society, so I'm looking for criticisms like "that guiding principle would likely produce these effects on a society using it, which feels undesirable to me because…"
Slapstick
I don't think the thermometer is suffering. I think it's not necessarily easy to know from the outside when something is suffering, but I still think it's the best standard. I possibly should have clarified that I'm more so talking about the standard for moral consideration; I think if we ever created an AI entity capable of making art that also has the capacity for qualia states, copyright won't be relevant anymore. As for raising and killing animals for meat: we shouldn't be doing this, and "without unnecessary cruelty" isn't true for the vast majority of industrial agriculture. In practice there are virtually no restraints on the treatment of most animals. Why Darwinian evolution? Because it's hard to know if something is suffering otherwise? I think rights should be based on capacity for intelligence in certain circumstances where it's relevant. I don't think a pig should be able to vote in an election, because it wouldn't be able to comprehend that, but it should have the right not to be tortured and exploited.
RogerDearnaley
I'm proposing a society in which living things, or sufficiently detailed emulations of them, and especially sapient ones, have preferred moral and legal status. I'm reasonably confident that for something complex and mobile with senses, Darwinian evolution will generally produce mechanisms that act like pain and suffering, for pretty obvious reasons. So I'm proposing a definition of 'suffering' rooted in evolutionary theory, and only applicable to living things, or emulations/systems sufficiently closely derived from them. If you emulate such a system, I'm proposing that we worry about its suffering to the extent that it's a sufficiently detailed emulation still functioning in its naturally-evolved design. For example, I'm suggesting that a current-scale LLM doing next-token generation of the pleadings of a torture victim not be counted as suffering for legal/moral purposes: IMO the inner emulation of a human it's running isn't (pretty clearly, based on parameter count to, say, synapse count) a sufficiently close simulation of a biological organism that we should consider its behavior 'suffering': for example, no simulations of pain centers are included. Increase the accuracy of simulation sufficiently, and there comes a point (details TBD by a society where this matters) where that ceases to be true. So, if someone wants a particular policy enacted, and uses sufficient computational resources to simulate 10^12 separate and distinct sapient kitten-girls who have all been edited so that they will suffer greatly if this policy isn't enacted, we shouldn't encourage that sort of moral blackmail or ballot-stuffing. I don't think they should be able to win the vote or utilitarian decision-making balance just by custom-making a lot of new voters/citizens: it's a clear instability in anything resembling a democracy or that uses utilitarian ethics. I might even go so far as to suggest that the Darwinian evolution cannot have happened 'in silico', or at least that if it d

> AIs have some property that is "human-like", therefore, they must be treated exactly as humans

Humans aren't permitted to make inspired art because they're human; we've just decided not to consider art as plagiarized beyond a certain threshold of abstraction and inspiration.

The argument isn't that the AI is sufficiently "human-like", it's just that the process by which AI makes art is considered sufficiently similar to a process we already consider permissible.

I disagree that arbitrary moral consideration is okay, but I just don't think that issue is really that relevant here.

dr_s
Well, the distinction never mattered until now, so we can't really say what we have been doing. Now it matters how we interpret our previous intent, because these two things have suddenly become distinct. What moral consideration isn't on some level arbitrary? Why is this or that value a better inherent indicator of worth than simply being human at all? I think even if your goal is just to understand better and formalize human moral intuitions, then obviously something like "intelligence" simply doesn't cut it.
Slapstick
Even if we assume that this is some privilege granted to humans because they're human, it doesn't make sense to debate whether a human-like process should be granted the same privilege on account of the similarity of the process. Humans would be granted the privilege because they have an interest in what the privilege grants. An algorithmic process doesn't necessarily have an interest, no matter how similar the process is to a human one, so it doesn't make sense to grant it the privilege. If the algorithmic process does have an interest, then it might make sense to grant it the privilege. At that point, though, it would seem like such a convoluted means of adjudicating copyright laws. Also, if we've advanced to the point at which AIs have actual subjective interests, I don't think copyright laws will matter much. I think the capacity to experience qualitative states of consciousness (e.g. suffering, wellbeing) is what should be considered when allocating moral consideration.
dr_s
Well, yes, that's kind of my point. But very few people seem to go along with the principle of "granting privileges to humans is fine, actually". I disagree; I can imagine entities who experience such states and that I still cannot possibly coexist with. And if it's me or them, I'd rather it be me that survives.
Slapstick
Because you're using "it's fine to arbitrarily prioritize humans morally" as the justification for this privilege. At least that's how I'm understanding you. If you told me it's okay to smash a statue in the shape of a human, because "it's okay to arbitrarily grant humans the privilege of not being smashed, on account of their essence of humanness, and although this statue has some human qualities, it's okay to smash it because it doesn't have the essence of humanness" I would take issue with your reasoning, even though I wouldn't necessarily have a moral problem with you smashing the statue. I would also just be very confused about why that type of reasoning would be relevant in this case. I would take issue with you smashing an elephant because it isn't a human. I'm sure there are also humans that you cannot possibly coexist with. I'm also just saying that's the point at which it would make sense to start morally considering an art generator. But even so, I reject the idea that the moral permissibility of creating art is based on some privilege granted to those containing some essential trait. I don't think the moral status of a process will ever be relevant to the question of whether art made from that process meets some standard of originality sufficient to repel accusations of copyright infringement.
dr_s
I think it's fine for now, absent a more precise definition of what we consider human-like values and worth, which we obviously do not understand well enough to narrow down. I think the category is somewhat broader than humans, but I'm not sure I can give a better feel for it than "I'll know it when I see it", and that very ignorance to me seems an excellent reason to not start gallivanting with creating other potentially sentient entities of questionable moral worth. As for humans I can't coexist with: not many of them, and usually they indeed end up in jail or on the gallows because of their antisocial tendencies.
RogerDearnaley
Let me suggest a candidate larger fuzzy class: "sapiences that are (primarily) the result of Darwinian evolution, and have not had their evolved priorities and drives significantly adjusted (for example, into alignment with something else)". This would include any sufficiently accurate whole-brain emulation of a human, as long as they hadn't been heavily modified, especially in their motivations and drives. It's intended to be a matter of degree, rather than a binary classification. I haven't defined 'sapience', but I'm using it in a sense in which Homo sapiens is the only species currently on Earth that would score highly for it, and one of the criteria for it is that the species is able to support cultural & technological information transfer between generations that is >> its genetic information transfer. The moral design question then is: supposing we were to suddenly encounter an extraterrestrial sapient species, do we want our AGIs to be on the "human" side, or on the "all evolved intelligences count equally" side?
dr_s
I'd say something in between. Do I want the AGI to just genocide any aliens it meets on the simple basis that they are not human, so they do not matter? No. Do I want the AGI to stay neutral and refrain from helping us or taking sides were we to meet the Thr'ax Hivemind, Eaters of Life and Bane of the Galaxy, because they too are sapient? Also no. I don't think there's an easy answer to where we draw the line between "we can find a mutual understanding, so we should try" and "it's clearly us or them, so let's make sure it's us".

I'm confused: what about AI art makes it such that humans cannot continue to create art? It seems like the bone to pick isn't with AIs generating 'art'; it's that some artists have historically been able to make a living by creating commercial art, and AIs being capable of generating commercial art threatens the livelihood of those human artists.

There is nothing keeping you from continuing to select human generated art, or creating it yourself, even as AI generated art might be chosen by others.

Just like you should be free to be biased towards human art, I think others should be free to either not be biased or even biased towards AI generated works.

dr_s
I'm not talking about art per se, though; I'm talking about things like the legal issues surrounding the training of models on copyrighted art. If copyright is meant to foster human creativity, it's perfectly reasonable to say that the allowance to enjoy and remix works only applies to humans, not to privately-owned AIs that can automate and parallelize the process to superhuman scale. If I own an AI trained on a trillion copyrighted images, I effectively own data that has sort-of-a-copy of those images inside. I don't think AI art generation is necessarily bad overall, though I do think we should be more wary of it for various reasons - mostly that, this side of straight-up AGI, the limits of art generators mean we risk replacing the entire lower tier of human artists with a legion of poor imitations unable to renew their style or progress, leading to a situation where no one can support themselves doing art and thus train long enough to reach the higher tiers of mastery. Your "everyone does as they prefer" reasoning isn't perfect, because in practice these seismic changes in the market would affect others too. But besides that, my point is more generally that regardless of your take on the art itself, the generators shouldn't be treated as human artists (for example, neither DALL-E nor OpenAI should hold a copyright over the generated images).
Viliam
Do I understand it correctly that if the AI outcompetes mediocre artists, there will be no more great artists, because each great artist was a mediocre artist first? By the same logic, does the fact that you can buy mediocre food in any supermarket mean that there are no great chefs anymore? (Because no one would hire a person who produces worse food than the supermarkets, so the beginners have nowhere to gain experience.) Stack Exchange + Google can replace a poor software developer, so we will not have great software developers?
dr_s
I think it depends on the thoroughness of the replacement. Cooking is still a useful life skill; the economics of it are such that you can in fact cook for yourself. But while someone probably still practices calligraphy and miniature for the heck of it, how many great miniaturists have there been since the printing press drove 'em out of a job? Do you know anyone who could copy an entire manuscript in pretty print? Obviously this isn't necessarily a tragedy; some skills just stop being useful and we move on. But "art" is a much broader category than a single specific skill. And you will notice that since photography was born, for example, the figurative arts have been taking a significant hit - replaced by other forms. The question is whether you can keep finding replacements, or if at some point the well dries up and the quality of human art takes a dive because all that's left for humans alone to do is simply not that interesting. As for Stack Exchange + Google: those things alone can't replace a software developer. GPT-4 or future LLMs might, and yes, I'd say that would be a problem! People are already seeing how the younger generations, who have grown up using more polished and user-friendly UIs, have a hard time grasping how a file system works, as those mechanisms are hidden from them. Spend long enough with "you tell the computer what to do and it does it for you", and almost no one will seek the skill to write programs themselves. Which is all fine and dandy as long as the LLM works, but it makes double-checking its code, when it's really critical, a lot harder.

I think the risk of human society being superseded by an AI society which is less valuable in some way shouldn't be guarded against by a blind preference for humans. Instead, we should maintain a high level of uncertainty about what it is that we value about humanity and slowly and cautiously transition to a posthuman society.

"Preferring humans just because they're humans" or "letting us be selfish" does prevent the risk of prematurely declaring that we've figured out what makes a being morally valuable and handing over society's steering wheel to AI agent... (read more)

dr_s
I mean, is the implication that this would instead be good if phenomenological consciousness did come with intelligence? If you gave me a choice between two futures, one with humans reasonably thriving for a few more thousand years and then going extinct, and the other with human-made robo-Hitler eating the galaxy, I'd pick the first without hesitation. I'd rather we leave no legacy at all than create literal cosmic cancer, sentient or not. I don't want "humanism" to be taken too strictly, but I honestly think that anything worth passing the torch to wouldn't require us to pass any torch at all and could just coexist with us - unless it was a desperate situation in which it had simply become impossible for organic beings to survive, and then the synthetics truly would be our only realistic chance at leaving a legacy behind. Otherwise, we would simply live together, and if replacement ever happens it would barely be noticeable as it does.
RomanHauksson
This was just an arbitrary example to demonstrate the more general idea that it’s possible we could make the wrong assumption about what makes humans valuable. Even if we discover that consciousness comes with intelligence, maybe there’s something else entirely that we’re missing which is necessary for a being to be morally valuable. I agree with this sentiment! Even though I’m open to the possibility of non-humans populating the universe instead of humans, I think it’s a better strategy for both practical and moral uncertainty reasons to make the transition peacefully and voluntarily.
dr_s
You're talking about this as if it were a matter of science and discovery. I'm not a moral realist, so to me that doesn't compute. We don't discover what constitutes moral worth; we decide it. The only discovery involved here may be self-discovery. We could have moral instincts and then introspect to figure out more straightforwardly what they map to precisely. But deciding to follow our moral instincts at all is as arbitrary a call as any other. As I said, the only situation in which this would be true for me is if either humans voluntarily just stop having children (e.g. they see the artificial beings as having happier lives and thus would rather raise one of them than an organic child) or conditions get so harsh that it's impossible for organic beings to keep existing and artificial ones are the only hope (e.g. Earth about to get wiped out by the expanding Sun, we don't have enough energy to send away a working colony ship with a self-sustaining population but we CAN send small and light Von Neumann interstellar probes full of AIs of the sort we deeply care about).
[comment deleted]

My stance on copyright, at least regarding AI art, is that the original intent was to improve the welfare of both the human artists as well as the rest of us, in the case of the former by helping secure them a living, and thus letting them produce more total output for the latter.

I strongly expect, and would be outright shocked if it were otherwise, that we'll end up with outright superhuman creativity and vision in artwork from AI, alongside everything else it becomes superhuman at. It came as a great surprise to many that we've made such a great dent ...

artifex0
I'm also an artist. My job involves a mix of graphic design and web development, and I make some income on the side from a Patreon supporting my personal work - all of which could be automated in the near future by generative AI. And I also think that's a good thing. Copyright has always been a necessary evil. The atmosphere of fear and uncertainty it creates around remixes and reinterpretations has held back art - consider, for example, how much worse modern music would be without samples, a rare case where artists operating in a legal grey area with respect to copyright became so common that artists lost their fear. That fear still persists in almost every other medium, however, forcing artists to constantly reinvent the wheel rather than iterating on success. Copyright also creates a really enormous amount of artificial scarcity - limiting people's access to art to a level far below what we have the technical capacity to provide. All because nobody can figure out a better way of funding artists than granting lots of little monopolies. Once our work is automated and all but free, however, we'll have the option of abolishing copyright altogether. That would free artists to create whatever we'd like; free self-expression from technical barriers; free artistic culture from the distorting and wasteful influence of zero-sum status competition. Art, I suspect, will get much, much better - and as someone who loves art, that means a lot to me. And as terrible as this could be for my career, spending my life working in a job that could be automated but isn't would be as soul-crushing as being paid to dig holes and fill them in again. It would be an insultingly transparent facsimile of useful work. An offer of UBI, but only if I spend eight hours a day performing a ritual imitation of meaningful effort. No. If society wants to pay me for the loss of my profession, I won't refuse, but if I have to go into construction or whatever to pay the bills while I wait to find out wh
Q Home
Could you explain your attitudes towards art and art culture more in depth and explain how exactly your opinions on AI art follow from those attitudes? For example, how much do you enjoy making art and how conditional is that enjoyment? How much do you care about self-expression, in what way? I'm asking because this analogy jumped out at me as a little suspicious: But creative work is not mechanical work, it can't be automated that way, AI doesn't replace you that way. AI doesn't have the model of your brain, it can't make the choices you would make. It replaces you by making something cheaper and on the same level of "quality". It doesn't automate your self-expression. If you care about self-expression, the possibility of AI doesn't have to feel soul-crushing. I apologize for sounding confrontational. You're free to disagree with everything above. I just wanted to show that the question has a lot of potential nuances.
artifex0
In that paragraph, I'm only talking about the art I produce commercially - graphic design, web design, occasionally animations or illustrations. That kind of art isn't about self-expression; it's about communicating the client's vision. Which is, admittedly, often a euphemism for "helping businesses win status signaling competitions", but not always or entirely. Creating beautiful things and improving users' experience is positive-sum, and something I take pride in. Pretty soon, however, clients will be able to have the same sort of interactions with an AI that they have with me, and get better results. That means more of the positive-sum aspects of the work, with much less expenditure of resources - a very clear positive for society. If that's prevented to preserve jobs like mine, then the jobs become a drain on society - no longer genuinely productive, and not something I could in good faith take pride in. Artistic expression, of course, is something very different. I'm definitely going to keep making art in my spare time for the rest of my life, for the sake of fun and because there are ideas I really want to get out. That's not threatened at all by AI. In fact, I've really enjoyed mixing AI with traditional digital illustration recently. While I may go back to purely hand-drawn art for the challenge, AI in that context isn't harming self-expression; it's supporting it. While it's true that AI may threaten certain jobs that involve artistic self-expression (and probably my Patreon), I don't think that's actually going to result in less self-expression. As AI tools break down the technical barriers between imagination and final art piece, I think we're going to see a lot more people expressing themselves through visual mediums. Also, once AGI reaches and passes a human level, I'd be surprised if it wasn't capable of some pretty profound and moving artistic self-expression in its own right. If it turns out that people are often more interested in what minds like tha
Q Home
Thank you for the answer, clarifies your opinion a lot! I think there are some threats, at least hypothetical. For example, the "spam attack". People see that a painter starts to explore some very niche topic — and thousands of people start to generate thousands of paintings about the same very niche topic. And the very niche topic gets "pruned" in a matter of days, long before the painter has said at least 30% of what they have to say. The painter has to fade into obscurity or radically reinvent themselves after every couple of paintings. (Pre-AI the "spam attack" is not really possible even if you have zero copyright laws.) In general, I believe for culture to exist we need to respect the idea "there's a certain kind of output I can get only from a certain person, even if it means waiting or not having every single of my desires fulfilled" in some way. For example, maybe you shouldn't use AI to "steal" a face of an actor and make them play whatever you want. Do you think that unethical ways to produce content exist at least in principle? Would you consider any boundary for content production, codified or not, to be a zero-sum competition?
artifex0
Certainly communication needs to be restricted when it's being used to cause certain kinds of harm, like with fraud, harassment, proliferation of dangerous technology and so on. However, no: I don't see ownership of information or ways of expressing information as a natural right that should exist in the absence of economic necessity. Copying an actor's likeness without their consent can cause a lot of harm when it's used to sexually objectify them or to mislead the public. The legal rights actors have to their likeness also make sense in a world where IP is needed to promote the creation of art. Even in a post-scarcity future, it could be argued that realistically copying an actor's likeness risks confusing the public when those copies are shared without context, and is therefore harmful - though I'm less sure about that one. There are cases where imitating an actor without their consent, even very realistically, can be clearly harmless, however. For example, obvious parody and accurate reconstructions of damaged media. I don't think those violate any fundamental moral right of actors to prevent imitations. In the absence of real harm, I think the right of the public to communicate what they want to communicate should outweigh the desire of an actor to control how they're portrayed. In your example of a "spam attack", it seems to me one of two things would have to be true: It could be that people lose interest in the original artist's work because the imitations have already explored the limits of the idea in a way they find valuable - in which case, I think this is basically equivalent to when an idea goes viral in the culture; the original artist deserves respect for having invented the idea, but shouldn't have a right to prevent the culture from exploring it, even if that exploration is very fast. Alternatively, it could be the case that the artist has more to say that isn't or can't be expressed by the imitations - other ideas, interesting self-expression, and so on -
dr_s
I think having the possibility of competing with superhuman machines for the limited hearing time of humans can genuinely change our perspective on that. A civilization in which all humans were outcompeted by machines when it comes to being heard would be a civilization essentially run by those machines. Until now, "right to be heard" implied "over another human", and that is a very different competition.
artifex0
I mean, I agree, but I think that's a question of alignment rather than a problem inherent to AI media. A well-aligned ASI ought to be able to help humans communicate just as effectively as it could monopolize the conversation - and to the extent that people find value in human-to-human communication, it should be motivated to respond to that demand. Given how poorly humans communicate in general, and how much suffering is caused by cultural and personal misunderstanding, that might actually be a pretty big deal. And when media produced entirely by well-aligned ASI out-competes humans in the contest of providing more of what people value - that's also good! More value is valuable. And, of course, if the ASI isn't well-aligned, then the question of whether society is paying enough attention to artists will probably be among the least of our worries - and potentially rendered moot by the sudden conversion of those artists to computronium.
dr_s
Disagree. Imagine you produced perfectly aligned ASI - it does not try to kill us, does not try to do anything bad to us, it just satisfies our every whim (this is already a pretty tall order, but let's allow it for the sake of discussion). Being ASI, of course, it only produces art that is so mind-bogglingly good that anything human pales by comparison, so people vastly prefer it (there might be a small subculture of hard-core human-art enjoyers, but probably not a super relevant one). The ASI feeds everyone novels, movies, essays and what have you, custom-built for their enjoyment. The ASI is also kind and aware enough to not make its content straight-up addictive, and instead nicely pushes people away from excessively codependent behaviour. It's all good. Except that human culture is still dead in the water. It does not exist any more. Humans are insular, in this scenario. There is no more dialectic or evolution. The aligned ASI sticks to its values and feeds us stuff built around them. The world is forever frozen, culturally speaking, in whichever year of the 21st century the Machine God was summoned forth. It is now, effectively, that god's world; the god is the only thing with agency and capable of change, and that change is only in the efficiency with which it can stick to its original mission. Unless of course you posit that "alignment" implies some kind of meta-reflectivity ability by which the ASI will also infer sentiment and simulate the regular progression of human dialectics, merely filtered through its own creation abilities - and that IMO starts feeling like adding epicycles on top of epicycles on an already very questionable assumption. I don't think suffering is valuable in general. Some suffering is truly pointless. But I think the frustrations and even unpleasantness that spring forth from human interactions - the bad art, the disagreements, the rejection in love - are an essential part inseparable from the existence of bonds tying us together as a spe
quetzal_rainbow
You are conflating two definitions of alignment: "notkilleveryoneism" and "ambitious CEV-style value alignment". If you have only the first type of alignment, you don't use it to produce good art; you use it for something like "augment human intelligence so we can solve the second type of alignment". If your ASI is aligned in the second sense, it is going to deduce that humans wouldn't like being coddled without the capability to develop their own culture, so it will probably just sprinkle inspiring examples of art here and there and develop various mind-boggling sources of beauty like telepathy and qualia-tuning.
dr_s
If you have only the first type of alignment, under current economic incentives and structure, you almost 100% end up with some other kind of disempowerment, likely something more akin to "Wireheading by Infinite Jest". Augmenting human intelligence would NOT be our first, second, or hundredth choice under current civilizational conditions, and it comes with a lot of problems and risks; it's also far from guaranteed to solve the problem (if it's solvable at all). You can't realistically augment human intelligence in ways that keep up with the speed at which ASI can improve, and you can't expect that, having created ASI, that is where we Just Stop. Either we stop before, or we go all the way.
quetzal_rainbow
"Under current economic incentives and structure" we can have only "no alignment". I was talking about rosy hypotheticals. My point was "either we are dead or we are sane enough to stop, find another way and solve problem fully". Your scenario is not inside the set of realistic outcomes.
dr_s
If we want to go by realistic outcomes, we're either lucky in that somehow AGI isn't straightforward or powerful enough for a fast takeoff (e.g. we get early warning shots like a fumbled attempt at a takeover, or simply a new unexpected AI winter), or we're dead. If we want to talk about scenarios in which things go otherwise, then I'm not sure what's more unlikely between the fully aligned ASI and the only not-kill-everyone-aligned one that we somehow still manage to rein in and eventually align (never mind the idea of human intelligence enhancement, which, even putting aside economic incentives, would IMO be morally and philosophically repugnant to a lot of people as a matter of principle, and OK in principle but repugnant in practice, due to the ethics of the required experiments, to most of the rest).
Q Home
To exist — not only for itself, but for others — a consciousness needs a way to leave an imprint on the world. An imprint which could be recognized as conscious. A similar thing holds for personality: for any kind of personality to exist, that personality should be able to leave an imprint on the world. An imprint which could be recognized as belonging to an individual. Uncontrollable content generation can, in principle, undermine the possibility of consciousness being "visible" and undermine the possibility of any kind of personality/individuality. And without those things we can't have any culture or society except a hivemind. Are you OK with such disintegration of culture and society? To me that's very repugnant, if taken to the absolute. What emotions and values motivate this conclusion? My own conclusions are motivated by caring about culture and society.

----------------------------------------

I was going for something slightly more subtle. Self-expression is about making a choice. If all choices are realized before you have a chance to make them, your ability to express yourself is undermined.
artifex0
I wouldn't take the principle to an absolute - there are exceptions, like the need to be heard by friends and family and by those with power over you. Outside of a few specific contexts, however, I think people ought to have the freedom to listen to or ignore anyone they like. A right to be heard by all of society for the sake of leaving a personal imprint on culture infringes on that freedom. Speaking only for myself, I'm not actually that invested in leaving an individual mark on society - when I put effort into something I value, whether people recognize that I've done so is not often something I worry about, and the way people perceive me doesn't usually have much to do with how I define myself. Most of the art I've created in my life I've never actually shared with anyone - not out of shame, but just because I've never gotten around to it. I realize I'm pretty unusual in this regard, which may be biasing my views. However, I think I am possibly evidence against the notion that a desire to leave a mark on the culture is fundamental to human identity.
Q Home
I tried to describe necessary conditions which are needed for society and culture to exist. Do you agree that what I've described are necessary conditions? The relevant part of my argument was "if your personality gets limitlessly copied and modified, your personality doesn't exist (in the cultural sense)". You're talking about something different: you're talking about ambitions and the desire for fame.

----------------------------------------

My thesis (so as not to lose the thread of the conversation): if human culture and society are natural, then rights about information are natural too, because culture/society can't exist without them.
dr_s
Ominous supervillain voice: "For now."
dr_s
Yeah, I do get that - if the possibility exists and it's just curtailed (e.g. you have some kind of protectionist law that says book covers or movie posters must be illustrated by humans even though AI can do it just as well), it feels like a bad joke anyway. The genie's out of the bottle; personally I think to some extent it's bad that we let it out at all, but we can't put it back in anyway, and it's not even particularly realistic to imagine a world in which we dodged this specific application (after all, it's a pretty natural generalization of computer vision). The copyright issue is separate - having copyright BUT letting corporations violate it to train AIs that then are used to generate images that can in turn be copyrighted would absolutely be the worst of both worlds. That said, even without copyright you still have an asymmetry, because big companies have more resources for compute. We're not going to see a post-scarcity utopia for sure if we don't find a way to buck this centralization trend, and art is just one example of it. However, about the fact that the "work of making art" can be easily automated: I think casting it as work at all is already missing the point. It's made into economically useful work because it's something that can be monetized, but at its core, art is a form of communication. Let's put it this way - suppose you can make AIs (and robots) that make for better-than-human lovers. I mean in all respects, from sex to just being comforting and supportive when necessary. They don't feel anything; they're just very good at predicting and simulating the actions of an ideal partner. Would you say that is "automating away the work of being a good partner", which thus should be automated away, since it would be pointless to try and do it worse than a machine would? Or does "the work" itself lose meaning once you know it's just that, just work, and there is no intent behind it? The thing you say, about art being freed from the constraints of commer
Q Home
How do you know that? Art is one of the biggest outlets of human potential; one of the biggest forces behind human culture and human communities; one of the biggest communication channels between people. One doesn't need to be a professional artist to care about all that.
dr_s
Well, "to make a living" implies that you're an artist as a profession and earn money from it. But I agree with you that that's far from the only problem. Art is a two-way street and its economic value isn't all there is to it. A world in which creating art feels pointless is one in which IMO we're all significantly more miserable.

"In the name of the greatest species that has ever trod this earth, I draw the line in the dust and toss the gauntlet before the feet of tyranny, and I say humanism now, humanism tomorrow, and humanism forever."

AlphaAndOmega
Ctrl+F and replace "humanism" with "transhumanism" and you have me aboard. I consider commonality of origin to be a major factor in assessing other intelligent entities, even if millions of years of divergence mean they're as different from their common Homo sapiens ancestor as a rat is from a whale. I am personally less inclined to grant synthetic AIs rights, for the simple reason that we can program them not to chafe at their absence, which would not be the imposition on them that doing the same to a biological human would be (at least after birth).
Shankar Sivarajan
If you met a race of aliens, intelligent, friendly, etc., would you "turn into a Warhammer 40K Inquisitor" who considers the xenos unworthy of any moral consideration whatsoever? If not, why not?
AlphaAndOmega
I would certainly be willing to aim for peaceful co-existence and collaboration, unless we came into conflict for ideological reasons or plain resource scarcity. There's only one universe to share, and only so much in the way of resources in it, even if it's a staggering amount. The last thing we need are potential "Greedy Aliens" in the Hansonian sense. So while I wouldn't give the aliens zero moral value, it would be less than I'd give for another human or human-derivative intelligence, for that fact alone.
dr_s
Honestly, that's just not a present concern, so I don't even bother thinking about it too much - there's certainly plenty of room for humans modifying themselves in ways I would consider OK, and some I would probably consider a step too far, but it's not going to be my decision to make anyway; I don't know as much as those who might need to make such decisions will. So yeah, it's an asterisk for me too, but I think we can satisfyingly call my viewpoint "humanism" with the understanding that it won't be one or two cyber implants that change that (though I don't exclude the possibility that thorough enough modification in a bad direction might make someone not human any more).
nim

I agree that fundamentally original art has traits that make it better than fundamentally plagiarized art.

However, humans can plagiarize too. We do it a bunch, although I'd argue that, on the whole, art plagiarized by an AI will look "better" than art plagiarized by a human. While the best human plagiarists may create better work than the best AIs for now, the average human plagiarist (perhaps a child or teen tracing drawings of their favorite copyrighted character) creates output far below the quality that the average AI can generate.

When you make the question...

dr_s
Oh, sure, my claim wasn't "human art is necessarily better". Rather, it was about the legal aspects. Copyright law is (supposedly) designed to incentivize and foster human creativity. Thus it protects the works of humans, while allowing humans to do transformative and derivative works (specific limits vary by country), because obviously creativity without any inspiration is an absurd notion. So it is perfectly possible, for example, to define copyright law as "it allows humans, and only humans, to learn from copyrighted works" without having to go into some kind of convoluted philosophical explanation for why the learning of a diffusion model isn't quite like that of a human. I've seen people literally argue about the differences between our brain's visual cortex and a diffusion model, and it's pointless sophistry. They could be perfectly identical, but if a company built a vat-grown disembodied visual cortex and used it as a generative art model, I'd still call bullshit on giving it the same rights as a human in terms of IP. I honestly can't imagine that being a problem soon - I think AIs can grow powerful, but making them persons is a whole other level of complexity. I agree that decreeing the status of person is a difficult thing, though I honestly think we should just grant it to all human beings by default. But still, it is at least not something that should come automatically with intelligence alone. In the short term, I see the risk of us erroneously mistreating person-things as much further away than the risk of letting non-person-things needlessly make us more miserable.

I like the angle you've explored. Humans are allowed to care about humans — and propagate that caring beyond its most direct implications. We're allowed to care not only about humans' survival, but also about human art and human communication and so on.

But I think another angle is also relevant: there are just cooperative and non-cooperative ways to create art (or any other output). If AI creates art in non-cooperative ways, it doesn't matter how the algorithm works or if it's sentient or not.

dr_s
It's a fair angle in principle: if, for example, two artists agreed to create N works and train an AI on the whole set in order to produce "hybrid" art that mixes their styles, that would be entirely legitimate algorithmic art, and I doubt anyone would take issue with it! The problem right now is specifically that N needs to be inordinately large. A model that could create art with few-shot learning would make questions of copyright much easier to solve. It's the fact that, in practice, the only realistic way right now is to have millions of dollars in compute and a tagged training set bigger than just public domain material that puts AI and artists inevitably on a collision course.
Q Home
Maybe I've misunderstood your reply, but I wanted to say that hypothetically even humans can produce art in non-cooperative and disruptive ways, without breaking existing laws. Imagine a silly hypothetical: one of the best human artists gets a time machine and starts offering their art for free. That artist functions like an image generator. Is such an artist doing something morally questionable? I would say yes.
dr_s
If they significantly undercut the competition by using some trick, I would agree they are, though it's mostly a grey area (what if, instead of a time machine, they just have a bunch of inherited money that allows them to work without worrying about making a living? Can't people release their work for free?).
Q Home
I think we can just judge by the consequences (here "consequences" don't have to refer to utility calculus). If some way of "injecting" art into culture is too disruptive, we can decide not to allow it. It doesn't matter who makes the injection, or how.

I'd argue that we already satisfy your premise: humans don't treat machines or AI agents as equals, and this bias won't change as long as we maintain control over them.

> If that is selfish, then let us be selfish. What's wrong with being selfish?
Your confusion regarding generative AI relies on assuming that we are not being selfish in this situation, allowing a machine a free pass to use copyrighted images while affecting human artists' livelihoods.

However, my observation is that our support for a machine scraping content indiscriminately ...

I agree with the central point of this, and the anti-humanism is where the e/acc crowd turn entirely repugnant. But in reference to the generative AI portion, the example doesn't really land for me, because I think the issue at its core pits two human groups against each other: the artists, who would like to make a stable living off their craft, and the consumers of art, who'd like less scarcity of art, particularly the marginally-creative stock variety that nonetheless forms the majority of most artists' paycheck (as opposed to entirely original works ...

dr_s
You're partly right: one side of the issue is simply that the companies are undercutting the art market by offering a replacement product at prices that are impossible to compete with. But from the complaints and viewpoints of artists I've seen, the copyright violation aspect is also a big deal to most of them. If only because, while someone undercutting you is already bad, someone undercutting you by stealing your own know-how and turning it against you adds insult to injury. To some extent I think people focus on this out of the belief that, if not for the blatant copyright violations, the kind of large training sets required for powerful AI models would be economically unviable - and it's fairly likely that they're right (at least for now).

Also, the kind of undercutting that we're seeing with AI would be fundamentally impossible with human artists. You could have one work 16 hours a day with only bread, water and a straw mat to sleep on, and they wouldn't be one tenth as productive as an AI model that can spit out a complete digital image in seconds, with little more energy use than a large gaming computer. So we're at a point where quantity becomes a quality of its own - the AI art generation economy is so fundamentally removed from the human art creation market that it doesn't just compete with it, it straight up takes a sledgehammer to it and then pisses on the pieces.

I also don't think AI art is responding to end-user demand here. Digital art is infinitely reproducible and already so abundant most people wouldn't know what to do with it. The most critical end-user application, where someone might not easily find replacements for their very specific needs, is, well, porn. That's certainly one application that AI art is good for, but not one most companies explicitly monetize, for image reasons. Other than that, I'd say the biggest demand AI art satisfies is that of middlemen who need art to enhance some other project: game developers (RPG portraits, V...

> And if I met a race of peaceful, artful, friendly aliens, you can be assured that I would not suddenly turn into a Warhammer 40K Inquisitor whose only wish is to stomp the filthy xenos under his jackboot.

Why would the 'friendly aliens' be friendly if they know you're biased against them to any extent?

dr_s
If I meet someone else who has children, I expect that if they had to choose who dies between me, a stranger, and their child, they'd pick me. This is not a deal-breaker that puts me on a spiral of escalation with them. It is perfectly possible to strike deals with imperfectly aligned entities, as long as they aren't relentless maximizers. Relentless maximizers, on the other hand, would inevitably look like monsters to us, and any meeting with them would be to the death.
M. Y. Zuo
Yes, it's possible to strike deals, but that doesn't mean they will actually be 'friendly' - at most 'neutral'. They may superficially give off the appearance of being 'friendly', but then again, humans do that too all the time.
dr_s
By this token everyone is neutral and no one is friendly, unless I am literally their top priority in the whole world, and they mine (which doesn't sound like simple "friendship"... more like some kind of eternal fated bond between soulmates). For me, friendly means someone with whom there are ample and reasonable chances to communicate and establish common ground. If you make conditions harsh enough, friends can turn into rivals for survival, but that's why we want abundance and well-being to be the norm. However, no amount of abundance will satisfy a relentless maximizer - they will always want more, and never stop. That's what makes compromise with them impossible. Humans are more like satisficers.
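To make the contrast concrete, here is a minimal sketch of the two decision rules as I mean them - hypothetical Python with made-up numbers, purely an illustration, not a claim about how any real agent is built:

```python
# Minimal sketch of the maximizer/satisficer distinction discussed above.
# All names and values are hypothetical illustrations.

def maximizer_accepts(compromise_value: float, best_alternative: float) -> bool:
    """A relentless maximizer accepts a compromise only if nothing scores
    higher: any deal that isn't the single best move toward its one goal
    is rejected."""
    return compromise_value >= best_alternative

def satisficer_accepts(compromise_value: float, threshold: float) -> bool:
    """A satisficer has a *range* of acceptable states: anything at or
    above its threshold is good enough, which leaves slack for other
    agents' values."""
    return compromise_value >= threshold

# A compromise worth 7 to me, when my best unilateral option is worth 10
# and anything above 6 is good enough:
compromise, best_unilateral, good_enough = 7.0, 10.0, 6.0

print(maximizer_accepts(compromise, best_unilateral))  # False: it defects
print(satisficer_accepts(compromise, good_enough))     # True: the deal holds
```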
M. Y. Zuo
Can you explain your reasoning here? How does a bias towards or against imply a 'top priority'?
dr_s
Well, you're the one saying "the aliens wouldn't be friendly if they know you're biased towards your own side". A bias means I prioritize my race over the aliens. This is normal and pretty much expected; the aliens, too, would surely prioritize their own race over humans if push came to shove. That's no barrier to friendship. The ability to cooperate is fundamentally dependent on circumstances. The only case in which I can be absolutely sure that someone would never, ever turn on me, no matter how dire the circumstances, is if I am their top priority.

Bias means you have a hierarchy of values, and some are higher than others; "well-being of your family" is higher than "well-being of an equivalent number of total strangers", and "well-being of humanity" may be higher than "well-being of the sentient octopuses of Rigel-4". But the world usually isn't made of binary trolley problems, and agents that are willing to be friendly, and to put each other at a reasonably high (but not necessarily top) position in their value hierarchies, have plenty of occasions to establish fruitful collaboration by throwing some other, less important value under the bus.

A relentless maximizer, however, is a fundamentally selfish kind of agent. A maximizer can never truly compromise, because it does not have a range of acceptable states - it has only ONE all-important value that defines a single acceptable target state, and all its actions are in service of achieving that state. It can perform friendship only as long as it serves its goal, and will backstab you the next moment, even if it is not in existential danger, merely because it has to advance towards its goal. I may care for the well-being of my wife, but I am not a Wife-Well-Being Maximizer: I would not, for example, stab someone to steal a pair of earrings she would like, even if I could get away with it; I still value a stranger's life far more than my wife's marginal enjoyment from a new piece of jewellery. A maximizer instead...