The worries that AI will push artists out of business are widespread. After all, if AI can generate a song in seconds, a task that would require years of training, expensive instruments, bandmates, and studio time from a human musician, then why wouldn't it? My son, aged eleven, recently used AI to produce a very heartfelt rock ballad with lyrics based on the MIT license.

But I haven't yet seen anyone seriously contemplate what a world with no human musicians would look like.

Imagine AI streaming songs generated on the fly directly to your headphones. You might say things like:

"That was an interesting sequence of chords! Use that more often!"

"Make it a bit more syncopated."

"I like the Lydian fourth."

And:

"Cut the bagpipes."

AI would adjust the music to your wishes, creating a feedback loop between the listener and the AI. Over time, the songs would drift through the space of possible music, eventually evolving into something entirely unique, something unlike anything anyone else is listening to.
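
As a very rough sketch of that loop (the class name, the parameters, and the keyword matching below are purely illustrative assumptions, not any real music-generation API), each remark could simply nudge a small set of style parameters that the generator conditions on:

```python
# Illustrative sketch only: the names and parameters here are assumptions, not a real API.
from dataclasses import dataclass, field

@dataclass
class StyleState:
    """Hypothetical style parameters the generator conditions on; they drift as feedback accumulates."""
    syncopation: float = 0.2      # 0 = straight, 1 = heavily syncopated
    lydian_weight: float = 0.0    # how much weight to put on the raised fourth
    instruments: set = field(default_factory=lambda: {"guitar", "drums", "bagpipes"})

def apply_feedback(state: StyleState, remark: str) -> StyleState:
    """Nudge the style parameters based on a listener remark (toy keyword matching)."""
    text = remark.lower().rstrip(".!")
    if "syncopated" in text:
        state.syncopation = min(1.0, state.syncopation + 0.1)
    if "lydian" in text:
        state.lydian_weight = min(1.0, state.lydian_weight + 0.2)
    if text.startswith("cut the "):
        state.instruments.discard(text.removeprefix("cut the "))
    return state

state = StyleState()
for remark in ["Make it a bit more syncopated.",
               "I like the Lydian fourth.",
               "Cut the bagpipes."]:
    state = apply_feedback(state, remark)

print(state)  # each listener's state drifts along its own path through style space
```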

(Some might believe that most people are mediocre, and their music would end up bland and similar to each other’s — but if no one else is in the feedback loop, how could that even happen? And if it did, what would this neutral, universally human music look like? A folk song from Borneo? Classical music? Hip-hop?)

Imagine being on a date. Your partner shares their headphones with you for a moment. Suddenly, you hear music unlike anything you’ve ever heard before. What does that mean for human interaction?

Anyway, in such a world some people would probably evolve music that is much more interesting to the public. Some people are just gifted. If so, others might sometimes want to listen to those people's streams instead of their own, and perhaps even incorporate parts of that style into their own music.

But once that happens, we return to the classic model of the music industry. There are creators and listeners. Even some kind of economy is possible. The only difference is that composers and bands are replaced by human/AI combos, and what’s distributed is “style” (AI models) instead of specific songs.


The above, of course, could apply to any kind of intellectual endeavor, not just music.

Comments (8)

A world with no human musicians won't happen, unless there is some extinction-level event that at a minimum leads to a new dark age. AI music will not outcompete human music (at least not to the point where the latter is not practised professionally any more), because a large part of the appeal of music is the knowledge that another human made it.

We have a similar situation today in chess. Of course, a cellphone can generate chess games of higher quality (fewer errors, awesome positional and tactical play) than those of human world-class players. If one generates a sufficient number of such self-play games, some will even be beautiful and contain interesting new chess ideas. Still, nobody is interested in self-play games from my cellphone, precisely because anyone can make more of the same at almost no cost. The games of Magnus Carlsen, on the other hand, are followed and analysed and scrutinised by many, precisely because there is a struggle of human wits in each of these games and they are not abundantly available; they are masterpieces of human chess, and better (not worse) for the flaws we can easily discover in them with engine help.

(Some might believe that most people are mediocre, and their music would end up bland and similar to each other’s — but if no one else is in the feedback loop, how could that even happen? And if it did, what would this neutral, universally human music look like? A folk song from Borneo? Classical music? Hip-hop?)

This doesn't seem like the default to me. The default is AI companies doing centralized work to make a good product. All the users are in the feedback loop. Some customization to individual users is valuable, but the prior developed through interaction with lots of people is going to do a ton of the work. Your intuition that music becomes super-individualized seems based on the assumption that the AI customization "grows with you", going deep down a rabbit hole over years. This doesn't seem like the sort of thing companies are incentivized to create. The experience for new users is much more important for adoption.

"Make it a bit more syncopated."

"I like the Lydian fourth."

"Cut the bagpipes."

At this level of detail, I'd view it as you playing the AI-music-generator as a new kind of instrument.


A cool idea is scaling up your date-briefly-sharing-her-headphones experience: imagine parties where you have a few different speakers distributed around the place, and the music each one plays is dynamically generated combining leitmotifs associated with the people in the near vicinity.

Or even sticking to individual headphones, you could have environmental music depending on where you are, or whom you're with, or the weather/time-of-day, a bit like what video games do (did? I recall the old Pokémon games doing things like this, but I don't know how common this is today). 
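
As a toy sketch of that speaker logic (the Guest/Speaker names and the distance threshold are made-up assumptions, and "mixing" is reduced to listing motif identifiers rather than blending any audio), each speaker could just collect the leitmotifs of whoever is within range:

```python
# Illustrative sketch only: names and threshold are assumptions, not a real audio API.
from dataclasses import dataclass

@dataclass
class Guest:
    name: str
    leitmotif: str      # identifier of this guest's personal motif
    position: tuple     # (x, y) location in the room

@dataclass
class Speaker:
    position: tuple
    radius: float = 3.0  # how far the speaker "hears" guests

    def motifs_to_mix(self, guests):
        """Combine the motifs of all guests within range of this speaker."""
        def dist(a, b):
            return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
        return [g.leitmotif for g in guests
                if dist(g.position, self.position) <= self.radius]

guests = [Guest("Ana", "ana_theme", (1, 1)),
          Guest("Bo", "bo_theme", (2, 2)),
          Guest("Cy", "cy_theme", (9, 9))]

speaker = Speaker(position=(0, 0))
print(speaker.motifs_to_mix(guests))  # -> ['ana_theme', 'bo_theme']
```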

When the AI creates and we choose, we still have some input into what gets created.

But the next generation of AI might also be very good at guessing what we like, and then even our feedback would become useless, because it would already have been correctly predicted and included in the equation.

While much of this can surely happen to varying degrees, I think another important aspect of music is recognition (listening to the same great song you know and like many times, with some anticipation), as well as sharing your appreciation of certain songs with others. E.g. when hosting parties, I usually try to create a playlist that contains, for each guest, a few songs they will recognize and be happy to hear, because those songs have some connection to both of us. Similarly, couples often have this meme of "this is our song!", which throws them back into nostalgic memories of how they first met.

None of this is to disagree with the post though. I mostly just wanted to point out that novelty and “personal fit” are just two of the important aspects of any person's music-listening experience, and I think it's unlikely that these two aspects will dominate the future of music that much.

Anyway, in such a world some people would probably evolve music that is much more interesting to the public

I wouldn't be so sure.

I think the current diversity of music is largely caused by artists' different lived experiences. You feel something, it is important to you, and you try to express that via music. As long as AIs don't have anything like "unique experiences" on the scale of humans, I'm not sure they'll be able to create music that is that diverse (and thus interesting).

I'm assuming the scenario you described, not a personal AI trained on your whole life. With the latter, it could work.

(Note that I'm mostly thinking about small bands, not popular-music-optimised-for-wide-publicity.)

With current music AI, the AI isn’t at all trained on my life and has no soul of its own, but I still get to ask it for music that’s specific to my interests.

I think the current diversity of music is largely caused by artists' different lived experiences. You feel something, it is important to you, and you try to express that via music. As long as AIs don't have anything like "unique experiences" on the scale of humans, I'm not sure they'll be able to create music that is that diverse (and thus interesting).

If the AI customizes the music for each listener (and does a good job), then music will reflect the unique experiences of the listeners, which would result in a more diverse range of music than music that reflects only the unique experiences of musicians.

Of course, we could end up in an awkward middle ground where AI only generates variations on a successful pop music formula, and it all becomes a bland mush. But I think in that case, people would just go back to human-generated music on Spotify and YouTube.