To summarize,
- When considering whether to delay AI, the choice before us is not merely whether to accelerate or decelerate the technology. We can choose what type of regulations are adopted, and some options are much better than others.
- Neo-luddites do not fundamentally share our concern about AI x-risk. Thus, their regulations will probably not, except by coincidence, be the type of regulations we should try to install.
- Adopting the wrong AI regulations could lock us into a suboptimal regime that may be difficult or impossible to leave. So we should likely be careful not to endorse a proposal merely because it's "better than nothing", unless it's also literally the only chance we get to regulate AI.
- In particular, arbitrary data restrictions risk preventing researchers from having access to good data that might help with alignment, potentially outweighing the (arguably) positive effect of slowing down AI progress in general.
It appears we are in the midst of a new wave of neo-luddite sentiment.
Earlier this month, digital artists staged a mass protest against AI art on ArtStation. A few people are reportedly already getting together to hire a lobbyist to advocate for more restrictive IP laws around AI-generated content. And anecdotally, I've seen numerous large threads on Twitter in which people criticize the users and creators of AI art.
Personally, this sentiment disappoints me. While I sympathize with the artists who will lose their income, I'm not persuaded by the general argument. The value we could get from nearly free, personalized entertainment would be truly massive. In my opinion, it would be a shame if humanity never allowed that value to be unlocked, or restricted its proliferation severely.
I expect most LessWrong readers to agree with me on this point: that it is not worth sacrificing a technologically richer world just to protect workers from losing their income. Yet there is a related view that I have recently heard some of my friends endorse: that it is nonetheless worth aligning with neo-luddites, if only incidentally, in order to slow down AI capabilities.
On the most basic level, I think this argument makes some sense. If aligning with neo-luddites simply means saying "I agree with delaying AI, but not for that reason" then I would not be very concerned. As it happens, I am sympathetic to many of the arguments in Katja Grace's recent post about delaying AI in order to ensure existential AI safety.
Yet I worry that some people intend their alliance with neo-luddites to extend much further than this shallow rejoinder. I am concerned that people might work with neo-luddites to advance their specific policies, and particular means of achieving them, in the hopes that it's "better than nothing" and might give us more time to solve alignment.
In addition to being possibly mildly dishonest, such an alliance, I worry, would be counterproductive on separate, purely consequentialist grounds.
If we think of AI progress as a single variable that we can either accelerate or decelerate, with other variables held constant upon intervention, then I agree it could be true that we should do whatever we can to impede the march of progress in the field, no matter what that might look like. Delaying AI gives us more time to reflect, debate, and experiment, which, prima facie, I agree is a good thing.
A better model, however, is that there are many factor inputs to AI development. To name the main ones: compute, data, and algorithmic progress. To the extent we block only one avenue of progress, the others will continue. Whether that's good depends critically on the details: what's being blocked, what isn't, and how.
One consideration, which has been pointed out by many before, is that blocking one avenue of progress may lead to an "overhang" in which the sudden release of restrictions leads to rapid, discontinuous progress, which is highly likely to increase total AI risk.
But an overhang is not my main reason for cautioning against an alliance with neo-luddites. Rather, my fundamental objection is that their specific strategy for delaying AI is not well targeted. Aligning with neo-luddites won't necessarily slow down the parts of AI development that we care about, except by coincidence. Instead of aiming simply to slow down AI, we should care more about ensuring favorable differential technological development.
Why? Because the constraints on AI development shape the type of AI we get, and some types of AIs are easier to align than others. A world that restricts compute will end up with different AGI than a world that restricts data. While some constraints are out of our control — such as the difficulty of finding certain algorithms — other constraints aren't. Therefore, it's critical that we craft these constraints carefully, to ensure the trajectory of AI development goes well.
Passing subpar regulations now (the type not explicitly designed to produce favorable differential technological progress) might lock us into a bad regime. If we later determine that other, better-targeted regulations would have been vastly better, it could be very difficult to adjust our regulatory structure. Choosing the right regulatory structure from the start likely affords far more freedom than trying to switch to a different one after it has already been established.
Even worse, subpar regulations could make AI harder to align.
Suppose the neo-luddites succeed, and the US Congress overhauls copyright law. A plausible consequence is that commercial AI models will only be allowed to train on data that was licensed very permissively, such as data in the public domain.
What would AI look like if it were only allowed to learn from data in the public domain? Interacting with it might feel like interacting with someone from a different era: a person from over 95 years ago, whose copyrights have now expired. That's probably not the only consequence, though.
Right now, if an AI org needs some data they think will help with alignment, they can generally obtain it, unless that data is private. Under a different, highly restrictive copyright regime, that may no longer be the case.
If deep learning architectures are marble, data is the sculptor. Restricting what data we're allowed to train on shrinks our search space over programs, carving out which parts of the space we're allowed to explore, and which parts we're not. And it seems abstractly important to ensure our search space is not carved up arbitrarily — in a process explicitly intended for unfavorable ends — even if we can't know now which data might be helpful to use, and which data won't be.
True, if very powerful AI is coming very soon (<5 years from now), there might not be much else we can do except for aligning with vaguely friendly groups, and helping them pass poorly designed regulations. It would be desperate, but sensible. If that's your objection to my argument, then I sympathize with you, though I'm a bit more optimistic about how much time we have left on the clock.
If very powerful AI is more than 5 years away, we will likely get other chances to get people to regulate AI from a perspective we sympathize with. Human extinction is actually quite a natural thing to care about. Getting people to regulate AI for that explicit reason just seems like a much better, and more transparent, strategy. And as AI gets more advanced, I expect this possibility will become more salient in people's minds anyway.
Interesting. I know a few artists, and even their lawyers, and not one of them sees AI art as a threat (though this might be because they don't have the full picture, of course). And while I know that anyone can call themselves an artist, and I certainly don't want to gatekeep here, for context I'll add that I mean friends who graduated from actual art schools. I know this because I use AI art in the virtual tabletop RPG sessions I play with them, and they seem more excited than worried about AI. What follows is based on casual pub discussions with them.
As for me, I don't like my adventures to feel like a train ride, so I give my players a great degree of freedom in what they can do, where they can go, and with whom they can speak. During the game, as they make plans among themselves, I can use AI generators to create just-in-time art of the NPC or location they are talking about. This, together with many other tricks, lets me raise the quality of my game without taking work away from artists, because the sheer speed required here made hiring them prohibitive anyway.
However, this only works because my sessions require suspension of disbelief by default, so nobody cares about the substance of that art. After all, we all roll dice and pretend they determine how well we swing a sword, so nobody minds if styles or themes differ slightly between sessions; it's not an art book.
For anything that's not just fun times with friends, you will still need an artist: someone to curate the message, modify or merge results from multiple AI runs, fine-tune parameters, and even then probably do quite a lot of digital work on the result to bring it up to a standard that crosses the uncanny valley or portrays exactly what the movie director had in mind.
Or is there already an AI capable of doing all that by itself, taking one or two sentences from an executive and churning out a perfect result? Because I've worked with many models and have yet to see one that wouldn't require further creative work to actually be good. AFAIK, all award-winning AI-generated content was heavily curated, not random one-shot output.
It feels to me like low-level art is going to be delegated to AI while artists focus on higher forms of art rather than the boring parts, just like boilerplate generators in code. Or they'll be able to do the boring things faster, just as frameworks let developers push out one similar REST app after another. And the basic building blocks will become more open source, while the value will lie in how you connect, use, and support those blocks.
This may allow a lot more people to discover their creative, artistic side, people who couldn't previously because they lacked the mechanical skill to wield a brush or paint pixels.
I'm writing this comment haphazardly, so sorry if my thought process is unpolished, but overall this feels like a massive boost to creativity and a good thing for art, if not potentially the greatest artistic boost to humanity ever.
AI is a new brush that requires less mechanical skill than before. You must still do creative work and make Art with it.