In modern society, many people find meaning and fulfillment in their work and careers. I, for one, certainly get enjoyment from a productive day's work.

As a software engineer, I've benefited greatly from GPT-3. I estimate that I'm 5-10 times more productive on a wide range of tasks. Beyond predicting lines of code, I actually use GPT-3 as a pseudo-search engine for things I haven't committed to memory. For example, I'll write a comment saying "add entry to database", and GPT-3 will effectively 'look up' the solution, so I don't have to switch tabs and search for it myself. I'm also learning like never before. By prompting GPT-3 via a code comment, I get instant, tailored solutions to my specific problem in new languages or with new tools that I've never used before. This saves me hours of reading general-purpose tutorials and debugging to get the working code I need.
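To give a concrete sense of that workflow, here's a hypothetical example: the comment is the whole prompt, and the function below is the kind of completion the model drafts (the table and column names are invented for illustration, and the code assumes the table already exists):

```python
import sqlite3

# Prompt: the comment I actually type.
# add entry to database

# Completion: roughly what the model drafts beneath it.
# (Assumes an 'entries' table with 'name' and 'value' columns already exists.)
def add_entry(db_path, name, value):
    conn = sqlite3.connect(db_path)
    conn.execute(
        "INSERT INTO entries (name, value) VALUES (?, ?)",
        (name, value),
    )
    conn.commit()
    conn.close()
```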

I genuinely feel superhuman, and this boost in productivity lets me focus on more creative aspects of the work (e.g. designing the solution to a problem). In short, I've never found coding so satisfying. Unfortunately, I suspect this is a temporary 'sweet spot' that won't last forever. Given how powerful the technology already is, with a few months or years of progress I might soon feel like I'm not really contributing to the work at all.

High-school students are using GPT-3 to write essays, and no doubt it's already in use within academia. 

As technology automates human labor, history tells us that we'll simply create new jobs in industries and careers not yet imaginable. How could 19th century agriculturalists have foreseen careers in web design? In the case of artificial general intelligence, however, this repeating pattern could be broken. What happens when AI also performs those new jobs better than us?

Will we use all this free time for leisure and enjoyment, or will we struggle to find meaning and fulfillment outside of work? This is not necessarily a new question, for it is exactly the challenge of those who retire (and remain in good health). It is, however, something that we'll need to ask on a larger scale than ever before.

My proposed solution is to promote education and skills in the arts. This will open up creative outlets and hobbies for individuals, which may be essential to maintaining fulfillment and satisfaction in the long term. Alongside this, one can imagine a reinvigorated prioritization of community, social skills, family, sports, and more.

I would love to see some discussion of these points, and whether promoting the arts could be a legitimate goal for longtermists. Comments are welcome, but I've also started an AI Safety Discord community to facilitate more 'real-time' discussions. Feel free to join here: Safe AI Discord Community.

-- 

Of course, this all optimistically assumes we produce safe and aligned AI in the first place...

11 comments

DALL-E has begun eating the arts. Humanity prospering in the ever-shrinking gaps looks like a losing strategy.


I'm sure OP is already aware of DALL-E and other diffusion models.

Cards on the table: several months ago I would've agreed with you about the future of art being eaten entirely by AI. I'm much, much more sceptical now.

First of all, like any other hobby, art will still be valued by the people who make it simply by virtue of being a fulfilling way to spend time, even if the works produced are never seen by another soul. Beyond being a hobby, though, I think much of art in the future (cinema, music, literature, visual art, new categories we don't yet have, etc.) will still be very human. Perhaps the majority, perhaps not, but a very sizeable portion regardless.

I think the standard view in tech circles is essentially that people only value art insofar as it compels them personally; besides that, no other qualities of art matter. People love to bring up the Intentional Fallacy in support of this claim. Those same people, however, are often unaware that the authors of the essay that put the idea forward (Wimsatt and Beardsley) also paired it with the Affective Fallacy, the claim that evaluating a text purely on its emotional effect on the reader is just as reductive.

The necessarily human details of a work are quite often crucial to how it is valued by its audience: the circumstances of the author, the specifics of the human labor involved in its creation, the capacity of a viewer to ground and explore the work in a social context, and so on. These all factor heavily into how people judge art. Not everyone cares in this way, but a huge portion of people do, at least to some extent. Engaging like this is certainly not a niche behaviour reserved for gallerists and snobs. It's much more fundamental than that.

I think this leaning into the Affective Fallacy happens because some people only understand art in terms of aesthetic appeal, rather than as a much larger cultural/social process, of which there is an aesthetic component.

The creative job market will certainly be affected by AI. But honestly, even there, simply because of the inherent value many people place on human-made art, I don't expect it to disappear. The creative sector is distinctive in that regard, as are sport and certain kinds of professional care. I use the idea of a job market pretty loosely here; I expect political economy to change substantially when AGI arrives. I'm really just referring to the tasks people perform in a society that aren't hobbies.

A lot of the above is made moot if human/AI/hybrid art labels are obfuscated, and there's no way of telling which is which. But I expect this issue to be largely solved by techniques involving content provenance initiatives like C2PA. If the artist is willing to open themselves up to some scrutiny during creation, we will largely be able to verify human authorship.
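To sketch the underlying idea (this is just the bare hash-and-sign primitive, not the actual C2PA manifest format, and the file and key handling below are purely illustrative): the creating tool hashes the finished work and signs the digest with the artist's key, and anyone holding the public key can later check that claim.

```python
# Bare-bones sketch of the hash-and-sign idea behind provenance schemes.
# NOT the real C2PA format; just the underlying cryptographic primitive.
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

artist_key = Ed25519PrivateKey.generate()   # held by the artist's tool
public_key = artist_key.public_key()        # published for verification

with open("artwork.png", "rb") as f:        # hypothetical file
    digest = hashlib.sha256(f.read()).digest()

signature = artist_key.sign(digest)         # shipped as provenance metadata

try:
    public_key.verify(signature, digest)    # raises if tampered or forged
    print("provenance claim checks out")
except InvalidSignature:
    print("provenance claim is invalid")
```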

Basically, I agree strongly with the OP. Moreover, giving people a creative education will cause them to care MORE about the necessarily human qualities of art. It just seems all around like a good thing to do.

I am familiar with DALL-E and the like; in fact, I regularly use these tools for concept designs on projects, creating custom images for presentations, and so on.

I agree with these points: AI has begun eating commercial design work in the arts, just as it's bringing automation to other industries. Human-made art, however, is not just about the end product but also the context, the creative process, and the opportunity to communicate through art and empathize with other humans.

If I wanted to try GPT-3 as a coding aid, where would I start?

GitHub Copilot.

If only my employer paid for it.

Huh, it's only $10/mo. Surprised that would be an obstacle.

Oh, that's not bad at all.

Same question here.

As others mention, it's available via GitHub Copilot at a cost. Alternatively, groups such as EleutherAI are making open-source alternatives (e.g. GPT-J), which will probably soon get free VS Code plugins.
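If you'd rather experiment with the open-source route right now, a minimal sketch of prompting GPT-J through the Hugging Face transformers library looks something like this (the checkpoint name and generation settings are illustrative defaults, not a recommendation):

```python
# Rough sketch: using GPT-J as a local code-completion aid via Hugging Face
# transformers. Assumes `transformers` and `torch` are installed and that
# you have the RAM/VRAM to hold the ~6B-parameter checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6B")

# Prompt it the same way you'd prompt Copilot: a descriptive comment.
prompt = "# Python function that adds an entry to a SQLite database\n"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs, max_new_tokens=64, do_sample=True, temperature=0.2
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```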

> My proposed solution is to promote education and skills in the arts. This will open up creative outlets and hobbies for individuals, which may be essential to maintaining fulfillment and satisfaction in the long term. Alongside this, one can imagine a reinvigorated prioritization of community, social skills, family, sports, and more.


I've seen this type of comment a few times. Generally, it's not clear to me that an education in the humanities leads to better morals than an education in the sciences (see all the politicians, bankers, lawyers, etc. who have a humanities background; as a class, they don't seem more moral or ethical than the engineers, physicists, etc.), clearer communication (see any number of employees in large organizations), more life fulfillment or happiness, more creativity, more community, or anything else.