I am a defender of the idea that we have already achieved rudimentary AGIs with modern LLMs (as much of a hot take as this is), and even though the path to superintelligence will be difficult and will probably require a few more technical breakthroughs to make more effective use of available data, I don't think it will take us longer than a decade, or 15 years at most.

When I discuss this idea with some of my CS friends and co-workers, i.e. how AI will inevitably replace most software engineering jobs (picture a supercharged GitHub Copilot that can build entire websites and back-end services on command), most of them ask the obvious follow-up question: so what can I do about it? Yeah, I can help with AI safety/alignment progress, but my job is going to disappear no matter what I do, and probably sooner than many more 'physically' demanding ones.

I am always left stumped by this; I simply don't know what to tell them, especially undergraduates who are still full of hope and totally didn't sign up for this dumpster fire. Should I tell them to just continue doing their thing and see what happens? Let fate take its course and hope for the best? That all sounds too happy-go-lucky for my taste.

I'd like to hear what you guys think about this matter. What do you answer when asked such questions?


4 Answers

Dagon

198

tl;dr if you think of yourself as an optimizer, not an implementer, you'll likely be in demand for a long time. Your value is in discovering and steering good outcomes for your customers (in partnership with your employer). The fact that it's currently effective to do this by writing code will change over time, but that won't matter to those with a knack for it.

Epistemic status: I'm old, and I'm an AGI skeptic. This is my 70% estimate; it could happen much faster than I think, could affect more of life than I think, and could just blow up entirely, in which case it doesn't matter.

Well, most low-end jobs will change radically. This includes a lot of office work, "software engineer" included. The skilled/high-end versions will remain very valuable. Those who can actually solve problems and understand customer needs in enough detail to build software that works at scale are going to be needed until the actual singularity (when NOBODY is needed).

The unpleasant truth of software engineering is that writing code isn't the hard part. Many Staff and Principal engineers spend less than 20% of their time on code, and that mostly as a way to communicate with other engineers, or for benchmarking, prototyping, and proving out harebrained ideas; it's not primarily writing production code for actual use. The job is more "understand the universe (of human interactions and customer needs) well enough to describe it in machine-level detail".

Relatedly, in any large system, FAR more developer time goes into observability, monitoring, debugging, recovery, and edge-case identification than into the automatable part of "make it work". Even here, LLMs are going to help a lot. I look forward to having on-demand dashboards and less-arcane outputs of profiling runs. But they don't eliminate (and may even increase) the need for understanding and human-level modeling of complex systems.
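A minimal sketch of the kind of thing I mean, in Python: the cProfile/pstats calls are the real standard library, while llm_summarize is a hypothetical stand-in for whatever model API you would actually wire in.

```python
import cProfile
import io
import pstats

def profile_and_explain(func, *args, llm_summarize=print, **kwargs):
    """Run func under cProfile, then hand the raw stats to an LLM for a
    plain-English summary. llm_summarize is a hypothetical stand-in for
    a real model call."""
    profiler = cProfile.Profile()
    profiler.enable()
    result = func(*args, **kwargs)
    profiler.disable()

    # Capture the (famously arcane) pstats table as text.
    buf = io.StringIO()
    pstats.Stats(profiler, stream=buf).sort_stats("cumulative").print_stats(10)

    # Ask the model to translate it into something a human can act on.
    llm_summarize(
        "Explain the main hotspots in this profile to a non-expert:\n"
        + buf.getvalue()
    )
    return result
```

The human's job in that loop is unchanged: deciding which hotspots matter and what to do about them.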

I expect that some of the whitepapers, demos, and designs I write will include a fair bit of prompt engineering, and I'll be able to include more trials and prototypes in my proposals.  My beliefs about what our company should do will become less wrong, as I have better tools to test them.  But my job really can't be automated.

Note that this applies to the top ~20% of people who would become software engineers today. There are a LOT (too many, IMO) of people worldwide who think the job is about writing good code, or just implementing someone else's spec and walking away. Those folks will need to either get more dedicated and open to the idea that they're responsible for outcomes, not just "doing a job", or find other things that computers can't do.

Gordon Seidoh Worley

91

There's a way we can interpret the core job of a programmer or hacker as making computers do things. What that means keeps changing over time. We have to do less toil than we used to, and that enables us to achieve more. Maybe eventually the job of "programmer" will stop being a job and the only interesting skill left will be coming up with good ideas about what computers should do. Barring that, in the meantime I expect programmers to continue to have a lot of work to do, but for it to change.

This isn't really different from being a programmer now, other than that the rate of change is likely to accelerate. Folks will need to be ready to do things like change how they write code quickly once LLMs get good enough to reliably write code that could be used in a product code base without significant rewriting.

If you like, the way to think about it is that the skills humans get paid for will move up the stack, just as the market for low-level programming "dried up" and got replaced by people working in higher-level languages.

In terms of career advice, focus on the core problem-solving skills related to programming, not on becoming a world-class expert in something that can be automated away. LLMs mean that narrowly expert programmers will be in less demand, while programmers who can take a project from 0 to 1 will be in greater demand.

rpglover64

50

I'm a software engineer, and I'm not worried about AI taking my job. The shortest explanation of this is that "coding" is a very small part of what I do: there's stuff that's more product-related, and stuff that's pre-paradigmatic (to stretch the term), and mentorship, and communication; when I do write code, a lot of the time the code itself is almost irrelevant compared to the concerns of integrating it into the larger system and making it easy to change or delete in the future when requirements change.

One stupid analogy here is that coding is like walking: important and potentially impressive if a machine does it, but insufficient on its own to actually replace a human in a job.

Stephen McAleese

20

AI systems such as GitHub Copilot are increasingly capable of converting text prompts into working code.

Therefore, the task of converting detailed English requirements into code might be largely taken over by AI as these systems advance. In other words, the job of a junior software engineer could soon be automated.
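To make that concrete, the pattern being automated looks roughly like this: a developer writes a short comment or docstring stating the requirement, and the assistant fills in the body. This is a hand-written illustration of the kind of completion involved, not actual Copilot output:

```python
# Prompt: "return the n most common words in a text file"

import re
from collections import Counter

def most_common_words(path: str, n: int = 10) -> list[tuple[str, int]]:
    """Return the n most frequent words in the file at `path`."""
    with open(path, encoding="utf-8") as f:
        words = re.findall(r"[a-z']+", f.read().lower())
    return Counter(words).most_common(n)
```

Checking that a generated body like this actually meets the requirements (encoding, tokenization, edge cases) is exactly the part that stays with the human for now.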

But software engineers, and especially senior software engineers, are also responsible for defining and designing solutions to problems, thinking about architecture and other high-level concerns, and trading off factors such as performance, customer experience, and security.

My advice would be to become a generalist with many different independent skills so that if one task is automated, there will be other tasks to do.

Bear in mind that software engineers can use tools like Copilot to enhance their productivity, and that a human-AI pair is more effective than either alone. Humans will only become redundant when the AI alone is as effective as, or more effective than, the human-AI pair.

This happened with chess: there was a point where humans playing with AIs were better than AIs alone, but now AIs are so strong at chess that human players no longer enhance the AI's play.

Human software engineers will probably be fully replaced by AI eventually, but I don't expect that to happen until we have AGI because, unlike chess, software engineering requires performing many different tasks.