I expect there to be a delay in deployment, but I think OpenAI is ultimately aiming, as a near-term goal, to automate the intellectually difficult portions of computer programming. Personally, as someone just getting into the tech industry, this is basically my biggest near-term concern, besides death. At what point might it be viable for most people to do most of what a skilled computer programmer does with the help of a large language model, and how much should this hurt salaries and career expectations?
Some thoughts:
- It will probably be less difficult to safely prompt a language model for an individual "LeetCode" function than to write that function by hand within the next two years (see the sketch after this list). Many more people will be able to do the former than could ever do the latter.
- Yes, reducing the price of software engineering means more software engineering will be done, but it would be extremely odd if this meant software engineer salaries stayed the same, and I expect regulatory barriers to limit how much the software industry can grow to fill new niches.
- Architecture seems difficult to automate with large language models, but a ridiculous architecture might be tolerable in some circumstances if your programmers are producing code at the speed GPT-4 does.
- Creativity is hard to test, and if former programmers are mostly hired based on their ability to "innovate" or on interesting psychological characteristics beyond being able to generate code, I expect income and jobs to shift away from people with skills but no credentials, toward people with lots of credentials and political acumen but no skills.
- At some point programmers will be sufficiently automated away that the singularity is here. This is not necessarily a comforting thought.
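On the first point, here is a minimal sketch of what that prompting might look like. This assumes the OpenAI Python client with an API key set in the environment; the model name, prompt wording, and example problem are illustrative, not a claim about any particular product.

```python
# Minimal sketch: asking a model for one LeetCode-style function.
# Assumes the `openai` package and an OPENAI_API_KEY in the environment;
# the model name and prompt wording are illustrative.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4",  # illustrative choice of model
    messages=[{
        "role": "user",
        "content": (
            "Write a Python function two_sum(nums, target) that returns "
            "the indices of two numbers in nums that sum to target, "
            "with a brief docstring."
        ),
    }],
)

print(response.choices[0].message.content)
```

The "safely" in that bullet is doing real work: the remaining skill is reading, testing, and noticing when the returned code is subtly wrong, not producing it.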
Edit: Many answers contest the basic premise of the old title, "When will computer programming become an unskilled job?" The title of the post has been updated accordingly.
I think you are underestimating the level of exception handling required to completely automate the average software engineer's job, the way unskilled farmhands and factory workers were automated away. A slightly atypical few hours for a software engineer at the moment might, for example, involve discovering that the logging facility stopped working on an important VM, SSHing in and figuring out what went wrong, and then applying a patch to another related piece of software to fix the bug. LLMs could coach regular people through that process over the shoulder like a senior engineer, but they couldn't automate the whole process, not because the individual pieces are too intellectually difficult but because it requires too much diverse and unsupervised tool use and investigation. If some AI successor to LLMs could be trusted to do that in the next few years, then we probably only have a short while until something FOOMs.
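To make that concrete, here is a hedged sketch of the scriptable part of such a session. The host, service name, and commands are hypothetical stand-ins, and the sketch deliberately stops where the judgment calls begin.

```python
# Hedged sketch of the routine checks in the scenario above.
# The host and service are hypothetical; real incidents vary.
import subprocess

HOST = "admin@logging-vm.example.com"  # hypothetical VM

def run_remote(cmd: str) -> str:
    """Run a command on the VM over ssh and return its combined output."""
    result = subprocess.run(
        ["ssh", HOST, cmd], capture_output=True, text=True, timeout=60
    )
    return result.stdout + result.stderr

# Step 1: is the logging service even running?
print(run_remote("systemctl status rsyslog"))

# Step 2: what did it say before it died?
print(run_remote("journalctl -u rsyslog --since '2 hours ago' | tail -n 50"))

# Everything after this point (interpreting the output, deciding which
# related piece of software is at fault, writing and applying the patch)
# is exactly the open-ended, unsupervised work described above.
```

The checks themselves are trivially scriptable; the hard part is the branching investigation that follows, which is the distinction I'm drawing.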