I expect there to be a delay in deployment, but I think OpenAI is ultimately aiming, as a near-term goal, to automate the intellectually difficult portions of computer programming. Personally, as someone just getting into the tech industry, this is basically my biggest near-term concern, besides death. At what point might it be viable for most people to do most of what a skilled computer programmer does with the help of a large language model, and how much should this hurt salaries and career expectations?
Some thoughts:
- It will probably be less difficult to reliably prompt a language model for an individual "LeetCode" function than to write that function by hand within the next two years. Many more people will be able to do the former than could ever do the latter.
- Yes, reducing the price of software engineering means more software engineering will be done, but it would be extremely odd if software engineer salaries stayed the same as a result, and I expect regulatory barriers to limit how much the software industry can grow to fill new niches.
- Architecture seems difficult to automate with large language models, but a ridiculous architecture might be tolerable in some circumstances if your programmers are producing code at the speed GPT-4 does.
- Creativity is hard to test. If former programmers are mostly hired for their ability to "innovate", or for interesting psychological characteristics beyond being able to generate code, I expect income and jobs to shift away from people with skills but no credentials, and toward people with lots of credentials and political acumen but no skills.
- At some point programmers will be sufficiently automated away that the singularity is here. This is not necessarily a comforting thought.
Edit: Many answers contest the basic premise of the old title, "When will computer programming become an unskilled job?" The title of the post has been updated accordingly.
What I expect to change quickly is that "programming languages" will go away completely. LLMs or similar tech will get us much closer to the DWIM ("do what I mean") level. Directly translating from a spec to executables is something AI can excel at. The missing piece is the feedback loop: writing and executing unit tests, and changing the executable (not the code!) until it passes the tests.
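The feedback loop described above can be sketched in a few lines. This is a toy illustration, not a real system: `generate_candidate` stands in for an LLM call and is stubbed here to return a deliberately buggy draft first, then a corrected one, so the loop's regenerate-on-failure behavior is visible.

```python
# Hypothetical spec -> tests -> regenerate loop. The "LLM" is a stub.

def generate_candidate(spec: str, attempt: int):
    """Stand-in for an LLM: buggy first draft, fixed second draft."""
    if attempt == 0:
        return lambda a, b: a - b   # wrong on purpose
    return lambda a, b: a + b       # matches the spec

def passes_tests(func) -> bool:
    """Unit tests derived from the spec -- the actual feedback signal."""
    cases = [((2, 3), 5), ((0, 0), 0), ((-1, 1), 0)]
    return all(func(*args) == expected for args, expected in cases)

def spec_to_program(spec: str, max_attempts: int = 5):
    """Regenerate candidates until one passes every test."""
    for attempt in range(max_attempts):
        candidate = generate_candidate(spec, attempt)
        if passes_tests(candidate):
            return candidate
    raise RuntimeError("no candidate passed the tests")

add = spec_to_program("return the sum of two integers")
```

Note that nothing in the loop inspects source code; only test outcomes matter, which is the point being made about skipping human-readable intermediates.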
Note that a lot of current CS concepts are human-oriented and make no sense when a human is not part of the process: "architecture" is a crutch for the limitations of our brains. "Design" is another crutch. This can all be streamlined into "Spec->Binary".
Even further, there is no reason to have a general-purpose computer when it is easy for an AI to convert a spec directly into hardware, such as an FPGA configuration.
Next on the list (or maybe even first on the list) is not needing the low-level executables at all: the LLM or equivalent just does what you ask of it.
Humans are much better drivers than they are programmers.