I expect there to be a delay in deployment, but I think OpenAI's near-term goal is ultimately to automate the intellectually difficult portions of computer programming. Personally, as someone just getting into the tech industry, this is basically my biggest near-term concern, besides death. At what point might it be viable for most people to do most of what a skilled computer programmer does with the help of a large language model, and how much should this hurt salaries and career expectations?
Some thoughts:
- It will probably be less difficult to safely prompt a language model for an individual "LeetCode" function than to write that function by hand within the next two years. Many more people will be able to do the former than could ever do the latter.
- Yes, reducing the price of software engineering means more software engineering will be done, but it would be extremely odd if this meant software engineer salaries stayed the same, and I expect regulatory barriers to limit how much the software industry can grow to fill new niches.
- Architecture seems difficult to automate with large language models, but a ridiculous architecture might be tolerable in some circumstances if your programmers are producing code at the speed GPT-4 does.
- Creativity is hard to test, and if former programmers are mostly hired from now on based on their ability to "innovate" or on interesting psychological characteristics beyond being able to generate code, I expect income and jobs to shift away from people with skills but no credentials toward people with lots of credentials and political acumen and no skills.
- At some point programmers will be sufficiently automated away that the singularity is here. This is not necessarily a comforting thought.
Edit: Many answers are contesting the basic premise of the old title, "When will computer programming become an unskilled job?" The title of the post has been updated accordingly.
Just to specify my claim a little more precisely: I am talking about individual threads/ISRs/processes/microservices, where every actor has an initial input state, I.
And then the truth table rule above applies.
If a thread hits a mutex, and then either waits or proceeds, that mutex state is part of the truth table row.
Depending on when it got the lock and got to read a variable, the value it actually read may differ, but whatever it read is still part of the same row.
Any internal variables it has are part of the row.
Ultimately for each actor, it is still a table lookup in practice.
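To make the "table lookup" framing concrete, here is a minimal sketch (all names are hypothetical): everything that can influence the actor on one step, including whether it got the mutex and what value it actually read, is folded into a single row, and the actor's step is a pure function of that row.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical "row" of the truth table for one actor: everything that
   can influence its behavior on this step is folded into one struct. */
typedef struct {
    uint32_t external_input;   /* whatever the actor was asked to do        */
    bool     mutex_acquired;   /* did it get the lock, or did it have to wait? */
    uint32_t shared_var_read;  /* the value it actually saw under the lock  */
    uint32_t internal_state;   /* its own private variables                 */
} actor_row_t;

typedef struct {
    uint32_t output;
    uint32_t next_internal_state;
} actor_result_t;

/* Pure function of the row: same row in, same result out.
   Conceptually this is a (very large) table lookup. */
actor_result_t actor_step(actor_row_t row)
{
    actor_result_t r;
    if (!row.mutex_acquired) {
        /* Blocked this step: nothing happens, state carries over. */
        r.output = 0;
        r.next_internal_state = row.internal_state;
    } else {
        r.output = row.external_input + row.shared_var_read;
        r.next_internal_state = row.internal_state + 1;
    }
    return r;
}
```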
Now, as I mentioned earlier, this is generally bad design. Pure functional programming has become the modern standard at every level, from embedded systems up to hyperscaler systems, and even low-level embedded code should be pure functional. This means the code might hold a reader and writer lock on the system state it is modifying, for example, or use other methods so that the entire state it operates on is atomic and coherent.
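A rough sketch of that pattern, using a standard pthreads read-write lock (the state fields and function names are invented for illustration): take one atomic, coherent copy of the shared state under the lock, then compute everything as a pure function of that copy.

```c
#include <pthread.h>

/* Hypothetical shared state protected by a read-write lock. */
typedef struct {
    double setpoint;
    double measurement;
} system_state_t;

static system_state_t    g_state;
static pthread_rwlock_t  g_state_lock = PTHREAD_RWLOCK_INITIALIZER;

/* Pure function: the output depends only on the snapshot it is given. */
static double compute_command(system_state_t s)
{
    return s.setpoint - s.measurement;   /* stand-in for the real math */
}

double control_step(void)
{
    system_state_t snapshot;

    /* Take the entire state in one atomic, coherent copy... */
    pthread_rwlock_rdlock(&g_state_lock);
    snapshot = g_state;
    pthread_rwlock_unlock(&g_state_lock);

    /* ...then do all the work on the copy, with no shared access. */
    return compute_command(snapshot);
}
```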
For example, I've written a 10 kHz motor controller, where the controller is a functional system of
PWM_outputs = f(phaseA, phaseB, resolver, PID_state, speed_filter[], current_filter[]) and a few other things. My actual implementation wasn't so clean, and the system had bugs.
The above system is atomic: I need all variables to be from the same timestep, and if my code loop is too slow to do all the work before the next timestep, I need to release control of the motor (open all the gates) and shut down with an error.
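Very roughly, and with every hardware/OS hook invented for illustration, the shape of that loop is something like this:

```c
#include <stdbool.h>

/* All inputs for one 100 µs timestep, captured together so they are coherent. */
typedef struct {
    double phaseA, phaseB;     /* phase current measurements */
    double resolver;           /* rotor position             */
    double pid_state[3];
    double speed_filter[4];
    double current_filter[4];
} motor_inputs_t;

typedef struct { double duty[3]; } pwm_outputs_t;

/* Hypothetical hardware/OS hooks, not a real API. */
extern motor_inputs_t sample_inputs_for_timestep(void);
extern void           apply_pwm(pwm_outputs_t out);
extern void           open_all_gates_and_fault(void);
extern bool           next_timestep_already_started(void);

/* Pure function of one timestep's inputs. */
static pwm_outputs_t pwm_from_inputs(motor_inputs_t in)
{
    pwm_outputs_t out = {{0.0, 0.0, 0.0}};
    /* ...the real PID / filtering math would go here... */
    (void)in;
    return out;
}

void motor_isr_10khz(void)
{
    motor_inputs_t in  = sample_inputs_for_timestep();
    pwm_outputs_t  out = pwm_from_inputs(in);

    if (next_timestep_already_started()) {
        /* Missed the deadline: don't drive the motor with stale math. */
        open_all_gates_and_fault();
        return;
    }
    apply_pwm(out);
}
```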
If I had had an AI to do the work for me, I would have asked it to do some fairly massive refactors, add more wrapper layers, etc., and then self-review its own code against rubrics to make something clean and human-readable.
All things that GPT-4 can do right now, especially if it gets a finetune on coding.