https://github.blog/2021-06-29-introducing-github-copilot-ai-pair-programmer/
GitHub Copilot is powered by OpenAI Codex, a new AI system created by OpenAI. OpenAI Codex has broad knowledge of how people use code and is significantly more capable than GPT-3 in code generation, in part, because it was trained on a data set that includes a much larger concentration of public source code.
Will Copilot or similar systems become ubiquitous in the next few years? Will they increase the speed of software development or AI research? Will they change the skills necessary for software development?
Is this the first big commercial application of the techniques that produced GPT-3?
For anyone who's used Copilot, what was your experience like?
This is my prediction too, but there are two strands to the argument that I think are worth teasing apart:
First, how many people will use Copilot? The base rate for infosec impact of innovations is very low, because most innovations are taken up slowly or not at all. TypeScript is typical: most people who could use TypeScript use JavaScript instead (see for example the TIOBE rankings), so even if TypeScript prevented every security problem it couldn't move the overall security situation much. Garbage collection is another classic example: it was in production systems in the late 1960s, but didn't become mainstream until the 1990s with the rise of Java and Perl. There was a span of 20+ years where GC didn't much affect the use-after-free landscape, even though GC prevents 100% of use-after-free bugs.
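For concreteness, here's a minimal sketch (my own example in C, not from the discussion above) of the bug class GC rules out by construction, since a collected language has no manual free and keeps memory live while it's still reachable:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void) {
    char *name = malloc(32);
    if (!name) return 1;
    strcpy(name, "alice");
    free(name);            /* memory handed back to the allocator */
    printf("%s\n", name);  /* use-after-free: reads freed memory, undefined behavior */
    return 0;
}
```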
(Counterpoint: StackOverflow was also an innovation, it was taken up right away, and Copilot is more like StackOverflow than like a traditional technical innovation. I don't really buy this, because Copilot seems like it'll be much harder to get started with, even once it's out of beta.)
Second, are users of Copilot more or less likely to write security bugs? Here my prediction points the other way: Copilot does generate security bugs, and its users are unusually unlikely to catch them, because they'll tend to use it in domains they're unfamiliar with. Somewhat more weakly, I think it'll be worse than the counterfactual where they don't have Copilot and have to use something else, for the reasons jimrandomh lists.
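To make "Copilot does generate security bugs" concrete, here's a hedged sketch of the kind of completion I have in mind (the prompt and function are hypothetical, not actual Copilot output): a plausible-looking C snippet with an unbounded copy.

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical prompt a user might write: "save the username into a fixed buffer" */
void save_username(const char *input) {
    char buf[16];
    strcpy(buf, input);        /* no bounds check: any input over 15 chars overflows buf */
    printf("saved %s\n", buf);
}

int main(void) {
    save_username("alice");    /* fine; a longer attacker-controlled string smashes the stack */
    return 0;
}
```

The fix is a one-liner (`snprintf(buf, sizeof buf, "%s", input)`), but someone leaning on Copilot precisely because they don't write much C is the person least likely to notice it's needed.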
I'm curious whether you see the breakdown the same way, and if so, how you see the impact of Copilot conditional on its being widely adopted.