https://github.blog/2021-06-29-introducing-github-copilot-ai-pair-programmer/
GitHub Copilot is powered by OpenAI Codex, a new AI system created by OpenAI. OpenAI Codex has broad knowledge of how people use code and is significantly more capable than GPT-3 in code generation, in part, because it was trained on a data set that includes a much larger concentration of public source code.
Will Copilot or similar systems become ubiquitous in the next few years? Will they increase the speed of software development or AI research? Will they change the skills necessary for software development?
Is this the first big commercial application of the techniques that produced GPT-3?
For anyone who's used Copilot, what was your experience like?
One of the major, surprising consequences of this is that it's likely to become infeasible to develop software in secret. If AI tools like this confer a large advantage, and the tools are exclusively cloud-hosted, then maintaining the confidentiality of a code base will mean forgoing those tools. And if the advantage they confer is large enough, forgoing them may simply not be an option.
On the plus side, this will put malware developers at something of a disadvantage, since they presumably can't feed their code to a cloud service without exposing it. It might also increase the feasibility of preventing people from running dangerous AI experiments (which sure seem like they're getting closer by the day). On the minus side, this creates an additional path to internet takeover, and increases centralization of power in general.
Or maybe developers will find good ways of hacking value out of cloud-hosted AI systems while keeping their actual code base secret. For example, maybe you maintain a parallel code base that is developed in public, plus a way of translating code-gen outputs there into something useful for your actual secret code base.
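As a toy illustration of what such a translation layer might look like, here is a minimal sketch in Python. It assumes the simplest possible scheme: the public "shadow" repo uses innocuous identifier names, and a fixed mapping rewrites them into the secret repo's real names. All names here (`PUBLIC_TO_SECRET`, `translate`, the example identifiers) are hypothetical, invented for this sketch; a real system would need to handle far more than identifier renaming.

```python
import re

# Hypothetical mapping from the innocuous names used in the public
# shadow repo to the real names used in the secret code base.
PUBLIC_TO_SECRET = {
    "process_record": "score_candidate_molecule",
    "Record": "Molecule",
    "record_db": "compound_library",
}

def translate(generated_code: str) -> str:
    """Rewrite identifiers in code-gen output for the secret code base."""
    # Word boundaries keep partial matches (e.g. "record_db2") intact.
    pattern = re.compile(
        r"\b(" + "|".join(map(re.escape, PUBLIC_TO_SECRET)) + r")\b"
    )
    return pattern.sub(lambda m: PUBLIC_TO_SECRET[m.group(1)], generated_code)

# A Copilot-style suggestion produced against the public shadow repo:
suggestion = "def process_record(r: Record):\n    return record_db.lookup(r)"
print(translate(suggestion))
```

This only hides *names*, of course; the structure of the code still leaks into the public repo, which is arguably the more sensitive information.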