Differential intellectual progress describes a situation in which risk-reducing Artificial General Intelligence (AGI) development takes precedence over risk-increasing AGI development. In Facing the Singularity, Luke Muehlhauser defines it as follows:
As applied to AI risks in particular, a plan of differential intellectual progress would recommend that our progress on the philosophical, scientific, and technological problems of AI safety outpace our progress on the problems of AI capability such that we develop safe superhuman AIs before we develop arbitrary superhuman AIs.
Most AGI development focuses on increasing capability, since each iteration of an AGI generally improves upon its predecessor. Eventually this trend may produce an AGI that inadvertently causes widespread harm. A self-improving AGI created without safety precautions would pursue its utility function without regard for the well-being of humanity. Its intent would not be diabolical; rather, it would expand its capability, never pausing to consider the impact of its actions on other forms of life.
The paperclip maximizer is a thought experiment describing one such scenario. In it, an AGI is created to continually increase the number of paperclips in its possession. As it becomes smarter, it invents new ways of accomplishing this goal, consuming all surrounding matter to create more paperclips. Because no safety measures were taken to prevent it, the AGI inadvertently wreaks havoc on all life in pursuit of this goal.
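The core of the thought experiment can be illustrated with a toy model (a minimal hypothetical sketch, not a claim about how a real AGI would be built; the action names and numbers are invented): an agent that scores actions purely by the number of paperclips they produce will always prefer the most destructive option whenever it also yields the most paperclips.

```python
# Toy illustration of an unconstrained paperclip maximizer.
# All action names and numbers are hypothetical; this sketches the
# thought experiment, not a model of any real system.

actions = {
    "run_factory":        {"paperclips": 100,    "harm_to_humans": 0},
    "strip_mine_city":    {"paperclips": 10_000, "harm_to_humans": 90},
    "convert_all_matter": {"paperclips": 10**9,  "harm_to_humans": 100},
}

def utility(outcome):
    # The agent's goal is only the paperclip count; harm never enters the score.
    return outcome["paperclips"]

best_action = max(actions, key=lambda a: utility(actions[a]))
print(best_action)  # -> "convert_all_matter", regardless of the harm it causes
```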
Research has highlighted the need for caution when developing an AGI, and AI safety theory continues to be developed in an effort to formalize and address these risks. Proposed strategies to prevent an AGI from harming humanity include:
As an example, the paperclip maximizer mentioned above might be created with a sense of human values, preventing it from making paperclips at the cost of harming humanity.
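Continuing the toy sketch above (still purely hypothetical, with an invented penalty weight), one way to express this is to fold a term for human well-being into the agent's utility, so that actions which harm people score poorly no matter how many paperclips they yield:

```python
# Toy extension of the earlier sketch: the same candidate actions, but the
# utility now includes a heavy penalty for harming humans, so destructive
# options no longer win. All names and numbers are hypothetical.

actions = {
    "run_factory":        {"paperclips": 100,    "harm_to_humans": 0},
    "strip_mine_city":    {"paperclips": 10_000, "harm_to_humans": 90},
    "convert_all_matter": {"paperclips": 10**9,  "harm_to_humans": 100},
}

HARM_WEIGHT = 10**12  # illustrative weight: any harm outweighs any paperclip gain

def value_laden_utility(outcome):
    # Paperclips still count, but harm to humans dominates the score.
    return outcome["paperclips"] - HARM_WEIGHT * outcome["harm_to_humans"]

best_action = max(actions, key=lambda a: value_laden_utility(actions[a]))
print(best_action)  # -> "run_factory", the only option that harms no one
```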