Differential intellectual progress describes a situation in which, in terms of human safety, risk-reducing Artificial General Intelligence (AGI) development takes precedence over risk-increasing AGI development. In Facing the Singularity, Luke Muehlhauser defines it as follows:
As applied to AI risks in particular, a plan of differential intellectual progress would recommend that our progress on the philosophical, scientific, and technological problems of AI safety outpace our progress on the problems of AI capability such that we develop safe superhuman AIs before we develop arbitrary superhuman AIs.
Risk-increasing Progress
Technological advances made without corresponding development of safety mechanisms increase the capacity for friendly and unfriendly AGI alike. At present, most AGI research is concerned with increasing capability rather than safety, and thus most progress increases the risk of a widespread negative outcome.
- Increased computing power. Computing power continues to rise roughly in step with Moore's Law, providing the raw capacity for smarter AGIs (the first sketch after this list illustrates this kind of exponential growth). Greater raw capacity also allows for more 'brute-force' programming, increasing the probability that someone creates an AGI without properly understanding it. Such an AGI would also be harder to control.
- More efficient algorithms. Mathematical advances can produce substantial reductions in computing time, allowing an AGI to do more within its current operating capacity. The ability to carry out a larger number of computations with the same amount of hardware has the net effect of making the AGI smarter (the second sketch after this list compares a naive and an efficient algorithm on the same task).
- Extensive datasets. The 'Information Age' has produced immense amounts of data. As data storage capacity has increased, so has the amount of information that is collected and stored, giving an AGI immediate access to massive amounts of knowledge.
- Advanced neuroscience. Cognitive scientists have identified several algorithms used by the human brain that contribute to our intelligence, giving rise to a field called 'Computational Cognitive Neuroscience.' This has led to developments such as brain implants that have helped restore memory and motor learning in animals; the underlying algorithms might conceivably contribute to AGI development.
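To make the first point concrete, here is a minimal sketch (in Python) of the arithmetic behind a Moore's-Law-style projection. The starting capacity and the doubling period are illustrative assumptions, not measured values.

```python
# Illustrative arithmetic only: exponential growth of raw computing capacity
# under an assumed fixed doubling period.

def projected_capacity(initial_ops_per_sec: float, years: float,
                       doubling_period_years: float = 1.5) -> float:
    """Capacity after `years`, assuming it doubles every `doubling_period_years`."""
    return initial_ops_per_sec * 2 ** (years / doubling_period_years)

if __name__ == "__main__":
    start = 1e15  # hypothetical starting point: 1 petaFLOPS
    for years in (3, 6, 9):
        print(f"after {years} years: {projected_capacity(start, years):.1e} ops/sec")
```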
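The second point, that better algorithms make the same hardware effectively more powerful, can be illustrated with any textbook example. The sketch below times a naive exponential-time Fibonacci computation against a linear-time version of the same function; the numbers are machine-dependent, but the gap widens rapidly as the input grows.

```python
# Illustrative only: the same task on the same hardware, with and without an
# efficient algorithm. The naive recursion recomputes subproblems exponentially;
# the iterative version runs in linear time.
import time

def fib_naive(n: int) -> int:
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

def fib_fast(n: int) -> int:
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

if __name__ == "__main__":
    n = 30
    t0 = time.perf_counter(); fib_naive(n); t1 = time.perf_counter()
    fib_fast(n); t2 = time.perf_counter()
    print(f"fib({n}): naive {t1 - t0:.3f}s, fast {t2 - t1:.6f}s")
```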
The above developments could also help in the creation of Friendly AI. However, Friendliness requires the development of both AGI and Friendliness theory, while an Unfriendly Artificial Intelligence might be created by AGI efforts alone. Thus developments that bring AGI closer or make it more powerful will increase risk, at least if not combined with work on Friendliness.
Risk-reducing Progress
There are several areas of research which, when further developed, could provide the means to produce AGIs that are friendly to humanity. These areas should be prioritized to prevent possible disasters.
- Computer security. One way an AGI might rapidly grow more powerful is by taking over poorly protected computers on the Internet. Hardening computers and networks against such attacks would help reduce this risk.
- AGI confinement. Incorporating physical mechanisms that limit an AGI could prevent it from inflicting damage. Methods of physical isolation (such as AI boxing) have already been proposed, as well as embedded solutions that shut down parts of the system under certain conditions (a toy illustration follows this list).
- Friendly AGI goals. Embedding an AGI with friendly terminal values reduces the risk that it will take actions harmful to humanity. Work in this area has led to many questions about what exactly should be implemented, and precise methods which, when executed within an AGI, would prevent it from harming humanity have not yet materialized.
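As a toy illustration of the 'shut down under certain conditions' idea, the sketch below runs an untrusted computation in a separate process and terminates it when it exceeds a time budget. All names and thresholds are hypothetical, and nothing this simple would confine a genuinely capable AGI; it only shows the shape of an embedded tripwire.

```python
# A toy "tripwire" sketch: a monitor terminates a worker process once it
# exceeds a wall-clock budget. Thresholds and names are hypothetical; real
# AGI confinement would require far stronger guarantees than this.
import multiprocessing
import time

WALL_CLOCK_BUDGET = 2.0  # hypothetical limit, in seconds

def untrusted_worker() -> None:
    while True:  # stands in for an unbounded, unvetted computation
        pass

if __name__ == "__main__":
    worker = multiprocessing.Process(target=untrusted_worker)
    worker.start()
    deadline = time.monotonic() + WALL_CLOCK_BUDGET
    while worker.is_alive() and time.monotonic() < deadline:
        time.sleep(0.1)
    if worker.is_alive():
        worker.terminate()  # the tripwire fires: shut the component down
        print("budget exceeded; worker terminated")
```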