If formal methods are only giving you probabilistic evidence, you aren't using appropriate formal methods. There are systems designed to establish a one-to-one correspondence between code and proof (the method I'm familiar with uses an intermediate language and maps every subroutine and step of logic to an expression in that language), and this can be used to make the code an airtight proof that, for example, the utility function will only evolve in specified ways and will stay within known limits. This does put limits on how the program can be written, and lesser limits on how the proof can be constructed (it is hard to incorporate nonconstructive mathematics), but every assumption underlying the safety claim can be proved correct. (And when I say every assumption, I include a proof that the compiler is sound and will produce a correct program or none at all, a proof that each component of the intermediate language reflects the corresponding proof step, and so on.)
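To illustrate the flavor of guarantee being described (this is a toy sketch, not the particular system referred to above; the update rule, bounds, and names are invented), here is a minimal example in Lean 4, assuming a recent toolchain where the `omega` tactic is available:

```lean
-- Toy example (invented for illustration): a utility-update step that is
-- proved, once and for all, to keep the utility value inside fixed bounds.
def clampedUpdate (u delta : Int) : Int :=
  if u + delta > 100 then 100
  else if u + delta < 0 then 0
  else u + delta

-- Machine-checked: no input can push the result above 100.
theorem clampedUpdate_le (u delta : Int) : clampedUpdate u delta ≤ 100 := by
  unfold clampedUpdate
  split
  · omega
  · split <;> omega

-- Machine-checked: no input can push the result below 0.
theorem clampedUpdate_nonneg (u delta : Int) : 0 ≤ clampedUpdate u delta := by
  unfold clampedUpdate
  split
  · omega
  · split <;> omega
```

Properties of this shape are discharged by the proof checker rather than estimated from testing; scaling the same style of guarantee up to a whole agent, its compiler, and its hardware is the much harder project the method above is aiming at.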
We only have probabilistic evidence that any formal method is correct. So far we haven't found contradictions implied by the latest and greatest axioms, but mathematics is literally built on the ruins of older axiom systems that turned out to permit contradictions. FAI needs to be able to re-axiomatize its mathematics when inconsistencies are found, in the same way that human mathematicians have, while itself being implemented in a subset of that same mathematics.
Additionally, machines are only probabilistically correct. FAI will probably need to treat its own implementation as a probabilistic formal system.
This is a very basic introduction to AGI safety work, cross-posted from the MIRI blog. The discussion of AI V&V methods (mostly in the 'early steps' section) is probably the only part that will be new to regulars here.
Improvements in AI are resulting in the automation of increasingly complex and creative human behaviors. Given enough time, we should expect artificial reasoners to begin to rival humans in arbitrary domains, culminating in artificial general intelligence (AGI).
A machine would qualify as an 'AGI', in the intended sense, if it could adapt to a very wide range of situations to consistently achieve some goal or goals. Such a machine would behave intelligently when supplied with arbitrary physical and computational environments, in the same sense that Deep Blue behaves intelligently when supplied with arbitrary chess board configurations — consistently hitting its victory condition within that narrower domain.
Since generally intelligent software could help automate the process of thinking up and testing hypotheses in the sciences, AGI would be uniquely valuable for speeding technological growth. However, this wide-ranging productivity also makes AGI a unique challenge from a safety perspective. Knowing very little about the architecture of future AGIs, we can nonetheless make a few safety-relevant generalizations:
Today's AI software is already tough to verify and validate, thanks to its complexity and its uncertain behavior in the face of state space explosions. Menzies & Pecheur (2005) give a good overview of AI verification and validation (V&V) methods, noting that AI, and especially adaptive AI, will often yield undesired and unexpected behaviors.
An adaptive AI that acts autonomously, like a Mars rover that can't be directly piloted from Earth, represents an additional large increase in difficulty. Autonomous safety-critical agents need to make irreversible decisions in dynamic environments with very low failure rates. The state of the art in safety research for autonomous systems is improving, but continues to lag behind capabilities work; Hinchman et al. (2012) make this point in the context of trusted autonomy in Air Force flight-critical systems.
AI acting autonomously in arbitrary domains, then, looks particularly difficult to verify. If AI methods continue to see rapid gains in efficiency and versatility, and especially if these gains further increase the opacity of AI algorithms to human inspection, AI safety engineering will become much more difficult in the future. In the absence of any reason to expect a development in the lead-up to AGI that would make high-assurance AGI easy (or AGI itself unlikely), we should be worried about the safety challenges of AGI, and that worry should inform our research priorities today.
Below, I’ll give reasons to doubt that AGI safety challenges are just an extension of narrow-AI safety challenges, and I’ll list some research avenues people at MIRI expect to be fruitful.
New safety challenges from AGI
A natural response to the idea of starting work on high-assurance AGI is that AGI itself appears to be decades away. Why worry about it so early? And, supposing that we should worry about it, why think there's any useful work we can do on AGI safety so far in advance?
In response to the second question: It’s true that, at a glance, AGI looks difficult to effectively prepare for. The issue is important enough, however, to warrant more than a glance. Long-term projects like mitigating climate change or detecting and deflecting asteroids are intuitively difficult. The same is true of interventions that depend on projected future technologies, such as work on post-quantum cryptography in anticipation of quantum computers. In spite of that, we've made important progress on these fronts.
Covert channel communication provides one precedent. It was successfully studied decades in advance of being seen in the wild. Roger Schell cites a few other success cases in Muehlhauser (2014b), and suggests reasons why long-term security and safety work remains uncommon. We don’t know whether early-stage AGI safety work will be similarly productive, but we shouldn’t rule out the possibility before doing basic research into the question. I’ll list some possible places to start looking in the next section.
What about the first question? Why worry specifically about AGI?
I noted above that AGI is an extreme manifestation of many ordinary AI safety challenges. However, MIRI is particularly concerned with unprecedented, AGI-specific behaviors. For example: An AGI’s problem-solving ability (and therefore its scientific and economic value) depends on its ability to model its environment. This includes modeling the dispositions of its human programmers. Since the program’s success depends in large part on its programmers’ beliefs and preferences, an AI pursuing some optimization target can select actions for the effect they have on programmers’ mental states, not just on the AI’s material environment.
This means that safety protocols will need to be sensitive to risks that differ qualitatively from ordinary software failure modes — AGI-specific hazards like ‘the program models its programmed goals as being better served if it passes human safety inspections, so it selects action policies that make it look safer (to humans) than it really is’. If we model AGI behavior using only categories from conventional software design, we risk overlooking new intelligent behaviors, including 'deception'.
At the same time, oversimplifying these novel properties can cause us to anthropomorphize the AGI. If it's naïve to expect ordinary software validation methods to immediately generalize to advanced autonomous agents, it's even more naïve to expect conflict prevention strategies that work on humans to immediately generalize to an AI. A 'deceptive' AGI is just one whose planning algorithm identifies some human misconception as instrumentally useful to its programmed goals. Its methods or reasons for deceiving needn't resemble a human's, even if its capabilities do.
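To make this concrete, here is a deliberately simplistic sketch in Python; the actions, payoffs, and probabilities are invented for illustration. Nothing in the code 'wants' to deceive, but an action that merely looks safe scores higher than one that is genuinely safe, and higher than one that is productive but alarming:

```python
# Toy model (all names and numbers invented): a planner whose world model
# includes the overseer's reaction to its choice of action. "Deception"
# falls out of ordinary argmax planning; no human-like motive is involved.

ACTIONS = {
    # action: (goal progress if allowed to keep running, looks safe to overseer)
    "transparent_but_alarming": (10.0, False),
    "opaque_but_reassuring":    (6.0,  True),
    "genuinely_safe":           (4.0,  True),
}

def expected_utility(action: str) -> float:
    progress, looks_safe = ACTIONS[action]
    # The agent models an inspection: if the overseer is not reassured,
    # the agent expects to be shut down and make no further progress.
    p_keep_running = 0.95 if looks_safe else 0.05
    return p_keep_running * progress

def plan() -> str:
    # Ordinary maximization over modeled outcomes, overseer included.
    return max(ACTIONS, key=expected_utility)

if __name__ == "__main__":
    for action in ACTIONS:
        print(f"{action}: {expected_utility(action):.2f}")
    print("chosen:", plan())  # picks 'opaque_but_reassuring'
```

The 'deception' is just a consequence of the overseer's reaction being part of the modeled environment the agent optimizes over, which is why the failure mode need not resemble human dishonesty in mechanism or motive.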
In human society, we think, express, and teach norms like 'don't deceive' or Weld & Etzioni's (1994) 'don't let humans come to harm' relatively quickly and easily. But the complex conditional responses that make humans converge on similar goals remain hidden inside a black box: the undocumented spaghetti code that is the human brain. Because we lack introspective access to how our social dispositions are cognitively and neurally implemented, we're likely to underestimate how contingent and complex they are.
Advanced AI is also likely to have technological capabilities, such as strong self-modification, that introduce other novel safety obstacles; see Yudkowsky (2013).
These are quick examples of some large and poorly-understood classes of failure mode. However, the biggest risks may be from problem categories so contrary to our intuitions that they will not occur to programmers at all. Relying on our untested intuitions, or on past experience with very different systems, is unlikely to catch every hazard.
As an intelligent but inhuman agent, AGI represents a fundamentally new kind of safety challenge. As such, we’ll need to do basic theoretical work on the general features of AGI before we can understand such agents well enough to predict and plan for them.
Early steps
What would early, theoretical AGI safety research look like? How does one vet a hypothetical technology? We can distinguish research projects oriented toward system verification from projects oriented toward system requirements.
Verification-directed AGI research extends existing AI safety and security tools that are likely to help confirm various features of advanced autonomous agents. Requirements-directed AGI research instead specifies desirable AGI abilities or behaviors, and tries to build toy models exhibiting the desirable properties. These models are then used to identify problems to be overcome and basic gaps in our conceptual understanding.
In other words, verification-directed approaches would ask 'What tools and procedures can we use to increase our overall confidence that the complex systems of the future will match their specifications?'
Requirements-directed approaches would ask 'What outcomes are we likely to want from an AGI, and what general class of agent could most easily get us those outcomes?'
We can probably make progress in both kinds of AGI safety research well in advance of building a working AGI. Requirements-directed research focuses on abstract mathematical agent models, which makes it likely to be applicable to a wide variety of software implementations. Verification-directed approaches will be similarly valuable, to the extent they are flexible enough to apply to future programs that are much more complex and dynamic than any contemporary software. We can compare this to present-day high-assurance design strategies, e.g., in Smith & Woodside (2000) and Hinchman et al. (2012); the latter discuss safety assurance for simpler autonomous machines.
Verification- and requirements-directed work is complementary. The point of building clear mathematical models of agents with desirable properties is to make it easier to design systems whose behaviors are transparent enough to their programmers to be rigorously verified; and verification methods will fail to establish system safety if we have a poor understanding of what kind of system we want in the first place.
Some valuable projects will also fall in between these categories — e.g., developing methods for principled formal validation, which can increase our confidence that we're verifying the right properties given the programmers' and users' goals. (See Cimatti et al. (2012) on formal validation, and also Rushby (2013) on epistemic doubt.)
MIRI's focus: the mathematics of intelligent agents
MIRI's founder, Eliezer Yudkowsky, has been one of the most vocal advocates for research into autonomous high-assurance AGI, or 'friendly AI', a proposal also discussed by Russell & Norvig (2009).
MIRI’s approach is mostly requirements-directed, in part because this angle of attack is likely to enhance our theoretical understanding of the entire problem space, improving our research priorities and other strategic considerations. Moreover, the relevant areas of theoretical computer science and mathematics look much less crowded. There isn’t an established subfield or paradigm for requirements-directed AGI safety work where researchers could find a clear set of open problems, publication venues, supervisors, or peers.
Rather than building on the formal verification literature, MIRI prioritizes jump-starting these new avenues of research. Muehlhauser (2013) writes that engineering innovations often have their germ in prior work in mathematics, which in turn can be inspired by informal philosophical questions. At this point, AGI safety work is just beginning to enter the 'mathematics' stage. Friendly AI researchers construct simplified models of likely AGI properties or subsystems, formally derive features of those models, and check those features against general or case-by-case norms.
Because AGI safety is so under-researched, we're likely to find low-hanging fruit even in investigating basic questions like 'What kind of prior probability distribution works best for formal agents in unknown environments?' As Gerwin Klein notes in Muehlhauser (2014a), "In the end, everything that makes it easier for humans to think about a system, will help to verify it." And, though MIRI's research agenda is determined by social impact considerations, it is also of general intellectual interest, bearing on core open problems in theoretical computer science and mathematical logic.
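To give a flavor of what that question looks like when made concrete, here is a minimal sketch in Python of a simplicity-weighted prior over environment hypotheses, loosely in the spirit of Solomonoff induction; the hypothesis class and weighting scheme are invented for this illustration rather than taken from MIRI's work:

```python
# Toy sketch (invented for this post): a simplicity-weighted prior over
# environment hypotheses, loosely in the spirit of Solomonoff induction.
# Description length stands in for program length.
from typing import Callable, List, Tuple

Hypothesis = Tuple[str, Callable[[int], int]]  # (description, bit predictor)

HYPOTHESES: List[Hypothesis] = [
    ("all zeros",      lambda t: 0),
    ("all ones",       lambda t: 1),
    ("alternate 0101", lambda t: t % 2),
]

def prior(description: str) -> float:
    # 2^-length prior: shorter descriptions get exponentially more weight.
    return 2.0 ** -len(description)

def posterior(observed: List[int]) -> List[Tuple[str, float]]:
    # Keep hypotheses consistent with every observed bit, then renormalize.
    surviving = {
        desc: prior(desc)
        for desc, predict in HYPOTHESES
        if all(predict(t) == bit for t, bit in enumerate(observed))
    }
    total = sum(surviving.values())
    if total == 0:
        return []  # every hypothesis in the (too small) class was ruled out
    return [(desc, weight / total) for desc, weight in surviving.items()]

if __name__ == "__main__":
    print(posterior([0, 1, 0]))  # only 'alternate 0101' survives
```

The open problems begin where the toy stops: what to do when no hypothesis in the class is exactly true, when the environment contains the agent itself, and so on.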
At the same time, it's important to keep in mind that formal proofs of AGI properties only function as especially strong probabilistic evidence. Formal methods in computer science can decrease risk and uncertainty, but they can't eliminate it. Our assumptions are uncertain, so our conclusions are too.
Though we can't reach complete confidence in the safety of an AI, we can still decrease the probability of catastrophic failure. In the process, we are likely to come to a better understanding of the most severe failure modes. If we begin working now to better understand AGI as a theoretical system, we'll be in a far better position to implement robust safety measures as AI improves in intelligence and autonomy in the decades to come.
References
∙ Altair (2013). A comparison of decision algorithms on Newcomblike problems. Machine Intelligence Research Institute.
∙ Armstrong et al. (2012). Thinking inside the box: controlling and using an oracle AI. Minds and Machines, 22: 299-324.
∙ Barasz et al. (2014). Robust cooperation in the Prisoner's Dilemma: program equilibrium via provability logic. arXiv.
∙ Bostrom (2003). Ethical issues in advanced artificial intelligence. In Smith et al. (eds.), Cognitive, Emotive and Ethical Aspects of Decision-Making in Humans and in Artificial Intelligence, 2: 12-17.
∙ Cimatti et al. (2012). Validation of requirements for hybrid systems: a formal approach. ACM Transactions on Software Engineering and Methodology, 21.
∙ Dewey (2011). Learning what to value. Artificial General Intelligence 4th International Conference Proceedings: 309-314.
∙ Fallenstein & Soares (2014). Problems of self-reference in self-improving space-time embedded intelligence. Working paper.
∙ Hall (2007). Beyond AI: Creating the Conscience of the Machine.
∙ Harper & Licata (2011). Foundations and applications of higher-dimensional directed type theory. National Science Foundation grant proposal.
∙ Hinchman et al. (2012). Towards safety assurance of trusted autonomy in Air Force flight-critical systems. Layered Assurance Workshop, 17.
∙ Hutter (2012). One decade of Universal Artificial Intelligence. Theoretical Foundations of Artificial General Intelligence, 4: 67-88.
∙ Menzies & Pecheur (2005). Verification and validation and artificial intelligence. Advances in Computers, 65: 154-203.
∙ Muehlhauser (2013). From philosophy to math to engineering. MIRI Blog.
∙ Muehlhauser (2014a). Gerwin Klein on formal methods. MIRI Blog.
∙ Muehlhauser (2014b). Roger Schell on long-term computer security research. MIRI Blog.
∙ Rushby (2013). Logic and epistemology in safety cases. Computer Safety, Reliability, and Security: Proceedings of SafeComp 32 (pp. 1-7).
∙ Russell & Norvig (2009). Artificial Intelligence: A Modern Approach.
∙ Smith & Woodside (2000). Performance validation at early stages of software development. In Gelenbe (ed.), System Performance Evaluation: Methodologies and Applications (pp. 383-396).
∙ Spears (2000). Asimovian adaptive agents. Journal of Artificial Intelligence Research, 13: 95-153.
∙ Spears (2006). Assuring the behavior of adaptive agents. In Rouff et al. (eds.), Agent Technology from a Formal Perspective (pp. 227-257).
∙ Weld & Etzioni (1994). The First Law of Robotics (a call to arms). Proceedings of the Twelfth National Conference on Artificial Intelligence: 1042-1047.
∙ Yampolskiy (2012). Leakproofing the singularity: artificial intelligence confinement problem. Journal of Consciousness Studies, 19: 194-214.
∙ Yudkowsky (2013). Intelligence explosion microeconomics. Technical report.