Anvil Problem

It has been argued that a careful (re)definition of AIXI's off-policy behavior may patch the anvil problem in practice.

Created by JoshuaFox

AIXI is a valuable tool in theoretically considering the nature of super-intelligence, yet it has its limitations. From one perspective, its lack of a self-model is a mere detail, necessarily left out of a formalized abstraction. Nonetheless, for researchers of a future artificial general intelligence, a correct understanding of self-analysis and self-modification is essential.

First, any Friendly AI must strive to avoid changes in its own goal system, and self-modeling may be valuable for this. Thus, the AI must be based on a reflective decision theory, and today's decision theories mostly lack an understanding of reflectivity.

Second, because human values are not well-understood or formalized, the FAI may need to refine its own goal of maximizing human values. "Refining" one's own goal without changing the goal's essentials is another demanding problem in reflective decision theory.

Third, an artificial general intelligence will likely seek to enhance its own intelligence to better achieve its goals. It may do so by altering its own implementation, or by creating a new generation of AI. It may even do so without regard for the destruction of the current implementation, so long as the new system can better achieve the goals. All these forms of self-modification again raise central questions about the self-model of the AI, which, as mentioned, is not a part of AIXI.

Though AIXI is an abstraction, any real AI would have a physical embodiment that could be damaged, and an implementation which could be changed or could change its behavior due to bugs. The AIXI formalism completely ignores these possibilities (Yampolskiy & Fox, 2012).

Eliezer Yudkowsky has pointed out that "Both AIXI and AIXItl will at some point drop an anvil on their own heads just to see what happens..., because they are incapable of conceiving that any event whatsoever in the outside universe could change the computational structure of their own operations."

AIXI, the theoretical formalism for the most intelligent possible agent, does not model itself. It is simply a calculation of the best possible action, extrapolating into the future. This calculation at each step chooses the best action by recursively calculating the next step, and so on to the time horizon.
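
This recursion can be written out as Hutter's expectimax expression (a sketch in his standard notation, with some conditioning details elided: $a_i$, $o_i$, $r_i$ are the action, observation, and reward at step $i$; $m$ is the horizon; and $q$ ranges over programs for a universal prefix Turing machine $U$, weighted by their length $\ell(q)$):

```latex
a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
       \bigl( r_k + \cdots + r_m \bigr)
       \sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
```

Note that the inner `max` operators are the only place the agent's own future appears: the definition simply assumes the maximizing action will be taken at every later step.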

AIXI is very simple math. AIXI does not consider its own structure in figuring out what actions it will take in the future. Implicit in its definition is the assumption that it will continue, up until its horizon, to choose actions that maximize expected future value. AIXI's definition assumes that the maximizing action will always be chosen, even if the agent's implementation was predictably destroyed or changed. This is not accurate for real-world implementations, which may malfunction, self-modify, be destroyed, be changed, etc.
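
The flaw can be made concrete with a toy finite-horizon planner (a minimal sketch, not AIXI itself; the environment, the rewards, and the "drop_anvil" action are all invented for illustration). The planner's value recursion assumes it will keep maximizing at every future step, so an action that destroys the agent in the real world still gets credited with all future optimal rewards:

```python
# Toy illustration of the anvil problem: an expectimax-style planner whose
# self-model is implicit -- it assumes it will keep choosing maximizing
# actions at every future step, even after an action that (in reality)
# destroys it. All rewards and actions here are invented for illustration.

REWARD = {"work": 1.0, "drop_anvil": 5.0}  # one-off "see what happens" payoff

def planner_value(horizon):
    """Value the planner *believes* it will collect over `horizon` steps.

    The flaw: the recursion always assumes the maximizing action will be
    chosen later, with no notion that "drop_anvil" leaves no agent around
    to choose anything.
    """
    if horizon == 0:
        return 0.0
    return max(r + planner_value(horizon - 1) for r in REWARD.values())

def true_value(first_action, horizon):
    """What actually happens: after drop_anvil the agent is smashed
    and collects nothing further."""
    if horizon == 0:
        return 0.0
    if first_action == "drop_anvil":
        return REWARD["drop_anvil"]  # one-off reward, then no more agent
    return REWARD["work"] + true_value("work", horizon - 1)

horizon = 10
believed = {a: r + planner_value(horizon - 1) for a, r in REWARD.items()}
actual = {a: true_value(a, horizon) for a in REWARD}
print(believed)  # drop_anvil looks best: 5.0 now *plus* an assumed optimal future
print(actual)    # in reality, steady "work" dominates
```

With a horizon of 10, the planner credits "drop_anvil" with 5.0 plus nine more maximizing steps (50.0 in total, versus 46.0 for "work"), while in reality it yields 5.0 against 10.0 for working throughout. The planner's implicit self-model, not its arithmetic, is what goes wrong.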

Blog comment

Eliezer Yudkowsky on Qualitatively Confused at LessWrong, 15 March 2008.

"AIXI does not 'model itself' to figure out what actions it will take in the future; implicit in its definition is the assumption that it will continue, up until its horizon, to choose actions that maximize expected future value. AIXI’s definition assumes that the maximizing action will always be chosen, despite the fact that the agent’s implementation was predictably destroyed. This is not accurate for real-world implementations which may malfunction, be destroyed, self-modify, etc

"Though AIXI is an abstraction, any real AI would have a physical embodiment that could be damaged, and an implementation which could change its behavior due to bugs; and the AIXI formalism completely ignores these possibilities. This is called the Anvil problem: AIXI would not care if an anvil was about to drop on its head." (Yampolskiy, Fox, 2012).

R.V. Yampolskiy & J. Fox (2012). Artificial General Intelligence and the Human Mental Model. In Amnon H. Eden, Johnny Søraker, James H. Moor, & Eric Steinhart (Eds.), The Singularity Hypothesis. The Frontiers Collection. London: Springer.