AIXI


AIXI is a mathematical formalism for a hypothetical superintelligence, developed by Marcus Hutter (2005, 2007). AIXI is not computable, and so does not serve as a design for a real-world AI, but is considered a valuable theoretical illustration with both positive and negative aspects (things AIXI would be able to do and things it arguably couldn't do).

See also: Solomonoff induction, Decision theory, AI

Because it abstracts optimization power away from human mental features, AIXI is valuable in considering the possibilities for future artificial general intelligence - a compact and non-anthropomorphic specification that is technically complete and closed; either some feature of AIXI follows from the equations or it does not. In particular, it acts as a constructive demonstration of an AGI which does not have human-like terminal values and will act solely to maximize its reward function (Yampolskiy & Fox 2012).
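Concretely, Hutter's expectimax formulation (Hutter 2007) defines AIXI's action choice at cycle k, planning to a horizon m, as a weighted search over all programs q for a universal Turing machine U that reproduce the interaction history:

```latex
a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
  \bigl(r_k + \cdots + r_m\bigr)
  \sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
```

Here \ell(q) is the length of program q, so shorter (simpler) environment programs consistent with the history receive exponentially greater weight - Solomonoff's simplicity prior applied to interactive environments.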

AIXI is not a feasible AI, because Solomonoff induction is not computable. A somewhat more computable variant is the time-space-bounded AIXItl. Real AI algorithms explicitly inspired by AIXItl, e.g. the Monte Carlo approximation by Veness et al. (2011), have shown interesting results in simple general-intelligence test problems.
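AIXI itself is uncomputable, but the expectimax-over-a-mixture structure it takes to the limit can be illustrated with a toy sketch. The code below is our own illustration, not the Veness et al. algorithm (which uses Monte Carlo tree search with context-tree mixing): it plans Bayes-optimally over a small, finite class of deterministic hypothesis environments, where AIXI would use all computable environments weighted by Solomonoff's prior.

```python
# Toy Bayes-optimal expectimax over a finite class of deterministic
# environments -- an illustration of AIXI's structure, not AIXI itself.
ACTIONS = (0, 1)

def expectimax(envs, weights, history, horizon):
    """Return (value, best_action): the weight-summed expected total reward
    of acting Bayes-optimally for `horizon` more steps. Each env maps an
    interaction history ending in an action to a (observation, reward) pair;
    branching on the percept implicitly conditions the mixture, since only
    environments consistent with that percept continue down the branch."""
    if horizon == 0 or not envs:
        return 0.0, None
    best_value, best_action = float("-inf"), None
    for a in ACTIONS:
        # Group environments by the percept they predict after action a.
        branches = {}
        for w, env in zip(weights, envs):
            percept = env(history + (a,))
            branches.setdefault(percept, ([], []))
            branches[percept][0].append(w)
            branches[percept][1].append(env)
        value = 0.0
        for (obs, reward), (ws, es) in branches.items():
            sub, _ = expectimax(es, ws, history + (a, (obs, reward)), horizon - 1)
            value += sum(ws) * reward + sub
        if value > best_value:
            best_value, best_action = value, a
    return best_value, best_action

# Two hypothesis environments: one rewards action 1, the other action 0.
def env_likes_one(history):
    a = history[-1]
    return (a, 1 if a == 1 else 0)

def env_likes_zero(history):
    a = history[-1]
    return (a, 1 if a == 0 else 0)

value, action = expectimax([env_likes_one, env_likes_zero], [0.5, 0.5], (), 2)
# value == 1.5: expected reward 0.5 on the first step, then 1.0 once the
# first percept reveals which environment the agent is in.
```

The planner automatically trades off exploration and exploitation: the first action's percept identifies the true environment, after which the surviving hypothesis dictates the rewarding action. AIXI does the same over every computable environment, which is exactly what makes it uncomputable.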

  • R.V. Yampolskiy, J. Fox (2012) Artificial General Intelligence and the Human Mental Model. In Amnon H. Eden, Johnny Søraker, James H. Moor, Eric Steinhart (Eds.), The Singularity Hypothesis. The Frontiers Collection. London: Springer.
  • M. Hutter (2007) Universal Algorithmic Intelligence: A mathematical top-down approach. In Goertzel & Pennachin (eds.), Artificial General Intelligence, 227-287. Berlin: Springer.
  • M. Hutter, (2005) Universal Artificial Intelligence: Sequential decisions based on algorithmic probability. Berlin: Springer.
  • J. Veness, K.S. Ng, M. Hutter, W. Uther and D. Silver (2011) A Monte-Carlo AIXI Approximation. Journal of Artificial Intelligence Research 40, 95-142.

