AIXI is a mathematical formalism for a hypothetical (super)intelligence, developed by Marcus Hutter (2005, 2007). AIXI is not computable, and so does not serve as a design for a real-world AI, but is considered a valuable theoretical illustration with both positive and negative aspects (things AIXI would be able to do and things it arguably couldn't do).
AIXI can be viewed as the border between AI problems that would be 'simple' to solve using unlimited computing power and problems which are structurally 'complicated'.
Hutter (2007) describes AIXI as a combination of decision theory and algorithmic information theory: "Decision theory formally solves the problem of rational agents in uncertain worlds if the true environmental prior probability distribution is known. Solomonoff's theory of universal induction formally solves the problem of sequence prediction for unknown prior distribution. We combine both ideas and get a parameterless theory of universal Artificial Intelligence."
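Schematically, in Hutter's notation, the resulting action choice can be written as a single expectimax expression over all programs consistent with the interaction history (with $a_i$, $o_i$, $r_i$ the actions, observations, and rewards up to the horizon $m$, $U$ a universal Turing machine, and $\ell(q)$ the length of program $q$):

$$a_k \;:=\; \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} \big[r_k + \cdots + r_m\big] \sum_{q\,:\,U(q,\,a_1 \ldots a_m)\,=\,o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}$$

The outer maximizations and sums are the decision-theoretic part (pick the action with the highest expected reward); the inner sum over programs weighted by $2^{-\ell(q)}$ is the Solomonoff part.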
To do this, AIXI guesses at a probability distribution over possible environments using Solomonoff induction, a formalization of Occam's razor: simpler computations are a priori more likely to describe the environment than more complex ones. This distribution is then Bayes-updated by how well each model fits the evidence (more precisely, by throwing out every computation that has not exactly reproduced the environmental data so far, which for technical reasons is roughly equivalent). AIXI then calculates the expected reward of each action it might take, weighting each possible environment by its updated probability. It chooses the best action by extrapolating recursively out to its time horizon, assuming that at each future step it will again choose the best possible action by the same procedure.
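The following toy sketch illustrates this procedure on a tiny, hand-picked hypothesis class. It is emphatically not AIXI itself: real AIXI ranges over all programs for a universal Turing machine and is incomputable. The hypotheses, their "complexities," and all names below are invented for illustration, and observations are folded into rewards for brevity.

```python
# Toy AIXI-style planner: a 2^(-complexity) prior over a few hand-made
# "programs", Bayes-updating by discarding inconsistent programs, and
# recursive expectimax over a finite horizon. Illustrative only.

def always_one(acts):
    return 1                              # reward 1 no matter what

def last_action(acts):
    return int(acts[-1] == 1)             # reward 1 iff the latest action was 1

def alternate(acts):
    return int(len(acts) % 2 == 0)        # reward 1 on every second step

# Each "program" predicts the reward from the action history; `complexity`
# stands in for program length in the Occam/Solomonoff-style prior.
HYPOTHESES = [
    {"predict": always_one, "complexity": 1},
    {"predict": last_action, "complexity": 2},
    {"predict": alternate, "complexity": 3},
]

def prior(h):
    return 2.0 ** -h["complexity"]

def consistent(h, history):
    """Keep only programs that have exactly reproduced the rewards seen so far."""
    acts = []
    for action, reward in history:
        acts.append(action)
        if h["predict"](tuple(acts)) != reward:
            return False
    return True

def action_value(action, history, horizon):
    """Expected return of `action`, mixing surviving hypotheses by prior weight."""
    surviving = [h for h in HYPOTHESES if consistent(h, history)]
    total_weight = sum(prior(h) for h in surviving)
    acts = tuple(a for a, _ in history) + (action,)
    value = 0.0
    for reward in (0, 1):
        w = sum(prior(h) for h in surviving if h["predict"](acts) == reward)
        if w > 0:
            # Recurse: future steps again filter hypotheses and act optimally.
            value += (w / total_weight) * (
                reward + best_value(history + [(action, reward)], horizon - 1))
    return value

def best_value(history, horizon):
    """Value of acting optimally for `horizon` more steps (recursive expectimax)."""
    if horizon == 0:
        return 0.0
    return max(action_value(a, history, horizon) for a in (0, 1))

def choose_action(history, horizon=3):
    return max((0, 1), key=lambda a: action_value(a, history, horizon))

if __name__ == "__main__":
    print(choose_action([]))            # plan from an empty history
    print(choose_action([(1, 1)]))      # after taking action 1 and seeing reward 1
```

Note how the Bayes update appears only implicitly: at every node of the expectimax tree, the hypotheses inconsistent with the (hypothetical) history so far are simply discarded and the rest renormalized, which matches the "throw out all computations that have not exactly fit the data" description above.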
The agent's intelligence is defined by its expected reward across all environments, with each environment's weight determined by its complexity.
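One way this is made precise (stated here schematically, in the style of Legg and Hutter's universal intelligence measure) is as a complexity-weighted sum over computable environments:

$$\Upsilon(\pi) \;:=\; \sum_{\mu \in E} 2^{-K(\mu)}\, V^\pi_\mu$$

where $E$ is the class of computable environments, $K(\mu)$ is the Kolmogorov complexity of environment $\mu$, and $V^\pi_\mu$ is the expected total reward the policy $\pi$ obtains in $\mu$. Simpler environments thus contribute more to the score, and AIXI is the policy that maximizes this kind of complexity-weighted expected reward.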