"Intelligence measures an agent's ability to achieve goals in a wide range of environments." (Shane Legg) [1]
A little while ago I tried to equip Hutter's universal agent, AIXI, with a utility function, so that instead of taking its cues about its goals from the environment, the agent is equipped with intrinsic preferences over possible future observations.
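Roughly, in Hutter's notation (a sketch of the contrast, not necessarily the exact formulas from the post): $y_i$ are actions, $x_i$ are perceptions, $r(x_i)$ extracts the reward part of a perception, $m$ is the horizon, dotted symbols denote the actual history so far, and the inner sum runs over programs $q$ consistent with the interaction history, weighted by the Solomonoff-style prior $2^{-\ell(q)}$. The standard agent maximizes the expected sum of rewards; the utility-based variant replaces that reward sum with a utility function $U$ over the whole action-perception history:

$$\dot y_k = \arg\max_{y_k} \sum_{x_k} \cdots \max_{y_m} \sum_{x_m} \big(r(x_k) + \cdots + r(x_m)\big) \sum_{q \,:\, q(\dot y_{<k} y_{k:m}) = \dot x_{<k} x_{k:m}} 2^{-\ell(q)}$$

$$\dot y_k = \arg\max_{y_k} \sum_{x_k} \cdots \max_{y_m} \sum_{x_m} U(\dot y \dot x_{<k}\, y x_{k:m}) \sum_{q \,:\, q(\dot y_{<k} y_{k:m}) = \dot x_{<k} x_{k:m}} 2^{-\ell(q)}$$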
The universal AIXI agent is defined to receive reward from the environment through its perception channel. This idea originates from the field of reinforcement learning, where an algorithm is observed by a person and rewarded when that person approves of its outputs. It is less appropriate as a model of an AGI capable of autonomy, with no clear master watching over it...
Super hard to say without further specification of the approximation method used for the physical implementation.