JulianMorrison comments on Brief Break - Less Wrong

Post author: Eliezer_Yudkowsky 31 August 2008 04:00PM


Comment author: JulianMorrison 31 August 2008 09:02:41PM -1 points [-]

It seems obvious to me that AIXI describes a fully general learner, which is not the same thing as an FAI by any stretch. In particular, it's missing all of the optimizations you might gain by narrowing the scope, and it's completely unfriendly. It's a pure reward maximizer, which makes it a step *down* from a smiley-face maximizer in terms of safety - it has *no* humane values at all.
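For reference, a rough sketch of Hutter's AIXI definition (following the standard expectimax formulation, with horizon and ordering details omitted): at each cycle k the agent picks the action maximizing expected total reward over all programs q consistent with its history, weighted by their length.

```latex
\dot{a}_k \;:=\; \arg\max_{a_k} \sum_{o_k r_k} \;\max_{a_{k+1}} \sum_{o_{k+1} r_{k+1}} \cdots \max_{a_m} \sum_{o_m r_m} \; (r_k + \cdots + r_m) \sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
```

Here U is a universal Turing machine, the a's, o's, and r's are actions, observations, and rewards, and \ell(q) is the length of program q. Nothing in the objective mentions human values; it is reward over a Solomonoff-style mixture of environments, which is the point being made above.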

An AIXI solving a mathematical game would optimize. An AIXI operating in the real world would waste an awful lot of time learning basic physics, and then wirehead - if you were lucky.