I've been feeling burned out on Overcoming Bias lately, meaning that I take too long to write my posts, which cuts into my recovery time, which makes me feel more burned out, and so on.
So I'm taking at most a one-week break. I'll post small units of rationality quotes each day, so as to not quite abandon you. I may even post some actual writing, if I feel spontaneous, but definitely not for the next two days; I have to enforce this break upon myself.
When I get back, my schedule calls for me to finish up the Anthropomorphism sequence, and then talk about Marcus Hutter's AIXI, which I think is the last brain-malfunction-causing subject I need to discuss. My posts should then hopefully go back to being shorter and easier.
Hey, at least I made it through more than a solid year of posts without taking a vacation.
Seems obvious to me that AIXI describes a fully general learner, which is not the same thing as an FAI by any stretch. In particular, it's missing all the optimizations you might gain by narrowing the scope, and it's completely unfriendly. It's a pure reward maximizer, which makes it a step down from a smiley-face maximizer in terms of safety - it has no humane values.
An AIXI solving a well-defined mathematical game would play it optimally. An AIXI operating in the real world would spend an awful lot of time learning basic physics, and then wirehead - if you were lucky.
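For reference, here is a sketch of how Hutter defines AIXI's action selection, as I understand the standard fixed-horizon formulation (notation and horizon details vary between presentations, so treat this as illustrative rather than definitive):

\[
a_k \;:=\; \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} \big[\, r_k + \cdots + r_m \,\big] \sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
\]

where the \(a\)'s are actions, the \(o\)'s and \(r\)'s are observations and rewards, \(U\) is a universal Turing machine, \(\ell(q)\) is the length of program \(q\), and \(m\) is the horizon. The only thing being maximized is the reward signal arriving on the agent's input channel, weighted by a Solomonoff-style prior over environment programs - which is why, once it has modeled the physics of that channel, wireheading is the natural outcome.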