Luke_A_Somers comments on Would AIXI protect itself? - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (19)
This post would benefit greatly from a link introducing AIXI so we know what you're talking about.
Not as far as I can see... if the AIXI is at least somewhat effective, it will be able to notice a connection between button presses and changes in the grue population, even if it doesn't know why...
But of course there may be something in the AIXI definition that interferes with this.
No, it won't: it knows exactly why the grue population went down, namely that it chose to output the grue-killing bit. It has no idea, and no interest, as to why it did that, but it can see that the button has no effect on the universe: everything can be explained in terms of its own actions.
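A toy sketch of this point (hypothetical illustration, not Hutter's formal AIXI construction): because the agent's environment models are conditioned on its own action sequence, a model tying grue deaths to the agent's output bit fits the history just as well as one tying them to the button, so the button need play no predictive role.

```python
# Toy history: (button_pressed, agent_action_bit, grues_died) per step.
# Button presses and the agent's grue-killing output always coincide.
history = [
    (True,  1, True),
    (False, 0, False),
    (True,  1, True),
    (False, 0, False),
]

def model_action(button, action):
    """Hypothesis: grues die iff the agent outputs the grue-killing bit."""
    return action == 1

def model_button(button, action):
    """Hypothesis: grues die iff the button is pressed."""
    return bool(button)

def fits(model):
    """A model fits if it predicts every observed outcome in the history."""
    return all(model(b, a) == died for b, a, died in history)

# Both hypotheses account for the data equally well...
print(fits(model_action), fits(model_button))  # True True
# ...but an agent that already conditions on its own actions can explain
# everything via what it chose to output, leaving the button no causal role.
```

The sketch only shows that the two hypotheses are observationally equivalent on such a history; which one the agent prefers then comes down to its prior over models, not to the data.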
A Google search turns up an abstract by Hutter proclaiming, "We give strong arguments that the resulting AIXI model is the most intelligent unbiased agent possible."
I posit that that is utterly and profoundly wrong, or the AI would be able to figure out that mashing the button produces effects it's not (presently) fond of.
AIXI is the smartest, and the stupidest, agent out there.