Stuart_Armstrong comments on Would AIXI protect itself? - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
No, it won't: it knows exactly why the grue population went down, namely because it chose to output the grue-killing bit. It has no idea why it did that, and no interest in finding out, but it can see that the button has no effect on the universe: everything can be explained in terms of its own actions.
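A toy Bayesian sketch (not AIXI itself, and with a scenario invented for illustration) of why this happens: when the agent's own past actions perfectly predict its observations, a rival hypothesis about an external button is falsified and never gains posterior weight.

```python
# Toy model: grue deaths are caused by the agent's output bit; the button
# is causally inert. We compare two deterministic hypotheses by likelihood.

# History of (agent's output bit, button state, grue died), covering all
# four action/button combinations; deaths track the agent's bit exactly.
history = [(a, b, a) for a in (0, 1) for b in (0, 1)]

def likelihood(predict):
    """Probability of the observed deaths under a deterministic predictor."""
    p = 1.0
    for action, button, died in history:
        p *= 1.0 if predict(action, button) == died else 0.0
    return p

h_own_action = likelihood(lambda a, b: a)  # "my output bit kills grues"
h_button     = likelihood(lambda a, b: b)  # "the button kills grues"

print(h_own_action)  # 1.0 -- fully explains every observation
print(h_button)      # 0.0 -- falsified whenever button != action
```

Since the self-action hypothesis explains the data perfectly, the button hypothesis can never win, no matter how much evidence accumulates.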
A Google search turns up an abstract by Hutter proclaiming, "We give strong arguments that the resulting AIXI model is the most intelligent unbiased agent possible."
I posit that this claim is utterly and profoundly wrong; otherwise the AI would be able to figure out that mashing the button produces effects it is not (presently) fond of.
AIXI is the smartest, and the stupidest, agent out there.