All our knowledge of the world necessarily comes to us through our senses.
Yes, but an agent can understand that its fixed utility function, which refers to the state of the entire universe, is not maximized by allowing itself to be deceived.
Well, possibly. I certainly have an idea of what "the state of the universe" refers to aside from my sensory perceptions of it. What we need math for is to see whether it is possible to build an agent whose belief that it is maximizing such a quantity survives extensive self-knowledge about its own operation. Without supporting math, we don't have much more than a story.
Daniel Dewey, 'Learning What to Value'
Abstract: I.J. Good's theory of an "intelligence explosion" predicts that ultraintelligent agents will undergo a process of repeated self-improvement. In the wake of such an event, how well our values are fulfilled will depend on whether these ultraintelligent agents continue to act desirably and as intended. We examine several design approaches, based on AIXI, that could be used to create ultraintelligent agents. In each case, we analyze the design conditions required for a successful, well-behaved ultraintelligent agent to be created. Our main contribution is an examination of value-learners, agents that learn a utility function from experience. We conclude that the design conditions on value-learners are in some ways less demanding than those on other design approaches.
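The core idea of a value-learner can be illustrated with a toy sketch. This is not the paper's formal AIXI-based definition; it is a minimal illustration, assuming a hypothetical pool of candidate utility functions and a given posterior over them, of an agent that acts to maximize utility in expectation over its uncertainty about which utility function is correct.

```python
# Toy value-learner sketch: instead of a single fixed utility function,
# the agent maintains weights over candidate utility functions and picks
# the action that maximizes utility averaged over those weights.

# Hypothetical candidate utility functions over (string-valued) outcomes.
utility_pool = {
    "u_paperclips": lambda outcome: outcome.count("clip"),
    "u_smiles":     lambda outcome: outcome.count("smile"),
}

# Posterior weights over utility functions -- assumed given here; a real
# value-learner would update these from its interaction history.
posterior = {"u_paperclips": 0.3, "u_smiles": 0.7}

def expected_utility(outcome):
    """Average utility of an outcome over the utility-function posterior."""
    return sum(posterior[name] * u(outcome) for name, u in utility_pool.items())

def choose(action_outcomes):
    """Pick the action whose predicted outcome maximizes expected utility."""
    return max(action_outcomes, key=lambda a: expected_utility(action_outcomes[a]))

# Hypothetical action -> predicted-outcome map.
act = choose({"make_clips": "clip clip", "tell_jokes": "smile smile smile"})
print(act)  # -> tell_jokes (0.7 * 3 = 2.1 beats 0.3 * 2 = 0.6)
```

The point of the sketch is the design condition the abstract alludes to: the agent's goal content is a learned quantity, so the demands fall on getting the pool and the updating right rather than on specifying one correct utility function in advance.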