Though I am not an AI researcher, it seems pretty obvious that knowledge of AIXI is the most important part of the mathematical background for work in Friendly AI.
I don't see it. Your intuition (which tells you that it's obvious) is probably wrong, even if the claim is in some sense correct (in a non-obvious way).
(The phrase "the most important" is ambiguous enough to invite arguing over definitions.)
In other words, epistemology seems too important to leave to non-mathematical methods.
It doesn't follow that a particular piece of mathematics is the way to go.
Hi, Vladimir!
Is there another non-trivial mathematical account of how an agent can come to have accurate knowledge of its environment that is general enough to deserve the name 'epistemology'?
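For readers who haven't met it, the account usually pointed to here is Solomonoff induction combined with expectimax planning, i.e. AIXI itself. A rough sketch of the action-selection rule, following Hutter's formulation (treatments of the horizon m differ across presentations, so take this as a sketch rather than the canonical statement):

```latex
% AIXI's choice of action a_k at cycle k, with horizon m: maximize total
% future reward, weighting each environment -- a program q run on the
% universal monotone Turing machine U -- by its Solomonoff prior 2^{-l(q)}.
a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
       \left( r_k + \cdots + r_m \right)
       \sum_{q \,:\, U(q,\, a_1 \dots a_m) \,=\, o_1 r_1 \dots o_m r_m} 2^{-\ell(q)}
```

Here the a_i, o_i, r_i are the actions, observations, and rewards exchanged in cycle i, and \ell(q) is the length of program q; the inner sum is the prior weight of every environment consistent with the interaction history, which is where the "epistemology" lives. Whether any comparably general account exists is exactly the question above.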
I searched the posts but didn't find much relevant information. Has anyone taken a serious crack at Legg's work, and would they be willing to share their thoughts? Is the material worthwhile? Are there any dubious portions, or sections one might want to skip (either because the ideas are bad or to save reading time)? I'm considering investing a chunk of time in investigating it, so any feedback would be much appreciated; it seems likely that others would like some perspective on it as well.