
Vladimir_Nesov comments on Shane Legg's Thesis: Machine Superintelligence, Opinions? - Less Wrong Discussion

9 points · Post author: Zetetic, 08 May 2011 08:04PM


Comments (45)


Comment author: Vladimir_Nesov 09 May 2011 07:14:18PM 1 point

> Though I am not an AI researcher, it seems pretty obvious that knowledge of AIXI is the most important part of the mathematical background for work in Friendly AI.

I don't see it. Your intuition (which tells you it's obvious) is probably wrong, even if the claim is in some sense correct (in a non-obvious way).

(The title of "the most important" is ambiguous enough to open a possibility of arguing definitions.)

> In other words, epistemology seems too important to leave to non-mathematical methods.

It doesn't follow that a particular piece of mathematics is the way to go.

Comment author: rhollerith_dot_com 09 May 2011 07:25:48PM 1 point

Hi, Vladimir!

> In other words, epistemology seems too important to leave to non-mathematical methods.

> It doesn't follow that a particular piece of mathematics is the way to go.

Is there another non-trivial mathematical account of how an agent can come to have accurate knowledge of its environment that is general enough to deserve the name 'epistemology'?

Comment author: Vladimir_Nesov 09 May 2011 07:29:44PM 1 point

This is a bad argument, since the best available option isn't necessarily a good option.

Comment author: Zetetic 09 May 2011 07:52:40PM 1 point

This is what I was thinking: investing too much time and energy in AIXI simply because it seems to be the most 'obvious' option currently available could blind you to other avenues of approach.

Comment author: Vladimir_Nesov 09 May 2011 08:01:09PM 2 points

I think you should know the central construction; it's simple enough (half of Hutter's "gentle introduction" would suffice). But at least read some good textbooks (such as AIMA) that give you an overview of the field before charting an exploration of the primary literature (not sure if you mentioned your current background before).

Comment author: Zetetic 09 May 2011 09:00:34PM 2 points

I own a copy of AIMA, though I admittedly haven't read it from cover to cover. I did an independent study learning/coding some basic AI stuff about a year ago; the professor introduced me to AIMA.

> not sure if you mentioned your current background before

It's a bit difficult to summarize. I sort of did so here, but I didn't include a lot of detail.

I suppose I could try to hit a few specifics:

- I was jumping around The Handbook of Brain Theory and Neural Networks for a bit; I picked up the overviews and read a few of the articles, but haven't really come back to it yet.
- I've read a good number of articles from the MIT Encyclopedia of Cognitive Science.
- I've read a (small) portion of "Theoretical Neuroscience: Computational and Mathematical Modeling of Neural Systems" (I ended up ultimately delving too far into molecular biology and organic chem, so I abandoned it for the time being, though I would like to look at comp neurosci again, maybe using From Neuron to Brain instead, which seems more approachable).
- I read a bit of "Dynamical Systems in Neuroscience: The Geometry of Excitability and Bursting," partly to get a sense of just how much current computational models of neurons might diverge from actual neuronal behavior, but mostly to get an idea of some alternatives.

As I mentioned in my response to timtyler, I tend to cycle through my readings quite a bit. I like to pick up a small cluster of ideas, let them sink in, and move on to something else, coming back to the material later if it still seems relevant to my interests. Once it's popped up a few times I make a more concerted effort to learn it. In any event, my main goal over the past few months was to try to get a better overview of a large amount of material relevant to FAI.

Comment author: timtyler 09 May 2011 07:39:27PM 0 points

> Is there another non-trivial mathematical account of how an agent can come to have accurate knowledge of its environment

Pretty much: Solomonoff Induction. That does most of the work in AIXI. OK, it won't design experiments for you, but there are various approaches to doing that...
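(To make the predictive part concrete: Solomonoff induction itself is incomputable, so the following is only a crude finite analogue I've sketched for illustration — a Bayesian mixture over a tiny, made-up hypothesis class of "repeat this bit-pattern forever" programs, each weighted 2^-length as a stand-in for the 2^-l(p) universal prior. None of these names come from Hutter's or Legg's actual work.)

```python
from fractions import Fraction

def pattern_hypotheses(max_len=3):
    """All 'repeat this bit-pattern forever' hypotheses up to max_len bits."""
    hyps = []
    for n in range(1, max_len + 1):
        for i in range(2 ** n):
            pattern = tuple((i >> b) & 1 for b in reversed(range(n)))
            hyps.append(pattern)
    return hyps

def prior(pattern):
    # Shorter patterns (simpler "programs") get exponentially more weight,
    # a toy analogue of the 2^-l(p) weighting in the universal prior.
    return Fraction(1, 2 ** len(pattern))

def likelihood(pattern, data):
    # Deterministic hypothesis: probability 1 if it reproduces data, else 0.
    return 1 if all(bit == pattern[t % len(pattern)]
                    for t, bit in enumerate(data)) else 0

def predict_next(data):
    """Posterior-weighted probability that the next bit is 1."""
    hyps = pattern_hypotheses()
    total = sum(prior(h) * likelihood(h, data) for h in hyps)
    p_one = sum(prior(h) * likelihood(h, data) for h in hyps
                if h[len(data) % len(h)] == 1)
    return p_one / total

print(predict_next((1, 0)))  # → 3/4
```

With data (1, 0), only the patterns (1,0), (1,0,0), and (1,0,1) survive, and the shortest one dominates the posterior, so the predicted probability of a 1 next is 3/4 — a miniature of the Occam bias that the universal prior formalizes.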

Comment author: rhollerith_dot_com 09 May 2011 08:10:25PM 1 point

When I use the word 'AIXI' above, I mean to include Solomonoff induction. I would have thought that was obvious.

One has to learn Solomonoff induction to learn AIXI.

Comment author: timtyler 09 May 2011 08:57:28PM 1 point

AIXI is more than just Solomonoff induction. It is Solomonoff induction plus some other stuff. I'm a teensy bit concerned that you are giving AIXI credit for Solomonoff induction's moves.

Comment author: rhollerith_dot_com 09 May 2011 09:15:08PM 2 points

> AIXI is more than just Solomonoff induction. It is Solomonoff induction plus some other stuff.

Right. The other stuff is an account of the most fundamental and elementary kind of reinforcement learning. In my conversations (during meetups to which everyone is invited) with one of the Research Fellows at SIAI, reinforcement learning has come up more than Solomonoff induction.
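(For concreteness, that "other stuff" is an expectimax over future rewards wrapped around the universal mixture. Sketching the standard definition roughly in Hutter's notation — suppressing details of the horizon choice and the chronological machine — AIXI's action in cycle k is:

```latex
a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
       \bigl( r_k + \cdots + r_m \bigr)
       \sum_{q \,:\, U(q, a_{1:m}) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
```

Everything outside the innermost sum is ordinary sequential decision theory; the inner sum over programs q is Solomonoff's prior, which is the part doing the epistemic work.)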

But yeah, the OP should learn Solomonoff induction first, then decide whether to learn AIXI. That would have happened naturally if he'd started reading Legg's thesis, unless the OP has some weird habit of always finishing PhD theses that he has started.

Since we've gone back and forth twice, and no one's upvoted my contributions, this will probably be my last comment in this thread.