
timtyler comments on Q&A with Michael Littman on risks from AI - Less Wrong Discussion

15 Post author: XiXiDu 19 December 2011 09:51AM




Comment author: timtyler 19 December 2011 10:34:48AM *  0 points [-]

I'm not sure exactly what constitutes intelligence, but I don't think it's something that can be turbocharged by introspection, even superhuman introspection. It involves experimenting with the world and seeing what works and what doesn't.

A common sentiment. Shane Legg even says something similar:

We then use this fact to prove that although very powerful prediction algorithms exist, they cannot be mathematically discovered due to Gödel incompleteness. Given how fundamental prediction is to intelligence, this result implies that beyond a moderate level of complexity the development of powerful artificial intelligence algorithms can only be an experimental science.

I think we can now label the need for an environment as a fallacy. Most of the guts of building an intelligent agent involves finding good computable approximations to Solomonoff induction - and you can do that pretty well in a virtual world with an optimisation algorithm and a fitness function based around something like AIQ. This is essentially a math problem.
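The flavour of "computable approximation" being gestured at can be illustrated with a toy sketch, under stated assumptions: here the program space is restricted to a hypothetical trivial machine whose programs just output their own bits repeated forever, and consistent programs are weighted by 2^-length, as in the Solomonoff prior. The function names and the machine itself are illustrative choices, not anything from the comment.

```python
from itertools import product

def programs(max_len):
    """Enumerate all binary 'programs' up to max_len bits."""
    for n in range(1, max_len + 1):
        for bits in product("01", repeat=n):
            yield "".join(bits)

def run(p, length):
    """Toy 'machine': a program outputs its own bits repeated forever."""
    return (p * (length // len(p) + 1))[:length]

def predict_next(observed, max_len=8):
    """Length-weighted mixture over programs consistent with `observed`,
    in the spirit of the Solomonoff prior 2^-|p|.  Returns the posterior
    probability that the next bit is 1."""
    w0 = w1 = 0.0
    for p in programs(max_len):
        weight = 2.0 ** (-len(p))
        out = run(p, len(observed) + 1)
        if out[:len(observed)] == observed:
            if out[-1] == "1":
                w1 += weight
            else:
                w0 += weight
    return w1 / (w0 + w1)

# After seeing "0101010", the shortest consistent program ("01")
# dominates the mixture, so a 1 is predicted with high probability.
p1 = predict_next("0101010")
```

Real proposals along these lines (context-tree weighting, MC-AIXI and the like) use far richer program classes; the point of the sketch is only that the prediction step is a computation over programs, not an interaction with any physical environment.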

Comment author: jsteinhardt 20 December 2011 12:53:48PM 4 points [-]

I think we can now label the need for an environment as a fallacy.

I think it's difficult to label something as a fallacy when there is almost no hard evidence either way about who is right. The vast majority of AI researchers (including myself) don't think that Solomonoff induction will solve AI. It is also possible to construct formal environments where performing better than chance is impossible without constant interaction with the environment (unless P=NP). So, if the problem is essentially a math problem, that would have to depend on specific facts about the world that make it different from such environments.

Comment author: timtyler 20 December 2011 02:30:05PM 0 points [-]

The vast majority of AI researchers (including myself) don't think that Solomonoff induction will solve AI.

Solomonoff induction certainly doesn't give you an evaluation function on a plate. It would pass the Turing test, though. If the "vast majority of AI researchers" don't realise that, they should look into the issue further.

It is also possible to construct formal environments where performing better than chance is impossible without constant interaction with the environment (unless P=NP). So, if the problem is essentially a math problem, that would have to depend on specific facts about the world that make it different from such environments.

So: Occam's razor - the foundation of science - is also needed. We know about that. Other facts about the world might help a little (arguably, some elements relating to locality are implicit in the reference machine) - but they don't seem to be as critical.

Comment author: jsteinhardt 20 December 2011 03:07:34PM 1 point [-]

So: Occam's razor - the foundation of science - is also needed.

I was referring to computational issues, not whether a complexity prior is reasonable or not. It is possible that making inferences about the environment requires you to solve hard computational problems, and that these problems become easier after additional interaction with the environment. I don't see how Occam's razor suggests that our world doesn't look like that (in fact, I currently think that our world does look like that, although my confidence in this is not very high).

Comment author: timtyler 20 December 2011 03:51:57PM 0 points [-]

It is possible that making inferences about the environment requires you to solve hard computational problems, and that these problems become easier after additional interaction with the environment.

Well, of course - but that's learning - which Solomonoff induction models just fine (it is a learning theory).

Or maybe you are suggesting that organisms modify their environment to make their problems simpler. That is perfectly possible - but I don't really see how it is relevant.

You apparently didn't disagree with Solomonoff induction allowing the Turing test to be passed. So: what exactly is your beef with its centrality and significance?

Comment author: jsteinhardt 20 December 2011 04:05:35PM 1 point [-]

It's possible I misunderstood your original comment. Let me play it back to you in my own words to make sure we're on the same page.

My understanding was that you did not think it would be necessary for an AGI to interact with its environment in order to achieve superhuman intelligence (or perhaps that a limited initial interaction with its environment would be sufficient, after which it could just go off and think). Is that correct, or not?

P.S. I think that I also disagree with the Solomonoff induction -> Turing test proposition; but I'd rather delay discussing that point because I think it is contingent on the others.

Comment author: timtyler 20 December 2011 07:25:18PM *  1 point [-]

My understanding was that you did not think it would be necessary for an AGI to interact with its environment in order to achieve superhuman intelligence (or perhaps that a limited initial interaction with its environment would be sufficient, after which it could just go off and think). Is that correct, or not?

Pretty much. Virtual environments are fine, contain lots of complexity (chaos theory) and have easy access to lots of interesting and difficult problems (game of go, etc). Virtual worlds permit the development of intelligent agents just like the "real" world does. A good job too - since we have no terribly good way of telling whether our world exists under simulation or not.

The Solomonoff induction -> Turing test proposition is detailed here.

Comment author: jsteinhardt 28 December 2011 05:04:55PM 0 points [-]

Sorry for the delayed response, it took me a while to get through the article and corresponding Hutter paper. Do you know of any sources that present the argument for why the Kolmogorov complexity of the universe should be relatively low (i.e. not proportional to the number of atoms), or else why Solomonoff induction would perform well even if the Kolmogorov complexity is high? These both seem intuitively true to me, but I feel uneasy accepting them as fact without a solid argument.

Comment author: timtyler 29 December 2011 12:02:30AM *  0 points [-]

The Kolmogorov complexity of the universe is a totally unknown quantity - AFAIK. Yudkowsky suggests a figure of 500 bits here - but there's not much in the way of supporting argument.

Solomonoff induction doesn't depend on the Kolmogorov complexity of the universe being low. The idea that Solomonoff induction has something to do with the Kolmogorov complexity of the universe seems very strange to me.

Instead, consider that Solomonoff induction is a formalisation of Occam's razor - which is a well-established empirical principle.
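The formalisation referred to here is presumably the Solomonoff prior, which assigns a string \(x\) the total weight of all programs that reproduce it:

```latex
M(x) \;=\; \sum_{p \,:\, U(p) = x*} 2^{-\ell(p)}
```

where \(U\) is a universal prefix machine, \(\ell(p)\) is the length of program \(p\) in bits, and the sum runs over programs whose output begins with \(x\). Shorter programs (simpler hypotheses) receive exponentially more weight, which is Occam's razor in quantitative form.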

Comment author: jsteinhardt 29 December 2011 12:24:18AM 2 points [-]

I don't understand. I thought the point of Solomonoff induction is that it's within an additive constant of being optimal, where the constant depends on the Kolmogorov complexity of the sequence being predicted.
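The optimality result presumably in play here is Solomonoff's convergence theorem: if the data are drawn from any computable measure \(\mu\), the cumulative expected squared prediction error of the universal mixture \(M\) is bounded by a constant that depends on the complexity of the generating measure (the environment), not of any particular universe:

```latex
\sum_{t=1}^{\infty} \mathbb{E}_{\mu}\!\left[ \bigl( M(x_t = 1 \mid x_{<t}) - \mu(x_t = 1 \mid x_{<t}) \bigr)^{2} \right] \;\le\; \frac{\ln 2}{2}\, K(\mu)
```

So the "additive constant" is \(O(K(\mu))\): finite for every computable environment, but potentially enormous when \(K(\mu)\) is large, which is the crux of the disagreement in this subthread.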

Comment author: dlthomas 29 December 2011 12:24:33AM *  0 points [-]

The idea that Solomonoff induction has something to do with the Kolmogorov complexity of the universe seems very strange to me.

Wouldn't it put an upper bound on the complexity of any given piece, as you can describe it with "the universe, plus the location of what I care about"?

Edited to add: Ah, yes, but "the location of what I care about" has potentially a huge amount of complexity to it.
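The bound gestured at can be written out explicitly. If the universe's history is generated by a program of length \(K(\text{universe})\), then any object \(x\) locatable within it satisfies

```latex
K(x) \;\le\; K(\text{universe}) + K(\text{location of } x) + O(1)
```

If the location can be given as a simple index into \(N\) candidate positions, the pointer term is at most roughly \(\log_2 N\) bits; but if "what I care about" requires picking out a complicated region or boundary, the pointer term can be much larger, which is exactly the caveat in the edit above.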