
passive_fist comments on [LINK] David Deutsch on why we don't have AGI yet "Creative Blocks" - Less Wrong Discussion

Post author: harshhpareek 17 December 2013 07:03AM, 2 points


Comment author: Vaniver 17 December 2013 09:06:42AM 4 points

I don't follow. You can write a program to generate random hypotheses, and you can write a program to figure out the implications of those hypotheses and whether they fit in with current experimental data, and if they do, to come up with tests of those ideas for future experiments. Now, just generating hypotheses completely randomly may not be a very efficient way, but it would work. That's very different from saying "It's impossible". It's just a question of figuring out how to make it efficient. So what's the problem here?

I think his claim is basically "we don't know yet how to teach a machine how to identify reasonable hypotheses in a short amount of time," where the "short amount of time" is implicit. The proposal "let's just test every possible program, and see which ones explain Dark Matter" is not a workable approach, even if it seems to describe the class that contains actual workable approaches. (Imagine actually going to a conference and proposing a go-bot that considers every possible sequence of moves possible from the current board position, and then picks the tree most favorable to it.)

But the Turing test is very different from coming up with an explanation of dark matter. The Turing test is a very specific test of use of language and common sense, which is only defined in relation to human beings (and thus needs human beings to test) whereas an explanation of dark matter does not need human beings to test. Thus making this particular argument moot.

I think the Turing test is being used as an illustrative example here. It seems unlikely that you could have a genetic algorithm operate on a population of code and end up with a program that passes the Turing test, because at each step the genetic algorithm (as an optimization procedure) needs to have some sense of what is more or less likely to pass the test. It similarly seems unlikely that you could have a genetic algorithm operate on a population of physics explanations and end up with an explanation that successfully explains Dark Matter, because at each step the genetic algorithm needs to have some sense of what is more or less likely to explain Dark Matter.
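The point that an evolutionary search needs "some sense of what is more or less likely to pass" at every step can be made concrete. The sketch below (toy code, with a made-up target string standing in for a real goal like passing the Turing test) contrasts a graded fitness function, which gives the search something to climb, with a pass/fail verdict, which does not:

```python
import random

# Illustrative stand-in for "the thing the search must produce"; a real
# target (a Turing-test-passing program, an explanation of Dark Matter)
# has no such known answer string.
TARGET = "explain dark matter"
ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def mutate(s):
    """Replace one character at a random position."""
    i = random.randrange(len(s))
    return s[:i] + random.choice(ALPHABET) + s[i + 1:]

def graded_fitness(s):
    """Partial credit: how many characters are already right. This is the
    'sense of what is more or less likely to pass' that the search needs."""
    return sum(a == b for a, b in zip(s, TARGET))

def all_or_nothing_fitness(s):
    """A pass/fail verdict, like a raw Turing-test judgment: it gives the
    search nothing to climb until it stumbles on the exact answer."""
    return 1 if s == TARGET else 0

def hill_climb(fitness, steps=20000):
    """A minimal (1+1) evolutionary search: keep a mutation whenever it
    scores at least as well as the current candidate."""
    best = "".join(random.choice(ALPHABET) for _ in TARGET)
    for _ in range(steps):
        cand = mutate(best)
        if fitness(cand) >= fitness(best):
            best = cand
    return best
```

With graded_fitness the climb reliably reaches the target in a few thousand steps; with all_or_nothing_fitness it wanders at random and essentially never does, which is the comment's point.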

I think his claim is that a correct inference procedure will point right at the correct answer, but as I disagree with that point I am reluctant to ascribe it to him. I think it likely that a correct inference procedure involves checking out vast numbers of explanations, and discarding most of them very early on. But optimization over explanations instead of over plans is in its infancy, and I think he's right that AGI will be distant so long as that remains the case.

What else could it possibly be?

My interpretation of that section is that Deutsch is claiming that "induction" is not a complete explanation. If you say "well, the sun rose every day for as long as I can remember, and I suspect it will do so today," then you get surprised by things like "well, the year starts with 19 every day for as long as I can remember, and I suspect it will do so today." If you say "the sun rises because the Earth rotates about its axis, the sun emits light because of nuclear fusion, the sun has enough fuel to continue shining, angular momentum is conserved, and the laws of physics do not vary with time," then your expectation that the sun will rise is very likely to be concordant with reality, and you are very unlikely to make that sort of mistake with the date. But how do you get beliefs of that sort to begin with? You use science, which is a bit more complicated than induction.

Similarly, the claim that prediction is unimportant seems to be that the target of an epistemology should be at least one level higher than the output predictions: you don't want "the probability that the sun will rise tomorrow" but "conservation of angular momentum," because the second makes you more knowledgeable and more powerful.

Comment author: passive_fist 17 December 2013 06:50:21PM 2 points

I think his claim is basically "we don't know yet how to teach a machine how to identify reasonable hypotheses in a short amount of time," where the "short amount of time" is implicit.

My impression was that he was saying that creativity is some mysterious thing that we don't know how to implement. But we do. Creativity is just search. Search that is possibly guided by experience solving similar problems. By learning from past experiences, search becomes more efficient. This idea is quite consistent with studies on how the human brain works. Beginner chess players rely more on 'thinking' (i.e. considering a large variety of moves, most of which are terrible), but grandmasters seem to rely more on their memory.

It similarly seems unlikely that you could have a genetic algorithm operate on a population of physics explanations and end up with an explanation that successfully explains Dark Matter, because at each step the genetic algorithm needs to have some sense of what is more or less likely to explain Dark Matter.

As I said, though, it's quite different, because a hypothetical explanation for dark matter need only be consistent with existing experimental data. It's true that this is infeasible for the Turing test, because you would need to test millions of candidate programs against humans, and that cannot be done inside the computer unless you already have AGI. But checking proposals for dark matter against existing data can be done entirely inside the computer.
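The generate-and-test scheme being defended here can be sketched in a few lines. This is a toy: the "existing experimental data" is a handful of points from a hidden linear law (not real physics), and the hypothesis space is tiny, but the structure - propose at random, check consistency entirely inside the computer - is the one under discussion:

```python
import random

# Toy stand-in for "existing experimental data": observations generated
# by an unknown law (here y = 2x + 1; a real case would be astronomical
# survey data, and the law would not be a line).
DATA = [(x, 2 * x + 1) for x in range(10)]

def random_hypothesis():
    """Propose a random linear law y = a*x + b. A stand-in for 'generate
    a random hypothesis'; real hypothesis spaces are vastly larger."""
    return random.randint(-5, 5), random.randint(-5, 5)

def consistent(hyp, data):
    """The check that can be done 'entirely inside the computer': does
    the hypothesis reproduce every existing observation?"""
    a, b = hyp
    return all(y == a * x + b for x, y in data)

def brute_force_search(max_tries=100_000):
    """Generate-and-test: correct in principle, hopeless at scale."""
    for _ in range(max_tries):
        hyp = random_hypothesis()
        if consistent(hyp, DATA):
            return hyp
    return None
```

Here the space has only 121 candidates, so blind sampling finds the true law quickly; the efficiency problem both commenters agree on is that real hypothesis spaces grow far too fast for this.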

I think his claim is that a correct inference procedure will point right at the correct answer, but as I disagree with that point I am reluctant to ascribe it to him.

I agree with you.

My interpretation of that section is that Deutsch is claiming that "induction" is not a complete explanation. If you say "well, the sun rose every day for as long as I can remember, and I suspect it will do so today," then you get surprised by things like "well, the year starts with 19 every day for as long as I can remember, and I suspect it will do so today."

If the machine's only inputs were '1990, 1991, 1992, ..., 1999', and it had no knowledge of math, arithmetic, language, or what years represent, then how on Earth could it possibly make any inference other than that the next date will also start with 19? There is no other inference it could make.

On the other hand, if it had access to the sequence '1900, 1901, 1902, ..., 1999' then it becomes a different story. It can infer that 1 always follows 0, 2 always follows 1, and so on, and that 0 always follows 9. It could also infer that when a 9 rolls over to 0, the digit to its left is incremented. Thus it can conclude that after 1999 the date 2000 is plausible, and add it to its list of highly plausible hypotheses. Another hypothesis could be that the hundreds digit is never affected, and that the next date after 1999 is 1900.
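The two rival hypotheses can be written out as programs and checked against the data, which makes the underdetermination explicit: both fit every observed transition from 1900 to 1999, yet they predict different successors to 1999. (Function names here are mine, for illustration.)

```python
def carry_increment(year_str):
    """Hypothesis A, the digit-carry rule inferred from the data: a 9
    rolls over to 0 and the digit to its left is incremented."""
    digits = list(year_str)
    i = len(digits) - 1
    while i >= 0:
        if digits[i] == "9":
            digits[i] = "0"
            i -= 1
        else:
            digits[i] = str(int(digits[i]) + 1)
            break
    return "".join(digits)

def wrap_low_two(year_str):
    """Hypothesis B: only the last two digits ever change, wrapping
    99 -> 00, so 1999 is followed by 1900."""
    low = (int(year_str[2:]) + 1) % 100
    return year_str[:2] + f"{low:02d}"

# Both hypotheses reproduce every transition in the observed data...
observed = [str(y) for y in range(1900, 2000)]
both_fit = all(
    carry_increment(a) == b == wrap_low_two(a)
    for a, b in zip(observed, observed[1:])
)
# ...yet they diverge on the one case the data never shows:
# carry_increment("1999") is "2000", wrap_low_two("1999") is "1900".
```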

Equivalently, if it had already been told about math, it would know how number sequences work, and could say with high confidence that the next year will be 2000. Yes, going to school counts as 'past experiences'.

This is a common mistake that people make when talking about induction. They think induction is simply 'X has always happened, therefore it will always happen'. But induction is far more complicated than that! That's why it took so long to come up with a mathematical theory of induction (Solomonoff induction). Solomonoff induction considers all possible hypotheses (some of them extremely complex) and weighs them according to how simple they are and whether they fit the observed data. That is the very definition of science. Solomonoff induction could accurately predict the progression of dates, and could do 'science'. People have implemented time-limited versions of Solomonoff induction on a computer, and they work as expected. We do need to come up with faster and more efficient ways of doing this, though. I agree with that.

I agree that there's a lot more work to be done in AI. We need to find better learning and search algorithms. What I disagree with is that the work must be this kind of philosophical work that Deutsch is proposing. I think the work that needs to be done is very much engineering work.

Comment author: Vaniver 17 December 2013 09:12:03PM 4 points

Creativity is just search.

Correct, but not helpful; when you say "just search," that's like saying "but Dark Matter is just physics." The physicists don't have a good explanation of Dark Matter yet, and the search people don't have a good implementation of creativity (on the level of concepts) yet.

I agree that there's a lot more work to be done in AI. We need to find better learning and search algorithms. What I disagree with is that the work must be this kind of philosophical work that Deutsch is proposing. I think the work that needs to be done is very much engineering work.

It is not obvious to me that Deutsch is familiar with ideas like Solomonoff induction, Pearl's work on causality, and so on, and thinks that they're inadequate to the task. He might be saying "we need a formalized version of induction" while unaware that Solomonoff already proposed one.

Comment author: passive_fist 17 December 2013 09:46:13PM 2 points

I made it clear what I meant:

Search that is possibly guided by experience solving similar problems. By learning from past experiences, search becomes more efficient.

I agree that there's a lot more work to be done in AI. We need to find better learning and search algorithms.

Why did I mention this at all? Because there's no other way to do this. Creativity (coming up with new unprecedented solutions to problems) must utilize some form of search, and due to the no-free-lunch theorem, there is no shortcut to finding the solution to a problem. The only thing that can get around no-free-lunch is to consider an ensemble of problems. That is, to learn from past experiences.

And about your point:

It is not obvious to me that Deutsch is familiar with ideas like Solomonoff induction, Pearl's work on causality, and so on, and thinks that they're inadequate to the task.

I agree with this. The fact that he didn't even mention Solomonoff at all, even in passing, despite the fact that he devoted half the article to talking about induction, is strongly indicative of this.

Comment author: Lumifer 17 December 2013 07:17:16PM 3 points

Creativity is just search.

That doesn't look helpful to me. Yes, you can define creativity this way but the price you pay is that your search space becomes impossibly huge and high-dimensional.

Defining sculpture as a search for a pleasing arrangement of atoms isn't very useful.

Comment author: passive_fist 18 December 2013 06:09:11AM 0 points

After that sentence I made it clear what I meant. See my reply to Vaniver.