timtyler comments on [LINK] David Deutsch on why we don't have AGI yet "Creative Blocks" - Less Wrong

2 Post author: harshhpareek 17 December 2013 07:03AM



Comment author: passive_fist 17 December 2013 08:00:52AM *  8 points [-]

The enterprise of achieving it artificially — the field of ‘artificial general intelligence’ or AGI — has made no progress whatever during the entire six decades of its existence.

I disagree with this. The development of probabilistic graphical models (including Bayesian networks and some types of neural networks) was a very important advance, I think.
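To make the reference concrete: the core move in a Bayesian network is computing a posterior over a hidden cause from observed evidence. Here is a minimal sketch with a two-node network (Rain → WetGrass); the numbers and variable names are purely illustrative, not taken from any particular source.

```python
# Minimal two-node Bayesian network: Rain -> WetGrass.
# All probabilities below are made-up illustrative values.

P_rain = 0.2                           # prior P(Rain)
P_wet_given = {True: 0.9, False: 0.1}  # P(WetGrass | Rain), P(WetGrass | not Rain)

def posterior_rain_given_wet():
    """P(Rain | WetGrass) by Bayes' rule: normalize the two joint terms."""
    joint_rain = P_rain * P_wet_given[True]            # P(Rain, WetGrass)
    joint_no_rain = (1 - P_rain) * P_wet_given[False]  # P(not Rain, WetGrass)
    return joint_rain / (joint_rain + joint_no_rain)

print(round(posterior_rain_given_wet(), 3))  # 0.692
```

Observing wet grass raises the probability of rain from the 0.2 prior to about 0.69; larger networks do the same computation over many variables at once.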

It remained a guess until the 1980s, when I proved it using the quantum theory of computation.

A little bit of arrogance here from Deutsch, but we can let it slide.

This astounding claim split the intellectual world into two camps, one insisting that AGI was none the less impossible, and the other that it was imminent. Both were mistaken. The first, initially predominant, camp cited a plethora of reasons ranging from the supernatural to the incoherent. All shared the basic mistake that they did not understand what computational universality implies about the physical world, and about human brains in particular.

Absolutely true. The first camp persists to this day and is still extremely confused/ignorant about universality. It's a view espoused even in 'popular science' books.

Suppose you were somehow to give them a list, as with the temperature-conversion program, of explanations of Dark Matter that would be acceptable outputs of the program. If the program did output one of those explanations later, that would not constitute meeting your requirement to generate new explanations. For none of those explanations would be new: you would already have created them yourself in order to write the specification. So, in this case, and actually in all other cases of programming genuine AGI, only an algorithm with the right functionality would suffice. But writing that algorithm (without first making new discoveries in physics and hiding them in the program) is exactly what you wanted the programmers to do!

I don't follow. You can write a program to generate random hypotheses, and a program to work out the implications of those hypotheses and whether they fit the current experimental data, and, when they do, to come up with tests of those ideas for future experiments. Generating hypotheses completely at random may not be very efficient, but it would work. That's very different from saying "it's impossible"; it's just a question of figuring out how to make it efficient. So what's the problem here?
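The generate-and-test scheme described above can be sketched in a few lines. This is my own toy illustration, not anything from Deutsch's article: the "hypotheses" are linear rules y = a*x + b with small integer coefficients, proposed at random and kept only if they reproduce all the observations. Hopelessly inefficient as real science, but it does terminate.

```python
import random

# Observations of some unknown process (here secretly y = 2x + 3).
data = [(0, 3), (1, 5), (2, 7)]

def fits(a, b):
    """Does the hypothesis y = a*x + b reproduce every observation?"""
    return all(a * x + b == y for x, y in data)

def search(seed=0):
    """Blindly propose random hypotheses until one fits the data."""
    rng = random.Random(seed)
    while True:
        a, b = rng.randint(-10, 10), rng.randint(-10, 10)
        if fits(a, b):
            return a, b

print(search())  # eventually stumbles on (2, 3)
```

Three points determine the line uniquely, so the only survivor is (2, 3). Making this efficient (guiding the proposals instead of drawing blindly) is the open engineering problem; the brute-force version is still a counterexample to "impossible".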

Nor can it be met by the technique of ‘evolutionary algorithms’: the Turing test cannot itself be automated without first knowing how to write an AGI program, since the ‘judges’ of a program need to have the target ability themselves.

But the Turing test is very different from coming up with an explanation of dark matter. The Turing test is a very specific test of language use and common sense, defined only in relation to human beings (and thus needing human beings to judge it), whereas an explanation of dark matter does not need human beings to test. That makes this particular argument moot.

The prevailing misconception is that by assuming that ‘the future will be like the past’, it can ‘derive’ (or ‘extrapolate’ or ‘generalise’) theories from repeated experiences by an alleged process called ‘induction’. But that is impossible.

What else could it possibly be? Information is either encoded into a brain, or predicted based on past experiences. There is no other way to gain information. Deutsch gives the example of dates starting with 19- or 20-. Surely, such information is not encoded into our brains from birth. It must be learned from past experiences. But knowledge of dates isn't the only knowledge we have: we have teachers and parents telling us about these things so that we can learn how they work. This all falls under the umbrella of 'past experiences'. And, indeed, a machine whose only inputs were dates would have a tough time making meaningful inferences about them, no matter how intelligent or creative it was.
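The date example above can be made concrete with a deliberately dumb inductive learner of my own invention: fed nothing but years as raw strings, the only regularity it can extract is the leading digits it has actually seen, so its "prediction" for the next year's prefix is just the most common prefix in its past inputs.

```python
from collections import Counter

def predict_prefix(observed_years):
    """Predict the leading two digits of the next year, purely by
    majority vote over the prefixes seen in past experience."""
    counts = Counter(str(y)[:2] for y in observed_years)
    return counts.most_common(1)[0][0]

print(predict_prefix([1989, 1995, 2003, 2010, 2017]))  # '20'
```

A learner whose history is mostly 19xx dates will confidently predict '19' right across the century boundary; richer inputs (teachers, calendars, explanations of how dates work) are what let humans do better, and those are still past experiences.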

But in reality, only a tiny component of thinking is about prediction at all, let alone prediction of our sensory experiences. We think about the world: not just the physical world but also worlds of abstractions such as right and wrong, beauty and ugliness, the infinite and the infinitesimal, causation, fiction, fears, and aspirations — and about thinking itself.

I cannot make head or tail of this.

Anyway, I stopped reading after this point because it was disappointing. I expected an interesting and insightful argument, one to make me actually question my fundamental assumptions, but that's not the case here.

Comment author: timtyler 18 December 2013 12:25:45AM *  -1 points [-]

But in reality, only a tiny component of thinking is about prediction at all, let alone prediction of our sensory experiences.

My estimate is 80% prediction, with the rest evaluation and tree pruning.