Vaniver comments on [LINK] David Deutsch on why we don't have AGI yet "Creative Blocks" - Less Wrong
Deutsch is interesting. He seems very close to the LW camp, and I think he's someone LWers should at least be familiar with. (This article is not as good an introduction as The Beginning of Infinity, I think.)
I suspect, personally, that the conflict between "Popperian conjecture and criticism" and the LW brand of Bayesianism is a paper tiger. See this comment thread in particular.
Deutsch is right that a huge part of artificial general intelligence is the ability to infer explanatory models from experience, drawing on the complete (infinite!) set of possible explanations, rather than just fitting parameters within a limited set of hardcoded explanatory models (as AI programs today work). But that's what I think people here already think (generally under the name Solomonoff induction).
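To make the contrast concrete, here's a toy sketch (my own illustration, not anything from Deutsch or the sequences): "parameter fitting" searches inside one fixed model family, while a crude Solomonoff-style learner searches over a space of programs, preferring shorter ones. Real Solomonoff induction is incomputable; the hand-ordered expression list below just stands in for a length-weighted enumeration of all programs.

```python
# Toy contrast: fitting parameters in a fixed family vs. inducing a
# program from an (in principle unbounded) hypothesis space.
data = [(0, 1), (1, 2), (2, 3), (3, 4)]  # observations of f(x) = x + 1

def fit_linear(data):
    """Parameter fitting: assume f(x) = a*x + b, grid-search a and b."""
    best = None
    for a in range(-3, 4):
        for b in range(-3, 4):
            err = sum((a * x + b - y) ** 2 for x, y in data)
            if best is None or err < best[0]:
                best = (err, a, b)
    return best[1], best[2]

# Tiny expression "DSL", ordered roughly by description length --
# a stand-in for Solomonoff's shorter-programs-first prior.
EXPRS = ["x", "1", "2", "(x+1)", "(x+2)", "(x*x)", "((x*x)+1)"]

def induce(data):
    """Return the shortest expression consistent with all the data."""
    for expr in EXPRS:
        f = lambda x, e=expr: eval(e)
        if all(f(x) == y for x, y in data):
            return expr
    return None

print(fit_linear(data))  # finds a=1, b=1 within the linear family
print(induce(data))      # finds the program "(x+1)" by enumeration
```

The linear fitter can never represent `x*x` no matter how much data it sees; the enumerator can, as long as the hypothesis space is rich enough. That gap is, roughly, the point Deutsch is making about hardcoded models.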
Deutsch seems pretty clueless in the section quoted below. I don't see why students should be interested in what he has to say on this topic.
He's clever enough to get a lot of things right, and I think the things he gets wrong he gets wrong for technical reasons. This means it's relatively quick to dispense with his confusions if you know the right response, but if you can't, it points out places where you need to shore up your knowledge. (Here I'm using the general you; I'm pretty sure you didn't have any trouble, Tim.)
I also think his emphasis on concepts (which seems to be rooted in his choice of epistemology) is a useful reminder of the core difference between AI and AGI, but I don't expect it to be novel content for many, as opposed to just novel emphasis.