Comments on Reply to Holden on 'Tool AI' - Less Wrong
So first a quick note: I wasn't trying to say that the difficulties of AIXI are universal and that everything goes analogously to AIXI; I was just stating why AIXI couldn't represent the suggestion you were trying to make. The general lesson to be learned is not that everything else works like AIXI, but that you need to look a lot harder at an equation before thinking that it does what you want.
On a procedural level, I worry a bit that the discussion is trying to proceed by analogy to Google Maps. Let it first be noted that Google Maps simply is not playing in the same league as, say, the human brain in terms of complexity; and that if you were to look at the winning "algorithm" of the million-dollar Netflix Prize competition, which was in fact a blend of 107 different algorithms, you would have a considerably harder time figuring out why it claimed anything it claimed.
But to return to the meta-point, I worry about conversations that go into "But X is like Y, which does Z, so X should do reinterpreted-Z". Usually, in my experience, that goes into what I call "reference class tennis" or "I'm taking my reference class and going home". The trouble is that there's an unlimited number of possible analogies and reference classes, and everyone has a different one. I was just browsing old LW posts today (to find a URL of a quick summary of why group-selection arguments don't work in mammals) and ran across a quotation from Perry Metzger to the effect that so long as the laws of physics apply, there will always be evolution, hence nature red in tooth and claw will continue into the future - to him, the obvious analogy for the advent of AI was "nature red in tooth and claw", and people who see things this way tend to want to cling to that analogy even if you delve into some basic evolutionary biology with math to show how much it isn't like intelligent design. For Robin Hanson, the one true analogy is to the industrial revolution and farming revolutions, meaning that there will be lots of AIs in a highly competitive economic situation with standards of living tending toward the bare minimum, and this is so absolutely inevitable and consonant with The Way Things Should Be as to not be worth fighting at all. That's his one true analogy and I've never been able to persuade him otherwise. For Kurzweil, the fact that many different things proceed at a Moore's Law rate to the benefit of humanity means that all these things are destined to continue and converge into the future, also to the benefit of humanity. For him, "things that go by Moore's Law" is his favorite reference class.
I can have a back-and-forth conversation with Nick Bostrom, who looks much more favorably on Oracle AI in general than I do, because we're not playing reference class tennis with "But surely that will be just like all the previous X-in-my-favorite-reference-class", nor saying, "But surely this is the inevitable trend of technology"; instead we lay out a particular "Suppose we do this?" and try to discuss how it would work, without any added language about how surely anyone would do it that way, or how it's got to be like Z because all previous Y were like Z, etcetera.
My own FAI development plans call for trying to maintain programmer-understandability of some parts of the AI during development. I expect this to be a huge headache, possibly 30% of total headache, possibly the critical point on which my plans fail, because it doesn't happen naturally. Go look at the source code of the human brain and try to figure out what a gene does. Go ask the Netflix Prize winner for a movie recommendation and try to figure out "why" it thinks you'll like watching it. Go train a neural network and then ask why it classified something as positive or negative. Try to keep track of all the memory allocations inside your operating system - that part is humanly understandable, but it flies past so fast you can only monitor a tiny fraction of what goes on, and if you want to look at just the most "significant" parts, you would need an automated algorithm to tell you what's significant. Most AI algorithms are not humanly understandable. Part of Bayesianism's appeal in AI is that Bayesian programs tend to be more understandable than non-Bayesian AI algorithms. I have hopeful plans to try and constrain early FAI content to humanly comprehensible ontologies, prefer algorithms with humanly comprehensible reasons-for-outputs, carefully weigh up which parts of the AI can safely be less comprehensible, monitor significant events, slow down the AI so that this monitoring can occur, and so on. That's all Friendly AI stuff, and I'm talking about it because I'm an FAI guy. I don't think I've ever heard any other AGI project express such plans; and in mainstream AI, human-comprehensibility is considered a nice feature, but rarely a necessary one.
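To make that contrast concrete with a toy example of my own (not anything from an actual FAI or AGI codebase): a naive Bayes classifier can report the per-word evidence behind its verdict, which is roughly the kind of humanly comprehensible reason-for-output that a trained neural network's weight matrix does not naturally hand you. The data and names below are made up purely for illustration.

```python
# Toy sketch only: a from-scratch naive Bayes sentiment classifier whose
# "reasons" for an output are directly readable. Data and names are made up.
import math
from collections import Counter

TRAIN = [
    ("great fun exciting plot".split(), "positive"),
    ("boring slow terrible acting".split(), "negative"),
    ("exciting twist great acting".split(), "positive"),
    ("terrible plot slow boring".split(), "negative"),
]

def fit(examples):
    """Count word occurrences per label."""
    counts = {"positive": Counter(), "negative": Counter()}
    totals = Counter()
    for words, label in examples:
        counts[label].update(words)
        totals[label] += len(words)
    vocab = {w for words, _ in examples for w in words}
    return counts, totals, vocab

def word_evidence(words, counts, totals, vocab):
    """Per-word log-odds contribution toward 'positive' (add-one smoothing)."""
    evidence = {}
    for w in words:
        p_pos = (counts["positive"][w] + 1) / (totals["positive"] + len(vocab))
        p_neg = (counts["negative"][w] + 1) / (totals["negative"] + len(vocab))
        evidence[w] = math.log(p_pos / p_neg)
    return evidence

counts, totals, vocab = fit(TRAIN)
review = "great exciting plot but slow".split()
evidence = word_evidence(review, counts, totals, vocab)
score = sum(evidence.values())   # class priors are equal here, so they cancel
print("verdict:", "positive" if score > 0 else "negative")
for w, e in sorted(evidence.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {w:>8s}: {e:+.2f}")   # the "why": each word's readable evidence
```

The per-word table this prints is the sort of reason a human can audit; the point of the paragraph above is that most high-performing learned systems don't naturally give you anything like it.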
It should finally be noted that AI famously does not result from generalizing normal software development. If you start with a map-route program and then try to program it to plan more and more things until it becomes an AI... you're doomed, and all the experienced people know you're doomed. I think there's an entry or two in the old Jargon File aka Hacker's Dictionary to this effect. There's a qualitative jump to writing a different sort of software - from normal programming where you create a program conjugate to the problem you're trying to solve, to AI where you try to solve cognitive-science problems so the AI can solve the object-level problem. I've personally met a programmer or two who've generalized their code in interesting ways, and who feel like they ought to be able to generalize it even further until it becomes intelligent. This is a famous illusion among aspiring young brilliant hackers who haven't studied AI. Machine learning is a separate discipline and involves algorithms and problems that look quite different from "normal" programming.
Thanks for the response. My thoughts at this point are as follows:
Canonical software development examples that emphasize "proving safety/usefulness before running" over the "tool" software development approach are cryptographic libraries and NASA space shuttle navigation software.
At the time of writing this comment, there was a recent furor over software called CryptoCat, which didn't provide enough warning that it had not been properly vetted by cryptographers and thus should have been assumed to be insecure. Conventional wisdom and repeated warnings from the security community hold that cryptography is extremely difficult to do properly and that attempting to create your own can have catastrophic results. A similar thought and development process goes into space shuttle code.
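As a toy illustration of why "rolling your own" tends to fail catastrophically (my own made-up example, unrelated to CryptoCat's actual flaws): a home-made repeating-key XOR "cipher" produces output that looks scrambled, yet a single guessed fragment of plaintext recovers the key, and with it every message ever sent under the scheme.

```python
# Toy illustration only -- not real cryptography and not CryptoCat's actual bug.
# A home-made repeating-key XOR "cipher" looks scrambled but breaks instantly.

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Encrypts and decrypts (XOR is its own inverse)."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

secret_key = b"k3y!"                                 # hypothetical secret
message = b"transfer $100 to account 42"
ciphertext = xor_cipher(message, secret_key)
print(ciphertext)                                    # looks like noise...

# ...but an attacker who guesses any common plaintext fragment gets the key back:
guessed_prefix = b"transfer"
leaked = bytes(c ^ p for c, p in zip(ciphertext, guessed_prefix))
print(leaked)                                        # b'k3y!k3y!' -- the key, repeating

# With the key recovered, every message under this scheme is readable:
print(xor_cipher(ciphertext, leaked[:4]))            # b'transfer $100 to account 42'
```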
It seems that the FAI approach to "proving safety/usefulness" is more similar to the way cryptographic algorithms are developed than to the (seemingly) much faster "tool" approach, which is more akin to web development, where the stakes aren't quite as high.
EDIT: I believe the "prove" approach still allows one to run snippets of code in isolation, but tends to shy away from running everything end-to-end until significant effort has gone into individual component testing.
The analogy with cryptography is an interesting one, because...
In cryptography, even after you've proven that a given encryption scheme is secure, and that proof has been centuply (100 times) checked by different researchers at different institutions, it might still end up being insecure, for many reasons.
Examples of reasons include: