In the early 1980s Douglas Lenat wrote EURISKO, a program Eliezer called "[maybe] the most sophisticated self-improving AI ever built". The program reportedly had some high-profile successes in various domains, like winning the Traveller TCS naval wargame tournament and designing novel integrated circuits.
Despite requests, Lenat never released the source code. You can download an introductory paper: "Why AM and EURISKO appear to work" [PDF]. Honestly, reading it leaves a programmer still mystified about the internal workings of the AI: for example, what does the main loop look like? Researchers supposedly answered such questions in a more detailed publication: "EURISKO: A program that learns new heuristics and domain concepts", Artificial Intelligence 21 (1983): 61-98. I couldn't find that paper available for download anywhere, and being in Russia I found it quite tricky to get a paper copy. Maybe you Americans will have better luck with your local library? And to the best of my knowledge, no one has ever succeeded in (or even seriously attempted) reproducing Lenat's EURISKO results.
Today in 2009 this state of affairs looks laughable. A 30-year-old pivotal breakthrough in a large and important field... that never even got reproduced. What if it was a gigantic case of Clever Hans? How do you know? You're supposed to be a scientist, little one.
So my proposal to the LessWrong community: let's reimplement EURISKO!
We have some competent programmers here, don't we? We have open source tools and languages that weren't around in 1980. We can build an open source implementation, available for everyone to play with. In my book this counts as solid progress in the AI field.
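For the curious, here's a minimal sketch of the kind of agenda-based main loop the published descriptions of AM and EURISKO suggest: tasks carry a numeric "worth", the most worthwhile task runs next, and heuristics react to its results by pushing new tasks onto the agenda. To be clear, this is my guess reconstructed from the "Why AM and EURISKO appear to work" paper; every name and detail below is invented, since the real source was never released:

```python
import heapq
from dataclasses import dataclass, field
from typing import Any, Callable, List

# Hypothetical reconstruction of an AM/EURISKO-style agenda loop.
# All names and structure are assumptions based on published descriptions;
# Lenat's actual code was never released.

@dataclass(order=True)
class Task:
    neg_worth: float  # negated "worth", so heapq pops the highest-worth task first
    description: str = field(compare=False)
    concept: Any = field(compare=False)                   # the concept this task operates on
    action: Callable[[Any], Any] = field(compare=False)   # what to do with the concept

@dataclass
class Heuristic:
    name: str
    condition: Callable[[Task, Any], bool]        # "if" part: does this heuristic apply here?
    then: Callable[[Task, Any], List[Task]]       # "then" part: propose follow-up tasks

def run_agenda(tasks: List[Task], heuristics: List[Heuristic], steps: int = 100) -> None:
    """Repeatedly run the most worthwhile task and let every applicable
    heuristic react to its results by proposing new tasks."""
    heapq.heapify(tasks)
    for _ in range(steps):
        if not tasks:
            break
        task = heapq.heappop(tasks)
        results = task.action(task.concept)
        for h in heuristics:
            if h.condition(task, results):
                for new_task in h.then(task, results):
                    heapq.heappush(tasks, new_task)
```

In the real system, by all accounts, heuristics were themselves concepts that the program could inspect and modify — that's the "self-improving" part, and the genuinely hard one.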
Hell, I'd do it on my own if I had the goddamn paper.
Update: RichardKennaway has put Lenat's detailed papers up online; see the comments.
Eliezer,
I am rather surprised that you accept all of the claimed achievements of Eurisko and even regard it as "dangerous", despite the fact that no one save the author has ever seen even a fragment of its source code. I firmly believe that we are dealing with a "mechanical Turk."
I am also curious why you believe that meaningful research on Friendly AI is at all possible without prior exposure to a working AGI. To me it seems a bit like trying to invent the ground fault interrupter before having discovered electricity.
Aside from that: if I had been following your writings more carefully, I might already know the answer to this, but just why do you prioritize formalizing Friendly AI over achieving AI in the first place? You seem to side with humanity over a hypothetical Paperclip Optimizer. Why is that? It seems to me that unaugmented human intelligence is itself an "unfriendly (non-A)I", quite efficient at laying waste to whatever it touches.
There is every reason to believe that if an AGI does not appear before the demise of cheap petroleum, our species is doomed to "go out with a whimper." I for one prefer the "bang" as a matter of principle.
I would gladly accept taking a chance at conversion to paperclips (or some similarly perverse fate at the hands of an unfriendly AGI) when the alternative appears to be the artificial squelching of the human urge to discover and invent, with the inevitable harvest of stagnation and eventually oblivion.
I accept Paperclip Optimization (and other AGI failure modes) as an honorable death, far superior to being eaten away by old age or being killed by fellow humans in a war over dwindling resources. I want to live in interesting times. Bring on the AGI. It seems to me that if any intelligence, regardless of its origin, is capable of wrenching the universe out of our control, it deserves it.
Why is the continued hegemony of Neolithic flesh-bags so precious to you?
This was addressed in "Value is Fragile."
I don't think you understand the paperclip maximizer scenario. An UnFriendly AI is not necessarily conscious; it's just this device that tiles the light...