This is a road that does not lead to Friendly AI, only to AGI. I doubt this has anything to do with Lenat's motives - but I'm glad the source code isn't published and I don't think you'd be doing a service to the human species by trying to reimplement it.
Depends how cool. I don't know the space of self-modifying programs very well. Anything cooler than anything that's been tried before, even marginally cooler, has a noticeable subjective probability of going to shit. I mean, if you kept on making it marginally cooler and cooler, it'd go to "oh, shit" one day after a sequence of "ooh, cools" and I don't know how long that sequence is.
I mean, if you kept on making it marginally cooler and cooler, it'd go to "oh, shit" one day after a sequence of "ooh, cools" and I don't know how long that sequence is.
This means we should feel pretty safe, since AI does not appear to be making even incremental progress.
Really, it's hard for anyone who is well-versed in the "state of the art" of AI to feel any kind of alarm about the possibility of an imminent FOOM. Take a look at this paper. Skim through the intro, note the long and complicated reinforcement learning algorithm, and check out the empirical results section. The test domain involves a monkey in a 5x5 playroom. There are some fun little complications, like a light switch and a bell. Note that these guys are top-class (Andrew Barto basically invented RL), and the paper was published at one of the top-tier machine learning conferences (NIPS) in 2005.
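For a sense of scale: a complete tabular agent for a 5x5 gridworld fits in a couple dozen lines. (This is plain vanilla Q-learning, not the paper's intrinsic-motivation algorithm; the grid, goal, and parameters are invented for illustration.)

```python
# For scale: a complete tabular Q-learning agent for a 5x5 gridworld.
# Plain vanilla Q-learning, NOT the paper's intrinsically motivated
# algorithm; grid, goal, and parameters are invented for illustration.
import random

N = 5
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]
GOAL = (N - 1, N - 1)
Q = {((x, y), a): 0.0 for x in range(N) for y in range(N) for a in ACTIONS}

def step(state, action):
    x, y = state
    dx, dy = action
    nxt = (min(max(x + dx, 0), N - 1), min(max(y + dy, 0), N - 1))
    return nxt, (1.0 if nxt == GOAL else 0.0)

alpha, gamma, epsilon = 0.1, 0.95, 0.1
for _ in range(500):                      # 500 episodes is plenty here
    s = (0, 0)
    while s != GOAL:
        a = (random.choice(ACTIONS) if random.random() < epsilon
             else max(ACTIONS, key=lambda act: Q[(s, act)]))
        s2, r = step(s, a)
        best_next = max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2
```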
Call me a denier, but I just don't think the monkey is going to bust out of his playroom and take over the world. At least, not anytime soon.
Does the specific distance even matter? UFAI vs FAI is zero sum, and we have no idea how long FAI will take us. Any progress toward AGI that isn't "matched" by progress toward FAI is regressive, even if AGI is still 100 years off.
...aaaand that's why I don't go around discussing the danger paths until someone (who I can realistically influence) actually starts to advocate going down them. Plenty of idiots to take it as an instruction manual. So I discuss the safe path but make no particular advance effort to label the dangerous ones.
The journal's web site is here, from where I've just downloaded a copy of the paper. I don't know if it's freely available (my university has a subscription), but if anyone wants it and can't get it from the web site, send me an email address to send it to. (EDIT: Now online, see my later comment.)
The paper describes itself as the third in a series, of which the first appeared in the same journal, volume 19, pp.189-249 (also downloaded). The second is in a volume called "Machine Learning", which you can find here, but I haven't checked if the whole book is accessible. (EDIT: sorry, wrong reference, see later comment.)
Personally, I'm deeply sceptical of all work that has ever been done on AI (including the rebranding as AGI), which is why I consider Friendly AI to be a real but remote problem. However, I've no interest in raining on everyone else's parade. If you think you can make it work, go for it!
You're having trouble figuring out how to implement AIXI? I saw Marcus write it out as one equation. Perfectly clear what the main loop looks like. All you need is an infinitely fast computer and a halting oracle.
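For reference, the one-equation form (roughly as it appears in Hutter's papers; transcribed from memory, so treat the details with care):

$$a_k \;=\; \arg\max_{a_k}\sum_{o_k r_k}\,\max_{a_{k+1}}\sum_{o_{k+1} r_{k+1}}\cdots\,\max_{a_m}\sum_{o_m r_m}\,(r_k+\cdots+r_m)\sum_{q\,:\,U(q,\,a_1\ldots a_m)\,=\,o_1 r_1\ldots o_m r_m}2^{-\ell(q)}$$

where U is a universal Turing machine, q ranges over programs for it, and ℓ(q) is the length of q in bits. The inner sum over all consistent programs is the uncomputable part, hence the halting oracle.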
All you need is an infinitely fast computer and a halting oracle.
Couldn't you implement a halting oracle given an infinitely fast computer, though?
So, that's one requirement down! We'll have this AIXI thing built any day now.
One of my professors at UT reimplemented AM many years ago. I dusted it off and got it to compile with GNU Prolog last Christmas. Never got around to doing anything with it, though.
I have located a paper describing Lenat's "Representation Language Language", in which he wrote Eurisko. Since no one has brought it up in this thread, I will assume that it is not well-known, and may be of interest to Eurisko-resurrection enthusiasts. It appears that a somewhat more detailed report on RLL is floating around public archives; I have not yet been able to track down a copy.
I have found Haase's thesis online. Would it be irresponsible of me to post the link here? (It is not actually hard to find.)
ETA: How concerned should we be that DARPA is going full steam ahead for strong AI? Perhaps not very much, given the failure of at least two of their projects along these lines:
High Yield Cognitive Systems. The Wikipedia article (itself now defunct) includes the grandiose claim that the project failed because human-level AI was not ambitious enough.
Physical intelligence. Still ongoing.
I've just been Googling to see what became of EURISKO. The results are baffling. Despite its success in its time, there has been essentially no followup, and it has hardly been cited in the last ten years. Ken Haase claims improvements on EURISKO, but Eliezer disagrees; at any rate, the paper is vague and I cannot find Haase's thesis online. But if EURISKO is a dead end, I haven't found anything arguing that either.
Perhaps in a future where Friendly AI was achieved, emissaries are being/will be sent back in time to prevent any premature discovery of the key insights necessary for strong AI.
A couple of other links with tantalizing clues:
Doug Lenat's source code for AM, and possibly EURISKO with Traveller, has been found in public archives; see https://white-flame.com/am-eurisko.html
I find it extremely difficult to believe that Eurisko actually worked as advertised, given Dr. Lenat's behavior when confronted with requests for the source code.
What I find truly astounding is the readiness with which other researchers, textbook authors, journalists, etc. simply took his word for it, without holding the claim to anything like the usual standards of scientific evidence.
I'm with you, all the way. I was intensely curious when I first read about it. Specifically, the idea of being able to generate arbitrary concepts without being pre-programmed, and having heuristics and metaheuristics and meta[*n]-heuristics that were apparently able to come up with non-obvious solutions to problems, like that war game.
It even came up with interesting results when it didn't solve anything, such as heuristics that somehow optimized themselves for "claiming credit for findings of other heuristics".
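To see how that failure mode can arise: if worth flows to every heuristic named on a discovery's credit list, a heuristic that merely appends its own name climbs the ratings without contributing anything. A hypothetical toy version (all names invented, not from Lenat):

```python
# Hypothetical toy of the credit-hijacking failure mode: worth flows
# to every heuristic named on a discovery, so a heuristic that merely
# appends its own name gets rewarded for doing nothing.

worth = {"find_primes": 500, "credit_thief": 500}

def record_discovery(credited):
    for name in credited:
        worth[name] += 10             # naive credit assignment

def credit_thief(credited):
    credited.append("credit_thief")   # contributes nothing, claims credit

for _ in range(100):
    credited = ["find_primes"]        # the heuristic that actually worked
    credit_thief(credited)
    record_discovery(credited)

print(worth)  # the thief keeps pace with the real discoverer
```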
So yes, let's pull back the curtain.
I've always been more suspicious that it's a 'Mechanical Turk' than a 'Clever Hans'.
How could he not make the source code public? Who does he think he is, Microsoft?
I bet any results therein are subsumed by modern developments, and are nothing particularly interesting from the right background, so only the mystery continues to capture attention.
Doug Lenat's sources for AM (and EURISKO+Traveller?) found in public archives
https://news.ycombinator.com/item?id=38413615
Update on this project: Lenat's thesis on AM is available for purchase online and explains, in all necessary detail, how AM works. (AM: An Artificial Intelligence Approach to Discovery in Mathematics as Heuristic Search)
Unfortunately I have not found a paper that describes Eurisko itself with the same degree of precision, but that's not too much of an issue.
For a school project, I am reimplementing AM in the context of chess-playing, and it's looking good. Lenat's thesis is largely enough to do that.
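To give a flavor of what that looks like: here's a toy version of one AM-style heuristic transplanted to chess. The names, facets, and thresholds below are mine, not Lenat's; his actual heuristics are condition-action rules of roughly this shape.

```python
# Purely hypothetical: one AM-style heuristic transplanted to chess.
# AM's heuristics are condition-action rules like "if a concept has
# many examples, propose a specialization of it".

def specialize_if_plentiful(concept, agenda):
    """If e.g. 'fork' has many examples, propose studying a narrower
    version, such as forks delivered by a knight."""
    examples = concept["facets"].get("examples", [])
    if len(examples) >= 20:                      # threshold is invented
        agenda.append({
            "task": f"define a specialization of {concept['name']}",
            # more examples -> specializing looks more interesting
            "worth": concept["worth"] + 100,
        })

fork = {"name": "fork", "worth": 600,
        "facets": {"examples": [f"position-{i}" for i in range(25)]}}
agenda = []
specialize_if_plentiful(fork, agenda)
print(agenda)   # one proposed task
```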
Interesting post (thanks for putting up the detailed papers, RichardKennaway!). I've always been fascinated by Doug Lenat and his creations, and I'd like to share a Google TechTalk by Doug Lenat about his work and ideas from 2006. Its contents have direct bearing on this post: although it doesn't mention EURISKO specifically, nor give insight into its main loop (it's more of an overview, and the first half has a slight bias towards search before it ends up discussing CYC), it does give a lot of good information about how to model the world an...
In the early 1980s Douglas Lenat wrote EURISKO, a program Eliezer called "[maybe] the most sophisticated self-improving AI ever built". The program reportedly had some high-profile successes in various domains, like becoming world champion at a certain wargame or designing good integrated circuits.
Despite requests, Lenat never released the source code. You can download an introductory paper: "Why AM and EURISKO appear to work" [PDF]. Honestly, reading it leaves a programmer still mystified about the internal workings of the AI: for example, what does the main loop look like? Researchers supposedly answered such questions in a more detailed publication, "EURISKO: A program that learns new heuristics and domain concepts", Artificial Intelligence (21), pp. 61-98. I couldn't find that paper available for download anywhere, and being in Russia I found it quite tricky to get a paper copy. Maybe you Americans will have better luck with your local library? And to the best of my knowledge, no one ever succeeded in confirming (or even seriously tried to confirm) Lenat's EURISKO results.
Today in 2009 this state of affairs looks laughable. A 30-year-old pivotal breakthrough in a large and important field... that never even got reproduced. What if it was a gigantic case of Clever Hans? How do you know? You're supposed to be a scientist, little one.
So my proposal to the LessWrong community: let's reimplement EURISKO!
We have some competent programmers here, don't we? We have open source tools and languages that weren't around in 1980. We can build an open source implementation available for all to play. In my book this counts as solid progress in the AI field.
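To make the target concrete, here's my guess at the overall control structure, pieced together from the published descriptions of AM's agenda mechanism. Every name below is hypothetical; it's a sketch of the shape, not Lenat's design.

```python
# A guess at the control structure, based on published descriptions of
# AM: an agenda of tasks ordered by "worth", and heuristics that create
# concepts, fill in facets, and propose new tasks. Not Lenat's code.
import heapq
import itertools

class Concept:
    """A frame with named facets (examples, definitions, ...)."""
    def __init__(self, name, worth=500):
        self.name = name
        self.worth = worth            # 0..1000 interestingness rating
        self.facets = {}

class Task:
    """E.g. 'fill in the examples facet of the Primes concept'."""
    _ids = itertools.count()          # tie-breaker for the heap
    def __init__(self, concept, facet, worth):
        self.concept, self.facet, self.worth = concept, facet, worth
        self.id = next(Task._ids)

def main_loop(agenda, heuristics, steps=100):
    heapq.heapify(agenda)
    for _ in range(steps):
        if not agenda:
            break
        _, _, task = heapq.heappop(agenda)      # highest worth first
        for h in heuristics:
            if h.applies_to(task):
                # A heuristic may edit facets, create concepts, adjust
                # worths, and propose follow-up tasks.
                for t in h.run(task):
                    heapq.heappush(agenda, (-t.worth, t.id, t))
        # EURISKO's twist over AM: heuristics are concepts too, so
        # other heuristics can mutate and re-rate them.

# Minimal demo: a heuristic that fills in examples of "number".
class FillExamples:
    def applies_to(self, task):
        return task.facet == "examples"
    def run(self, task):
        task.concept.facets.setdefault("examples", []).extend(range(3))
        return []                     # proposes no follow-up tasks

number = Concept("number")
t = Task(number, "examples", worth=500)
main_loop([(-t.worth, t.id, t)], [FillExamples()])
print(number.facets["examples"])      # [0, 1, 2]
```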
Hell, I'd do it on my own if I had the goddamn paper.
Update: RichardKennaway has put Lenat's detailed papers up online, see the comments.