RichardKennaway comments on Let's reimplement EURISKO! - Less Wrong

19 Post author: cousin_it 11 June 2009 04:28PM


Comment author: RichardKennaway 11 June 2009 09:22:15PM *  5 points [-]

I've just been Googling to see what became of EURISKO. The results are baffling. Despite its success in its time, there has been essentially no follow-up, and it has hardly been cited in the last ten years. Ken Haase claims improvements on EURISKO, but Eliezer disagrees; at any rate, the paper is vague, and I cannot find Haase's thesis online. But if EURISKO is a dead end, I haven't found anything arguing that either.

Perhaps in a future where Friendly AI was achieved, emissaries are being/will be sent back in time to prevent any premature discovery of the key insights necessary for strong AI.

Comment author: pjeby 11 June 2009 09:45:57PM *  5 points [-]

Hm, the abstract for that paper mentions that:

a close coupling of representation syntax and semantics is necessary for a discovery program to prosper in a given domain

This is a really interesting point; it seems related to the idea that to be an expert in something, you need a vocabulary close to the domain in question.

It also immediately raises the question of what the expert vocabulary of vocabulary formation/acquisition is, i.e. the domain of learning.

Comment author: SilasBarta 15 June 2009 06:46:54PM *  2 points [-]

a close coupling of representation syntax and semantics is necessary for a discovery program to prosper in a given domain

This is a really interesting point; it seems related to the idea that to be an expert in something, you need a vocabulary close to the domain in question.

It doesn't seem that interesting to me: it's just a restatement of "data compression = data prediction". When you have a vocabulary "close to the domain", that simply means that common concepts are compactly expressed. Once you've maximally compressed a domain, you have discovered all its regularities, and simply outputting a short random string will decompress into something useful.

How do you find which concepts are common and how do you represent them? Aye, there's the rub.
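The "vocabulary close to the domain means common concepts are compactly expressed" point can be illustrated with a toy sketch (my own, not from the thread) using zlib's preset-dictionary feature: seeding the compressor with a "vocabulary" of phrases common in the domain lets domain text be encoded more compactly.

```python
import zlib

# A toy "domain vocabulary": phrases that recur in the domain (hypothetical example).
vocab = b"close coupling of representation syntax and semantics discovery program domain"
text = b"a discovery program needs a close coupling of syntax and semantics in its domain"

# Baseline: compress the text with no prior knowledge of the domain.
plain = zlib.compress(text, 9)

# Same text compressed with the vocabulary supplied as a preset dictionary.
comp = zlib.compressobj(level=9, zdict=vocab)
with_dict = comp.compress(text) + comp.flush()

# Decompression needs the same dictionary, and round-trips exactly.
decomp = zlib.decompressobj(zdict=vocab)
assert decomp.decompress(with_dict) == text

print(len(text), len(plain), len(with_dict))
```

The dictionary plays the role of the domain vocabulary: long phrases that match it are replaced by short back-references, so the better the vocabulary fits the domain, the shorter the encoding — which is the compression = prediction equivalence in miniature.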

It also immediately raises the question of what the expert vocabulary of vocabulary formation/acquisition is, i.e. the domain of learning.

So my guess would be that the expert vocabulary of vocabulary formation is the vocabulary of data compression. I don't know how to make any use of that, though, because the No Free Lunch theorems seem to say that there's no general algorithm that is the best across all domains, and so there's no algorithmic way to find the best compressor for this universe.

(ETA: multiple quick edits)

Comment author: Daniel_Burfoot 12 June 2009 02:24:50PM *  0 points [-]

This is a really interesting point; it seems related to the idea that to be an expert in something, you need a vocabulary close to the domain in question.

I'm not so sure about this. I am pretty good at understanding visual reality, and I have some words to describe various objects, but my vocabulary is nowhere near as rich as my understanding is (of course, I'm only claiming to be an average member of a race of fantastically powerful interpreters of visual reality).

Let me give you an example. Say you had two pictures of faces of two different people, but the people look alike and the pictures were taken under similar conditions. Now a blind person, who happens to be a Matlab hacker, asks you to explain how you know the pictures are of different people, presumably by making reference to the pixel statistics of certain image regions (which the blind person can verify with Matlab). Is your face recognition vocabulary up to this challenge?

Comment author: Cyan 12 June 2009 02:58:26PM 5 points [-]

I think "vocabulary" in this sense refers to the vocabulary of the bits doing the actual processing. Humans don't have access to the "vocabulary" of their fusiform gyruses, only the result of its computations.

Comment author: Eliezer_Yudkowsky 11 June 2009 11:04:31PM 4 points [-]

Perhaps in a future where Friendly AI was achieved, emissaries are being/will be sent back in time to prevent any premature discovery of the key insights necessary for strong AI.

As silly explanations go, I prefer the anthropic explanation: In worlds where AI didn't stagnate, you're dead and hence not reading this.

Comment author: RichardKennaway 12 June 2009 10:40:01AM 2 points [-]

Or in non-anthropic terms, strong AI could be done on present-day hardware, if we only knew how, and our survival so far is down to blind luck in not yet discovering the right ideas?

For how long, in your estimate, has the hardware been powerful enough for this to be so?

If Eurisko was a non-zero step towards strong AI, would it have been any bigger a step if Lenat had been using present-day hardware? Or did it fizzle because it didn't have sufficiently rich self-improvement capabilities, regardless of how fast it might have been implemented?

Comment author: Jonathan_Graehl 12 June 2009 12:13:42AM 2 points [-]

That is silly. In the same vein, why worry about any risks? You'll continue to exist in whatever worlds they didn't develop into catastrophe.

Comment deleted 12 June 2009 09:45:34PM [-]
Comment author: Eliezer_Yudkowsky 12 June 2009 09:54:41PM 7 points [-]

Not all worlds in which you continue to exist are pleasant ones. I think Michael Vassar once called quantum immortality the most horrifying hypothesis he had ever taken seriously, or something along those lines.

Comment author: loqi 12 June 2009 10:39:25PM 3 points [-]

Indeed. In particular, "dying of old age" is pretty damn horrifying if you think quantum immortality holds.

Comment author: NancyLebovitz 14 July 2010 12:19:02PM 1 point [-]

If there's quantum immortality, what proportion of your lives would be likely to be acutely painful?

I don't have an intuition on that one. It seems as though worlds in which something causes good health would predominate over just barely hanging on, but I'm unsure of this.

Comment author: SoullessAutomaton 12 June 2009 10:42:55PM 0 points [-]

Hunh. I'm glad I'm not the only person who has always found quantum immortality far more horrifying than nonexistence.

Comment author: SoullessAutomaton 12 June 2009 11:00:20AM 1 point [-]

The most sensible explanation has, I think, been mentioned previously: that EURISKO was both overhyped and a dead end. Perhaps the techniques it used fell apart rapidly in domains less rigid than rule-based wargaming, and perhaps its successes were very heavily guided by Lenat. It's somewhat telling that Lenat, the only one who really knows how it worked, went off to do something completely different from EURISKO.

In this regard, one could consider something like EURISKO not as a successful AI, but as a successful cognitive assistant for someone working in a mostly unexplored rule-based system. Recall the results that AM, EURISKO's predecessor, got: if memory serves, it rediscovered a lot of mathematical principles, none of them novel, duplicating mostly from scratch results that originally took many mathematicians many years to find.

Not that I'm certain this is the case by a long shot, but it seems the most superficially plausible explanation.

Comment author: ChrisHibbert 13 June 2009 03:18:03AM 0 points [-]

From what I remember of the papers, it was pretty clear (though perhaps not stated explicitly) that AM "happened across" many interesting factoids about math, but it was Lenat's intervention that declared them important and worth further study. I think your second paragraph implies this, but I wanted it to be explicit.

A reasonable interpretation of AM's success was that Lenat was able to recognize many important mathematical truths in AM's meanderings. Lenat never claimed any new discoveries on behalf of AM.

Comment author: Cyan 13 June 2009 03:34:59AM 6 points [-]

Lenat was also careful to note that AM's success, such as it was, was very much due to the fact that LISP's "vocabulary" started with a strong relation to mathematics. EURISKO didn't show anything like reasonable performance until he realized that the vocabulary it was manipulating needed to be "close" to the modeled domain, in the sense that interesting (to Lenat) statements about the domain needed to be short, and therefore easy for EURISKO to come across.

Comment author: SoullessAutomaton 13 June 2009 01:49:28PM 1 point [-]

Yeah, that was basically what I meant. My hypothesis was that if you gave AM to someone with good mathematical aptitude but little prior knowledge, they would discover a lot more interesting mathematical statements than they would have without AM's help, by analogy to Lenat discovering more interesting logical consequences of the wargaming rules with EURISKO's help than any of the experienced players discovered themselves.