Eliezer_Yudkowsky comments on Let's reimplement EURISKO! - Less Wrong

Post author: cousin_it 11 June 2009 04:28PM


Comments (151)


Comment author: RichardKennaway 11 June 2009 09:22:15PM 5 points

I've just been Googling to see what became of EURISKO. The results are baffling. Despite its success in its time, there has been essentially no follow-up, and it has hardly been cited in the last ten years. Ken Haase claims improvements on EURISKO, but Eliezer disagrees; at any rate, the paper is vague and I cannot find Haase's thesis online. But if EURISKO is a dead end, I haven't found anything arguing that, either.

Perhaps in a future where Friendly AI was achieved, emissaries are being/will be sent back in time to prevent any premature discovery of the key insights necessary for strong AI.

Comment author: Eliezer_Yudkowsky 11 June 2009 11:04:31PM 4 points

Perhaps in a future where Friendly AI was achieved, emissaries are being/will be sent back in time to prevent any premature discovery of the key insights necessary for strong AI.

As silly explanations go, I prefer the anthropic explanation: In worlds where AI didn't stagnate, you're dead and hence not reading this.

Comment author: RichardKennaway 12 June 2009 10:40:01AM 2 points

Or in non-anthropic terms, strong AI could be done on present-day hardware, if we only knew how, and our survival so far is down to blind luck in not yet discovering the right ideas?

For how long, in your estimate, has the hardware been powerful enough for this to be so?

If Eurisko was a non-zero step towards strong AI, would it have been any bigger a step if Lenat had been using present-day hardware? Or did it fizzle because it didn't have sufficiently rich self-improvement capabilities, regardless of how fast it might have been implemented?

Comment author: Jonathan_Graehl 12 June 2009 12:13:42AM 2 points

That is silly. In the same vein, why worry about any risks? You'll continue to exist in whatever worlds where they didn't develop into catastrophe.

Comment deleted 12 June 2009 09:45:34PM
Comment author: Eliezer_Yudkowsky 12 June 2009 09:54:41PM 7 points

Not all worlds in which you continue to exist are pleasant ones. I think Michael Vassar once called quantum immortality the most horrifying hypothesis he had ever taken seriously, or something along those lines.

Comment author: loqi 12 June 2009 10:39:25PM 3 points

Indeed. In particular, "dying of old age" is pretty damn horrifying if you think quantum immortality holds.

Comment author: NancyLebovitz 14 July 2010 12:19:02PM 1 point

If there's quantum immortality, what proportion of your lives would be likely to be acutely painful?

I don't have an intuition on that one. It seems as though worlds in which something causes good health would predominate over worlds of just barely hanging on, but I'm unsure of this.

Comment author: SoullessAutomaton 12 June 2009 10:42:55PM 0 points

Hunh. I'm glad I'm not the only person who has always found quantum immortality far more horrifying than nonexistence.