thomblake comments on Let's reimplement EURISKO! - Less Wrong

19 Post author: cousin_it 11 June 2009 04:28PM




Comment author: Eliezer_Yudkowsky 11 June 2009 08:27:01PM 7 points

Not exactly, Thom. Roughly, for FAI you need precise self-modification. For precise self-modification, you need a precise theory of the intelligence doing the self-modification. To get to FAI you have to walk the road that leads to precise theories of intelligence - something like our present-day probability theory and decision theory, but more powerful and general and addressing issues these present theories don't.

Eurisko is the road of self-modification done in an imprecise, ad-hoc way, throwing together whatever works until it gets smart enough to FOOM. That path would lead to shattered planets if followed far enough. No, I'm not saying that Eurisko in particular is far enough; I'm saying that it's a first step along that path, not the FAI path.

Comment author: thomblake 12 June 2009 02:32:49PM 0 points

Right.

That's what I had in mind, though I didn't state it explicitly; it's what I meant by 'worked out'. It's clear that you want these things worked out formally, to the strength of being provably Friendly.

I'm still skeptical about the world-destroying. My money's on chaos to FOOM. Dynamism FTW. But then, I think AGI will come from robots.