derekz comments on Let's reimplement EURISKO! - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Not exactly, Thom. Roughly, for FAI you need precise self-modification. For precise self-modification, you need a precise theory of the intelligence doing the self-modification. To get to FAI you have to walk the road that leads to precise theories of intelligence - something like our present-day probability theory and decision theory, but more powerful and general and addressing issues these present theories don't.
Eurisko is the road of self-modification done in an imprecise, ad-hoc way, throwing together whatever works until it gets smart enough to FOOM. This is a path that would lead to shattered planets if followed far enough. No, I'm not saying that Eurisko in particular is far enough along; I'm saying that it's a first step along that path, not the FAI path.
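For illustration only, the "throwing together whatever works" style of self-modification can be caricatured as a loop that mutates its own search knobs and keeps any change that scores better. Everything below (the toy task, the `mutate` and `score` functions, the parameter names) is a hypothetical sketch of that general pattern, not EURISKO's actual architecture:

```python
import random

# Toy task: home in on a hidden target number. A "heuristic" here is a
# candidate value plus a step size that controls its own mutation -- the
# system modifies the very knob driving its modification, with no theory
# of why any particular change is safe (the ad-hoc road described above).

TARGET = 42.0

def score(h):
    # Higher is better: negative distance to the hidden target.
    return -abs(h["value"] - TARGET)

def mutate(h):
    # Perturb both the candidate answer and the mutation step itself.
    step = max(1e-6, h["step"] * random.uniform(0.5, 2.0))
    return {"value": h["value"] + random.uniform(-step, step), "step": step}

def hill_climb(seed=0, iterations=2000):
    random.seed(seed)
    best = {"value": 0.0, "step": 10.0}
    for _ in range(iterations):
        candidate = mutate(best)
        if score(candidate) > score(best):  # keep whatever works
            best = candidate
    return best

if __name__ == "__main__":
    print(hill_climb()["value"])
```

The point of the caricature is that nothing in the loop distinguishes a safe self-modification from an unsafe one; "score went up" is the only criterion, which is exactly the property the comment objects to.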
Perhaps a writeup of what you have discovered, or at least surmise, about walking that road would encourage bright young minds to work on those puzzles instead of reimplementing Eurisko.
It's not immediately clear that studying and playing with specific toy self-referential systems won't lead to ideas that apply to precise members of that class.
I've written up some of the concepts of precise self-modification, but need to collect the posts on a Wiki page on "lawfulness of intelligence" or something.
Any of these posts ever go up?
Cf. "Lawful intelligence."