Vladimir_Nesov comments on Let's reimplement EURISKO! - Less Wrong

Post author: cousin_it 11 June 2009 04:28PM




Comment author: SoullessAutomaton 11 June 2009 08:37:59PM 1 point [-]

But can we learn anything useful for a complete theory of intelligence based on something like EURISKO? Sure, it's an ad hoc, throw things at the wall and see what sticks approach--but so are our brains, and if something like EURISKO can show limited, non-foomy levels of optimization power it would at least provide another crappy data point other than vertebrate brains on how intelligence works.

Comment author: Vladimir_Nesov 11 June 2009 08:53:46PM *  1 point [-]

I used to think it was useful to study ad-hoc attempts at AGI, but it now seems to me that knowledge of these chaotic things is very likely a dead end, even for destroying the world, and of the wrong character to make progress towards FAI.

Comment author: loqi 11 June 2009 09:37:58PM 4 points [-]

I think one of the factors that contributes to interest in ad-hoc techniques is the prospect of a "thrilling discovery". One is allowed to fantasize that all of their time and effort may pay off suddenly and unpredictably, which makes the research seem that much more fun and exciting. This is in contrast to a more formal approach in which understanding and progress are incremental by their very nature.

I bring this up because I see it as a likely underlying motive for arguments of the form "ad-hoc technique X is worth pursuing even though it's not a formal approach".

Comment author: Annoyance 12 June 2009 04:47:24PM 7 points [-]

There are two kinds of scientific progress: the methodical experimentation and categorization which gradually extend the boundaries of knowledge, and the revolutionary leap of genius which redefines and transcends those boundaries. Acknowledging our debt to the former, we yearn nonetheless for the latter. - Academician Prokhor Zakharov, "Address to the Faculty"

Comment author: Vladimir_Nesov 11 June 2009 09:55:32PM 0 points [-]

No, it actually looks (just barely) feasible to get a FOOM out of something ad-hoc, and there are even good reasons for expecting that. But it doesn't seem to be on the way towards deeper understanding. The best one can hope for is catching it right when the FOOM is imminent and starting to do serious theory then, but the path of blind experimentation doesn't seem optimal even for reaching a blind FOOM.

Comment author: Eliezer_Yudkowsky 11 June 2009 11:02:30PM *  1 point [-]

That doesn't contradict what loqi said. It could still be a motive.

Comment author: Vladimir_Nesov 11 June 2009 11:07:18PM 0 points [-]

It could, but it wouldn't be an invalid motive, as I (maybe incorrectly) heard implied.

Comment author: loqi 12 June 2009 12:26:47AM 1 point [-]

I didn't mean to imply it was an invalid motive, merely a potential underlying motive. If it is valid in the sense that you mean (and I think it is), that's just reason to scrutinize such claims even more closely.

Comment author: SoullessAutomaton 11 June 2009 09:07:32PM 2 points [-]

What changed your mind?

Comment author: Vladimir_Nesov 11 June 2009 09:20:07PM *  4 points [-]

Starting to seriously think about FAI and studying more rigorous system modeling techniques/theories changed my mind. There seems to be very little overlap between wild intuitions of ad-hoc AGI and technical challenges of careful inference/simulation or philosophical issues with formalizing decision theories for intelligence on overdrive.

Some of the intuitions from thinking about ad-hoc approaches seem to carry over, but they are just that: intuitions. Understanding approaches to more careful modeling, even if they are applicable only to "toy" problems, gives deeper insight than knowledge of a dozen "real projects". Intuitions gained from ad-hoc work do apply, but only as naive, clumsy caricatures.

Comment author: JulianMorrison 15 June 2009 12:45:03PM -1 points [-]

Ad hoc AI is like ad hoc aircraft design. It flaps, it's got wings, it has to fly, right? If we keep trying stuff, we'll stumble across a wing that works. Maybe it's the feathers?

Comment author: randallsquared 16 June 2009 01:41:12PM *  2 points [-]

Since such aircraft design actually worked, and produced aeroplanes before pure theory-based design, perhaps it's not the best analogy. [Edit: Unless that was your point]

Comment author: Vladimir_Nesov 15 June 2009 01:40:15PM *  1 point [-]

There are multiple concepts bound up in the potential of ad-hoc approaches. There is Strong AI, Good AI (Strong AI that has a positive effect), and Useful AI (Strong AI that can be used as a prototype or inspiration for Good AI, but can go Paperclip maximizer if allowed to grow). These concepts can be believed to stand in quite different relations to each other.

Your irony suggests that there is no potential for any Strong AI in ad-hoc approaches. Given that stupid evolution managed to get there, I think that with enough brute force of technology it's quite feasible to reach Strong AI via this road.

Many reckless people working on AGI think that Strong AI is likely to also be a Good AI.

My previous position was that ad-hoc gives a good chance (in the near future) for Strong AI that is likely a Useful AI, but unlikely a Good AI. My current position is that ad-hoc has a small but decent chance (in the near future) for Strong AI, that is unlikely to be either Useful AI or Good AI.

Comment author: JulianMorrison 15 June 2009 01:59:08PM -1 points [-]

BTW, none of the above classifications are "friendly".

Comment author: Vladimir_Nesov 15 June 2009 02:09:55PM 0 points [-]

Good AI is a category containing Friendly AI that doesn't require the outcome to be precisely right. This separates the more elaborated concept of Friendly AI from the informal concept (requirement) of a good outcome.

I believe the concepts are much closer than it seems; that is, it's hard to construct an AI that is not precisely Friendly but still Good.

Comment author: JulianMorrison 15 June 2009 08:19:31PM 0 points [-]

FAI is about being reliably harmless. Whether the outcome seems good in the short term is tangential. Even a "good" AI ought to be considered unfriendly if it's opaque to proof - what can you possibly rely upon? No amount of demonstrated good behavior can be trusted. It could be insincere, it could be sincere but fatally misguided, it could have a flaw that will distort its goals after a few recursions. We would be stupid to just run it and see.

Comment author: Vladimir_Nesov 15 June 2009 09:01:42PM 0 points [-]

At which point you are starting to think about what it takes to make not just an informally "Good" AI, but an actually Friendly AI.