JulianMorrison comments on Let's reimplement EURISKO! - Less Wrong

19 Post author: cousin_it 11 June 2009 04:28PM



Comment author: Vladimir_Nesov 11 June 2009 08:53:46PM *  1 point [-]

I used to think it was useful to study ad-hoc attempts at AGI, but it now seems to me that knowledge of these chaotic things is very likely a dead end, even for destroying the world, and of the wrong character for progress towards FAI.

Comment author: JulianMorrison 15 June 2009 12:45:03PM -1 points [-]

Ad hoc AI is like ad hoc aircraft design. It flaps, it's got wings, it has to fly, right? If we keep trying stuff, we'll stumble across a wing that works. Maybe it's the feathers?

Comment author: randallsquared 16 June 2009 01:41:12PM *  2 points [-]

Since such aircraft design actually worked, and produced aeroplanes before pure theory-based design, perhaps it's not the best analogy. [Edit: Unless that was your point]

Comment author: Vladimir_Nesov 15 June 2009 01:40:15PM *  1 point [-]

There are multiple concepts tangled up in the potential of ad-hoc. There is Strong AI, Good AI (Strong AI that has a positive effect), and Useful AI (Strong AI that can be used as a prototype or inspiration for Good AI, but can become a Paperclip maximizer if allowed to grow). These concepts can be believed to stand in quite different relations to each other.

Your irony suggests that there is no potential for any Strong AI in ad-hoc. Given that stupid evolution managed to get there, I think that with enough brute force of technology it's quite feasible to reach Strong AI via this road.

Many reckless people working on AGI think that Strong AI is likely to also be a Good AI.

My previous position was that ad-hoc gives a good chance (in the near future) of a Strong AI that is likely a Useful AI, but unlikely a Good AI. My current position is that ad-hoc has a small but decent chance (in the near future) of a Strong AI that is unlikely to be either a Useful AI or a Good AI.

Comment author: JulianMorrison 15 June 2009 01:59:08PM -1 points [-]

BTW, none of the above classifications is "friendly".

Comment author: Vladimir_Nesov 15 June 2009 02:09:55PM 0 points [-]

Good AI is a category containing Friendly AI, but one that doesn't require the outcome to be precisely right. This separates the more elaborated concept of Friendly AI from the informal concept (requirement) of a good outcome.

I believe the concepts are much closer than they seem; that is, it's hard to construct an AI that is not precisely Friendly but still Good.

Comment author: JulianMorrison 15 June 2009 08:19:31PM 0 points [-]

FAI is about being reliably harmless. Whether the outcome seems good in the short term is tangential. Even a "good" AI ought to be considered unfriendly if it's opaque to proof - what can you possibly rely upon? No amount of demonstrated good behavior can be trusted: it could be insincere, it could be sincere but fatally misguided, or it could have a flaw that will distort its goals after a few recursions. We would be stupid to just run it and see.

Comment author: Vladimir_Nesov 15 June 2009 09:01:42PM 0 points [-]

At which point you are starting to think of what it takes to make not just informally "Good" AI, but an actually Friendly AI.