Warrigal comments on Open Thread: February 2010 - Less Wrong

1 Post author: wedrifid 01 February 2010 06:09AM


Comment author: [deleted] 12 February 2010 03:15:32AM 1 point

Occasionally, I feel like grabbing or creating some sort of general proto-AI (like a neural net, or something) and trying to teach it as much as I can, the goal being for it to end up as intelligent as possible, and possibly even Friendly. I plan to undertake this effort entirely alone, if at all.

May I?

Comment author: orthonormal 15 February 2010 12:51:31AM 2 points

I second Kevin: the nearest analogy that occurs to me is playing "kick the landmine" when the landmine is almost surely a dud.

Comment author: JGWeissman 15 February 2010 01:39:32AM 2 points

Of course, the advantage of "kick the landmine" is that you don't take the rest of the world out in case it wasn't a dud.

Comment author: Kevin 12 February 2010 04:37:56AM * 2 points

I think Eliezer would say no (see http://lesswrong.com/lw/10g/lets_reimplement_eurisko/) but I think you're so astronomically unlikely to succeed that it doesn't matter.

Comment author: ciphergoth 12 February 2010 10:31:04AM 1 point

What on Earth? When you say "may I" you presumably mean "is this a good idea" since obviously we're not in a position to stop you. But you're already aware of the arguments why it isn't a good idea and you don't address them here, so it's not clear that you have a good purpose for this comment in mind.

Comment author: byrnema 12 February 2010 01:22:55PM * 2 points

I interpreted it as akin to a call to a suicide hotline.

'This is sounding like a good idea...'

(Can you help / talk me out of it?)

If this is the case, we can probably give support. I certainly understand how curiosity can pull, and Warrigal may already be rationalizing that he probably won't make progress; we can give advice that balances that. But then, is it true that Warrigal should be afraid of knowledge?

Comment author: ciphergoth 12 February 2010 02:05:48PM 2 points

I don't think it's fear of knowledge that leads me to suggest you don't try to build a catapult to twang yourself into a tree.

Comment author: whpearson 12 February 2010 10:55:49AM 0 points

Do you mean playing around with backprop, or making your own algorithms?

Comment author: [deleted] 13 February 2010 12:49:31AM 0 points

Either.

Comment author: Eliezer_Yudkowsky 15 February 2010 04:10:17AM 2 points

If this is your state of knowledge then... how can I put this: it seems extremely likely that you'll start playing around with very simple tools, find out just how little they can do, and, if you're lucky, start reading up and rediscovering the world of AI.

Comment author: whpearson 14 February 2010 11:37:14PM * 2 points

Backprop is likely to be safe: lots of AI students play around with it, and it is well behaved mathematically. If it were going to kill us, it would have done so already. More advanced stuff has to be evaluated individually.
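To make concrete what "playing around with backprop" looks like in practice, here is a minimal sketch of the kind of exercise AI students do: a tiny one-hidden-layer network trained on XOR with hand-written backpropagation. All sizes, the learning rate, and the random seed are illustrative choices of this sketch, not anything specified in the thread.

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR: the classic toy problem a linear model cannot solve.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer of 4 sigmoid units, one sigmoid output.
W1 = rng.normal(size=(2, 4))
b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
losses = []
for _ in range(5000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    losses.append(float(np.mean((out - y) ** 2)))

    # Backward pass: chain rule through MSE and both sigmoid layers.
    d_out = 2 * (out - y) / len(X) * out * (1 - out)
    d_W2 = h.T @ d_out
    d_b2 = d_out.sum(axis=0)
    d_h = d_out @ W2.T * h * (1 - h)
    d_W1 = X.T @ d_h
    d_b1 = d_h.sum(axis=0)

    # Plain gradient-descent update.
    W1 -= lr * d_W1
    b1 -= lr * d_b1
    W2 -= lr * d_W2
    b2 -= lr * d_b2

print(f"loss: {losses[0]:.4f} -> {losses[-1]:.4f}")
```

Nothing here is more than calculus and matrix multiplication, which is the sense in which the algorithm is "well behaved mathematically": its dynamics are easy to inspect and nothing open-ended is going on.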

Comment author: JGWeissman 12 February 2010 03:30:12AM 0 points

No.

What made you think you might get any other answer?

Comment author: [deleted] 12 February 2010 09:36:41PM -1 points

Well, I did get other answers. Ask Kevin and thomblake why they answered that way, if you like.

Comment author: thomblake 12 February 2010 02:01:33PM -1 points

Sounds fun. Though so far we don't have anything that you can "teach" in a general way.