eli_sennesh comments on Debunking Fallacies in the Theory of AI Motivation - LessWrong

Post author: Richard_Loosemore 05 May 2015 02:46AM


Comments (343)


Comment author: [deleted] 06 May 2015 02:41:47AM 3 points

In a flash of insight, combined with some open-source deep learning sites (like Kaggle), he's able to create the first self-recursive AI, and he tests it out by telling it to maximise the amount of paperclips his factory makes.

You're kidding, right? Deep neural nets are very good at learning hierarchies of features, but they are still basically doing correlative statistical inference rather than causal inference. They are going to be much too slow, in both actual computation speed and sample complexity, to function dangerously well in realistically complex environments (i.e., not Atari games).
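The correlative-versus-causal distinction in the comment above can be sketched with a toy example (this is a hypothetical illustration, not anything from the thread; it uses only NumPy, and the variable names and data-generating process are invented for the sketch):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Observational data: a hidden confounder Z drives both X and Y.
# X has NO causal effect on Y.
z = rng.normal(size=n)
x = z + 0.1 * rng.normal(size=n)
y = z + 0.1 * rng.normal(size=n)

# Purely correlative fit: regress Y on X (slope = cov(X, Y) / var(X)).
slope_obs = np.cov(x, y)[0, 1] / np.var(x)

# Interventional data: X is set by the experimenter, independent of Z,
# which is what do(X) means in causal terms.
z2 = rng.normal(size=n)
x2 = rng.normal(size=n)
y2 = z2 + 0.1 * rng.normal(size=n)
slope_int = np.cov(x2, y2)[0, 1] / np.var(x2)

print(f"observational slope:  {slope_obs:.2f}")   # near 1.0 (spurious)
print(f"interventional slope: {slope_int:.2f}")   # near 0.0 (true causal effect)
```

A model trained only on the observational data confidently predicts Y from X, yet that relationship evaporates the moment the system acts on the world instead of passively observing it, which is one way to cash out the comment's claim that correlative learners underperform in realistically complex environments.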

Comment author: TheAncientGeek 06 May 2015 07:42:57AM 0 points

There's an unwritten rule around here that you have to discuss AI in terms of unimplementable abstractions... it's rude to bring in real-world limitations.

Comment author: [deleted] 06 May 2015 02:19:59PM 1 point

Excuse me if I care more about getting a working design that does the right things than about treating LW discussions as battles.

Comment author: 27chaos 07 May 2015 04:57:16AM 1 point

I'm... pretty sure that was sarcasm? I hope so, at least.

Comment author: TheAncientGeek 07 May 2015 08:43:57AM 0 points

Yes, that was sarcasm.

Comment author: Gram_Stone 07 May 2015 09:27:18AM 0 points

Or this is meta-sarcasm, and therein lies the problem.

Comment author: [deleted] 07 May 2015 03:24:50PM 1 point

Yeah, but I still object to even the sarcastic implications. I was posting in full seriousness about the limitations of deep neural nets.