Less Wrong is a community blog devoted to refining the art of human rationality. Please visit our About page for more information.

Comment author: ChristianKl 21 August 2017 01:09:20PM 4 points [-]

The fact that you engage with the article and share it might suggest to the author that he did everything right. The idea that your email will discourage the author from writing similar articles might be mistaken.

Secondly, calling autonomous weapons "killer robots" isn't far off the mark. The policy question of whether or not to allow autonomous weapons is distinct from AGI.

Comment author: Tenoke 21 August 2017 01:55:32PM *  0 points [-]

The fact that you engage with the article and share it might suggest to the author that he did everything right.

True, but this is one of the less bad articles that make Terminator references (it makes a bit more sense in this specific context), so I mind sharing it less. It's mostly significant insofar as it's the one I saw today that prompted me to make a template email.

The idea that your email will discourage the author from writing similar articles might be mistaken.

I can see it having no influence on some journalists, but again:

I am not sure how big the impact will be, but after the email is already drafted, sending it to new people is pretty low effort, and there's the potential that some journalists will think twice.

Secondly, calling autonomous weapons "killer robots" isn't far off the mark.

It's still fairly misleading, although a lot less than in AGI discussions.

The policy question of whether or not to allow autonomous weapons is distinct from AGI.

I am not explicitly talking about AGI either.

Comment author: Tenoke 21 August 2017 12:03:14PM 0 points [-]

After reading yet another article which mentions the phrase 'killer robots' five times and has a photo of the Terminator (and RoboCop as a bonus), I've drafted a short email asking the author to stop using this vivid but highly misleading metaphor.

I'm going to start sending this same email to other journalists who do the same from now on. I am not sure how big the impact will be, but after the email is already drafted, sending it to new people is pretty low effort, and there's the potential that some journalists will think twice before referencing Terminator in AI safety discussions, potentially improving the quality of the discourse a little.

The effect of this might be slightly larger if more people do this.

Comment author: ChristianKl 09 August 2017 11:08:10PM 0 points [-]

P.S. If this makes you worry that AGI is closer than you expected, do not watch Jeff Dean's overview lecture on DL research at Google.

The overview lecture doesn't really get me worried. It basically means that we are at the point where we can use machine learning to solve well-defined problems with plenty of training data. At the moment that seems to require a human machine learning expert, and recent Google experiments suggest that they are confident they can develop an API that does this without machine learning experts being involved.

At a recent LW discussion someone told me that this kind of research doesn't even count as an attempt to develop AGI.

Comment author: Tenoke 10 August 2017 12:35:57AM *  1 point [-]

At the moment that seems to require a human machine learning expert, and recent Google experiments suggest that they are confident they can develop an API that does this without machine learning experts being involved.

At a recent LW discussion someone told me that this kind of research doesn't even count as an attempt to develop AGI.

Not in itself, sure, but yeah, there was the bit about the progress made so you won't need an ML engineer to develop the right net for a problem. However, there was also the bit where they have nets doing novel research (e.g. new activation functions with better performance than the state of the art, novel architectures, etc.). And to go further in that direction, they just want more compute, which they're going to be getting more and more of.

I mean, if we've reached the point where AI research is a problem tackleable by (narrow) AI, which can further benefit from that research and apply it to make further improvements faster/with more accuracy... then maybe there is something to potentially worry about.

Unless, of course, you think that AGI will be built in such a different way that no (or very few) DL findings are likely to be applicable. But even then I wouldn't be convinced that this completely separate AGI research won't also be the kind of problem that DL can handle - as AGI research is, in the end, a "narrow" problem.

Comment author: Tenoke 09 August 2017 05:19:37PM 0 points [-]

Karpathy mentions offhand in this video that he thinks he has the correct approach to AGI but doesn't say what it is. Before that, he lists a few common approaches, so I assume it's not one of those. What do you think he's suggesting?

P.S. If this makes you worry that AGI is closer than you expected, do not watch Jeff Dean's overview lecture on DL research at Google.

Comment author: sixes_and_sevens 05 December 2016 11:59:49AM 14 points [-]

I haven’t posted in LW in over a year, because the ratio of interesting-discussion to parochial-weirdness had skewed way too far in the parochial-weirdness direction. There still isn’t a good substitute for LW out there, though. Now it seems there’s some renewed interest in using LW for its original purpose, so I thought I’d wander back, sheepishly raise my hand and see if anyone else is in a similar position.

I’m presumably not the only one to visit the site for the first time in ages because of new, interesting content, so it’s reasonable to assume a bunch of other former LW-users are reading this. What would it take for you to come back and start using the site again?

Comment author: Tenoke 09 December 2016 07:37:32PM *  1 point [-]

More quality content (either in terms of discussions or actual posts).

P.S. I do see how that might not be especially helpful advice.

Comment author: Tenoke 20 December 2015 12:20:22PM 8 points [-]

What is the latest time that I can sign up and realistically expect that there'll be spaces left? I am interested, but I can't really commit 10 months in advance.

Comment author: Tenoke 21 May 2015 02:36:23PM 1 point [-]

Apparently the new episode of Morgan Freeman's Through the Wormhole is on the Simulation Hypothesis.

Comment author: Tenoke 19 May 2015 08:17:38PM *  3 points [-]

If someone is going to turn away at the first sight of an unknown term, then they have no chance of lasting here (I mean, imagine what'll happen when they see the Sequences).

Comment author: ciphergoth 17 March 2015 08:33:48PM 1 point [-]

Is everyone already aware of the existence of erotic fanfiction entitled Conquered by Clippy?

Comment author: Tenoke 18 March 2015 12:18:12AM 1 point [-]
Comment author: Tenoke 13 March 2015 11:25:23AM 3 points [-]

Awesome! How large is it altogether (in words)?
