Less Wrong is a community blog devoted to refining the art of human rationality.

Comment author: Gleb_Tsipursky 02 February 2016 03:40:03AM 0 points [-]

Judging by the fact that this post got 500 FB likes on the first day it was posted on The Life You Can Save blog, people are not tuning it out. Note that the baseline for posts on the TLYCS blog is about 100-200 likes over a post's whole lifetime, not just its first day.

Comment author: jkaufman 02 February 2016 02:58:14PM 0 points [-]

Audience matters? The TLYCS blog is very different from LW.

Comment author: Clarity 09 January 2016 11:05:09PM 0 points [-]

Incredible analyses in the comments here.

Somebody That I'll Never Know (Gotye parody) Organ Donation

Comment author: jkaufman 11 January 2016 03:53:40PM 0 points [-]

Which of the youtube comments are you referring to? There are a bunch of them, and none of them jumped out as an incredible analysis to me (but I was just skimming).

Comment author: Dreaded_Anomaly 23 September 2011 03:15:37AM 5 points [-]

The earliest reference to the parable that I can find is in this paper from 1992. (Paywalled, so here's the relevant page.) I also found another paper which attributes the story to this book, but the limited Google preview does not show me a specific discussion of it in the book.

Comment author: jkaufman 25 December 2015 03:34:26PM 0 points [-]

Expanded my comments into a post: http://www.jefftk.com/p/detecting-tanks

Comment author: pedanterrific 23 September 2011 03:09:48AM *  7 points [-]

It's almost certainly not the actual source of the "parable" (or if it is, the story was greatly exaggerated in its retelling, which is admittedly not unlikely), but this may well be the original study (and is probably the most commonly-reused data set in the field), and this is a useful overview of the topic.

Does that help?

Comment author: jkaufman 24 December 2015 03:27:04PM 0 points [-]

Except that the "November Fort Carson RSTA Data Collection Final Report" was released in 1994, covering data collection from 1993, while the parable was already being described in 1992 in the "What Artificial Experts Can and Cannot Do" paper.

Comment author: jkaufman 24 December 2015 03:22:10PM *  0 points [-]

Here's the full version of "What Artificial Experts Can and Cannot Do" (1992): http://www.jefftk.com/dreyfus92.pdf It has:

... consider the legend of one of connectionism's first applications. In the early days of the perceptron ...
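The legend's failure mode is easy to reproduce in miniature. Below is an illustrative sketch (invented data, not the original study): a toy perceptron trained on a confounded set where every tank photo is dark (cloudy day) and every non-tank photo is bright. The second feature is deliberately pure noise, standing in for the tank features the net in the story never learned to use, so the only learnable signal is the weather.

```python
import random

random.seed(0)

def make_photo(has_tank, cloudy):
    # Two toy features per "photo": overall brightness, plus a second
    # feature that is pure noise.
    brightness = random.uniform(0.0, 0.4) if cloudy else random.uniform(0.6, 1.0)
    noise = random.uniform(0.0, 1.0)
    return [brightness, noise], 1 if has_tank else 0

# Confounded training set: every tank photo was taken on a cloudy day.
train = ([make_photo(True, cloudy=True) for _ in range(50)]
         + [make_photo(False, cloudy=False) for _ in range(50)])

# Minimal perceptron, trained well past its convergence bound.
w, b = [0.0, 0.0], 0.0
for _ in range(200):
    for x, y in train:
        pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
        err = y - pred
        w = [wi + 0.1 * err * xi for wi, xi in zip(w, x)]
        b += 0.1 * err

def accuracy(data):
    return sum((1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0) == y
               for x, y in data) / len(data)

train_acc = accuracy(train)  # looks perfect on the confounded data

# A test set that breaks the confound (tanks on sunny days, empty
# scenes on cloudy days) reveals it learned the weather, not the tanks.
test = ([make_photo(True, cloudy=False) for _ in range(50)]
        + [make_photo(False, cloudy=True) for _ in range(50)])
test_acc = accuracy(test)
```

On the training set the classifier is flawless; on the de-confounded test set it does worse than chance, because its brightness rule now points the wrong way.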

Comment author: jkaufman 24 December 2015 03:10:50PM 0 points [-]

There's also https://neil.fraser.name/writing/tank/ from 1998 which says the "story might be apocryphal", so by that point it sounds like it had been passed around a lot.

Comment author: timtyler 24 October 2011 01:53:40AM *  0 points [-]

In the "Building Neural Networks" book, the bottom of page 199 seems to be about "classifying military tanks in SAR imagery". It goes on to say it is only interested in "tank" / "non-tank" categories.

Comment author: jkaufman 24 December 2015 03:09:26PM 0 points [-]

But it also doesn't look like it's a version of this story. That section of the book is just a straightforward "how to distinguish tanks" discussion.

Comment author: AstraSequi 12 December 2015 09:28:14PM *  7 points [-]

The primary weakness of longitudinal studies, compared with studies that include a control group

Longitudinal studies can and should include control groups. The difference from RCTs is that the control group is not randomized. Instead, you select from a population that is as similar as possible to the treatment group: for example, people who were interested but couldn't attend because of scheduling conflicts. There is also the option of a placebo substitute, like sending them generic self-help tips.

ETA: "Longitudinal" is also ambiguous here. It means that data were collected over time, and could mean one of several study types (RCTs are also longitudinal, by some definitions). I think you want to call this a cohort study, except without controls this is more like two different cross-sectional studies from the same population.
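A toy simulation (the numbers are invented, not CFAR's data) of why even a non-randomized control group matters: if everyone improves over the year regardless of treatment, the naive pre/post change in the treatment group conflates that trend with the true effect, while subtracting the control group's change (difference-in-differences) recovers it.

```python
import random

random.seed(1)

TRUE_EFFECT = 2.0  # invented workshop benefit, in survey points
TREND = 3.0        # invented secular trend: everyone improves over the year

def pre_post(treated):
    baseline = random.gauss(50, 5)
    followup = (baseline + TREND
                + (TRUE_EFFECT if treated else 0.0)
                + random.gauss(0, 1))
    return baseline, followup

treated = [pre_post(True) for _ in range(500)]
controls = [pre_post(False) for _ in range(500)]  # similar people who couldn't attend

def mean_change(group):
    return sum(f - b for b, f in group) / len(group)

naive = mean_change(treated)              # ~5.0: trend plus effect, conflated
adjusted = naive - mean_change(controls)  # ~2.0: difference-in-differences
```

Without the control group you'd report the naive ~5-point gain; with it, the estimate lands near the true 2-point effect. This only works to the extent the control group really is similar, which is why dropping the earlier control cohort's data is a problem.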

Comment author: jkaufman 14 December 2015 08:11:25PM *  4 points [-]

Instead, you select from a population which is as similar as possible to the treatment group

They did this with an earlier batch (I was part of that control group) and they haven't reported that data. I found this disappointing, and it makes me trust this round of data less.

On Sunday, Sep 8, 2013 Dan at CFAR wrote:

Last year, you took part in the first round of the Center for Applied Rationality's study on the benefits of learning rationality skills. As we explained then, there are two stages to the survey process: first an initial set of surveys in summer/fall 2012 (an online Rationality Survey for you to fill out about yourself, and a Friend Survey for your friends to fill out about you), and then a followup set of surveys one year later in 2013 when you (and your friends) would complete the surveys again so that we could see what has changed.

In response to LessWrong 2.0
Comment author: cousin_it 03 December 2015 07:00:03PM *  11 points [-]

I vote for both plans at once!

1) Make the current LW read-only. All content is still accessible, but commenting and voting is disabled. The discussion section is closed as well. Let things rest for a month or so.

2) Announce that during the next year, LW will have one post per week, at a specified time. There will be an email address where anyone can send their submissions, whereupon a horribly secretive and biased group of editors will select the best one each week, aiming for Eliezer quality or higher. The prominent posters you've contacted should create enough good content for the first couple months. Voting will be disabled for posts, and enabled only for comments. There will also be one monthly open thread for unstructured discussion.

I don't think anything short of that would work. LW's problem is the decline in quality, so the fix should be quality-oriented, not quantity-oriented.

In response to comment by cousin_it on LessWrong 2.0
Comment author: jkaufman 03 December 2015 08:17:02PM 13 points [-]

LW's problem is the decline in quality, so the fix should be quality-oriented, not quantity-oriented.

I think it went the other way: demands for quality, rigor, and fully developed ideas made posting here unsatisfying (compared to the alternatives) for a lot of previously good posters.

In response to comment by jimrandomh on LessWrong 2.0
Comment author: Elo 03 December 2015 05:21:06AM 2 points [-]

does something prevent you from cross posting?

previously it was suggested to post here; then in 6 months delete the text; include a link to the text on your blog. (or just leave it)

In response to comment by Elo on LessWrong 2.0
Comment author: jkaufman 03 December 2015 08:12:37PM 2 points [-]

does something prevent you from cross posting?

Hassle, two comment threads to follow, probably bad for search rankings.

in 6 months delete the text; include a link to the text on your blog

More hassle.
