
Clean real-world example of the file-drawer effect

2 enfascination 28 March 2015 09:06AM

I've only ever seen publication bias taught with made-up or near-miss examples.  Has anyone got a really well-documented case in which:

* (About) nine people independently get the idea for the same experiment because it seems like it should be there, and they all see that nothing has been published on it, so they all work on it, and all get a (true) null result.

* The tenth experiment eventually gets published, reporting an NHST effect of about p = 0.10.

* The slow (g)rumbling of science surfaces the nine previous, unpublished versions of that experiment and someone catches it and gets it all down, with citations and dates and the specifics of whichever effect these ten people found themselves rooting around for.
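For what it's worth, the scenario above is easy to simulate. The sketch below is my own illustration (the sample size, number of labs, and p < 0.10 threshold are made-up assumptions, not taken from any real case): it runs ten identical studies of a genuinely null effect and "publishes" only those whose p-value happens to cross the threshold.

```python
import math
import random

def coin_experiment(n=200, true_bias=0.0, rng=random):
    """One study of a true-null effect: n coin flips with
    P(heads) = 0.5 + true_bias, tested against H0: P(heads) = 0.5
    using a two-sided z-test (normal approximation)."""
    heads = sum(rng.random() < 0.5 + true_bias for _ in range(n))
    z = (heads / n - 0.5) / math.sqrt(0.25 / n)
    return math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value

random.seed(42)
p_values = [coin_experiment() for _ in range(10)]

# The file drawer: null results stay unpublished; only a lucky
# study that crosses the threshold surfaces in the literature.
published = [p for p in p_values if p < 0.10]
unpublished = [p for p in p_values if p >= 0.10]
print(f"{len(published)} published, {len(unpublished)} in the file drawer")
```

Run it a few times with different seeds: every so often one of the ten null studies clears the threshold by chance, and if only that one is published, the literature records a spurious effect.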

 

The most representative real-world example I've seen lately has been Bem/psi, but, as a pedagogical example, I find it too distracting.  The ideal example would involve a more sympathetic effect, one that a sharp student or outsider would look at and say, "Yeah, I'd also have expected that effect to come through."

 

Thanks.

How to write an academic paper, according to me

31 Stuart_Armstrong 15 October 2014 12:29PM

Disclaimer: this is entirely a personal viewpoint, formed by a few years of publication in a few academic fields. EDIT: Many of the comments are very worth reading as well.

Having recently finished a very rushed submission (turns out you can write a novel paper in a day and a half, if you're willing to sacrifice quality and sanity), I've been thinking about how academic papers are structured - and more importantly, how they should be structured.

It seems to me that the key is to consider the audience. Or, more precisely, to consider the audiences - because different people will read your paper to different depths, and you should cater to all of them. An example of this is the "inverted pyramid" structure of many news articles - start with the salient facts, then the most important details, then fill in the other details. The idea is to ensure that a reader who stops reading at any point (which happens often) will nevertheless have got the most complete impression that it was possible to convey in the part that they did read.

So, with that model in mind, let's consider the different levels of audience for a general academic paper (of course, some papers just can't fit into this mould, but many can):

 


[LINK] AI risk summary published in "The Conversation"

8 Stuart_Armstrong 14 August 2014 11:12AM

A slightly edited version of "AI risk - executive summary" has been published in "The Conversation", titled "Your essential guide to the rise of the intelligent machines":

The risks posed to human beings by artificial intelligence in no way resemble the popular image of the Terminator. That fictional mechanical monster is distinguished by many features – strength, armour, implacability, indestructibility – but Arnie’s character lacks the one characteristic that we in the real world actually need to worry about – extreme intelligence.

Thanks again for those who helped forge the original article. You can use this link, or the Less Wrong one, depending on the audience.

[LINK] The errors, insights and lessons of famous AI predictions: preprint

5 Stuart_Armstrong 17 June 2014 02:32PM

A preprint of "The errors, insights and lessons of famous AI predictions – and what they mean for the future" is now available on the FHI's website.

Abstract:

Predicting the development of artificial intelligence (AI) is a difficult project – but a vital one, according to some analysts. AI predictions already abound: but are they reliable? This paper starts by proposing a decomposition schema for classifying them. Then it constructs a variety of theoretical tools for analysing, judging and improving them. These tools are demonstrated by careful analysis of five famous AI predictions: the initial Dartmouth conference, Dreyfus's criticism of AI, Searle's Chinese room paper, Kurzweil's predictions in the Age of Spiritual Machines, and Omohundro's ‘AI drives’ paper. These case studies illustrate several important principles, such as the general overconfidence of experts, the superiority of models over expert judgement and the need for greater uncertainty in all types of predictions. The general reliability of expert judgement in AI timeline predictions is shown to be poor, a result that fits in with previous studies of expert competence.

The paper was written by me (Stuart Armstrong), Kaj Sotala and Seán S. Ó hÉigeartaigh, and is similar to the series of Less Wrong posts starting here and here.

[LINK] The errors, insights and lessons of famous AI predictions

8 Stuart_Armstrong 28 April 2014 09:41AM

The Journal of Experimental & Theoretical Artificial Intelligence has - finally! - published our paper "The errors, insights and lessons of famous AI predictions – and what they mean for the future":

Predicting the development of artificial intelligence (AI) is a difficult project – but a vital one, according to some analysts. AI predictions already abound: but are they reliable? This paper starts by proposing a decomposition schema for classifying them. Then it constructs a variety of theoretical tools for analysing, judging and improving them. These tools are demonstrated by careful analysis of five famous AI predictions: the initial Dartmouth conference, Dreyfus's criticism of AI, Searle's Chinese room paper, Kurzweil's predictions in the Age of Spiritual Machines, and Omohundro's ‘AI drives’ paper. These case studies illustrate several important principles, such as the general overconfidence of experts, the superiority of models over expert judgement and the need for greater uncertainty in all types of predictions. The general reliability of expert judgement in AI timeline predictions is shown to be poor, a result that fits in with previous studies of expert competence.

The paper was written by me (Stuart Armstrong), Kaj Sotala and Seán S. Ó hÉigeartaigh, and is similar to the series of Less Wrong posts starting here and here.

Wisdom of the Crowd: not always so wise

20 tgb 01 July 2012 08:55PM

I have a confession to make: I have not been "publishing" the results of an experiment, because they were uninteresting. You may recall that some time ago I made a post asking people to take a survey so that I could look at a small variation of the typical "Wisdom of the Crowds" experiment, in which people estimate a value and the average of the crowd's estimates is better than all or almost all of the individual estimates. Since LessWrong is full of people who like to do these kinds of things (thank you!), I got 177 responses - many more than I was hoping for!

I am now coming back to this because I happened upon an older post by Eliezer saying the following:

When you hear that a classroom gave an average estimate of 871 beans for a jar that contained 850 beans, and that only one individual student did better than the crowd, the astounding notion is not that the crowd can be more accurate than the individual.  The astounding notion is that human beings are unbiased estimators of beans in a jar, having no significant directional error on the problem, yet with large variance.  It implies that we tend to get the answer wrong but there's no systematic reason why.  It requires that there be lots of errors that vary from individual to individual - and this is reliably true, enough so to keep most individuals from guessing the jar correctly. And yet there are no directional errors that everyone makes, or if there are, they cancel out very precisely in the average case, despite the large individual variations.  Which is just plain odd. I find myself somewhat suspicious of the claim, and wonder whether other experiments that found less amazing accuracy were not as popularly reported.

(Emphasis added.) It turns out that I myself was sitting upon exactly such results.
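Eliezer's "unbiased but noisy" picture is easy to reproduce numerically. This is a minimal sketch of my own (the class size of 60 and the spread of 200 beans are made-up assumptions, not data from the bean-jar study): every guesser is correct on average but individually noisy, so the crowd mean lands near the truth while few individuals beat it.

```python
import random
import statistics

random.seed(0)
TRUE_BEANS = 850

# Each student is an unbiased but noisy estimator: correct on
# average, with large individual variance.
guesses = [random.gauss(TRUE_BEANS, 200) for _ in range(60)]

crowd = statistics.mean(guesses)
crowd_err = abs(crowd - TRUE_BEANS)
beat_crowd = sum(abs(g - TRUE_BEANS) < crowd_err for g in guesses)
print(f"crowd estimate {crowd:.0f}, individuals beating it: {beat_crowd}/60")
```

The crowd mean's standard error shrinks like the individual spread divided by the square root of the group size, so with zero directional bias the average almost always beats nearly everyone. Add a shared directional bias to every guess and that advantage collapses, which is exactly what the survey results below suggest happened.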

The results are here. Sheet 1 shows the raw data and Sheet 3 shows some summary values computed from those numbers. A few values that were clearly either jokes or mistakes (like not noticing the answer was in millions) were removed. In summary: Africa's population (as of 2009, according to Wikipedia) is 1000 million, whereas the Less Wrong estimate averaged 781 million; and the first transatlantic telephone call happened in 1926, whereas the poll average was 1899.
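It's worth making the direction of the errors explicit. Recomputing from the figures reported above (no new data, just the post's own numbers), both crowd averages undershoot the truth - a systematic directional error of exactly the kind the quoted passage says crowds are supposed to lack:

```python
# Crowd averages vs. true values, as reported in the post.
results = {
    "Population of Africa (millions, 2009)": (781, 1000),
    "Year of first transatlantic telephone call": (1899, 1926),
}

for question, (crowd_avg, truth) in results.items():
    signed_error = crowd_avg - truth
    print(f"{question}: crowd {crowd_avg}, true {truth}, "
          f"signed error {signed_error:+d}")
```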

There! I've come clean!

I had deferred making this public because I thought the result I was trying to test wasn't really being tested in this experiment, regardless of the results. The idea (see my original post, linked above) was to see whether selecting between two choices would still let the crowd average out to the correct value (this two-option choice was meant to reflect the structure of some democracies). But how to interpret the results? It seemed that my selection of values was too influential, and that the average would change depending on what I picked even if everyone were to make an estimate, then look at the two options and choose the better one. So perhaps the only result of note here is that, for the questions given, Less Wrong users were not particularly great at being a wise crowd.