
fubarobfusco comments on An attempt at a short no-prerequisite test for programming inclination - Less Wrong Discussion

Post author: ShardPhoenix, 29 June 2013 11:36PM




Comment author: buybuydandavis, 30 June 2013 10:07:45AM, 3 points

An attempt at a short no-prerequisite test for programming inclination

Somebody else beat you to it and wrote papers about it. Maybe more than one somebody, but these are all I'm digging up for you.

See the discussion: http://www.codinghorror.com/blog/2006/07/separating-programming-sheep-from-non-programming-goats.html

Multiple papers on the topic are here: http://www.eis.mdx.ac.uk/research/PhDArea/saeed/

From the main page:

We (Saeed Dehnadi, Richard Bornat) have discovered a test which divides programming sheep from non-programming goats. This test predicts ability to program with very high accuracy before the subjects have ever seen a program or a programming language.

Draft paper which incorporates some of the criticism we got at Coventry mini-PPIG and the University of Kent in January 2006.

Abstract: All teachers of programming find that their results display a 'double hump'. It is as if there are two populations: those who can, and those who cannot, each with its own independent bell curve. Almost all research into programming teaching and learning has concentrated on teaching: change the language, change the application area, use an IDE and work on motivation. None of it works, and the double hump persists. We have a test which picks out the population that can program, before the course begins. We can pick apart the double hump. You probably don't believe this, but you will after you hear the talk. We don't know exactly how/why it works, but we have some good theories.
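The "double hump" the abstract describes is simply a bimodal score distribution: a mixture of two roughly normal populations. A minimal sketch of what that looks like, with entirely made-up parameters (means of 70 and 40, both with standard deviation 8; none of these numbers come from the paper):

```python
import random
import statistics

random.seed(0)

# Hypothetical exam scores for two populations, each normally
# distributed, as the abstract's "two independent bell curves" suggests.
# All parameters here are illustrative, not taken from the paper.
can_program = [random.gauss(70, 8) for _ in range(500)]
cannot_program = [random.gauss(40, 8) for _ in range(500)]
scores = can_program + cannot_program

# Count scores around each hump and in the valley between them.
low_hump = sum(35 <= s < 45 for s in scores)
valley = sum(50 <= s < 60 for s in scores)
high_hump = sum(65 <= s < 75 for s in scores)

print(low_hump, valley, high_hump)  # the valley count is far below either hump
```

Note that the pooled class average lands in the valley between the humps, where few individual students actually score, which is why reporting a single mean obscures the two-population structure the authors claim to see.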

Comment author: fubarobfusco, 30 June 2013 10:28:58AM, 2 points

Reading a little further ...

We now report that after six experiments, involving more than 500 students at six institutions in three countries, the predictive effect of our test has failed to live up to that early promise.

Comment author: Dreaded_Anomaly, 30 June 2013 07:27:25PM, 2 points

And reading a little further than that...

The test does not very accurately predict levels of performance, but by combining the results of six replications of the experiment, five in the UK and one in Australia, we show that consistency does have a strong effect on success in early learning to program but background programming experience, on the other hand, has little or no effect.