buybuydandavis comments on An attempt at a short no-prerequisite test for programming inclination - Less Wrong Discussion

4 Post author: ShardPhoenix 29 June 2013 11:36PM

Comments (68)

Comment author: buybuydandavis 30 June 2013 10:07:45AM *  3 points

An attempt at a short no-prerequisite test for programming inclination

Somebody else beat you to it and wrote papers about it. Maybe more than one somebody, but that's all I'm digging up for you.

See the discussion: http://www.codinghorror.com/blog/2006/07/separating-programming-sheep-from-non-programming-goats.html

Multiple papers on the topic are here: http://www.eis.mdx.ac.uk/research/PhDArea/saeed/

From the main page:

We (Saeed Dehnadi, Richard Bornat) have discovered a test which divides programming sheep from non-programming goats. This test predicts ability to program with very high accuracy before the subjects have ever seen a program or a programming language.

Draft paper which incorporates some of the criticism we got at Coventry mini-PPIG and the University of Kent in January 2006.

Abstract: All teachers of programming find that their results display a 'double hump'. It is as if there are two populations: those who can, and those who cannot, each with its own independent bell curve. Almost all research into programming teaching and learning have concentrated on teaching: change the language, change the application area, use an IDE and work on motivation. None of it works, and the double hump persists. We have a test which picks out the population that can program, before the course begins. We can pick apart the double hump. You probably don't believe this, but you will after you hear the talk. We don't know exactly how/why it works, but we have some good theories.

Comment author: Viliam_Bur 30 June 2013 12:11:42PM *  4 points

Now let's do some reductionism magic. Taboo "programming skills".

Which skill specifically is the one that some students have, and other students cannot be taught? If there are more skills like that, which is the simplest of them?

(If all we have is "here is a black box, we put students in, they come out in two groups", then it is not obvious whether we speak about properties of the students, or properties of the black box. Why not say: "Our teaching style of programming has a 'double hump'." instead?)

Comment author: buybuydandavis 30 June 2013 07:21:49PM 0 points

Which skill specifically is the one that some students have, and other students cannot be taught?

There is no particular, identifiable, atomic skill that they're calling "programming skills". Like any other performance or talent, it is made up of a jillion jillion component skills. And I don't see them claiming that any particular skill cannot be taught, only that it is less likely to be taught to some than others with a given amount of instruction.

They take the grade in the class as a proxy for general programming skill. That has its own issues, but I'd expect it to have decent merit on population statistics.

I don't see any "magic" coming out of further reduction, here.

Why not say: "Our teaching style of programming has a 'double hump'." instead?

Because they claim the empirical observation that the double hump is prevalent across the distribution of classes, not just in any particular class. Yes, maybe with a different teaching method, the bottom cluster could do better. Maybe I would have been a better basketball player than Kobe Bryant if someone had taught me differently as well. But they didn't. Oh well.

I recalled this story from years ago and tracked it down. Their main claim was that a particular test at the beginning of the course accurately predicted the outcome of the course in terms of their grade. Someone else mentioned that in the later papers, they say that their test is no longer predictive.

Comment author: NancyLebovitz 01 July 2013 12:03:03AM 0 points

Their main claim was that a particular test at the beginning of the course accurately predicted the outcome of the course in terms of their grade. Someone else mentioned that in the later papers, they say that their test is no longer predictive.

It's possible that people are now so much more used to computers that some elementary concepts (like the computer responding to simple cues rather than having any understanding of what you mean) are much more widespread, so those concepts aren't as useful as filters.

Comment author: Viliam_Bur 30 June 2013 09:39:41PM *  0 points

My model is that there is something (I am not sure what it is) that is necessary for programming, but we don't know how to teach it. Maybe it is too abstract to articulate, or seems so trivial to those who already know it that they don't pay conscious attention to it. (Maybe it's multiple things.) Some people randomly "get it", either at the beginning of the class, or usually long before the class. Then when the class starts, those who "have it" can progress, and those who "don't have it" are stuck.

The study suggests that this something could be: expecting the same actions to have the same results consistently (even if the person is wrong about specific results of a specific action, because that kind of mistake can be fixed easily). Sounds plausible.
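The sort of item the test used (as described in the Coding Horror post linked above) can be sketched in Python; the specific numbers here are illustrative, not taken from the paper:

```python
# Test-style question: after these three assignments run,
# what are the final values of a and b?
a = 10
b = 20
a = b

# The standard rule: assignment copies the value on the right into the
# name on the left, so a == 20 and b == 20. Other self-consistent mental
# models exist (e.g. "a and b swap values"); the claim is that applying
# any one model consistently, right or wrong, predicts success.
print(a, b)  # prints: 20 20
```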

Assuming this is true (which is not certain, as the replications seem to fail), I would still describe it as a failure of the education process. There is a necessary prerequisite skill, and we don't teach it, which splits the class into those who got it from other sources and those who didn't. -- It would be equivalent to not teaching small children the alphabet, and starting with reading whole words and sentences. The children who learned the alphabet at home would progress, the remaining children would be completely lost, we would observe the "double hump", and we would declare that the difference is probably innate.

The disappearance of the "double hump" (assuming that the original result was valid) could hint at improving educational methods.

Even with perfect education, some people will be better and some will be worse. But there will be more people with partial success. -- To use your analogy, we would no longer have the situation where some people are basketball stars, and the remaining ones are unable to understand the rules of basketball; we would also have many recreational players. -- In programming, we would have many people able to do Excel calculations or very simple Python scripts.

If consistency really is the key, that would explain why aspies get it naturally, but it seems to me that this is a skill that can be trained... at worst, by some exercise of giving students the same question a dozen times and expecting a dozen identical answers. Or something more meaningful than this, e.g. following some simple instructions consistently. It could be a computer game!
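A minimal sketch of how such an exercise might score its player (this is my own hypothetical illustration; the function name and scoring rule are not from the papers): present several surface variants of the same underlying question, then check whether the answers all follow one rule, deliberately ignoring whether that rule is the correct one.

```python
def consistency_score(answers):
    """Fraction of question groups answered self-consistently.

    `answers` maps each underlying question (e.g. "what does a = b do?")
    to the list of answers a student gave to several surface variants of
    it. The score ignores whether the answers are *correct*; it only
    asks whether they agree with each other.
    """
    if not answers:
        return 0.0
    consistent = sum(1 for group in answers.values() if len(set(group)) == 1)
    return consistent / len(answers)

# A student who always applies the same (even if wrong) model:
print(consistency_score({"a = b": ["(20, 20)"] * 3,
                         "swap":  ["(20, 10)"] * 3}))   # prints: 1.0

# A student whose model shifts between questions:
print(consistency_score({"a = b": ["(20, 20)", "(10, 20)", "(0, 20)"]}))  # prints: 0.0
```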

Comment author: ShardPhoenix 30 June 2013 10:29:11AM *  3 points

This is what I'm reacting to. IIRC a follow-up study showed their proposed test didn't work that well as a predictor, so this is a different angle on the problem.

Comment author: malcolmocean 30 June 2013 11:25:34AM 2 points

My first instinct was to link to codinghorror as well, so I think it would (have) be(en) helpful to include the "what I'm reacting to" in your initial post.

Comment author: ShardPhoenix 30 June 2013 12:11:10PM 0 points

Ok, I will.

Comment author: fubarobfusco 30 June 2013 10:28:58AM *  2 points

Reading a little further ...

We now report that after six experiments, involving more than 500 students at six institutions in three countries, the predictive effect of our test has failed to live up to that early promise.

Comment author: Dreaded_Anomaly 30 June 2013 07:27:25PM 2 points

And reading a little further than that...

The test does not very accurately predict levels of performance, but by combining the results of six replications of the experiment, five in the UK and one in Australia, we show that consistency does have a strong effect on success in early learning to program, but that background programming experience, on the other hand, has little or no effect.