People on this board have talked about programming as a gear in your brain that, to a first approximation, you have or you don't. I'm wondering if there's some well put-together resource you can direct someone with zero experience and just a web-browser to and say "if you're having fun an hour from now, you have the gear, good luck" -- maybe something on Khan academy?

(I learned to program a long time ago, and I started with BASIC program listings in my math textbook -- I don't actually know what the optimal onramps are now.)


The idea of programming as a gear is still controversial, but the specific hypothesized gear is that people who can build a consistent model of a language will be successful at programming, whereas those who can't won't be. This was tested by giving students a test on Java before they had been taught Java; their answers were checked not for correctness but for consistency. See "The Camel Has Two Humps." Even then, the test is far from perfectly predictive: ~28% of the consistent group failed, ~19% of the inconsistent group passed, and membership in the groups assigned by the test shifted over time. If you do want to test this, you can reuse the original test.

However, there have been numerous attempted replications, none of which succeeded, though none found a negative result either. They were generally either confounded by the presence of experienced programmers, set up poorly, or not statistically significant. To quote the original authors:

When we began this work we had high hopes that we had found a test that could be used as an admissions filter to reduce the regrettable waste of human effort and enthusiasm caused by high failure rates in universities' first programming courses. We can see from the experiments reported above that our test doesn't work if the intake is already experienced, and in experiment 3 didn't work at all. We cannot claim to be separating the programming goats from the non-programming sheep: experiment 3 demolishes the notion that consistent subjects will for the most part learn well, and others for the most part won't. And even in the most encouraging of our results, we find a 50% success rate in those who don't score C0 or CM2 [ie those who were inconsistent]. None the less, some of our results indicate that there may be something going on with consistency.

HT Gwern

It irritates me to no end that the original study is so much better known than the utter failure to replicate. I have to suspect that this has something to do with how conveniently it fits many programmers' notion that programmers are a special sort of person, possessed of some power beyond merely a lot of practice at programming and related skills.

I feel much the same way about dual n-back studies. There was an article this month or last about WM training with Jaeggi and Buschkuel as authors... and it mentioned not a single issue. Gah!

The more recent meta-analysis appears to support their initial conclusion.

I think that interesting results which fail to replicate are almost always better-known than the failure to replicate. I think it's a fundamental problem of science, rather than a special weakness of programmers.

http://pleasingfungus.com/Manufactoria/ perhaps?

Ah, found the paper. Doing a little more research...

Aha, found the test here. Ask someone nearby to give it to you, maybe?

Seconding Manufactoria.

I think you should give a more precise definition of the aptitude needed to be labelled has-a-gear.

I program for a living, and I would like to think that I fall among "those who can" on the bimodal distribution (if one exists). I've seen programmers and non-programmers of all levels of ability (except for far above mine, because those are hard to recognize). One man's programmer is another man's deadweight.

Individual people grow in talent until they stop (and maybe they resume later). So if there exists a test to predict whether they'll stop at some future level, it probably doesn't involve actual programming. (For instance, testing people's understanding of variable semantics is pointless unless you've taught them those semantics first.) It would have to test something else that happens to be strongly correlated with it. So

Incidentally, this was recently discussed on Programmers Stack Exchange:

I think you should give a more precise definition of the aptitude needed to be labelled has-a-gear.

And the next step for a reductionist is to split this "gear" into smaller (and smaller and smaller... if necessary) parts, and design a course to teach each one of them separately. And only then teach programming.

People have some innate differences, but I feel that speaking about innate talent is often just worshiping our ignorance as teachers.

Of course, it may turn out that the innate differences in this specific topic are too big to overcome, or that overcoming them is possible but not cost-effective... but I think we haven't tried hard enough yet.

For the record, I think programming is so measurable and has such a tight feedback loop that it is one arena in which it's relatively easy to recognize ability that far exceeds your own.

1) Code quality is fairly subjective, and in particular novice (very novice) programmers have difficulty rating code. Most professional programmers seem to be able to recognize it though, and feel awe when they come across beautiful code.

2) Code quantity can be misleading, but if you're on a team and producing a 100-line delta a day, you will notice the odd team member producing 1000-line daily deltas; coupled with even decent ability to tell whether or not that code is maintainable and efficient (in terms of functionality / loc), this is a strong indicator.

3) Actually watching a master write code is fantastic and intimidating. People that code at 60 wpm without even stopping to consider their algorithms, data structures or APIs, but manage at the end of an hour to have a tight, unit-tested, efficient and readable module.

I can think of five people that I know that I would classify as being in discrete levels above me (that is, each of them is distinguishable by me as being either better or worse than the others). I think there are some gaps in there; Jeff Dean is so mindbogglingly skilled that I can't see anyone on my list ever catching up to him, so there are probably a few levels I don't have examples for.

People that code at 60 wpm without even stopping to consider their algorithms, data structures or APIs, but manage at the end of an hour to have a tight, unit-tested, efficient and readable module.

I've never seen this or even imagined it can happen. I can't even write comments or pseudo-code that fast (without pause) because I can't design that fast.

I've done it, and it's not as impressive as it sounds. It's mostly just reciting from experience and not some savant-like act of intelligence or skill. Take those same masters into an area where they don't have experience and they won't be nearly as fast.

Actually, I think the sequences were largely a recital of experience (a post a day for a year).

I don't know about you, but I can't recall 10k LOC from experience even if I had previously written something before; seeing someone produce that much in the space of three hours is phenomenal, especially when I realize that I probably would have required two or three times as much code to do the same thing on my first attempt. If by "reciting from experience" you mean that they have practiced using the kinds of abstractions they employ many times before, then I agree that they're skilled because of that practice; I still don't think it's a level of mastery that I will ever attain.

Yeah, I can pretty much recall 10k LOC from experience. But it's not just about having written something before, it's about a truly fundamental understanding of what is best in some area of expertise which comes with having written something before (like a GUI framework for example) and improved upon it for years. After doing that, you just know what the architecture should look like, and you just know how to solve all the hard problems already, and you know what to avoid doing, and so really all you're doing is filling in the scaffolding with your hard won experience.

Not too long ago, I lost a week of work and was able to recompose it in the space of an afternoon. It wasn't the same line-for-line, but it was the same design and probably even used the same names for most things, and was roughly 10k LOC. So if I had recent or substantial experience, I can see expecting a 10x speedup in execution. That's pretty specific though; I don't think I have ever had the need to write something that was substantially similar to anything else I'd ever written.

Domain experience is vital, of course. If you have to spend all your time wading through header files to find out what the API is or discover the subtle bugs in your use of it, writing just a small thing will take painfully long. But even where I never have to think about these things I still pause a lot.

One thing that is different is that I make mistakes often enough that I wait for them; working with one of these people, I noticed that he practiced "optimistic coding"; he would compile and test his code, but by feeding it into a background queue. In that particular project, a build took ~10 minutes, and our test suite took another ~10 minutes. He would launch a build / test every couple of minutes, and had a dbus notification if one failed; once, it did, and he had to go back several (less than 10, I think) commits to fix the problem. He remembered exactly where he was, rebased, and moved on. I couldn't even keep up with him finding the bug, much less fixing it.

The people around here who have a million lines of code in production seem to have that skill, of working without the assistance of a compiler or test harness; their code works the first time. Hell, Rob Pike uses ed. He doesn't even need to refer to his code often enough to make it worthwhile to easily see what he's already written (or go back and change things)---for him, that counts as an abnormal occurrence.

My assumption was that people who can't seem to learn to program can't get to the gut-level belief that computers don't use natural language-- computers require types of precision that people don't need.

However, this is only a guess. Would anyone with teaching experience care to post about where the roadblocks seem to be?

Also, does the proportion of people who can't learn to program seem to be dropping?

On the other hand, I did the JavaScript tutorial at Codecademy, and it was fun of a very annoying sort-- enough fun that I was disappointed that there only seemed to be a small amount of it.

However, I didn't seem to be able to focus enough on the examples until I took out the extra lines and curly braces-- I was literally losing track of what I was doing as I went from one distant line to another. If I pursue this, I might need to get used to the white space-- I'm sure it's valuable for keeping track of the sections of a program.

My working memory isn't horrendously bad-- I can reliably play dual 3-back, and am occasionally getting to 4-back.

If there are sensory issues making programming difficult for a particular person, this might be hard to distinguish from a general inability.

I've taught courses at various levels, and in introductory courses (where there's no guarantee anyone has seen source code of any form before), I've been again and again horrified by students months into the course who "tell" the computer to do something. For instance, in a C program, they might write a comment to the computer instructing it to remember the value of a variable and print it if it changed. "Wishful" programming, as it were.

In fact, I might describe that as the key difference between the people who clearly would never take another programming course, and those that might---wishful thinking. Some never understood their own code and seemed to write it like monkeys armed with a binary classifier (the compiler & runtime, either running their program, or crashing) banging out Shakespeare. These typically never had a clear idea about what "program state" was; instead of seeing their program as data evolving over time, they saw it as a bunch of characters on the screen, and maybe if the right incantations were put on the screen, the right things would happen when they said Go.

Common errors in this category (a few are sketched in code below) include:

  • Infinite loops, because "the loop will obviously be done when it has the value I want".
  • Uninitialized variables, because "it's obvious what I'm computing, and that you start at X".
  • Calling functions that don't exist, because, "well, it ought to".
  • NOT calling functions, because "the function is named PrintWhenDone, it should automatically print when the program is done".
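To make the flavor of these mistakes concrete, here is a small illustrative sketch (in Python rather than the C those courses used; the variable names, the "wishful" comment, and PrintWhenDone are all invented for illustration):

# Each fragment below shows one of the errors above, deliberately broken:

# 1. Infinite loop: nothing in the body ever moves `total` toward 100.
total = 0
while total != 100:
    print("surely the loop will be done when total is what I want")

# 2. Uninitialized variable: `count` is used before it is ever given a value.
count = count + 1

# 3. Calling a function that doesn't exist: nothing named PrintWhenDone was defined.
PrintWhenDone(total)

# What the student actually needed: state every step explicitly.
total = 0
for number in range(1, 11):
    total = total + number
print("done, total =", total)   # prints: done, total = 55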

These errors would crop up among a minority of students right up until the class was over. They could be well described by a gut-level belief that computers use natural language; but this only covers 2-6% of students in these courses*, whereas my experience is that less than 50% of students who go into a Computer Science major actually graduate with a Computer Science degree; so I think this is only a small part of what keeps people from programming.

*In three courses, with a roughly 50-person class, there were always 1-3 of these students; I suspect the median is therefore somewhere between 2 and 6%, but perhaps wildly different at another institution and far higher in the general population.

I think I'm over it, but back in college (the 70s), I understood most of the linguistic limitations of computers, but I resented having to accommodate the hardware, and I really hated having to declare variables in advance.

To some extent, I was anticipating the future. There's a huge amount of programming these days where you don't have to think about the hardware (I wish I could remember the specific thing that got on my nerves) and I don't think there are modern languages where you have to declare that something is a variable before you use it.

Of course, hating something isn't the same thing as not being able to understand that you need to do it.

Not graduating with a Computer Science degree isn't the same thing as not having a programming gear. What fraction of that 50% get degrees in other fields that require programming? What proportion drop out of college, probably for other reasons? What proportion can program, but hate doing it?

In my opinion, almost all of that 50% (that drop out) could program, to some extent, if sufficiently motivated.

A great many Computer Science students (half? more than half?) love programming but hit a wall when they come to the theoretical side of computer science. Many of them force themselves through it, graduate, and become successful programmers. Many switch majors to Information Technology, and for better or for worse will end up doing mostly system administration work in their careers. Some switch majors entirely and become engineers. I actually think we do ourselves a disservice by failing to segment Computer Science from Software Engineering, a distinction made at very few institutions and, when made, often to the detriment of Software Engineers, regrettably.

So to answer your question: of the 50% that drop out, I think most end up as sub-par programmers, but 80% of that 50% "have programming gear", to the extent that such a thing exists.

I taught Python at a computer science school (students there already had two years of scientific studies after the baccalauréat), and I was amazed to see how hard it was for some of them to understand that in Python:

>>> 4+2
6
>>> "4"+"2"
'42'

So yes, I guess the key is understanding what types are. The same kind of issue arises with the difference between using a variable and using the variable's name.

Now, I'm not sure how much this is teachable, and when (i.e., maybe it's a kind of skill you have to learn when you're young to really grasp). I started programming when I was 11, so there may be something to that, but I don't have much data on it.

To be fair, it's not really enough to know what types are to get this one right. You have to understand that the + operator is overloaded based on the types of its operands; that is, + actually means several different things, depending on the operand types. The experience people have of + meaning numerical addition might be interfering with their learning. Maybe if someone else's students had problems with it, they could try defining a function putTogether (a, b) and telling the students that it's a mysterious black box that does one arbitrary thing for numbers and a completely different thing for strings. Then you could leave revealing that it's actually the language's + operator that has this strange behavior for later.
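A minimal sketch of that exercise, assuming putTogether is introduced exactly as suggested (the name and the reveal are just the hypothetical above, not anything from an actual course):

def putTogether(a, b):
    # The "mysterious black box": secretly just the language's + operator.
    return a + b

putTogether(4, 2)      # -> 6     (numeric addition)
putTogether("4", "2")  # -> '42'  (string concatenation)
# The later reveal: putTogether(a, b) is literally a + b, and + means
# different things depending on the types of its operands.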

Couldn't you lead them to guess by themselves, by asking them to guess the result of a series of expressions like:

4+2

"Hel" + "lo"

"Di" + "Caprio"

"Jack" + "Black"

"Jack" + " Black"

"ABCD" + "EFGH"

"1234" + "5678"

Maybe insert an "ABCD" + "1234" in between your last two expressions.
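For reference, here is what a Python session would print for that sequence (including the suggested extra line), so the pattern the students are meant to notice is visible:

>>> 4 + 2
6
>>> "Hel" + "lo"
'Hello'
>>> "Di" + "Caprio"
'DiCaprio'
>>> "Jack" + "Black"
'JackBlack'
>>> "Jack" + " Black"
'Jack Black'
>>> "ABCD" + "EFGH"
'ABCDEFGH'
>>> "ABCD" + "1234"
'ABCD1234'
>>> "1234" + "5678"
'12345678'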

Maybe you'd like to try Python (there are some more tutorials listed here; specifically, Learn Python the Hard Way, #2 in the Python section, is a nice next step after Codecademy); it has a "cleaner" syntax, in that it doesn't require braces or so many brackets, which could help you practice without so many distractions.

(And yes, once you've practiced more, you'll be able to keep track of more of the program in your head, and so the white space becomes a navigational aid rather than a hindrance.)
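For instance, a minimal sketch of what "no braces" looks like in practice (the example itself is made up, not from any tutorial): blocks are marked by indentation alone.

def greet(names):
    for name in names:            # this block is delimited only by indentation
        if name:
            print("Hello, " + name)
    print("done")                 # dedenting ends the loop

greet(["Ada", "Grace"])           # prints Hello, Ada / Hello, Grace / done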

My experience with my friends without the gear suggests that a pretty good test for adults for the programming gear is to see if they have retained certain kinds of knowledge of arithmetic that schools everywhere try to teach young children.

E.g. ask multiple-choice questions like, "Express .2 as a fraction," and, "Express 1/4 as a decimal," listing 1/2 as one of the choices for the first question.

Another decent one, I am guessing, is: "Which is a better deal for someone who knows they are probably going to keep taking Zowie pills for a long time: a bottle of 60 Zowie pills for $45 or a bottle of 100 pills for $80?" I.e., a simple algebra word problem of the kind most people with economic concerns keep in practice with just by being consumers.
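(For what it's worth, a quick check of the arithmetic behind that question, nothing more:)

45 / 60    # -> 0.75 dollars per pill for the 60-pill bottle
80 / 100   # -> 0.80 dollars per pill for the 100-pill bottle
# So the smaller bottle is actually the better per-pill deal here.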

Note: this is me replying to myself (which I concede is a little lame, and maybe I shouldn't have.)

I forgot my favorite question of this type, which, BTW, a couple of doctors I consulted did not seem to be able to answer: how many micrograms in .05 milligrams?

In other words, my hypothesis is that the way to identify the "programming gear" is to test knowledge of some really simple "formal system" such as arithmetic or metric-system prefixes that one would expect adults with a practical command of the simple formal system to be well-rehearsed in.
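(And the worked conversion for the question above, since the point is that it's simple rehearsal of metric prefixes rather than anything deep:)

0.05 * 1000   # milligrams to micrograms: there are 50 micrograms in 0.05 mg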

Note that the second one is answered not by computing the prices per pill, but by noticing that buying in bulk is nearly always cheaper (and possibly that it has to be so if manufacturers are trying to make a profit).

Actually, it is not uncommon in the U.S. for big retail chains like Whole Foods to violate this expectation that larger quantities are nearly always cheaper per unit.

Why do they do that? Because it is a way to distinguish between buyers who are willing to do the calculation and those who are not. Well, to be more precise, it is a way to get a higher price from those unwilling to do the math while at the same time retaining the custom of those who are willing. Practices of this kind are known by economists and marketing professionals as "segmenting the market", retail discount coupons being an older and more often cited example.

Is Whole Foods exempt from listing unit price?

I doubt it.

There is a supermarket a couple of hundred yards from here, so I went over there, where I learned the following.

Sale items are exempt from the requirement to display the unit price.

The unit price on one brand of chickpeas was expressed as dollars per ounce. Next to it was another brand of chickpeas whose unit price was expressed in dollars per can.

Hope that helps.


That rule gives a wrong answer here.

I've taught C, Java and Python at a university and (a little) at the high school level. I have noticed two simple things that people either surmount or get stuck on. The first seems to be even a basic ability to keep a formal system in mind; see the famous Dehnadi and Bornat paper. The second, I have heard less about: in programming, it's the idea of scope.

The idea of scope in almost all modern programming languages goes like this:

  • A scope starts at some time (some place in the code), and ends somewhere later.
  • A scope can start before another ends; if so, it has to end before the "outer" scope does.
  • Inside a scope, objects can be created and manipulated; generally even if another scope has started.
  • Unless something special is done, objects no longer exist after a scope ends.
  • Pivotally (this seems to be the hardest part), an object can be created with one name in an outer scope and be referred to by a different name in an inner scope. Inner scopes can likewise create and manipulate objects with the same names as objects in an outer scope without affecting the objects in that outer scope (see the sketch below).
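A small Python sketch of those rules, especially the last one (the names double, n, and x are invented purely for illustration):

x = 10                      # created in the outer scope

def double(n):              # calling double() opens an inner scope
    n = n * 2               # 'n' is the inner scope's name for the value passed in
    x = 0                   # a *different* 'x', local to the inner scope
    return n                # the inner scope ends here; its 'n' and 'x' vanish

print(double(x))            # prints 20: the outer x was referred to as 'n' inside
print(x)                    # still prints 10: the outer 'x' was never affected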

It's really hard for me to think of an analogous skill in the real world to keeping track of N levels of renaming (which may be why it gives students such difficulty?). The closest I can think of is function composition; if you don't have to pick your way through symbolically integrating a composed function where the variable names don't match, I have pretty high confidence that you can manage nested scopes.

EDIT: There are two other, well-known problems: recursion and pointers. I've heard stories about students who were okay for a year or two of programming courses but never "got" recursion, or never understood pointers, and had to change majors. I've seen students have enormous difficulty with both; in fact, I've passed students who never figured one or the other out but managed to grind through my course anyway. I don't know whether they dropped out, figured it out as their classes got harder, or just kept faking it (I had team members through grad school who couldn't handle more than basic recursion). I'm not inclined to classify either as "programming gear" that they didn't have, but I don't have data to back that up.

Is there a reason to use the same variable name within and outside a scope? It seems like a fertile source of errors.

I can see that someone would need to understand that reusing names like that is possible as a way of identifying bugs.

My post didn't indicate this, but the most common source of scope is functions; calling a function starts a new scope that ends when the function returns. Especially in this case, it does often make sense to use the same variable name:

posterior = ApplyBayes(prior, evidence)
...
def ApplyBayes(prior, evidence):
    ...

Will have prior=prior, evidence=evidence, and is a good naming scheme. But in most languages, modifying 'evidence' in the function won't affect the value of 'evidence' outside the scope of the function. This sometimes becomes confusing to students when the function above gets called like so:

posterior = ApplyBayes(prior, evidence1)
posterior = ApplyBayes(posterior, evidence2)
posterior = ApplyBayes(posterior, evidence3)

Because their previous model relied on the names being the same, rather than the coincidence of naming being merely helpful.

Overall, I would say that this is still a fertile source of errors, but in some situations the alternative is to have less readable code, which is also a fertile source of errors and makes fixing them more difficult.
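A runnable toy version of the point above (this ApplyBayes is a stand-in with made-up arithmetic, not real inference code), showing that rebinding a parameter inside the function leaves the caller's variable alone:

def ApplyBayes(prior, evidence):
    evidence = evidence * 2      # rebinds only the function's own 'evidence'
    return prior + evidence      # stand-in arithmetic, not actual Bayes

prior, evidence = 1, 3
posterior = ApplyBayes(prior, evidence)   # the caller's names happen to match
print(posterior)   # 7
print(evidence)    # still 3: the caller's 'evidence' is unchanged
posterior = ApplyBayes(posterior, 10)     # works just as well with different names
print(posterior)   # 27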

Your confusion is due to using "scope", which is actually a lexical concept. What you're dealing with here is variable substitution: in order to evaluate a function call such as posterior = ApplyBayes(prior, evidence1), the actual function arguments need to be plugged into the definition of ApplyBayes(·, ·). This is always true, regardless of what variable names are used in the code for ApplyBayes.

I certainly hope that I'm not confused about my word choice. I write compilers for a living, so I might be in trouble if I don't understand elementary terms.

In all seriousness, my use of the word "scope" was imprecise, because the phenomenon I'm describing is more general than that. I don't know of a better term though, so I don't regret my choice. Perhaps you can help? Students that I've seen have difficulty with variable substitution seem to have difficulty with static scoping as well, and vice versa. To me they feel like different parts of the same confusion.

On a related note, I once took aside some of my students who were having great difficulty getting static scoping, and tried to teach them a bit of a dynamically-scoped LISP. I had miserable results, which is to say that I don't think the idea of dynamic scope resonated with them any more than static scope did; I was hoping maybe it would, and that there were "dynamic scoping people" and "static scoping people". Maybe there are; my experiment is far from conclusive.

EDIT: Hilariously, right after I wrote this comment the newest story on Hacker News was http://news.ycombinator.com/item?id=4534408, "Actually, YOU don't understand lexical scope!". To be honest, the coincidence of the headline gave me a bit of a start.

This seems better suited to the open thread.

I asked a programmer friend this exact question once. He told me to work through The Little LISPer / Schemer.

Perhaps the iPad app Cargo-Bot.


I don't know whether the existence of such a gear is plausible. But to your point, I might say:

"Try this. If you're having fun an hour from now, you have the gear. Good luck!"

As for on-ramps, I would start with HTML as an introduction to thinking like a programmer, and then transition over to Python. But opinions vary, so seek the advice of more experienced programmers than myself.


Not HTML! Not HTML! In addition to the obvious shortcoming of not being a programming language, HTML is confusing and often vague. Start with Python, Ruby, or a Lisp.

or a lisp.

Not Lisp! Not Lisp! It's a great language, but it has no syntax.

I second Python.

it has no syntax.

I've usually heard that as the reason to give Lisp to a new programmer. You don't want them thinking about fine details of syntax; you want them thinking about manipulations of formal systems. Add further syntax only when syntax helps, instead of hinders.

What's the argument for preferring a more syntax-ful language?

I would object to Lisp because it has scary parentheses everywhere. It might be intimidating to a novice.

In fact, I also think Python is good, precisely because there's not too much syntax, especially at the beginning.

People can always find things in surface syntax to object to. Python's whitespace is pretty unpopular with people who think all "normal" languages have to have curly braces — as well as with some folks who grew up with Fortran and think that significant whitespace equals dinosaurity.

The interesting thing about Lisp is not its surface syntax, but the relationship between code and data. The textual syntax of Lisp is a way of expressing data structures; Lisp code is defined in terms of trees, not text. Most languages don't make the syntax tree of the code available to the programmer; it's hidden away as internal data structures within the compiler.

True, and that makes it a good language to be familiar with, I'm just not convinced it's a good language to start with.

If there is a site that will let you play with strictly-validated HTML or something like that, I think that could be a good introduction to the idea of precise syntax without having to worry about algorithms or variables at first. But I agree that typically HTML is a bit too loose.

In fact now that I think about it, teaching syntax and semantics separately (eg with a stricter subset of HTML for the first and some sort of graphical programming thing like Scratch for the second) could be helpful for beginners.

Someone awesome on here recommended Learn Python the Hard Way. I've had school off since Tuesday and I've been kicking its ass since. It's really fun. I thought it'd be neat to test out what my abilities are like on Project Euclid.

I've solved three so far. I'm particularly proud of coming up with a program to do the Fibonacci sequence. It's a simple program, and probably not as efficient as it could be, but I didn't look at any spoilers and feel like a diabolical genius after having solved it.
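(Not the parent's actual code, of course, but for anyone reading along who hasn't tried it, one common shape such a program takes is a simple iterative loop:)

def fib(n):
    # Return the first n Fibonacci numbers, iteratively.
    a, b = 0, 1
    out = []
    for _ in range(n):
        out.append(a)
        a, b = b, a + b
    return out

print(fib(10))   # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]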

That's great! I hope you keep working on it.

I assume you mean Project Euler? If so, I heartily second that, and I have introduced at least one person to programming (in Python) via it, and she was extremely enthusiastic about it. (Admittedly, she was/is extremely mathematically talented, so there is a confounding factor there.)

It's a simple program, and probably not as efficient as it could be, but I didn't look at any spoilers and feel like a diabolical genius after having solved it.

For me, this is one of the best bits about solving Project-Euler-esque questions: often one can make progress and solve a question with a relatively simple (but still really cool!) program, but there are always more tricks to learn (how to cut the run time in half, how to halve the number of lines of code, etc.), and so more chances to be a diabolical genius!

And then coming back to a few of the questions and solving them in a completely different language to see how neat/fast/short one can make the program that way (for people who started with Python, this might mean experimenting with C or assembly or a Lisp or Haskell).

Try Ruby.

I've learned how to program in C++, but to someone with no background, normally is taught pseudocode. Assuming the person has some tendency to think in terms of inferences, not random connections.

to someone with no background, normally is taught pseudocode.

Tip: you can turn this into standard English syntax in one of two ways: (1) delete the word "to" and the comma after "background"; or alternatively, (2) change "normally is taught pseudocode" to "pseudocode is normally taught".

(Apologies if you're actually a native English speaker and the above was merely a typo -- but it pattern-matches to the calquing of syntax from another language, e.g. one of the Romance languages; and your name suggests that you might be a Portuguese speaker.)