Most people believe very strongly that the best way to learn is to learn by doing. Particularly in the field of programming.

I have a different perspective. I see learning as very dependency based. Ie. there are a bunch of concepts you have to know. Think of them as nodes. These nodes have dependencies. As in you have to know A, B and C before you can learn D.

And so I'm always thinking about how to most efficiently traverse this graph of nodes. The efficient way to do it is to learn things in the proper order. For example, if you try to learn D without first understanding, say, A and C, you'll struggle. It'd be more efficient to identify what your holes are (A and C) and address them first before trying to learn D.
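As a minimal sketch of what I mean (the concept names and dependencies here are made up; Python's standard-library graphlib does the ordering):

```python
from graphlib import TopologicalSorter

# Hypothetical concept graph: each concept maps to the set of
# concepts that must be learned before it.
concepts = {
    "D": {"A", "B", "C"},  # can't learn D before A, B and C
    "C": {"A"},            # can't learn C before A
    "B": set(),
    "A": set(),
}

# static_order() yields one efficient traversal: every concept
# appears only after all of its prerequisites.
print(list(TopologicalSorter(concepts).static_order()))
# e.g. ['B', 'A', 'C', 'D']
```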

I don't think that the "dive into a project" approach leads to an efficient traversal of this concept graph. That's not to say that it doesn't have its advantages. Here are some:

  1. People tend to find the act of building something fun, and thus motivating (even if it has no* use other than as a means to the end of learning).
  2. It's often hard to construct a curriculum that is comprehensive enough. Doing real-world projects often forces you to do things that are otherwise hard to address.
  3. It's often hard to construct a curriculum that is ordered properly. Doing real-world projects is often a reasonably efficient way of traversing the graph of nodes.

Personally, I think that as far as 2) and 3) go, projects sometimes have their place, but they should be combined heavily with some sort of more formal curriculum, and the projects should be focused ones. My real point is that the tradeoffs should always be taken into account, and that an efficient traversal of the concept graph is largely what you're after.

I should note that I feel most strongly about projects being overrated in the field of programming. I also feel rather strongly about it for quantitative fields in general. But in my limited experience with non-quantitative fields, I sense that 2) and 3) are too difficult to do formally and that projects are probably the best approximations (in 2015; in the future I anticipate smart tutors being way more effective than any project ever was or can be). For example, I've spent some time trying to learn design by reading books and stuff on the internet, but I sense that I'm really missing something that is hard to get without doing projects under the guidance of a good instructor.

What do you guys think about all of this?

Side Notes:

*Some people think, "projects are also good because when you're done, you've produced something cool!". I don't buy this argument.
  • Informal response: "C'mon, how many of the projects that you do as you're learning ever end up being used, let alone produce real utility for people?".
  • More formal response: I really believe in the idea that productivity of programmers differs by orders of magnitude. Ie. someone who's 30% more knowledgeable might be 100x more productive (as in faster and able to solve more difficult problems). And so... if you want to be productive, you'd be better off investing in learning until you're really good, and then start to "cash in" by producing.
1. Another thing I hate: when people say, "you just have to practice". I've asked people, "how can I get good at X?" and they've responded, "you just have to practice". And they say it with that condescending sophisticated cynicism. And even after I prod, they remain firm in their affirmation that you "just have to practice". It feels to me like they're saying, "I'm sorry, there's no way to efficiently traverse the graph. It's all the same. You just have to keep practicing." Sorry for the rant-y tone :/. I think that my System 1 is suffering from the illusion of transparency. I know that they don't see learning as traversing a graph like I do, and that they're probably just trying to give me good advice based on what they know.

2. My thoughts on learning to learn.

Comments:

You can't determine what the most effective ways of learning are by sitting and thinking. What does the empirical evidence say?

Those "100x programmers": if you try to identify some and look at their history, are they distinguished by having learned things in an unusual order? By being extraordinarily clever? By working very hard? By starting very young?

When you have acquired a skill and become demonstrably good at it (I mean: there is actual external evidence, agreed by others, that you're better than most), has it generally been as a result of carefully ordered theoretical learning or has there been an important element of practice?

Are you sure you aren't setting up a false dichotomy, between just diving into a project and carefully ordered theoretical learning? You do mention the possibility of combining the two but jump immediately (with no evidence or argument I can see) to the assertion that the projects should be tightly-focused ones in the service of a carefully designed formal curriculum.

It seems to me -- but this too is based on sitting and thinking rather than on much empirical evidence -- that we should expect "doing" to be very important in learning. Your brain is a network of neurons, and everything we know about the artificial neural networks we've studied suggests that the way to teach them to do a thing involves having them do that thing repeatedly and adjusting to make them do it better. (Obvious but important caveat: real neurons do not behave the exact same way as the ones in ANNs; real brains are wired in much more complicated ways than our ANNs; there is no guarantee that what's true of one is true of the other; the above is intended more as intuition pump than as formal argument.) No one would expect to be able to learn (say) to play the trumpet well just by reading books; I don't see why anyone should expect to be able to learn to write parsers or prove theorems in enumerative combinatorics or find effective trading strategies or design amplifiers whose output sounds good, just by reading books. (Or attending lectures, or anything else that doesn't involve a lot of "doing".)
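To make "having them do that thing repeatedly and adjusting" concrete, here is a minimal sketch in plain Python (a single made-up weight standing in for a whole network, fit by gradient descent; the numbers are arbitrary):

```python
# Minimal "learning by doing" loop: try the task, measure the
# error, adjust, repeat. One weight stands in for a network.
w = 0.0                      # the "skill" being learned
lr = 0.01                    # learning rate

for step in range(1000):
    x = (step % 10) + 1      # a practice input
    error = w * x - 3.0 * x  # how wrong this attempt was (target: 3x)
    w -= lr * error * x      # nudge the weight to do better next time

print(round(w, 3))           # converges toward 3.0
```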

And that isn't because of gaps in the dependency tree; it's because "doing" is a very different activity from explicit "learning" and brings about different kinds of changes in the brain, and you need those changes as well as the ones brought about by explicit "learning" if you want to get good at things. (The way it feels from the inside, to me, is: Formal learning can enable you to do a thing by consciously working out the steps, but it's a much less effective way of building "intuition" and "taste" and "fluency" than actual practice; and if you want to be really good at something, you need those.)

I agree with you. Projects can be superficial, showy, time-sinks, warm-fuzzy-feel-goods, or outright meaningless soul drains. I have heard that project worship is a malignant disease in the education system and academia. Professors are assessed for tenure based on the quantity of projects completed, without a thought given to their ability to teach and hardly a glance at the actual merit of their projects. In K-12 schools, endless projects can cover up the lack of meaningful content in a curriculum.

On the other hand, projects seem wholly appropriate for demonstrating that you have a firm grasp of nodes A, B, C, and D. In fact, doesn't knowing that there is a project employing these concepts help many people pay closer attention since they have to imagine a concept's possible applications? However, the knowledge, not the project, must be the goal. When we make projects the goal, people bandy projects around to represent their alleged competence.

Practice doesn't make your knowledge complete. It reveals where your knowledge is lacking. There's a difference.


As gjm mentioned, don't bother thinking through straightforward problems that already have plenty of empirical data on them. Just look up the data. Two minutes on Google brought me to this study:

In-class activities led to higher overall scores than any other teaching method while lecture methods led to the lowest overall scores of any of the teaching methods.

That's just one example. I'm not going to bother with a comprehensive review of existing research. I've heard from a wide enough variety of sources I consider reliable that project-based learning is extremely effective, so I've always considered it a low probability that careful research on the issue would reveal anything especially useful. Also, your proposed problem of learning in the right order has absolutely no relationship with project-based learning. If you need to learn A before learning B, then choose a project that focuses on A before picking a B-based project. Both active and passive learning can lead to A, B, C, and D being taught out of order, and both can be used to teach A, B, C, and D in order as well.


I approve of learning-by-doing simply because the communicable is a subset of the learnable or knowable.

And often communicating knowledge via words is not the fastest way to transmit it. Words are high-bandwidth communication if and only if both parties know what experiences those words refer to, i.e. there is shared experience. But it may be hard to describe an orange to Eskimos; it is easier to hand one over and say "this".

Caveat: this really depends on the teaching methods. For example, videos with exercises are better than just books, and even books with exercises are better than books alone.

A "perfect e-book" would be an AI-mentor, correcting your mistakes, at that level there is no difference anymore.

Other reasons: we often pay no attention to a theory until we see why it is useful in practice. For example, in school they made me memorize the definition of OOP (inheritance, polymorphism, encapsulation) and I just memorized it and barfed it back without being interested in it. Many years later I read that it is all about avoiding complicated repetitive case statements and I got enlightened. This was so much more useful than our hypothetical OOP examples of modelling a toaster. I gave no shits about modelling toasters. But when I was doing something actually useful, like a script that pulls reports from a database into Excel and emails them to a boss so that I don't fucking have to do them manually, and I got tired of repetitively writing "case salesreport do this, case purchasereport do that", the same case statement all over, then this description just made sense: it leads to actually better expressivity.
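To illustrate that enlightenment (a toy sketch with invented report classes, not the actual script):

```python
# Before: the same case statement repeated everywhere.
def build_report_v1(kind):
    if kind == "salesreport":
        return "sales rows from the database"
    elif kind == "purchasereport":
        return "purchase rows from the database"

# After: each report type knows how to build itself, so the
# case statement disappears from every call site.
class Report:
    def build(self):
        raise NotImplementedError

class SalesReport(Report):
    def build(self):
        return "sales rows from the database"

class PurchaseReport(Report):
    def build(self):
        return "purchase rows from the database"

def email_to_boss(report: Report):
    # stand-in for the real Excel-and-email step
    print("emailing:", report.build())

for r in (SalesReport(), PurchaseReport()):
    email_to_boss(r)
```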

It is often useful to go through the motions first, realize that HOLY GEE SHIT these motions really make stuff happen OMG my code just drew a bouncy ball! Then become very curious about the theory of why, and learn it voraciously. When we do it the other way around we get college students who boredly memorize theory because they have no idea what it is for.

I think the "learn to program by programming" adage came from a lack of places teaching the stuff that makes people good programmers. I've never worked with someone who has gone through one of the new programming schools, but I don't think they purport to turn out senior-level programmers, much less 99th-percentile programmers. As far as I can tell, folks either learn everything beyond the mechanics and algorithms of programming from their seniors in the workplace or discover it for themselves.

So I'd say that there are nodes on the graph that I don't have labels for, and are not taught formally as far as I know. The best way to learn them is to read lots of big well written code bases and try to figure out why everything was done one way and not some other. Second best then maybe is to write a few huge code bases and figure out why things keep falling apart?

As far as I can tell, folks either learn everything beyond the mechanics and algorithms of programming from their seniors in the workplace or discover it for themselves.

... or from Stack Overflow / Wikipedia, no? When encountering a difficult problem, one can either ask someone more knowledgeable, figure it out himself, or look it up on the internet.

I'm talking about things on the level of selecting which concepts are necessary and useful to implement in a system or higher. At the simplest that's recognizing that you have three types of things that have arbitrary attributes attached and implementing an underlying thing-with-arbitrary-attributes type instead of three special cases. You tend to get that kind of review from people with whom you share a project and a social relationship such that they can tell you what you're doing wrong without offense.
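A minimal sketch of that refactoring, with invented names:

```python
# Instead of three near-identical special cases...
#   class Widget: name, color, size
#   class Gadget: name, weight, voltage
#   class Gizmo:  name, price, material
# ...one underlying type that carries arbitrary attributes:
class Thing:
    def __init__(self, name, **attributes):
        self.name = name
        self.attributes = dict(attributes)

widget = Thing("widget", color="red", size=3)
gadget = Thing("gadget", weight=1.2, voltage=5)
print(widget.attributes["color"], gadget.attributes["voltage"])
```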

When doing a project, you can learn new things... or you can keep repeating your old mistakes.

As a programmer, I am always asked at job interviews how many years of practice I have; preferably years spent using exactly the same system as the company does. Obviously, the idea is that more years = better. One year makes you not a noob, three years make you a senior developer, and after five or seven years you are considered an expert, especially if you spent all those years using the very same systems as the company that is considering hiring you.

I also know a guy who spent more than 10 years as a database application developer, who has never heard of the concept of database normalization, does not understand why primary keys are supposed to be unique, has never heard of refactoring, whose code consists of thousand-line undocumented functions, and... okay, I'll stop here, because I could go on forever.

So, I guess it matters a lot how specifically you approach doing your projects.

So, I guess it matters a lot how specifically you approach doing your projects.

And whether your coworkers let you get away with thousand-line undocumented functions.

I see learning as very dependency based. Ie. there are a bunch of concepts you have to know. Think of them as nodes. These nodes have dependencies. As in you have to know A, B and C before you can learn D.

Spot on. This is a big problem in mathematics education; prior to university a lot of teaching is done without paying heed to the fundamental concepts. For example - here in the UK - calculus is taught well before limits (in fact limits aren't taught until students get to university).
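The dependency is visible in the definition itself: the derivative is a limit, so teaching calculus before limits skips a prerequisite node.

```latex
f'(x) = \lim_{h \to 0} \frac{f(x+h) - f(x)}{h}
```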

Teaching is all about crossing the inferential distance between the student's current knowledge and the idea being taught. It's my impression that most people who say "you just have to practice" say so because they don't know how to cross that gap. You see this often with professors who don't know how to teach their own subjects because they've forgotten what it was like not knowing how to calculate the expectation of a perturbed Hamiltonian. I suspect that in some cases the knowledge isn't truly a part of them, so that they don't know how to generate it without already knowing it.

Projects are a good way to help students retain information (the testing effect) and also train appropriate recall. Experts in a field are usually experts because they can look at a problem and see where they should be applying their knowledge - a skill that can only be effectively trained by 'real world' problems. In my experience teaching A-level math students, the best students are usually the ones that can apply concepts they've learned in non-obvious situations.

You might find this article I wrote on studying interesting.

Teaching is all about crossing the inferential distance between the student's current knowledge and the idea being taught. It's my impression that most people who say "you just have to practice" say so because they don't know how to cross that gap.

When the specific inferential distance is really, really small, people can cross it by doing. This is how things were invented for the first time. And repeating this discovery on your own can be a great feeling that gives you confidence and motivation. So it could be a good teaching technique to do this... as long as you have a sufficiently good model of your student, so you know what exactly counts as a "really small distance", and if you later check whether the new concept was understood correctly.

So I imagine that while some teachers may really use this as an excuse when they don't know how to teach, I would be charitable and say that a lot of them probably do not have a correct understanding of how exactly this works (that very small inferential distances can be crossed easily, but large ones cannot), so they just try copying someone else's style and fail. Actually, sometimes they randomly succeed, because once in a while they have a student who happens to be really close to the new concept, and this prevents them from giving up their wrong ideas about teaching.


Math education is a special case, as the students who choose it may not care so much about its practical use. But in e.g. civil engineering the students will be bored by a theory if they don't have hands-on experience of how it helps make brick-laying better.

I went to a business school; our teachers' problem was that we were bored and unmotivated to learn, uninterested in the material; we just wanted a paper. I think this does not happen in math.

Approaching theory through practical problems was helpful in this. The smart business school teacher starts explaining theory with "you know this guy who just lost a bunch of money?" That makes people listen.

There's no need to just compare tutorless project work with curriculum-based learning.

Of course doing a programming project while having a mentor who reviews your code and points out areas of improvement will lead to better learning than simply doing your project alone.

Being 100x more productive is about not solving hard problems you don't need to. Spending time thinking about ways to avoid the problem often pays off (feature definition, code reuse, slow implementations, etc.). Many of the best practices that you read about solve problems you wish you had - I wish my problem was poor documentation, because that would mean someone actually cares to use it. I was always surprised by how bad the code was out in the wild until I realized it was survivorship bias - the previous owner deferred solving some problem for a long time.

I mostly taught myself to program. Did an intro class freshman year of college. Five years later my adviser had me tackle a certain math problem using the computer and to do that, I had to learn a fair bit of programming. I had the distinct impression of being thrown into the deep end and told to figure out how to swim.

I noticed that there's a certain understanding that comes from actually applying a concept that I just didn't get from reading the book and working through the examples. I eventually picked up the habit of looking for a nontrivial application whenever I came across new programming concepts. Oftentimes the application I had in mind would require filling in gaps in my knowledge. Throughout I'd use a lot of things I didn't really understand (by modifying existing code) to get things done, and then when I'd come across that part of the theory everything would just seem to click.

I figure that most programming techniques were invented to deal with specific problems, and understanding those problems gives a lot of intuition about the techniques.

Another thing I hate: when people say, "you just have to practice". I've asked people, "how can I get good at X?" and they've responded, "you just have to practice". And they say it with that condescending sophisticated cynicism. And even after I prod, they remain firm in their affirmation that you "just have to practice". It feels to me like they're saying, "I'm sorry, there's no way to efficiently traverse the graph. It's all the same. You just have to keep practicing."

As Euclid once said, "there's no royal road to geometry". Practice means making mistakes, figuring out your misconceptions and gaps in understanding, then grappling with the material until it finally makes sense.