Comments

Indeed, I misinterpreted you in multiple ways. My model went something like "Jayson_Virissimo is currently working 60-80 hours a week on his start-up. Once it exceeds ramen-profitability, he intends to scale back his efforts to become a full-time student." How very foolish of me!

I certainly hope that I'm not confused about my word choice. I write compilers for a living, so I might be in trouble if I don't understand elementary terms.

In all seriousness, my use of the word "scope" was imprecise, because the phenomenon I'm describing is more general than that. I don't know of a better term, though, so I don't regret my choice. Perhaps you can help? Students I've seen who have difficulty with variable substitution seem to have difficulty with static scoping as well, and vice versa. To me they feel like different parts of the same confusion.

On a related note, I once took aside some of my students who were having great difficulty getting static scoping, and tried to teach them a bit of a dynamically-scoped LISP. I had miserable results, which is to say that I don't think the idea of dynamic scope resonated with them any more than static scope did; I was hoping maybe it would, that there were "dynamic scoping people" and "static scoping people". Maybe there are; my experiment is far from conclusive.
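For anyone reading along, the distinction itself: under static (lexical) scoping, which binding a name refers to is fixed by where the code is written; under dynamic scoping, it is whatever binding the most recent caller established at run time. Here is a rough sketch in C, which is lexically scoped; the save-and-restore of a global only emulates a dynamic variable, and all of the names are illustrative rather than taken from any course material.

    #include <stdio.h>

    int x = 1;   /* stands in for a dynamically-scoped variable */

    void report(void) {
        /* Under dynamic scoping, the x seen here would be whichever binding
         * the most recent caller set up; the global emulates that. */
        printf("report sees x = %d\n", x);
    }

    void dynamic_style_caller(void) {
        int saved = x;
        x = 2;        /* "bind" x for the duration of the call */
        report();     /* prints 2 */
        x = saved;    /* restore on the way out, like leaving a dynamic extent */
    }

    void lexical_style_caller(void) {
        int x = 2;    /* a new, lexically-scoped x; visible only in this body */
        (void)x;      /* silence the unused-variable warning */
        report();     /* still prints 1: report() sees the x from where it was written */
    }

    int main(void) {
        dynamic_style_caller();
        lexical_style_caller();
        return 0;
    }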

EDIT: Hilariously, right after I wrote this comment the newest story on Hacker News was http://news.ycombinator.com/item?id=4534408, "Actually, YOU don't understand lexical scope!". To be honest, the coincidence of the headline gave me a bit of a start.

A little. As your income increases, I expect your consumption to become more expensive in monetary terms, but as your business grows I expect the value of your time to increase and for your consumption patterns to become less expensive in terms of time. College is very expensive in terms of time.

I'm not saying this is a bad choice, but it is one that surprises me. I'm still interested in the answers to my questions. Do you intend to sell your start-up, have it run itself, or abandon it? It seems like those options cover the gamut (I might consider requiring < 40 hours a week of your time to be "running itself"; if you're quite dedicated, you could probably fit being a full-time student in even with the start-up taking 40+ hours of your time, making that an alternative option).

My post didn't indicate this, but the most common source of scope is functions; calling a function starts a new scope that ends when the function returns. Especially in this case, it does often make sense to use the same variable name:

    posterior = ApplyBayes(prior, evidence)
    ...
    function ApplyBayes(prior, evidence) { ... }

This will bind prior=prior and evidence=evidence, which is a good naming scheme. But in most languages, modifying 'evidence' inside the function won't affect the value of 'evidence' outside the scope of the function. This sometimes becomes confusing to students when the function above gets called like so:

    posterior = ApplyBayes(prior, evidence1)
    posterior = ApplyBayes(posterior, evidence2)
    posterior = ApplyBayes(posterior, evidence3)

This confuses them because their previous model relied on the names being the same, rather than treating the matching names as a merely helpful coincidence.
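To make that concrete, here is a minimal C sketch of the same behavior. The function name and its arithmetic are placeholders rather than a real Bayesian update; the point is only that the callee's 'evidence' is a separate variable from the caller's, even though they share a name.

    #include <stdio.h>

    /* Placeholder update rule: the arithmetic is not real Bayes, it just
     * gives the parameters something to do. */
    double apply_bayes(double prior, double evidence) {
        evidence = evidence * 2.0;   /* changes only the local copy */
        return prior * evidence;
    }

    int main(void) {
        double prior = 0.5, evidence = 0.3;

        double posterior = apply_bayes(prior, evidence);
        posterior = apply_bayes(posterior, 0.6);   /* chained calls reuse the name */

        printf("evidence outside the function is still %.1f\n", evidence);  /* 0.3 */
        printf("posterior = %.2f\n", posterior);
        return 0;
    }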

Overall, I would say that this is still a fertile source of errors, but in some situations the alternative is to have less readable code, which is also a fertile source of errors and makes fixing them more difficult.

...going back to school to study computer science (if my start-up succeeds before then).

That's amusing. Usually I would say the value of the founder being present is much higher for a successful company than for one that has failed. I would actually expect my freedom to pursue other avenues to diminish as my success in my current avenue grows.

Do you mean that your start-up, if successful, will pretty much run itself? Or that if it hasn't succeeded /yet/, then you will feel obligated to stay and keep working on it?

Of all my flaws, I currently consider my bias toward thought (and study, research, etc.) over action to be my greatest. I suspect that LessWrong attracts a disproportionate number of such people.

Not too long ago, I lost a week of work and was able to recompose it in the space of an afternoon. It wasn't the same line-for-line, but it was the same design and probably even used the same names for most things, and was roughly 10k LOC. So if I had recent or substantial experience, I can see expecting a 10x speedup in execution. That's pretty specific though; I don't think I have ever had the need to write something that was substantially similar to anything else I'd ever written.

Domain experience is vital, of course. If you have to spend all your time wading through header files to find out what the API is or discover the subtle bugs in your use of it, writing just a small thing will take painfully long. But even where I never have to think about these things I still pause a lot.

One thing that is different is that I make mistakes often enough that I wait for the compiler and tests to catch them; working with one of these people, I noticed that he practiced "optimistic coding": he would compile and test his code, but by feeding it into a background queue. In that particular project, a build took ~10 minutes and our test suite took another ~10 minutes. He would launch a build/test every couple of minutes, and had a dbus notification if one failed; once, it did, and he had to go back several (fewer than 10, I think) commits to fix the problem. He remembered exactly where he was, rebased, and moved on. I couldn't even keep up with him finding the bug, much less fixing it.

The people around here who have a million lines of code in production seem to have that skill, of working without the assistance of a compiler or test harness; their code works the first time. Hell, Rob Pike uses ed. He doesn't even need to refer to his code often enough to make it worthwhile to be able to easily see what he's already written (or go back and change things); for him, that counts as an abnormal occurrence.

I don't know about you, but I can't recall 10k LOC from experience even if I had written something similar before; seeing someone produce that much in the space of three hours is phenomenal, especially when I realize that I probably would have required two or three times as much code to do the same thing on my first attempt. If by "reciting from experience" you mean that they have practiced using the kinds of abstractions they employ many times before, then I agree that they're skilled because of that practice; I still don't think it's a level of mastery that I will ever attain.

In my opinion, almost all of that 50% (that drop out) could program, to some extent, if sufficiently motivated.

A great many Computer Science students (half? more than half?) love programming and hit a wall when they come to the theoretical side of computer science. Many of them force themselves through it, graduate, and become successful programmers. Many switch majors to Information Technology and, for better or for worse, will end up doing mostly system administration work in their careers. Some switch majors entirely and become engineers. I actually think we do ourselves a disservice by failing to separate Computer Science from Software Engineering, a distinction made at very few institutions and, when made, regrettably often to the detriment of the Software Engineers.

So, to answer your question: of the 50% that drop out, I think most end up as sub-par programmers, but 80% of that 50% "have programming gear", to the extent that such a thing exists.

I've taught courses at various levels, and in introductory courses (where there's no guarantee anyone has seen source code of any form before), I've been again and again horrified by students months into the course who "tell" the computer to do something. For instance, in a C program, they might write a comment to the computer instructing it to remember the value of a variable and print it if it changed. "Wishful" programming, as it were.
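To give a flavor of it, here is the kind of thing I mean; this is a reconstruction, not a verbatim student submission.

    #include <stdio.h>

    int main(void) {
        int balance = 100;

        /* The "wishful" version: a comment addressed to the computer.
         * Comments are ignored entirely, so nothing here does what it asks. */
        /* Computer: remember the value of balance, and print it if it changes. */

        balance -= 25;

        /* What actually has to be written: the check and the printing are
         * explicit work the program performs. */
        printf("balance changed to %d\n", balance);

        return 0;
    }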

In fact, I might describe that as the key difference between the people who clearly would never take another programming course and those who might: wishful thinking. Some never understood their own code and seemed to write it like monkeys armed with a binary classifier (the compiler & runtime, either running their program or crashing) banging out Shakespeare. These students typically never had a clear idea of what "program state" was; instead of seeing their program as data evolving over time, they saw it as a bunch of characters on the screen, and maybe if the right incantations were put on the screen, the right things would happen when they said Go.

Common errors in this category include:

  • Infinite loops, because "the loop will obviously be done when it has the value I want" (see the sketch after this list).
  • Uninitialized variables, because "it's obvious what I'm computing, and that you start at X".
  • Calling functions that don't exist, because, "well, it ought to".
  • NOT calling functions, because "the function is named PrintWhenDone, it should automatically print when the program is done".
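Sketches of the first two, again reconstructed rather than quoted:

    #include <stdio.h>

    int main(void) {
        /* The broken version, kept commented out so this file still compiles:
         *
         *     int total;                 // uninitialized: starts as garbage,
         *     while (total != 10) { }    // and nothing ever moves it toward 10,
         *                                // so the loop never finishes
         */

        /* What actually terminates: state that is initialized and then
         * evolves toward the exit condition. */
        int total = 0;
        int i = 1;
        while (total != 10) {
            total += i;   /* 1, 3, 6, 10 */
            i++;
        }
        printf("total = %d\n", total);
        return 0;
    }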

These errors would crop up among a minority of students right up until the class was over. They could be well described by a gut-level belief that computers understand natural language; but that only covers 2-6% of students in these courses*, whereas my experience is that fewer than 50% of students who go into a Computer Science major actually graduate with a Computer Science degree; so I think this is only a small part of what keeps people from programming.

*In three courses of roughly 50 students each, there were always 1-3 of these students; I suspect the median is therefore somewhere between 2% and 6%, but it may be wildly different at another institution and far higher in the general population.
