Comment author: NancyLebovitz 09 September 2012 03:27:34PM *  5 points [-]

My assumption was that people who can't seem to learn to program can't get to the gut-level belief that computers don't use natural language-- computers require types of precision that people don't need.

However, this is only a guess. Would anyone with teaching experience care to post about where the roadblocks seem to be?

Also, does the proportion of people who can't learn to program seem to be dropping?

On the other hand, I did the JavaScript tutorial at Codecademy, and it was fun of a very annoying sort-- enough fun that I was disappointed that there only seemed to be a small amount of it.

However, I didn't seem to be able to focus enough on the examples until I took out the extra lines and curly parentheses-- I was literally losing track of what I was doing as I went from one distant line to another. If I pursue this, I might need to get used to the white space-- I'm sure it's valuable for keeping track of the sections of a program.

My working memory isn't horrendously bad-- I can reliably play dual 3-back, and am occasionally getting to 4-back.

If there are sensory issues making programming difficult for a particular person, this might be hard to distinguish from a general inability.

Comment author: datadataeverywhere 12 September 2012 01:04:02PM 8 points [-]

I've taught courses at various levels, and in introductory courses (where there's no guarantee anyone has seen source code of any form before), I've been again and again horrified by students months into the course who "tell" the computer to do something. For instance, in a C program, they might write a comment to the computer instructing it to remember the value of a variable and print it if it changed. "Wishful" programming, as it were.

In fact, I might describe that as the key difference between the people who clearly would never take another programming course, and those that might---wishful thinking. Some never understood their own code and seemed to write it like monkeys armed with a binary classifier (the compiler & runtime, either running their program, or crashing) banging out Shakespeare. These typically never had a clear idea about what "program state" was; instead of seeing their program as data evolving over time, they saw it as a bunch of characters on the screen, and maybe if the right incantations were put on the screen, the right things would happen when they said Go.

Common errors in this category include:

* Infinite loops, because "the loop will obviously be done when it has the value I want".
* Uninitialized variables, because "it's obvious what I'm computing, and that you start at X".
* Calling functions that don't exist, because "well, it ought to".
* NOT calling functions, because "the function is named PrintWhenDone; it should automatically print when the program is done".
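A hedged sketch of two of these errors in Python (the variable and function names are invented for illustration, not taken from any student's code): the first shows the initialization the wishful version omits, the second shows that a suggestively named function still has to be called.

```python
# Uninitialized variable: the wishful version assumes "you obviously start at 0"
# and omits the next line, crashing with a NameError.
total = 0
for n in [1, 2, 3]:
    total += n

# NOT calling functions: a name like print_when_done does nothing by itself.
def print_when_done(value):
    return "done: " + str(value)

result = print_when_done(total)  # the call has to be written explicitly
```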

These errors would crop up among a minority of students right up until the class was over. They could be well described by a gut-level belief that computers use natural language; but this only covers 2-6% of students in these courses*, whereas my experience is that less than 50% of students who go into a Computer Science major actually graduate with a Computer Science degree; so I think this is only a small part of what keeps people from programming.

*In three courses, with a roughly 50-person class, there were always 1-3 of these students; I suspect the median is therefore somewhere between 2 and 6%, but perhaps wildly different at another institution and far higher in the general population.

Comment author: datadataeverywhere 12 September 2012 12:31:00PM *  2 points [-]

I've taught C, Java and Python at a university and (a little) at the high school level. I have noticed two simple things that people either surmount or get stuck on. The first is quite basic: the ability to keep a formal system in mind at all; see the famous Dehnadi and Bornat paper. The second I have heard less about: in programming, it's the idea of scope.

The idea of scope in almost all modern programming languages goes like this:

* A scope starts at some time (some place in the code), and ends somewhere later.
* A scope can start before another ends; if so, it has to end before the "outer" scope does.
* Inside a scope, objects can be created and manipulated, generally even if another scope has started.
* Unless something special is done, objects no longer exist after their scope ends.
* Pivotally (this seems to be the hardest part), objects can be created with one name in an outer scope and be referred to by a different name in an inner scope. Inner scopes can likewise create and manipulate objects with the same names as objects in an outer scope without affecting the objects in that outer scope.
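A minimal Python sketch of the renaming point, assuming function scopes (the names x, y, and inner are illustrative):

```python
x = 10                 # outer scope: the object is named "x"

def inner(y):          # inner scope: the same object arrives under the name "y"
    x = 99             # a brand-new inner "x"; the outer "x" is untouched
    return y + x       # 10 + 99

result = inner(x)      # the outer x is passed in and seen inside as y
# result is 109; the outer x is still 10
```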

It's really hard for me to think of an analogous skill in the real world to keeping track of N levels of renaming (which may be why it gives students such difficulty?). The closest I can think of is function composition; if you don't have to pick your way through symbolically integrating a composed function where the variables names don't match, I have pretty high confidence that you can manage nested scopes.

EDIT: There are two other, well-known problems: recursion and pointers. I've heard stories about students who were okay for a year or two of programming courses but never "got" recursion or never understood pointers, and had to change majors. I've seen students have enormous difficulty with both; in fact, I've passed students who never figured one or the other out but managed to grind through my course anyway. I don't know whether they dropped out, figured it out as their classes got harder, or just kept faking it (I had team members through grad school who couldn't handle more than basic recursion). I'm not inclined to classify either as "programming gear" that they didn't have, but I don't have data to back that up.
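For what it's worth, the recursion stumbling block usually shows up on something as small as this (a textbook factorial, not anything from the courses in question):

```python
def factorial(n):
    if n == 0:                       # base case: without it the calls never stop
        return 1
    return n * factorial(n - 1)      # each call handles a strictly smaller n
```

The hard part for struggling students is rarely the syntax; it's trusting that the recursive call returns the right answer for the smaller problem.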

Comment author: DanArmak 08 September 2012 09:06:04PM *  4 points [-]

I think you should give a more precise definition of the aptitude needed to be labelled has-a-gear.

I program for a living, and I would like to think that I fall among "those who can" on the bimodal distribution (if one exists). I've seen programmers and non-programmers of all levels of ability (except for far above mine, because those are hard to recognize). One man's programmer is another man's deadweight.

Individual people grow in talent until they stop (and maybe they resume later). So if there exists a test to predict whether they'll stop at some future level, it probably doesn't involve actual programming. (For instance, testing people's understanding of variable semantics is pointless unless you've taught them those semantics first.) It would have to test something else that happens to be strongly correlated with it.

Incidentally, this was recently discussed on Programmers Stack Exchange:

Comment author: datadataeverywhere 12 September 2012 11:58:42AM 0 points [-]

For the record, I think programming is so measurable and has such a tight feedback loop that it is one arena in which it's relatively easy to recognize ability that far exceeds your own.

1) Code quality is fairly subjective, and in particular novice (very novice) programmers have difficulty rating code. Most professional programmers seem to be able to recognize it though, and feel awe when they come across beautiful code.

2) Code quantity can be misleading, but if you're on a team and producing a 100-line delta a day, you will notice the odd team member producing 1000-line daily deltas; coupled with even decent ability to tell whether or not that code is maintainable and efficient (in terms of functionality / loc), this is a strong indicator.

3) Actually watching a master write code is fantastic and intimidating. I've seen people code at 60 wpm without even stopping to consider their algorithms, data structures or APIs, yet manage at the end of an hour to have a tight, unit-tested, efficient and readable module.

I can think of five people that I know that I would classify as being in discrete levels above me (that is, each of them is distinguishable by me as being either better or worse than the others). I think there are some gaps in there; Jeff Dean is so mindbogglingly skilled that I can't see anyone on my list ever catching up to him, so there are probably a few levels I don't have examples for.

Comment author: JenniferRM 17 August 2012 10:32:56AM *  13 points [-]

I copy and pasted the "Time To AI" chart and did some simple graphic manipulations to make the vertical and horizontal axis equal, extend the X-axis, and draw diagonal lines "down and to the right" to show which points predicted which dates. It was an even more interesting graphic that way!

It sort of looked like four or five gaussians representing four or five distinct theories were on display. All the early predictions (I assume that first one is Turing himself) go with a sort of "robots by 2000" prediction scheme that seems consistent with The Jetsons and what might have happened without "the great stagnation". All of the espousers of this theory published before the AI winter, and you can see a gap in predictions being made on the subject from about 1978 to about 1994. Predicting AGI arrival in 2006 was never trendy; it seems to have always been predicted earlier or later.

The region from 2015 through 2063 has either one or two groups betting on it because instead of "gaussian-ish" it is strongly weighted towards the front end, suggesting perhaps a bimodal group that isn't easy to break into two definite groups. One hump sometimes predicts dates as late as the 2050s, but the main group really likes the 2020s and 2030s. The first person to express anything like this theory was an expert in about 1979 (before the AI winter really set in, which is interesting), and I'm not sure who it was off the top of my head. There's a massive horde expressing this general theory, but they seem to have come in a wave of non-experts during the dotcom bubble (predicting early-ish); then there's a gap in the aftermath of the bubble, then a wave of experts predicting a bit later.

Like 2006, the year 2072 is not very trendy for AGI predictions. However, around 2080 to 2110 there seems to be a cluster that was led by three non-expert opinions expressed in 1999 to 2003 (i.e. the dotcom bubble aftermath). A few years later, five experts chimed in to affirm the theory. I don't recognize the theory by name or rhetoric, but my rough label for it might be "the singularity is late", just based on the sparse data.

The final coherent theory seems to be four people predicting "2200"; my guess here is just that it's really far in the future and a nice round number. Of the four, two are experts and two are non-experts. It looks like two pre-bubble and two post-bubble?

For what it's worth, eyeballing my re-worked "Time to AI" figure indicates a median of about 2035, and my last moderately thoughtful calculation gave a median arrival of AGI at about 2037, with later arrivals being more likely to be "better" and, in the meantime, prevention of major wars or arms races being potentially more important to work on than AGI issues. The proximity of these dates to the year 2038 is pure ironic gravy, though I have always sort of suspected that one chunk of probability mass should take the singularity seriously because if it happens then it will be enormously important, while another chunk of probability mass should be methodologically mindful of the memetic similarities between the Y2K Bug and the Singularity (i.e. both of them being non-supernatural computer-based eschatologies which, whatever their ultimate truth status, would naturally propagate in roughly similar ways before the fact was settled).

Comment author: datadataeverywhere 12 September 2012 10:41:14AM 2 points [-]

How many degrees of freedom does your "composition of N theories" theory have? I'm not inclined to guess, since I don't know how you went about this. I just want to point out that 260 is not many data points; clustering is very likely going to give highly non-reproducible results unless you're very careful.

Presentation on Learning

3 datadataeverywhere 17 November 2011 05:30PM

In order to do a better job putting together my thoughts and knowledge on the subject, I precommitted myself to giving a presentation on learning. My specific goal for the presentation is to inform audience members about how humans actually learn and teach them how to leverage this knowledge to efficiently learn and maintain factual and procedural knowledge and create desired habits.

I will be focusing a little on background neuroscience, borrowing especially from A Crash Course in the Neuroscience of Human Motivation. I will heavily discuss spaced repetition, and I will also talk about the relevance of System 1 and System 2 thinking. I will not be talking about research, or about how to discover what to learn; for the purposes of my presentation, people already know what they want or need to learn, and have a fairly accurate picture of what that knowledge or those behaviors look like.

Given that I will only have an hour to speak, I will be unable to explore everything I might like to in depth. Less Wrong (both the site and the community) are my most valuable resource here, so I am asking two things:

  1. In one hour, what would you cover if you earnestly wanted to improve people's ability to learn?
  2. What background material do I need to ensure fluency with? This should be material that I need to have adequate familiarity with or else risk presenting an error, even if I don't need to present the material itself in any depth.
The audience will be students and faculty in a Computer Science department. In decreasing order of number of members, the audience will be Masters students, seniors, Ph.D. candidates, and professors; there will be no juniors or lower-level undergraduates, so I will probably use computing analogies that wouldn't make sense in other contexts. Because of the audience, I'm also comfortable giving a fairly information-dense presentation, but since I intend to persuade as well as inform, the presentation will not be a report.

 

Comment author: jhuffman 21 October 2011 01:29:03PM 3 points [-]

People say they hated it at first, but over time, grew to love it. One must be trained to like it.

This can raise a warning flag, but I've experienced it myself with coffee and some other foods. It didn't take any training for me, but a lot of people who like beer don't like bitter, hoppy beers like IPAs without some training -- and while pretentious beer snobs are annoying and amusing on several levels, I can't quite doubt them when I have the same preferences.

Comment author: datadataeverywhere 22 October 2011 12:22:26PM 1 point [-]

I agree (have had the same experience), although I argue that mustard, sauerkraut or other bitter/sour foods are better examples than coffee or beer, simply because drugs change the way we process surrounding stimuli.

Comment author: Daniel_Burfoot 22 October 2011 01:43:48AM 21 points [-]

Phil, I'll remind you of your own comment:

Incommensurate thoughts: People with different life-experiences are literally incapable of understanding each other...

Analogy: Take some problem domain in which each data point is a 500-dimensional vector. Take a big set of 500D vectors and apply PCA to them to get a new reduced space of 25 dimensions. Store all data in the 25D space, and operate on it in that space.

Two programs exposed to different sets of 500D vectors, which differ in a biased way, will construct different basic vectors during PCA, and so will reduce all vectors in the future into a different 25D space.

In just this way, two people with life experiences that differ in a biased way (due to eg socioeconomic status, country of birth, culture) will construct different underlying compression schemes. You can give them each a text with the same words in it, but the representations that each constructs internally are incommensurate; they exist in different spaces, which introduce different errors.
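A small NumPy sketch of the analogy, with the dimensions shrunk (5 in, 1 out instead of 500 and 25) so it runs instantly; the observer names and the axis biases are invented for illustration. Two "observers" fit PCA to samples biased along different axes and end up with nearly orthogonal compression schemes:

```python
import numpy as np

rng = np.random.default_rng(0)
# Each observer's "life experience": samples with high variance along a
# different axis (axis 0 vs. axis 1).
observer_a = rng.normal(size=(100, 5)) * [5, 1, 1, 1, 1]
observer_b = rng.normal(size=(100, 5)) * [1, 5, 1, 1, 1]

def top_component(data):
    """First principal direction of the data (PCA via SVD of centered data)."""
    centered = data - data.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[0]

# The two learned "compression schemes" are nearly orthogonal: the same new
# data point would be represented very differently by each observer.
similarity = abs(top_component(observer_a) @ top_component(observer_b))
```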

It seems entirely plausible that a person's appreciation of a piece of music depends strongly on all the music to which she's previously been exposed. Two different observers with different music-histories may have very different internal representations of the same piece of new music. A given piece of music may be well-formed or high quality in one representation, but not another.

Comment author: datadataeverywhere 22 October 2011 12:15:53PM 4 points [-]

This also goes some distance to explaining (in an alternate fashion) why repeated exposure to a piece of art increases appreciation for it. Assuming appreciation of the piece really relies on listeners' exposure to related music, extended exposure forces people to have increasingly similar backgrounds.

Comment author: Hyena 16 October 2011 09:07:22PM 3 points [-]

This is perfectly well true, but I'm not interested in addressing this because I have never known this to be anyone's sufficient objection to eating meat.

Would you eat a well-treated chicken? How about a deer instantly killed by a Predator drone equipped to vaporize its brain faster than neurons react?

Comment author: datadataeverywhere 17 October 2011 05:23:33PM 5 points [-]

Torture (not murder) is my stated objection to eating meat.

Comment author: scientism 14 October 2011 05:31:12PM 1 point [-]

I don't want to die but I'm OK with other people dying. In most cases, to put it bluntly, I don't think it is a significant loss (although it might be a personal loss to me). There are some people in the world I'd fight very strongly to see them remain alive for as long as possible, even if they were reluctant (I speak here not of friends and family but of valuable contributors). But I've never understood the desire to save every life. It seems obvious to me that only a few people are here for the Life's Great Adventure and most are killing time until they kick the bucket. I take some issue with that (I think they're falling short of the Good) but it's not a problem that would be fixed by convincing them to change their attitudes towards death (the problem is their attitude towards life). The reason I want to live indefinitely is straightforward: I have some really longterm goals.

Comment author: datadataeverywhere 14 October 2011 08:27:27PM 1 point [-]

Funny. I feel the opposite way: I'm okay with dying, but don't want other people to die.

While I do tend toward suicidal thoughts, even when I'm feeling pretty great the idea of my life continuing is at best of low value. I would hate to die because I know it would hurt lots of people that I'm close to, and I'm also averse to the pain of the process of dying, but nonexistence is generally an attractive concept to me. If I could get away with dying in a manner that didn't hurt me or others, I probably would.

On the other hand, I would be and have been very pained at the death of others, or even at the thought of them dying. I would react very selfishly to keep people close to me from dying, and attempt to extend that near-mode behavior to far-mode action as well.

Comment author: dreeves 10 October 2011 09:55:40PM *  7 points [-]

Isn't it better if we blow the money on cocaine and hookers, to maximize the pain of giving it to us? :) (Seriously though, this is highly valuable feedback; really appreciate it!)

StickK.com originally envisioned being the beneficiary of people's commitment contracts but found that people would not go for that. That should certainly give us pause, but here's why we think it could make more sense in our case:

  1. The exponential fee schedule [http://beeminder.com/money] makes a big difference. In addition to removing the difficult choice about how much to risk, it makes it feel more reasonable for Beeminder to be the beneficiary. You're starting with a small amount at risk after you've already gotten value out of Beeminder. (That could change if you climb up the fee schedule very far though so we need to keep thinking about options for specifying other beneficiaries.)

  2. I think we're fundamentally providing more value than StickK because of the pretty graphs and storing your data.
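The exponential schedule mentioned in point 1 might look something like this sketch (the base amount and growth factor here are invented placeholders, not Beeminder's actual numbers):

```python
# A hypothetical exponential pledge schedule: each derailment multiplies the
# amount at risk, so the first pledge is small and later ones escalate fast.
def pledge(level, base=5, factor=3):
    return base * factor ** level

schedule = [pledge(i) for i in range(5)]   # 5, 15, 45, 135, 405
```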

As for where the money is going, well, it's still on the early side to say much about that, as you can see from these dogfood graphs:

http://beeminder.com/meta/atrisk http://beeminder.com/meta/paid

We'd love to hear more thoughts on this, like are we in fantasyland with the above rationalizations for being the beneficiary?

Comment author: datadataeverywhere 13 October 2011 09:27:06PM 2 points [-]

I'd like to weigh in on this, agreeing with pjeby. I joined beeminder, am enjoying it and expect it to be of great use to me. I don't care even a little where the money goes. The amount is a penalty to me, and I like the way it is automatically set. If the money allows you to focus more on improving beeminder, that's great. If it ends up making you rich, that's just evidence you're providing a valuable service.
