Comment author: datadataeverywhere 12 September 2012 12:31:00PM *  2 points [-]

I've taught C, Java and Python at a university and (a little) at the high school level. I have noticed two simple things that people either surmount or get stuck on. The first seems to be the basic ability to keep a formal system in mind; see the famous Dehnadi and Bornat paper. The second I have heard less about: in programming, it's the idea of scope.

The idea of scope in almost all modern programming languages goes like this:

* A scope starts at some time (some place in the code), and ends somewhere later.
* A scope can start before another ends; if so, it has to end before the "outer" scope does.
* Inside a scope, objects can be created and manipulated, generally even if another scope has started.
* Unless something special is done, objects no longer exist after their scope ends.
* Pivotally (this seems to be the hardest part), an object can be created with one name in an outer scope and be referred to by a different name in an inner scope. Inner scopes can likewise create and manipulate objects with the same names as objects in an outer scope without affecting the objects in that outer scope.
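A minimal sketch of that last rule, in Python (one of the languages mentioned above; since Python scopes are function-level, nested functions stand in for nested blocks):

```python
def outer():
    x = "outer"            # object bound to the name x in the outer scope

    def inner():
        x = "inner"        # a new, unrelated binding with the same name
        return x

    assert inner() == "inner"
    return x               # the outer x is unaffected by inner's assignment

print(outer())             # prints "outer"
```

The shadowing in `inner` is exactly the behavior students stumble over: two objects, one name, and which object the name means depends on where you are standing.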

It's really hard for me to think of an analogous skill in the real world to keeping track of N levels of renaming (which may be why it gives students such difficulty?). The closest I can think of is function composition; if you can pick your way through symbolically integrating a composed function where the variable names don't match, I have pretty high confidence that you can manage nested scopes.
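For concreteness, the renaming involved in integrating a composed function is the familiar u-substitution, where the inner expression gets a new name in an "inner" context and the answer is translated back out:

```latex
\int 2x\cos(x^2)\,dx
  \;=\; \int \cos(u)\,du \qquad (u = x^2,\ du = 2x\,dx)
  \;=\; \sin(u) + C
  \;=\; \sin(x^2) + C.
```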

EDIT: There are two other, well-known problems: recursion and pointers. I've heard stories about students who were okay for a year or two of programming courses, but never "got" recursion or never understood pointers, and had to change majors. I've seen students have enormous difficulty with both; in fact, I've passed students who never figured one or the other out, but managed to grind through my course anyway. I don't know whether they dropped out or figured it out as their classes got harder---or just kept faking it (I had team members through grad school who couldn't handle more than basic recursion). I'm not inclined to classify either as "programming gear" that they didn't have, but I don't have data to back that up.

Comment author: DanArmak 08 September 2012 09:06:04PM *  4 points [-]

I think you should give a more precise definition of the aptitude needed to be labeled as having the gear.

I program for a living, and I would like to think that I fall among "those who can" on the bimodal distribution (if one exists). I've seen programmers and non-programmers of all levels of ability (except for far above mine, because those are hard to recognize). One man's programmer is another man's deadweight.

Individual people grow in talent until they stop (and maybe they resume later). So if there exists a test to predict whether they'll stop at some future level, it probably doesn't involve actual programming. (For instance, testing people's understanding of variable semantics is pointless unless you've taught them those semantics first.) It would have to test something else that happens to be strongly correlated with it.

Incidentally, this was recently discussed on Programmers Stack Exchange:

Comment author: datadataeverywhere 12 September 2012 11:58:42AM 0 points [-]

For the record, I think programming is so measurable and has such a tight feedback loop that it is one arena in which it's relatively easy to recognize ability that far exceeds your own.

1) Code quality is fairly subjective, and in particular novice (very novice) programmers have difficulty rating code. Most professional programmers seem to be able to recognize quality, though, and feel awe when they come across beautiful code.

2) Code quantity can be misleading, but if you're on a team and producing a 100-line delta a day, you will notice the odd team member producing 1000-line daily deltas; coupled with even decent ability to tell whether or not that code is maintainable and efficient (in terms of functionality / loc), this is a strong indicator.

3) Actually watching a master write code is fantastic and intimidating. Some people code at 60 wpm without even stopping to consider their algorithms, data structures or APIs, yet manage at the end of an hour to have a tight, unit-tested, efficient and readable module.

I can think of five people that I know that I would classify as being in discrete levels above me (that is, each of them is distinguishable by me as being either better or worse than the others). I think there are some gaps in there; Jeff Dean is so mindbogglingly skilled that I can't see anyone on my list ever catching up to him, so there are probably a few levels I don't have examples for.

Comment author: NancyLebovitz 11 September 2012 10:54:46AM 1 point [-]

One more behavior: I took a survey (which I can't find again) about hugging from behind, and everyone who answered hated it, except for a few who had a short list of people who they permitted it from.

I didn't have a random or especially large sample, but the unanimity was striking.

Comment author: datadataeverywhere 12 September 2012 11:19:16AM 3 points [-]

I like being hugged from behind...by a very small number of people. From everyone else, it's quite unwanted.

This has had an interesting effect; if someone hugs me from behind, I unconsciously either put them in a bucket of people that I like a great deal, or make myself uncomfortable by telling them "don't do that". There's an odd bit of wiggle room in there, where someone might make me like them more by doing something somewhat uncomfortable to me. If this happened more often, I would take more care to address this particular bias; I also suspect there are subtler variants that I haven't recognized (I only just realized the above while reflecting on your post).

Comment author: JenniferRM 17 August 2012 10:32:56AM *  13 points [-]

I copy and pasted the "Time To AI" chart and did some simple graphic manipulations to make the vertical and horizontal axes equal, extend the X-axis, and draw diagonal lines "down and to the right" to show which points predicted which dates. It was an even more interesting graphic that way!

It sort of looked like four or five gaussians, representing four or five distinct theories, were on display. All the early predictions (I assume that first one is Turing himself) go with a sort of "robots by 2000" prediction scheme that seems consistent with the Jetsons and what might have happened without "the great stagnation". All of the espousers of this theory published before the AI winter, and you can see a gap in predictions being made on the subject from about 1978 to about 1994. Predicting AGI arrival in 2006 was never trendy; it seems to have always been predicted earlier or later.

The region from 2015 through 2063 has either one or two groups betting on it, because instead of "gaussian-ish" it is strongly weighted toward the front end, suggesting perhaps a bimodal group that isn't easy to break into two definite groups. One hump sometimes predicts dates out as late as the 2050's, but the main group really likes the 2020's and 2030's. The first person to express anything like this theory was an expert in about 1979 (before the AI winter really set in, which is interesting), and I'm not sure who it was off the top of my head. There's a massive horde expressing this general theory, but they seem to have come in a wave of non-experts during the dotcom bubble (predicting early-ish); then there's a gap in the aftermath of the bubble, then a wave of experts predicting a bit later.

Like 2006, the year 2072 is not very trendy for AGI predictions. However, around 2080 to 2110 there seems to be a cluster that was led by three non-expert opinions expressed in 1999 to 2003 (i.e. the dotcom bubble aftermath). A few years later, five experts chime in to affirm the theory. I don't recognize the theory by name or rhetoric, but my rough label for it might be "the singularity is late", just based on the sparse data.

The final coherent theory seems to be four people predicting "2200"; my guess here is just that it's really far in the future and a nice round number. Of the four, two are experts and two are non-experts. It looks like two pre-bubble and two post-bubble?

For what it's worth, eyeballing my re-worked "Time to AI" figure indicates a median of about 2035, and my last moderately thoughtful calculation gave a median arrival of AGI at about 2037, with later arrivals being more likely to be "better" and, in the meantime, prevention of major wars or arms races being potentially more important to work on than AGI issues. The proximity of these dates to the year 2038 is pure ironic gravy, though I have always sort of suspected that one chunk of probability mass should take the singularity seriously because if it happens then it will be enormously important, while another chunk of probability mass should be methodologically mindful of the memetic similarities between the Y2K Bug and the Singularity (i.e. both of them being non-supernatural computer-based eschatologies which, whatever their ultimate truth status, would naturally propagate in roughly similar ways before the fact was settled).

Comment author: datadataeverywhere 12 September 2012 10:41:14AM 2 points [-]

How many degrees of freedom does your "composition of N theories" theory have? I'm not inclined to guess, since I don't know how you went about this. I just want to point out that 260 is not many data points; clustering is very likely going to give highly non-reproducible results unless you're very careful.
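As a hypothetical illustration of that reproducibility worry (this is not the parent comment's actual analysis or data), k-means on ~260 synthetic one-dimensional "prediction year" points can settle on different clusterings depending on initialization:

```python
# Illustrative only: invented data, not the "Time to AI" chart itself.
import random

def kmeans_1d(points, k, seed, iters=25):
    rng = random.Random(seed)
    centers = rng.sample(points, k)          # random initialization
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centers[i]))
            groups[nearest].append(p)
        centers = [sum(g) / len(g) if g else centers[i]
                   for i, g in enumerate(groups)]
    return sorted(centers)

rng = random.Random(0)
points = [rng.gauss(2035, 40) for _ in range(260)]  # one broad blob
print(kmeans_1d(points, 4, seed=1))
print(kmeans_1d(points, 4, seed=2))  # may land on different centers
```

With 260 points and a free choice of k, the "clusters" found can be as much a property of the fitting procedure as of the data, which is the reproducibility concern above.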

Comment author: jhuffman 21 October 2011 01:29:03PM 3 points [-]

People say they hated it at first, but over time, grew to love it. One must be trained to like it.

This can raise a warning flag, but I've experienced this myself with coffee and some other foods. It didn't take any training for me, but a lot of people who like beer don't like the bitter, hoppy beers like IPAs without some training - and while pretentious beer snobs are annoying and amusing on several levels, I can't quite doubt them when I have the same preferences.

Comment author: datadataeverywhere 22 October 2011 12:22:26PM 1 point [-]

I agree (have had the same experience), although I argue that mustard, sauerkraut or other bitter/sour foods are better examples than coffee or beer, simply because drugs change the way we process surrounding stimuli.

Comment author: Daniel_Burfoot 22 October 2011 01:43:48AM 21 points [-]

Phil, I'll remind you of your own comment:

Incommensurate thoughts: People with different life-experiences are literally incapable of understanding each other...

Analogy: Take some problem domain in which each data point is a 500-dimensional vector. Take a big set of 500D vectors and apply PCA to them to get a new reduced space of 25 dimensions. Store all data in the 25D space, and operate on it in that space.

Two programs exposed to different sets of 500D vectors, which differ in a biased way, will construct different basic vectors during PCA, and so will reduce all vectors in the future into a different 25D space.

In just this way, two people with life experiences that differ in a biased way (due to eg socioeconomic status, country of birth, culture) will construct different underlying compression schemes. You can give them each a text with the same words in it, but the representations that each constructs internally are incommensurate; they exist in different spaces, which introduce different errors.
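The analogy can be sketched in a few lines of Python (dimensions shrunk from 500→25 down to 6→2 for brevity; the biased datasets and all numbers here are invented for illustration):

```python
# Two observers fit PCA on differently-biased samples, and so project
# the same new vector into incommensurate low-dimensional spaces.
import numpy as np

def pca_basis(data, k):
    centered = data - data.mean(axis=0)
    # Rows of vt are principal directions; keep the top k.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[:k]

rng = np.random.default_rng(0)
bias_a = rng.normal(0, 1, (200, 6)) * [3, 1, 1, 1, 1, 1]  # A's experiences
bias_b = rng.normal(0, 1, (200, 6)) * [1, 1, 1, 1, 1, 3]  # B's experiences

basis_a = pca_basis(bias_a, 2)
basis_b = pca_basis(bias_b, 2)

x = rng.normal(0, 1, 6)     # the "same text" shown to both observers
print(basis_a @ x)          # A's internal representation
print(basis_b @ x)          # B's representation, in a different 2D space
```

The two 2D representations of `x` can't be compared coordinate-by-coordinate: each lives in a basis shaped by its observer's history, which is the incommensurability being claimed.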

It seems entirely plausible that a person's appreciation of a piece of music depends strongly on all the music to which she's previously been exposed. Two different observers with different music-histories may have very different internal representations of the same piece of new music. A given piece of music may be well-formed or high quality in one representation, but not another.

Comment author: datadataeverywhere 22 October 2011 12:15:53PM 4 points [-]

This also goes some distance toward explaining (in an alternate fashion) why repeated exposure to an artwork increases appreciation for it. Assuming appreciation of the piece really relies on exposure to related music, extended exposure forces people to have increasingly similar backgrounds.

Comment author: Hyena 16 October 2011 09:07:22PM 3 points [-]

This is perfectly true, but I'm not interested in addressing it, because I have never known it to be anyone's sufficient objection to eating meat.

Would you eat a well-treated chicken? How about a deer instantly killed by a Predator drone equipped to vaporize its brain faster than neurons react?

Comment author: datadataeverywhere 17 October 2011 05:23:33PM 5 points [-]

Torture (not murder) is my stated objection to eating meat.

Comment author: scientism 14 October 2011 05:31:12PM 1 point [-]

I don't want to die but I'm OK with other people dying. In most cases, to put it bluntly, I don't think it is a significant loss (although it might be a personal loss to me). There are some people in the world I'd fight very hard to keep alive for as long as possible, even if they were reluctant (I speak here not of friends and family but of valuable contributors). But I've never understood the desire to save every life. It seems obvious to me that only a few people are here for Life's Great Adventure and most are killing time until they kick the bucket. I take some issue with that (I think they're falling short of the Good) but it's not a problem that would be fixed by convincing them to change their attitudes towards death (the problem is their attitude towards life). The reason I want to live indefinitely is straightforward: I have some really long-term goals.

Comment author: datadataeverywhere 14 October 2011 08:27:27PM 1 point [-]

Funny. I feel the opposite way: I'm okay with dying, but don't want other people to die.

While I do tend toward suicidal thoughts, even when I'm feeling pretty great the idea of my life continuing is at best of low value. I would hate to die because I know it would hurt lots of people that I'm close to, and I'm also averse to the pain of the process of dying, but nonexistence is generally an attractive concept to me. If I could get away with dying in a manner that didn't hurt me or others, I probably would.

On the other hand, I would be and have been very pained at the death of others, or even at the thought of them dying. I would react very selfishly to keep people close to me from dying, and attempt to extend that near-mode behavior to far-mode action as well.

Comment author: dreeves 10 October 2011 09:55:40PM *  7 points [-]

Isn't it better if we blow the money on cocaine and hookers, to maximize the pain of giving it to us? :) (Seriously though, this is highly valuable feedback; really appreciate it!)

StickK.com originally envisioned being the beneficiary of people's commitment contracts but found that people would not go for that. That should certainly give us pause, but here's why we think it could make more sense in our case:

  1. The exponential fee schedule [http://beeminder.com/money] makes a big difference. In addition to removing the difficult choice about how much to risk, it makes it feel more reasonable for Beeminder to be the beneficiary. You're starting with a small amount at risk after you've already gotten value out of Beeminder. (That could change if you climb up the fee schedule very far though so we need to keep thinking about options for specifying other beneficiaries.)

  2. I think we're fundamentally providing more value than StickK because of the pretty graphs and storing your data.
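The fee-schedule idea in point 1 can be sketched numerically; the amounts and growth factor below are illustrative assumptions, not necessarily Beeminder's actual numbers (see the linked page for those):

```python
# Hypothetical exponential fee schedule: each derailment raises the
# amount at risk by a constant factor, starting from a free attempt.
def fee_schedule(derailments, base=5, factor=3):
    if derailments == 0:
        return 0                       # nothing at risk at first
    return base * factor ** (derailments - 1)

print([fee_schedule(n) for n in range(6)])  # [0, 5, 15, 45, 135, 405]
```

The shape is the point: the first amounts at risk are trivial, and the sums only get painful for someone who keeps derailing, which is what makes the company-as-beneficiary arrangement feel more reasonable early on.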

As for where the money is going, well, it's still on the early side to say much about that, as you can see from these dogfood graphs:

http://beeminder.com/meta/atrisk http://beeminder.com/meta/paid

We'd love to hear more thoughts on this, like are we in fantasyland with the above rationalizations for being the beneficiary?

Comment author: datadataeverywhere 13 October 2011 09:27:06PM 2 points [-]

I'd like to weigh in on this, agreeing with pjeby. I joined beeminder, am enjoying it and expect it to be of great use to me. I don't care even a little where the money goes. The amount is a penalty to me, and I like the way it is automatically set. If the money allows you to focus more on improving beeminder, that's great. If it ends up making you rich, that's just evidence you're providing a valuable service.

Comment author: Vladimir_Nesov 09 October 2010 09:29:50PM *  3 points [-]

O(log n) stack size is allowed (since a normal loop would also take O(log n) just to write down n), but you need to keep each stack frame constant size, not O(log n), since otherwise you get O(log^2 n) total space complexity.

Comment author: datadataeverywhere 01 October 2011 04:46:04AM 1 point [-]

I had thought the solution was very simple before you pointed this out. With some difficulty I improved my solution to O(log log n * log n), and it took quite a bit more time for me to get completely constant-sized stack frames.

I suspect most people initially come up with the O(log^2 n) solution and jump next to the O(log n) solution without getting stuck in the middle there, but I'm curious if this gave you any problems.
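The original problem isn't quoted in this thread, but the stack-frame trade-off being discussed can be illustrated with a stand-in, modular exponentiation (an analogy for the space analysis only, not the problem from the parent comments): the recursive version keeps O(log n) frames alive, while the iterative version uses a constant number of fixed-size variables.

```python
def pow_recursive(b, e, m):
    if e == 0:
        return 1
    half = pow_recursive(b, e // 2, m)   # one live frame per halving:
    sq = half * half % m                 # O(log e) recursion depth
    return sq * b % m if e % 2 else sq

def pow_iterative(b, e, m):
    result = 1
    while e:                             # constant-size working set
        if e & 1:
            result = result * b % m
        b = b * b % m
        e >>= 1
    return result

assert pow_recursive(3, 1000, 7) == pow_iterative(3, 1000, 7) == pow(3, 1000, 7)
```

Turning the recursive shape into the iterative one, while keeping each surviving variable a fixed size, is the kind of step the comments above describe as the hard part.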
