I'm not a programmer. I wish I were. I've tried to learn several times, in different languages, but never got very far. The most complex piece of software I ever wrote was a bulky, inefficient implementation of Conway's Game of Life.

Recently I've been exposed to the idea of a visual programming language named Subtext. The concept seemed interesting, and the potential great. In short, the assumptions and principles behind this language seem more natural and more powerful than those behind writing lines of code. For instance, a program written as lines of code is one-dimensional, and even the best of us may find it difficult to sort that out: to model the flow of instructions in our minds, to see how distant parts of the code interact, and so on. In Subtext, that structure is already more apparent, because the code itself is two-dimensional.

I don't know whether this particular project will bear fruit. But it seems to me that many more people could become interested in programming, and at least get further before giving up, if programming languages were easier to learn and use for people who lack the mindset the current paradigm demands.

It could even benefit people who are already good at it. Every programmer has a threshold above which the complexity of the code exceeds their ability to manipulate or understand it. I think languages and frameworks like this could push that threshold further, enabling the writing of more complex, yet still functional, pieces of software.

Do you know of similar projects? Also, what could be done to help turn such a project into a workable programming language? Do you see obvious flaws in this approach? If so, what could be done to fix them, or at least to salvage part of the concept?


I would like to find a programming language that makes programming easier. And I have seen dozens that claimed to have that effect. But it seems to me that at best they work like a placebo -- they make programming appear easier, which encourages students to give it a try.

Seems to me that most of these projects are based on cheap analogies. Let me explain... Imagine that you are in the business of publishing scientific literature, and someone tells you: "Why don't you publish a children's book? There is a big market out there." Problem is, you have never seen a children's book. You are aware that those books must be different from books for adults, so you do some research. You discover that successful children's books use big letters and big colorful pictures. Great! So you take Wittgenstein's Tractatus Logico-Philosophicus, triple the font size, insert pictures of cute kittens, and the book is ready. However, despite having all the signs of a successful children's book, children do not enjoy it.

Essentially, you can't remove programming from programming. You could give people a game editor and call it a programming language, and that would be a nice first step, but if you w...

3loup-vaillant12y
This alone is enough to get my upvote. I often struggle to explain that programming is a form of math (or at least, that it needs math). One typical answer goes like My suffocation and stuttering (refusing to change one's mind in the face of compelling sounding arguments tends to do that) squash any attempt at a proper rebuttal. But now, I have one:
0Dr_Manhattan12y
There is a Lightbot implementation on the iPad (under a different name). It's a nice app, but boredom sets in pretty fast (at least for my kids). What is needed is a common "building block" language for many interesting environments, teaching higher levels of abstraction.
0Viliam_Bur12y
One language, many environments -- exactly. People remember by repeating, so completing 10 levels is not enough, but completing 1000 levels in the same environment would be boring. You can practice the same concept, e.g. a while-loop, by letting a robot walk towards the wall, or cooking the cake until it is ready. You can practice a for-loop by collecting 3 apples in the garden or walking 3 blocks away on the map of the town. All you need is different environments with different sets of primitives and one editor with environment-independent commands.
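
A rough Python sketch of what that could look like, with made-up RobotWorld and Garden classes standing in for the different teaching environments (the names and primitives are invented for illustration, not taken from any real product). The loop constructs are identical; only the primitives differ:

```python
class RobotWorld:
    def __init__(self, distance_to_wall):
        self.distance_to_wall = distance_to_wall

    def at_wall(self):
        return self.distance_to_wall == 0

    def step_forward(self):
        self.distance_to_wall -= 1


class Garden:
    def __init__(self):
        self.basket = []

    def pick_apple(self):
        self.basket.append("apple")


# The same while-loop concept, "robot" environment:
world = RobotWorld(distance_to_wall=5)
while not world.at_wall():
    world.step_forward()

# The same for-loop concept, "garden" environment:
garden = Garden()
for _ in range(3):
    garden.pick_apple()

print(world.at_wall(), len(garden.basket))  # True 3
```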

I don't know whether this particular project will bear fruit.

There's a non-empty reference class of previous efforts to create visual programming languages, including e.g. Prograph, and the success rate so far is very low (Scratch is perhaps a notable exception, making inroads in the unfortunately small "teach kids to program" community.)

To be fair, Subtext looks superficially like it does have some novel ideas, and actually differs from its predecessors.

Be careful with the implied equation of "visual" with "intuitive". They don't necessarily have anything to do with each other.

ETA: I've tried downloading the current version to see if I could do something simple with it, such as a FizzBuzz implementation. No dice; the .exe won't start. Maybe the program has dependencies that are not fulfilled on my (virtual) Windows box and it fails silently, or something else is wrong. Updating on the experience, I wouldn't expect very much from this effort. It's literally a non-starter.

2John_Maxwell12y
I started a couple of my younger brothers and sisters on Scratch, and they got quite far. Now my sixth grade brother has downloaded a mod for his digital camera, and he wrote a calculator program in Lua for it. And my fifth grade sister has been teaching herself Python using this book: http://www.briggs.net.nz/snake-wrangling-for-kids.html
0Maelin12y
On a tangent (and just for my curiosity), can you explain/link an explanation of what the phrase "non-empty reference class" means? I infer from context it means that there is a non-empty set of instances, but what is the meaning of this specific 'reference class' wording?
3Morendil12y
By reference class I mean "the set of things that are like Subtext that I'd use to get my prior probability of success from, before updating on the specific merits of Subtext (or its flaws)". I've acquired the term both from previous discussions here on LW and from slightly more formal training in forecasting, specifically participating in the Good Judgment Project. There are perils of forecasting based on reference classes (more), but it can be a useful heuristic.
0[anonymous]12y
I suspect it is "Previous Similar Attempts," useful in avoiding Planning Fallacy and as fault analysis material.

One big thing about plain text (and so conventional programming languages) is speed: for a proficient and practiced user, the keyboard is a really efficient method of entering data into a computer, and even more so when using an editor like Emacs or vim or a fully-featured IDE. As soon as you start using the mouse you slow down dramatically.

Another thing about normal text is the tooling that has built up around it. Version control is the largest one: being able to narrow down exactly when bug Y was introduced ("git bisect" is pretty cool) and see exactly what changed is really useful. Currently I don't think there is anything like this for visual programming languages, and this is a requirement.

I don't think these points are necessarily impossible to address: a visual programming language could still use the keyboard for manipulation, and the structure of the code would be built into the storage format, so it would be feasible for version control to be more intelligent and useful than current tools.

Also, using visual connections can separate things too much, so the meaning isn't obvious at a glance, e.g. in this picture one has to follow the lines back to work out what's being used w...

I don't know about subtext specifically, but I've grown a bit more skeptical about the possibilities of visual programming languages over the years.

As a game developer, I sometimes had to find ways to give non-programmers control of a system - allowing a level designer to control where and when new enemies spawn, a game designer to design the attack patterns of a specific boss, a sound designer to make his music match in-game events, an FX artist to trigger specific FX at certain times ... it's not easy to do right. We programmers sometimes make things that seem obvious and simple with little graphs with arrows and dependencies, but it turns out to be a headache for someone else to wrap his head around. What seems to work best is not making a fully programmable system (even if it's nice and visual), but rather defining a narrow set of operations that make sense for the behavior needed, and giving a way of simply editing those; making something like a narrow minilanguage. And for that, simple linear text-based editing can work fine, without any graphical frills.

(Working with a visual tool works fine too, but it shouldn't become a full-blown programming language; give a level designer ...
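
To make the "narrow minilanguage" idea concrete, here is a hypothetical spawn script and the few lines of Python needed to read it; the format and field names are invented, not taken from any real engine:

```python
# Hypothetical spawn mini-language: one event per line, "time enemy_type x y".
SPAWN_SCRIPT = """
2.0  grunt  100  50
5.5  grunt  220  50
9.0  boss   320  40
"""

def parse_spawn_script(text):
    events = []
    for line in text.strip().splitlines():
        time, enemy_type, x, y = line.split()
        events.append({"time": float(time),
                       "type": enemy_type,
                       "pos": (int(x), int(y))})
    return events

for event in parse_spawn_script(SPAWN_SCRIPT):
    print(event)
```

A level designer only ever edits the script text; the narrow set of operations (spawn an enemy of a given type at a given time and place) is fixed by the parser.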

[A] program written as lines of codes is uni-dimensional, and even the best of us may find it difficult to sort out [...] how distant parts of the code interact together.

That would only be a problem if you could only refer to things by line number. When I call doTheWatoosi(), it doesn't matter much if it's the next function down or something defined in another program completely. It's the symbol names that tell us about the interaction, not the locations of stuff in the file.
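
A trivial Python illustration of that point; the function names are invented, and the camelCase name is kept only to match the comment:

```python
def throw_party():
    # This call is bound to doTheWatoosi by name, not by where the definition
    # sits in the file; it could just as well be imported from another module.
    doTheWatoosi()

def doTheWatoosi():
    print("watoosi!")

throw_party()
```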

And, the space of possible names has many many dimensions, which actually gives it quite a leg up over visual languages which have 3 at best, and probably actually only 2 if they want to have a decent user interface.

Which of course doesn't address the very real issue you raise: that text is much more opaque to beginners than visuals. But I am very skeptical of the notion that a visual programming language would be of much help to programmers who are already strong.

0dbaupp12y
You could easily have "dimensional sliders" so you can move back and forth in 4 or 5 (etc.) dimensions. Not that this would make the user interface clearer, or the programming language more intuitive.

If you're serious about learning, I suggest you take an online course from Udacity. Their 101 course is a very gentle introduction.

Registration is already open. They start tomorrow. It's free.

2James_Miller12y
http://www.codecademy.com is also great.

I am a programmer, and have been for about 20 years or so. My impressions here...

Diagrams and visual models of programs have typically been disappointing. Diagrams based on basic examples always look neat, tidy, intuitive and useful. When scaling up to a real example, the diagram often looks like the inside of a box of wires - lines going in all directions. Where the simple diagram showed simple lines drawing boxes together, the complex one has the same problem as the wiring box - you have 40 different 'ends' of the lines, and it's a tedious job to pair th...

0bogus12y
Part of this is probably due to VPLs not exposing the right abstractions--and of course, exposing an abstraction organically in a visual representation may be unfeasible. I looked at some instances of LabView programs linked in another comment, and there seemed to be a lot of repetition which would no doubt be abstracted away in a text-based language.

Do you see obvious flaws in such an approach?

I'm not sure. I think being able to model the computer's actions in your head is something of a requirement to be a good programmer. If people who use (a hypothetical completed) subtext learn to do that more rapidly, then great. If instead they learn to just barely cobble something together without really understanding what is going on, I think that would be a net negative (I don't want those people writing my bank's software). I'm not sure which is the likely outcome.

Or maybe I'm conflicted because I am a co...

If people who use (a hypothetical completed) subtext learn to do that more rapidly, then great. If instead they learn to just barely cobble something together without really understanding what is going on, I think that would be a net negative (I don't want those people writing my bank's software).

Programming languages that make programming easier are a good goal. Problem is, there are too many languages that make programming of simple programs easier, and programming of complex programs more difficult. The language is optimized for doing a specific set of tasks, and if you walk outside that set, you are damned. (Although the authors will assure you that everything can be done by their language, it's just a bit inconvenient.)

Things appealing to beginning programmers are often appealing for the wrong reasons. For example "you don't have to write semicolons after each statement" or "you don't have to declare variables". Ouch! I agree that not having to write semicolons is convenient, but at the same time I think "if having to write a semicolon after each statement is such a big deal for you, I can't imagine you as a successful programmer", because alt...

2bogus12y
Agreed. But this problem can be avoided by embedding such domain-specific languages inside a general-purpose language. Then writing simple programs (for some definition of "simple") is still fairly easy, because the DSL can be implemented with a one-time cost in complexity. However, coding complex programs is still feasible. Visual representations of programs are interesting in their own right, because they allow reasoning about some program properties in very intuitive ways (depending on the representation, this may be syntax, data flow, control flow, data representation, etc.). However, it is probably the case that there is no single "best" visual representation for programs, and thus no such thing as a one-size-fits-all "visual programming language".
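
One everyday version of this is an "internal" DSL: a thin, domain-flavored layer written inside the host language. A toy Python sketch (the Query class and its methods are invented for illustration):

```python
# Toy internal DSL: query-like filtering embedded in ordinary Python.
class Query:
    def __init__(self, rows):
        self.rows = rows

    def where(self, predicate):
        # Keep only rows for which the predicate holds; returns a new Query
        # so that calls chain.
        return Query([r for r in self.rows if predicate(r)])

    def select(self, *fields):
        # Project each remaining row down to the named fields.
        return [{f: r[f] for f in fields} for r in self.rows]

people = [{"name": "Ada", "age": 36}, {"name": "Bob", "age": 17}]

adults = Query(people).where(lambda r: r["age"] >= 18).select("name")
print(adults)  # [{'name': 'Ada'}]
```

Simple queries stay short and readable, while anything the DSL doesn't cover can fall back to ordinary Python.
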
0Viliam_Bur12y
Or by making a really convenient library for a general-purpose language. Although the language puts some limits on how convenient the library can be. But I suspect one probably makes more money selling a new programming language than selling a library.
1loup-vaillant12y
Or by making a really convenient DSL factory. The only use for your "general purpose" language would be to write DSLs. A bit extreme, but it shows some promise. Current results suggest this approach uses 3 orders of magnitude less code than current systems -- possibly even less.
1loup-vaillant12y
The effect of such visual flowering is greater than one might think, especially on beginners. Once you grok the concepts of instruction, block, and nesting, you barely see the curly brackets (or the "begin" and "end" keywords) and the semicolons. A bit like a Lisp programmer who doesn't "see" the parentheses any more. Beginners are more sensitive. The cognitive load you call trivial is probably significant to them, because they still think in ASCII instead of ASTs. In the ASCII world, a semicolon or a bracket is about as cognitively loaded as any other keyword. Indentation, not so much. Now one could see it as a test. I wonder whether the ability to think through unhelpful syntax would be a good predictor of future success?
0Viliam_Bur12y
I hate writing "begin" and "end" in Pascal, because these words take too much of the screen space, and also visually pattern-match with identifiers. I think Pascal would be 50% more legible if it replaced "begin" and "end" with curly brackets. So I guess removing the semicolons and curly brackets is also an improvement in legibility. Still, maybe the beginners are trying to move forward too fast. Maybe a lot of problems come from trying to run before one is able to walk reliably. When children learn mathematics, they have to solve dozens of "2+3=?" problems before they move to something more complex. When learning programming, students should also solve dozens of one-line or two-line problems before they move on. But there is often not enough time in the curriculum.
0Random83212y
What would you replace the semicolon with? There are a few obvious answers: One is to simply not allow multiple statements on the same visual line (even if they are closely related and idiomatic). Another is to define the semicolon (or equivalent) as a separator, with the side effect that you can no longer have a single statement split across multiple visual lines. Another is to, along with the 'separator' solution, add an additional symbol for splitting long statements across multiple visual lines - as in earlier Visual Basic. And yet another option is to have a separator and "guess" whether they meant a line break to end a statement or not - as in Javascript and modern Visual Basic.
2loup-vaillant12y
You can also mix approaches: optional semicolons, but use indentation to guess whether it's the same instruction or not. That way:

// 3 instructions
blah; blah blah

// 2 instructions
blah blah blah

// 1 instruction (indentation is significant!)
blah blah blah

// This one is tricky. I'd say syntax error, or a warning
blah; blah blah

// This looks better, more obvious to me: 2 instructions
blah; blah blah

// Alternatively, this one may also count for 2 instructions
blah; blah blah

// begin of a new block
appropriate_keyword blah blah blah

// end of a block (one instruction in the inner block, one instruction in the outer block).
blah blah

// 2 instructions (but frankly, I'd provide a warning for the superfluous ";")
blah; blah;

This should be flexible enough and unambiguous enough.
1Viliam_Bur12y
In Python, you are supposed to write a colon before you start a block, right? So the rules can be rather simple:

* colon, with indentation = start of a new block
* colon, no indentation = an empty block (or a syntax error)
* no colon, with indentation = continuation of the previous line
* no colon, no indentation = next statement
* semicolon = statement boundary

A block ends where the indentation returns to the level of the line that opened the block. A continued line ends when indentation returns to the level of the starting line. (Where "to the level" = to the level, or below the level.)
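
For comparison, standard Python already behaves close to the first and last of those rules; a minimal sketch (it does not implement the proposed no-colon continuation rule, which is not how Python actually works):

```python
# colon + indentation: start of a new block
for i in range(3):
    print(i)

# inside brackets an expression may continue onto the next line,
# with no special continuation symbol
total = sum([1, 2, 3,
             4, 5, 6])

# semicolon: statement boundary (legal in Python, though rarely used)
x = 1; y = 2
print(total, x, y)   # 21 1 2
```
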
0Viliam_Bur12y
I spent a lot of time thinking about this, and now it seems to me that this is the wrong question. The right question is: "how do we make the most legible language?" Maybe it will require some changes to the concept of a "statement". Why does one statement plus one statement make two statements, while one expression plus one expression makes one expression? Why is "x=1; y=1;" two units, but "(x == 1) && (y == 1)" one unit? What happens if a statement is part of an expression, in an inline anonymous function? Where should we place semicolons or line breaks then? Sorry, I don't have a good answer. As a half-good answer, I would go with the early VB syntax: the rule is unambiguous (unlike some JavaScript rules), and it requires a special symbol in a special situation (as opposed to using a special symbol in a non-special situation). Another half-good answer: use four-space tabs for "this is the next statement" and a half-tab (two spaces) for "here continues the previous line". (If the statement has more than two lines, all the lines except the first one are aligned the same; the half-tabs don't accumulate.)
0Random83212y
Because a statement is the fundamental unit of an imperative language. If "x=1; y=1;" were one unit, it would be one statement. Technically, on another level, multiple statements enclosed in braces is a single statement. Your objection does suggest another solution I forgot to put in - ban arbitrarily complex expressions. Then statements are of bounded length and have no need to span multiple lines. The obvious example for a language that makes this choice is assembly. You could ban inline anonymous functions, or require them to be a single expression. You could implement half of Lisp as named functions that are building blocks for your "single expression" anonymous functions, so this doesn't necessarily lose expressive power. That Microsoft changed it is weak evidence against it - it suggests that people really don't like having to add that extra symbol. There is that ambiguity problem, though. (Javascript's rule* technically requires an arbitrarily large amount of lookahead - I think the modern VB rule is more sane from a compiler perspective, but can still have annoying consequences) Your "other half-good answer" isn't really very distinct from the first: the half-tab takes the role of the special symbol; it being at the beginning of the line just changes how you specify the grammar. (Vim scripting is an example of an existing language that uses a symbol at the beginning of a line for continuations) It also creates an extra burden (even compared to current whitespace-sensitive languages like Python) to maintain the indentation correctly. In particular, it forbids you from adding lots of extra indentation to, for example, line up the second part of a statement with a similar element on the first line (think making a C-style function call, then indenting subsequent lines to the point where the opening bracket of the argument list was. Or indenting to the opening bracket of the innermost still-open group in general.) *Technical note: Javascript's rule is "put in
1DanArmak12y
I don't believe this is true, at least not for the usual sense of "statement", which is "code with side effects which, unlike an expression, has no type (not even unit/void) and does not evaluate to a value". You can easily make a language with no statements, just expressions. As an example, start with C. Remove the semicolon and replace all uses of it with the comma operator. You may need to adjust the semantics very slightly to compensate (I can't say where offhand). Presto, you have a statement-less language that looks quite functional: everything (other than definitions) is an expression (i.e. has a type and yields a value), and every program corresponds to the evaluation of a nested tree of expressions (rather than the execution of a sequence of statements). Yet, the expressions have side effects upon evaluation, there is global shared mutable state, there are variables, there is a strict and well-defined eager order of evaluation - all the semantics of C are intact. Calling this a non-imperative language would be a matter of definition, I guess, but there's no substantial difference between real C and this subset of it.
0Viliam_Bur12y
So the question "what kind of language are we trying to make?" must be answered before "what syntax would make it most legible?". Assuming an imperative language, the simplest solution would be one command per line, no exceptions. There is a scrollbar at the bottom; or you can split a long line into more lines by using temporary variables. No syntax can make all programs legible. A good syntax is without exceptions and without unnecessary clutter. But if the user decides to write programs horribly, nothing can stop them. An important choice is whether you make formatting significant (Python-style) or not. Making formatting significant has an advantage that you would probably format your code anyway, so the formatting can carry some information that does not have to be written explicitly, e.g. by curly brackets. But people will complain that in some situations a possibility to use their own formatting would be better. You probably can't make everyone happy.
1lavalamp12y
I think you stated my thoughts better than I did.

I recently did the biggest bit of useful programming I've ever done - automating large chunks of sysadmin work - in ant. ant is basically makefiles for Java. But it's Turing-complete!

What it feels like: using an esoteric programming language whose conceit is that all code must be correctly-formed XML. Most of the work was the mathematical puzzle of how to implement some really simple thing in ant. (Every domain specific language that is allowed to become Turing-complete will evolve into brainfuck.)

My point is not that ant is a horrible, horrible language t...

In the Subtext FAQ there's a question that is now my favorite question to ask of any new programming tool:

How will this scale to programs larger than the screen?

Disconcertingly, the answer for Subtext is

Unknown.

This article suggests that something like 30% to 60% of people cannot learn to code. I think that's interesting. EDIT: This also might be wrong; see child comment.

The three hurdles the article describes are variable assignment, recursion, and concurrency. I don't think you can program at all without those three elements.
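
For readers who haven't met those three, here is roughly what each looks like in Python (a minimal sketch of my own, not taken from the article):

```python
import threading

# 1. Variable assignment: the same name refers to different values over time.
x = 3
x = x + 1            # x is now 4; "x = x + 1" is where many beginners stumble

# 2. Recursion: a function defined in terms of itself.
def factorial(n):
    return 1 if n == 0 else n * factorial(n - 1)

# 3. Concurrency: several threads of control interleaving.
def worker(name):
    print("hello from", name)

threads = [threading.Thread(target=worker, args=(f"thread-{i}",)) for i in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(x, factorial(5))   # 4 120
```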

Programming is interesting in that the difference between good programmers and bad programmers seems to be far more pronounced than the difference between people who are good and bad at other tasks-- I recently observed about ten smart frie...

This article suggests that something like 30% to 60% of people cannot learn to code. I think that's interesting.

This is the top link which the 2006 Coding Horror article is based on:

http://www.eis.mdx.ac.uk/research/PhDArea/saeed/

It's to Saeed Dehnadi's research. In 2006, Dehnadi and Bornat put out a paper purporting to "have discovered a test which divides programming sheep from non-programming goats. This test predicts ability to program with very high accuracy before the subjects have ever seen a program or a programming language." The Coding Horror article, which was heavily linked and discussed in various forums, seems to have popularized this research quite well.

In 2008, the followup research on a much larger and more diverse set of students failed to confirm the effect.

And a 2009 followup showed mixed results.

These followups received substantially less widespread discussion than the original claim. My sneaking suspicion is that this may reflect not only the usual bias in favor of positive results, but a preference on the part of the programming community for the notion that programmers are a special class of people.

(Or it may just be that Coding Horror didn't cover them.)

4Viliam_Bur12y
My suspicion is that such results reflect a failure of teaching. Imagine that you are teaching people mathematics, and you skip some beginner lessons, and start with the more advanced ones. Some people will have the necessary knowledge (from home, books, internet, etc.), so they can follow you and improve their knowledge. Most people simply don't understand what you are talking about. At the end of the year the test will show that there are two separate groups -- those who know a lot, and those who have no clue. Please note that the failure of teaching is not necessarily at the level where the problem was discovered. It may be a failure from previous levels. For example a university teacher may expect some really simple knowledge, but many high schools fail to teach it.
3dbaupp12y
The type system of Haskell is quite restrictive for beginners (it's a little annoying to not be able to debug by putting a print anywhere, or read user input wherever you want) and the laziness can be a little unintuitive, especially for people who haven't done much mathematics (e.g. ones = 1:ones... "I'm defining something in terms of itself, aghafghfg"). But, I do agree that functional languages might be easier to teach to certain groups of people, like those who have done a fair bit of maths, and that Haskell has some very neat features for learning to program (GHCi and Hoogle are awesome!).
2gRR12y
There's unsafePerformIO :: IO a -> a
4gwern12y
Or, er, Debug.Trace.trace...
0Solvent12y
I agree.
1Eugine_Nier12y
Well, depending on what platform you're using, you don't necessarily need concurrency.
0roystgnr12y
I don't think you can program at all without at least one of the first two elements, but you can get by with only one or the other if you restrict your choice of languages. That's just for learning how to code, though; you'll never get through first-semester CS with that kind of limitation.

As others pointed out, different ways of "programming" are best for different problem domains, and there is virtually no chance that a one-size-fits-all language can be useful to do or teach programming in every domain.

Moreover, regardless of the language, you have to develop the ability to think like a computer, which means that there is no magical DWIM (do what I mean) button/keyword available to you. Some people have a harder time developing this essential ability than others; they should probably consider a different career path, no matter what languages are available out there.

Recently I've been exposed to the idea of a visual programming language named subtext

I'm glad you like subtext. Me too.

I just had a big "update". EDIT: I'm a little less sure now. See the end.

I found something to teach programming, on an immediate level, to non-programmers without their knowing they are programming, without any cruft. I always wished this was possible, but now I think we're really close.

If you want to get programming, and are a visual thinker, but never could get over some sort of inhibition, I think you should try this. You won't e...

0witzvo12y
Update: See also: www.worrydream.com

Visual programming is great where the visual constructs map well to the problem domain. Where it does not apply well, it becomes a burden to the programmer. The same can be said about text-based programming. The same can be said about programming paradigms. For example, object-oriented programming is great... when it maps well to the problem being solved, but for other problems it simply sucks, and perhaps functional programming is a better model.

In general, programming is easy when the implementation domain (the programming language, abstract model, developm...

2loup-vaillant12y
(Duplicate of this) If you haven't heard of the STEPS project from the Viewpoints Research Institute already, it may interest you. (Their last report is here)
0David_Allen12y
Thank you for the reference to STEPS; I am now evaluating this material in some detail. I would like to discuss the differences and similarities I see between their work and my perspective; are you familiar enough with STEPS to discuss it from their point of view? In reply to this: This use of a general-purpose language also shows up in the current generation of language workbenches (and here). For example, JetBrains' Meta Programming System uses a Java-like base language, and Intentional Software uses a C# (like?) base language. My claim is that this use of a base general-purpose language is not necessary, and possibly not generally desirable. With an ecosystem of DSLs, general-purpose languages can be generated when needed, and DSLs can be generated using only other DSLs.
2loup-vaillant12y
I think I am (though I'm but an outsider). However, I can't really see any significant difference between their approach and yours. Except maybe that their DSLs tend to be much more Turing-complete than what you would like. It matters little, however, because the cost of implementing a DSL is so low that there is little danger of being trapped in a Turing tar-pit. (To give you an idea, implementing Javascript on top of their stack takes 200 lines. And I believe the whole language stack implements itself in about 1000 lines.) In the unlikely case you haven't already, you may want to check out their other papers, which include the other progress reports, and other specific findings. You should be most interested in Ian Piumarta's work on maru, and Alessandro Warth's on OMeta, which can be examined separately.
0asr12y
This seems like a bad idea. There is a high cognitive cost to learning a language. There is a high engineering cost to making different languages play nice together -- you need to figure out precisely what happens to types, synchronization, etc etc at the boundaries. I suspect that breaking programs into pieces that are defined in terms of separate languages is lousy engineering. Among other things, traditional unix shell programming has very much this flavor -- a little awk, a little sed, a little perl, all glued together with some shell. And the outcome is usually pretty gross.
1David_Allen12y
These are well targeted critiques, and are points that must be addressed in my proposal. I will address these critiques here while not claiming that the approach I propose is immune to "bad design". Yes, traditional general purpose languages (GPLs) and many domain specific languages (DSLs) are hard to learn. There are a few reasons that I believe this can be allayed by the approach I propose. The DSLs I propose are (generally) small, composable, heavily reused, and interface oriented which is probably very different than the GPLs (and perhaps DSLs) from your experience. Also, I will describe what I call the encoding problem and map it between DSLs and GPLs to show why well chosen DSLs should be better. In my model there will be heavy reuse of small (or even tiny) DSLs. The DSLs can be small because they can be composed to create new DSLs (via transparent implementations, heavy use of generics, transformation, and partial specialization). Composition allows each DSL to deal with a distinct and simple concern but yet be combined. Reuse is enhanced because many problem domains regardless of their abstraction level can be effectively modeled using common concerns. For example consider functions, Boolean logic, control structures, trees, lists, and sets. Cross-cutting concerns can be handled using the approaches of Aspect-oriented programming. The small size of these commonly used DSLs, and their focused concerns make them individually easy to learn. The heavy reuse provides good leveraging of knowledge across projects and across scales and types of abstractions. Probably learning how to program with a large number of these DSLs will be the equivalent of learning a new GPL. In my model DSLs are best thought of as interfaces, where the interface is customized to provide an efficient and easily understood method of manipulating solutions within the problem domain. In some cases this might be text based interfaces such as we commonly program in now, but it also could be
2asr12y
I'm only going to respond to the last few paragraphs you wrote. I did read the rest. But I think most of the relevant issues are easier to talk about in a concrete context which the shell analogy supplies. Yes. It's clunky. But it's not clunky by happenstance. It's clunky because standardized IPC is really hard. It's a standard observation in the programming language community that a library is sort of a miniature domain-specific language. Every language worth talking about can be "extended" in this way. But there's nothing novel about saying "we can extend the core Java language by defining additional classes." Languages like C++ and Scala go to some trouble to let user classes resemble the core language, syntactically. (With features like operator overloading). I assume you want to do something different from that, since if you wanted C++, you know where to find it. In particular, I assume you want to be able to write and compose DSLs, where those DSLs cannot be implemented as libraries in some base GPL. But that's a self-contradictory desire. If DSL A and DSL B don't share common abstractions, they won't compose cleanly. Think about types for a minute. Suppose DSL A has some type system t, and DSL B has some other set of types t'. If t and t' aren't identical, then you'll have trouble sharing data between those DSLs, since there won't be a way to represent the data from A in B (or vice versa). Alternatively, ask about implementation. I have a chunk of code written in A and a chunk written in B. I'd like my compiler/translator to optimize across the boundary. I also want to be able to handle memory management, synchronization, etc across the boundary. That's what composability means, I think. Today, we often achieve it by having a shared representation that we compile down to. For instance, there are a bunch of languages that all compile down to JVM bytecode, to the .NET CLR, or to GCC's intermediate representation. (This also sidesteps the type problem I m

MaxMSP is a music technology program that works like this - you can visually track the flow of information. It might be relevant - I'll write about it more tomorrow; I'm on a mobile phone at the moment.

1Bill_McGrath12y
Okay, it's been about two years since I've used Max/MSP, and they've brought out a significantly different version since then, but this is what I remember. It's a program for building musical instruments, patches, and effects. You place objects (which take the form of labelled boxes) on a blank space, and connect them together with wires. The UI is pretty bare - it's mostly just black and white, though the user can add a degree of their own design to the patch for actual use. The objects are, for the most part, pretty simple, so it can be quite difficult to achieve even simple tasks. To create a sine tone, you create a specific object that has the "create sine tone function", input a number (for example 440 - it's measured in Hz) into it, and output it to an audio control. Building bigger and more complex devices gets pretty dense, and if something isn't working it can be very difficult to figure out where the problem lies. That said, I found it quite helpful to have the ability to visually track the flow of information - one exception to the usually black-and-white UI is that wires carrying sound rather than numerical data appear as crosshatched grey and yellow, rather than simple black line. I'm not sure how helpful this is; I've no knowledge of programming, but maybe it'll serve as a useful comparison.
0jmmcd12y
I spent some time programming with Max (less with MSP) and found roughly the results that others have reported for visual languages. It makes something like an FM synthesizer (y = a sin(b sin (ct))) look a lot more pleasant to a non-programmer musician, but for bigger projects it slows you down and gets in the way and prevents version control etc. But I didn't spend a lot of time with it, so a grain of salt is needed of course.

I'm not a programmer. I wish I were. I've tried to learn it several times, different languages, but never went very far. The most complex piece of software I ever wrote was a bulky, inefficient game of life.

For how long? I've been able to solve the first 5 Project Euler problems after 2 days of Python, and I'd probably be able to solve more, but that isn't programming.

I doubt you can become anything that would deserve the label "programmer" in under 3 years, unless you are a genius.

3Will_Newsome12y
I know at least two people who got $70k+ programming jobs after only about three months of study. Not sure what "genius" means in this context.
3Jayson_Virissimo12y
What did they do before programming?
7Will_Newsome12y
One was a physics grad student, the other a mathematics grad student. Both had some prior experience with basic Bash.
2[anonymous]12y
This makes it much less surprising. Anecdotally, in my social circle it seems that people who have studied math or physics pick up programming easily.
1komponisto12y
I think that's exaggerated. From what I understand it was more like one $70k and one $40k, after something like 6-8 months of study. That said, anyone with anecdotes like this is invited to share them. They sound cool, and give one hope for this world.
0Will_Newsome12y
The 40k one wasn't the one I had in mind, but I'll accept your correction re the 70k one.
0XiXiDu12y
I know very little so it is hard to judge for me. I would be impressed by someone with no programming experience who could write a post like this, after three months of study, without a previous math or computer science background.
2lavalamp12y
That author's level isn't necessary to make a living at computer programming.
2gjm12y
And (for the avoidance of doubt) that author doesn't in any sense lack "a previous math or computer science background", although he says he's a programming beginner. He's a first-rate physicist and the author of an important book on quantum computing. So I'm not sure what XiXiDu is saying here; that Nielsen's level of insight is what it takes to deserve the label of "programmer"? (No.) Or that Nielsen is a genius? (Maybe, but so what?) Or what?
[-][anonymous]12y00

It seems that there are some who are incapable of learning programming. That said, when you are programming, you are virtually always working with a von Neumann architecture, so many languages have common ground.

Code is in general presented as composable units. Working with symbolic graphs of plain-text names or with actual graphs in 2D makes little difference.

If you really want to learn programming, but think that regular Java(script) or Python is trite and annoying, try Haskell. Haskell requires you to know math to actually make sense of anything, and it is v...

2gwern12y
Replication has proven difficult: http://www.gwern.net/Notes#the-camel-has-two-humps

Related point - I remember that programming in HyperTalk (for HyperCard) was a lot like explaining something to someone who had no domain knowledge.

Get the text of field 'address'.

Put it after field 'label'.

Stuff like that.

I lost interest in programming after I realized how much effort it would take me to do anything cool.

For those people here who consider themselves reasonably skilled at programming: how long would it take you to implement a clone of Tetris? I've got a computer engineering degree, and "cin" was the only user input method they taught me in programming classes...

Edit: You're not allowed to use ASCII graphics, and it has to run at the same speed on different processors, but other than that, requirements are flexible.

3RolfAndreassen12y
Depends on how many bells and whistles you want. For just a basic clone, I can do it in an evening, using a third-party library to handle the windowing and whatnot. Of course if you wanted to implement the windowing and input handling by talking directly to the OS, it would be an utter nightmare; call it three months.
3Nornagest12y
User experience makes questions like this tricky: graphics, sound, score-tracking, tightening up controls. If I was working with a graphics library I was already familiar with, I doubt the core gameplay would take me more than a few hours. Writing a reasonable clone of the version of Tetris I played fifteen years ago on the Atari ST would take at least a couple of weeks, though, and that's if I had all the resources I needed on hand. An exact clone would take even longer.
2CronoDAS12y
A related question: how much would a beginner have to study before being able to write Tetris? As I said, I graduated with a degree in computer engineering without being able to code Tetris, because I have no idea how to write anything except console programs that use cin and cout to do all the input/output work. "Draw something on the screen, and have it move when someone presses a key" would seem to be a fundamental task in computer programming, but apparently it's some kind of super-advanced topic. :P
2Emile12y
The focus on cin / cout as opposed to GUI is probably because cin is simple and always works the same way (mostly because nobody uses it, no need for a zillion libraries), whereas there are a lot of very different GUI libraries with different ways of doing things; learning one of those would take time and not help you use another one much. If you want to learn how to make a GUI you can probably find a "hello world" example for your language/OS of choice and just copy-paste the code and then adjust it to suit your needs.
3CronoDAS12y
Yeah... everything is in libraries these days and the libraries are all incompatible with each other. :(
2ShardPhoenix12y
It's not as hard as it might sound. Modern languages have nice libraries and frameworks that make input and basic graphics very easy. Here's a tutorial for Slick (a Java-based 2D game framework) that walks you through how to do exactly what you ask: http://slick.cokeandcode.com/wiki/doku.php?id=01_-_a_basic_slick_game Here's a tutorial for how to make Tetris: http://slick.cokeandcode.com/wiki/doku.php?id=02_-_slickblocks I'm sure similar things exist for C++, especially since it's the most popular language for making games in. edit: If you want to actually follow one of the above tutorials, see this setup info first: http://slick.cokeandcode.com/wiki/doku.php?id=getting_started_and_setup
1Nornagest12y
Probably not too long. I wrote (a crappy version of) Breakout in my second semester of high-school programming, and that was using Pascal plus some homebrewed x86 assembler for the graphics (both of which were a nightmare that I wouldn't recommend to anyone), so simple games clearly don't require any deep knowledge of the discipline; if you've got a computer-engineering background already and you're working with a modern graphics library, I'd call it a couple weeks of casual study. Less if you're using a game-specific framework, but those skills don't usually transfer well to other things.
2ShardPhoenix12y
I did this (in Java) when I had some spare time at work recently. It took a couple of days of work (with some slacking off), including learning a (pretty simple) new game/graphics framework.
2lavalamp12y
If you don't care about graphics, like a couple hours. But you could spend as much time as you want on graphics.
1fubarobfusco12y
Use Pygame. Evidence: People frequently produce playable games using Pygame in 24-hour hackathons.
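
For concreteness, a minimal Pygame sketch of the "draw something and move it when a key is pressed" task mentioned upthread; window size, colors, and speed are arbitrary choices:

```python
import pygame

pygame.init()
screen = pygame.display.set_mode((640, 480))
clock = pygame.time.Clock()
x, y = 320, 240

running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False

    # Arrow keys move the square.
    keys = pygame.key.get_pressed()
    if keys[pygame.K_LEFT]:
        x -= 5
    if keys[pygame.K_RIGHT]:
        x += 5
    if keys[pygame.K_UP]:
        y -= 5
    if keys[pygame.K_DOWN]:
        y += 5

    screen.fill((0, 0, 0))
    pygame.draw.rect(screen, (255, 255, 255), (x, y, 20, 20))
    pygame.display.flip()
    clock.tick(60)   # cap at 60 frames per second, regardless of processor

pygame.quit()
```
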
1Vladimir_Nesov12y
An evening? Depends on what you mean by "Tetris" and the target quality. I did a two player Tetris-over-network for a class once (the opponent gets an additional garbled line when you clear multiple lines). It's easy, which is part of the problem: with enough expertise, it can become boring, you don't learn as much new stuff, it becomes more like laying bricks, not designing intricate machines or learning the principles of their operation. An estimate that I heard on multiple occasions, disbelieved, and then witnessed come true, is that it takes about 4 years of hands-on experience for an enthusiastic smart adult to conquer the learning curve and as a result lose enthusiasm for software development in the abstract (so that you'd need something special about the purpose of the activity, not just the activity itself). I don't know about the probability of this happening, but the timescale seems about right.
0gRR12y
I wrote tetris for IBM-370/VM once, during school practice. The main problem was making the keyboard and display work in real time, had to write significant parts in assembler. Took about three weeks of evenings.

There is good reason that several programmers have referred to GUIs as "point-and-grunt" interfaces. And actually programming requires even more flexibility. An intuitive and functional "intuitive" programming system is going to be based around natural language, not pretty pictures. That isn't to say images won't be used, most likely in structuring the sub-components of the overall program, but LabView already shows the strengths and weaknesses of that if the programmer doesn't have sufficient linguistic control at a lower level.

ADDE...

[-][anonymous]8y-20

Fantastic thread! Are there any statistical programming languages, or programming languages of any kind, that are, well, 'obvious'? Something where I can type 'survival analysis with lalalala' instead of 'stset 34.3 alpha 334' or something like that?

model the flow of instructions in your mind, how distant parts of the code interact together

Unless you're hacking you usually don't need to do this. You just need to understand what state the program is in before and after each operation. You never need to understand the whole thing at once, just understand one part at a time.

3jimrandomh12y
Er, what? You absolutely do need to model control flow, and how distant parts fit together. You should only think about state one operation at a time when you're confused, or suspicious of the code you're looking at, because step-by-step thinking is very slow and can't support most of the operations you'd want to do on a program.
2Incorrect12y
When modelling how distant parts fit together, you use abstraction. You don't need to model how the internals of your sort function interact with other parts of your code, you just remember that it sorts things. You're still thinking in terms of one operation at a time, just using more high-level operations. Notice that software design best practices improve your ability to do this: separation of concerns, avoidance of mutable global variables, lack of non-obvious side effects.
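
A small Python illustration of leaning on a contract rather than internals (the data and function names are invented):

```python
def top_scorers(records, n=3):
    # We rely only on sorted()'s contract ("returns a new sorted list"),
    # not on how it is implemented; one high-level operation at a time.
    ranked = sorted(records, key=lambda r: r["score"], reverse=True)
    return [r["name"] for r in ranked[:n]]

records = [{"name": "Ada", "score": 97},
           {"name": "Bob", "score": 82},
           {"name": "Cy", "score": 91}]
print(top_scorers(records, n=2))   # ['Ada', 'Cy']
```
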
4jimrandomh12y
From my own experience as a programmer, I think this is idealized to the point of being false. Finding a few distantly-separated, interacting regions of code which don't respect a clean abstraction is pretty common, especially when debugging (in which case there is an abstraction but it doesn't work).
3asr12y
This isn't really possible in many cases. Many programs are resource-constrained. And the heap, IO resources, etc, are shared state. We don't have good ways of abstracting that away. Likewise, synchronization is still a giant can of worms.