
RichardKennaway comments on Sensual Experience - Less Wrong

13 Post author: Eliezer_Yudkowsky 21 December 2008 12:56AM



Comment author: RichardKennaway 23 January 2013 01:22:49PM *  0 points [-]

Conway's Life in Matlab:

function L = life(L)
  % Count each cell's eight neighbours, with the grid wrapping at the edges.
  L1 = imfilter( L, [1 1 1;1 0 1;1 1 1], 'circular' );
  % A cell is alive next step if it has 3 neighbours, or 2 and is alive now.
  L = int32( (L1==3) | ((L1==2) & L) );
end

That's about 50 lexical tokens against APL's 30, but it does not require advanced knowledge of Matlab to understand. Not that I want to get into a language war here; there are any number of things I dislike about Matlab.
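For anyone who prefers Python, here is a rough NumPy equivalent of the same update (a sketch, with np.roll playing the role of imfilter's 'circular' boundary option):

```python
import numpy as np

def life(L):
    # Sum the eight shifted copies of the board; np.roll wraps at the
    # edges, like imfilter's 'circular' boundary option.
    n = sum(np.roll(np.roll(L, i, axis=0), j, axis=1)
            for i in (-1, 0, 1) for j in (-1, 0, 1)
            if (i, j) != (0, 0))
    # A cell is alive next step iff it has 3 neighbours,
    # or 2 neighbours and is alive now.
    return ((n == 3) | ((n == 2) & (L == 1))).astype(int)
```

The whole-array style carries over almost unchanged: one convolution-like neighbour count, one boolean expression.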

Here's the equivalent of the primes program from the APL Wiki page:

function P = prms( R )
  P = 2:R; % Make an array of the numbers from 2 to R.
  PP = P' * P; % Make a 2D array of all pairwise products.
  PP = PP(PP<=R); % Make a 1D array of the products no more than R.
  P(PP-1) = [ ]; % Remove those products from P.
end

Language support for array operations is the major advantage of APL, Matlab, Q, and K, and I wish every language had it.
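NumPy gives Python much the same facility; here is a sketch of the pairwise-products sieve above in that style (np.isin stands in for Matlab's index-deletion idiom):

```python
import numpy as np

def prms(R):
    P = np.arange(2, R + 1)      # the numbers from 2 to R
    PP = np.outer(P, P)          # 2D array of all pairwise products
    composites = PP[PP <= R]     # 1D array of the products no more than R
    # Every composite <= R appears as such a product, so keep only the
    # entries of P that never do.
    return P[~np.isin(P, composites)]
```

As in the Matlab version, there is no explicit loop anywhere; the array operations do all the work.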

Comment author: IlyaShpitser 23 January 2013 03:02:45PM *  2 points [-]

My point wasn't actually that it's a useful thing to pursue the shortest way of writing a given algorithm. In fact I am not an APL expert, and find it hard to read. My point is that there is no particular reason other than inertia that we happen to formalize mathematical/algorithmic ideas via a linear string of ASCII characters. In fact, this representation makes it unnatural to reason about, or write algorithms on, many common types of structures. The fact that many attempts to do better turn out poorly (as the great-grand-parent poster experienced) does not mean improvements do not exist -- the space is very large.

For example, a regular expression is a graph. Why on earth do we insist on encoding it as a very hard to read string of ASCII? (I am sure one could be a very efficient regexp jockey with practice, but in some sense the representation is working against us and our powerful vision subsystem). There are all these theorems in graphical models that have proofs much easier for humans to follow because they use graph theory and not algebra, etc.
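To make this concrete, the regex a(b|c)*d is really a little automaton graph. Here is a sketch in Python that stores it as an explicit adjacency structure and runs it directly (the state names are made up for illustration):

```python
# The regex a(b|c)*d, written as the graph it denotes: states are nodes,
# labelled edges are transitions.  (State names are hypothetical.)
nfa = {
    'start':  {'a': {'loop'}},
    'loop':   {'b': {'loop'}, 'c': {'loop'}, 'd': {'accept'}},
    'accept': {},
}

def matches(nfa, start, accept, s):
    # Simulate the NFA by tracking the set of states reachable
    # after reading each character of the input.
    states = {start}
    for ch in s:
        states = set().union(*(nfa[q].get(ch, set()) for q in states))
    return accept in states
```

Nothing here is cleverer than the string form, but the branching and looping structure is laid out explicitly instead of being encoded in operator precedence.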

Comment author: arundelo 23 January 2013 04:12:43PM 0 points [-]

If you don't already know about it, you'll enjoy reading about Olin Shivers's SRE regex notation.

Comment author: IlyaShpitser 23 January 2013 04:24:22PM *  1 point [-]

Yes, I am aware of this (and lispy things in general), but thanks! s-expressions are great if you like metaprogramming, but they share the same fundamental problem as ordinary regular expressions -- they encode non-linear structures as a line of ASCII.

Actually, there is no reason macro-based metaprogramming couldn't work in a language that uses graphs as a primitive UI element, rather than a list like LISP does. "Graph rewriting" is practically a cottage industry.
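As a toy sketch of the idea (not any particular graph-rewriting system): represent terms as a graph of nodes, and apply a rule like add(x, 0) => x by redirecting references rather than editing text. The node ids and representation here are made up for illustration:

```python
def simplify(nodes, n):
    # Apply the rewrite rule add(x, 0) -> x bottom-up over a term graph,
    # where nodes[n] = (operator, [child ids]) and leaves have no children.
    op, kids = nodes[n]
    kids = [simplify(nodes, k) for k in kids]
    if op == 'add' and len(kids) == 2 and nodes[kids[1]] == (0, []):
        return kids[0]  # redirect the reference past the add node
    nodes[n] = (op, kids)
    return n
```

For the graph of (y + 0) * y, the rule simply reroutes the multiply node's first edge straight to y; no string of the program ever exists.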

Comment author: RichardKennaway 23 January 2013 04:34:12PM 0 points [-]

Actually, there is no reason macro-based metaprogramming couldn't work in a language that uses graphs as a primitive UI element, rather than a list like LISP does. "Graph rewriting" is practically a cottage industry.

Where you wrote "UI element", did you mean "data structure"? I don't know what it would mean to talk about graphs as a primitive user interface element.

With a language with sufficiently expressive metaprogramming facilities (LISP enthusiasts will recommend LISP for this role) you can extend it with whatever data structures you want.

Comment author: IlyaShpitser 23 January 2013 05:59:23PM 0 points [-]

I guess I meant both a data structure and a visual representation of a data structure (in LISP they are almost the same, which is what makes metaprogramming in LISP so natural).

Comment author: RichardKennaway 23 January 2013 04:17:18PM 3 points [-]

I had a student doing an M.Sc. thesis (recently passed the viva, with a paper in press and an invitation from the external examiner to give a presentation) on a system he built for combining visual and textual programming. For example, if a variable happens to be holding an image, in a debugger you see a thumbnail of the image instead of a piece of textual information like <float array(512x512x3)>. One of the examples he used was a visual display of regular expressions.

But there are several standard problems that come up with every attempt to improve on plain text.

  1. Text is universal -- to improve on it is a high bar to pass.

  2. A lot of work has been done on visual programming, but a problem that crops up over and over is that every idea of how to do it works for toy examples, but most of them won't scale. You just get a huge, tangled mess on your screen. Thinking up visual representations is the easy part; scaling is the real problem.

  3. What makes intuitive visual sense to a human is not necessarily easily made sense of by the software that is supposed to handle it. Even plain text can be opaque if it wasn't designed with computer parsing in mind. I speak here from experience of implementing a pre-existing notation for recording sign languages. The first thing we did was convert it from its own custom font into ASCII, and the second was to write a horribly complicated parser to transform it into more sensible syntax trees -- which were then represented as XML text. Only then was it possible to do the real work, that of generating animation data. And even then, I kept coming across cases where it was obvious to a human being (me) what the original notation meant, but not obvious to the implementer (me) how to express that understanding in program code.

Comment author: IlyaShpitser 23 January 2013 04:31:45PM 1 point [-]

I am not a programming language expert, but hobbyist/amateur, so generally I defer to people who do this stuff for a living. My only points are:

(a) The space of possible languages is large.

(b) It would be curious indeed if lines of ASCII were the optimum for a species with such a strong visual subsystem.

(c) The computer science community has a terrible institutional memory for its own advances (e.g. LISP code is its own syntax tree with hardly any parsing; Perl's garbage collector for the longest time failed on circular references; etc.), so progress is slow.

These I take as evidence that there is much more progress to be made just on notation and representation.