This is a bit-over-an-hour-long talk with slides from this year's Strange Loop conference that I found extremely interesting, and I thought I would share it with any programmers in the audience here who haven't seen it.

It presents Gerald Sussman's ideas about the potential of modeling computation with sophisticated constraint-propagation/dataflow-esque constructs that have some really interesting properties, and it might be of interest to anyone thinking about how an AI might create and maintain Bayesian beliefs.

It would be accessible to anyone who can read a little Lisp, and I strongly recommend it unless this is all old hat to you already.

http://www.infoq.com/presentations/We-Really-Dont-Know-How-To-Compute

Unfortunately, I don't see a transcript yet.  But here's a brief timeline of the video that I jotted down, in case you want to skip the early parts and get to a piece that interests you:

0:00 - 10:40: Preface discussing the extreme flexibility and low latency of biological systems as opposed to human-designed systems. Programmers naturally design systems with very rigid constraints, but few natural systems are like this.  Sussman feels that flexibility, not efficiency or correctness, is the central problem holding back human programmers.  Example of Kanizsa's "triangle" figure-ground illusion where humans immediately see the shape of a triangle between other shapes.

10:40 - 23:45: Recap of the existing capability for compile-time flexibility in static languages and Lisp; gives an example of doing symbolic algebra in Lisp and of building a Lisp system for solving problems in classical mechanics.  Suggests that the most powerful flexibility we have is to define new operators at runtime and to redefine existing operators to operate on data in new ways at runtime.  There is a direct tradeoff between this kind of flexibility and the ability to prove programs correct.
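
To give a flavor of the kind of runtime flexibility he means, here's a toy sketch of my own (not code from the talk): a Lisp operator can be redefined while the system is running so that it starts accepting a whole new kind of data, such as symbolic expressions.

    ;; A generic addition that initially only knows numbers.
    (define (numeric-add x y) (+ x y))
    (define generic-add numeric-add)

    (generic-add 1 2)   ; => 3

    ;; Redefined at runtime: non-numbers now build a symbolic sum
    ;; instead of signaling an error.
    (set! generic-add
          (lambda (x y)
            (if (and (number? x) (number? y))
                (numeric-add x y)
                (list '+ x y))))

    (generic-add 1 2)   ; => 3
    (generic-add 'a 2)  ; => (+ a 2)

Everything built on top of generic-add picks up the new behavior without being recompiled, which is exactly the kind of power that makes correctness proofs hard.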

23:45 - 25:30: Sussman says that future computers are inevitably going to have to operate on a hugely parallel architecture where individual processors are plentiful and cheap.  A solution to the flexibility problem needs to be highly parallel.

25:30 - 28:10: Discusses his experience teaching EE to students at MIT.  Explains how strong engineers do network analysis on electrical circuits: they combine assumptions about parts of the circuit with physical models to gradually conclude more and more about the voltages at different points.  If an assumption turns out to be wrong, they can back it out with minimal effort.

28:10 - 32:50: Sussman and RMS wrote a Lisp program to replicate the process by which engineers analyze a circuit.  Discusses how Lisp syntax helped him write an exploratory program that modeled both the parts of a circuit and the analysis process in code.  Sussman's program could take the wiring diagram of a circuit and the various rules governing current, calculate the current at some point, and then recapitulate the assumptions and rules that led it to its answer, just as an expert engineer could.

32:50 - 35:00: Sussman summarizes this idea as a "propagator" network: stateless machines moving pieces of data between stateful cells.  Each machine monitors its input cells and pushes out new outputs whenever the inputs change.  This is a naturally parallel architecture.
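
Here's a minimal sketch of that split, in the spirit of (but much cruder than) the code Sussman shows; all the names here are mine:

    ;; A cell is stateful storage plus a list of watching machines.
    (define (make-cell)
      (let ((content 'nothing) (watchers '()))
        (define (notify) (for-each (lambda (w) (w)) watchers))
        (lambda (msg . args)
          (cond ((eq? msg 'content) content)
                ((eq? msg 'add-content)
                 ;; Only notify on an actual change, so that
                 ;; circularly-wired machines quiesce.
                 (if (not (equal? content (car args)))
                     (begin (set! content (car args)) (notify))))
                ((eq? msg 'watch)
                 (set! watchers (cons (car args) watchers)))))))

    (define (content cell) (cell 'content))
    (define (add-content! cell v) (cell 'add-content v))

    ;; A propagator is a stateless machine: recompute f whenever an
    ;; input cell changes, and push the result into the output cell.
    (define (propagator f inputs output)
      (define (fire)
        (let ((vals (map content inputs)))
          (if (not (memq 'nothing vals))
              (add-content! output (apply f vals)))))
      (for-each (lambda (c) (c 'watch fire)) inputs)
      (fire))   ; run once in case the inputs are already filled

The machines share no state except the cells, which is why they can all run concurrently.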

35:00 - 40:30: Expression-oriented languages have a problem built into their syntax: nested expressions have many inputs but only one output, and the intermediate sub-expression outputs are neither named nor retained.  Real-world systems aren't like this.  You can convert this to the propagator model by giving each intermediate output its own cell that a machine can watch and act on.  Shows a simple Lisp implementation of Newton's method modeled as a propagator network.
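
Using the toy cells from the sketch above, the nested expression (+ (* a b) c) becomes two machines, with the formerly anonymous sub-expression given its own cell:

    (define a (make-cell)) (define b (make-cell)) (define c (make-cell))
    (define ab (make-cell))       ; the intermediate output, now named
    (define answer (make-cell))

    (propagator * (list a b) ab)
    (propagator + (list ab c) answer)

    (add-content! a 2) (add-content! b 3) (add-content! c 4)
    (content ab)       ; => 6  -- observable, not buried in the expression
    (content answer)   ; => 10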

40:30 - 42:45: Propagators can update information multi-directionally; shows how he encodes the rules for resistors as propagators in Lisp.
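
In the same toy system, Ohm's law v = i * r can be wired as three one-directional machines, so that filling in any two of the cells deduces the third.  (Sussman's real version also merges partial information, which this omits.)

    (define (ohms-law v i r)
      (propagator * (list i r) v)
      (propagator / (list v r) i)
      (propagator / (list v i) r))

    (define v (make-cell)) (define i (make-cell)) (define r (make-cell))
    (ohms-law v i r)

    (add-content! v 12) (add-content! r 4)
    (content i)   ; => 3, deduced "backwards" from voltage and resistance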

42:45 - 46:45: Presents the interview-esque problem of "how do I measure a building's height given a barometer, a ruler, and a stopwatch?"  You can make multiple measurements to solve the problem.  Since propagators work multi-directionally, you can model the problem in a way where taking multiple measurements not only narrows your error margin for the building's height, but feeds back and narrows the error margins on the other inputs as well.
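
The mechanism behind that narrowing is interval arithmetic: cells hold ranges rather than point values, and two estimates of the same quantity merge by intersection.  A tiny sketch with made-up numbers:

    (define (make-interval lo hi) (cons lo hi))
    (define (intersect-intervals a b)
      (make-interval (max (car a) (car b))
                     (min (cdr a) (cdr b))))

    ;; Two independent estimates of the building's height in meters:
    (define from-fall-time (make-interval 41.2 47.2))   ; stopwatch drop
    (define from-pressure  (make-interval 44.5 49.9))   ; barometer reading
    (intersect-intervals from-fall-time from-pressure)
    ; => (44.5 . 47.2) -- tighter than either estimate alone

Because the network is multi-directional, pushing the tightened height back through the fall-time equation also narrows the plausible stopwatch reading.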

46:45 - 50:15: Describes monads as "lambda-calculus plumbing" to carry along metadata together with a value.  Shows how information going into a propagator network can carry metadata about its premises, so that outputs can keep track of which premises they depend upon.
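
A rough sketch of such "supported values" (my own toy version, with made-up premise names): pair every datum with the set of premises it rests on, and make combination union the premise sets.

    (define (supported value premises) (cons value premises))
    (define (s-value s) (car s))
    (define (s-premises s) (cdr s))

    ;; Apply f to the underlying values; the result inherits
    ;; every premise of every argument.
    (define (s-apply f . ss)
      (supported (apply f (map s-value ss))
                 (apply append (map s-premises ss))))

    (define shadow (supported 37.0 '(shadow-measurement sun-angle)))
    (define ratio  (supported 1.2  '(similar-triangles)))
    (s-apply * shadow ratio)
    ; => (44.4 shadow-measurement sun-angle similar-triangles)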

50:15 - 55:00: Humans don't give up if different belief subsystems come to a contradictory conclusion.  Presents the idea and Lisp implementation of a "truth maintenance system" developed by him, RMS, and his students. Each cell in the propagator network can keep track of multiple pieces of data, each dependent upon different premises. Premises can be believed or disbelieved and the output can be produced according to the current "worldview" with minimal necessary recomputation.
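
A very rough sketch of the worldview idea, reusing `supported' from the previous sketch (this omits contradiction detection and everything else that makes a real TMS interesting):

    ;; A cell's contents: every datum it has ever been told,
    ;; each tagged with the premises it depends on.
    (define height-cell
      (list (supported 45.0 '(barometer-ok))
            (supported 45.7 '(stopwatch-ok fall-model))))

    (define (believed? worldview s)
      (let loop ((ps (s-premises s)))
        (or (null? ps)
            (and (memq (car ps) worldview)
                 (loop (cdr ps))))))

    (define (current-values worldview contents)
      (cond ((null? contents) '())
            ((believed? worldview (car contents))
             (cons (car contents)
                   (current-values worldview (cdr contents))))
            (else (current-values worldview (cdr contents)))))

    (current-values '(barometer-ok) height-cell)
    ; => ((45.0 barometer-ok))
    ; The stopwatch datum is parked, not recomputed away: believing
    ; stopwatch-ok and fall-model again brings it back for free.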

55:00 - 56:45: Truth maintenance systems can maintain locally consistent worldviews in the presence of globally contradictory inputs.  The propagator network can automatically identify which beliefs are inconsistent, and very efficiently do backtracking and find sets of beliefs which are internally consistent.  All of this works well in parallel.

56:45 - 60:00: Presents a logic puzzle where 5 people live in 5 houses with different rules saying which person may be or may not be assigned to each house.  With normal recursive backtracking, solving the problem would take 3,000 backtracks.  Sussman's propagation network reasoned out the right assignment with 63 backtracks.

1:00:00 - 1:04:20: Closing remarks on the potential application of propagation-based ideas.  Points out that the triangle illusion plausibly looks like the result of a propagation network: pieces of evidence in the figure all contribute as inputs to the perception of the triangle in the background.  Sussman doesn't know if this is right, but feels that we need some drastically different model of computation to ever approach the functionality of natural systems.

Finally, here's the information which Sussman and his student have published about their propagator work so far, with code included:

http://groups.csail.mit.edu/mac/users/gjs/propagators/

4 comments

Thank you. Very applicable to my current work.

Very nice. The idea of propagators linked, in my mind, with that of "scrubbing" UIs.

http://worrydream.com/#!/ScrubbingCalculator

Fine summary.

djcb:

Thanks for sharing this! I look forward to playing a bit with propagators. And, more generally, it's impressive how Sussman keeps coming up with clear, enlightening ideas decade after decade.