wnoise comments on Open Thread: March 2010 - Less Wrong

5 Post author: AdeleneDawner 01 March 2010 09:25AM


Comment author: wnoise 02 March 2010 05:29:12PM 4 points [-]

I want to get a grasp of the underlying nature of computer science,

Then you do not, in fact, need to learn to program. You need an actual CS text, covering finite automata, pushdown automata, Turing machines, etc. Learning to program will illustrate and fix these concepts more firmly, and is a good general skill to have.
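For a taste of the first of those models, here is a minimal sketch (in Python, purely as an illustration; the machine and names are invented for the example, not drawn from any particular text): a deterministic finite automaton that accepts binary strings containing an even number of 1s.

```python
# A deterministic finite automaton (DFA) is just a transition table
# plus a current state. This one tracks the parity of the 1s seen.
TRANSITIONS = {
    ("even", "0"): "even", ("even", "1"): "odd",
    ("odd", "0"): "odd",   ("odd", "1"): "even",
}

def accepts(s):
    state = "even"  # start state, which is also the sole accepting state
    for ch in s:
        state = TRANSITIONS[(state, ch)]
    return state == "even"
```

So `accepts("11")` holds and `accepts("1011")` does not; the whole model fits in a lookup table.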

Comment author: XiXiDu 02 March 2010 06:11:21PM 0 points [-]

Recommendations on the above? Books, essays...

Comment author: hugh 02 March 2010 08:29:20PM *  1 point [-]

Sipser's Introduction to the Theory of Computation is a tiny little book with a lot crammed in. It's also quite expensive, and advanced enough to make most CS students hate it. I have to recommend it because I adore it, but why start there, when you can start right now for free on wikipedia? If you like it, look at the references, and think about buying a used or international copy of one book or another.

I echo the reverent tones of RobinZ and wnoise when it comes to The Art of Computer Programming. Those volumes are more broadly applicable, even more expensive, and even more intense. They make an amazing gift for that computer scientist in your life, but I wouldn't recommend them as a starting point.

Comment author: RobinZ 02 March 2010 07:50:33PM *  0 points [-]

Elsewhere wnoise said that SICP and Knuth were computer science, but additional suggestions would be nice.

Comment author: wnoise 02 March 2010 08:33:39PM 1 point [-]

Well, they're computer sciencey, but they are definitely geared toward approaching the subject from the programming, even "von Neumann machine", side rather than from Turing machines and automata. That is a useful, reasonable way to go, but it is (in some sense) considered less fundamental. I would still recommend them.

For my undergraduate work, I used two books. The first is Jan L. A. van de Snepscheut's What Computing Is All About. It is, unfortunately, out-of-print.

The second was Elements of the Theory of Computation by Harry Lewis and Christos H. Papadimitriou.

Comment author: SoullessAutomaton 03 March 2010 12:41:37AM 2 points [-]

Well, they're computer sciencey, but they are definitely geared toward approaching the subject from the programming, even "von Neumann machine", side rather than from Turing machines and automata. That is a useful, reasonable way to go, but it is (in some sense) considered less fundamental. I would still recommend them.

Turing Machines? Heresy! The pure untyped λ-calculus is the One True Foundation of computing!

Comment author: Douglas_Knight 03 March 2010 03:45:37AM 2 points [-]

You probably should have spelled out that SICP is on the λ-calculus side.

Comment author: wedrifid 03 March 2010 03:48:07AM 0 points [-]

Gah. Do I need to add this to my reading list?

Comment author: Douglas_Knight 03 March 2010 04:25:36AM *  4 points [-]

You seem to already know Lisp, so probably not. Read the table of contents. If you haven't written an interpreter, then yes.

The point in this context is that when people teach computability theory from the point of view of Turing machines, they wave their hands and say "of course you can emulate a Turing machine as data on the tape of a universal Turing machine," and there's no point to fill in the details. But it's easy to fill in all the details in λ-calculus, even a dialect like Scheme. And once you fill in the details in Scheme, you (a) prove the theorem and (b) get a useful program, which you can then modify to get interpreters for other languages, say, ML.
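As a rough illustration of that "fill in the details" point (in Python rather than Scheme, and with naive substitution that handles shadowing but not variable capture, which suffices for a sketch), a toy normal-order evaluator for the untyped λ-calculus can be this small:

```python
# Terms are plain data: a variable is a string, ("lam", x, body) is an
# abstraction, ("app", f, a) is an application. The interpreter is an
# ordinary program operating on expressions-as-data.
def substitute(term, name, value):
    if isinstance(term, str):
        return value if term == name else term
    if term[0] == "lam":
        _, x, body = term
        if x == name:  # the binder shadows `name`; stop here
            return term
        return ("lam", x, substitute(body, name, value))
    _, f, a = term
    return ("app", substitute(f, name, value), substitute(a, name, value))

def evaluate(term):
    if isinstance(term, tuple) and term[0] == "app":
        f = evaluate(term[1])
        if isinstance(f, tuple) and f[0] == "lam":
            _, x, body = f
            return evaluate(substitute(body, x, term[2]))
        return ("app", f, term[2])
    return term  # variables and abstractions are already values
```

For instance, applying the identity to another abstraction, `evaluate(("app", ("lam", "x", "x"), ("lam", "y", "y")))`, reduces to `("lam", "y", "y")`.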

SICP is a programming book, not a theoretical book, but there's a lot of overlap when it comes to interpreters. And you probably learn both better this way.

I almost put this history lesson in my previous comment:
Church invented λ-calculus and proposed the Church-Turing thesis that it is the model of all that we might want to call computation, but no one believed him. Then Turing invented Turing machines, showed them equivalent to λ-calculus and everyone then believed the thesis. I'm not entirely sure why the difference. Because they're more concrete? So λ-calculus may be less convincing than Turing machines, hence pedagogically worse. Maybe actually programming in Scheme makes it more concrete. And it's easy to implement Turing machines in Scheme, so that should convince you that your computer is at least as powerful as theoretical computation ;-)
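To illustrate that last claim (in Python rather than Scheme; the example machine and names are invented for the sketch), a Turing-machine simulator really is only a few lines:

```python
# The tape is a mapping from position to symbol; unwritten cells read
# "_". Rules map (state, symbol) to (new_state, symbol_to_write, move),
# where move is "L" or "R". The machine stops on reaching "halt".
from collections import defaultdict

def run(rules, tape, state="start"):
    cells = defaultdict(lambda: "_", enumerate(tape))
    pos = 0
    while state != "halt":
        state, write, move = rules[(state, cells[pos])]
        cells[pos] = write
        pos += 1 if move == "R" else -1
    return "".join(cells[i] for i in range(min(cells), max(cells) + 1))

# Example machine: increment a binary number written least-significant
# bit first (flip trailing 1s to 0, then the first 0 or blank to 1).
increment = {
    ("start", "1"): ("start", "0", "R"),
    ("start", "0"): ("halt", "1", "R"),
    ("start", "_"): ("halt", "1", "R"),
}
```

Here `run(increment, "11")` yields `"001"`: 3 plus 1 is 4, written least-significant bit first.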

Comment author: Eliezer_Yudkowsky 03 March 2010 04:44:09AM 4 points [-]

Um... I think it's a worthwhile point, at this juncture, to observe that Turing machines are humanly comprehensible and lambda calculus is not.

EDIT: It's interesting how many replies seem to understand lambda calculus better than they understand ordinary mortals. Take anyone who's not a mathematician or a computer programmer. Try to explain Turing machines, using examples and diagrams. Then try to explain lambda calculus, using examples and diagrams. You will very rapidly discover what I mean.

Comment author: Morendil 03 March 2010 08:33:57AM 3 points [-]

A friend of mine has invented a "Game of Lambda" played with physical tokens which look like a bigger version of the hexes from wargames of old, with rules for function definition, variable binding and evaluation. He has a series of exercises requiring players to create functions of increasing complexity; plus one, factorial, and so on. Seems to work well.

Alligator Eggs is another variation on the same theme.

Comment author: SoullessAutomaton 03 March 2010 05:35:37AM *  3 points [-]

Are you mad? The lambda calculus is incredibly simple, and it would take maybe a few days to implement a very minimal Lisp dialect on top of raw (pure, non-strict, untyped) lambda calculus, and maybe another week or so to get a language distinctly more usable than, say, Java.

Turing Machines are a nice model for discussing the theory of computation, but completely and ridiculously non-viable as an actual method of programming; it'd be like programming in Brainfuck. It was von Neumann's insights leading to the stored-program architecture that made computing remotely sensible.

There's plenty of ridiculously opaque models of computation (Post's tag machine, Conway's Life, exponential Diophantine equations...) but I can't begin to imagine one that would be more comprehensible than untyped lambda calculus.

Comment author: wnoise 03 March 2010 05:51:18AM *  2 points [-]

You realize you've just called every computer scientist inhuman?

Turing machines are something one can easily imagine implementing in hardware. The typical encoding of some familiar concepts into lambda calculus takes a bit of getting used to (natural numbers as functions which compose their argument (as a function) n times? If-then-else as function application, where "true" is a function returning its first argument, and "false" is a function returning its second? These are decidedly odd). But lambda calculus is composable. You can take two definitions and merge them together nicely. Combining useful features from two Turing machines is considerably harder. The best route to usable programming there is the UTM + stored code, which you have to figure out how to encode sanely.
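Those encodings are easy to sketch with ordinary Python lambdas (an illustration of the idea only; the names are invented for the example, and this is not how one would actually program):

```python
# A Church numeral n composes its argument n times; a Church boolean
# picks one of its two arguments, so "if-then-else" is just applying
# the boolean to the two branches.
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))
plus = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

true = lambda a: lambda b: a
false = lambda a: lambda b: b
if_then_else = lambda cond: lambda a: lambda b: cond(a)(b)

def to_int(n):
    # Decode a numeral by composing "add one" n times over 0.
    return n(lambda k: k + 1)(0)
```

So `to_int(plus(succ(zero))(succ(succ(zero))))` gives 3, and `if_then_else(true)("first")("second")` gives `"first"`: decidedly odd, but everything composes.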

Comment author: Douglas_Knight 03 March 2010 05:26:49AM 0 points [-]

Maybe pure lambda calculus is not humanly comprehensible, but general recursion is as comprehensible as Turing machines, yet Gödel rejected it. My history should have started when Church promoted that.

Comment author: rwallace 03 March 2010 04:51:25AM 0 points [-]

It's much of a muchness; in pure form, both are incomprehensible for nontrivial programs. Practical programming languages have aspects of both.

Comment author: hugh 03 March 2010 05:12:21AM 0 points [-]

I think that λ-calculus is about as difficult to work with as Turing machines. I think the reason that Turing gets his name in the Church-Turing thesis is that they had two completely different architectures that had the same computational power. When Church proposed that λ-calculus was universal, I think there was a reaction of doubt, and a general feeling that a better way could be found. When Turing came to the same conclusion from a completely different angle, that appeared to verify Church's claim.

I can't back up these claims as well as I'd like. I'm not sure that anyone can backtrace what occurred to see if the community actually felt that way or not; however, from reading papers of the time (and quite a bit thereafter---there was a long period before near-universal acceptance), that is my impression.

Comment author: Douglas_Knight 03 March 2010 05:17:58AM *  0 points [-]

Actually, the history is straightforward, if you accept Gödel as the final arbiter of mathematical taste. Which his contemporaries did.

ETA: well, it's straightforward if you both accept Gödel as the arbiter and believe his claims made after the fact. He claimed that Turing's paper convinced him, but he also promoted it as the correct foundation. A lot of the history was probably not recorded, since all these people were together in Princeton.

EDIT2: so maybe that is what you said originally.

Comment author: wedrifid 03 March 2010 04:38:24AM 0 points [-]

You seem to already know Lisp, so probably not.

I know the principles but have never taken the time to program something significant in the language. Partly because it just doesn't have the libraries available to enable me to do anything I particularly need to do and partly because the syntax is awkward for me. If only the name 'lisp' wasn't so apt as a metaphor for readability.

Comment author: wedrifid 03 March 2010 04:34:32AM 0 points [-]

Are you telling me lambda calculus was invented before Turing machines and people still thought the Turing machine concept was worth making ubiquitous?

Comment author: AngryParsley 03 March 2010 04:51:46AM 2 points [-]

Wikipedia says lambda calculus was published in 1936 and the Turing machine was published in 1937.

I'm betting it was hard for the first computer programmers to implement recursion and call stacks on early hardware. The Turing machine model isn't as mathematically pure as lambda calculus, but it's a lot closer to how real computers work.

Comment author: orthonormal 03 March 2010 05:10:14AM 1 point [-]

Why not? People have a much easier time visualizing a physical machine working on a tape than visualizing something as abstract as lambda-calculus. Also, the Turing machine concept neatly demolishes the "well, that's great in theory, but it could never be implemented in practice" objections that are so hard to push people past.

Comment author: RobinZ 02 March 2010 08:47:31PM 0 points [-]

Any opinion on the 2nd edition of Elements?

Comment author: wnoise 02 March 2010 08:52:10PM 0 points [-]

Nope. I used the first edition. I wouldn't call it a "classic", but it was readable and covered the basics.