Today's post, Ghosts in the Machine, was originally published on 17 June 2008. A summary (taken from the LW wiki):
There is a way of thinking about programming a computer that conforms well to human intuitions: telling the computer what to do. The problem is that the computer isn't going to understand you, unless you program the computer to understand. If you are programming an AI, you are not giving instructions to a ghost in the machine; you are creating the ghost.
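To make that concrete, here is a toy sketch of my own (not from the post): a program only ever performs the behaviors its handlers were explicitly written to perform. There is no built-in listener that turns "do what I mean" into behavior; the command names below are invented purely for illustration.

```python
# Toy illustration: a command interpreter only does what was explicitly
# programmed in. There is no ghost in the dispatch table that fills in
# missing understanding.

def make_interpreter():
    handlers = {
        "add": lambda a, b: a + b,                # behavior the programmer wrote
        "greet": lambda name: f"Hello, {name}!",  # behavior the programmer wrote
    }

    def interpret(command, *args):
        if command not in handlers:
            # The machine does not "understand" unknown requests; it just fails.
            raise KeyError(f"no handler programmed for {command!r}")
        return handlers[command](*args)

    return interpret

interpret = make_interpreter()
print(interpret("add", 2, 3))       # 5
try:
    interpret("do_what_I_mean")     # no ghost to ask; this simply fails
except KeyError as err:
    print(err)
```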
Discuss the post here (rather than in the comments to the original post).
This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Grasping Slippery Things, and you can use the sequence_reruns tag or rss feed to follow the rest of the series.
Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.
That's true if and only if some aspect of biological neural architecture (as opposed to the many artificial neural network architectures out there) turns out to be Turing-irreducible, i.e. not computable by any Turing machine: all computing systems meeting some basic requirements (Turing completeness) can simulate each other in a pretty strong and general way. As far as I'm aware, we don't know of any physical process that can't be simulated on a von Neumann (or any other Turing-complete) architecture, so claiming natural neurology as a member of that category seems to be jumping the gun just a little bit.
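To make the mutual-simulation point concrete, here's a minimal sketch (my own toy example, nothing from the thread): a few lines of Python running on an ordinary von Neumann machine emulating a different machine model, a one-tape Turing machine, by treating its transition table as data. The particular machine, which just appends a 1 to a block of 1s, is made up for illustration.

```python
# Minimal sketch of one machine model emulated inside another: a program on a
# von Neumann architecture stepping a single-tape Turing machine whose rules
# are supplied as data.

def run_turing_machine(transitions, tape, state="start", accept="halt", max_steps=10_000):
    """Simulate a single-tape Turing machine.

    transitions: dict mapping (state, symbol) -> (new_state, write_symbol, move),
                 with move being -1 (left) or +1 (right).
    tape:        dict mapping integer positions to symbols; blanks are '_'.
    """
    head = 0
    for _ in range(max_steps):
        if state == accept:
            return tape
        symbol = tape.get(head, "_")
        state, write_symbol, move = transitions[(state, symbol)]
        tape[head] = write_symbol
        head += move
    raise RuntimeError("step budget exhausted")

# Toy machine: scan right over a block of 1s, write one more 1, then halt.
transitions = {
    ("start", "1"): ("start", "1", +1),
    ("start", "_"): ("halt", "1", +1),
}

result = run_turing_machine(transitions, {0: "1", 1: "1", 2: "1"})
print("".join(result[i] for i in sorted(result)))  # -> 1111
```

The same trick scales up: anything whose step rule can be written down as finite data like this can in principle be stepped by any Turing-complete host, which is the "simulate each other" claim above made concrete.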