Today's post, Heading Toward Morality, was originally published on 20 June 2008. A summary (taken from the LW wiki):
A description of the last several months of sequence posts that identifies the topic Eliezer actually wants to explain: morality.
Discuss the post here (rather than in the comments to the original post).
This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was LA-602 vs RHIC Review, and you can use the sequence_reruns tag or rss feed to follow the rest of the series.
Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.
Are you serious? Do you really think that morality can be programmed on computers? Good luck, then. Pursuing even unrealistic goals can yield useful results. At the least, your effort will mark more clearly the boundaries and limitations of the computer-programming approach to the AI problem.