I think you misunderstood my point here.
I was responding directly to this claim:
Are you serious? Do you really think that morality can be programmed on computers? Good luck then.
... which I would not make due to the violation of GAP.
Regarding the somewhat weaker claim "programming morality into computers would be very hard," we may have less disagreement. My expectation is that even with the best human minds dedicated to "programming morality into computers," after first spending decades of research on those "high-level architectures," they are still quite likely to make a mistake and thereby kill us all.
I thought those questions were innocent. But if it looks like a violation of some policy, then I apologize for that. I never meant any personal attack. I think you understand my point now (at least partially) and can see how weird such ideas as programming morality look to me. I now realize there may be many people here who take these ideas seriously.
Today's post, Heading Toward Morality, was originally published on 20 June 2008. A summary (taken from the LW wiki):
Discuss the post here (rather than in the comments to the original post).
This post is part of the Rerunning the Sequences series, in which we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was LA-602 vs RHIC Review, and you can use the sequence_reruns tag or RSS feed to follow the rest of the series.
Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.