Today's post, Heading Toward Morality, was originally published on 20 June 2008. A summary (taken from the LW wiki):
A description of the last several months of sequence posts, identifying the topic that Eliezer actually wants to explain: morality.
Discuss the post here (rather than in the comments to the original post).
This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was LA-602 vs RHIC Review, and you can use the sequence_reruns tag or RSS feed to follow the rest of the series.
Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.
I think you misunderstood my point here.
But first: yes, I skimmed through the recommended article, but I don't see how it fits in here. It's the old, familiar dispute about philosophical zombies. My take is that the idea of such zombies is rather artificial; I think it is advocated by people who have trouble understanding the mind/body connection. These people are dualists, even if they don't admit it.

Now, about morality. There is a good expression in the article you referenced: "high-level cognitive architectures". We don't yet know what this architecture is, but it is the level that provides the categories and the language one has to understand and adopt in order to understand high-level mind functionality, including morality. Programming languages are way below that level and not suitable for the purpose. As an illustration, imagine that we have a complex expert system that performs extensive database searches and sophisticated logical inferences, and we then try to understand how it works in terms of the gates, transistors, and capacitors operating on a microchip. It will not do! The same goes for trying to program morality. How would one do this? Write a function like bool isMoral(...)? You pass in parameters that represent a certain life situation and it returns true or false for moral/immoral? That seems absurd to me. The best use of programming for AI that I can think of is to write software that models the behavior of neurons. From there it would still be a long way up to "high-level cognitive architectures", and only then to morality.
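To make the objection concrete, here is a minimal sketch, in C++, of the kind of function being argued against. Everything in it (the Situation struct, its fields, the rule table) is hypothetical, invented only for illustration; the point is that any such flat predicate must already smuggle in the high-level concepts it was supposed to define.

```cpp
#include <iostream>
#include <string>
#include <vector>

// Hypothetical type, invented purely for illustration: a "life
// situation" flattened into a handful of string-valued fields.
struct Situation {
    std::vector<std::string> actors;  // who is involved
    std::string action;               // what is being done
    std::string context;              // surrounding circumstances
};

// The signature the comment calls absurd: a single predicate mapping
// any situation to a moral verdict. The body shows where it breaks
// down: every rule presupposes high-level concepts (intent, harm,
// consent) that the flat representation above does not carry.
bool isMoral(const Situation& s) {
    if (s.action == "lying")   return false;  // lying to protect someone?
    if (s.action == "helping") return true;   // helping someone do harm?
    return true;  // even the default verdict is itself a moral judgment
}

int main() {
    Situation s{{"Alice", "Bob"}, "lying", "to spare Bob's feelings"};
    std::cout << std::boolalpha << isMoral(s) << '\n';  // prints false,
    // regardless of the context field: the predicate cannot see intent.
}
```

Notice that the context field is never consulted; deciding when and how context matters is exactly the "high-level cognitive architecture" work that the comment says programming languages sit far below.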
I was responding directly to this claim:
... which I would not make, because it violates GAP.
Regarding the somewhat weaker claim that "programming morality into computers would be very hard", we may have less disagreement. My expectation is that even the best human minds dedicated to programming morality into computers, after first spending decades of research into those "high-level architectures", are still quite likely to make a mistake and thereby kill us all.