Eliezer_Yudkowsky comments on Morality as Fixed Computation - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Well, for that matter, there are some AIs I'd trust to administer drugs to me, just as there are some AIs I wouldn't trust to tell me true facts.
At this point, though, it becomes useful to distinguish metaethics for human use from metaethics in FAI. In terms of metaethics for human use, any human telling you a true fact that affects your morality is helping you; in terms of metaethics for human use, we don't worry too much about the ordering of valid arguments.
In FAI you've got to worry about a superintelligence searching huge amounts of argument-sequence space. My main thought for handling this sort of thing consists of searching all orderings, superposing them, and considering the coherence or incoherence thereof as the extrapolated volition, rather than trying to map out one unique/optimal path.
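A toy sketch of the superposition idea, with everything hypothetical: pretend each argument nudges a scalar judgment, with earlier arguments in a sequence weighing more (so ordering matters). We extrapolate over every ordering, take the average as the superposed "volition," and score incoherence as the variance across orderings. This illustrates only the shape of the idea, not any actual CEV construction.

```python
# Toy illustration (all names and numbers hypothetical): treat each
# ordering of arguments as producing a judgment, then "superpose" the
# orderings and score coherence as agreement across orderings.
from itertools import permutations
from statistics import mean, pvariance

# Hypothetical: each "argument" nudges a scalar moral judgment.
ARGUMENTS = {"fact_a": +1.0, "fact_b": -0.5, "fact_c": +0.25}

def extrapolate(ordering):
    """Judgment reached by hearing arguments in this order."""
    judgment = 0.0
    for rank, name in enumerate(ordering):
        judgment += ARGUMENTS[name] / (rank + 1)  # earlier args weigh more
    return judgment

def superpose(arguments):
    """Average judgment over all orderings, plus an incoherence score."""
    outcomes = [extrapolate(p) for p in permutations(arguments)]
    return mean(outcomes), pvariance(outcomes)  # low variance = coherent

volition, incoherence = superpose(ARGUMENTS)
```

If the variance were zero, every path through the arguments would converge on the same answer; high variance would signal that the "extrapolated volition" is incoherent under this (cartoonishly simple) order-sensitivity model.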