ata comments on The Urgent Meta-Ethics of Friendly Artificial Intelligence - Less Wrong
"Machinery" was a figure of speech, I'm not saying we're going to find a deontology lobe. I was referring, for instance, to the point that there are evolutionary reasons why we'd expect to find (as we do) that an understanding of deontological injunctions is fairly universal among humans.
Oops, sorry, I accidentally used the opposite of the word I meant. That should have been "specific", not "general". Yes, we understand expected utility maximization with highly general machinery, and in very abstract terms.
EY's theory, linked in the first post, that deontological injunctions evolved as some sort of additional defense against black swan events does not seem especially convincing to me. The cortex is intrinsically predictive and consequentialist at a low level, but simple deontological rules are vast computational shortcuts.
An animal brain learns the hard way, the way AIXI does: thoroughly consequentialist at first, but once predictable patterns are learned at higher levels, they can sometimes be simplified into simpler rules for quick decisions.
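The shortcut described here can be illustrated with a toy sketch (all names, situations, and scores below are hypothetical, chosen only for illustration): an agent first evaluates actions by an expensive consequentialist simulation, then caches the winning action for a recognized situation as a cheap rule, so later decisions skip the simulation entirely.

```python
# Toy sketch (hypothetical names and values): an agent that starts out
# evaluating consequences explicitly, then caches results as simple rules.

def simulate_outcomes(situation, action):
    """Expensive 'consequentialist' evaluation: score an action by its
    predicted consequences. Stubbed as a lookup table for illustration."""
    scores = {
        ("see_food_in_tribe_store", "steal"): -10,  # the tribe punishes theft
        ("see_food_in_tribe_store", "ask"): +5,
    }
    return scores.get((situation, action), 0)

class CachingAgent:
    def __init__(self, actions):
        self.actions = actions
        self.rules = {}  # situation -> cached best action (the "deontological" shortcut)

    def decide(self, situation):
        # Fast path: a cached rule answers immediately, no simulation needed.
        if situation in self.rules:
            return self.rules[situation]
        # Slow path: full consequentialist evaluation of every available action.
        best = max(self.actions, key=lambda a: simulate_outcomes(situation, a))
        self.rules[situation] = best  # cache as a simple rule for next time
        return best

agent = CachingAgent(actions=["steal", "ask"])
print(agent.decide("see_food_in_tribe_store"))  # -> "ask" (computed by simulation)
print(agent.decide("see_food_in_tribe_store"))  # -> "ask" (from the cached rule)
```

The second call never touches `simulate_outcomes`, which is the computational saving the comment is pointing at; the cached rule behaves like an injunction even though it was derived consequentially.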
Even non-verbal animals find ways to pass down some knowledge to their offspring, but in humans this is vastly amplified through language.
Every time a parent tells a child what to do, the parent is transmitting complex consequentialist results down to the younger mind in the form of simpler cached deontological behaviors. For example, it would be painful for the child to learn firsthand the consequentialist account of why stealing is detrimental (the tribe will punish you).
Once this machinery was in place, it could extend over generations and develop into more complex cultural and religious deontologies. All of this can be accomplished through cortical reinforcement learning as the child develops.
Feral children, for all intents and purposes, act like feral animals. Human minds are cultural/linguistic software phenomena.
I'm not aware of any practical approach to AI which consists of programming concepts directly into an AI. All modern approaches program only the equivalent of an empty brain, the concepts and resulting mind forms through learning.
Human concepts are expressed in natural language, and for an AGI to compete with humans it will need to learn extant human knowledge. Learning natural language thus seems like the most practical approach.
The problem is this: if we define an algorithm to represent our best outcome and use it as the standard of rationality, and the algorithm's predictions then differ significantly from actual human decisions, is that a problem with the algorithm or with the human mind?
If we had an algorithm that represented a human mind perfectly, then that mind would always be rational by that definition.
Even if deontological injunctions are only transmitted through language, they are based on evolved human predispositions (read: brain wiring) to act morally and cooperate.
This applies somewhat to animals too; there has been research on altruism in animals.