Kaj_Sotala comments on Stupid Questions, December 2015 - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
I don't think that this distinction really cuts reality at the joints. In general, it's my impression that researchers have been moving towards rejecting the whole nature/nurture distinction, as e.g. hinted at in the last paragraph of the Wikipedia article that you linked.
More specifically, as the Hanson article you linked to notes, the human mind seems built for a very large degree of value plasticity: it is capable of adopting a wide range of values depending on its environment. That by itself starts to make the distinction suspect - if it's easy for us to acquire new terminal values via nurture, precisely because our nature is one that readily adopts values supplied by nurture... then how do you tell whether some value came more from nurture or from nature? If both were integral to the acquisition of the value, it's unclear whether the distinction makes any sense.
One way of looking at it: an artificial neural network can in principle learn any computable function. So you take an untrained network and teach it to classify points based on which side of the line y = 2x + 6 they fall on. Does the property of classifying things according to 2x + 6 come from nature or nurture? Arguably from nurture, since without that particular training data, the network wouldn't have learned to classify things according to that specific function. But on the other hand, "learning any function" is in the untrained network's nature, and the intervention from nurture didn't shift the network away from some other function it would have computed by default: in the absence of any intervention from nurture, the network wouldn't have learned to discriminate anything at all.
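The analogy can be made concrete with a small sketch (my own illustration, not part of the original comment; the training setup and numbers are assumptions chosen for demonstration): a single logistic unit whose architecture can represent any linear boundary - its "nature" - and which the training data then specializes to the particular line y = 2x + 6 - its "nurture".

```python
# A single logistic unit trained to classify points by which side of the
# line y = 2x + 6 they fall on. The unit's "nature" is that it can
# represent any linear boundary; the training data ("nurture") selects
# this particular one.
import math
import random

random.seed(0)

def safe_sigmoid(z):
    # Numerically stable sigmoid, avoiding overflow for large |z|.
    if z >= 0:
        return 1.0 / (1.0 + math.exp(-z))
    ez = math.exp(z)
    return ez / (1.0 + ez)

def true_label(x, y):
    # Ground truth: which side of y = 2x + 6 the point lies on.
    return 1 if y > 2 * x + 6 else 0

# Random points with side-of-line labels (the "nurture" data).
data = [(random.uniform(-10, 10), random.uniform(-30, 30)) for _ in range(2000)]
data = [(x, y, true_label(x, y)) for x, y in data]

# Untrained unit: weights start at zero, computing nothing useful.
w1, w2, b = 0.0, 0.0, 0.0
lr = 0.01  # learning rate

# Stochastic gradient descent on the logistic loss.
for epoch in range(200):
    for x, y, t in data:
        p = safe_sigmoid(w1 * x + w2 * y + b)
        err = p - t
        w1 -= lr * err * x
        w2 -= lr * err * y
        b -= lr * err

accuracy = sum(
    (safe_sigmoid(w1 * x + w2 * y + b) > 0.5) == bool(t) for x, y, t in data
) / len(data)
print(f"training accuracy: {accuracy:.3f}")
```

With zero weights the unit discriminates nothing; only after training does it classify by that specific line - which is the point of the analogy: the behaviour came "from nurture", yet only because learning it was in the unit's nature.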
Similarly, without a culture surrounding us we'd just end up as feral children (though arguably even feral children grow up in some culture, like an animal one). We're clearly born with tendencies that make some values more likely to manifest than others, but for those tendencies to manifest, we also need a culture that builds on top of them - similar to how different neural net architectures predispose a network towards learning some functions more easily than others, while the training data from the environment still determines which function is actually learned.
Similar to the neural net analogy - where the network has the potential to learn an infinite number of different functions, and the training data selects some part of that potential to teach it specific ones - Jonathan Haidt has argued that different cultures select parts of a pre-existing potential for morality, so that the latent "potential morality" becomes an actual, concrete morality.
To apply your proposed test - taking a value and trying to find out how cross-cultural it is - consider the appreciation of novels, movies, and video games. On the one hand, you could argue that an appreciation of these things is clearly not a human universal, because cultures that haven't yet invented them don't value them, and there are cultures such as the Amish that reject at least some of them. On the other hand, you could argue that an appreciation of these things comes naturally to humans, because they are all art forms that tap into our pre-existing appreciation of stories and storytelling. But then, that still doesn't prevent some cultures from rejecting them...