ElGalambo comments on Less Wrong views on morality? - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
I am saying evolutionary morality as a whole is an invalid concept that is irrelevant to the subject of morality.
Actually, I can think of a minutely useful aspect of evolutionary morality: it tells us that the evolutionary mechanism by which we got our current intuitions about morality is stupid, because it is the same mechanism that gave lions the intuition to (quoting the article I linked to) 'slaughter their step children, or to behead their mates and eat them, or to attack neighboring tribes and tear their members to bits (all of which occurs in the natural kingdom)'.
If the mechanism by which we got our intuitions about morality is stupid, then we learn that our intuitions are completely irrelevant to the subject of morality. We also learn that we should not waste our time studying such a stupid mechanism.
I initially wrote up a bit of a rant, but I just want to ask a question for clarification:
Do you think that evolutionary ethics is irrelevant because the neuroscience of ethics and neuroeconomics are much better candidates for understanding what humans value (and therefore for guiding our moral decisions)?
I'm worried that you don't because the argument you supplied can be augmented to apply there as well: just replace "genes" with "brains". If your answer is a resounding 'no', I have a lengthy response. :)
IMO, what each of us values for ourselves may be relevant to morality. What we intuitively value for others is not.
I have to admit I have not read the metaethics sequences. From your tone, I feel I am making an elementary error. I am interested in hearing your response.
Thanks
What would probably help is if you said what you thought was relevant to morality, rather than only telling us about things you think are irrelevant. It would make it easier to interpret your irrelevancies.
I'm not sure if it's elementary, but I do have a couple of questions first. You say:
This seems to suggest that you're a moral realist. Is that correct? I think that most forms of moral realism tend to stem from some variant of the mind projection fallacy; in this case, because we value something, we treat it as though it has some objective value. Similarly, because we almost universally hold something to be immoral, we hold its immorality to be objective, or mind independent, when in fact it is not. The morality or immorality of an action has less to do with the action itself than with how our brains react to hearing about or seeing the action.
Taking this route, I would say that not only are our values relevant to morality, but the dynamic system comprising all of our individual value systems is an upper bound on what can be in the extensional definition of "morality" if "morality" is to make any sense as a term. That is, if something is outside of what any of us can ascribe value to, then it is not moral subject matter; and furthermore, what we can and do ascribe value to is dictated by neurology.
Not only that, but there is a well-known phenomenon that complicates naive (without input from neuroscience) moral decision making: the distinction between liking and wanting. This distinction crops up in part because the way we evaluate possible alternatives is lossy - we can only use a finite amount of computational power to try to predict the effects of a decision or of obtaining a goal, and we have to use heuristics to do so. In addition, there is the fact that human valuation is multi-layered - we have at least three valuation mechanisms, and their interaction isn't yet fully understood. Also see Glimcher et al., "Neuroeconomics and the Study of Valuation". From that article:
The mechanisms for choice valuation are complicated, and so are the constraints on human ability in decision making. In evaluating whether an action was moral, it's imperative to avoid making the criterion "too high for humanity".
One last thing I'd point out has to do with the argument you link to, because you do seem to be inconsistent when you say:
Relevant to morality, that is. The reason is that the argument cited rests entirely on intuition for what others value. The hypothetical species in the example is not a human species, but a slightly different one.
I can easily imagine an individual from a species described along the lines of the author's hypothetical reading the following:
And being horrified at the thought of such a bizarre and morally bankrupt group. I strongly recommend you read the sequence I linked to in the quote if you haven't. It's quite an interesting (and relevant) short story.
So, I have a bit more to write, but I'm short on time at the moment. I'd be interested to hear if there is anything you find particularly objectionable here, though.