TimS comments on Welcome to Less Wrong! (2012) - Less Wrong
Yeah, but my point was that doing so is not actually useful because now we have to decide whether or not babies fit this criterion (and if this criterion is good), and we're inevitably going to do that by analogy and example if at all. I was trying to skip right to that step, but I suppose we did get there eventually.
Pigs are smart. Pigs are very smart: they have complex personalities, developed rules of social interaction, intentional deception, object permanence, high ability to learn... given a few hours they can figure out how mirrors work and use them to see around or behind obstacles. The list goes on. (Even still, you would not be tempted to confuse them with people.) I'm pretty sure pigs are smarter than wolves, for example. Certainly, if you spend any time around them, pigs are "obviously" smarter than babies.
To my knowledge, babies have none of those abilities, nor, indeed, many of the other characteristics of functioning people.
Regardless, I have no doubt that pigs are closer to functioning adult humans than babies are. You'd best give up pork. (Or do what I view as the reasonable thing and give up the idea that babies are people.)
I'd be interested in what standard of "functional" you might propose that newborns would meet, though. Perhaps give examples of things which seem close to the line, on either side? For example, do wolves seem to you like people? Should killing a wolf be considered a moral wrong on par with murder?
I have to ask, at this point: have you seriously considered the possibility that babies aren't people?
The script is as I described it. It's compiling an AI and then launching it. Here, I'll write it for you:
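The script itself isn't shown, so here is a minimal stand-in sketch of the two-step structure being described, written in Python with hypothetical names (`build_and_launch` and the return strings are illustrative, not from the original comment): compilation runs first, and the launch step follows automatically unless the process is interrupted before compilation finishes.

```python
def build_and_launch(interrupted_during_compile: bool) -> str:
    """Hypothetical sketch: compile the AI, then launch it."""
    # Step 1: compilation. In the scenario, this is where we are "now".
    if interrupted_during_compile:
        # Interrupted before compilation finished: the AI never runs.
        return "aborted during compilation"
    # Step 2: launch. Reached automatically, with no further interaction
    # on our part, once compilation completes.
    return "AI running"
```

Left alone, the call falls through to the launch step; interrupting during step 1 is the only way to prevent step 2.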
We are supposing that it's still on the first step, compilation. However, with no interaction on our part, it's going to finish compiling and begin running the sufficiently-advanced AI. Unless we interrupt it before compilation finishes, in which case it will not.
You seem to have gotten on something of a tangent here. I'm not sure why you're talking about maladaptive behaviors. I'm talking about immoral behaviors.
It is, for example, almost certainly maladaptive to allow all women to go into higher education and industry, because those correlate strongly with having fewer children, and that causes serious problems. (Witness Japan circa now.) This is, as you put it, a poor gambling strategy. Does that imply it's immoral for society to allow women to be educated? Do reasonable people look at people who support women's rights and wonder what's wrong with them? Of course not.
So no, maladaptive does not imply immoral. As such, I stand by my original point, which was that I don't think you would have invented a moral rule against infanticide if you weren't raised with one.
I really like your point about the distinction between maladaptive behavior and immoral behavior. But I don't think your example about women in higher education is as cut and dried as you present it.
Agreed. (Nor is it written quite as clearly as it could be.) It was just the first thing that came to mind - I've been reading about Japan's current population problems. Hopefully it's adequate to convince readers that maladaptive isn't obviously equivalent to immoral, though.
For those who think that morality is the godshatter of evolution, maladaptive is practically the definition of immoral. For me, maladaptiveness is the explanation for why certain possible moral memes (insert society-wide incest-marriage example) don't exist in recorded history, even though I should otherwise expect them to exist given my belief in moral anti-realism.
Disagree? What do you mean by this?
Edit: If I believe that morality, either descriptively or prescriptively, consists of the values imparted to humans by the evolutionary process, I have no need to adhere to the process roughly used to select these values rather than the values themselves when they are maladaptive.
If one is committed to a theory that says morality is objective (aka moral realism), one needs to point at what it is that makes morality objectively true. Obvious candidates include God and the laws of physics. But those two candidates have been disproved by empiricism (aka the scientific method).
At this point, some detritus of evolution starts to look like a good candidate for the source of morality. There isn't an Evolution Fairy who commanded the humans evolve to be moral, but evolution has created drives and preferences within us all (like hunger or desire for sex). More on this point here - the source of my reference to godshatter.
It might be that there is an optimal way of bringing these various drives into balance, and the correct choices to all moral decisions can be derived from this optimal path. As far as I can tell, those who are trying to derive morality from evo. psych endorse this position.
In short, if morality is the product of human drives created by evolution, then behavior that is maladaptive (i.e. counter to what is selected for by evolution) is essentially correlated with immoral behavior.
That said, my summary of the position may be a bit thin, because I'm a moral anti-realist and don't believe the evo. psych -> morality story.
Ah, I see what you mean. I don't think one has to believe in objective morality as such to agree that "morality is the godshatter of evolution". Moreover, I think it's pretty key to the "godshatter" notion that our values have diverged from evolution's "value", and we now value things "for their own sake" rather than for their benefit to fitness. As such, I would say that the "godshatter" notion opposes the idea that "maladaptive is practically the definition of immoral", even if there is something of a correlation between evolutionarily-selectable adaptive ideas and morality.