Raemon comments on What is life? - Less Wrong
I'm personally interested in several aspects or questions...
How far off is humanity from being able to synthesise a living being purely from matter that did not come from another living being?
Many people hold that living beings should be granted a right not to be wantonly deprived of life, other things being equal. But what attributes does a being require to qualify for such moral 'protection'?
If an AI can be said to be alive, is it still alive when the execution of the code is temporarily suspended? If it is a scale, is one whose code has been slowed to one clock cycle per year less alive?
Morality isn't based on alive-ness, it's based on sentience, IMO. Beings have moral weight when they have preferences about how the universe should be.
Is "preference" a word we have any idea how to define rigorously?
I have the increasingly strong conviction that we ascribe emotions and values to things we can anthropomorphize, and there's no real possibility of underlying philosophical coherence.
Short answer: Rigorously? I don't know.
But I know that the quality that causes me to care about something, morally, is not whether it is capable of reproducing, or whether it is made of carbon. I care about things that are conscious in some way that is at least similar to the way I am conscious.
No, I don't know what causes consciousness, no, I don't know how to test for it. But basically, I only care about things that care about things. (And by extension, I care about non-caring things that are cared about).
I'm willing to extend this beyond human motivation. I'd give (some) moral standing to a hypothetical paperclip maximizer that experienced satisfaction when it created paperclips and experienced suffering when it failed. I wouldn't give moral standing to an identical "zombie" paperclip maximizer. I give moral standing to animals (guessing as best I can which are likely to have evolved systems that produce suffering and satisfaction).
I give higher priority to human-like motivations (so in a sense, I'm totally fine with giving higher moral standing to things I can anthropomorphize). I'd sacrifice sentient clippies and chickens for humans, but in the abstract I'd rather the universe contain clippies and chickens than nothing sentient at all. (I think I'd prefer chickens to clippies because they are more likely to eventually produce something closer to human motivation).
Don't worry - I am not under the impression my moral philosophy is all that coherent. But unless there's a moral philosophy that at least loosely approximates my vague intuitions, I probably don't care about it.
The main point, though, is that if we're picking a hazy, nonsense word to define rigorously, it should be 'sentience,' not 'life.'
(edit: I might mean the word "sapient"; I can never keep those straight)
The fact is that the meanings different people attach to "sentient" vary much more than those they attach to "sapient".
Interesting.
I read you as arguing for a narrower class that didn't include the chicken. I'd sacrifice Clippy in a second for something valuable to humans, but I don't really care whether the universe has non-self-aware animals.
I believe chickens are self-aware (albeit pretty dumb). I could be wrong, and I don't have a good way to test it. (Though I have read some things suggesting they ARE near the borderline of what kind of sentience is worth worrying about.)
A common test for that (which I'm under the impression some people treat more like an 'operational definition' of self-awareness) is the mirror test. Great apes, dolphins, elephants and magpies pass it. Dunno about chickens -- I guess not.
That would test a level of intelligence, but not the ability to perceive pain/pleasure/related things, which is what I care about.
Then self-aware is quite a bad word for it. I suspect that fish and newborn babies can feel pain and pleasure, but that they're not ‘self-aware’ the way I'd use that word.
Nociception has been demonstrated in insects. Small insects.
Edit: Not to mention C. elegans, which has somewhere around three hundred neurons total.
Anthropomorphizing animals is justified based on the degree of similarity between their brains and ours. For example, we know that the parts of our brain that we have found to be responsible for strong emotions are also present in reptiles, so we might assume that reptiles also have strong emotions. Mammals are more similar to us, so we feel more moral obligation toward them.
Saying that moral weight is based on sentience is IMO largely a tautology. Sentience is mostly the word we use for "whatever poorly defined features of a mind give it moral weight".
Short version of my other response: Sentience and life are probably both nonsense words, but if we're picking a nonsense word to define rigorously and care about, it should be sentience.
Even granting that, it at least expresses that moral weight is a function of a mind, which is not entirely tautological.
Hence the word "largely".
Yes, but saying that everything alive deserves moral consideration is a different position.