Reading the comments here, there seem to be two issues entangled. One is which organisms are capable of suffering (which is probably roughly the same set that is capable of experiencing qualia; we might call this the set of sentient beings). The other is which entities we would care about and perhaps try to help.
I don't think the second question is really relevant here. It is not the issue Tiiba is trying to raise. If you're a selfish bastard, or a saintly altruist, fine. That doesn't matter. What matters is what constitutes a sentient being which can experience...
Why is this tagged "serasvictoriawouldyoumarryme"?
Anything to do with this fictional character?
For me, a morally significant agent is one that has positive and negative qualia. Since I can't tell this just by looking at an agent, I have to guess. I'm pretty sure that my brother does, and I'm pretty sure that a rock doesn't.
Cyan mentioned pain asymbolia on the last thread: the ability to feel pain, but not really find anything painful or problematic about it. If someone had asymbolia generalized across all mental functions, I would stop counting that person as a moral agent.
If a "morally significant agent" is one whose goals I would want to further, then I am the only inherently morally significant agent. The moral significance of other agents shifts with my mood.
We talk a lot here about creating Artificial Intelligence. What I think Tiiba is asking about is how we might create Artificial Consciousness, or Artificial Sentience. Could there be a being which is conscious and which can suffer and have other experiences, but which is not intelligent? Contrariwise, could there be a being which is intelligent and a great problem solver, able to act as a Bayesian agent very effectively and achieve goals, but which is not conscious, not sentient, has no qualia, cannot be said to suffer? Are these two properties, intelligence...
The capacity to abide by morality carves out the right cluster in thingspace for me, though I'd hesitate to call it the determining factor. If a thing has this capacity, we care about its preferences proportionately.
People, with the probable exception of infants, are in theory fully capable of understanding and abiding by morality. Most animals are not. Those more capable of doing so (domestic pets and beasts of burden, for example) receive some protection. Those who lack this capacity are generally less protected, and what is done to them is treated as less morally relevant.
I don't fully endorse this view, but it feels like it explains a lot.
The following was originally going to be a top-level post, but I never posted it because I couldn't complete the proof of my assertion.
In his recent book I Am a Strange Loop, Douglas Hofstadter writes:
...A spectacular evolutionary gulf opened up at some point as human beings were gradually separating from other primates: their category systems became arbitrarily extensible. Into our mental lives there entered a dramatic quality of open-endedness, an essentially unlimited extensibility, as compared with a very palpable limitedness in other species. Concept...
I will help a suffering thing if it benefits me to help it, or if the social contract requires me to. Otherwise I will walk away.
I adopted this cruel position after going through one long relationship where I constantly demanded emotional "help" from the girl, then another relationship soon afterwards where the girl constantly demanded similar "help" from me. Both those situations felt so sick that I finally understood: participating in any guilt-trip scenario makes you a worse person, no matter whether you're tripping or being tripped....
Interesting question....
Could there be suffering in anything not considered an MSA? While I can imagine a hypothetical MSA that could not suffer, it's hard to think of a being that suffers yet could not be considered an MSA.
But do we have a good operational definition of 'suffering'? The study with the fish is a start, but is planning really a good criterion?
The discussion reminds me of Nagel's essay "What Is It Like to Be a Bat?" in Hofstadter and Dennett's highly recommended The Mind's I, on the impossibility of understanding at all what it is like to be something so different from us.
prase probably describes naive judgement of moral significance correctly - I see no reason to expect a simple answer to the question. I shall perhaps comment later, having had time to consider more deeply.
Most people do this intuitively, and most then rationalise their intuitive judgements or construct neat logical moral theories in order to support them (and these theories usually fail to describe what they are intended to describe, because of their simplicity relative to the complexity of an average man's value system).
That said, for me an agent is the more morally significant the more similar it is to a human, and I determine suffering by comparison with my own experiences and some necessary extrapolation. Not a very useful answer, perhaps, but I don't know of a better one.
BTW, I think the most important defining feature of an MSA is ability to kick people's asses. Very humanizing.
I don't know if you meant that as a joke, but that's pretty much my take from a contractarian perspective (though I wouldn't use the phrase "morally significant agent"). Fish can't do much about us cooking and eating them, so they are not a party to any social contract. That's also the logic behind my tolerance of infanticide.
Was it okay to kill the Indians back in the 1700s, before they got guns? What were they going to do? Throw rocks at us?
The potential to enhance the information complexity of another agent, where the degree of this potential and the degree of the complexity provided indicate the degree of moral significance.
Which reduces the problem to the somewhat less difficult one of estimating complexity, and so estimating potential complexity influences among agents. By this I mean something more nuanced than algorithmic or Kolmogorov complexity. We need something that takes into account fun theory and how both simple systems and random noise are innately less complex than systems w...
Do I deserve points for consistency? I personally tend to respect bullet-biters more, but I am one. I'm not sure I have a very good reason for that. When I say that I think bullet-dodgers tend to be less sensible I could just be affirming something about myself. I don't know your (or other non-biters) reasons for giving points, other than over-confidence being more respected than hedged mealy-mouth wishy-washing. One might say that by following ideas to their logical conclusion we are more likely to determine which ideas are false (perhaps making bullet-biters epistemological kamikazes), but by accepting repugnant/absurd reductios we may just all end up very wrong rather than some incoherent mix of somewhat wrong. In the case of morality though I can't say what it really means to be wrong or what motivation there is to be right. Like fiction, I choose to evict these deadbeat mind-haunting geists and forestall risks to my epistemic hygiene.
I did read the morality stuff at Overcoming Bias (other than the stuff in Eliezer's sci-fi series, I don't read fiction), didn't find it convincing and made similar arguments there.
You're right that emotivism doesn't imply indifference to the suffering of others; it's really a meta-ethical theory which says that moral talk is "yay", "boo", "I approve and want you to as well" and so on rather than having objective (as in one-parameter-function) content. Going below the meta-level to my actual morals (if they can be called such), I am a Stirnerite egoist. I believe Vichy is as well, but I don't know if other LWers are. Even that doesn't preclude sympathy for others (though it's hard to say what it does preclude). I think it meshes well with emotivism, and Bryan Caplan seems to as well since he deems Stirner an "emotivist anarchist". Let's ignore for now that he never called himself an anarchist and Sidney Parker said the egoist must be an archist!
At any rate, with no moral truth or God to punish me I have no reason to subject myself to any moral standard. To look out for one's self is what comes easiest and determines most of our behavior. That comes into tension with other impulses, but I am liberated from the tribal constraints which would force me to affirm the communal faith. I probably would not do that if I felt the conflicting emotions that others do (low in Agreeableness, presumably like most atheists but even more so). To the extent that I can determine how I feel, I choose to do so in a way that serves my purposes. Being an adaptation-executer, my purposes are linked to how I feel, and so I'm quite open to Nozick's experience machine (in some sense there's a good chance I'm already in one) or wireheading. Hopefully Anonymous is also an egoist, but would seek perpetual subjective existence even if it means an eternity of torture.
One proto-emotivist book, though it doesn't embrace egoism, is The Theory of Moral Sentiments. I haven't actually read it in the original, but there's a passage about a disaster in China compared to the loss of your own finger. I think it aptly describes how most of us would react. The occurrence in China is distant, like something in a work of fiction. If the universe is infinite there may well be an infinite number of Chinas, or Earths, disappearing right now. And the past, with its Native Americans and colonists, or peasants and proletariat who died for modernity, is similar. If we thought utilitarianism was true, it would be the sheerest nonsense to care more about your own finger than all the Chinese, or even insects. But I do care more about my finger and am completely comfortable with that reflexive priority. If I were in charge of making deals, then the massive subjective harm from the perspective of the Chinese would be something to consider, and that leads us back to the ability to take part in a contract.
Aschwin de Wolf's Against Politics site used to have a lot more material on contractarianism and minimal ethics, but the re-launched version has less and I was asked to take down my mirror site. There is still some there to check out, and cryonics enthusiasts may be interested in the related Depressed Metabolism site.
For a long time, I wanted to ask something. I was just thinking about it again when I saw that Alicorn has a post on a similar topic. So I decided to go ahead.
The question is: what is the difference between morally neutral stimulus responses and agony? What features must an animal, machine, program, alien, human fetus, molecule, or anime character have before you will say that if their utility meter is low, it needs to be raised? For example, if you wanted to know if lobsters suffer when they're cooked alive, what exactly are you asking?
On reflection, I'm actually asking two questions: what is a morally significant agent (MSA; is there an established term for this?) whose goals you would want to further; and having determined that, under what conditions would you consider it to be suffering, so that you would?
I think that an MSA would not be defined by one feature. So try to list several features, possibly assigning relative weights to each.
IIRC, I read a study that tried to determine if fish suffer by injecting them with toxins and observing whether their reactions are planned or entirely instinctive. (They found that there's a bit of planning among bony fish, but none among the cartilaginous.) I don't know why they had to actually hurt the fish, especially in a way that didn't leave much room for planning, if all they wanted to know was if the fish can plan. But that was their definition. You might also name introspection, remembering the pain after it's over...
This is the ultimate subjective question, so the only wrong answer is one that is never given. Speak, or be wrong. I will downvote any post you don't make.