Reading the comments here, there seem to be two issues entangled. One is which organisms are capable of suffering (which is probably roughly the same set that is capable of experiencing qualia; we might call this the set of sentient beings). The other is which entities we would care about and perhaps try to help.
I don't think the second question is really relevant here. It is not the issue Tiiba is trying to raise. If you're a selfish bastard, or a saintly altruist, fine. That doesn't matter. What matters is what constitutes a sentient being which can expe...
Why is this tagged "serasvictoriawouldyoumarryme"?
Anything to do with this fictional character?
For me, a morally significant agent is one that has positive and negative qualia. Since I can't tell this by looking at an agent, I have to guess: I'm pretty sure that my brother does, and I'm pretty sure that a rock doesn't.
Cyan mentioned pain asymbolia on the last thread: the ability to feel pain, but not really find anything painful or problematic about it. If someone had asymbolia generalized across all mental functions, I would stop counting that person as a moral agent.
If a "morally significant agent" is one whose goals I would want to further, then I am the only inherently morally significant agent. The moral significance of other agents shifts with my mood.
We talk a lot here about creating Artificial Intelligence. What I think Tiiba is asking about is how we might create Artificial Consciousness, or Artificial Sentience. Could there be a being which is conscious and which can suffer and have other experiences, but which is not intelligent? Contrariwise, could there be a being which is intelligent and a great problem solver, able to act as a Bayesian agent very effectively and achieve goals, but which is not conscious, not sentient, has no qualia, cannot be said to suffer? Are these two properties, intelligen...
The capacity to abide by morality carves out the right cluster in thingspace for me, though I'd hesitate to call it the determining factor. If a thing has this capacity, we care about its preferences proportionately.
People, with the probable exception of infants, are in principle fully capable of understanding and abiding by morality. Most animals are not. Those more capable of doing so (domestic pets and beasts of burden, for example) receive some protection. Those who lack this capacity are generally less protected, and what is done to them is treated as less morally relevant.
I don't fully endorse this view, but it feels like it explains a lot.
The following was originally going to be a top-level post, but I never posted it because I couldn't complete the proof of my assertion.
In his recent book I Am a Strange Loop, Douglas Hofstadter writes:
...A spectacular evolutionary gulf opened up at some point as human beings were gradually separating from other primates: their category systems became arbitrarily extensible. Into our mental lives there entered a dramatic quality of open-endedness, an essentially unlimited extensibility, as compared with a very palpable limitedness in other species. Concept
I will help a suffering thing if it benefits me to help it, or if the social contract requires me to. Otherwise I will walk away.
I adopted this cruel position after going through one long relationship where I constantly demanded emotional "help" from the girl, then another relationship soon afterwards where the girl constantly demanded similar "help" from me. Both those situations felt so sick that I finally understood: participating in any guilt-trip scenario makes you a worse person, no matter whether you're tripping or being tripped....
Interesting question....
Could there be suffering in anything not considered an MSA? While I can imagine a hypothetical MSA that could not suffer, it's hard to think of a being that suffers yet could not be considered an MSA.
But do we have a good operational definition of 'suffering'? The study with the fish is a start, but is planning really a good criterion?
The discussion reminds me of the essay "What Is It Like to Be a Bat?" (iirc) in Hofstadter and Dennett's highly recommended The Mind's I, on the impossibility of understanding at all what it is like to be something so different from us.
prase probably describes naive judgement of moral significance correctly; I see no reason to expect a simple answer to the question. I shall perhaps comment later, once I have had time to consider more deeply.
Most people do this intuitively, and most people then rationalise their intuitive judgements or construct neat logical moral theories to support them (and these theories usually fail to describe what they are intended to describe, because of their simplicity relative to the complexity of an average man's value system).
That said, for me an agent is more morally significant the more similar it is to a human, and I determine suffering by comparison with my own experiences and some necessary extrapolation. Not a very useful answer, perhaps, but I don't know of a better one.
BTW, I think the most important defining feature of an MSA is ability to kick people's asses. Very humanizing.
I don't know if you meant that as a joke, but that's pretty much my take from a contractarian perspective (though I wouldn't use the phrase "morally significant agent"). Fish can't do much about us cooking and eating them, so they are not a party to any social contract. That's also the logic behind my tolerance of infanticide.
Was it okay to kill the Indians back in the 1700s, before they got guns? What were they going to do? Throw rocks at us?
The potential to enhance the information complexity of another agent, where the degree of this potential and the degree of the complexity provided indicate the degree of moral significance.
Which reduces the problem to the somewhat less difficult one of estimating complexity and so estimating potential complexity influences among agents. By this, I mean something more nuanced than algorithmic or Kolmogorov complexity. We need something that takes into account fun theory and how both simple systems and random noise are innately less complex than systems w...
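A minimal toy sketch of just that last intuition, in Python, using zlib's compression ratio as a crude stand-in for complexity: both maximally repetitive data and pure noise score near zero, and only data in between scores high. This is nowhere near the fuller, fun-theory-aware notion gestured at above; the function names and the parabola weighting are illustrative choices only.

```python
import os
import zlib

def compression_ratio(data: bytes) -> float:
    """Fraction of the original length left after zlib compression, clipped to [0, 1]."""
    if not data:
        return 0.0
    return min(1.0, len(zlib.compress(data, 9)) / len(data))

def toy_effective_complexity(data: bytes) -> float:
    """Crude proxy that scores ~0 for trivially ordered data (compresses to
    almost nothing) and for pure noise (doesn't compress at all), and peaks
    for data somewhere in between."""
    r = compression_ratio(data)
    return 4.0 * r * (1.0 - r)  # parabola peaking at r = 0.5

if __name__ == "__main__":
    ordered = b"a" * 10_000                   # maximally repetitive
    noise = os.urandom(10_000)                # incompressible randomness
    mixed = os.urandom(5_000) + b"a" * 5_000  # a contrived in-between case
    for name, sample in [("ordered", ordered), ("noise", noise), ("mixed", mixed)]:
        print(f"{name:8s} {toy_effective_complexity(sample):.3f}")
```

The point is only that a raw compressed-size measure would rank pure noise highest, which is exactly the failure the comment objects to.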
For a long time, I wanted to ask something. I was just thinking about it again when I saw that Alicorn has a post on a similar topic. So I decided to go ahead.
The question is: what is the difference between morally neutral stimulus responses and agony? What features must an animal, machine, program, alien, human fetus, molecule, or anime character have before you will say that if their utility meter is low, it needs to be raised? For example, if you wanted to know if lobsters suffer when they're cooked alive, what exactly are you asking?
On reflection, I'm actually asking two questions: what is a morally significant agent (MSA; is there an established term for this?) whose goals you would want to further; and having determined that, under what conditions would you consider it to be suffering, so that you would?
I think that an MSA would not be defined by one feature. So try to list several features, possibly assigning relative weights to each.
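For concreteness, here is a minimal sketch of what "several features with relative weights" could look like; the feature names are pulled from suggestions elsewhere in this thread (qualia, memory of pain, planning, capacity to abide by morality) and the weights are placeholders, not a proposal about the right values.

```python
# Placeholder weights; illustrative only, not a claim about the right values.
FEATURE_WEIGHTS = {
    "has_positive_and_negative_qualia": 0.4,
    "remembers_pain_after_it_is_over": 0.2,
    "plans_around_aversive_stimuli": 0.2,
    "can_understand_and_abide_by_morality": 0.2,
}

def msa_score(features: dict) -> float:
    """Weighted sum of per-feature judgements, each judged in [0, 1]."""
    return sum(weight * features.get(name, 0.0)
               for name, weight in FEATURE_WEIGHTS.items())

# A rough judgement for a bony fish from the study mentioned below.
print(round(msa_score({
    "has_positive_and_negative_qualia": 0.5,
    "remembers_pain_after_it_is_over": 0.5,
    "plans_around_aversive_stimuli": 1.0,
}), 3))  # -> 0.5
```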
IIRC, I read a study that tried to determine if fish suffer by injecting them with toxins and observing whether their reactions are planned or entirely instinctive. (They found that there's a bit of planning among bony fish, but none among the cartilaginous.) I don't know why they had to actually hurt the fish, especially in a way that didn't leave much room for planning, if all they wanted to know was if the fish can plan. But that was their definition. You might also name introspection, remembering the pain after it's over...
This is the ultimate subjective question, so the only wrong answer is one that is never given. Speak, or be wrong. I will downvote any post you don't make.