For a long time, I wanted to ask something. I was just thinking about it again when I saw that Alicorn has a post on a similar topic. So I decided to go ahead.

The question is: what is the difference between morally neutral stimulus responses and agony? What features must an animal, machine, program, alien, human fetus, molecule, or anime character have before you will say that if their utility meter is low, it needs to be raised? For example, if you wanted to know whether lobsters suffer when they're cooked alive, what exactly are you asking?

On reflection, I'm actually asking two questions: what is a morally significant agent (MSA; is there an established term for this?) whose goals you would want to further; and having determined that, under what conditions would you consider it to be suffering, so that you would?

I think that an MSA would not be defined by one feature. So try to list several features, possibly assigning relative weights to each.

IIRC, I read a study that tried to determine if fish suffer by injecting them with toxins and observing whether their reactions are planned or entirely instinctive. (They found that there's a bit of planning among bony fish, but none among the cartilaginous.) I don't know why they had to actually hurt the fish, especially in a way that didn't leave much room for planning, if all they wanted to know was if the fish can plan. But that was their definition. You might also name introspection, remembering the pain after it's over...

This is the ultimate subjective question, so the only wrong answer is one that is never given. Speak, or be wrong. I will downvote any post you don't make.

BTW, I think the most important defining feature of an MSA is ability to kick people's asses. Very humanizing.

98 comments

Reading the comments here, there seem to be two issues entangled. One is which organisms are capable of suffering (which is probably roughly the same set that is capable of experiencing qualia; we might call this the set of sentient beings). The other is which entities we would care about and perhaps try to help.

I don't think the second question is really relevant here. It is not the issue Tiiba is trying to raise. If you're a selfish bastard, or a saintly altruist, fine. That doesn't matter. What matters is what constitutes a sentient being which can expe... (read more)

1quwgri
The holy problem of qualia may actually be close to the question at hand here. What do you mean when you ask yourself: "Does my neighbor have qualia?" Do you mean: "Does my neighbor have the same experiences?" No. You know for sure that the answer is "No." Your brains and minds are not connected. What's going on in your neighbor's head will never be your experiences. It doesn't matter whether it's (ontologically) magical blue fire or complex neural squiggles. Your experiences and your neighbor's brain processes are different things anyway.

What do you mean when you ask yourself: "Are my neighbor's brain processes similar to my experiences?" What degree of similarity or resemblance do you mean? Some people think that this is purely a value question. It is an arbitrary decision by a piece of the Universe about which other pieces of the Universe it will empathize with.

Yes, some people try to solve this question through Advaita. One can try to view the Universe as a single mind suffering from dissociative disorder. I know that if my brain and my neighbor's brain are connected in a certain way, then I will feel his suffering as my suffering. But I also know that if my brain and an atomic bomb are connected in a certain way, then I will feel the thermonuclear explosion as an orgasm. Should I empathize with atomic bombs?

We can try to look at the problem a little differently. The main difference between my sensation of pain and my neighbor's sensation of pain is the individual neural encoding. But I do not sense the neural encoding of my sensations. Or I do not sense that I sense it. If you make a million copies of me, whose memories and sensations are translated into different neural encodings (while maintaining informational identity), then none of them will be able to say with certainty what neural encoding it currently has. Perhaps, when analyzing the question "what is suffering", we should discard the aspect of individual neural encoding. That is, suffering is

Why is this tagged "serasvictoriawouldyoumarryme"?

Anything to do with this fictional character?

For me, a morally significant agent is one that has positive and negative qualia. Since I can't tell this by looking at an agent, I guess. I'm pretty sure that my brother does, and I'm pretty sure that a rock doesn't.

Cyan mentioned pain asymbolia on the last thread: the ability to feel pain, but not really find anything painful or problematic about it. If someone had asymbolia generalized across all mental functions, I would stop counting that person as a moral agent.

2DanielLC
Wouldn't it be enough to have positive /or/ negative qualia?
2Scott Alexander
...yes.
0Nominull
Could you elaborate on your reasons for doubting that a rock has qualia?
3orthonormal
Qualia appear to require complicated internal structure: knock out a certain brain area and you lose some aspect of it.
2DanielLC
I figure it's symmetric. It's just as likely a rock has good qualia as bad qualia in a given situation. Humans are more asymmetric. If they have bad qualia, they tend to stop doing whatever it is, so you can tell they have bad qualia.
0scroogemcduck1
I cannot speak for Scott, but I can speculate. I am quite sure a rock doesn't have qualia, because it doesn't have any processing center, gives no sign of having any utility to maximize, and has no reaction to stimuli. It most probably doesn't have a mind.

If a "morally significant agent" is one whose goals I would want to further, then I am the only inherently morally significant agent. The moral significance of other agents shifts with my mood.

0Vladimir_Nesov
What if the other agent is more "you" than you are? You are hiding the complexity of "moral significance" in the "I".
8Nominull
There is literally no agent more me than I am. I don't mean to brag, but I am pretty damned me.
0conchis
What if an MSA is one whose goals you would want to want to further?

We talk a lot here about creating Artificial Intelligence. What I think Tiiba is asking about is how we might create Artificial Consciousness, or Artificial Sentience. Could there be a being which is conscious and which can suffer and have other experiences, but which is not intelligent? Contrariwise, could there be a being which is intelligent and a great problem solver, able to act as a Bayesian agent very effectively and achieve goals, but which is not conscious, not sentient, has no qualia, cannot be said to suffer? Are these two properties, intelligen... (read more)

[-][anonymous]20

serasvictoriawouldyoumarryme?

0[anonymous]
Yeah, would an editor please delete that OT tag? She's probably turned him down by now anyway. (Or her, I don't know Tiiba's gender, perhaps a Tiiba is something really feminine.)
0[anonymous]
Let the lady speak for herself.

The capacity to abide by morality carves out the right cluster in thingspace for me, though I'd hesitate to call it the determining factor. If a thing has this capacity, we care about its preferences proportionately.

People, save probably infants, are fully capable, in theory, of understanding and abiding by morality. Most animals are not. Those more capable of doing so — domestic pets and beasts of burden, for example — receive some protection. Those who lack this capacity are generally less protected, and what is done to them is less morally relevant.

I don't fully endorse this view, but it feels like it explains a lot.

1quwgri
This is a logical vicious circle. Morality itself is the handmaiden of humans (and similar creatures in fantasy and SF). Morality has value only insofar as we find it important to care about human and quasi-human interests. This does not answer the question "Why do we care about human and quasi-human interests?" One could try to find an answer in the prisoner's dilemma. In the logic of Kant's categorical imperative. Cooperation of rational agents and the like. Then I should sympathize with any system that cares about my interests, even if that system is otherwise like the Paperclipmaker and completely devoid of "unproductive" self-reflection. Great. There is some cynical common sense in this, but I feel a little disappointed.
0pengvado
Which cluster is that: Agents currently acknowledging a morality similar to yours (with "capacity" referring to their choice on whether or not to act according to those nominal beliefs at any given time)? Agents who would be moved by your moral arguments (even if those arguments haven't yet been presented to them)? Anything Turing-complete (even if not currently running an algorithm that has anything to do with morality)?
0Psychohistorian
"Agents capable of being moral" corresponds very closely with my intuitive set "agents whose desires we should have some degree of respect for." Thus, it captures my personal sense of what morality is quite well, though it doesn't really capture why that's my sense of it.

The following was originally going to be a top-level post, but I never posted it because I couldn't complete the proof of my assertion.

In his recent book I Am a Strange Loop, Douglas Hofstadter writes:

A spectacular evolutionary gulf opened up at some point as human beings were gradually separating from other primates: their category systems became arbitrarily extensible. Into our mental lives there entered a dramatic quality of open-endedness, an essentially unlimited extensibility, as compared with a very palpable limitedness in other species. Concept

... (read more)
1PhilGoetz
That's not what you're saying. You're saying, "Torture kittens, or don't; it's all the same."
0CronoDAS
But I like kittens. :(

I will help a suffering thing if it benefits me to help it, or if the social contract requires me to. Otherwise I will walk away.

I adopted this cruel position after going through one long relationship where I constantly demanded emotional "help" from the girl, then another relationship soon afterwards where the girl constantly demanded similar "help" from me. Both those situations felt so sick that I finally understood: participating in any guilt-trip scenario makes you a worse person, no matter whether you're tripping or being tripped.... (read more)

1contravariant
People who require help can be divided into those who are capable of helping themselves, and those who are not. Such a position as yours would express the value preference that sacrificing the good of the latter group is better than letting the first group get unpaid rewards - in all cases. For me it's not that simple, the choice depends on the proportion of the groups, cost to me and society, and just how much good is being sacrificed. To make an extreme example, I would save someone's life even if this encourages other people to be less careful protecting theirs.
1Vladimir_Nesov
I think it helps to distinguish moral injunctions from statements about human preference. The first are heuristics, while the latter are statements of truth. A "position" is a heuristic, but it isn't necessarily the right thing to do in some of the cases where it applies. Generalization from personal experience may be useful on average, but doesn't give knowledge about preference with certainty. When you "follow your own utility", you are merely following your imperfect understanding of your own utility, and there is always a potential for getting the map closer to reality.
0cousin_it
You're talking about preferences over outcomes and you're right that they don't change much. I interpreted Tiiba as asking about preferences over actions ("whose goals you would want to further"), those depend on heuristics.
1Vladimir_Nesov
I don't understand what you're saying here...
0Tem42
This differs from what I had hypothesized was the standard model. I think I like my hypothesis of the standard model better than my understanding of your model, so I'll mention it here, on the off-chance that you might also like it. I think that most people make (or intuit) the calculation "If it's not too much trouble, I should help this person one time. If they are appropriately thankful, and if they do not inconvenience me too much, I will consider helping them again; if they reciprocate appropriately, I will probably be friends with them, and engage in a long-term reciprocal relationship." In this calculation, 'appropriately thankful', 'inconvenience me too much', and 'reciprocate appropriately' are highly subjective, but this model appears to account for most stable relationships. It also accounts for guilt-trip based relationships being unstable. The "I should help one time" clause may make the world a better place in general, although it is unclear if that's why most people hold it. It is possible that when you say "social contract" and I say "reciprocal relationship", we mean exactly the same thing.
0CronoDAS
That sounds very... Objectivist of you. (Not that that's a bad thing, necessarily.)
0Emily
Are you saying that "help the helpless" is a bad idea?
4cousin_it
If you discovered their helplessness yourself, most likely a good idea; if it was advertised to you, almost certainly a bad idea.
1Emily
That seems like it could make sense. If you discover their helplessness, does that come under "it benefits me" or "the social contract requires me" to help them? What about the helpless who would normally be discovered by no one in a position to help them, and don't have their helplessness advertised? Is it a good idea under this formula to go and actively seek them out, or not?
1cousin_it
If I discover their helplessness and expect a high enough degree of gratitude, I'll help for selfish reasons, otherwise move on. For example, I love helping old women on the metro with their heavy bags because they're always so surprised that someone decided to help them (Moscow's not a polite city), but I never give money to beggars. For an even more clear-cut example, I will yield my seat to an elderly person unless specifically demanded to. Actively seeking out people to help might be warranted if the resulting warm fuzzies are high enough.
0Emily
This kinda bothers me, and I don't know whether it's just an emotional, illogical reaction or whether there are some good reasons to be bothered by it. In practice, I would imagine it's not a bad description of how most people behave most of the time. But if everyone used these criteria all the time, something tells me the world would not be a better place. I could well be wrong. P.S. I assumed that was supposed to read "I will not yield my seat...", but I guess it's possible that it wasn't supposed to. ?
3cousin_it
Nah, it was supposed to read "I will". Someone who demands that I yield my seat isn't likely to show gratitude when I comply. Can't speak about the whole world, but anyone who's very prone to manipulating and being manipulated (like I was before) will benefit from adopting this strategy, and everyone around them will benefit too.
2Emily
I see. That's an interesting approach. (Voted up because you're making me think. Still not at all sure I find it a good one.)
[-]djcb00

Interesting question....

Could there be suffering in anything not considered an MSA? While I can imagine a hypothetical MSA that could not suffer, it's hard to think of a being that suffers yet could not be considered an MSA.

But do we have a good operational definition of 'suffering'? The study with the fish is a start, but is planning really a good criterion?

The discussion reminds me of that story On being a bat (iirc) in Hofstadter and Dennett's highly recommended The Mind's I, on the impossibility of understanding at all what it is like to be something so different from us.

2Daniel_Lewis
Thomas Nagel's "What is it like to be a bat?" [PDF], indeed included in The Mind's I.

prase probably describes naive judgement of moral significance correctly - I see no reason to expect a simple answer to the question. I shall perhaps comment later, having had time to consider more deeply.

Most people do this intuitively, and most people tend to rationalise their intuitive judgements or construct neat logical moral theories in order to support them (and these theories usually fail to describe what they are intended to describe, because of their simplicity relative to the complexity of an average man's value system).

That said, for me an agent is more morally significant the more similar it is to a human, and I determine suffering by comparison with my own experiences and some necessary extrapolation. Not a very useful answer, perhaps, but I don't know of any better.

2Alicorn
Similar to a human in what way? We're more closely related to the aforementioned cartilaginous fish than to any given sapient alien. We probably have psychology more similar to that of a border collie than that of at least some possible types of sapient alien.
0prase
Similar in an intuitive way. As for the fish, I don't know, it depends how the aliens are thinking and communicating. In this respect, I don't feel much similarity with fish anyway. As for the collie, very probably we are more similar. And I would probably care more about border collies than about crystalline baby-eating aliens. If you have a dog, you can probably imagine that the relation between a man and a collie can be pretty strong.
0Vladimir_Nesov
And you are hiding the complexity of "moral significance" in "similarity". Is a statue of a human more similar to a human than a horse? Is a human corpse? What if you take out the brain and replace it with a life-support system that keeps the rest of the body alive?
0prase
Similarity of thinking, communication and behaviour makes up a very important part. So statues and corpses don't rank high in my value list. You may have a point, but similarity sounds a bit less vague to me than moral significance. At least it imposes some restrictions: if objects A and B differ only in one quality, and A is human-like in this quality while B is not, then A is clearly more similar to humans. If A is more human-like in certain respects while B is in others, a more precise description is needed, but I can't describe my preferences and their formation more precisely at the moment.
0Dagon
"more morally significant the more is it similar to human"

I'd expand this to "the more I empathize with it". Often, I feel more strongly about the suffering of some felines than some humans. Of course, that's just a description, not a recommendation. The question of "what entities should one empathize with" remains difficult. Most answers which are self-consistent and match observed behaviors are pretty divergent from the signaling (including self-signaling) that you'd like to give out.
0prase
Of course it's a description. I understood the original post as asking for description as much as recommendation. The question "what entities should one empathize with" is as difficult as many similar questions about morality, since it's not absolutely clear what "should" means here. If your values form a system which can derive the answer, do it; but one can hardly expect wide consensus. My recommendation is: you don't need the answer, instead use your own intuition. I think the chances that our intuitions overlap significantly are higher than chances of discovering an answer satisfactory for all.

"BTW, I think the most important defining feature of an MSA is ability to kick people's asses. Very humanizing."

I don't know if you meant that as a joke, but that's pretty much my take from a contractarian perspective (though I wouldn't use the phrase "morally significant agent"). Fish can't do much about us cooking and eating them, so they are not a party to any social contract. That's also the logic behind my tolerance of infanticide.

Was it okay to kill the Indians back in the 1700s, before they got guns? What were they going to do? Throw rocks at us?

7Baughn
I realize you're joking, but it bears mentioning in a general-knowledge kind of way: Bows and arrows were, at the time, as dangerous if not more so than guns. The reason guns were superior back then was solely due to a lack of required training. (Bows take decades of practice, and it's been joked that you should start with the grandfather - but in practice, starting with the father is a good idea.)
1byrnema
I guess it was meant that you start with the grandfather because he would be the most skilled. Has this been described in certain kinds of books? (Diaries, etc.)
4Baughn
No, the grandfather would be the least skilled of the three. The basic idea is that to make a good archer, you need to start when he's (women need not apply) practically a baby. In order to teach well, you must be an archer yourself; thus, the father should be an archer. Adding in the grandfather was probably a case of exaggeration for effect, but — no, I haven't read any diaries about it, so I could be wrong. You'd probably get some benefit from it. I have no idea how much.
5teageegeepea
I am an emotivist and do not believe anything is good or bad in an objective sense. I think some Indians may have had guns by the 1700s, but their bows and arrows weren't terribly outclassed by many of the old muskets back then either (I'm actually discussing that at my blog right now). The biggest advantage of the colonists was their ever-increasing numbers (while disease steadily drained those of the natives). The Indians frequently did respond in kind to killings, and the extent to which they could do so strikes me as the most significant factor to take into consideration when it comes to the decision to kill them. There is also the factor of trade relations that could be disrupted, but most people engaged in prolonged voluntary trade are going to have significant ass-kicking ability, or otherwise they would have been conquered and their goods seized by force already. I understand Peter Leeson has a paper "Trading with bandits" disputing that point, but the frequency with which dominance-based resource extraction occurs makes me think the phenomena he discusses only occur under very limited conditions.
8Scott Alexander
Well, I can't accuse you of having any unwillingness to bite bullets. Nor of having any unwillingness to do lots of other questionable things with bullets besides. Still, Less Wrong has got to be the only place where I can ask if it's okay to massacre Indians, and get one person who says it depends what the people living back then thought, and another who says it depends on the sophistication of musketry technology. I don't know if that's a good thing or a bad thing about this site.
6teageegeepea
I suspect there are a higher-than-average number of bullet-biters here, and I number myself among them. I don't grant the intuitions which lead people to dodge bullets much credence. Although I am a gun-owner, I don't think I am substantially more likely to shoot anyone (delicious animals are another story) than the others here. Though you may think my above-mentioned criteria (including the government as a source of ass-kicking and taking into account risk aversion) don't count, I'd say that constitutes a substantial unwillingness.

Also, while this is pedantic, I'd like to again emphasize the importance of disease over guns. Note that North America and Australia have had nearly complete population replacement by Europeans, while Africa has been decolonized. The reason for that is not technology, but relative vulnerability to disease.

If it makes you feel any better about the inhabitants of Less Wrong, note that your reaction was voted up while my response (which was relevant and informative with links to more information, if I may judge my own case for a moment) was voted down. I do not say this to object to anyone's actions (I don't bother voting myself and have no plans to make a front-page post) but to indicate that this is evidence of what the community approves.

Although, as mentioned, I don't believe in objective normative truth, we can pretend for a little while in response to joeteicher. We believe we have a better understanding of many things than 1700s colonists did. If we could bring them in a time machine to the present we could presumably convince them of many of those things. Do you think we could convince them of our moral superiority? From a Bayesian perspective (I think this is Aumann's specialty) do they have any less justification for dismissing our time period's (or country's) morality as being obviously wrong? Or would they be horrified and make a note to lock up anyone who promotes such crazy ideas in their own day? G. K. Chesterton once s
8Scott Alexander
I don't doubt you're a nonviolent and non-aggressive guy in everyday life, nor that in its proper historical context the history of colonists and Indians in the New World was really complicated. I wasn't asking you the question because of an interest in 18th century history, I was asking it as a simplified way to see how far you were taking this "Anyone who can't kick ass isn't a morally significant agent" thing. Your willingness to take it as far as you do is... well, I'll be honest. To me it's weird, especially since you describe yourself as an emotivist and therefore willing to link morality to feeling.

I can think of two interpretations. One, you literally wouldn't feel bad about killing people, as long as they're defenseless. This would make you a psychopath by the technical definition, the one where you simply lack the moral feelings the rest of us take for granted. Two, you have the same tendency to feel bad about actually killing an Indian or any other defenseless person as the rest of us, but you want to uncouple your feelings from "rationality" and make a theory of morality that ignores them (but then how are you an emotivist?!). I know you read all of the morality stuff on Overcoming Bias and that that stuff gave what I thought was a pretty good argument for not doing that. Do you have a counterargument?

(Or I could be completely misunderstanding what you're saying and taking your statement much further than you meant for it to go.)

By the way, I didn't downvote your response; you deserve points for consistency.
3teageegeepea
Do I deserve points for consistency? I personally tend to respect bullet-biters more, but I am one. I'm not sure I have a very good reason for that. When I say that I think bullet-dodgers tend to be less sensible I could just be affirming something about myself. I don't know your (or other non-biters') reasons for giving points, other than over-confidence being more respected than hedged mealy-mouthed wishy-washing. One might say that by following ideas to their logical conclusion we are more likely to determine which ideas are false (perhaps making bullet-biters epistemological kamikazes), but by accepting repugnant/absurd reductios we may just all end up very wrong rather than some incoherent mix of somewhat wrong.

In the case of morality though I can't say what it really means to be wrong or what motivation there is to be right. Like fiction, I choose to evict these deadbeat mind-haunting geists and forestall risks to my epistemic hygiene. I did read the morality stuff at Overcoming Bias (other than the stuff in Eliezer's sci-fi series, I don't read fiction), didn't find it convincing and made similar arguments there. You're right that emotivism doesn't imply indifference to the suffering of others; it's really a meta-ethical theory which says that moral talk is "yay", "boo", "I approve and want you to as well" and so on rather than having objective (as in one-parameter-function) content.

Going below the meta-level to my actual morals (if they can be called such), I am a Stirnerite egoist. I believe Vichy is as well, but I don't know if other LWers are. Even that doesn't preclude sympathy for others (though it's hard to say what it does preclude). I think it meshes well with emotivism, and Bryan Caplan seems to as well since he deems Stirner an "emotivist anarchist". Let's ignore for now that he never called himself an anarchist and Sidney Parker said the egoist must be an archist! At any rate, with no moral truth or God to punish me I have no reason to subject my
4RobinZ
It's not that unusual in my experience, to be perfectly frank. Once you get out of the YouTube-comment swamps to less-mainstream, more geeky sites, the GIFT-ratio starts to drop enough to allow intelligent provocative conversation. I could easily imagine this comment thread on a Making Light post, for example.
1[anonymous]
As an emotivist, you might be interested in reading After Virtue, particularly the first three or four chapters. He presents a rather compelling argument against emotivism, and if you want to maintain your emotivism you probably ought to find some rationalization defending yourself from his argument.
0Psychohistorian
One should generally seek reasons as a defense from argument, not rationalization. {Edit: My mistake, he really did mean emotivism and this paragraph kind of misses the point. Not going to delete, as it may confuse later comments.} More to the point, though, a refutation of emotivism is not a refutation of moral relativism, and, based on the little bit I could get off Amazon previews, relativism seems to be his problem, even if he wants to straw-man it as emotivism. Similarly, TGGP (given that he redundantly conjoins "I do not believe anything is good or bad in an objective sense" with "emotivism") seems to be more about the relativism than the emotivism specifically. If that author actually manages to put a decent dent in moral relativism, please explain so I can go buy this book immediately, because I would be literally stunned to see such an argument.
2[anonymous]
Actually, based on this comment, TGGP actually believes in emotivism as such. He isolates three reasons in the second chapter:

* Moral approval is a magical category that hides what is meant by "moral."
* Emotivism conflates 'expressions of personal preference' ("I like this!") with 'evaluative expressions' ("This is good!"), despite the fact that the first gets part of its meaning from the person saying it and the second doesn't.
* Emotivism attempts to assign meaning to the sentence, when the sentence itself might express different feelings or attitudes in different uses. (See Gandalf's take on "Good morning!" in The Hobbit.) This is probably where emotivism can be rehabilitated, as MacIntyre goes on to say:

Note that I'm not defending MacIntyre's position here; I'm only bringing it up because an emotivist should know what his or her response to it is, because it is a pretty large objection. My experience is that they go into absolute denial upon hearing the second and third objections, and that's just not cool.
3Douglas_Knight
What does "pretty large" mean of an objection other than "good"? But you say you're not defending MacIntyre. I'd just like to know what the position is. The second bullet point looks like the "point and gape" attack. It simply restates emotivism and replies by declaring the opposite to be fact. The whole point of emotivism is that the "I" is implicit in "this is good," that the syntax is deceptive. The defense seems to be that we should trust syntax. Is "moral approval" any more magic than "moral"? It seems like a pretty straightforward category: when people express approval using moral language. This fails to predict when people will express moral approval rather than the ordinary type, but that hardly makes it magical. Is there any moral theory to which the third bullet point does not apply? Surely, every moral theory has opponents who will apply it incorrectly to "good morning." The second bullet point says we should trust syntax, while the third that language is tricky. The quoted part seems like a good response to virtually all of analytic philosophy; perhaps it can be rehabilitated. But surely emotivism is explicit about promoting performance over meaning? Isn't that the whole point of emotivism as opposed to other forms of moral relativism?
2[anonymous]
1) "Pretty large" tends to mean the same thing as "fundamental", "general", "widely binding" -- at least in my experience. E.g., "Godel's Theorem was a pretty large rejection of the Russell program." And no, I'm not defending MacIntyre. All I'm trying to demonstrate is that his arguments against emotivism are worthy enough for emotivists to learn.

2) No. You've never heard someone say, "I may not like it, but it's still good"? For example, there are people who personally dislike gay marriage, but support it anyway because they feel it is good.

3) Defining "moral approval" as "when people express approval using moral language" says nothing about what the term "moral" means, and that's something any ethical system really ought to get to eventually.

4) Yes: deontological systems don't give one whit about the syntax of a statement; if your 'intention' was bad, your speech act was still bad. Utilitarianism is also more concerned with the actual weal or woe caused by a sentence than with its syntactic form.

And I'm done. If you want to learn more about MacIntyre, read the damn book. I'm a mathematician, not a philosopher.
0Douglas_Knight
You said that emotivists you know go into "absolute denial" at point 2; how do they react to an example like this? I would expect them to say that the people are lying or feel constrained by social conventions. In Haidt's terms, they feel both fairness and disgust (or violation of tradition) and feel that fairness trumps tradition/purity in this instance. Or they live in a liberal milieu where they're not allowed to treat tradition or purity morally. (I should give a lying example, but I'm not sure what I meant.)

ETA: if MacIntyre treated deontology the way he treats emotivism, he'd say that the morning is not an actor, therefore it cannot be "good", so "good morning" is incoherent. But I guess deontology is not a theory of language, so it's OK to just say that people are wrong.
0thomblake
For reference, I think you've done MacIntyre sufficient justice here. I think that's putting the cart before the horse. Figuring out what 'moral' means should be something you do before even starting to try to study morality.
0Psychohistorian
Ah, I stand corrected; I got the impression from the intro of the book that the author was trying to slay relativism by slaying emotivism, which really doesn't work. I basically agree with the point against emotivism; it does not capture meaning well. I subscribe to projectivism myself, but it looks like I've mistaken where the original argument was going, so sorry for adding confusion.
0Douglas_Knight
If you understand the point, could you spell it out? Weren't there supposed to be three points? I don't see anything in the above to distinguish emotivism from projectivism. I suspect that you just assumed an argument against something you rejected was the argument you use.
0[anonymous]
There are three points, marked with bullet points:

1) "Moral approval" is magical.
2) Reducing "This is good" to "I like this" misrepresents the way people actually speak.
3) Emotivism doesn't account for the use of sentences in a context -- which is the whole of actual ethical speech.

Emotivism is very different from projectivism. One is a theory of ethical language, and one is a theory of mind.

EDIT: Perhaps this wasn't so clear -- one consequence of projectivism is a theory of ethical language as well; see Psychohistorian below. My point was that it's a category error to consider them as indistinguishable, because projectivism proper has consequences in several other fields of philosophy, whereas emotivism proper is mostly about ethical language and doesn't say anything wrt how we think about things other than moral approval.
1Psychohistorian
Ethical projectivism isn't quite so much a theory of mind as it is a theory of ethical language. It's clear that in most cases where people say "X is wrong", they ascribe the objective quality "wrongness" to X. Projectivism holds that there is no such objective quality; thus, the property of wrongness is in the mind, but it doesn't feel like it, much like how the concepts of beauty and disgust are in the mind but don't feel like it. You can't smell something disgusting and say, "Well, that's just my opinion, that's not really a property of the smell"; it still smells disgusting. Thus, projectivism has the same rejection of objective morality as emotivism does, but it describes how we actually think and speak much better than emotivism does.

The attack on emotivism as not accurately expressing what we mean is largely orthogonal to realism vs. subjectivism. Just because we speak about objective moral principles as if they exist does not mean they actually exist, any more than speaking about the Flying Spaghetti Monster as if it existed conjures it into existence. But the view that moral statements actually express mere approval or disapproval seems clearly wrong; that's just not what people mean when they talk about morality.
0Douglas_Knight
As I see it, you ignore the first and third bullet points and take the second bullet point to promote projectivism over emotivism. It's certainly true that projectivism takes speech more at face value than emotivism. But since emotivism is up-front about this, this is a pretty weak complaint. Maybe it means that emotivism has to do more work to fill in a psychological theory of morality, but producing a psychological theory of morality seems big enough that it's not obvious whether it makes it harder or easier. What if I posited a part of the mind that tried to figure out what moral claims it could (socially) get away with making and chose the one it felt was most advantageous to impose on the conscious mind as a moral imperative. Would you call that emotivism or projectivism?
0Psychohistorian
I have trouble understanding this; mostly, I don't get if you think it exists or if you just want me to pretend it does. But, if I do understand the concept correctly, if something is being imposed on the conscious mind as a moral imperative, that would be projectivist, as it would feel real. If you had a part of the unconscious mind that imposed the most socially acceptable, expedient concept of "disgust" on the rest of the mind, one would still feel genuinely disgusted by whatever it "thought" you should be disgusted by.

The problem with emotivism is that most people who make moral statements genuinely believe them to be objective, so rendering them into emotive statements loses meaning. Projectivism retains this meaning without accepting the completely unsupported (and I believe unsupportable) claim that objective morals exist.

The magical category objection doesn't really make sense, even for emotivism. If "Murder is bad" means "Boo murder!", no category is evoked and none need be. Furthermore, from any anti-realist perspective, any thing or act could potentially be viewed as immoral, so trying to describe a set of things or acts that count as valid subjects of "moral approval" makes no sense. "Perhaps remarking that approval is of many kinds" makes no (or at best, very ill-defined) sense. The author doesn't mention a single kind, and it is unclear what would distinguish kinds in a way that meets his own standards. Forcing the other side to navigate an ill-defined, context-free classification system and claiming their definition is defective when they fail to do so proves nothing.

As for the third point, it's a straw man. Claiming that emotivism must act as a mapping function such that any sentence XYZ -> a new sentence ABC irrespective of context is a caricature; English doesn't work like this, and no self-respecting theory of language would pretend it does. Unless emotivists consistently claim that context is irrelevant and can be ignored, this point should...
0Douglas_Knight
I don't remember why I asked that question. It sure reads as a trick question. It's certainly reasonable to treat things as a dichotomy if the overlap is not likely, but I think that's wrong here. I endorse this very broad projectivist view that includes this example, and I imagine most emotivists agree; I doubt that most emotivists are sociopaths projecting their abnormality onto the general population. But I also think emotivism is possible, such as along the lines of this example, or more broadly. I do think you're treating projectivism as broad, and thus likely, and emotivism as narrow, and thus unlikely. In theory, that's fine, except for miscommunication, but in practice it's terrible. Either you give emotivism's neighbors names, greatly raising their salience, or you don't, greatly lowering their salience. (Contrast this to the first bullet point, which seems to reject emotivism on the ground that it's broad. That's silly.) Since projectivism is a theory of mind and emotivism a theory of language or social interaction, they are potentially compatible, though it seems tricky to merge their simple interpretations. But neither minds nor meaning are unitary. If projectivism says that there's a part of the mind that does something, that's broad theory, thus likely to be true, but it also doesn't seem to predict much. Emotivism is a claim about the overall meaning. That's narrower than a claim that there exists a part of the mind that takes a particular meaning and broader than the claim that the mind is unitary and takes a particular meaning. But the overall meaning is the most important.
0thomblake
I think some confusion here might arise from missing the distinction between "projectivism" and "ethical projectivism". Projectivism is a family of theories in philosophy, one of which applies to ethics. You might be talking past each other.
0[anonymous]
Psychohistorian and I seem to be in agreement, actually.
0Douglas_Knight
I didn't say I can't distinguish them, I said the particular attack on emotivism applies just as well to projectivism.
1[anonymous]
My bad; I misread you.
0Douglas_Knight
As much as I'd like to think so, I'll try to learn a lesson about pronouns and antecedents in high latency communications.
0Aurini
Just an aside: you should look up some of the writings by my old (and favourite) professor, Dr. (James?) Weaver of McMaster University. He argues that it was the social technology of institutions, banking, land speculation, and established commerce that allowed whites to take over North America, not individual hostility. The key players he notes are the empire (which vacillated between expansionist and not), the homesteaders, and the land speculators. The Indians were harsh and intelligent bargainers, but they were playing by the rules of a game that white people wrote - the house always wins.

Fact: All historians approach historical documents with their own set of contexts and biases - all historians except Dr. Weaver, that is. Fact: Most historians have to cite sources - Dr. Weaver is able to go back in time and create them.
2Eliezer Yudkowsky
No, cryonic suspension wasn't available back then, and a headshot would have prevented it in any case. In general, murder strikes me as a very dangerous activity - I can see why it's outlawed.
2bentarm
Erm... so it's ok to kill people as long as you cryonically suspend them afterwards? I've no idea if you actually believe this (I assume not, or you would probably have committed suicide), but even joking about it seems to be very bad politics if you're a cryonics advocate.
2[anonymous]
Did the majority of people living at the time feel like it was okay? Is it okay for you to second-guess the judgement of thoughtful people who understood the context far better than anyone does now? If at some point most people believe that killing mammals for food is monstrous, and it is banned, and children learn with horror about 21st century practices of murdering and devouring millions of cows and pigs each year, will that make it wrong to eat a hamburger now? Will eating a hamburger now be okay if that never happens? I certainly don't feel that the moral value of my actions should depend on the beliefs of people living hundreds of years in the future.
4cousin_it
Moral realism fallacy alert? Yes, that will make it wrong in their view. No, the moral value of your actions in your view doesn't have to depend on their beliefs. There's no law of nature that says different people from different times should have identical moral judgements.
0thomblake
Ha! Nice to see we have this one covered from both sides.
3Alicorn
Are you defending some kind of temporal moral relativism here?
2thomblake
Don't worry, you have it backwards. The moral value of your actions is not determined by the beliefs of any people, but rather the people's beliefs are an attempt to track the facts about the moral value of your actions (assuming there is such a thing at all).
7wedrifid
So once I create a friendly-to-me AI, am I the only morally significant agent in existence? I think not. Relevant moral significance seems to be far more determined by the ability of any agent (not limited to just themselves) to kick ass on their behalf. So infants, fish, or cows can have moral significance just because someone says so (and is willing to back that up). Fortunately for you, this means that if I happen to gain overwhelming power, you will remain a morally significant agent based purely on my whim.
1PhilGoetz
That's using the word "moral" to mean its opposite. Or, it's a claim that "morality" is a nonsensical concept, disguised as an alternate view of morality.
0Perplexed
You need to read something by Gauthier or Binmore. The idea that morality is closely related to rational self-interest is hardly a crackpot idea. There are at least two lines of argument pointing in this direction. One derives from Hume's point that a system of morality must not only inform us as to which actions are moral, but also show why we should perform only the moral actions. The other observes that "moral facts" are simply our moral intuitions, and that those have been shaped by evolution into a pretty good caricature of rational self-interest. A "morality" which takes into account the power of others may be un-Christian, but it is hardly inhuman.

The potential to enhance the information complexity of another agent, where the degree of this potential and the degree of the complexity provided indicate the degree of moral significance.

Which reduces the problem to the somewhat less difficult one of estimating complexity, and so estimating potential complexity influences among agents. By this, I mean something more nuanced than algorithmic or Kolmogorov complexity. We need something that takes into account fun theory and how both simple systems and random noise are innately less complex than systems w... (read more)

2jimmy
Can you explain why "the inverse of complexity enhancement" would be a good definition of "suffering" that would share the other features we mean by the word?
0MendelSchmiedekamp
Possibly, could you list some of the features you had in mind?
0jimmy
Well, I just don't see any connection at all, and I assume that has something to do with the -1 karma status of the comment. People usually use "suffering" to mean something along the lines of "experiencing subjectively unpleasant qualia" and having negative utility associated with it. Where does complexity come in?
0MendelSchmiedekamp
Building on some of the more non-trivial theories of fun - specifically, cognitive science research focusing on the human response to learning - there is a direct relationship between the human perception of subjectively unpleasant qualia and the complexity impact of those qualia on the human. Admittedly, extending this concept of suffering beyond humanity is a bit questionable. But it's better than a tautological or innately subjective definition, because with this model it is possible to estimate and compare with more intuitive expectations.

One nice effect of defining suffering as the sapping of complexity is that it deals fairly elegantly with the question of which pain counts as suffering: "subjectively" interesting pain is not suffering, but "subjectively" uninteresting pain is. Of course, that is only a small part of the process of making these distinctions. It's important to estimate both the subject of the qualia and the structure of the sequence of qualia, as it relates to the current state of the entity in question, before you can estimate whether the stream of qualia will induce suffering.

It is a very powerful approach, but it is by no means simple. So I don't begrudge some karma loss in trying to explain it to folks here. But it's at least some feedback on unclear explanations.
0jimmy
I don't mean to suggest that anything that subtracts a karma point isn't worth doing, just that it's evidence that you're not accomplishing what you'd like. You've made some claims (in other comments too) which would be very interesting if true, but weren't backed up enough for me to make the inferential jump. I'd like to see a full top level post on this idea, as it seems quite interesting if true, but it also seems to need more space to give the details and full supporting arguments.
0MendelSchmiedekamp
You're right that this, among other topics, is a top-level post I owe. Although one worry I have with trying to lay out inferential steps is that some of these ideas (this one included) seem to encounter a sort of Zeno's paradox for full comprehension. It stops being enough to be willing to take the next step; it becomes necessary to take the inferential limit to get to the other side. Which means that until I find a way to map people around that phenomenon, I'm hesitant about giving a large-scale treatment. Just because it was the route I took doesn't mean it's a good way to explain things generally - the Typical Mind Fallacy, borne out by evidence. But in any case I will return to it when I have the time.
0Douglas_Knight
Laying out the route you took might be a lot easier than looking for another route. Also, the feedback from comments might be a better way to look for another route than modeling other minds on your own. I suspect that people are voting you down because you sound like you're attempting to show off, rather than attempting to communicate. Several of your posts seem to be simple assertions that you possess knowledge or a theory. I did vote down the comment at the top of this thread, but I don't remember if that's why. I was surprised that I didn't vote down other of your comments where I remember having that reaction, so this theory-from-introspection isn't even a good theory of me. But it might work better for people who vote more. (the simple theory of when I vote you up is 21 May and 6 June, which disturbs me.)