Qiaochu_Yuan comments on Effective Altruism Through Advertising Vegetarianism? - LessWrong

Post author: peter_hurford 12 June 2013 06:50PM


Comments (551)


Comment author: Qiaochu_Yuan 13 June 2013 07:17:31PM * 1 point

Why would the suffering of one species be more important than the suffering of another?

Because one of those species is mine?

I'm not sure I can offer a good argument against "human suffering is more important", because it strikes me as so completely arbitrary and unjustified that I'm not sure what the arguments for it would be.

Historically, most humans have viewed a much smaller set of (living, mortal) organisms as being the set of (living, mortal) organisms whose suffering matters, e.g. human members of their own tribe. How would you classify these humans? Would you say that their morality is arbitrary and unjustified? If so, I wonder why they're so similar. If I were to imagine a collection of arbitrary moralities, I'd expect it to look much more diverse than this. Would you also say that they were all morally confused and that we have made a great deal of moral progress from most of history until now? If so, have you read gwern's The Narrowing Circle (which is the reason for the living and mortal qualifiers above)?

There is something in human nature that cares about things similar to itself. Even if we're currently infected with memes suggesting that this something should be rejected insofar as it distinguishes between different humans (and I think we should be honest with ourselves about the extent to which this is a contingent fact about current moral fashions rather than a deep moral truth), trying to reject it as much as we can is forgetting that we're rebelling within nature.

I care about humans because I think that in principle I'm capable of having a meaningful interaction with any human: in principle, I could talk to them, laugh with them, cry with them, sing with them, dance with them... I can't do any of these things with, say, a fish. When I ask my brain in what category it places fish, it responds "natural resources." And natural resources should be conserved, of course (for the sake of future humans), but I don't assign them moral value.

Comment author: Zack_M_Davis 14 June 2013 08:03:14PM 8 points

Would you also say that they were all morally confused and that we have made a great deal of moral progress from most of history until now?

Yes! We know stuff that our ancestors didn't know; we have capabilities that they didn't have. If pain and suffering are bad when implemented in my skull, then they also have to be bad when implemented elsewhere. Yes, given bounded resources, I'm going to protect me and my friends and other humans before worrying about other creatures, but that's not because nonhumans don't matter, but because in this horribly, monstrously unfair universe, we are forced to make tradeoffs. We do what we must, but that doesn't make it okay.

Comment author: Qiaochu_Yuan 14 June 2013 08:10:56PM * 3 points

We know stuff that our ancestors didn't know; we have capabilities that they didn't have.

I'm more than willing to agree that our ancestors were factually confused, but I think it's important to distinguish between moral and factual confusion. Consider the following quote from C.S. Lewis:

I have met people who exaggerate the differences [between the morality of different cultures], because they have not distinguished between differences of morality and differences of belief about facts. For example, one man said to me, Three hundred years ago people in England were putting witches to death. Was that what you call the Rule of Human Nature or Right Conduct? But surely the reason we do not execute witches is that we do not believe there are such things. If we did-if we really thought that there were people going about who had sold themselves to the devil and received supernatural powers from him in return and were using these powers to kill their neighbors or drive them mad or bring bad weather, surely we would all agree that if anyone deserved the death penalty, then these filthy quislings did. There is no difference of moral principle here: the difference is simply about matter of fact. It may be a great advance in knowledge not to believe in witches: there is no moral advance in not executing them when you do not think they are there. You would not call a man humane for ceasing to set mousetraps if he did so because he believed there were no mice in the house.

I think our ancestors were primarily factually, rather than morally, confused. I don't see strong reasons to believe that humans over time have made moral, as opposed to factual, progress, and I think attempts to convince me and people like me that we should care about animals should rest primarily on factual, rather than moral, arguments (e.g. claims that smarter animals like pigs are more psychologically similar to humans than I think they are).

If pain and suffering are bad when implemented in my skull, then they also have to be bad when implemented elsewhere.

If I write a computer program with a variable called isSuffering that I set to true, is it suffering?

Yes, given bounded resources, I'm going to protect me and my friends and other humans before worrying about other creatures

Cool. Then we're in agreement about the practical consequences (humans, right now, who are spending time and effort to fight animal suffering should be spending their time and effort to fight human suffering instead), which is fine with me.

Comment author: Zack_M_Davis 14 June 2013 08:38:22PM 5 points

If I write a computer program with a variable called isSuffering that I set to true, is it suffering?

(I have no idea how consciousness works, so in general, I can't answer these sorts of questions, but) in this case I feel extremely confident saying No, because the variable names in the source code of present-day computer programs can't affect what the program is actually doing.
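The claim that a source-level name is inert can be checked directly. Here is a minimal sketch (not from the original thread): in CPython, two functions that differ only in a variable's name compile to byte-identical instructions, because the name lives in the code object's metadata rather than in the executed bytecode.

```python
def labeled():
    isSuffering = True   # the evocative name...
    return isSuffering

def unlabeled():
    x = True             # ...renamed to something inert

    return x

# The executed instructions are byte-for-byte identical; only the
# name table (co_varnames) differs between the two functions.
print(labeled.__code__.co_code == unlabeled.__code__.co_code)
print(labeled.__code__.co_varnames)    # ('isSuffering',)
print(unlabeled.__code__.co_varnames)  # ('x',)
```

Both functions behave identically at runtime, which is the point of the reply: whatever suffering is, it has to be a property of what a system does, not of the labels attached to it.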

humans, right now, who are spending time and effort to fight animal suffering should be spending their time and effort to fight human suffering instead

That doesn't follow if it turns out that preventing animal suffering is sufficiently cheap.

Comment author: RobbBB 15 June 2013 11:53:50AM * 1 point

I'm not sure moral intuitions divide as cleanly into factual and nonfactual components as this suggests. Learning new facts can change our motivations in ways that are in no way logically or empirically required of us, because our motivational and doxastic mechanisms aren't wholly independent. (For instance, knowing a certain fact may involve visualizing certain circumstances more concretely, and vivid visualizations can certainly change one's affective state.) If this motivational component isn't what you had in mind as the 'moral', nonfactual component of our judgments, then I don't know what you do have in mind.

If I write a computer program with a variable called isSuffering that I set to true, is it suffering?

I don't think this is specifically relevant. I upvoted your 'blue robot' comment because this is an important issue to worry about, but 'that's a black box' can't be used as a universal bludgeon. (Particularly given that it defeats appeals to 'isHuman' even more thoroughly than it defeats appeals to 'isSuffering'.)

Cool. Then we're in agreement about the practical consequences (humans, right now, who are spending time and effort to fight animal suffering should be spending their time and effort to fight human suffering instead)

I assume you're being tongue-in-cheek here, but be careful not to mislead spectators. 'Human life isn't perfect, ergo we are under no moral obligation to eschew torturing non-humans' obviously isn't sufficient here, so you need to provide more details showing that the threats to humanity warrant (provisionally?) ignoring non-humans' welfare. White slave-owners had plenty of white-person-specific problems to deal with, but that didn't exonerate them for worrying about their (white) friends and family to the extreme exclusion of black people.

Comment author: Qiaochu_Yuan 15 June 2013 07:06:53PM * 2 points

If this motivational component isn't what you had in mind as the 'moral', nonfactual component of our judgments, then I don't know what you do have in mind.

I think of moral confusion as a failure to understand your actual current or extrapolated moral preferences (introspection being unreliable and so forth).

I assume you're being tongue-in-cheek here

Nope.

White slave-owners had plenty of white-person-specific problems to deal with, but that didn't exonerate them for worrying about their (white) friends and family to the extreme exclusion of black people.

I don't think this analogy holds water. White slave-owners were aware that their slaves were capable of learning their language and bearing their children and all sorts of things that fish can't do.

Comment author: RobbBB 15 June 2013 09:21:58PM 1 point

White slave-owners were aware that their slaves were capable of learning their language and bearing their children and all sorts of things that fish can't do.

Sure. And humans are aware that fish are capable of all sorts of things that rocks and sea hydras can't do. I don't see a relevant disanalogy. (Other than the question-begging one 'fish aren't human'.)

Comment author: Qiaochu_Yuan 15 June 2013 09:36:27PM 4 points

I guess that should've ended "...that fish can't do and that are important parts of how they interact with other white people." Black people are capable of participating in human society in a way that fish aren't.

A "reversed stupidity is not intelligence" warning also seems appropriate here: I don't think the correct response to disagreeing with racism and sexism is to stop discriminating altogether in the sense of not trying to make distinctions between things.

Comment author: RobbBB 15 June 2013 09:59:26PM * 1 point

I don't think we should stop making distinctions altogether either; I'm just trying not to repeat the mistakes of the past, or analogous mistakes. The straw-man version of this historical focus is to take 'the expanding circle' as a universal or inevitable historical progression; the more interesting version is to try to spot a pattern in our past intellectual and moral advances and use it to hack the system, taking a shortcut to a moral code that's improved far beyond contemporary society's hodgepodge of standards.

I think the main lesson from 'expanding circle' events is that we should be relatively cautious about assuming that something isn't a moral patient, unless we can come up with an extremely principled and clear example of a necessary condition for moral consideration that it lacks. 'Black people don't have moral standing because they're less intelligent than us' fails that criterion, because white children can be unintelligent and yet deserve to be treated well. Likewise, 'fish can't participate in human society' fails, because extremely pathologically antisocial or socially inept people (of the sort that can't function in society at all) still shouldn't be tortured.

(Plus many fish can participate in their own societies. If we encountered an extremely alien sentient species that was highly prosocial but just found it too grating to be around us for our societies to mesh, would we be justified in torturing them? Likewise, if two human civilizations get along fine internally but have social conventions that make fruitful interaction impossible, that doesn't give either civilization the right to oppress the other.)

On the other hand, 'rocks aren't conscious' does seem to draw on a good and principled necessary condition -- anything unconscious (hence incapable of suffering or desiring or preferring) does seem categorically morally irrelevant, in a vacuum. So excluding completely unconscious things has the shape of a good policy. (Sure, it's a bit of an explanatory IOU until we know exactly what the neural basis of 'consciousness' is, but 'intelligent' and 'able to participate in human society' are IOUs in the same sense.) Likewise for gods and dead bodies -- the former don't exist, and the latter again fail very general criteria like 'is it conscious?' and 'can it suffer?' and 'can it desire?'. These are fully general criteria, not ad-hoc or parochial ones, so they're a lot less likely to fall into the racism trap.

Possibly they fall into a new and different trap, though? Even so, I feel more comfortable placing most of the burden of proof on those who want to narrow our circle, rather than those who want to broaden it. The chances of our engineering (or encountering in the stars) new species that blur the lines between our concepts of psychological 'humanity' and 'inhumanity' are significant, and that makes it dangerous to adopt a policy of 'assume everything with a weird appearance or behavior has no moral rights until we've conclusively proved that its difference from us is only skin-deep'.

Comment author: Qiaochu_Yuan 15 June 2013 11:44:07PM * 1 point

Likewise, 'fish can't participate in human society' fails, because extremely pathologically antisocial or socially inept people (of the sort that can't function in society at all) still shouldn't be tortured.

The original statement of my heuristic for deciding moral worth contained the phrase "in principle" which was meant to cover cases like this. A human in a contingent circumstance (e.g. extremely socially inept, in a coma) that prevents them from participating in human society is unfortunate, but in possible worlds very similar to this one they'd still be capable of participating in human society. But even in possible worlds fairly different from this one, fish still aren't so capable.

I also think the reasoning in this example is bad for general reasons, namely moral heuristics don't behave like scientific theories: falsifying a moral hypothesis doesn't mean it's not worth considering. Heuristics that sometimes fail can still be useful, and in general I am skeptical of people who claim to have useful moral heuristics that don't fail on weird edge cases (sufficiently powerful such heuristics should constitute a solution to friendly AI).

Plus many fish can participate in their own societies.

I'm skeptical of the claim that any fish have societies in a meaningful sense. Citation?

If we encountered an extremely alien sentient species that was highly prosocial but just found it too grating to be around us for our societies to mesh, would we be justified in torturing them?

If they're intelligent enough we can still trade with them, and that's fine.

Likewise, if two human civilizations get along fine internally but have social conventions that make fruitful interaction impossible, that doesn't give either civilization the right to oppress the other

I don't think this is analogous to the above case. The psychological unity of mankind still applies here: any human from one civilization could have been raised in the other.

These are fully general criteria, not ad-hoc or parochial ones, so they're a lot less likely to fall into the racism trap. Possibly they fall into a new and different trap, though?

Yes: not capturing complexity of value. Again, morality doesn't behave like science. Looking for general laws is not obviously a good methodology, and in fact I'm pretty sure it's a bad methodology.

Comment author: RobbBB 16 June 2013 01:13:55AM * 1 point

Yes: not capturing complexity of value.

'Your theory isn't complex enough' isn't a reasonable objection, in itself, to a moral theory. Rather, 'value is complex' is a universal reason to be less confident about all theories. (No theory, no matter how complex, is immune to this problem, because value might always turn out to be even more complex than the theory suggests.) To suggest that your moral theory is more likely to be correct than a simpler alternative merely because it's more complicated is obviously wrong, because knowing that value is complex tells us nothing about how it is complex.

In fact, even though we know that value is complex, a complicated theory that accounts for the evidence will almost always get more wrong than a simple theory that accounts for the same evidence -- a more detailed map can be wrong about the territory in more ways.

Again, morality doesn't behave like science.

Interestingly, in all the above respects human morality does behave like any other empirical phenomenon. The reasons to think morality is complex, and the best methods for figuring out exactly how it is complex, are the same as for any complex natural entity. "Looking for general laws" is a good idea here for the same reason it's a good idea in any scientific endeavor; we start by ruling out the simplest explanations, then move toward increasing complexity as the data demands. That way we know we're not complicating our theory in arbitrary or unnecessary ways.

Knowing at the outset that storms are complex doesn't mean that we shouldn't try to construct very simple predictive and descriptive models of weather systems, and see how close our simulation comes to getting it right. Once we have a basically right model, we can then work on incrementally increasing its precision. As for storms, so for norms. The analogy is particularly appropriate because in both cases we seek an approximation not only as a first step in a truth-seeking research program, but also as a behavior-guiding heuristic for making real-life decisions under uncertainty.

Comment author: [deleted] 15 June 2013 11:53:10PM 1 point

I also think the reasoning in this example is bad for general reasons, namely moral heuristics don't behave like scientific theories: falsifying a moral hypothesis doesn't mean it's not worth considering.

Then again, the same applies to scientific theories, so long as the old now-falsified theory is a good approximation to the new currently accepted theory within certain ranges of conditions (e.g. classical Newtonian physics if you're much bigger than an atom and much slower than light).

Comment author: RobbBB 16 June 2013 01:11:32AM * 0 points

The original statement of my heuristic for deciding moral worth contained the phrase "in principle" which was meant to cover cases like this. A human in a contingent circumstance (e.g. extremely socially inept, in a coma) that prevents them from participating in human society is unfortunate, but in possible worlds very similar to this one they'd still be capable of participating in human society.

Isn't a quasi-Aristotelian notion of the accidental/essential or contingent/necessary properties of different species a rather metaphysically fragile foundation for you to base your entire ethical system on? We don't know whether the unconscious / conscious distinction will end up being problematized by future research, but we do already know that the distinctions between taxonomical groupings can be very fuzzy -- and are likely to become far fuzzier as we take more control of our genetic future. We also know that what's normal for a certain species can vary wildly over historical time. 'In principle' we could provide fish with a neural prosthesis that makes them capable of socializing productively with humans, but because our prototype of a fish is dumb, while our prototype of a human is smart, we think of smart fish and dumb humans as aberrant deviations from the telos (proper function) of the species.

It seems damningly arbitrary to me. Why should torturing sentient beings be OK in contexts where the technology for improvement is (or 'feels'?) distant, yet completely intolerable in contexts where this external technology is more 'near' on some metric, even if in both cases there is never any realistic prospect of the technology being deployed here?

I don't find it implausible that we currently use prototypes as a quick-and-dirty approximation, but I do find it implausible that on reflection, our more educated and careful selves would continue to found the human enterprise on essentialism of this particular sort.

I also think the reasoning in this example is bad for general reasons, namely moral heuristics don't behave like scientific theories: falsifying a moral hypothesis doesn't mean it's not worth considering.

Actually, now that you bring it up, I'm surprised by how similar the two are. 'Heuristics' by their very nature are approximations; if we compare them to scientific models that likewise approximate a phenomenon, we see in both cases that an occasional error is permissible. My objection to the 'only things that can intelligently socialize with humans matter' heuristic isn't that it gets things wrong occasionally; it's that it almost always yields the intuitively wrong answer, and when it gets the right answer it seems to do so for overdetermined reasons. E.g., it gets the right answer in cases of ordinary human suffering and preference satisfaction.

in general I am skeptical of people who claim to have useful moral heuristics that don't fail on weird edge cases

I agree that someone who claims an unrealistic level of confidence in a moral claim as an individual deserves less trust. But that's different from claiming that it's an advantage of a moral claim that it gets the right answer less often.

I'm skeptical of the claim that any fish have societies in a meaningful sense.

I just meant a stable, cooperative social group. Is there something specific about human societies that you think is the source of their unique moral status?

If they're intelligent enough we can still trade with them, and that's fine.

If we can't trade with them for some reason, it's still not OK to torture them.

The psychological unity of mankind still applies here: any human from one civilization could have been raised in the other.

'The psychological unity of mankind' is question-begging here. It's just a catchphrase; it's not as though there's some scientific law that all and only biologically human minds form a natural kind. If we're having a battle of catchphrases, vegetarians can simply appeal to the 'psychological unity of sentient beings'.

Sure, they're less unified, but how do we decide how unified a unity has to be? While you dismiss the psychological unity of sentient beings as too generic to be morally relevant, the parochialist can step up to dismiss the psychological unity of mankind as too generic to be morally relevant, preferring instead to favor only the humans with a certain personality type, or a certain cultural background, or a certain ideology. What I'm looking for is a reason to favor the one unity over an infinite number of rival unities.

I should also reiterate that it's not an advantage of your theory that it requires two independent principles ('being biologically human', 'being able to (be modified without too much difficulty into something that can) socialize with biological humans') to explain phenomena that other models can handle with only a single generalization. Noting that value is complex is enough to show that your model is possible, but it's not enough to elevate it to a large probability.

Comment author: Eugine_Nier 16 June 2013 06:40:02AM 0 points

On the other hand, 'rocks aren't conscious' does seem to draw on a good and principled necessary condition -- anything unconscious (hence incapable of suffering or desiring or preferring) does seem categorically morally irrelevant, in a vacuum.

What about unconscious people?

Even so, I feel more comfortable placing most of the burden of proof on those who want to narrow our circle, rather than those who want to broaden it.

So what's your position on abortion?

Comment author: RobbBB 16 June 2013 08:46:31AM * 0 points

I don't know why you got a down-vote; these are good questions.

What about unconscious people?

I'm not sure there are unconscious people. By 'unconscious' I meant 'not having any experiences'. There's also another sense of 'unconscious' in which people are obviously sometimes unconscious — whether they're awake, aware of their surroundings, etc. Being conscious in that sense may be sufficient for 'bare consciousness', but it's not necessary, since people can experience dreams while 'unconscious'.

Supposing people do sometimes become truly and fully unconscious, I think this is morally equivalent to dying. So it might be that in a loose sense you die every night, as your consciousness truly 'switches off' — or, equivalently, we could say that certain forms of death (like death accompanying high-fidelity cryonic preservation) are in a loose sense a kind of sleep. You say /pəˈteɪtəʊ/, I say /pəˈteɪtoʊ/. The moral rights of dead or otherwise unconscious people would then depend on questions like 'Do we have a responsibility to make conscious beings come into existence?' and 'Do we have a responsibility to fulfill people's wishes after they die?'. I'd lean toward 'yes' on the former, 'no but it's generally useful to act as though we do' on the latter.

So what's your position on abortion?

Complicated. At some stages the embryo is obviously unconscious, for the same reason some species are obviously unconscious. It's conceivable that there's no true consciousness at all until after birth — analogously, it's possible all non-humans are zombies — but at this point I find it unlikely. So I think mid-to-late-stage fetuses do have some moral standing — perhaps not enough for painlessly killing them to be bad, but at least enough for causing them intense pain to be bad. (My view of chickens is similar; suffering is the main worry rather than death.) The two cases are also analogous in that some people have important health reasons for aborting or for eating meat.

Comment author: SaidAchmiz 14 June 2013 08:25:34PM * 0 points

I've seen that C.S. Lewis quote before, and it seems to me quite mistaken. In this part:

But surely the reason we do not execute witches is that we do not believe there are such things. If we did-if we really thought that there were people going about who had sold themselves to the devil and received supernatural powers from him in return and were using these powers to kill their neighbors or drive them mad or bring bad weather, surely we would all agree that if anyone deserved the death penalty, then these filthy quislings did.

Lewis seems to suggest that executing a witch, per se, is what we consider bad. But that's wrong. What was bad about witch hunts was:

  1. People were executed without anything resembling solid evidence of their guilt — which of course could not possibly have been obtained, seeing as how they were not guilty and the crimes they were accused of were imaginary; but my point is that the "trial" process was horrifically unjust and monstrously inhumane (torture to extract confessions, etc.). If witches existed today, and if we believed witches existed today, we would still (one should hope!) give them fair trials, convict only on the strength of proof beyond a reasonable doubt, accord the accused all the requisite rights, etc.

  2. Punishments were terribly inhumane — burning alive? Come now. Even if we thought witches existed today, and even if we thought the death penalty was an appropriate punishment, we'd carry it out in a more humane manner, and certainly not as a form of public entertainment (again, one would hope; at least, our moral standards today dictate thus).

So differences of factual belief are not the main issue here. The fact that, when you apply rigorous standards of evidence and fair prosecution practices to the witch issue, witchcraft disappears as a crime, is instructive (i.e. it indicates that there's no such crime in the first place), but we shouldn't therefore conclude that not believing in witches is the relevant difference between us and the Inquisition.

Comment author: Qiaochu_Yuan 14 June 2013 08:40:40PM * 1 point

If witches existed today, and if we believed witches existed today, we would still (one should hope!) give them fair trials, convict only on the strength of proof beyond a reasonable doubt, accord the accused all the requisite rights, etc.

We would? That seems incredibly dangerous. Who knows what kind of things a real witch could do to a jury?

If you think humanity as a whole has made substantial moral progress throughout history, what's driven this moral progress? I can tell a story about what drives factual progress (the scientific method, improved technology) but I don't have an analogous story about moral progress. How do you distinguish the current state of affairs from "moral fashion is a random walk, so of course any given era thinks that past eras were terribly immoral"?

Comment author: [deleted] 14 June 2013 09:30:32PM 2 points

Who knows what kind of things a real witch could do to a jury?

Who knows what kind of things a real witch could do to an executioner, for that matter?

Comment author: SaidAchmiz 14 June 2013 09:17:01PM * 1 point

We would? That seems incredibly dangerous. Who knows what kind of things a real witch could do to a jury?

There is a difference between "we should take precautions to make sure the witch doesn't blanket the courtroom with fireballs or charm the jury and all officers of the court; but otherwise human rights apply as usual" and "let's just burn anyone that anyone has claimed to be a witch, without making any attempt to verify those claims, confirm guilt, etc." Regardless of what you think would happen in practice (fear makes people do all sorts of things), it's clear that our current moral standards dictate behavior much closer to the former end of that spectrum. At the absolute least, we would want to be sure that we are executing the actual witches (because every accused person could be innocent and the real witches could be escaping justice), and, for that matter, that we're not imagining the whole witchcraft thing to begin with! That sort of certainty requires proper investigative and trial procedures.

If you think humanity as a whole has made substantial moral progress throughout history, what's driven this moral progress? I can tell a story about what drives factual progress (the scientific method, improved technology) but I don't have an analogous story about moral progress. How do you distinguish the current state of affairs from "moral fashion is a random walk, so of course any given era thinks that past eras were terribly immoral"?

That's two questions ("what drives moral progress" and "how can you distinguish moral progress from a random walk"). They're both interesting, but the former is not particularly relevant to the current discussion. (It's an interesting question, however, and Yvain makes some convincing arguments at his blog [sorry, don't have link to specific posts atm] that it's technological advancement that drives what we think of as "moral progress".)

As for how I can distinguish it from a random walk — that's harder. However, my objection was to Lewis's assessment of what constitutes the substantive difference between our moral standards and those of medieval witch hunters, which I think is totally mistaken. I do not need even to claim that we've made moral progress per se to make my objection.

Comment author: MugaSofer 15 June 2013 10:33:05PM -1 points

Considering people seemed to think that this was the best way to find witches, point 1 still seems like a factual confusion.

Point 2 was based on a Bible quote, I think. The state hanged witches.

Comment author: SaidAchmiz 14 June 2013 08:15:34PM * 1 point

If pain and suffering are bad when implemented in my skull, then they also have to be bad when implemented elsewhere.

No they don't. Are you saying it's not possible to construct a mind for which pain and suffering are not bad? Or are you defining pain and suffering as bad things? In that case, I can respond that the neural correlates of human pain and human suffering might not be bad when implemented in brains that differ from human brains in certain relevant ways (Edit: and would therefore not actually qualify as pain and suffering under your new definition).

Comment author: Lukas_Gloor 14 June 2013 10:39:02PM 0 points [-]

No they don't. Are you saying it's not possible to construct a mind for which pain and suffering are not bad? Or are you defining pain and suffering as bad things?

I'd do it that way. It doesn't strike me as morally urgent to prevent people with pain asymbolia from experiencing the sensation of "pain". (Subjects report that they notice the sensation of pain, but they claim it doesn't bother them.) I'd define suffering as wanting to get out of the state you're in. If you're fine with the state you're in, it is not what I consider to be suffering.

Comment author: SaidAchmiz 15 June 2013 12:08:52AM 0 points [-]

Ok, that seems workable to a first approximation.

So, a question for anyone who both agrees with that formulation and thinks that "we should care about the suffering of animals" (or some similar view):

Do you think that animals can "want to get out of the state they're in"?

Comment author: Raemon 15 June 2013 12:43:58AM 0 points [-]

Yes?

This varies from animal to animal. There's a fair amount of research/examination into which animals appear to do so, some of which is linked to elsewhere in this discussion. (At least some examination was linked to in response to a statement about fish.)

Comment author: Raemon 14 June 2013 08:21:35PM 0 points [-]

There's a difference between "it's possible to construct a mind" and "other particular minds are likely to be constructed a certain way." Our minds were built by the same forces that built the other minds we know of. We should expect there to be similarities.

(I also would define it, not in terms of "pain and suffering" but "preference satisfaction and dissatisfaction". I think I might consider "suffering" as dissatisfaction, by definition, although "pain" is more specific and might be valuable for some minds.)

Comment author: [deleted] 14 June 2013 09:24:28PM 0 points [-]

although "pain" is more specific and might be valuable for some minds

Such as human masochists.

Comment author: SaidAchmiz 14 June 2013 08:37:44PM 0 points [-]

I agree that expecting similarities is reasonable (although which similarities, and to what extent, is the key followup question). I was objecting to the assertion of (logical?) necessity, especially since we don't even have so much as a strong certainty.

I don't know that I'm comfortable with identifying "suffering" with "preference dissatisfaction" (btw, do you mean by this "failure to satisfy preferences" or "antisatisfaction of negative preferences"? i.e. if I like playing video games and I don't get to play video games, am I suffering? Or am I only suffering if I am having experiences which I explicitly dislike, rather than simply an absence of experiences I like? Or do you claim those are the same thing?).

Comment author: TheOtherDave 14 June 2013 08:57:50PM 1 point [-]

I can't speak for Raemon, but I would certainly say that the condition described by "I like playing video games and am prohibited from playing video games" is a trivial but valid instance of the category /suffering/.

Is the difficulty that there's a different word you'd prefer to use to refer to the category I'm nodding in the direction of, or that you think the category itself is meaningless, or that you don't understand what the category is (reasonably enough; I haven't provided nearly enough information to identify it if the word "suffering" doesn't reliably do so), or something else?

I'm usually indifferent to semantics, so if you'd prefer a different word, I'm happy to use whatever word you like when discussing the category with you.

Comment author: SaidAchmiz 14 June 2013 09:28:45PM *  0 points [-]

... or that you don't understand what the category is (reasonably enough; I haven't provided nearly enough information to identify it if the word "suffering" doesn't reliably do so)

That one. Also, what term we should use for what categories of things and whether I know what you're talking about is dependent on what claims are being made... I was objecting to Zack_M_Davis's claim, which I take to be something either like:

"We humans have categories of experiences called 'pain' and 'suffering', which we consider to be bad. These things are implemented in our brains somehow. If we take that implementation and put it in another kind of brain (alternatively: if we find some other kind of brain where the same or similar implementation is present), then this brain is also necessarily having the same experiences, and we should consider them to be bad also."

or...

"We humans have categories of experiences called 'pain' and 'suffering', which we consider to be bad. These things are implemented in our brains somehow. We can sensibly define these phenomena in an implementation-independent way, then if any other kind of brain implements these phenomena in some way that fits our defined category, we should consider them to be bad also."

I don't think either of those claims are justified. Do you think they are? If you do, I guess we'll have to work out what you're referring to when you say "suffering", and whether that category is relevant to the above issue. (For the record, I, too, am less interested in semantics than in figuring out what we're referring to.)

Comment author: TheOtherDave 15 June 2013 02:44:54AM *  -1 points [-]

I don't think either of those claims are justified. Do you think they are?

There are a lot of ill-defined terms in those claims, and depending on how I define them I either do or don't. So let me back up a little.

Suppose I prefer that brain B1 not be in state S1.
Call C my confidence that state S2 of brain B2 is in important ways similar to B1 in S1.
The higher C is, the more confident I am that I prefer B2 not be in S2. The lower C is, the less confident I am.

So if you mean taking the implementation of pain and suffering (S1) from our brains (B1) and putting/finding them or similar (C is high) implementations (S2) in other brains (B2), then yes, I think that if (S1) pain and suffering are bad (I antiprefer them) for us (B1), that's strong but not overwhelming evidence that (S2) pain and suffering are bad (I antiprefer them) for others (B2).

I don't actually think understanding more clearly what we mean by pain and suffering (either S1 or S2) is particularly important here. I think the important term is C.

As long as C is high -- that is, as long as we really are confident that the other brain has a "same or similar implementation", as you say, along salient dimensions (such as manifesting similar subjective experience) -- then I'm pretty comfortable saying I prefer the other brain not experience pain and suffering. And if (S2,B2) is "completely identical" to (S1,B1), I'm "certain" I prefer B2 not be in S2.

But I'm not sure that's actually what you mean when you say "same or similar implementation." You might, for example, mean that they have anatomical points of correspondence, but you aren't confident that they manifest similar experience, or something else along those lines. In which case C gets lower, and I become uncertain about my preferences with respect to (B2,S2).

Comment author: SaidAchmiz 15 June 2013 03:33:32AM 1 point [-]

Suppose I prefer that brain B1 not be in state S1.
Call C my confidence that state S2 of brain B2 is in important ways similar to B1 in S1.
The higher C is, the more confident I am that I prefer B2 not be in S2. The lower C is, the less confident I am.

Is brain B1 your brain in this scenario? Or just... some brain? I ask because I think the relevant question is whether the person whose brain it is prefers that brain Bx be or not be in state Sx, and we need to first answer that, and only then move on to what our preferences are w.r.t. other beings' brain states.

Anyway, it seemed to me like the claim that Zack_M_Davis was making was about the case where certain neural correlates (or other sorts of implementation details) of what we experience as "pain" and "suffering" (which, for us, might usefully be operationalized as "brain states we prefer not to be in") are found in other life-forms, and we thus conclude that a) these beings are therefore also experiencing "pain" and "suffering" (i.e. are having the same subjective experiences), and b) that these beings, also, have antipreferences about those brain states...

Those conclusions are not entailed by the premises. We might expect them to be true for evolutionarily related life-forms, but my objection was to the claim of necessity.

Or, he could have been making the claim that we can usefully describe the category of "pain" and/or "suffering" in ways that do not depend on neural correlates or other implementation details (perhaps this would be a functional description of some sort, or a phenomenological one; I don't know), and that if we then discover phenomena matching that category in other life-forms, we should conclude that they are bad.

I don't think that conclusion is justified either... or rather, I don't think it's instructive. For instance, Alien Species X might have brain states that they prefer not to be in, but their subjective experience associated with those brain states bears no resemblance in any way to anything that we humans experience as pain or suffering: not phenomenologically, not culturally, not neurally, etc. The only justification for referring to these brain states as "suffering" is by definition. And we all know that arguing "by definition" makes a def out of I and... wait... hm... well, it's bad.

Comment author: TheOtherDave 15 June 2013 03:48:46AM *  -1 points [-]

My brain is certainly an example of a brain that I prefer not be in pain, though not the only example.

My confidence that brain B manifests a mind that experiences pain and suffering given certain implementation (or functional, or phenomenological,or whatever) details depends a lot on those details. As does my confidence that B's mind antiprefers the experiential correlates of those details. I agree that there's no strict entailment here, though, "merely" evidence.

That said, mere evidence can get us pretty far. I am not inclined to dismiss it.

Comment author: Lukas_Gloor 14 June 2013 01:13:52PM 3 points [-]

On why the suffering of one species would be more important than the suffering of another:

Because one of those species is mine?

Does that also apply to race and gender? If not, why not? Assuming a line-up of ancestors, always mother and daughter, from Homo sapiens back to the common ancestor of humans and chickens and forward in time again to modern chickens, where would you draw the line? A common definition of species in biology is that two groups of organisms belong to different species if they cannot have fertile offspring. Is that really a morally relevant criterion that justifies treating a daughter differently from her mother? Is that really the criterion you want to use for making your decisions? And does it at all bother you that racists or sexists can use an analogous line of defense?

Comment author: Qiaochu_Yuan 14 June 2013 06:12:47PM *  0 points [-]

Does that also apply to race and gender? If not, why not?

I feel psychologically similar to humans of different races and genders but I don't feel psychologically similar to members of most different species.

A common definition for species is biology is that two groups of organisms belong to different species if they cannot have fertile offspring. Is that really a morally relevant criterion that justifies treating a daughter different from her mother?

Uh, no. System 1 doesn't know what a species is; that's just a word System 2 is using to approximately communicate an underlying feeling System 1 has. But System 1 knows what a friend is. Other humans can be my friends, at least in principle. Probably various kinds of posthumans and AIs can as well. As far as I can tell, a fish can't, not really.

This general argument of "the algorithm you claim to be using to make moral decisions might fail on some edge cases, therefore it is bad" strikes me as disingenuous. Do you have an algorithm you use to make moral decisions that doesn't have this property?

And does it at all bother you that racists or sexists can use an analogous line of defense?

Also no. I think current moral fashion is prejudiced against prejudice. Racism and sexism are not crazy or evil points of view; historically, they were points of view held by many sane humans who would have been regarded by their peers as morally upstanding. Have you read What You Can't Say?

Comment author: TheOtherDave 14 June 2013 06:18:44PM 2 points [-]

I should add to this that even if I endorse what you call "prejudice against prejudice" here -- that is, even if I agree with current moral fashion that racism and sexism are not as good as their absence -- it doesn't follow that because racists or sexists can use a particular argument A as a line of defense, there's therefore something wrong with A.

There are all sorts of positions which I endorse and which racists and sexists (and Babyeaters and Nazis and Sith Lords and...) might also endorse.

Comment author: Lukas_Gloor 14 June 2013 08:32:46PM -1 points [-]

This general argument of "the algorithm you claim to be using to make moral decisions might fail on some edge cases, therefore it is bad" strikes me as disingenuous. Do you have an algorithm you use to make moral decisions that doesn't have this property?

Actually, I do. I try to rely on System 1 as little as possible when it comes to figuring out my terminal value(s). One reason for that, I guess, is that at some point I started out with the premise that I don't want to be the sort of person that would have been racist or sexist in previous centuries. If you don't share that premise, there is no way for me to show that you're being inconsistent -- I acknowledge that.

Comment author: Kaj_Sotala 16 June 2013 09:15:40AM 1 point [-]

Would you say that their morality is arbitrary and unjustified? If so, I wonder why they're so similar. If I were to imagine a collection of arbitrary moralities, I'd expect it to look much more diverse than this. Would you also say that they were all morally confused and that we have made a great deal of moral progress from most of history until now?

I should probably clarify - when I said that valuing humans over animals strikes me as arbitrary, I'm saying that it's arbitrary within the context of my personal moral framework, which contains no axioms from which such a distinction could be derived. All morality is ultimately arbitrary and unjustified, so that's not really an argument for or against any moral system. Internal inconsistencies could be arguments, if you value consistency, but your system does seem internally consistent. My original comment was meant more as an explanation of my initial reaction to your question than anything that would be convincing on logical grounds, though I did also assign some probability to it possibly being convincing on non-logical grounds. (Our moral axioms are influenced by what other people think, and somebody expressing their disagreement with a moral position has some chance of weakening another person's belief in that position, regardless of whether that effect is "logical".)

Comment author: Qiaochu_Yuan 16 June 2013 09:30:29AM *  0 points [-]

I've been meaning to write a post about how I think it's a really, really bad idea to think about morality in terms of axioms. This seems to be a surprisingly (to me) common habit among LW types, especially since I would have thought it was a habit the metaethics sequence would have stomped out.

(You shouldn't regard it as a strength of your moral framework that it can't distinguish humans from non-human animals. That's evidence that it isn't capable of capturing complexity of value.)

Comment author: Kaj_Sotala 16 June 2013 10:10:25AM *  5 points [-]

I agree that thinking about morality exclusively in terms of axioms in a system of classical logic is likely to be a rather bad idea, since that makes one underestimate the complexity of morality and the strength of non-logical influences, and overestimate the extent to which morality resembles a system of classical logic. But I'm not sure if it's that problematic as long as you keep in mind that "axioms" is really just shorthand for something like "moral subprograms" or "moral dynamics".

I did always read the metaethics sequence as establishing the existence of something similar-enough-to-axioms-that-we-might-as-well-use-the-term-axioms-as-shorthand-for-them, with e.g. No Universally Compelling Arguments and Created Already In Motion arguing that you cannot convince a mind about the correctness of some action unless its mind contains a dynamic which reacts to your argument in the way you wish - in other words, unless your argument builds on things that the mind's decision-making system already cares about, and which could be described as axioms when composing a (static) summary of the mind's preferences.

You shouldn't regard it as a strength of your moral framework that it can't distinguish humans from non-human animals. That's evidence that it isn't capable of capturing complexity of value.

I'm not really sure of what you mean here. For one, I didn't say that my moral framework can't distinguish humans and non-humans - I do e.g. take a much more negative stance on killing humans than animals, because killing humans would have a destabilizing effect on society and people's feelings of safety, which would contribute to the creation of much more suffering than killing animals would.

Also, whether or not my personal moral framework can capture complexity of value seems irrelevant - CoV is just the empirical thesis that people in general tend to care about a lot of complex things. My personal consciously-held morals are what I currently want to consciously focus on, not a description of what others want, nor something that I'd program into an AI.

Comment author: Vladimir_Nesov 16 June 2013 02:12:59PM *  1 point [-]

Also, whether or not my personal moral framework can capture complexity of value seems irrelevant - CoV is just the empirical thesis that people in general tend to care about a lot of complex things. My personal consciously-held morals are what I currently want to consciously focus on [...]

Well, I don't think I should care what I care about. The important thing is what's right, and my emotions are only relevant to the extent that they communicate facts about what's right. What's right is too complex, both in definition and consequentialist implications, and neither my emotions nor my reasoned decisions are capable of accurately capturing it. Any consciously-held morals are only a vague map of morality, not morality itself, and so shouldn't hold too much import, on pain of moral wireheading/acceptance of a fake utility function.

(Listening to moral intuitions, possibly distilled as moral principles, might give the best moral advice that's available in practice, but that doesn't mean that the advice is any good. Observing this advice might fail to give an adequate picture of the subject matter.)

Comment author: Kaj_Sotala 16 June 2013 10:24:12PM 1 point [-]

I must be misunderstanding this comment somehow? One still needs to decide what actions to take during every waking moment of one's life, and "in deciding what to do, don't pay attention to what you want" isn't very useful advice. (It also makes any kind of instrumental rationality impossible.)

Comment author: Vladimir_Nesov 16 June 2013 11:08:52PM 1 point [-]

What you want provides some information about what is right, so you do pay attention. When making decisions, you can further make use of moral principles not based on what you want at a particular moment. In both cases, making use of these signals doesn't mean that you expect them to be accurate, they are just the best you have available in practice.

Estimate of the accuracy of the moral intuitions/principles translates into an estimate of value of information about morality. Overestimation of accuracy would lead to excessive exploitation, while an expectation of inaccuracy argues for valuing research about morality comparatively more than pursuit of moral-in-current-estimation actions.
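(The exploration/exploitation framing here is the standard one from bandit problems, and a minimal sketch makes the tradeoff concrete. This is purely illustrative: modeling noisy moral intuitions as noisy bandit rewards, and the epsilon-greedy strategy itself, are assumptions of the sketch, not anything proposed in the thread.)

```python
import random

def run_bandit(true_values, epsilon, steps=10000, seed=0):
    """Epsilon-greedy agent: with probability epsilon it explores
    (does "research", sampling an option at random), otherwise it
    exploits its current best estimate of value.

    Rewards are observed through noise, standing in for intuitions
    that are informative but not assumed to be accurate."""
    rng = random.Random(seed)
    estimates = [0.0] * len(true_values)
    counts = [0] * len(true_values)
    total = 0.0
    for _ in range(steps):
        if rng.random() < epsilon:
            # explore: gather information about the options
            arm = rng.randrange(len(true_values))
        else:
            # exploit: act on the current best estimate
            arm = max(range(len(true_values)), key=lambda i: estimates[i])
        # noisy signal of the option's true value
        reward = true_values[arm] + rng.gauss(0, 1)
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
        total += reward
    return total / steps
```

An agent that overestimates the accuracy of its estimates sets epsilon near zero and exploits early; expecting inaccuracy argues for a larger epsilon, i.e. valuing information-gathering comparatively more, which is the point being made above.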

Comment author: Kaj_Sotala 19 June 2013 07:03:23AM *  2 points [-]

I'm roughly in agreement, though I would caution that the exploration/exploitation model is a problematic one to use in this context, for two reasons:

1) It implies a relatively clear map/territory split: there are our real values, and our conscious model of them, and errors in our conscious model do not influence the actual values. But to some extent, our conscious models of our values do shape our unconscious values in that direction - if someone switches to an exploitation phase "too early", then over time, their values may actually shift over to what the person thought they were.

2) Exploration/exploitation also assumes that our true values correspond to something akin to an external reward function: if our model is mistaken, then the objectively correct thing to do would be to correct it. In other words, if we realize that our conscious values don't match our unconscious ones, we should revise our conscious values. And sometimes this does happen. But on other occasions, what happens is that our conscious model has become installed as a separate and contradictory set of values, and we need to choose which of the values to endorse (in which situations). This happening is a bad thing if you tend to primarily endorse your unconscious values or a lack of internal conflict, but arguably a good thing if you tend to primarily endorse your conscious values.

The process of arriving at our ultimate values seems to be both an act of discovering them and an act of creating them, and we probably shouldn't use terminology like exploration/exploitation that implies that it would be just one of those.

Comment author: Vladimir_Nesov 22 June 2013 12:26:16PM *  0 points [-]

But to some extent, our conscious models of our values do shape our unconscious values in that direction

This is value drift. At any given time, you should fix (i.e. notice, as a concept) the implicit idealized values at that time and pursue them even if your hardware later changes and starts implying different values (in the sense where your dog or your computer or an alien also should (normatively) pursue them forever, they are just (descriptively) unlikely to, but you should plot to make that more likely, all else equal). As an analogy, if you are interested in solving different puzzles on different days, then the fact that you are no longer interested in solving yesterday's puzzle doesn't address the problem of solving yesterday's puzzle. And idealized values don't describe valuation of you, the abstract personal identity, of your actions and behavior and desires. They describe valuation of the whole world, including future you with value drift as a particular case that is not fundamentally special. The problem doesn't change, even if the tendency to be interested in a particular problem does. The problem doesn't get solved because you are no longer interested in it. Solving a new, different problem does not address the original problem.

Exploration/exploitation also assumes that our true values correspond to something akin to an external reward function: if our model is mistaken, then the objectively correct thing to do would be to correct it

The nature of idealized values is irrelevant to this point: whatever they are, they are that thing that they are, so that any "correction" discards the original problem statement and replaces it with a new one. What you can and should correct are intermediate conclusions. (Alternatively, we are arguing about definitions, and you read in my use of the term "values" what I would call intermediate conclusions, but then again I'm interested in you noticing the particular idea that I refer to with this term.)

if we realize that our conscious values don't match our unconscious ones

I don't think "unconscious values" is a good proxy for abstract implicit valuation of the universe, consciously-inaccessible processes in the brain are at a vastly different level of abstraction compared to the idealization I'm talking about.

The process of arriving at our ultimate values seems to be both an act of discovering them and an act of creating them

This might be true in the sense that humans probably underdetermine the valuation of the world, so that there are some situations that our implicit preferences can't compare even in principle. The choice between such situations is arbitrary according to our values. Or our values might just recursively determine the correct choice in every single definable distinction. Any other kind of "creation" will contradict the implicit answer, and so even if it is the correct thing to do given the information available at the time, later reflection can show it to be suboptimal.

(More constructively, the proper place for creativity is in solving problems, not in choosing a supergoal. The intuition is confused on this point, because humans never saw a supergoal, all sane goals that we formulate for ourselves are in one way or another motivated by other considerations, they are themselves solutions to different problems. Thus, creativity is helpful in solving those different problems in order to recognize which new goals are motivated. But this is experience about subgoals, not idealized supergoals.)

Comment author: Kaj_Sotala 26 June 2013 06:34:26PM *  0 points [-]

I think that the concept of idealized value is obviously important in an FAI context, since we need some way of formalizing "what we want" in order to have any way of ensuring that an AI will further the things we want. I do not understand why the concept would be relevant to our personal lives, however.

Comment author: Osiris 19 June 2013 10:14:18AM 1 point [-]

I'm not a very well educated person in this field, but if I may:

I see my various squishy feelings (desires and what-is-right intuitions are in this list) as loyal pets. Sometimes, they must be disciplined and treated with suspicion, but for the most part, they are there to please you in their own dumb way. They're no more enemies than one's preference for foods. In my care for them, I train and reward them, not try to destroy or ignore them. Without them, I have no need to DO better among other people, because I would not be human--that is, some things are important only because I'm a barely intelligent ape-man, and they should STAY important as long as I remain a barely intelligent ape-man. Ignoring something going on in one's mind, even when one KNOWS it is wrong, can be a source of pain, I've found--hypocrisy and indecision are not my friends.

Hope I didn't make a mess of things with this comment.