Zack_M_Davis comments on Effective Altruism Through Advertising Vegetarianism? - LessWrong

Post author: peter_hurford 12 June 2013 06:50PM


Comments (551)


Comment author: Zack_M_Davis 14 June 2013 08:03:14PM 8 points

Would you also say that they were all morally confused and that we have made a great deal of moral progress from most of history until now?

Yes! We know stuff that our ancestors didn't know; we have capabilities that they didn't have. If pain and suffering are bad when implemented in my skull, then they also have to be bad when implemented elsewhere. Yes, given bounded resources, I'm going to protect me and my friends and other humans before worrying about other creatures, but that's not because nonhumans don't matter, but because in this horribly, monstrously unfair universe, we are forced to make tradeoffs. We do what we must, but that doesn't make it okay.

Comment author: Qiaochu_Yuan 14 June 2013 08:10:56PM *  3 points

We know stuff that our ancestors didn't know; we have capabilities that they didn't have.

I'm more than willing to agree that our ancestors were factually confused, but I think it's important to distinguish between moral and factual confusion. Consider the following quote from C.S. Lewis:

I have met people who exaggerate the differences [between the morality of different cultures], because they have not distinguished between differences of morality and differences of belief about facts. For example, one man said to me, "Three hundred years ago people in England were putting witches to death. Was that what you call the Rule of Human Nature or Right Conduct?" But surely the reason we do not execute witches is that we do not believe there are such things. If we did -- if we really thought that there were people going about who had sold themselves to the devil and received supernatural powers from him in return and were using these powers to kill their neighbors or drive them mad or bring bad weather, surely we would all agree that if anyone deserved the death penalty, then these filthy quislings did. There is no difference of moral principle here: the difference is simply about matter of fact. It may be a great advance in knowledge not to believe in witches: there is no moral advance in not executing them when you do not think they are there. You would not call a man humane for ceasing to set mousetraps if he did so because he believed there were no mice in the house.

I think our ancestors were primarily factually, rather than morally, confused. I don't see strong reasons to believe that humans over time have made moral, as opposed to factual, progress, and I think attempts to convince me and people like me that we should care about animals should rest primarily on factual, rather than moral, arguments (e.g. claims that smarter animals like pigs are more psychologically similar to humans than I think they are).

If pain and suffering are bad when implemented in my skull, then they also have to be bad when implemented elsewhere.

If I write a computer program with a variable called isSuffering that I set to true, is it suffering?

Yes, given bounded resources, I'm going to protect me and my friends and other humans before worrying about other creatures

Cool. Then we're in agreement about the practical consequences (humans, right now, who are spending time and effort to fight animal suffering should be spending their time and effort to fight human suffering instead), which is fine with me.

Comment author: Zack_M_Davis 14 June 2013 08:38:22PM 5 points

If I write a computer program with a variable called isSuffering that I set to true, is it suffering?

(I have no idea how consciousness works, so in general, I can't answer these sorts of questions, but) in this case I feel extremely confident saying No, because the variable names in the source code of present-day computer programs can't affect what the program is actually doing.
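To make the point concrete, here is a toy sketch (my illustration, not anything from the thread; the function names are invented): two programs that differ only in a variable's name do exactly the same thing, because the name is annotation for human readers, not part of the computation.

```python
# Two toy "programs" differing only in a variable name.
# The name is metadata for human readers; it has no effect
# on what the program actually computes.

def program_a():
    is_suffering = True   # evocative name
    return int(is_suffering)

def program_b():
    flag = True           # neutral name
    return int(flag)

# Identical behavior regardless of the label chosen:
assert program_a() == program_b()
```

Renaming the variable changes nothing observable about the program's behavior, which is why the label alone can't make the program a moral patient.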

humans, right now, who are spending time and effort to fight animal suffering should be spending their time and effort to fight human suffering instead

That doesn't follow if it turns out that preventing animal suffering is sufficiently cheap.

Comment author: RobbBB 15 June 2013 11:53:50AM *  1 point

I'm not sure moral intuitions divide as cleanly into factual and nonfactual components as this suggests. Learning new facts can change our motivations in ways that are in no way logically or empirically required of us, because our motivational and doxastic mechanisms aren't wholly independent. (For instance, knowing a certain fact may involve visualizing certain circumstances more concretely, and vivid visualizations can certainly change one's affective state.) If this motivational component isn't what you had in mind as the 'moral', nonfactual component of our judgments, then I don't know what you do have in mind.

If I write a computer program with a variable called isSuffering that I set to true, is it suffering?

I don't think this is specifically relevant. I upvoted your 'blue robot' comment because this is an important issue to worry about, but 'that's a black box' can't be used as a universal bludgeon. (Particularly given that it defeats appeals to 'isHuman' even more thoroughly than it defeats appeals to 'isSuffering'.)

Cool. Then we're in agreement about the practical consequences (humans, right now, who are spending time and effort to fight animal suffering should be spending their time and effort to fight human suffering instead)

I assume you're being tongue-in-cheek here, but be careful not to mislead spectators. 'Human life isn't perfect, ergo we are under no moral obligation to eschew torturing non-humans' obviously isn't sufficient here, so you need to provide more details showing that the threats to humanity warrant (provisionally?) ignoring non-humans' welfare. White slave-owners had plenty of white-person-specific problems to deal with, but that didn't exonerate them for worrying about their (white) friends and family to the extreme exclusion of black people.

Comment author: Qiaochu_Yuan 15 June 2013 07:06:53PM *  2 points

If this motivational component isn't what you had in mind as the 'moral', nonfactual component of our judgments, then I don't know what you do have in mind.

I think of moral confusion as a failure to understand your actual current or extrapolated moral preferences (introspection being unreliable and so forth).

I assume you're being tongue-in-cheek here

Nope.

White slave-owners had plenty of white-person-specific problems to deal with, but that didn't exonerate them for worrying about their (white) friends and family to the extreme exclusion of black people.

I don't think this analogy holds water. White slave-owners were aware that their slaves were capable of learning their language and bearing their children and all sorts of things that fish can't do.

Comment author: RobbBB 15 June 2013 09:21:58PM 1 point

White slave-owners were aware that their slaves were capable of learning their language and bearing their children and all sorts of things that fish can't do.

Sure. And humans are aware that fish are capable of all sorts of things that rocks and sea hydras can't do. I don't see a relevant disanalogy. (Other than the question-begging one 'fish aren't human'.)

Comment author: Qiaochu_Yuan 15 June 2013 09:36:27PM 4 points

I guess that should've ended "...that fish can't do and that are important parts of how they interact with other white people." Black people are capable of participating in human society in a way that fish aren't.

A "reversed stupidity is not intelligence" warning also seems appropriate here: I don't think the correct response to disagreeing with racism and sexism is to stop discriminating altogether in the sense of not trying to make distinctions between things.

Comment author: RobbBB 15 June 2013 09:59:26PM *  1 point

I don't think we should stop making distinctions altogether either; I'm just trying not to repeat the mistakes of the past, or analogous mistakes. The straw-man version of this historical focus is to take 'the expanding circle' as a universal or inevitable historical progression; the more interesting version is to try to spot a pattern in our past intellectual and moral advances and use it to hack the system, taking a shortcut to a moral code that's improved far beyond contemporary society's hodgepodge of standards.

I think the main lesson from 'expanding circle' events is that we should be relatively cautious about assuming that something isn't a moral patient, unless we can come up with an extremely principled and clear example of a necessary condition for moral consideration that it lacks. 'Black people don't have moral standing because they're less intelligent than us' fails that criterion, because white children can be unintelligent and yet deserve to be treated well. Likewise, 'fish can't participate in human society' fails, because extremely pathologically antisocial or socially inept people (of the sort that can't function in society at all) still shouldn't be tortured.

(Plus many fish can participate in their own societies. If we encountered an extremely alien sentient species that was highly prosocial but just found it too grating to be around us for our societies to mesh, would we be justified in torturing them? Likewise, if two human civilizations get along fine internally but have social conventions that make fruitful interaction impossible, that doesn't give either civilization the right to oppress the other.)

On the other hand, 'rocks aren't conscious' does seem to draw on a good and principled necessary condition -- anything unconscious (hence incapable of suffering or desiring or preferring) does seem categorically morally irrelevant, in a vacuum. So excluding completely unconscious things has the shape of a good policy. (Sure, it's a bit of an explanatory IOU until we know exactly what the neural basis of 'consciousness' is, but 'intelligent' and 'able to participate in human society' are IOUs in the same sense.) Likewise for gods and dead bodies -- the former don't exist, and the latter again fail very general criteria like 'is it conscious?' and 'can it suffer?' and 'can it desire?'. These are fully general criteria, not ad-hoc or parochial ones, so they're a lot less likely to fall into the racism trap.

Possibly they fall into a new and different trap, though? Even so, I feel more comfortable placing most of the burden of proof on those who want to narrow our circle, rather than those who want to broaden it. The chances of our engineering (or encountering in the stars) new species that blur the lines between our concepts of psychological 'humanity' and 'inhumanity' are significant, and that makes it dangerous to adopt a policy of 'assume everything with a weird appearance or behavior has no moral rights until we've conclusively proved that its difference from us is only skin-deep'.

Comment author: Qiaochu_Yuan 15 June 2013 11:44:07PM *  1 point

Likewise, 'fish can't participate in human society' fails, because extremely pathologically antisocial or socially inept people (of the sort that can't function in society at all) still shouldn't be tortured.

The original statement of my heuristic for deciding moral worth contained the phrase "in principle" which was meant to cover cases like this. A human in a contingent circumstance (e.g. extremely socially inept, in a coma) that prevents them from participating in human society is unfortunate, but in possible worlds very similar to this one they'd still be capable of participating in human society. But even in possible worlds fairly different from this one, fish still aren't so capable.

I also think the reasoning in this example is bad for general reasons, namely moral heuristics don't behave like scientific theories: falsifying a moral hypothesis doesn't mean it's not worth considering. Heuristics that sometimes fail can still be useful, and in general I am skeptical of people who claim to have useful moral heuristics that don't fail on weird edge cases (sufficiently powerful such heuristics should constitute a solution to friendly AI).

Plus many fish can participate in their own societies.

I'm skeptical of the claim that any fish have societies in a meaningful sense. Citation?

If we encountered an extremely alien sentient species that was highly prosocial but just found it too grating to be around us for our societies to mesh, would we be justified in torturing them?

If they're intelligent enough we can still trade with them, and that's fine.

Likewise, if two human civilizations get along fine internally but have social conventions that make fruitful interaction impossible, that doesn't give either civilization the right to oppress the other

I don't think this is analogous to the above case. The psychological unity of mankind still applies here: any human from one civilization could have been raised in the other.

These are fully general criteria, not ad-hoc or parochial ones, so they're a lot less likely to fall into the racism trap. Possibly they fall into a new and different trap, though?

Yes: not capturing complexity of value. Again, morality doesn't behave like science. Looking for general laws is not obviously a good methodology, and in fact I'm pretty sure it's a bad methodology.

Comment author: RobbBB 16 June 2013 01:13:55AM *  1 point

Yes: not capturing complexity of value.

'Your theory isn't complex enough' isn't a reasonable objection, in itself, to a moral theory. Rather, 'value is complex' is a universal reason to be less confident about all theories. (No theory, no matter how complex, is immune to this problem, because value might always turn out to be even more complex than the theory suggests.) To suggest that your moral theory is more likely to be correct than a simpler alternative merely because it's more complicated is obviously wrong, because knowing that value is complex tells us nothing about how it is complex.

In fact, even though we know that value is complex, a complicated theory that accounts for the evidence will almost always get more wrong than a simple theory that accounts for the same evidence -- a more detailed map can be wrong about the territory in more ways.

Again, morality doesn't behave like science.

Interestingly, in all the above respects human morality does behave like any other empirical phenomenon. The reasons to think morality is complex, and the best methods for figuring out exactly how it is complex, are the same as for any complex natural entity. "Looking for general laws" is a good idea here for the same reason it's a good idea in any scientific endeavor; we start by ruling out the simplest explanations, then move toward increasing complexity as the data demands. That way we know we're not complicating our theory in arbitrary or unnecessary ways.

Knowing at the outset that storms are complex doesn't mean that we shouldn't try to construct very simple predictive and descriptive models of weather systems, and see how close our simulation comes to getting it right. Once we have a basically right model, we can then work on incrementally increasing its precision. As for storms, so for norms. The analogy is particularly appropriate because in both cases we seek an approximation not only as a first step in a truth-seeking research program, but also as a behavior-guiding heuristic for making real-life decisions under uncertainty.

Comment author: wedrifid 16 June 2013 08:03:54AM 2 points

'Your theory isn't complex enough' isn't a reasonable objection, in itself, to a moral theory. Rather, 'value is complex' is a universal reason to be less confident about all theories.

If I am sure that value is complex and I am given two theories, one of which is complex and the other simple, then I can be sure that the simple one is wrong. The other one is merely probably wrong (as most such theories are). "Too simple" is a valid objection if the premise "Not simple" is implied.

Comment author: Qiaochu_Yuan 16 June 2013 02:01:54AM *  1 point

To suggest that your moral theory is more likely to be correct than a simpler alternative merely because it's more complicated is obviously wrong

Obviously that's not what I'm suggesting. What I'm suggesting is that it's both more complicated and that this complication is justified from my perspective because it captures my moral intuitions better.

the data

What data?

Comment author: [deleted] 15 June 2013 11:53:10PM 1 point

I also think the reasoning in this example is bad for general reasons, namely moral heuristics don't behave like scientific theories: falsifying a moral hypothesis doesn't mean it's not worth considering.

Then again, the same applies to scientific theories, so long as the old now-falsified theory is a good approximation to the new currently accepted theory within certain ranges of conditions (e.g. classical Newtonian physics if you're much bigger than an atom and much slower than light).

Comment author: RobbBB 16 June 2013 01:11:32AM *  0 points

The original statement of my heuristic for deciding moral worth contained the phrase "in principle" which was meant to cover cases like this. A human in a contingent circumstance (e.g. extremely socially inept, in a coma) that prevents them from participating in human society is unfortunate, but in possible worlds very similar to this one they'd still be capable of participating in human society.

Isn't a quasi-Aristotelian notion of the accidental/essential or contingent/necessary properties of different species a rather metaphysically fragile foundation for you to base your entire ethical system on? We don't know whether the unconscious / conscious distinction will end up being problematized by future research, but we do already know that the distinctions between taxonomical groupings can be very fuzzy -- and are likely to become far fuzzier as we take more control of our genetic future. We also know that what's normal for a certain species can vary wildly over historical time. 'In principle' we could provide fish with a neural prosthesis that makes them capable of socializing productively with humans, but because our prototype of a fish is dumb, while our prototype of a human is smart, we think of smart fish and dumb humans as aberrant deviations from the telos (proper function) of the species.

It seems damningly arbitrary to me. Why should torturing sentient beings be OK in contexts where the technology for improvement is (or 'feels'?) distant, yet completely intolerable in contexts where this external technology is more 'near' on some metric, even if in both cases there is never any realistic prospect of the technology being deployed here?

I don't find it implausible that we currently use prototypes as a quick-and-dirty approximation, but I do find it implausible that on reflection, our more educated and careful selves would continue to found the human enterprise on essentialism of this particular sort.

I also think the reasoning in this example is bad for general reasons, namely moral heuristics don't behave like scientific theories: falsifying a moral hypothesis doesn't mean it's not worth considering.

Actually, now that you bring it up, I'm surprised by how similar the two are. 'Heuristics' by their very nature are approximations; if we compare them to scientific models that likewise approximate a phenomenon, we see in both cases that an occasional error is permissible. My objection to the 'only things that can intelligently socialize with humans matter' heuristic isn't that it gets things wrong occasionally; it's that it almost always yields the intuitively wrong answer, and when it gets the right answer it seems to do so for overdetermined reasons. E.g., it gets the right answer in cases of ordinary human suffering and preference satisfaction.

in general I am skeptical of people who claim to have useful moral heuristics that don't fail on weird edge cases

I agree that someone who claims an unrealistic level of confidence in a moral claim as an individual deserves less trust. But that's different from claiming that it's an advantage of a moral claim that it gets the right answer less often.

I'm skeptical of the claim that any fish have societies in a meaningful sense.

I just meant a stable, cooperative social group. Is there something specific about human societies that you think is the source of their unique moral status?

If they're intelligent enough we can still trade with them, and that's fine.

If we can't trade with them for some reason, it's still not OK to torture them.

The psychological unity of mankind still applies here: any human from one civilization could have been raised in the other.

'The psychological unity of mankind' is question-begging here. It's just a catchphrase; it's not as though there's some scientific law that all and only biologically human minds form a natural kind. If we're having a battle of catchphrases, vegetarians can simply appeal to the 'psychological unity of sentient beings'.

Sure, they're less unified, but how do we decide how unified a unity has to be? While you dismiss the psychological unity of sentient beings as too generic to be morally relevant, the parochialist can step up to dismiss the psychological unity of mankind as too generic to be morally relevant, preferring instead to favor only the humans with a certain personality type, or a certain cultural background, or a certain ideology. What I'm looking for is a reason to favor the one unity over an infinite number of rival unities.

I should also reiterate that it's not an advantage of your theory that it requires two independent principles ('being biologically human', 'being able to (be modified without too much difficulty into something that can) socialize with biological humans') to explain phenomena that other models can handle with only a single generalization. Noting that value is complex is enough to show that your model is possible, but it's not enough to elevate it to a large probability.

Comment author: Qiaochu_Yuan 16 June 2013 01:41:49AM *  4 points

'In principle' we could provide fish with a neural prosthesis that makes them capable of socializing productively with humans

I don't think most fish have complicated enough minds for this to be true. (By contrast, I think dolphins might, and this might be a reason to care about dolphins.)

It seems damningly arbitrary to me.

You're still using a methodology that I think is suspect here. I don't think there are good reasons to expect "everything that feels pain has moral value, period" to be a better moral heuristic than "some complicated set of conditions singles out the things that have moral value" if, upon reflection, those conditions seem to be in agreement with what my System 1 is telling me I actually care about (namely, as far as I can tell, my System 1 cares about humans in comas but not fish). My System 2 can try to explain what my System 1 cares about, but if those explanations are bad because your System 2 can find implications they have which are bad, then oh well: at the end of the day, as far as I can tell, System 1 is where my moral intuitions come from, not System 2.

My objection to the 'only things that can intelligently socialize with humans matter' heuristic isn't that it gets things wrong occasionally; it's that it almost always yields the intuitively wrong answer

Your intuition, not mine.

I should also reiterate that it's not an advantage of your theory that it requires two independent principles ('being biologically human', 'being able to (be modified without too much difficulty into something that can) socialize with biological humans') to explain phenomena

System 1 doesn't know what a biological human is. I'm not using "human" to mean "biological human." I'm using "human" to mean "potential friend." Posthumans and sufficiently intelligent AI could also fall in this category, but I'm still pretty sure that fish don't. I actually only care about the second principle.

that other models can handle with only a single generalization.

While getting what I regard to be the wrong answers with respect to most animals. A huge difference between morality and science is that the results of properly done scientific experiments can be relatively clear: it can be clear to all observers that the experiment provides evidence for or against some theory. Morality lacks an analogous notion of moral experiment. (We wouldn't be having this conversation if there were such a thing as a moral experiment; I'd be happy to defer to the evidence in that case, the same as I would in any scientific field where I'm not a domain expert.)

Comment author: Eugine_Nier 16 June 2013 06:40:02AM 0 points

On the other hand, 'rocks aren't conscious' does seem to draw on a good and principled necessary condition -- anything unconscious (hence incapable of suffering or desiring or preferring) does seem categorically morally irrelevant, in a vacuum.

What about unconscious people?

Even so, I feel more comfortable placing most of the burden of proof on those who want to narrow our circle, rather than those who want to broaden it.

So what's your position on abortion?

Comment author: RobbBB 16 June 2013 08:46:31AM *  0 points

I don't know why you got a down-vote; these are good questions.

What about unconscious people?

I'm not sure there are unconscious people. By 'unconscious' I meant 'not having any experiences'. There's also another sense of 'unconscious' in which people are obviously sometimes unconscious — whether they're awake, aware of their surroundings, etc. Being conscious in that sense may be sufficient for 'bare consciousness', but it's not necessary, since people can experience dreams while 'unconscious'.

Supposing people do sometimes become truly and fully unconscious, I think this is morally equivalent to dying. So it might be that in a loose sense you die every night, as your consciousness truly 'switches off' — or, equivalently, we could say that certain forms of death (like death accompanying high-fidelity cryonic preservation) are in a loose sense a kind of sleep. You say /pəˈteɪtəʊ/, I say /pəˈteɪtoʊ/. The moral rights of dead or otherwise unconscious people would then depend on questions like 'Do we have a responsibility to make conscious beings come into existence?' and 'Do we have a responsibility to fulfill people's wishes after they die?'. I'd lean toward 'yes' on the former, 'no but it's generally useful to act as though we do' on the latter.

So what's your position on abortion?

Complicated. At some stages the embryo is obviously unconscious, for the same reason some species are obviously unconscious. It's conceivable that there's no true consciousness at all until after birth — analogously, it's possible all non-humans are zombies — but at this point I find it unlikely. So I think mid-to-late-stage fetuses do have some moral standing — perhaps not enough for painlessly killing them to be bad, but at least enough for causing them intense pain to be bad. (My view of chickens is similar; suffering is the main worry rather than death.) The two cases are also analogous in that some people have important health reasons for aborting or for eating meat.

Comment author: SaidAchmiz 14 June 2013 08:25:34PM *  0 points

I've seen that C.S. Lewis quote before, and it seems to me quite mistaken. In this part:

But surely the reason we do not execute witches is that we do not believe there are such things. If we did -- if we really thought that there were people going about who had sold themselves to the devil and received supernatural powers from him in return and were using these powers to kill their neighbors or drive them mad or bring bad weather, surely we would all agree that if anyone deserved the death penalty, then these filthy quislings did.

Lewis seems to suggest that executing a witch, per se, is what we consider bad. But that's wrong. What was bad about witch hunts was:

  1. People were executed without anything resembling solid evidence of their guilt — which of course could not possibly have been obtained, seeing as how they were not guilty and the crimes they were accused of were imaginary; but my point is that the "trial" process was horrifically unjust and monstrously inhumane (torture to extract confessions, etc.). If witches existed today, and if we believed witches existed today, we would still (one should hope!) give them fair trials, convict only on the strength of proof beyond a reasonable doubt, accord the accused all the requisite rights, etc.

  2. Punishments were terribly inhumane — burning alive? Come now. Even if we thought witches existed today, and even if we thought the death penalty was an appropriate punishment, we'd carry it out in a more humane manner, and certainly not as a form of public entertainment (again, one would hope; at least, our moral standards today dictate thus).

So differences of factual belief are not the main issue here. The fact that, when you apply rigorous standards of evidence and fair prosecution practices to the witch issue, witchcraft disappears as a crime, is instructive (i.e. it indicates that there's no such crime in the first place), but we shouldn't therefore conclude that not believing in witches is the relevant difference between us and the Inquisition.

Comment author: Qiaochu_Yuan 14 June 2013 08:40:40PM *  1 point

If witches existed today, and if we believed witches existed today, we would still (one should hope!) give them fair trials, convict only on the strength of proof beyond a reasonable doubt, accord the accused all the requisite rights, etc.

We would? That seems incredibly dangerous. Who knows what kind of things a real witch could do to a jury?

If you think humanity as a whole has made substantial moral progress throughout history, what's driven this moral progress? I can tell a story about what drives factual progress (the scientific method, improved technology) but I don't have an analogous story about moral progress. How do you distinguish the current state of affairs from "moral fashion is a random walk, so of course any given era thinks that past eras were terribly immoral"?

Comment author: [deleted] 14 June 2013 09:30:32PM 2 points

Who knows what kind of things a real witch could do to a jury?

Who knows what kind of things a real witch could do to an executioner, for that matter?

Comment author: SaidAchmiz 14 June 2013 09:17:01PM *  1 point

We would? That seems incredibly dangerous. Who knows what kind of things a real witch could do to a jury?

There is a difference between "we should take precautions to make sure the witch doesn't blanket the courtroom with fireballs or charm the jury and all officers of the court; but otherwise human rights apply as usual" and "let's just burn anyone that anyone has claimed to be a witch, without making any attempt to verify those claims, confirm guilt, etc." Regardless of what you think would happen in practice (fear makes people do all sorts of things), it's clear that our current moral standards dictate behavior much closer to the former end of that spectrum. At the absolute least, we would want to be sure that we are executing the actual witches (because every accused person could be innocent and the real witches could be escaping justice), and, for that matter, that we're not imagining the whole witchcraft thing to begin with! That sort of certainty requires proper investigative and trial procedures.

If you think humanity as a whole has made substantial moral progress throughout history, what's driven this moral progress? I can tell a story about what drives factual progress (the scientific method, improved technology) but I don't have an analogous story about moral progress. How do you distinguish the current state of affairs from "moral fashion is a random walk, so of course any given era thinks that past eras were terribly immoral"?

That's two questions ("what drives moral progress" and "how can you distinguish moral progress from a random walk"). They're both interesting, but the former is not particularly relevant to the current discussion. (Yvain makes some convincing arguments at his blog [sorry, don't have link to specific posts atm] that it's technological advancement that drives what we think of as "moral progress".)

As for how I can distinguish it from a random walk — that's harder. However, my objection was to Lewis's assessment of what constitutes the substantive difference between our moral standards and those of medieval witch hunters, which I think is totally mistaken. I do not need even to claim that we've made moral progress per se to make my objection.

Comment author: MugaSofer 15 June 2013 10:33:05PM -1 points [-]

Considering people seemed to think that this was the best way to find witches, 1 still seems like a factual confusion.

2 was based on a Bible quote, I think. The state hanged witches.

Comment author: SaidAchmiz 14 June 2013 08:15:34PM *  1 point [-]

If pain and suffering are bad when implemented in my skull, then they also have to be bad when implemented elsewhere.

No they don't. Are you saying it's not possible to construct a mind for which pain and suffering are not bad? Or are you defining pain and suffering as bad things? In that case, I can respond that the neural correlates of human pain and human suffering might not be bad when implemented in brains that differ from human brains in certain relevant ways (Edit: and would therefore not actually qualify as pain and suffering under your new definition).

Comment author: Lukas_Gloor 14 June 2013 10:39:02PM 0 points [-]

No they don't. Are you saying it's not possible to construct a mind for which pain and suffering are not bad? Or are you defining pain and suffering as bad things?

I'd do it that way. It doesn't strike me as morally urgent to prevent people with pain asymbolia from experiencing the sensation of "pain". (Subjects report that they notice the sensation of pain, but they claim it doesn't bother them.) I'd define suffering as wanting to get out of the state you're in. If you're fine with the state you're in, it is not what I consider to be suffering.

Comment author: SaidAchmiz 15 June 2013 12:08:52AM 0 points [-]

Ok, that seems workable to a first approximation.

So, a question for anyone who both agrees with that formulation and thinks that "we should care about the suffering of animals" (or some similar view):

Do you think that animals can "want to get out of the state they're in"?

Comment author: Raemon 15 June 2013 12:43:58AM 0 points [-]

Yes?

This varies from animal to animal. There's a fair amount of research/examination into which animals appear to do so, some of which is linked to elsewhere in this discussion. (At least some examination was linked to in response to a statement about fish.)

Comment author: Raemon 14 June 2013 08:21:35PM 0 points [-]

There's a difference between "it's possible to construct a mind" and "other particular minds are likely to be constructed a certain way." Our minds were built by the same forces that built other minds we know of. We should expect there to be similarities.

(I also would define it, not in terms of "pain and suffering" but "preference satisfaction and dissatisfaction". I think I might consider "suffering" as dissatisfaction, by definition, although "pain" is more specific and might be valuable for some minds.)

Comment author: [deleted] 14 June 2013 09:24:28PM 0 points [-]

although "pain" is more specific and might be valuable for some minds

Such as human masochists.

Comment author: SaidAchmiz 14 June 2013 08:37:44PM 0 points [-]

I agree that expecting similarities is reasonable (although which similarities, and to what extent, is the key followup question). I was objecting to the assertion of (logical?) necessity, especially since we don't even have so much as a strong certainty.

I don't know that I'm comfortable with identifying "suffering" with "preference dissatisfaction" (btw, do you mean by this "failure to satisfy preferences" or "antisatisfaction of negative preferences"? i.e. if I like playing video games and I don't get to play video games, am I suffering? Or am I only suffering if I am having experiences which I explicitly dislike, rather than simply an absence of experiences I like? Or do you claim those are the same thing?).

Comment author: TheOtherDave 14 June 2013 08:57:50PM 1 point [-]

I can't speak for Raemon, but I would certainly say that the condition described by "I like playing video games and am prohibited from playing video games" is a trivial but valid instance of the category /suffering/.

Is the difficulty that there's a different word you'd prefer to use to refer to the category I'm nodding in the direction of, or that you think the category itself is meaningless, or that you don't understand what the category is (reasonably enough; I haven't provided nearly enough information to identify it if the word "suffering" doesn't reliably do so), or something else?

I'm usually indifferent to semantics, so if you'd prefer a different word, I'm happy to use whatever word you like when discussing the category with you.

Comment author: SaidAchmiz 14 June 2013 09:28:45PM *  0 points [-]

... or that you don't understand what the category is (reasonably enough; I haven't provided nearly enough information to identify it if the word "suffering" doesn't reliably do so)

That one. Also, what term we should use for what categories of things and whether I know what you're talking about is dependent on what claims are being made... I was objecting to Zack_M_Davis's claim, which I take to be something either like:

"We humans have categories of experiences called 'pain' and 'suffering', which we consider to be bad. These things are implemented in our brains somehow. If we take that implementation and put it in another kind of brain (alternatively: if we find some other kind of brain where the same or similar implementation is present), then this brain is also necessarily having the same experiences, and we should consider them to be bad also."

or...

"We humans have categories of experiences called 'pain' and 'suffering', which we consider to be bad. These things are implemented in our brains somehow. We can sensibly define these phenomena in an implementation-independent way, then if any other kind of brain implements these phenomena in some way that fits our defined category, we should consider them to be bad also."

I don't think either of those claims are justified. Do you think they are? If you do, I guess we'll have to work out what you're referring to when you say "suffering", and whether that category is relevant to the above issue. (For the record, I, too, am less interested in semantics than in figuring out what we're referring to.)

Comment author: TheOtherDave 15 June 2013 02:44:54AM *  -1 points [-]

I don't think either of those claims are justified. Do you think they are?

There are a lot of ill-defined terms in those claims, and depending on how I define them I either do or don't. So let me back up a little.

Suppose I prefer that brain B1 not be in state S1.
Call C my confidence that state S2 of brain B2 is in important ways similar to B1 in S1.
The higher C is, the more confident I am that I prefer B2 not be in S2. The lower C is, the less confident I am.

So if you mean taking the implementation of pain and suffering (S1) from our brains (B1) and putting/finding them or similar (C is high) implementations (S2) in other brains (B2), then yes, I think that if (S1) pain and suffering are bad (I antiprefer them) for us (B1), that's strong but not overwhelming evidence that (S2) pain and suffering are bad (I antiprefer them) for others (B2).

I don't actually think understanding more clearly what we mean by pain and suffering (either S1 or S2) is particularly important here. I think the important term is C.

As long as C is high -- that is, as long as we really are confident that the other brain has a "same or similar implementation", as you say, along salient dimensions (such as manifesting similar subjective experience) -- then I'm pretty comfortable saying I prefer the other brain not experience pain and suffering. And if (S2,B2) is "completely identical" to (S1,B1), I'm "certain" I prefer B2 not be in S2.

But I'm not sure that's actually what you mean when you say "same or similar implementation." You might, for example, mean that they have anatomical points of correspondence, but you aren't confident that they manifest similar experience, or something else along those lines. In which case C gets lower, and I become uncertain about my preferences with respect to (B2,S2).
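(As a side note, the confidence-weighted scheme above can be sketched in a few lines of code. This is just an illustration of the structure of the argument, not anything from the thread; the function name and the numeric values are invented for the example.)

```python
def derived_antipreference(base_antipreference: float,
                           similarity_confidence: float) -> float:
    """How strongly I antiprefer brain B2 being in state S2, given how
    strongly I antiprefer the reference state (B1, S1) and my confidence C
    that S2 is relevantly similar to S1."""
    assert 0.0 <= similarity_confidence <= 1.0
    return base_antipreference * similarity_confidence

# High C (e.g. an anatomically and behaviorally similar brain): the derived
# antipreference approaches the original. Low C: it fades toward indifference.
high_c = derived_antipreference(1.0, 0.9)
low_c = derived_antipreference(1.0, 0.1)
```

The point of the sketch is just that the conclusion scales smoothly with C rather than switching on or off: "mere evidence" about similarity moves the derived preference by degrees.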

Comment author: SaidAchmiz 15 June 2013 03:33:32AM 1 point [-]

Suppose I prefer that brain B1 not be in state S1.
Call C my confidence that state S2 of brain B2 is in important ways similar to B1 in S1.
The higher C is, the more confident I am that I prefer B2 not be in S2. The lower C is, the less confident I am.

Is brain B1 your brain in this scenario? Or just... some brain? I ask because I think the relevant question is whether the person whose brain it is prefers that brain Bx be or not be in state Sx, and we need to first answer that, and only then move on to what our preferences are w.r.t. other beings' brain states.

Anyway, it seemed to me like the claim that Zack_M_Davis was making was about the case where certain neural correlates (or other sorts of implementation details) of what we experience as "pain" and "suffering" (which, for us, might usefully be operationalized as "brain states we prefer not to be in") are found in other life-forms, and we thus conclude that a) these beings are therefore also experiencing "pain" and "suffering" (i.e. are having the same subjective experiences), and b) that these beings, also, have antipreferences about those brain states...

Those conclusions are not entailed by the premises. We might expect them to be true for evolutionarily related life-forms, but my objection was to the claim of necessity.

Or, he could have been making the claim that we can usefully describe the category of "pain" and/or "suffering" in ways that do not depend on neural correlates or other implementation details (perhaps this would be a functional description of some sort, or a phenomenological one; I don't know), and that if we then discover phenomena matching that category in other life-forms, we should conclude that they are bad.

I don't think that conclusion is justified either... or rather, I don't think it's instructive. For instance, Alien Species X might have brain states that they prefer not to be in, but their subjective experience associated with those brain states bears no resemblance in any way to anything that we humans experience as pain or suffering: not phenomenologically, not culturally, not neurally, etc. The only justification for referring to these brain states as "suffering" is by definition. And we all know that arguing "by definition" makes a def out of I and... wait... hm... well, it's bad.

Comment author: TheOtherDave 15 June 2013 03:48:46AM *  -1 points [-]

My brain is certainly an example of a brain that I prefer not be in pain, though not the only example.

My confidence that brain B manifests a mind that experiences pain and suffering given certain implementation (or functional, or phenomenological, or whatever) details depends a lot on those details. As does my confidence that B's mind antiprefers the experiential correlates of those details. I agree that there's no strict entailment here, though, "merely" evidence.

That said, mere evidence can get us pretty far. I am not inclined to dismiss it.