buybuydandavis comments on Ontological Crisis in Humans - Less Wrong

41 points · Post author: Wei_Dai, 18 December 2012 05:32PM


Comment author: buybuydandavis 19 December 2012 12:52:54AM 2 points

When is a fetus or baby capable of feeling pain (that has moral disvalue)?

At some point between being a blob of cells and being a fully formed baby popping out of Mommy, with the disvalue on a sliding scale. Where is the crisis?

For non-human animals, I disapprove of torturing animals for kicks, but am fine with using animals for industrial purposes, including food and medical testing. No crisis here either.

In life, I don't kill people, and I also don't alleviate a great deal of death and suffering that I might. I eat meat, I wear leather, and I support abortion rights. No crisis. And I don't see a lot of other people in crisis over such things either.

Comment author: Wei_Dai 19 December 2012 05:25:04AM * 5 points

No crisis. And I don't see a lot of other people in crisis over such things either.

I explained in the post why "ontological crisis" is a problem that people mostly don't have to deal with right away, but will have to eventually, in the paragraph that starts with "To fully confront the ontological crisis that we face". Do you have any substantive disagreements with my post, or just object to the term "crisis" as being inappropriate for something that isn't a present emergency for most people? If it's the latter, I chose it for historical reasons, namely because Peter de Blanc already used it to name a similar class of problems in AIs.

Comment author: buybuydandavis 19 December 2012 09:00:50PM * 0 points

In the paragraph you refer to:

Nevertheless, this approach hardly seems capable of being extended to work in a future where many people may have nontraditional mind architectures, or have a zillion copies of themselves running on all kinds of strange substrates, or be merged into amorphous group minds with no clear boundaries between individuals.

Maybe we have no substantive disagreement. If your point is that a million-copy superintelligence will have issues in morality because of ontologies that we don't currently have, then I agree. Me, I think it's kind of cheeky to be prescribing solutions for the million-copy superintelligence - I think he's smarter than I am, doesn't need my help much, and may not ever exist anyway. I'm not here to rain on that parade, but I'm not interested in joining it either.

However, you seemed to be using present tense for the crisis, and I just don't see one now. Real people now don't have big complicated ontological problems lacking clear solutions. That was my point.

The abortion example was appropriate, as that is one issue where currently many people have a problem, but their problem is usually just essentialism, and there is a cure for it - just knock it off.

I find discussions of metaethics interesting, particularly in terms of the conceptual confusion involved. It seemed that you were getting at such issues, but I couldn't locate a concrete and currently relevant issue of that type in your post. So I directly asked for the concretes applicable now. You gave a couple. I don't find either particularly problematic.

Comment author: Eugine_Nier 20 December 2012 02:15:16AM 1 point

The abortion example was appropriate, as that is one issue where currently many people have a problem, but their problem is usually just essentialism, and there is a cure for it - just knock it off.

I don't see how this addresses the problem.

Comment author: Jayson_Virissimo 19 December 2012 03:31:21AM 4 points

Are you suggesting that AI will avoid cognitive dissonance by using compartmentalization like humans do?

Comment author: Nisan 19 December 2012 03:49:39AM 2 points

It's easy to decide that the moral significance of a fetus changes gradually from conception to birth; it takes a bit more thought to quantify the significance. Abstractly, at what stage of development is the suffering of 100 fetuses commensurate with the suffering of a newborn? 1 month of gestation? 4? 9? More concretely, if you're pregnant, you'll have to decide not only whether the phenomenological point of view of your unborn child should be taken into account in your decisionmaking, but you'll have to decide in what way and to what degree it should be taken into account.

It's not clear whether your disapproval of animal torture is for consequentialist or virtue-ethics reasons, or whether it is a moral judgment at all; but in either case there are plenty of everyday cases of borderline animal exploitation. (Dogfighting? Inhumane flensing?) And maybe you have a very specific policy about which practices you support and which you don't. But why that policy?

Perhaps "crisis" is too dramatic in its connotations, but you should at least give some thought to the many moral decisions you make every day, and decide whether, on reflection, you endorse the choices you're making.

Comment author: buybuydandavis 19 December 2012 04:13:24AM 2 points

Abstractly, at what stage of development is the suffering of 100 fetuses commensurate with the suffering of a newborn? 1 month of gestation? 4? 9?

Concretely, how many people that you know have faced a situation where that calculation is relevant?

More concretely, if you're pregnant, you'll have to decide not only whether the phenomenological point of view of your unborn child should be taken into account in your decisionmaking, but you'll have to decide in what way and to what degree it should be taken into account.

I don't know how much you'll have to decide on how you decide. You'll decide what to do based on your valuations - I don't think the valuations themselves involve a lot of deciding. I don't decide that ice cream is yummy; I taste it and it is.

And yes, I think it's a good policy to review your decisions and actions to see if you're endorsing the choices you're making. But that's not primarily an issue of suspect ontologies, but of just paying attention to your choices.

Comment author: Nisan 21 December 2012 05:47:52PM 0 points

I don't know how much you'll have to decide on how you decide. You'll decide what to do based on your valuations - I don't think the valuations themselves involve a lot of deciding. I don't decide that ice cream is yummy; I taste it and it is.

I don't know what you mean by this, but maybe there's no point in further discussion because we seem to agree that one should reflect on one's moral decisions.