Comment author: drethelin 27 January 2013 10:19:53AM 1 point

I'm not strongly emotionally motivated to reduce suffering in general, but I recognize that my suffering and other people's are instances of suffering in general, so I think it's good policy to try to reduce world-suck. This is reasonably approximated by saying I would like to reduce unhappiness, increase happiness, or some such thing.

Comment author: Adriano_Mannino 28 January 2013 01:16:27PM *  1 point

What is it that you are strongly motivated to do in this world, then? Are you strongly motivated to reduce/prevent drethelin_tomorrow's suffering, for instance?

Comment author: Benito 26 January 2013 11:50:44PM -1 points

Okay.

Either, if we all knew more, thought faster, and understood ourselves better, we would decide to farm animals, or we wouldn't. For people to be so fundamentally different that they would still disagree under those conditions, they would need massively complex divergent adaptations/mutations, which are vastly improbable. Even someone who sits down and thinks long and hard about an ethical dilemma can very easily be wrong. To say that an AI could not coherently extrapolate our volition is to say we're so fundamentally unlike one another that we would not choose to work for a common good even if we had the choice.

Comment author: Adriano_Mannino 28 January 2013 01:04:57PM 7 points

But why run this risk? The genuine moral motivation of typical humans seems to be weak. That might even be true of the people working for human and non-human altruistic causes and movements. What if what they really want, deep down, is a sense of importance or social interaction or whatnot?

So why not just go for utilitarianism? By definition, that's the safest option for everyone to whom things can matter/be valuable.

I still don't see what could justify coherently extrapolating "our" volition only. The only non-arbitrary "we" is the community of all minds/consciousnesses.

Comment author: Adriano_Mannino 18 August 2012 04:45:45PM *  6 points

It's been asserted here that "the core distinction between avoidance, pain, and awareness of pain works" and that "there is such a thing as bodily pain we're not consciously aware of". This, I think, blurs the most important distinction there is in the world: the one between what is a conscious/mental state and what is not. Talk of "sub-conscious/non-conscious mental states" confuses things too: if it's not conscious, then it's not a mental state. It might cause one or be caused by one, but it isn't a mental state.

Regarding the concept of "being aware of being in pain": I can understand it as referring to a second-order mental state, a thought with the content that there is an unpleasant mental state going on (pain). But in that sense, it often happens that I am not (second-order) aware of my stream of consciousness because "I" am totally immersed in it, so to speak. But the absence of second-order mental states does not change the fact that first-order mental states exist and that it feels like something (and feels good or bad) to be in them (or rather: to be them). The claim that "no creature was ever aware of being in pain" suggests that for most non-human animals, it doesn't feel like anything to be in pain and that, therefore, such pain-states are ethically insignificant. As I said, I reject the notion of "pain that doesn't consciously feel like anything" as confused: If it doesn't feel like anything, it's not a mental state and it can't be pain. And there is no reason for believing that first-order (possibly painful and thus ethically significant) mental states require second-order awareness. At the very least, we should give non-human animals the benefit of the doubt and assign a significant probability to their brain states being mental and possibly painful and thus ethically significant.

Last but not least, there is also an argument (advanced e.g. by Dawkins) to the effect that pain intensity and frequency might even be greater in less intelligent creatures: "Isn't it plausible that a clever species such as our own might need less pain, precisely because we are capable of intelligently working out what is good for us, and what damaging events we should avoid? Isn't it plausible that an unintelligent species might need a massive wallop of pain, to drive home a lesson that we can learn with less powerful inducement? At very least, I conclude that we have no general reason to think that non-human animals feel pain less acutely than we do, and we should in any case give them the benefit of the doubt."

Comment author: Adriano_Mannino 04 July 2012 01:23:15AM *  11 points

Hi all, I'm a lurker of about two years and have been wanting to contribute here and there - so here I am. I specialize in ethics and have further interests in epistemology and the philosophy of mind.

LessWrong is (by far) the best web resource on step-by-step rationality. I've been referring all aspiring rationalists to this blog as well as all the people who urgently need some rationality training (and who aren't totally lost). So thanks, you're doing an awesome job with this rationality dojo!
