Comment author: Wei_Dai 28 August 2013 10:59:32PM 3 points [-]

Do you have an ethical theory that tells you, given a collection of atoms, how much hedonic value it contains? I guess the answer is no, since AFAIK nobody is even close to having such a theory. Going from our current state of knowledge to having such a theory (and knowing that you're justified in believing in it) would represent a huge amount of philosophical progress. Don't you think that this progress would also give us a much better idea of which of various forms of consequentialism is correct (if any of them are)? Why not push for such progress, instead of your current favorite form of consequentialism?

In response to comment by Wei_Dai.
Comment author: Jabberslythe 29 August 2013 07:28:23AM 1 point [-]

I currently have a general sense of what it would look like, but definitely not a naturalistic definition of what I value. I can think of a couple of different ways that suffering and happiness could turn out to work that would alter what sort of wireheading or hedonium I would want to implement, but not drastically; i.e., it would not make me reject the idea.

I'm not sure that people would generally start wanting the same sorts of things I do if they had this knowledge, and encouraging other people to do research so that I could later get access to it would have a poor rate of return. So it seems like a better idea to encourage people to implement these somewhat emotionally salient things when they are able to, rather than working on very expensive science myself. I'm not sure I'd be around to see the time when it might be applied, and even then I'm not sure how likely most people would be to implement it.

Having said that, since many scientists don't have these values, there will be some low-hanging fruit in looking at and applying previous research, and I intend to do that. I just won't make a career out of it.

I think that moral realism is a non-starter, so I ignored that part of your question, but I can go into detail on that if you would like.

In response to comment by Ruairi.
Comment author: peter_hurford 28 August 2013 09:25:18PM 2 points [-]

However, if you value several things, why not have wireheads experience them in succession?

I value "genuinely real" experiences. Or, rather, I want sufficiently self-aware and intelligent people to interact with other sufficiently self-aware and intelligent people (though I am fine if these people are computer simulations). This couldn't be replaced by wireheading, though I do think it could be done (optimally, in fact) via some "utilitronium" or "computronium".

In response to comment by peter_hurford.
Comment author: Jabberslythe 28 August 2013 10:24:50PM 2 points [-]

Would you be up for creating wireheaded minds if they didn't care about interacting with other people?

I'm not sure that interacting with people is the most important part of my life, and I'd be fine living a life without that feature provided it was otherwise good.

Comment author: Manfred 28 July 2013 09:34:35PM *  4 points [-]

Some may be tempted to think about the concept of "species" as if it were a fundamental concept, a Platonic form.

The biggest improvement I would like to see in this post is engagement with opposing arguments more realistic than "humans are a platonic form." Currently you just knock down a very weak argument or two and then rush to a conclusion.

EDIT: whoops, I missed the point, which is to only argue against speciesism. My bad. Edited out a misplaced "argument from future potential," which is what Jabberslythe replied to.

However, you really do only knock down weak arguments. What if we simply define categories more robustly than "platonic forms," as philosophers have done just fine since at least Wittgenstein, and as is covered on this very blog? Then there's no point in talking about platonic forms.

For the argument that "one will be human and the next will not," how do you deal with the unreliability of the sorites paradox as a philosophical test? Or what if we use the more general continuous model of speciesism, thus eliminating sharp lines? You don't just have to avoid deliberately strawmanning; you have to actively steelman :)

Comment author: Jabberslythe 28 July 2013 09:48:58PM *  0 points [-]

Those two babies differ in that they have different futures, so it would not be wrong to treat them differently such that suffering is minimized (and you should). And it would not be speciesist to do so, because there is that difference.

Comment author: jkaufman 28 July 2013 09:09:16PM 0 points [-]

do the two situations not seem equivalent

I'm sorry, I'm confused. Which two situations?

we could just stipulate that you would be killed after in both situations so long term memories wouldn't be a factor

I see. Makes sense. I was giving long-term memory formation as an example of a way you could remove part of my self and decrease how much I would object to being tortured, but it's not the only way.

Comment author: Jabberslythe 28 July 2013 09:21:58PM *  1 point [-]

I'm sorry, I'm confused. Which two situations?

A) Being tortured as you are now

B) Having your IQ and cognitive abilities lowered then being tortured.

EDIT:

I am asking because it is useful to consider pure self-interest: it seems like a failure of a moral theory if it suggests people act against their self-interest without some compensating goodness. If I want to eat an apple but my moral theory says I shouldn't, even though doing so wouldn't harm anyone else, that seems like a point against that moral theory.

I see. Makes sense. I was giving long-term memory formation as an example of a way you could remove part of my self and decrease how much I would object to being tortured, but it's not the only way.

Different cognitive abilities would matter in some ways for how much suffering is actually experienced, but not as much as most people think. There are also situations where lower cognitive ability could increase the amount an animal suffers: while a chicken is being tortured, it cannot really hope that the situation will change.

Comment author: jkaufman 28 July 2013 08:48:19PM *  2 points [-]

Is this because you expect the torture wouldn't be as bad if that happened or because you would care less about yourself in that state? Or a combination?

If I had the mental capacity of a chicken it would not be bad to torture me, both because I wouldn't matter morally and because I wouldn't be "me" anymore in any meaningful sense.

What if you were killed immediately afterwards

If you offered me the choice between:

A) 50% chance you are tortured and then released, 50% chance you are killed immediately

B) 50% chance you are tortured and then killed, 50% chance you are released immediately

I would strongly prefer B. Is that what you're asking?

Comment author: Jabberslythe 28 July 2013 09:04:24PM 0 points [-]

If I had the mental capacity of a chicken it would not be bad to torture me, both because I wouldn't matter morally and because I wouldn't be "me" anymore in any meaningful sense.

If not morally, do the two situations not seem equivalent in terms of your non-moral preference for either? In other words, would you prefer one over the other in purely self interested terms?

I would strongly prefer B. Is that what you're asking?

I was just making the point that if your only reason for thinking it would be worse for you to be tortured now was that you would suffer more overall through long-term memories, we could just stipulate that you would be killed afterward in both situations, so long-term memories wouldn't be a factor.

Comment author: jkaufman 28 July 2013 08:30:16PM *  19 points [-]

Some might be willing to bite the bullet at this point, trusting some strongly held ethical principle of theirs (e.g. A, B, C, D, or E above), to the conclusion of excluding humans who lack certain cognitive capacities from moral concern. One could point out that people's empathy and indirect considerations about human rights, societal stability and so on, will ensure that this "loophole" in such an ethical view almost certainly remains without consequences for beings with human DNA. It is a convenient Schelling point after all to care about all humans (or at least all humans outside their mother's womb).

This is pretty much my view. You dismiss it as unacceptable and absurd, but I would be interested in more detail on why you think that.

a society in which some babies were (factory-)farmed would be totally fine as long as the people are okay with it

This definitely hits the absurdity heuristic, but I think it is fine. The problem with the Babyeaters in Three Worlds Collide is not that they eat their young but that "the alien children, though their bodies were tiny, had full-sized brains. They could talk. They protested as they were eaten, in the flickering internal lights that the aliens used to communicate."

If I was told that some evil scientist would first operate on my brain to (temporarily) lower my IQ and cognitive abilities, and then torture me afterwards, it is not like I will be less afraid of the torture or care less about averting it!

I would. Similarly if I were going to undergo torture I would be very glad if my capacity to form long term memories would be temporarily disabled.

(Speciesism has always seemed like a straw man to me. How could someone with a reductionist worldview think that species classification matters morally? The "why species membership really is an absurd criterion" section is completely reasonable, reasonable enough that I have trouble seeing non-religious arguments against it.)

Comment author: Jabberslythe 28 July 2013 08:38:54PM 4 points [-]

I would. Similarly if I were going to undergo torture I would be very glad if my capacity to form long term memories would be temporarily disabled.

Is this because you expect the torture wouldn't be as bad if that happened or because you would care less about yourself in that state? Or a combination?

Similarly if I were going to undergo torture I would be very glad if my capacity to form long term memories would be temporarily disabled.

What if you were killed immediately afterwards, so long term memories wouldn't come into play?

In response to Why Eat Less Meat?
Comment author: TabAtkins 27 July 2013 03:42:32PM 0 points [-]

As a mostly-vegetarian person myself, I find this article's primary moral point very unconvincing.

Yes, factory farms are terrible, and we should make them illegal. But not all meat is raised on factory farms. Chickens and cattle who are raised ethically (which can still produce decent yields, though obviously less than factory farms) have lower levels of stress hormones than comparable wild animals. We can't measure happiness directly in these low-light animals, but stress hormones are a very good analogue for an enjoyable life, and we know that high levels are directly linked to poorer health outcomes (and thus likely suffering).

It's simply not that hard to raise food animals in a way that makes them better off than wild animals, and so unless you're strongly in the "reform nature" transhumanist strain, ethical animal farming is at least somewhat of a positive over not farming at all.

(I'm personally vegetarian for ecological reasons, and abstain from eating some animals due to a moral compunction about eating things likely to be sentient.)

Comment author: Jabberslythe 28 July 2013 07:42:49PM 0 points [-]

Chickens and cattle who are raised ethically (which can still produce decent yields, though obviously less than factory farms) have lower levels of stress hormones than comparable wild animals.

Do you happen to have a source for this? Not that I particularly doubt this, but it would be useful information.

Comment author: Ruairi 26 July 2013 12:08:42PM *  1 point [-]

They are often given substances to make them grow fast and big; this often leads to problems like their legs breaking.

In response to comment by Ruairi on Why Eat Less Meat?
Comment author: Jabberslythe 27 July 2013 12:05:56AM 0 points [-]

They are also bred to mature faster, and this can lead to similar problems, I think. Manipulating the lighting to affect their circadian rhythm also helps make them mature faster.

In response to Why Eat Less Meat?
Comment author: aelephant 26 July 2013 11:47:41AM 4 points [-]

If dogs & cats were raised specifically to be eaten & not involved socially in our lives as if they were members of the family, I don't think I'd care about them any more than I care about chickens or cows.

This article seems to assume that I oppose all suffering everywhere, which I'm not sure is true. Getting caught stealing causes suffering to the thief and I don't think there's anything wrong with that. I care about chickens & cows significantly less than I care about thieves because thieves are at least human.

Comment author: Jabberslythe 26 July 2013 11:22:22PM -2 points [-]

If you found that you cared much more about your present self than about your future self, you might reflect on that and decide that, because those two selves are broadly similar, you would want to change your mind about this case, even if those selves are not counted as such by your sentiments right now.

This article is trying to get you to undertake similar reflections about pets and humans vs. other animals.

In response to comment by Alicorn on Why Eat Less Meat?
Comment author: MileyCyrus 24 July 2013 08:05:22PM -2 points [-]

What vegetarian things can I eat that won't leave me hungry an hour later?

Comment author: Jabberslythe 26 July 2013 08:15:38PM -1 points [-]

It could be that the vegetarian food you are eating doesn't have much protein in it, or that the protein source doesn't have all the essential amino acids. There is certainly vegetarian food that does have these things; it just takes more knowledge and meal design than for meat diets.

Protein powder can also be helpful for vegetarians (and everyone). I recommend pea protein powder.
