Comment author: rohern 24 February 2011 05:25:07AM 0 points

I think an important part of our disagreement, at least for me, is that you are interested in people generally and in morality as it is now --- at least your examples come from this set --- while I am trying to restrict my inquiry to the most rational type of person, so that I can discover a morality that all rational people can be brought to through reason alone, without need for error or chance. If such a morality does not exist among people generally, then I have no interest in the morality of people generally. To bring it up is a non sequitur in such a case.

I do not see that people coming to agree on things that are demonstrably false is a point against me. This fact is precisely why I am turned off by the current state of ethical thought, as it seems infested with examples of this circumstance. I am not impressed by people who will agree to an intellectual point because it is convenient. I put truth first; at least, that is the point of this inquiry.

I am asking a single question: Is there (or can we build) a morality that can be derived with logic from first principles that are obvious to everyone and require no Faith?

Comment author: Prolorn 25 February 2011 07:23:48AM 0 points

I am asking a single question: Is there (or can we build) a morality that can be derived with logic from first principles that are obvious to everyone and require no Faith?

Perhaps you've already encountered this, but your question calls to mind the following piece by Yudkowsky: No Universally Compelling Arguments, which is near the start of his broader metaethics sequence.

I think it's one of Yudkowsky's better articles.
(On a tangential note, I'm amused to find on re-reading it that I had almost the exact same reaction to The Golden Transcendence, though I had no conscious recollection of the connection when I got around to reading it myself.)

Comment author: aausch 04 December 2009 05:21:13AM 0 points

Do you consider a mind that has been tortured identical to one that has not? Won't the torture process introduce non-trivial differences, to the point where the minds no longer count as identical?

Comment author: Prolorn 04 December 2009 08:11:44AM 0 points

It's not a binary distinction. If an identical copy were made of one mind and tortured while the other instance remained untortured, the two would begin to differentiate into distinct individuals. Since the rate of divergence would increase with the degree of difference in experience, I imagine torture vs. non-torture would spark a fairly rapid divergence.

I haven't had the opportunity to read Bostrom's paper in full, but in the little I did read, Bostrom thought it was "prima facie implausible and farfetched to maintain that the wrongness of torturing somebody would be somehow ameliorated or annulled if there happens to exist somewhere an exact copy of that person’s resulting brain-state." That is, it seemed obvious to Bostrom that having two identical copies of a tortured individual must be worse than having one instance of a tortured individual (twice as bad, in fact, if I interpret him correctly). That is not at all obvious to me, as I would consider two (synchronized) copies to be one individual in two places. The only way in which having two copies seems worse to me is the greater risk of divergence, leading to increasingly distinct instances.

Are you asking whether it would be better to create a copy of a mind and torture the copy, rather than not creating a copy and just getting on with the torture? Well, yes. It's certainly worse than not torturing at all, but it's not as bad as torturing a single mind. Initially, the individual would half-experience torture. Fairly rapidly thereafter, the single individual would separate into two minds, one being tortured and one not. This is arguably still better, from the perspective of the pre-torture mind, than the single-mind, single-torture scenario, since at least half of the mind's downstream experiences are not-tortured, versus 100% torture in the other case.

If this doesn't sound convincing, consider a twist: would you choose to copy and thereby rescue a mind-state from someone about to, say, be painfully sucked into a black hole, or would it be ethically meaningless to create a non-sucked-into-a-black-hole copy? Granted, it would be best not to have anyone sucked into a black hole at all, but suppose you had to choose?

Comment author: Stuart_Armstrong 03 December 2009 01:56:45PM 0 points

I didn't phrase clearly what I meant by the cut-off.

Let D be some objective measure of distance between individuals (probably related to Kolmogorov complexity). Let M be my moral measure of distance, and assume the cut-off is 1.

Then I would set M(a,b) = D(a,b) whenever D(a,b) < 1, and M(a,b) = 1 whenever D(a,b) >= 1. The discontinuity is in the derivative, not the value.
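The capped measure described above is equivalent to M(a,b) = min(D(a,b), 1). A minimal sketch in Python, assuming some objective distance D is given (the function names and the cutoff default are mine, for illustration only):

```python
def moral_distance(d, cutoff=1.0):
    """M(a, b) = D(a, b) when D(a, b) < cutoff, else cutoff.

    Takes d = D(a, b) as a precomputed float, since D itself
    (e.g. a Kolmogorov-complexity-based measure) is assumed given.
    """
    return min(d, cutoff)

# The value is continuous at the cutoff; only the derivative jumps there.
print(moral_distance(0.5))   # below the cutoff: M = D = 0.5
print(moral_distance(0.99))  # just below the cutoff: 0.99
print(moral_distance(3.7))   # above the cutoff: capped at 1.0
```

Because the cap is applied to the value rather than subtracting anything, M agrees with D everywhere below 1 and flattens out above it, which is the sense in which the discontinuity lives in the derivative, not the value.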

Comment author: Prolorn 04 December 2009 07:31:23AM 0 points

That doesn't resolve quanticle's objection. Your cutoff still implies that a reasonably individualistic human is just as valuable as, say, the only intelligent alien being in the universe. Would you accept that conclusion?

Comment author: Eliezer_Yudkowsky 09 April 2009 01:23:51AM 7 points

Suicides get autopsied automatically, at least in the US.

Comment author: Prolorn 09 April 2009 09:01:04PM 1 point

Does this apply to legal assisted suicide within the US as well?