All of Prolorn's Comments + Replies

Prolorn

I am asking a single question: Is there (or can we build) a morality that can be derived with logic from first principles that are obvious to everyone and require no Faith?

Perhaps you've already encountered this, but your question calls to mind the following piece by Yudkowsky: No Universally Compelling Arguments, which is near the start of his broader metaethics sequence.

I think it's one of Yudkowsky's better articles.
(On a tangential note, I'm amused to find on re-reading it that I had almost the exact same reaction to The Golden Transcendence, thou...

Prolorn

It's not a binary distinction. If an identical copy were made of one mind and tortured while the other instance remained untortured, they would start to differentiate into distinct individuals. Since the rate of divergence would increase with the degree of difference in experience, I imagine torture vs. non-torture would spark a fairly rapid divergence.

I haven't had the opportunity to commit to reading Bostrom's paper, but in the little I did read, Bostrom thought it was "prima facie implausible and farfetched to maintain that the wrongness of torturing somebody wo...

aausch
Looks to me like Bostrom is trying to make the point that duplication of brain-states, by itself and devoid of other circumstances, is not sufficient to make the act of torture moral, or less harmful. After reading through the paper, it looks to me like we've moved outside of what Bostrom was trying to address here. If synchronized brains lose individuality, and/or an integration process takes place that leads to a brain-state which has learned from the torture experience but remains unharmed, that moves the argument outside the realm of what Bostrom was trying to address. I agree with Bostrom on this point.

It looks to me like, if Yorik is dismissing 49 tortured copies as inconsequential, he must also show that there is a process by which the knowledge accumulated by each of the 49 copies is synchronized and integrated into the one remaining copy, without causing that copy (or anyone else, for that matter) any harm. Or there must be some other assumptions he is making about the copies that remove the damage caused by copying - copying alone can't remove responsibility for the killing of the copies.

For the black-hole example, copying the person about to be sucked into the hole is not ethically meaningless. The value of the copy, though, comes from its continued existence. The act of copying does not remove moral consequences from the sucking-into-the-black-hole act. If there is an agent X who pushed the copy into the black hole, that agent is just as responsible for his actions if he doesn't copy the individual at the last minute as he would be if he does make a copy.
aausch
Can you please point me to Bostrom's paper? I can't seem to find the reference. I'm very curious whether the quote is better fleshed out in context. As it stands here, it looks a lot like it's affected by anthropomorphic bias (or maybe it references a large number of hidden assumptions that I don't share, around both the meaning of individuality and the odds that intelligences which regularly undergo synchronization would remain similar to ours).

I can imagine a whole space of real-life, many-integrated-synchronized-copies scenarios where the process of creating a copy and torturing it for kicks would be accepted, commonplace, and would not cause any sort of moral distress. To me, there is a point where torture and/or destruction of a synchronized, integrated, identical copy transitions into the same moral category as body piercings and tattoos.
Prolorn

That doesn't resolve quanticle's objection. Your cutoff still suggests that a reasonably individualistic human is just as valuable as, say, the only intelligent alien being in the universe. Would you agree with that conclusion?

Stuart_Armstrong
No. I grant special status to exceedingly unique minds, and to the last few of a given species. But human minds are very similar to each other, and granting different moral status to different humans is a very dangerous game. Here, I am looking at the practical effects of moral systems (Eliezer's post on "running on corrupted hardware" is relevant). The theoretical gains of treating humans as having varying moral status are small; the practical risks are huge (especially as our societies, through cash, reputation, and other factors, are pretty good at distinguishing between people without having to further grant them different moral status). One cannot argue: "I agree with moral system M, but M has consequence S, and I disagree with S". Hence I cannot agree with granting people different moral status once they are sufficiently divergent.
Prolorn

Does this apply to legal assisted suicide within the US as well?