Alicorn comments on What is Eliezer Yudkowsky's meta-ethical theory? - Less Wrong Discussion
Um, I'll suggest that. Killing: generally wrong.
Do you agree with EY on Torture vs Dust Specks? If you agree, would killing one person be justified to save 3^^^3 people from being killed? And if so, would you call that killing right?
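(For concreteness, 3^^^3 is Knuth's up-arrow notation; a sketch from the standard definition, not anything specific to this thread:)

```latex
% Knuth's up-arrow notation, standard definition:
%   3^3   = 3 \uparrow 3 = 27
%   3^^3  = 3 \uparrow\uparrow 3 = 3^{3^3} = 3^{27} = 7{,}625{,}597{,}484{,}987
%   3^^^3 = 3 \uparrow\uparrow\uparrow 3 = 3 \uparrow\uparrow (3 \uparrow\uparrow 3)
\[
3 \uparrow\uparrow\uparrow 3
  \;=\; \underbrace{3^{3^{\cdot^{\cdot^{\cdot^{3}}}}}}_{7{,}625{,}597{,}484{,}987 \text{ threes}}
\]
```

The exact value doesn't matter; the point is that it dwarfs any everyday quantity.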
I say bring on the specks.
I find that topic troubling, and it would be comforting to know how others would decide here. So please allow me to ask another question: would you personally die to save 3^^^3 people from being killed? I thought about it myself, and I would probably do it. But what is the lower bound here? Can I find an answer to such a question by reading the sequences, or at least learn how to come up with my own answer?
I would personally die to save 3^^^3 persons' lives if that were the option presented to me.
The sequences do not comprise a DIY guide to crafting an ethical theory. I came up with mine while I was in grad school for philosophy.
I realize I might have misunderstood moral realism. I thought moral realism proposes that there do exist agent-independent moral laws. What I meant is that nobody would suggest that the proposition 'Killing: generally wrong' is a subvenient property.
I'm pretty sure you are wrong. You have realism confused with 'universality'. Moral realism applies to the situation when you say "It is forbidden that Mary hit John" and I say "It is permissible that Mary hit John". If realism holds, then one of us is in error - one of those two statements is false.
Compare to you thinking Mary is pretty and my disagreeing. Here, neither of us may be in error, because there may be no "fact of the matter" regarding Mary's prettiness. It is just a difference of opinion.
Moral realism states that moral judgments are not just matters of opinion - they are matters of fact.
If you had said 'observer-independent' rather than 'agent-independent', then you would have been closer to the concept of moral realism.
So moral realism is a two-valued logic?
I didn't know there was a difference.
More like "Moral realism is the doctrine stating that moral questions should be addressed using a two-valued logic. As opposed, say, to aesthetic questions."
So moral realism proposes that there are sorts of moral formalisms whose truth values are observer-independent, because their logic is consistent, but not agent-independent, because moral formalisms are weighted subjectively based on the preferences of agents. Therefore we have a set of moral formalisms that are true facts about the world, endorsed by some agents but weighted differently by different agents.
If you could account for all moral formalisms and how heavily they are weighted by how many agents, would this constitute some sort of universal utility function, whose equilibrium equals a world-state that could be called right?
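To make my question concrete, here is a toy sketch of that aggregation. Every formalism, weight, and world-state in it is an invented placeholder, not part of anyone's actual theory:

```python
# Toy sketch only: aggregate agent-weighted "moral formalisms" into one
# utility function over world-states. All names and numbers are invented
# for illustration; nothing here is a real ethical calculus.

# Each formalism scores a world-state.
formalisms = {
    "no_killing": lambda w: -100.0 * w["killings"],
    "fun_maximizing": lambda w: 1.0 * w["fun"],
}

# Each agent weights each formalism by personal preference.
agent_weights = [
    {"no_killing": 0.9, "fun_maximizing": 0.1},
    {"no_killing": 0.5, "fun_maximizing": 0.5},
]

def universal_utility(world):
    """Sum every formalism's score, weighted by every agent's preference."""
    return sum(
        weights[name] * score(world)
        for weights in agent_weights
        for name, score in formalisms.items()
    )

# The "equilibrium" would be the world-state that maximizes the aggregate.
candidate_worlds = [
    {"killings": 0, "fun": 5},
    {"killings": 1, "fun": 50},
]
best = max(candidate_worlds, key=universal_utility)
print(best)  # -> {'killings': 0, 'fun': 5} with these toy numbers
```

The question is whether anything like universal_utility, taken over all agents and all formalisms rather than two toy ones, would deserve the name.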
I'm afraid that I am still not being understood. First, the concepts of universalism and moral realism still make sense even if agent preferences have absolutely no impact on morality. Second, the notion that 'moral formalisms' can be true or false makes me squirm with incomprehension. Third, the notion that true formalisms get weighted in some way by agents leads me to think that you fail to understand the terms "true" and "false".
Let me try a different example. Someone who claims that correct moral precepts derive their justification from the Koran is probably a moral realist. He is not a universalist though, if he says that Allah assigns different duties and obligations to men and women - to believers and non-believers.
What do you mean by "agent-independent"?
That two agents can differ in their behavior and perception of actions, but that any fundamental disagreement about a set of moral laws can be considered a failure mode, since those laws are implied by the lower levels of the universe the two agents are part of. I thought that moral realism proposes that 'Killing: generally wrong' is on the same level as 'Faster than light travel: generally wrong' - that moral laws are intersubjectively verifiable and subject to empirical criticism. I didn't think that anyone actually believes that 'Killing: generally wrong' can be derived as a universal and optimal strategy.
I'm pretty sure I don't understand anything you just said. Sorry.
Could you elaborate on your reasoning behind the proposition 'Killing: generally wrong'? Maybe that would allow me to explain myself, and especially to reformulate my question of whether there is anyone who thinks that killing is wrong regardless of an agent's preferences.
Persons have a right not to be killed; persons who have waived or forfeited that right, and non-persons, are still entities which should not be destroyed absent adequate reason. Preferences come in with the "waived" bit, and the "adequate reason" bit, but even if nobody had any preferences (...somehow...) then it would still be wrong to kill people who retain their right not to be killed (this being the default, assuming the lack of preferences doesn't paradoxically motivate anyone to waive their rights), and still be wrong to kill waived-rights or forfeited-rights persons, or non-persons, without adequate reason. I'm prepared to summarize that as "Killing: generally wrong".
Fascinating. This view is utterly incomprehensible to me. I mean, I understand what you are saying, but I just can't understand how or why you would believe such a thing.
The idea of "rights" as things that societies enact makes sense to me, but universal rights? I'd be interested to hear on what basis you believe this. (A link or other reference is fine, too.)
I derived my theory by inventing something that satisfied as many of my intuitive desiderata about an ethical theory as possible. It isn't perfect, or at least not yet (I expect to revise it as I think of better ways to satisfy more desiderata), but I haven't found better.
What's the justification for taking your intuitive desiderata as the most (sole?) important factor in deciding on an ethical theory?
As opposed to any of many other strategies, such as finding the theory which, if followed, would result in the greatest amount of (human?) fun, or finding the theory that would be accepted by the greatest number of people who are almost universally (> 99%) regarded as virtuous, or ...
If we taboo for a sec the words "right", "wrong", "should" and "should not", how would I best approximate the concept of universal rights?
Here's how: "Nearly everyone has a sense of personal sovereignty, in the sense that there exist elements of the universe that a person considers as belonging to said person -- so that if another agent acts to usurp or wrest control of such elements, a strong emotion of injustice is provoked. This sense of personal sovereignty will often conflict with that of others, especially if the sense of injustice is inflated to include physical or intellectual property; but if we minimize the territories to certain natural boundaries (like people's bodies and minds), we can aggregate the individual territories into a large map of the universe, so that it will have huge grey disputed areas but also some bright areas clearly labelled 'Alex's body belongs to Alex's sovereignty' or 'Bob's body falls to Bob's sovereignty'."
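A minimal sketch of that aggregation; all the claims below are invented placeholders, and the element names are just illustrative strings:

```python
# Hypothetical sketch: each person claims some set of "elements of the
# universe"; elements claimed by exactly one person stay clearly labelled,
# while overlapping claims fall into the grey disputed areas.

claims = {
    "Alex": {"Alex's body", "Alex's mind", "the old oak tree"},
    "Bob":  {"Bob's body", "Bob's mind", "the old oak tree"},
}

def aggregate(claims):
    """Split all claimed elements into clearly-owned and disputed sets."""
    clear, disputed = {}, set()
    for person, territory in claims.items():
        for element in territory:
            if element in disputed:
                continue
            if element in clear and clear[element] != person:
                del clear[element]      # conflicting claim: now disputed
                disputed.add(element)
            else:
                clear[element] = person
    return clear, disputed

clear, disputed = aggregate(claims)
# clear    -> bodies and minds, each labelled with their one claimant
# disputed -> {"the old oak tree"}
```

The natural-boundary move is just the choice to keep the clear set as small and uncontroversial as possible, so that the disputed set carries all the hard cases.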
What you say seems contrived to me. You could have uttered the exact opposite and it wouldn't change anything about the nature of reality as a whole, only about the substructure that is Alicorn.
Indeed, I have never claimed to have reality-altering superpowers such that I can make utterances that accomplish this. What's your point?
In my original comment I asked if anyone would (honestly) suggest that 'killing is wrong' is a moral imperative - that it is generally wrong. You asserted exactly that in your reply. I thought you had misunderstood what I was talking about; now I am not so sure anymore. If that is really your opinion, then I have no idea how you arrived at that belief.