XiXiDu comments on What is Eliezer Yudkowsky's meta-ethical theory? - Less Wrong

Post author: lukeprog 29 January 2011 07:58PM

Comment author: XiXiDu 30 January 2011 02:37:22PM 2 points

An off-topic question:

In a sense, "should" always implies "if". Can anyone point me to a "should" assertion without an implied "if"? If humans implicitly assume an "if" whenever they say "should", then the term is never used to propose a moral imperative but only to indicate an instrumental goal.

You shall not kill if:

  • You want to follow God's law.
  • You don't want to be punished.
  • You want to please me.

It seems nobody would suggest that there is an imperative that killing is generally wrong. So where does moral realism come from?
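
To make the question concrete, here is a rough sketch in invented notation (not a standard deontic logic):

```latex
% Instrumental reading: "should" always carries an antecedent goal G
G \;\rightarrow\; \mathrm{Should}(A)
% e.g. "If you want to follow God's law (G), you should not kill (A)."

% Categorical reading: a bare imperative with no antecedent "if"
\mathrm{Should}(A)

% The question: is every asserted Should(A) secretly of the first form,
% i.e. an abbreviation of G -> Should(A) for some goal G?
```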

Comment author: Alicorn 30 January 2011 02:38:42PM 6 points

It seems nobody would suggest that there is an imperative that killing is generally wrong.

Um, I'll suggest that. Killing: generally wrong.

Comment author: XiXiDu 31 January 2011 07:39:03PM 0 points

Killing: generally wrong.

Do you agree with EY on Torture vs Dust Specks? If so, would killing one person be justified to save 3^^^3 people from being killed? And would you call killing right in that case?

Comment author: Alicorn 31 January 2011 07:40:25PM 3 points

I say bring on the specks.

Comment author: XiXiDu 31 January 2011 07:48:13PM 0 points

I find that topic troubling, so it's comforting to know how others would decide here. Please allow me to ask another question: would you personally die to save 3^^^3 people from being killed? I thought about it myself, and I would probably do it. But what is the lower bound here? Can I find an answer to such a question by reading the sequences, or at least learn how to come up with my own answer?

Comment author: Alicorn 31 January 2011 07:52:28PM 0 points

I would personally die to save 3^^^3 persons' lives if that were the option presented to me.

Can I find an answer to such a question by reading the sequences, or at least learn how to come up with my own answer?

The sequences do not comprise a DIY guide to crafting an ethical theory. I came up with mine while I was in grad school for philosophy.

Comment author: XiXiDu 30 January 2011 03:23:18PM 0 points

Um, I'll suggest that. Killing: generally wrong.

I realize I might have misunderstood moral realism. I thought moral realism proposes that there exist agent-independent moral laws. What I meant is that nobody would suggest that the proposition 'Killing: generally wrong' is a subvenient property.

Comment author: Perplexed 30 January 2011 06:25:43PM 0 points

I thought moral realism proposes that there do exist agent-independent moral laws.

I'm pretty sure you are wrong. You have realism confused with 'universality'. Moral realism applies to the situation when you say "It is forbidden that Mary hit John" and I say "It is permissible that Mary hit John". If realism holds, then one of us is in error - one of those two statements is false.

Compare to you thinking Mary is pretty and my disagreeing. Here, neither of us may be in error, because there may be no "fact of the matter" regarding Mary's prettiness. It is just a difference of opinion.

Moral realism states that moral judgments are not just matters of opinion - they are matters of fact.

If you had said 'observer-independent' rather than 'agent-independent', then you would have been closer to the concept of moral realism.

Comment author: XiXiDu 30 January 2011 06:36:28PM 0 points

If realism holds, then one of us is in error - one of those two statements is false.

So moral realism is a two-valued logic?

If you had said 'observer-independent' rather than 'agent-independent', then you would have been closer to the concept of moral realism.

I didn't know there was a difference.

Comment author: Perplexed 30 January 2011 07:33:02PM 0 points

So moral realism is a two-valued logic?

More like "Moral realism is the doctrine stating that moral questions should be addressed using a two-valued logic. As opposed, say, to aesthetic questions."
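
Put semi-formally (invented notation, just to make the contrast explicit): every moral judgment receives exactly one of two truth values, while an aesthetic judgment may receive none.

```latex
% M = moral judgments, E = aesthetic judgments, v = valuation
\forall \varphi \in M:\quad v(\varphi) \in \{\top, \bot\}
% exactly one of phi and not-phi is true: a matter of fact

\exists \psi \in E:\quad v(\psi)\ \text{is undefined}
% no fact of the matter about, e.g., Mary's prettiness
```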

Comment author: XiXiDu 30 January 2011 08:24:34PM 0 points

So moral realism proposes that there are moral formalisms whose truth values are observer-independent, because their logic is consistent, but not agent-independent, because moral formalisms are weighted subjectively based on the preferences of agents. Therefore we have a set of moral formalisms that are true facts about the world insofar as they are endorsed by some agents, but weighted differently by different agents.

If you could account for all moral formalisms and for how they are weighted by how many agents, would this constitute some sort of universal utility function, whose equilibrium equals a world-state that could be called right?
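
To make the question concrete, one possible reading of "universal utility function", with invented symbols:

```latex
% A = agents, F = moral formalisms, x ranges over world-states
% w_a(f) = the weight agent a assigns to formalism f
% f(x)   = formalism f's evaluation of world-state x
U(x) \;=\; \sum_{a \in A} \sum_{f \in F} w_a(f)\, f(x)
\qquad
x^{*} \;=\; \operatorname*{arg\,max}_{x}\, U(x)
% the "equilibrium" world-state x* is the one the question would call right
```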

Comment author: Perplexed 30 January 2011 11:02:31PM 0 points

I'm afraid that I am still not being understood. First, the concepts of universalism and moral realism still make sense even if agent preferences have absolutely no impact on morality. Second, the notion that 'moral formalisms' can be true or false makes me squirm with incomprehension. Third, the notion that true formalisms get weighted in some way by agents leads me to think that you fail to understand the terms "true" and "false".

Let me try a different example. Someone who claims that correct moral precepts derive their justification from the Koran is probably a moral realist. He is not a universalist though, if he says that Allah assigns different duties and obligations to men and women - to believers and non-believers.

Comment author: Alicorn 30 January 2011 03:26:12PM 0 points

What do you mean by "agent-independent"?

Comment author: XiXiDu 30 January 2011 04:10:26PM 1 point

What do you mean by "agent-independent"?

That two agents can differ in their behavior and perception of actions, but that any fundamental disagreement about a set of moral laws can be considered a failure mode, as those laws are implied by the lower levels of the universe the two agents are part of. I thought that moral realism proposes that 'Killing: generally wrong' is on the same level as 'Faster-than-light travel: generally wrong', that moral laws are intersubjectively verifiable and subject to empirical criticism. I didn't think that anyone actually believes that 'Killing: generally wrong' can be derived as a universal and optimal strategy.

Comment author: Alicorn 30 January 2011 04:17:08PM 0 points

I'm pretty sure I don't understand anything you just said. Sorry.

Comment author: XiXiDu 30 January 2011 04:24:32PM 0 points

Could you elaborate on your reasoning behind the proposition 'Killing: generally wrong'? Maybe that would allow me to explain myself, and especially to reformulate my question about whether there is anyone who thinks that killing is wrong regardless of an agent's preferences.

Comment author: Alicorn 30 January 2011 05:31:53PM 1 point

Persons have a right not to be killed; persons who have waived or forfeited that right, and non-persons, are still entities which should not be destroyed absent adequate reason. Preferences come in with the "waived" bit, and the "adequate reason" bit, but even if nobody had any preferences (...somehow...) then it would still be wrong to kill people who retain their right not to be killed (this being the default, assuming the lack of preferences doesn't paradoxically motivate anyone to waive their rights), and still be wrong to kill waived-rights or forfeited-rights persons, or non-persons, without adequate reason. I'm prepared to summarize that as "Killing: generally wrong".

Comment author: [deleted] 30 January 2011 06:57:12PM 5 points

Fascinating. This view is utterly incomprehensible to me. I mean, I understand what you are saying, but I just can't understand how or why you would believe such a thing.

The idea of "rights" as things that societies enact makes sense to me, but universal rights? I'd be interested to know on what basis you believe this. (A link or other reference is fine, too.)

Comment author: Alicorn 30 January 2011 07:28:49PM 3 points

I derived my theory by inventing something that satisfied as many of my intuitive desiderata about an ethical theory as possible. It isn't perfect, or at least not yet (I expect to revise it as I think of better ways to satisfy more desiderata), but I haven't found better.

Comment author: ArisKatsaris 02 February 2011 05:30:24PM 0 points

If we taboo for a sec the words "right", "wrong", "should" and "should not", how would I best approximate the concept of universal rights?

Here's how: "Nearly everyone has a sense of personal sovereignty, in the sense that there exist elements of the universe that a person considers to belong to said person -- so that if another agent acts to usurp or wrest control of such elements, a strong emotion of injustice is provoked. This sense of personal sovereignty will often conflict with the sense of others, especially if the sense of injustice is inflated to include physical or intellectual property: but if we minimize the territories to certain natural boundaries (like persons' bodies and minds), we can aggregate the individual territories into a large map of the universe, so that it will have huge tons of grey disputed areas but also some bright areas clearly labelled 'Alex's body belongs to Alex's sovereignty' or 'Bob's body falls under Bob's sovereignty'."
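
A minimal set-theoretic sketch of that aggregation (invented notation): whatever is claimed by more than one person is grey and disputed; what remains of each person's claim is clearly labelled.

```latex
% U = the universe; S(p) \subseteq U = the region person p claims as sovereign
\mathrm{Disputed} \;=\; \bigcup_{p \neq q} \bigl( S(p) \cap S(q) \bigr)
\qquad
\mathrm{Clear}(p) \;=\; S(p) \setminus \mathrm{Disputed}
% e.g. Alex's body lies in Clear(Alex) so long as no one else claims it
```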

Comment author: XiXiDu 30 January 2011 07:32:06PM 1 point

What you say seems contrived to me. You could have uttered the exact opposite and it wouldn't change anything about the nature of reality as a whole, only the substructure that is Alicorn.

Comment author: Alicorn 30 January 2011 07:54:13PM 2 points

You could have uttered the exact opposite and it wouldn't change anything about the nature of reality as a whole

Indeed, I have never claimed to have reality-altering superpowers such that I can make utterances that accomplish this. What's your point?

Comment author: wedrifid 30 January 2011 04:52:14PM 1 point

In a sense, "should" always implies "if". Can anyone point me to a "should" assertion without an implied "if"? If humans implicitly assume an "if" whenever they say "should", then the term is never used to propose a moral imperative but only to indicate an instrumental goal.

That is a way you can translate the use of "should" into a convenient logical model. But it isn't the way humans instinctively use the verbal symbol.