In response to Pain
Comment author: Yvain 02 August 2009 09:36:49PM 62 points

This is a form of the general question "What's so bad about X?" with pain as X.

For any X, we can ask "What's so bad about X?", receive an answer X2, and ask "What's so bad about X2?", ad infinitum. The three most common responses are semantic stopsigns, moral nihilism, and admitting you need to ask the question more rigorously.

Once phrased more rigorously, the problem becomes easier, transforming into some combination of:

"Why do people dislike pain?", to which the answer is that it's hard-wired into the brain in some way a neurologist could probably explain, probably similar to how it's hard-wired to dislike things that taste bitter.

"Why do people call pain bad?", to which the answer is that most people think as emotivists, and call pain bad because they dislike it.

"Why is pain bad in Moral System Y?", to which the answer is that you'd have to ask the people in moral system Y, and they'll give you their moral system's answer. I think a lot of the better moral system would have it as an axiom. They probably make it an axiom because most moral systems are linked in some way or another to what people do or don't like, whether they admit it or not.

"Why is there a strong negative qualia for pain instead of it just feeling like a little voice at the back of your head saying 'that's painful'?", to which the answer will remain mysterious until we understand qualia, but no more mysterious than any other sensation.

In response to comment by Yvain on Pain
Comment author: jwdink 03 August 2009 05:19:50PM 3 points

Excellent response.

As a side note, I do suspect that there's a big functional difference between an entity that just hears a small voice in the back of its head and an entity that feels pain like we do.

In response to Pain
Comment author: jwdink 03 August 2009 05:16:06PM 1 point

How does one define "bad" without "pain" or "suffering"? Seems rather difficult. Or rather: the question doesn't seem difficult so much as (almost) tautological. It's like asking "What, if anything, is hot about atoms moving more quickly?"

Comment author: orthonormal 30 July 2009 07:37:58PM 1 point

I understand your frustration, since we don't seem to be saying much to support our claims here. We've discussed relevant issues of metaethics quite heavily on Less Wrong, but we should be willing to enter the debate again as new readers arrive and raise their points.

However, there's a lot of material that's already been said elsewhere, so I hope you'll pardon me for pointing you towards a few early posts of interest right now instead of trying to summarize it in one go.

Torture vs. Dust Specks kicked off the argument; Eliezer began arguing for his own position in Circular Altruism and The "Intuitions" Behind "Utilitarianism". Searching LW for keywords like "specks" or "utilitarian" should bring up more recent posts as well, but these three sum up more or less what I'd say in response to your question.

(There's a whole metaethics sequence later on (see the full list of Eliezer's posts from Overcoming Bias), but that's less germane to your immediate question.)

Comment author: jwdink 30 July 2009 09:26:32PM 0 points

Oh, it's no problem if you point me elsewhere. I should've specified that that would be fine. I just wanted some definition. The only link that was given, I believe, was one defining rationality. Thanks for the links, I'll check them out.

Comment author: Vladimir_Nesov 29 July 2009 10:37:27PM 0 points

All human preferences, in their exact form, are hidden. The complexity of human value is too great to comprehend it all explicitly with a merely human mind.

Comment author: jwdink 30 July 2009 07:00:07PM 0 points

All human preferences, in their exact form, are hidden. The complexity of human value is too great to comprehend it all explicitly with a merely human mind.

Okay... so again, I'll ask... why is it irrational to NOT sacrifice the children? How does it go against hidden preference (which, perhaps, it would be prudent to define)?

Comment author: Vladimir_Nesov 29 July 2009 10:35:03PM 1 point

The problem is a confusion. Human preference is something implemented in the very real human brain.

Comment author: jwdink 30 July 2009 06:58:51PM 0 points

That's not a particularly helpful or elucidating response. Can you flesh out your position? It's impossible to tell what it is based on the paltry statements you've provided. Are you asserting that the "equation" or "hidden preference" is the same for all humans, or ought to be the same, and therefore is something objective/rational?

Comment author: Vladimir_Nesov 29 July 2009 09:27:48PM 0 points

Yes, they could care about either outcome. The question is whether they did: whether their true hidden preferences say that a given outcome is preferable.

Comment author: jwdink 29 July 2009 10:25:48PM 0 points

What would be an example of a hidden preference? The post to which you linked didn't explicitly mention that concept at all.

Comment author: Vladimir_Nesov 29 July 2009 09:18:53PM 0 points

The analogy in the next paragraph was meant to clarify. Do you see the analogy?

A person in this analogy is an equation together with an algorithm for approximately solving that equation. Decisions that the person makes are the approximate solutions, while preference is the exact solution hidden in the equation that the person can't solve exactly. The decision algorithm tries to make decisions as close to the exact solution as it can. The exact solution is what the person should do, while the output of the approximate algorithm is what the person actually does.
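
To make the analogy concrete, here is a minimal sketch in Python (my own illustration: the equation, the solver, and the numbers are arbitrary stand-ins, not anything from the thread). The exact root of the equation plays the role of preference; each iterate of the approximate solver plays the role of a decision actually made.

```python
# Toy version of the analogy: a "person" is an equation plus an
# approximate solver for it. The exact solution stands for preference;
# the solver's outputs stand for the decisions actually made.

def equation(x):
    # f(x) = 0 encodes the hidden "preference" exactly; here f(x) = x^2 - 2,
    # whose positive root is sqrt(2) = 1.41421356...
    return x * x - 2.0

def decide(guess, steps):
    # The decision algorithm: a few Newton iterations. Each iterate is a
    # "decision": near the true preference, never exactly equal to it.
    x = guess
    for _ in range(steps):
        x = x - equation(x) / (2.0 * x)  # derivative of x^2 - 2 is 2x
    return x

for steps in range(1, 5):
    print(steps, decide(guess=3.0, steps=steps))
# Successive "decisions" land ever closer to the hidden preference sqrt(2),
# but a finite decision-maker only ever produces approximations of it.
```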

Comment author: jwdink 29 July 2009 10:25:07PM 0 points

I suppose I'm questioning the validity of the analogy: equations are by nature descriptive, while what one ought to do is prescriptive. Are you familiar with the Is-Ought problem?

Comment author: eirenicon 29 July 2009 08:44:26PM 1 point

I should have said "decreases personal utility." When I say rationality, I mean rationality. Decreasing personal utility is the opposite of "winning".

Comment author: jwdink 29 July 2009 09:19:01PM 0 points

Instrumental rationality: achieving your values.  Not necessarily "your values" in the sense of being selfish values or unshared values: "your values" means anything you care about.  The art of choosing actions that steer the future toward outcomes ranked higher in your preferences.  On LW we sometimes refer to this as "winning".

Couldn't these people care about not sacrificing autonomy, and therefore this would be a value that they're successfully fulfilling?

Comment author: Vladimir_Nesov 29 July 2009 08:38:01PM 0 points

So if said planet decided that its preference was to perish, rather than sacrifice children, would this be irrational?

You can't decide your preference. Preference is not what you actually do but what you should do, and it's encoded in your decision-making capabilities in a nontrivial way, so you aren't necessarily capable of seeing what it is.

Compare preference to a solution to an equation: you can see the equation, you can take it apart into its constituent terms, but its solution is nowhere to be found explicitly. Yet this solution is (say) uniquely defined by the equation, and approximate methods for solving the equation (the analogue of actual decisions) tend to give results in the general ballpark of the exact solution.

Comment author: jwdink 29 July 2009 09:13:34PM 0 points

You can't decide your preference. Preference is not what you actually do but what you should do, and it's encoded in your decision-making capabilities in a nontrivial way, so you aren't necessarily capable of seeing what it is.

You've lost me.

Comment author: Vladimir_Nesov 29 July 2009 07:39:56PM 0 points

Some people would say that dying honorably is better than living dishonorably. I'm not endorsing this view; I'm just trying to figure out why it's irrational, while the utilitarian sacrifice of children is more rational.

Utilitarian calculation is a more rational process of arriving at a decision, though for a specific question you can argue that the output of this process (a decision) is inferior to the output of some other process, such as free-running deliberation or random guessing. When you compare the decisions of sacrificing the children and fighting a war to the death, the first isn't "intrinsically utilitarian", and the second isn't "intrinsically emotional".

Which of the decisions is (actually) the better one depends on the preferences of the one who decides, and preferences are not necessarily well reflected in actions and choices. It's instrumentally irrational for the agent to choose poorly according to its preferences. Systematic processes for decision-making allow agents to explicitly encode their preferences, and thus avoid some of the mistakes made with ad-hoc decision-making. Such systematic processes may be constructed in a preference-independent fashion, and then given preferences as parameters.

Utilitarian calculation is a systematic process for computing a decision in situations that are expected to break intuitive decision-making. The output of a utilitarian calculation is expected to be better than an intuitive decision, but there are situations in which the calculation goes wrong. For example, the extent to which you value things could be specified incorrectly, or the transformation that computes how much you value N things from how much you value one thing may be wrong. In other cases, the problem may have been reduced to a calculation incorrectly, losing important context.

However, whatever the right decision is, there should normally be a way to fix the parameters of the utilitarian calculation so that it outputs the right decision. For example, if the right decision in the problem under discussion is actually war to the death, there should be a way to understand the situation more formally so that the utilitarian calculation outputs "war to the death" as the right decision.
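
For concreteness, here is one hedged sketch of what "fixing the parameters" could look like (the options, probabilities, utility numbers, and scaling rules below are all invented for illustration). The same preference-independent machinery, expected-utility maximization, outputs "sacrifice the children" under one parameterization and "war to the death" under another:

```python
# Sketch: utilitarian calculation as a systematic, preference-independent
# process that takes preferences as parameters. All numbers below are
# invented for illustration; changing the parameters changes the output.

def expected_utility(lottery, utility):
    # Probability-weighted utility over {outcome: probability} pairs.
    return sum(p * utility(outcome) for outcome, p in lottery.items())

def decide(options, utility):
    # The preference-independent machinery: pick the option whose
    # expected utility is highest under the supplied parameters.
    return max(options, key=lambda name: expected_utility(options[name], utility))

# Hypothetical outcomes as tuples: (lives_saved, children_sacrificed, dignity_kept)
options = {
    "sacrifice the children": {(1_000_000, 10, 0): 1.0},
    "war to the death": {(1_000_000, 0, 1): 0.2, (0, 0, 1): 0.8},
}

# Parameterization A: lives aggregate linearly; dignity counts for little.
def util_a(o):
    return o[0] - 1_000 * o[1] + 5 * o[2]

# Parameterization B: not sacrificing (dignity) dominates everything else.
def util_b(o):
    return o[0] - 1_000 * o[1] + 10_000_000 * o[2]

print(decide(options, util_a))  # -> sacrifice the children
print(decide(options, util_b))  # -> war to the death
```

Which parameterization actually captures the planet's hidden preference is, of course, exactly what's in dispute; the sketch only shows that the calculation machinery itself doesn't settle it.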

Comment author: jwdink 29 July 2009 08:18:19PM 0 points

Which of the decisions is (actually) the better one depends on the preferences of the one who decides

So if said planet decided that its preference was to perish, rather than sacrifice children, would this be irrational?

However, whatever the right decision is, there should normally be a way to fix the parameters of the utilitarian calculation so that it outputs the right decision. For example, if the right decision in the problem under discussion is actually war to the death, there should be a way to understand the situation more formally so that the utilitarian calculation outputs "war to the death" as the right decision.

I don't see why I should agree with this statement. I was understanding a utilitarian calculation as either a) the greatest happiness for the greatest number of people or b) the greatest preferences satisfied for the greatest number of people. If a), then it seems like it might predictably give you answers that are at odds with moral intuitions, and have no way of justifying itself against these intuitions. If b), then there's nothing irrational about deciding to go to war with the aliens.
