Eugine_Nier comments on Rationality Quotes January 2013 - Less Wrong

6 Post author: katydee 02 January 2013 05:23PM


Comment author: Alicorn 16 January 2013 06:13:25PM *  5 points [-]

"My baby is dead. Six months old and she's dead."
"Take solace in the knowledge that this is all part of the Corn God's plan."
"Your god's plan involves dead babies?"
"If you're gonna make an omelette, you're gonna have to break a few children."
"I'm not entirely sure I want to eat that omelette."

-- Scenes From A Multiverse

Comment author: Eugine_Nier 17 January 2013 12:55:41AM 2 points [-]

This works equally well as an argument against utilitarianism, which I'm guessing may be your intent.

Comment author: Qiaochu_Yuan 18 January 2013 05:04:03AM 2 points [-]

I have no idea what people mean when they say they are against utilitarianism. My current interpretation is that they don't think people should be VNM-rational, and I haven't seen a cogent argument supporting this. Why isn't this quote just establishing that the utility of babies is high?

Comment author: [deleted] 18 January 2013 05:16:29AM 8 points [-]

I aspire to be VNM rational, but not a utilitarian.

It's all very confusing because they both use the word "utility" but they seem to be different concepts. "Utilitarianism" is a particular moral theory that (depending on the speaker) assumes consequentialism, linearish aggregation of "utility" between people, independence and linearity of utility function components, utility proportional to "happiness" or "well-being" or preference fulfillment, etc. I'm sure any given utilitarian will disagree with something in that list, but I've seen all of them claimed.

VNM utility only assumes that you assign utilities to possibilities consistently, and that your utilities aggregate by expectation. It also assumes consequentialism in some sense, but it's not hard to make utility assignments that aren't really usefully described as consequentialist.
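The "aggregate by expectation" point can be made concrete with a minimal sketch. (The outcome names and utility numbers below are purely illustrative, not taken from anyone in this thread; any consistent assignment would do.)

```python
# Minimal sketch of a VNM-style agent: it assigns a utility to each
# outcome and ranks lotteries (probability distributions over outcomes)
# purely by expected utility.

def expected_utility(lottery, utility):
    """lottery: dict outcome -> probability; utility: dict outcome -> float."""
    return sum(p * utility[outcome] for outcome, p in lottery.items())

# Illustrative utilities -- the specific numbers are assumptions.
utility = {"omelette": 1.0, "nothing": 0.0, "dead_baby": -1000.0}

safe   = {"nothing": 1.0}
gamble = {"omelette": 0.999, "dead_baby": 0.001}

# The agent prefers whichever lottery has higher expected utility.
# With these numbers the gamble's expected utility is negative
# (0.999 * 1.0 + 0.001 * -1000.0), so the agent declines the omelette.
best = max([safe, gamble], key=lambda l: expected_utility(l, utility))
```

Note that nothing in the sketch says what the utilities have to be about; that is the sense in which VNM rationality is a much weaker commitment than utilitarianism.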

I reject "utilitarianism" because it is very vague, and because I disagree with many of its interpretations.

Comment author: Qiaochu_Yuan 18 January 2013 06:11:21AM 0 points [-]

Thanks for the explanation. Reading through the Wikipedia article on utilitarianism, it seems like this is one of those words that has been muddled by the presence of too many authors using it. In the future I guess I should refer to the concept I had in mind as VNM-utilitarianism.

Comment author: Sniffnoy 18 January 2013 06:43:05AM 2 points [-]

Probably best not to refer to it with the word "utilitarianism", since it isn't a form of that. Calling it "consequentialism" is arguably enough, since (making appropriate assumptions about what a rational agent must do) a rational consequentialist must use a VNM utility function. But I guess not everyone does in fact agree with those assumptions, so perhaps "utility-function based consequentialism". Or perhaps "VNM-consequentialism".

Comment author: [deleted] 19 January 2013 04:11:57PM 2 points [-]

I have no idea what people mean when they say they are against utilitarianism.

I find these criticisms by Vladimir_M to be really superb.

Comment author: Qiaochu_Yuan 19 January 2013 07:23:14PM 0 points [-]

Okay. So none of that is an argument against VNM-rationality, it's an argument against a bunch of other ideas that have historically been attached to the label "utilitarian," right? The main thing I got out of that post is that utilitarianism is hard, not that it's wrong.

Comment author: [deleted] 19 January 2013 07:56:07PM 1 point [-]

I don't know what you have in mind by your allusion to Morgenstern-von Neumann. The theorem is descriptive, right? It says you can model a certain broad class of decision-making entities as maximizing a utility function. What is VNM-rationality, and what does it mean to argue for it or against it?

If your goal is "to do the greatest good for the greatest number," or a similar utilitarian goal, I am not sure how the VNM theorem helps you.

What do you think of the "interpersonal utility comparison" problem? Vladimir_M regards it as something close to a defeater of utilitarianism.

Comment author: Qiaochu_Yuan 19 January 2013 08:35:48PM *  1 point [-]

I don't know what you have in mind by your allusion to Morgenstern-von Neumann. The theorem is descriptive, right? It says you can model a certain broad class of decision-making entities as maximizing a utility function. What is VNM-rationality, and what does it mean to argue for it or against it?

"People should aim to be VNM-rational." I think of this as a weak claim, which is why I didn't understand why people appeared to be arguing against it. I concluded that they probably weren't, and instead meant something else by utilitarianism, which is why I switched to a different term.

If your goal is "to do the greatest good for the greatest number," or a similar utilitarian goal, I am not sure how the VNM theorem helps you.

Yes, that's why I think of "people should aim to be VNM-rational" as a weak claim and didn't understand why people appeared to be against it.

What do you think of the "interpersonal utility comparison" problem? Vladimir_M regards it as something close to a defeater of utilitarianism.

It seems like a very hard problem, but nobody claimed that ethics was easy. What does Vladimir_M think we should be doing instead?

Comment author: Eugine_Nier 21 January 2013 12:35:31AM *  1 point [-]

"People should aim to be VNM-rational."

What definition of "should" are you using here? Do you mean that people deontologically should aim to be VNM-rational? Or do you mean that people should be VNM-rational in order to maximize some (which?) utility function?

Comment author: [deleted] 19 January 2013 09:24:23PM 1 point [-]

"People should aim to be VNM-rational."

Can you spell this out a little more?

What does Vladimir_M think we should be doing instead?

I don't know. I think this comment reveals a lot of respect for what you might call "folk ethics," i.e. the way normal people do it.

Comment author: Qiaochu_Yuan 19 January 2013 09:39:22PM 1 point [-]

Can you spell this out a little more?

"People should aim for their behavior to satisfy the VNM axioms." I'm not sure how to get more precise than this.

Comment author: [deleted] 19 January 2013 10:10:16PM 1 point [-]

"People should aim for their behavior to satisfy the VNM axioms."

OK. But this seems funny to me as a moral prescription. In fact a standard premise of economics is that people's behavior does satisfy the VNM axioms, or at least that deviations from them are random and cancel each other out at large scales. That's sort of the point of the VNM theorem: you can model people's behavior as though they were maximizing something, even if that's not the way an individual understands his own behavior.

Even if you don't buy that premise, it's hard for me to see why famous utilitarians like Bentham or Singer would be pleased if people hewed more closely to the VNM axioms. Couldn't they do so, and still make the world worse by valuing bad things?

If your goal is "to do the greatest good for the greatest number," or a similar utilitarian goal, I am not sure how the VNM theorem helps you.

Yes, that's why I think of "people should aim to be VNM-rational" as a weak claim and didn't understand why people appeared to be against it.

Is "people should aim for their behavior to satisfy the VNM axioms" all that you meant originally by utilitarianism? From what you've written elsewhere in this thread it sounds like you might mean something more, but I could be misunderstanding.

Comment author: CarlShulman 18 January 2013 05:56:34AM 1 point [-]

A bounded utility function that places a lot of value on signaling/being "a good person" and desirable associate, getting some "warm glow" and "mostly doing the (deontologically) right thing" seems like a pretty good approximation.

Comment author: Eugine_Nier 18 January 2013 05:30:57AM *  1 point [-]

Well, Alicorn is a deontologist.

In any case, as an ultrafinitist you should know the problems with the VNM theorem.

Comment author: Qiaochu_Yuan 18 January 2013 05:58:20AM *  4 points [-]

I also have no idea what people mean when they say they are deontologists. I've read Alicorn's Deontology for Consequentialists and I still really have no idea. My current interpretation is that a deontologist will make a decision that makes everything worse if it upholds some moral principle, which just seems like obviously a bad idea to me. I think it's reasonable to argue that deontology and virtue ethics describe heuristics for carrying out moral decisions in practice, but heuristics are heuristics because they break down, and I don't see a reasonable way to judge which heuristics to use that isn't consequentialist / utilitarian.

Then again, it's quite likely that my understanding of these terms doesn't agree with their colloquial use, in which case I need to find a better word for what I mean by consequentialist / utilitarian. Maybe I should stick to "VNM-rational."

I also didn't claim to be an ultrafinitist, although I have ultrafinitist sympathies. I haven't worked through the proof of the VNM theorem yet in enough detail to understand how infinitary it is (although I intend to).

Comment author: Eugine_Nier 18 January 2013 07:26:51AM 1 point [-]

My current interpretation is that a deontologist will make a decision that makes everything worse if it upholds some moral principle, which just seems like obviously a bad idea to me.

Taboo "make everything worse".

At the very least I find it interesting how rarely an analogous objection against VNM-utilitarians with different utility functions is raised. It's almost as if many of the "VNM-utilitarians" around here don't care what it means to "make everything worse" as long as one avoids doing it, and avoids doing it following the mathematically correct decision theory.

I also didn't claim to be an ultrafinitist, although I have ultrafinitist sympathies. I haven't worked through the proof of the VNM theorem yet in enough detail to understand how infinitary it is (although I intend to).

Well, the continuity axiom in the statement certainly seems dubious from an ultrafinitist point of view.
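For reference, the standard textbook statement of the continuity axiom (my formulation, not quoted from this thread) is:

```latex
\text{Continuity: for lotteries } L \prec M \prec N,
\quad \exists\, p \in (0,1) \text{ such that } pL + (1-p)N \sim M.
```

Presumably the asserted existence of a real-valued mixing probability $p$ is the infinitary content an ultrafinitist would balk at.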

Comment author: Qiaochu_Yuan 18 January 2013 08:08:05AM 1 point [-]

Taboo "make everything worse".

Have worse consequences for everybody, where "everybody" means present and future agents to which we assign moral value. For example, a sufficiently crazy deontologist might want to kill all such agents in the name of some sacred moral principle.

At the very least I find it interesting how rarely an analogous objection against VNM-utilitarians with different utility functions is raised. It's almost as if many of the "VNM-utilitarians" around here don't care what it means to "make everything worse" as long as one avoids doing it, and avoids doing it following the mathematically correct decision theory.

Rarely? Isn't this exactly what we're talking about when we talk about paperclip maximizers?

Comment author: Eugine_Nier 19 January 2013 09:16:46AM 1 point [-]

Have worse consequences for everybody, where "everybody" means present and future agents to which we assign moral value.

When I asked you to taboo "makes everything worse", I meant taboo "worse" not taboo "everything".

Comment author: Qiaochu_Yuan 19 January 2013 09:54:28AM *  1 point [-]

You want me to say something like "worse with respect to some utility function" and you want to respond with something like "a VNM-rational agent with a different utility function has the same property." I didn't claim that I reject deontologists but accept VNM-rational agents even if they have different utility functions from me. I'm just trying to explain that my current understanding of deontology makes it seem like a bad idea to me, which is why I don't think it's accurate. Are you trying to correct my understanding of deontology or are you agreeing with it but disagreeing that it's a bad idea?

Comment author: Eugine_Nier 21 January 2013 12:28:41AM 1 point [-]

You want me to say something like "worse with respect to some utility function" and you want to respond with something like "a VNM-rational agent with a different utility function has the same property."

No, I'm going to respond by asking you "with respect to which utility function?" and "why should I care about that utility function?"

Comment author: [deleted] 18 January 2013 07:26:59PM 0 points [-]

Have worse consequences for everybody, where "everybody" means present and future agents to which we assign moral value.

You've assumed vague-utilitarianism here, which weakens your point. I would taboo "make everything worse" as "less freedom, health, fun, awesomeness, happiness, truth, etc", where the list refers to all the good things, as argued in the metaethics sequence.

Comment author: Eugine_Nier 19 January 2013 09:21:11AM -2 points [-]

You've assumed vague-utilitarianism here, which weakens your point. I would taboo "make everything worse" as "less freedom, health, fun, awesomeness, happiness, truth, etc"

Nice try. The problem with your definition is that freedom, for example, is fundamentally a deontological concept. If you don't agree, I challenge you to give a non-deontological definition.

Comment author: Qiaochu_Yuan 19 January 2013 09:56:13AM 1 point [-]

What is a deontological concept and what is a non-deontological concept?

Comment author: Kindly 18 January 2013 02:07:31PM 0 points [-]

For example, a sufficiently crazy deontologist might want to kill all such agents in the name of some sacred moral principle.

A sufficiently crazy consequentialist might want to kill all such agents because he's scared of what the voices in his head might otherwise do. Your argument is not an argument at all.

And if the sacred moral principle leads to the deontologist killing everyone, that is a pretty terrible moral principle. Usually they're not like that. Usually the "don't kill people if you can help it" moral principle tends to be ranked pretty high up there to prevent things like this from happening.

Comment author: Qiaochu_Yuan 18 January 2013 07:34:10PM 1 point [-]

to prevent things like this from happening.

Smells like consequentialist reasoning. Look, if I had a better example I would give it, but I am genuinely not sure what deontologists think they're doing if they don't think they're just using heuristics that approximate consequentialist reasoning.

Comment author: TsviBT 17 January 2013 03:03:43AM 1 point [-]

Huh? How so?

Comment author: Eugine_Nier 19 January 2013 09:22:49AM 1 point [-]

Replace the "corn god" in the quote with a sufficiently rational utilitarian agent.

Comment author: Wei_Dai 20 January 2013 03:33:53AM 0 points [-]

To make sure I understand, do you mean that a sufficiently rational utilitarian agent may decide to kill a 6 month old baby if it decides that would serve its goal of maximizing aggregate utility, and if I'm pretty sure that no 6 month old baby should ever be intentionally killed, I would conclude that utilitarianism is probably wrong?

Comment author: Alicorn 17 January 2013 02:39:36AM 1 point [-]

I hadn't actually thought of that, but that could be part of why I liked the quote.

Comment author: MugaSofer 20 January 2013 03:36:56PM *  -1 points [-]

Nah, it's just a cheap shot at the theists.

EDIT: not sure about the source, but the way it's edited ...