Qiaochu_Yuan comments on Rationality Quotes January 2013 - Less Wrong

6 Post author: katydee 02 January 2013 05:23PM



You are viewing a single comment's thread.

Comment author: Qiaochu_Yuan 18 January 2013 08:08:05AM 1 point [-]

Taboo "make everything worse".

Have worse consequences for everybody, where "everybody" means present and future agents to which we assign moral value. For example, a sufficiently crazy deontologist might want to kill all such agents in the name of some sacred moral principle.

At the very least I find it interesting how rarely an analogous objection against VNM-utilitarians with different utility functions is raised. It's almost as if many of the "VNM-utilitarians" around here don't care what it means to "make everything worse" as long as one avoids doing it, and avoids doing it following the mathematically correct decision theory.

Rarely? Isn't this exactly what we're talking about when we talk about paperclip maximizers?

Comment author: Eugine_Nier 19 January 2013 09:16:46AM 1 point [-]

Have worse consequences for everybody, where "everybody" means present and future agents to which we assign moral value.

When I asked you to taboo "makes everything worse", I meant taboo "worse" not taboo "everything".

Comment author: Qiaochu_Yuan 19 January 2013 09:54:28AM *  1 point [-]

You want me to say something like "worse with respect to some utility function" and you want to respond with something like "a VNM-rational agent with a different utility function has the same property." I didn't claim that I reject deontologists but accept VNM-rational agents even if they have different utility functions from me. I'm just trying to explain that my current understanding of deontology makes it seem like a bad idea to me, which is why I don't think it's accurate. Are you trying to correct my understanding of deontology or are you agreeing with it but disagreeing that it's a bad idea?

Comment author: Eugine_Nier 21 January 2013 12:28:41AM 1 point [-]

You want me to say something like "worse with respect to some utility function" and you want to respond with something like "a VNM-rational agent with a different utility function has the same property."

No, I'm going to respond by asking you "with respect to which utility function?" and "why should I care about that utility function?"

Comment author: [deleted] 18 January 2013 07:26:59PM 0 points [-]

Have worse consequences for everybody, where "everybody" means present and future agents to which we assign moral value.

You've assumed vague-utilitarianism here, which weakens your point. I would taboo "make everything worse" as "less freedom, health, fun, awesomeness, happiness, truth, etc.", where the list refers to all the good things, as argued in the metaethics sequence.

Comment author: Eugine_Nier 19 January 2013 09:21:11AM -2 points [-]

You've assumed vague-utilitarianism here, which weakens your point. I would taboo "make everything worse" as "less freedom, health, fun, awesomeness, happiness, truth, etc."

Nice try. The problem with your definition is that freedom, for example, is fundamentally a deontological concept. If you don't agree, I challenge you to give a non-deontological definition.

Comment author: Qiaochu_Yuan 19 January 2013 09:56:13AM 1 point [-]

What is a deontological concept and what is a non-deontological concept?

Comment author: Eugine_Nier 21 January 2013 05:59:16PM 3 points [-]

After thinking about it some more, I think I have a better way to explain what I mean.

What is freedom? One (not very good but illustrative) definition is the ability to make meaningful choices. Notice that this means respecting someone else's freedom is a constraint on one's decision algorithm, not just on its outcomes, and thus it doesn't satisfy the VNM axioms.
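The distinction being claimed can be sketched as a toy program (all names and numbers here are illustrative, not anything from the thread): a VNM-style agent ranks actions purely by the utility of their outcomes, while a deontological side constraint filters actions by a property of the action itself, regardless of how its outcome ranks.

```python
def vnm_choice(actions, outcome, utility):
    """Pick the action whose outcome has maximal utility."""
    return max(actions, key=lambda a: utility(outcome(a)))

def constrained_choice(actions, outcome, utility, permissible):
    """Apply a deontological side constraint before ranking outcomes."""
    allowed = [a for a in actions if permissible(a)]
    return max(allowed, key=lambda a: utility(outcome(a)))

# Hypothetical setup: "coerce" produces the higher-utility outcome,
# but violates a constraint about respecting others' choices.
actions = ["coerce", "ask"]
outcome = {"coerce": 10, "ask": 7}.get   # outcome "scores" per action
utility = lambda o: o                    # identity utility for the sketch
permissible = lambda a: a != "coerce"    # the side constraint

print(vnm_choice(actions, outcome, utility))                       # coerce
print(constrained_choice(actions, outcome, utility, permissible))  # ask
```

Whether such a constrained chooser can be re-described as a VNM agent over a richer outcome space (one that includes facts about the act taken) is exactly the point in dispute below.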

Comment author: Qiaochu_Yuan 21 January 2013 09:24:54PM 2 points [-]

It sounds to me like you're implicitly enforcing a Cartesian separation between the physical world and the algorithms that agents in it run. Properties of the algorithms that agents in the world run are still properties of the world.

Comment author: Eugine_Nier 22 January 2013 09:44:50PM 1 point [-]

I don't see why I'm relying on it any more than the VNM-utilitarian is.

Comment author: Eugine_Nier 21 January 2013 12:30:03AM 0 points [-]

I thought I had made that clear in my second sentence:

If you don't agree, I challenge you to give a non-deontological definition [of freedom].

Comment author: Qiaochu_Yuan 21 January 2013 04:27:43AM 1 point [-]

Um, no. I can't respond to a challenge to give a non-X definition of Y if I don't know what X means.

Comment author: Kindly 18 January 2013 02:07:31PM 0 points [-]

For example, a sufficiently crazy deontologist might want to kill all such agents in the name of some sacred moral principle.

A sufficiently crazy consequentialist might want to kill all such agents because he's scared of what the voices in his head might otherwise do. Your argument is not an argument at all.

And if the sacred moral principle leads to the deontologist killing everyone, that is a pretty terrible moral principle. Usually they're not like that. Usually the "don't kill people if you can help it" moral principle tends to be ranked pretty high up there to prevent things like this from happening.

Comment author: Qiaochu_Yuan 18 January 2013 07:34:10PM 1 point [-]

to prevent things like this from happening.

Smells like consequentialist reasoning. Look, if I had a better example I would give it, but I am genuinely not sure what deontologists think they're doing if they don't think they're just using heuristics that approximate consequentialist reasoning.