Peterdjones comments on Policy Debates Should Not Appear One-Sided - Less Wrong

102 Post author: Eliezer_Yudkowsky 03 March 2007 06:53PM


Comment author: JGWeissman 18 January 2013 04:38:52PM 4 points [-]

So the consequentialist notion of good and bad actions doesn't translate directly into what we want from a practical moral theory: guidance as to how to apportion blame and praise.

What I want out of a moral theory is to know what I ought to do.

As far as blame and praise go, consequentialism combined with game theory tells you how to use a system of blame and praise to provide good incentives for desired behavior.
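That incentive point can be made concrete with a toy calculation (this sketch and its payoff numbers are purely illustrative, not anything from the thread): a self-interested agent defects when defection pays more, but attaching a predictable punishment cost to defection flips which action maximizes the agent's payoff.

```python
# Toy sketch: a predictable punishment attached to a bad action
# changes which action maximizes an agent's payoff.

def best_action(payoffs):
    """Return the action with the highest payoff."""
    return max(payoffs, key=payoffs.get)

# Hypothetical payoffs for a self-interested agent.
payoffs = {"cooperate": 3.0, "defect": 5.0}
print(best_action(payoffs))  # -> defect (5.0 beats 3.0)

# Society attaches a reliable punishment (cost 4.0) to defection.
punished = dict(payoffs)
punished["defect"] -= 4.0
print(best_action(punished))  # -> cooperate (3.0 beats 1.0)
```

The numbers are arbitrary; the point is only that a system of blame and punishment works by changing the payoff structure, not by changing anyone's values.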

Comment author: Peterdjones 18 January 2013 05:40:43PM 0 points [-]

What I want out of a moral theory is to know what I ought to do.

So you don't want to be able to understand how punishments and rewards are morally justified -- why someone ought, or ought not, to be sent to jail?

Comment author: [deleted] 18 January 2013 05:53:00PM *  5 points [-]

It seems to me that judging people and sending them to jail is on the level of actions, like whether you should donate to charity. Whether someone ought to be jailed should be judged like other moral questions; does it produce good consequences or follow good rules or whatever.

I don't think a moral theory has to have special cases built in for judging other people's actions and then prescribing rewards/punishments. It should describe constraints on what is right, and then let you derive individual cases, like the righteousness of jail, from what is right in general.

Comment author: Peterdjones 18 January 2013 06:03:08PM -1 points [-]

Whether someone ought to be jailed should be judged like other moral questions; does it produce good consequences or follow good rules or whatever.

But, unless JGWeissman is a judge, the question of whether someone should go to jail is a moral question (as you seem to accept) that is not concerned with what JGWeissman ought to do.

I don't think a moral theory has to have special cases built in for judging other people's actions, and then prescribing rewards/punishments

Universalisability rides again.

Comment author: JGWeissman 18 January 2013 06:25:23PM 4 points [-]

But, unless JGWeissman is a judge, the question of whether someone should go to jail is a moral question (as you seem to accept) that is not concerned with what JGWeissman ought to do.

The question of whether or not someone ought to go to jail, independent of whether or not any agent ought to put them in jail, doesn't seem very meaningful. In general, I don't want people to go to jail because jail is unpleasant, it prevents people from doing many useful things, and its dehumanizing nature can lead to people becoming more criminal. I want specific people to go to jail because it prevents them from repeating their bad actions, and having jail as a predictable consequence for a well-defined set of bad behaviors is an incentive for people not to execute those bad behaviors. (And I want our criminal justice system to be more efficient about this.) I don't see why it has to be more complicated, or more fundamental, than that. Nyan is exactly right: judging other people's actions is just another sort of action you can choose; it is not fundamentally a special case.

Comment author: Peterdjones 18 January 2013 07:11:30PM *  -1 points [-]

The question of whether or not someone ought to go to jail, independent of whether or not any agent ought to put them in jail, doesn't seem very meaningful.

So when you said morality was about what you ought to do, you meant it was about what people in general ought to do. ETA: And what if agent A would jail them, and agent B would free them? They're either in jail or they are not.

Nyan is exactly right, judging other people's actions is just another sort of action you can choose, it is not fundamentally a special case.

But morality is not about deciding what to do next, because many actions are morally neutral, and many actions that are morally wrong are justifiable in other ways. Morality is not just decision theory. Morality is about what people ought to do. What people ought to do is the good. When something is judged good, praise and reward are given; when something is judged wrong, blame and punishment are given.

Comment author: DaFranker 18 January 2013 07:35:40PM *  4 points [-]

So when you said morality was about what you ought to do, you meant it was about what people in general ought to do.

No. It's about what JGWeissman in general ought to do, including "JGWeissman encourages and/or forces everyone else to do X, and convinces everyone to be consequentialist and follow the same principles as JGWeissman".

Does that make it clearer? Prescription is just an action to take like any other. Take another step back into meta and higher-order. These discussions we're having, convincing people, thinking in certain ways that promote certain general behaviors, are all things we individually are doing, actions that one individual consequentialist agent will evaluate in the same manner as they would evaluate "Give fish or not?"

But morality is not about deciding what to do next, because many actions are morally neutral, and many actions that are morally wrong are justifiable in other ways.

This is technically unknown, unverifiable, and seems very dubious and unlikely and irrelevant to me. Unless you completely exclude transitivity and instrumentality from your entire model of the world.

Basically, most actions I can think of will either increase or decrease the probability of a ton of possible-futures at the same time, so one would want to take actions which increase the odds of the more desirable possible futures at the expense of less desirable ones. Even if the action doesn't directly impact or impacts it in a non-obvious way.

For example, a policy of not lying, even if lying in this case would save some pain, could be much more useful for increasing the odds of possible futures where you and the people you care about lie to each other a lot less; and since lying is much more likely to be hurtful than beneficial, and economies of scale apply, you might be consequentially better off prescribing yourself the no-lying policy even in this particular instance where it will be immediately negative.
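The decision rule described above -- pick the action that shifts probability toward the more desirable futures -- is just expected-value maximization. As a minimal sketch (the action names, probabilities, and desirability scores are all made up for illustration):

```python
# Illustrative only: choose the action whose probability-weighted
# desirability over possible futures is highest.

def expected_desirability(futures):
    """futures: list of (probability, desirability) pairs."""
    return sum(p * d for p, d in futures)

# Hypothetical futures induced by each action.
actions = {
    # Lying avoids immediate pain but raises the odds of a low-trust future.
    "lie":       [(0.6, 2.0), (0.4, -5.0)],
    # Honesty costs a little now but favors high-trust futures.
    "be_honest": [(0.8, 1.5), (0.2, -1.0)],
}

best = max(actions, key=lambda a: expected_desirability(actions[a]))
print(best)  # -> be_honest (1.0 beats -0.8)
```

Under these invented numbers the no-lying policy wins even though lying has the better immediate payoff, because the low-trust futures it makes more likely drag down its expected desirability.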

Also note that "judging something good" and "giving praise and rewards", as well as "judging something bad" and "attributing blame and giving punishment", are also actions to decide upon. So deciding whether to blame or to praise is a set of actions where, yes, morality is about deciding which one to do.

Your mental judgments are actions, in the useful sense when discussing metaethics.

Comment author: Peterdjones 18 January 2013 07:53:08PM *  -2 points [-]

No. It's about what JGWeissman in general ought to do, including "JGWeissman encourages and/or forces everyone else to do X, and convinces everyone to be consequentialist and follow the same principles and JGWeissman".

Is it? That isn't relevant to me. It isn't relevant to interaction between people, it isn't relevant to society as a whole, and it isn't relevant to criminal justice. I don't see why I should call anything so jejune "morality".

Does that make it clearer? Prescription is just an action to take like any other. Take another step back into meta and higher-order. These discussions we're having, convincing people, thinking in certain ways that promote certain general behaviors, are all things we individually are doing, actions that one individual consequentialist agent will evaluate in the same manner as they would evaluate "Give fish or not?"

Standard consequentialists can and do judge the actions of others to be right or wrong according to their consequences. I don't know what you think is blocking that off.

But morality is not about deciding what to do next, because many actions are morally neutral, and many actions that are morally wrong are justifiable in other ways.

This is technically unknown, unverifiable, and seems very dubious and unlikely and irrelevant to me. Unless you completely exclude transitivity and instrumentality from your entire model of the world.

Discussions of metaethics are typically pinned to sets of common-sense intuitions. It is a common-sense intuition that choosing vanilla instead of chocolate is morally neutral. It is common sense that I should not steal someone's wallet, although the money is morally neutral.

Basically, most actions I can think of will either increase or decrease the probability of a ton of possible-futures at the same time, so one would want to take actions which increase the odds of the more desirable possible futures at the expense of less desirable ones.

That is not a fact about morality; it is an implication of the naive consequentialist theory of morality -- and one that is often used as an objection against it.

For example, a policy of not lying, even if in this case it would save some pain, could be much more useful for increasing the odds of possible futures where yourself and people you care about lie to each other a lot less, and since lying is much more likely to be hurtful than beneficial and economies of scale apply, you might be consequentially better to prescribe yourself the no-lying policy even in this particular instance where it will be immediately negative.

Or I might be able to prudently predate. Although you are using the language of consequentialism, your theory is actually egoism: you are saying that there is no sense in which I should care about people unknown to me, but instead I should just maximise the values I happen to have (thereby collapsing ethics into instrumental rationality).

Also note that "judging something good" and "giving praise and rewards", as well as "judging something bad" and "attributing blame and giving punishment", are also actions to decide upon. So deciding whether to blame or to praise is a set of actions where, yes, morality is about deciding which one to do.

Morality is a particular kind of deciding and acting. You cannot eliminate the difference between ethics and instrumental decision theory by noting that they are both to do with acts and decisions. There is still the distinction between instrumental and moral acts, instrumental and moral decisions.

Comment author: DaFranker 18 January 2013 08:16:38PM *  2 points [-]

(...)

Is it? That isn't relevant to me. It isn't relevant to interaction between people, it isn't relevant to society as a whole, and it isn't relevant to criminal justice. I don't see why I should call anything so jejune "morality".

(...)

Standard consequentialists can and do judge the actions of others to be right or wrong according to their consequences. I don't know what you think is blocking that off.

Indeed. "Judge actions of Person X" leads to better consequences than not doing it as far as they can predict. "Judging past actions of others" is an action that can be taken. "Judging actions of empirical cluster Y" is also an action, and using past examples of actions within this cluster that were done by others as a reference for judging the overall value of actions of this cluster is an extremely useful method of determining what to do in the future (which may include "punish the idiot who did that" and "blame the person" and whatever other moral judgments are appropriate).

Did I somehow communicate that something was blocking that off? If you hadn't said "I don't know what you think is blocking that off.", I'd have assumed you were perfectly agreeing with me on those points.

(...)

Or I might be able to prudently predate. Although you are using the language of consequentialsim, your theory is actually egoism: you are saying that there is no sense in which I should care about people unknown to me, but instead I should just maximise the values I happen to have (thereby collapsing ethics into instrumental rationality).

If you want to put your own labels on everything, then yes, that's exactly what my theory is and that's exactly how it works.

It just also happens to coincide that the values I happen to have include a strong component for what other people value, and the expected consequences of my actions whether I will know the consequences or not, and for the well-being of others whether I will be aware of it or not.

So yes, by your words, I'm being extremely egoist and just trying to maximize my own utility function alone by evaluating and calculating the consequences of my actions. It just so happens, by some incredible coincidence, that maximizing my own utility function mostly correlates with maximizing some virtual utility function that maximizes the well-being of all humans.

<s> How incredibly coincidental and curious! </sarcasm>

Morality is a particular kind of deciding and acting. You cannot eliminate the difference between ethics and instrumental decision theory, by noting that they are both to do with acts and decisions. There is still the distinction between instrumental and moral acts, instrumental and moral decisions

Your mental judgments are actions, in the useful sense when discussing metaethics

Indeed. And when you take a step back, it is more moral to act instrumentally than to act as if the instrumental value of actions were irrelevant. To return to your previous words, I believe you'll agree that someone who acts in a manner that instrumentally encourages others to take morally good actions attracts praise, and I think this also means it's more moral.

I would extend this so that all actions that are instrumentally useful towards moral ends (and that are expected to give this result, and done for this reason) are themselves called "morally good".

Comment author: Peterdjones 18 January 2013 08:30:17PM *  -1 points [-]

Indeed. "Judge actions of Person X" leads to better consequences than not doing it as far as they can predict. "Judging past actions of others" is an action that can be taken. "Judging actions of empirical cluster Y" is also an action, and using past examples of actions within this cluster that were done by others as a reference for judging the overall value of actions of this cluster is an extremely useful method of determining what to do in the future (which may include "punish the idiot who did that" and "blame the person" and whatever other moral judgments are appropriate).

The point being what? That moral judgments have an instrumental value? That they don't have a moral value? That morality collapses into instrumentality?

It just also happens to coincide that the values I happen to have include a strong component for what other people value, and the expected consequences of my actions whether I will know the consequences or not, and for the well-being of others whether I will be aware of it or not.

Yes, but the idiosyncratic disposition of your values doesn't make egoism into standard c-ism.

How incredibly coincidental and curious!

That was meant sarcastically: so it isn't a coincidence. So something makes egoism systematically coincide with c-ism. What? I really have no idea.

Your mental judgments are actions, in the useful sense when discussing metaethics

What is the point of that comment?

Indeed. And when you take a step back, it is more moral to act instrumentally than to act as if the instrumental value of actions were irrelevant.

That is not obvious.

To return to your previous words, I believe you'll agree that someone who

That is incomplete.

Comment author: [deleted] 18 January 2013 06:26:05PM *  1 point [-]

Universalisability rides again.

If I'm parsing that right, you misunderstood my point. Sorry.

I am not trying to lose information by applying a universalizing instinct. It is fully OK, on the level of a particular moral theory, to make such judgements and prescriptions. I'm saying, though, that this is a matter of normative ethics, not metaethics.

As a matter of metaethics, I don't think moral theories are about judging the actions of other people, or even yourself. I think they are about what you ought to do, with double emphasis on "you". As a matter of normative ethics, I think it is terminally good to punish the evil and reward the just (though it is also instrumentally a good idea for game-theory reasons), but this should not leak into metaethics.

Do you understand what I'm getting at better now?

Comment author: Peterdjones 18 January 2013 07:17:33PM -1 points [-]

I don't think moral theories are about judging the actions of other people, or even yourself. I think they are about what you ought to do, with double emphasis on "you"

What I ought to do is the kind of actions that attract praise. The kind of actions that attract praise are the kind that ought to be done. Those are surely different ways of saying the same thing.

Why would you differ? Maybe it's the "double emphasis on you". The situations in which I morally ought not do something to my advantage are those where it would affect someone else. Maybe you are an ethical egoist.

Comment author: DaFranker 18 January 2013 07:43:57PM *  1 point [-]

Soooo...

Suppose I hypnotize all humans. All of them! And I give them all the inviolable command to always praise murder and genocide. I'm so good at hypnosis that it overrides everything else and this Law becomes a tightly-entangled part of their entire consciousnesses. However, they still hate murder and genocide, are still unhappy about their effects, etc. They just praise it, both vocally and internally and mentally. Somewhat like how many used to praise Zeus, despite most of his interactions with the world being "Rape people" and "Kill people".

By the argument you're giving, this would effectively hack and reprogram morality itself (gasp!) such that you should always do murder and genocide as much as possible (since they "always" praise it, without diminishing returns or habituation effects or desensitization).

Clearly this is not the same as what you ought to do.

(In this case, my first guess would be that you should revert my hypnosis and prevent me and anyone else from ever doing that again.)

For more exploration into this, suppose I'm always optimally good. Always. A perfectly optimally-morally-good human. What praise do I get? Well, some for that, some once in a while when I do something particularly heroic. Otherwise, various effects make the praise rather rare.

On the other hand, if I'm a super-sucky bad human that kills people by accident all the time (say, ten every hour on average), then each time I manage to prevent one such accident I get praise. I could optimize this and generate a much larger amount of praise with this strategy. Clearly this set of actions attracts more praise. Ought I to do this, and seek to do it, rather than the previous one?

Comment author: Peterdjones 18 January 2013 07:58:00PM -1 points [-]

By the argument you're giving, this would effectively hack and reprogram morality itself (gasp!) such that you should always do murder and genocide as much as possible (since they "always" praise it, without diminishing returns or habituation effects or desensitization).

No. Good acts are acts that should be praised, not acts that happen to be. I said the relationship between ought/good/praise was analytical, i.e. semantic. You don't change that kind of relationship by re-arranging atoms.

Comment author: DaFranker 18 January 2013 08:20:37PM *  0 points [-]

And what's the rule, the algorithm, then, for deciding which acts should be praised?

The only such algorithm I know of is looking at their (expected) consequences and checking whether the resulting possible futures are more desirable for some set of human minds (preferably all of them) -- a very complicated function that we don't yet have access to and try to estimate using our intuitions.

Which seems, to me, isomorphic to praiseworthiness being an irrelevant intermediary step that just helps you form your intuitions, and points towards some form of something-close-to-what-I-would-call-"consequentialism" as the best method of judging Good and Bad, whether of past actions of oneself or others, or of possible actions to take for oneself or others.

Comment author: Peterdjones 18 January 2013 08:38:54PM 0 points [-]

Moral acts and decisions are a special category of acts and decisions, and what makes them special is the way they conceptually relate to praise, blame, and obligation.

Which seems, to me, isomorphic to praiseworthiness being an irrelevant intermediary step that just helps you form your intuitions,

Where did I differ? I said there was a tautology-style relationship between Good and Praiseworthy, not a step-in-an-algorithm-style relationship.

and points towards some form of something-close-to-what-I-would-call-"consequentialism" as the best method of judging Good and Bad, whether of past actions of oneself, or others, or of possible actions to take for oneself, or others

But that wasn't what you were saying before. Before, you were saying it was all about JGWeissman.

Comment author: shminux 18 January 2013 08:25:17PM 0 points [-]

However, they still hate murder and genocide, are still unhappy about their effects, etc. They just praise it, both vocally and internally and mentally.

How can you hate something yet praise it internally? I'm having trouble coming up with an example.

Comment author: DaFranker 18 January 2013 08:30:25PM 4 points [-]

I know a very good one, very grounded in reality, that millions if not billions of people have and do this.

Death.

Comment author: [deleted] 18 January 2013 08:35:26PM *  0 points [-]

I don't see what you're getting at. I'll lay out my full position to see if that helps.

First of all, there are separate concepts for metaethics and normative ethics. They are a meta-level apart, and mixing them up is like telling me that 2+2=4 when I'm asking whether 4 is an integer.

So, given those rigidly separated mental buckets, I claim, as a matter of metaethics, that moral theories solve the problem of what ought to be done. Then, as a practical concern, the only question interesting to me is "what should I do?", because it's the only one I can act on. I don't think this makes me an egoist, or is in fact any evidence at all about what I think ought to be done, because what ought to be done is a question for moral theories, not metaethics.

Then, on the level of normative ethics, i.e. looking from within a moral theory (which I've decided answers the question "what ought to be done"), I claim that I ought to act in such a way as achieves the "best" outcome, and if outcomes are morally identical, then the oughtness of them is identical, and I don't care which is done. You can call this "consequentialism" if you like. Then, unpacking "best" a bit, we find all the good things like fun, happiness, freedom, life, etc.

Among the good things, we may or may not find punishing the unjust and rewarding the just. I suspect we do find it. I claim that this punishableness is not the same as the rightness that the actions of moral agents have, because it includes things like "he didn't know any better" and "can we really expect people to...", which I claim are not included in what makes an action right or wrong. This terminal punishableness is also mixed up with the instrumental concerns of incentives and game theory, which I claim are a separate problem to be solved once you've worked out what is terminally valuable.

So, anyway, this is all a long-winded way of saying that when deciding what to do, I hold myself to a much more demanding standard than I use when judging the actions of others.

Comment author: Peterdjones 18 January 2013 08:48:55PM 0 points [-]

What's wrong with sticking with "what ought to be done" as a formulation?

I claim that I ought to act in such a way as achieves the "best" outcome,

Meaning others shouldn't? Your use of the "I" formulation is making your theory unclear.

I claim that this punishableness is not the same as the rightness that the actions of moral agents have, because it includes things like "he didn't know any better" and "can we really expect people to...",

They seem different to you because you are a consequentialist. Consequentialist good and bad outcomes can't be directly translated into praiseworthiness and blameworthiness because they are too hard to predict.

So, anyways, this is all a long widned way of saying that when deciding what to do, I hold myself to a much more demanding standard than I use when judging the actions of others.

I don't see why. Do you think you are much better at making predictions?