by [anonymous]

Preface

I have trouble expressing myself in such a way that my ideas come out even remotely like they sound in my head. So please apply the principle of charity and try to read this the way you think I meant it.

Tit for Tat

Tit for Tat is usually presented in a game between two players where each chooses to either cooperate or defect. The real-world game, however, differs in two important ways.

First, it's not a two-player game. We base our choices not only on our own interactions with a player but also on interactions we observe between that player and others. Thus Advanced Tit for Tat defects not only if the other player defected against itself, but also if it observed the other player defecting against any other player that employs a similar enough algorithm.
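A minimal sketch of what such a strategy could look like in code (my own toy construction; the class name, the similarity set, and the memory structure are all illustrative assumptions, not an established algorithm):

    COOPERATE, DEFECT = "C", "D"

    class AdvancedTitForTat:
        def __init__(self, me, similar_players):
            self.me = me
            # Players whose decision algorithm we judge close enough to our own.
            self.similar = set(similar_players)
            self.known_defectors = set()

        def observe(self, actor, target, action):
            # Remember anyone seen defecting against us or against a
            # similar-enough player.
            if action == DEFECT and (target == self.me or target in self.similar):
                self.known_defectors.add(actor)

        def choose(self, opponent):
            # Ordinary tit for tat, generalized to observed third-party play.
            return DEFECT if opponent in self.known_defectors else COOPERATE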

Second, there is a middle ground between cooperating and defecting: you can stay neutral. Thus you can harm your opponent, help him, or do neither. The question of the best strategy in this real-life prisoner's dilemma is probably still unanswered. If I see my opponent defecting against some of my peers and cooperating with others, what do I choose?
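To make the three-option game concrete, here is a toy payoff table, in the same sketch style as above (the numbers are placeholders chosen only so that helping costs the helper, harming tempts the harmer, and the payoffs are symmetric):

    # Entries are (my payoff, opponent's payoff); values are arbitrary beyond
    # the ordering: being helped > neutrality > being harmed.
    PAYOFFS = {
        ("cooperate", "cooperate"): (3, 3),
        ("cooperate", "neutral"):   (-1, 2),
        ("cooperate", "defect"):    (-3, 4),
        ("neutral",   "cooperate"): (2, -1),
        ("neutral",   "neutral"):   (0, 0),
        ("neutral",   "defect"):    (-2, 1),
        ("defect",    "cooperate"): (4, -3),
        ("defect",    "neutral"):   (1, -2),
        ("defect",    "defect"):    (-1, -1),
    }

With these numbers defection strictly dominates in a one-shot game, which is what makes the choice of a repeated-game strategy interesting.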

Agency

The reason there even is a game is that we can deliberate on our actions and take abstract thoughts into account that do not directly pertain to the current situation, which I think is what distinguishes higher animals from lower ones. This ability is called agency. In order to be an agent, a subject must be able to perceive the situation, have a set of possible actions, model the outcomes of these actions, value the outcomes, and then act accordingly.

We could act in ways that infringe on these abilities in others. If we limit their ability to perceive or model the situation, we call it fraud; if we limit their set of possible actions or their ability to choose between them, we call it coercion; if we infringe on their ability to value an outcome, we call it advertising.

Ethics

I propose that the purpose of our moral or ethical intuitions (I use the two words interchangeably; if there is a distinction, please let me know) is to tell us whether some player defected, cooperated, or stayed neutral, and to tell us whom we should consider as having a decision algorithm close enough to our own to 'punish' third players for defecting against them. And I further propose that infringing on someone's agency is what we consider defecting.

Value Ethics

Utilitarians tend to see defecting or cooperating as pertaining to the creation or destruction of value. (Edit:) Three things bother me about value ethics:

1. Valuations between different people can't really be compared. If we shut up and multiply, we value the lives of everybody exactly the same, no matter how they themselves value their own lives. If there are chores to be done and one person claims to "not mind too much" while the other claims to "hate it with a passion", we can't tell if the emotional effect on them is really any different or maybe even the other way round.

2. It makes you torture someone to avoid an insanely huge number of dust specks.

3. It makes you push a fat man to his death.

Agency ethics

Instead I propose that defecting in the real-world game is all about infringing on someone's agency. Thus we intuit that bankers who destroy an insane amount of wealth, while not good people, are still neutral, because they do not infringe on agency. At least that is my moral intuition.

So infringing on agency would make you a bad person, while not infringing on agency doesn't make you a good person. What makes you a good person is increasing value. Maybe agency is more fundamental and you cannot be a good person if you are a bad person, but maybe you can be both. That would create cognitive dissonance in people who consider ethics to be a singular thing and don't see the distinction, and that might be at the root of some ethics discussions. 

Evil

In my version of ethics it counts as evil to push the fat man or to switch the tracks, as that would mean deliberately causing the death of someone who doesn't want to die. I would let the five die and not feel guilty about it, because I am not the cause of their deaths. I make a fundamental distinction between acting and not acting. If I hadn't been there, the five would still die, so how could I be responsible for their deaths? I am aware that this view makes me evil in the eyes of utilitarians. But I see fewer people acting consistently with utilitarianism than I see people arguing that way. Then again, this perception is probably heavily biased.

Conclusion

I don't really have a conclusion, except to note that there exists a disagreement about fundamental morality, and to inform you that there exists at least one person who considers infringing on someone's agency as defecting in a prisoner's dilemma.

Comments
[-]PDH

"I would let the five die and not feel guilty about it, because I am not the cause of their deaths."

A more charitable way of phrasing the consequentialist PoV here is that we care more about stopping the deaths than avoiding feelings of guilt. Yes, it's true that on certain accounts of morality you can't be held responsible for the deaths of five people in a Trolley Problem-esque scenario but the people will still be dead and consequentialism is the view that consequences trump all other considerations, like adherence to a deontological moral code, acting as a virtuous person would act, acting in a way that is perfecting of one's teleological nature etc.

Whether or not to hold people responsible for certain actions, like everything else on consequentialism, is, for us, a matter of determining whether that would lead to the best consequences.

Now, having said that, in practice consequentialists will behave like deontologists, virtue ethicists etc. quite a lot of the time. The reason for this is that having everyone go around making individual act utilitarian calculations for every potentially moral decision would likely be catastrophic and consequentialists are committed to avoiding terrible consequences whenever possible. It is usually better to have rules that are held to be exceptionless (but which can be changed to some degree, as is exactly the case with law) and teach people to have certain virtuous qualities so that they won't be constantly looking for loopholes in the rules and so on.

Does this mean that deontology, virtue ethics etc. are correct after all? No! Because it's still all being justified on consequentialist grounds, which is how we decide which rules to have and what counts as a virtue etc. in the first place. They will be the rules and virtues that lead to the best real world consequences. Because the kind of philosophical scenarios under which it is morally correct to push a fat man off a bridge are carefully constructed to be as inconvenient as possible the rules and virtues that we use in practice will probably forbid pushing people off bridges or raising the kind of person who would do that. This will mean that once in a blue moon someone really will find themselves in such a scenario and that person will likely make the wrong decision. It will also mean that the majority of the time, people will be making the right decisions and the consequences, overall, will be better than they would be if we let people decide for themselves on a case-by-case basis whether murdering someone maximised utility or not.

Consequentialism will still have counter-intuitive results. It should! There's no reason to think that our intuitions are an infallible guide to what's right. However, the kind of consequentialism attacked in a lot of philosophical arguments is a pretty naive version and it would be much more productive for everyone if we focussed on the stronger versions.

[-][anonymous]

Because it's still all being justified on consequentialist grounds, which is how we decide which rules to have and what counts as a virtue etc. in the first place. They will be the rules and virtues that lead to the best real world consequences.

The problem here is that you need to justify why you call some consequences better than others, because I might beg to differ. If you say "I just do", I would have to pull out my gun and say "well, I don't". In this scenario morality is reduced to might makes right, but then why call it morality? I think the purpose of morality is to give me a guideline for deciding, even when I consider some consequences much more preferable than others, not to act on this preference, because acting on it would negate our ability to peacefully coexist. In which case you might respond that our inability to peacefully coexist is a consequence that I am taking into account, which I think means we either talk about different things and don't actually disagree, or your reasoning is circular.

If it is the case that we merely talk about different things, I still think it is a good thing to make what I call agency ethics explicit so that we don't forget to take its consequences into account.

If you meet a paperclip maximizer, pulling out your gun could be a moral response. No, it wouldn't mean "might makes right"; the causality goes in the opposite direction: in this specific situation, force could be the best way to achieve a moral outcome. We use violence against e.g. viruses or bacteria all the time.

With humans it's complicated because we actually don't know our own values. What we feel are approximations, or deductions based on potentially wrong premises. So there is a very real possibility that we will do something to maximize our values, only to realize later that we actually acted against our values. Imagine an atheist reflecting on a memory where, as a former believer, he burned a witch. (What strategy could he have followed as a believer to avoid this outcome?)

So we have some heuristics about moral reasonings that are more likely to change, or less likely to change, and we kinda try to take this into account. It's usually not explicit, because, well, being open about a possibility that your values may change in the future (and debating which ones are most likely to) does not bring you much applause in a community built around those values. But still, our moral judgement of "hurting random people is evil" is much more stable than our moral judgement of "we must optimize for what Lord Jehovah wants". Therefore, we hesitate to torture people in the name of Lord Jehovah, even when, hypothetically, it should be the right thing to do. There are people who don't do this discounting and always do the right thing; we call them fanatics, and we don't like them, although it may be difficult or impossible to explain explicitly why. But in our minds, there is this intuition that we might be wrong about what the right thing is, and that in some things we are more likely to be wrong than in some other things. In some way we are hedging our moral judgements against possible future changes of our values. And it's not some kind of Brownian motion of values; we can feel that some changes are more likely than other changes.

And this is probably the reason why we don't pull a gun on a person living an immoral, but not too horrible, life. (At some level, we do: like, if there is a terrorist saying that he will execute his hostages, then obviously shoot him if you can.)

[-][anonymous]

I'm not quite sure what your point is and how that relates to what I have written.

The paperclip maximizer, the fanatic and the terrorist all violate agency ethics and the virus is not even an agent.

If you are opposed to my explanations, can you find an example where retribution is justified without the other party violating agency, or where someone violates agency while retribution in kind is unjustified?

Sorry. I have reacted only to a part of your previous comment, but not to your original argument. So, uhm, here is a bit silly scenario that examines agency:

There is a terrorist somewhere, holding your family as hostages. He announced that he is going to execute them in five minutes.

There are no policemen nearby, and they can't get there in five minutes. Luckily, there is one former soldier. Unfortunately, he doesn't have a gun with him. Fortunately, there is some other guy, who has a gun, but is not interested in the situation.

So, this soldier goes to the guy with a gun and asks silently: "Excuse me. We have this situation here, with only one terrorist, who is not paying good attention to what happens around him. Luckily, I was trained exactly for this kind of situation, and could reliably kill him with one shot. Could I borrow your gun, please? Of course, unless you want to do this yourself."

And the guy says: "Uhm, I don't care. I have no big problem with giving you my gun, but right at this moment I am watching a very interesting kitten video on youtube. It only takes ten minutes. So please don't disturb me, I really enjoy watching this video. We can discuss the gun later."

So the soldier, respecting this guy's agency, waits respectfully. Ten minutes later (after your family was executed, and the terrorist now has some new hostages), the video ends, the guy asks: "Sorry, what did you need that gun for?" "To kill a terrorist." "Yeah, no problem, take it." The soldier kills the terrorist and everyone goes home. I mean, except for the terrorist and your family; they are dead.

How happy are you about the fact that the soldier respected that guy's decision to finish watching the kitten video undisturbed? Imagine that the soldier had the option to inconspicuously turn off the wi-fi, so the guy would have paid attention sooner; would that have been the ethically preferable option?

[-][anonymous]

The terrorist would be an agent, diminishing the value of your scenario, so let's say a bear is mauling a friend of mine while the guy watching cats on the internet is sitting on his bear repellent. I could push the guy away and save my friend, which of course I would do. However, I'm still committing an infraction against the guy whose bear repellent I stole; I cannot argue that it would have been his moral duty to hand it over to me, and the guy has the right to ask for compensation in return. So I'm still a defector and society would do well to defect against me in proportion, which in this scenario I am of course perfectly willing to accept.

Now let's say that two people are being mauled by the bear and the guy's brain is somehow a bear repellent. Should I kill the guy? The retribution I deserve for that would be proportionally worse than in the first case. I might choose to, but I'd be a murderer and deserve to die in return.

So I'm still a defector and society would do well to defect against me in proportion

Which, of course, they wouldn't do. They wouldn't have much sympathy for the guy sitting on the bear repellent who chose not to help. In fact, refusing to help can be illegal.

I suppose in your terms, you could say that the guy sitting on the repellent is a defector, therefore it's okay to defect against him.

[-][anonymous]

I suppose in your terms, you could say that the guy sitting on the repellent is a defector, therefore it's okay to defect against him.

No. My point is that the guy is not a defector. He merely refuses to cooperate, which is an entirely different thing. So I am the defector, whether or not society chooses to defect in return. And I really mean that society would do well to defect against me proportionally in return, in order to discourage defection. Or to put it differently: if I want to help and the guy does not, why should he have to bear (no pun intended) the cost and not me?

Societies often punish people that refuse to help. Why not consider people that break the law as defectors?

In fact, that would be an alternative (and my preferred) way to fix your second and third objections to value ethics. Consider everyone who breaks the laws and norms within your community a defector. Where I live, torture is illegal and most people think it's wrong to push the fat man, so pushing the fat man is (something like) breaking a norm.

Have you read Whose Utilitarianism?? Not sure if it addresses any of your concerns, but it's good and about utilitarianism.

Okay, makes sense. There could be a technical problem with evaluating a punishment "in proportion", because some things could be difficult to evaluate, but that is also a (much greater) problem in consequentialist ethics.

[-]Jiro

Perhaps precommitting works here. It's a bad idea to make a rule "you must respect people's agency except when you really need to violate it". Adopting that rule would be beneficial in specific situations (like the one above) but generally would end in disaster.

If you instead make a rule "you must respect people's agency unconditionally", that rule is more practical. But you can't make that rule and then change your mind when you're in one of the rare situations where the other way happens to be better--if you did that, so would everyone else and you'd be screwed, on the average. So instead you precommit to following the rule and always respect people's agency, even when not doing so is beneficial.

It's a counterfactual mugging where instead of having to be the kind of person who would give Omega $100, you precommit to be the kind of person who would let his family die in this scenario because it benefits him in counterfactual scenarios. Thus, letting your family die here is ethical (although it may not be something people could realistically be expected to follow.)

(I don't believe this, by the way, because while you can't make a rule "respect people's agency unless you really need to violate it" you can have a rule that says "respect people's agency unless your excuse is good enough to convince a jury".)

If infringing on agency is bad, then would infringing on someone's agency to keep five people from having their agency infringed be good or bad?

[-][anonymous]

Depends on whether the person whose agency you consider infringing is involved in the infringement of the five's agency. The rule is something like: "As long and as far as you do not infringe on someone's agency, your agency should not be infringed on."

Assume they're not involved.

Does the agency of the five people not matter? Is it perfectly okay for someone else to infringe on the agency of an innocent, and it's only a problem if you do it?

[-][anonymous]

I'm not sure what your point is. Infringing on someone's agency is defecting, and we defect not only against people who defect against us but also against those we observe defecting against others. So it's most certainly not OK. But defecting against a defector does not sanction my defecting against a non-defector.

Edit: maybe an example scenario would help.

My point is that if defecting against innocents is bad, it would be reasonable to minimize the amount of defection against innocents.

[-][anonymous]

if defecting against innocents is bad, it would be reasonable to minimize the amount of defection against innocents.

By defecting against an innocent, which makes defecting against an innocent not bad, in contradiction to the assumption. You're running into a paradox.

All choices involve someone defecting against an innocent. I figured I'd go with the least bad choice.

I'm not very good at moral philosophy, but any decision-process that results in 5 people dying in order to avoid "feeling guilty" isn't something I can accept.

Especially since I'm not sure I follow your reasoning. If infringing on someone's agency (e.g. killing someone) is wrong, then infringing on the agency of five people is at least equally wrong, but probably more wrong. By not killing the one person, you infringe on five times as much agency.

[-][anonymous]

I don't think that failing to act is infringing on the five's agency, since they would die anyway if I wasn't there. So I'm not limiting their choices; I'm merely failing to increase them.

And try to see it from my perspective, where any decision-process that results in one innocent person being tortured in order to somewhat improve the lives of others isn't something I can accept.

But in the Trolley Problem thought-experiment you are there. If you aren't there, your actions won't have any relevance to the situation anyway.

You state that infringing on someone's agency is evil (makes someone a bad guy). How are your actions (just as not choosing is also a choice, not acting can also be an action), which lead to the loss of five agencies as opposed to one, justified in this framework?

And torture vs. dust-specks is a different beast from the Trolley Problem, in my opinion, so I'm not going to respond to that part.

[-][anonymous]

If I enter your house, clean your dishes, and then leave, when you come home you see your clean dishes and can deduce that someone was in your house. If on the other hand I don't change anything, you cannot. So acting and not acting are in fact different, and axiomatically denying that difference makes you draw a, in my opinion, wrong conclusion.

I'm more troubled by utilitarianism's prescriptions about the real world than by extreme hypotheticals.

Under utilitarianism it is the highest moral good for everyone to try to satisfy everyone else's preferences equally - because the most moral action is the one that maximizes global utility. In other words, one should value one's own life no more than that of a random child in the middle of Botswana. In order to act consistently with this value system, it would be necessary to devote virtually all of your time to helping other people. No entertainment or free time or luxury allowed, except the minimum amount necessary to keep you productive enough to help other people. This is the utilitarian ideal.

[-][anonymous]

One of my professors once gave a lecture on how utilitarianism was originally intended to be an ethical theory specifically for government. I haven't done enough reading to be able to argue either for or against this, but it makes sense -- avoids this failure mode and feels like it fits with the "if not divine right, then what?" philosophical upheaval.

I can sort of agree with utilitarianism in principle, but in real cases it is exceedingly hard or impossible to work out the total net result of any action. Someone might watch some totally contrived cases on "24" and then think "Make a note to Justice Department: Torture OK". You don't have the omniscience that the screenwriter passes to the viewers, and you can't count on secrecy (i.e. on not setting a precedent for the rest of the world). Lots of people are liable to imagine, in a crisis, that torturing this person will have an overall good effect, and I think they are at least 99% of the time wrong.

And yes, given the hundreds of millions of people involved, I wouldn't argue against the proposition that any tax will cause somebody's death. But that's trivial. Any change will likely cause somebody's death due to some butterfly effect or other, and NOT taxing is clearly no exception, since taxes pay for police, firemen, etc. I'm not making this argument for taxes; I'm just saying the argument is no good.

IMHO most non-religious ethical principles have something to recommend them, and should probably be invoked in the most extreme and obvious cases that don't involve serious harm to an innocent person. It's one-principle people who think they can do all the moral math that scare me.

So, if the government is forcing you to pay taxes it is infringing on your agency and is therefore evil?

...we can't tell if the emotional effect on them is really any different or maybe even the other way round.

So what? You make decisions in conditions of uncertainty. Use Bayesian expected utility.

It makes you torture someone to avoid an insanely huge number of dust specks.

For one thing, it is conceivable that the right thing to do in a counterintuitive situation is counterintuitive. For another, if you believe (like myself) in bounded utility (diminishing returns) then it's not necessarily a correct conclusion.
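As a toy illustration of the bounded-utility point (the saturating shape and every constant below are assumptions of mine, purely for illustration): if the total disutility of many small harms saturates below the disutility of torture, then no number of specks ever adds up to it.

    import math

    SPECK_CAP = 100.0      # assumed asymptotic disutility of arbitrarily many specks
    SPECK_HARM = 1e-9      # assumed marginal disutility of a single speck
    TORTURE_HARM = 500.0   # assumed disutility of torture, above the speck cap

    def total_speck_disutility(n):
        # Saturating aggregation: approaches SPECK_CAP as n grows without bound.
        return SPECK_CAP * (1 - math.exp(-n * SPECK_HARM / SPECK_CAP))

    for n in (10**9, 10**15, 10**30):
        print(f"{n:.0e} specks -> {total_speck_disutility(n):.2f}")
    print("one torture ->", TORTURE_HARM)
    # Even 10**30 specks stays below 100, while torture costs 500.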

It makes you push a fat man to his death.

Why?!

Maybe agency is more fundamental and you cannot be a good person if you are a bad person, but maybe you can be both.

So, is being neutral better, worse or incomparable with being good & bad?

I would let the five die and not feel guilty about it, because I am not the cause of their deaths.

OK, so you value your own innocence more than you value the lives of other people. But to what extent? What if it's 50 instead of 5? 5000? 5 million?

[-][anonymous]

So, if the government is forcing you to pay taxes it is infringing on your agency and is therefore evil?

The way it is mostly done today, probably yes. However, taxing use of natural resources or land ownership / stewardship could still be done morally. This is not an argument for anarchy.

So what? You make decisions in conditions of uncertainty. Use Bayesian expected utility.

It's just something that bothers me about utilitarianism, not something I consider indefensible.

Pushing the fat man to his death is the second part of the trolley problem, where you need to do it to save the five.

So, is being neutral better, worse or incomparable with being good & bad?

That's an open question. I just wanted to point out that in this case there would be a cognitive dissonance between one part of the brain telling us to defect while the other tells us to cooperate, and my argument is that we should be aware of this cognitive dissonance to make a grounded moral decision.

OK, so you value your own innocence more than you value the lives of other people. But to what extent? What if it's 50 instead of 5? 5000? 5 million?

You're trying to push my concept of ethics back into the utilitarian frame while I was trying to free it from that. Of course there is a point where I value life more than my innocence, and my brain would just act and rationalize it with the 'at least I feel guilty' self-delusion. But that is exactly my point: even then it would still be morally wrong to kill one person. My more realistic version of the interaction game does not account for that kind of asymmetric payout, and it might turn out that defecting against the defector in this situation is no longer the best strategy. I personally would not defect even against someone killing the one to save the five, other than pointing out the possible immorality and refusing to cooperate, staying neutral.

...taxing use of natural resources or land ownership / stewardship could still be done morally.

Why? How is that not a violation of agency?

Also, what about children? Is forbidding them from eating too much candy evil because it violates their agency?

You're trying to push my concept of ethics back into the utilitarian frame while I was trying to free it from that.

Well, either your ethics can be formulated as maximizing a utility function, in which case I want to understand that utility function, or your ethics conflicts with the VNM axioms, in which case I want to make the conflict explicit.
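For reference, the von Neumann-Morgenstern result I'm appealing to, stated informally (standard textbook material, nothing specific to this discussion): if a preference relation over lotteries satisfies completeness, transitivity, continuity, and independence, then there is a utility function u, unique up to positive affine transformation, such that

    \[ p \succeq q \iff \sum_i p_i\, u(x_i) \;\ge\; \sum_i q_i\, u(x_i) \]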

Of course there is a point where I value life more than my innocence, and my brain would just act and rationalize it with the 'at least I feel guilty' self-delusion. But that is exactly my point: even then it would still be morally wrong to kill one person.

I don't get it. Are you saying not killing the one person is the right decision even if millions of lives are at stake?

My more realistic version of the interaction game does not account for that kind of asymmetric payout, and it might turn out that defecting against the defector in this situation is no longer the best strategy.

How is it "more realistic" if it neglects to take asymmetry into account?

I personally would not defect even against someone killing the one to save the five, other than pointing out the possible immorality and refusing to cooperate, staying neutral.

OK, but are there stakes high enough for you to cooperate?

[-][anonymous]

Why? How is that not a violation of agency?

Land and natural resources are just there and not a product of your agency. If many people want to make use of them, none can, as they will be at odds; so the natural state is that nobody can act using natural resources. If we prohibit their use, we're not limiting agency, as there is none to begin with; but if all but one person agree not to use these resources, that one person's agency is increased, as he now has more options. The land tax would be compensation for the other people's claim that they'd have to give up, which is a perfectly fine trade.

I think to make that argument sufficiently detailed would require a new top-level post or at least its own comment thread.

what about children?

Children don't have full agency, which is why we need to raise them. I think the right of parents to decide for their children diminishes as the child's agency increases, and that government has a right to take children away from parents who don't raise them to agency.

either your ethics can be formulated as maximizing a utility function, [...]

I have a utility function because I value morality, but using that utility function to explain the morality that I value would be circular reasoning.

I don't get it. Are you saying not killing the one person is the right decision even if millions of lives are at stake?

It would be the moral decision, not necessarily the right decision. I'm using morality to inform my utility function, but I can still make a utility tradeoff. The whole point of agency ethics vs. value ethics is to separate the morality consideration from the utility consideration. Killing the one would, as I put it, make me both a bad and a good person, and people could still think that the good in this instance outweighs the bad. My point is that when we mash the two together into a single utility consideration, we get wrong results like killing the organ donor, because we neglect the underlying agency consideration.

How is it "more realistic" if it neglects to take asymmetry into account?

I meant 'more realistic' than the simple prisoner's dilemma, but it's not realistic enough to show how defecting against a defector might not always be the best strategy with asymmetrical payoffs.

OK, but are there stakes high enough for you to cooperate?

I don't know what you mean.

The land tax would be compensation for the other people's claim that they'd have to give up, which is a perfectly fine trade.

OK, let's do a thought experiment. On planet K, labor is required to keep the air temperature around 25C: let's say, in the form of operating special machines. The process cannot be automated, and if an insufficient number of machines is manned, the temperature starts to rise towards 200C. The phenomenon is global, and it is not possible to use the machines to cool a specific area of the surface. Now, 10% of the population are mutants who can survive the high temperature. The temperature-resistance mutation also triggers highly unusual dreams. This allows the mutants to know themselves as such, but there is no way to determine that a given person is a mutant (except subjecting her to high temperature for a sufficient amount of time, which seems to constitute a violation of agency if done forcibly). Normal people (without the mutation) would die if the cooling machines ceased operation.

Is the government within their rights to collect tax from the entire population to keep the machines operating?

Children don't have full agency, which is why we need to raise them. I think the right of parents to decide for their children diminishes as the child's agency increases, and that government has a right to take children away from parents who don't raise them to agency.

Why does the government have this right? If the children don't have agency the parents cannot defect against them therefore the government has no right to defect against the parents.

It would be the moral decision, not necessarily the right decision.

If moral decision =/= right decision, how do you define "moral"? Why is this concept interesting at all? Maybe instead you should just say "my utility function has a component which assigns negative value to violating the agency of other people" (btw, this would hold for me too). Regarding the discussion above, it would also mean that e.g. collecting tax can be the right thing to do even if it violates agency.

[-][anonymous]

Maybe instead you should just say "my utility function has a component which assigns negative value to violating the agency of other people"

Let's say I value paper clips. And you are destroying a bunch of paper clips that you created. What I see is that you are destroying value and I refuse to cooperate in return. But, you are not infringing on my agency, so I don't infringe on yours. That is an entirely separate concern not related to your destruction of value. So merely saying I value agency hides that important distinction.

I'm concerned that you are mixing two different things. One thing is that I might hold "not violating other people's agency" as a terminal value (and I indeed believe I and many other people have such a value). This wouldn't apply to a paperclip maximizer. Another is a game-theoretic phenomenon in which I (either causally or acausally) agree to cooperate with another agent. This would apply to any agent with a "sufficiently good" decision theory. I wouldn't call it "moral", I'd just call it "bargaining". The last point is just a matter of terminology, but the distinction between the two scenarios is fundamental.

[-][anonymous]

Although I've come to expect this result, it still baffles me.

I'm concerned that you are mixing two different things.

Those two things would be value ethics and agency ethics, and I'm the one trying to keep them apart while you are conflating them.

I wouldn't call it "moral", I'd just call it "bargaining".

But we're not bargaining. This works even if we never meet. If agency is just another terminal value, you can trade it for whatever else you value, and by that you are failing to make the distinction that I'm trying to show. Only because agency is not just a terminal value can I make a game-theoretic consideration outside the mere value comparison.

Agency thus becomes a set of guidelines that we use to judge right from wrong outside of mere value calculations. How is that not what we call 'morality'?

but the distinction between the two scenarios is fundamental.

And that would be exactly my point.

But we're not bargaining. This works even if we never meet.

Yeah, which would make it acausal trade. It's still bargaining in the game theoretic sense. The agents have a "sufficiently advanced" decision theory to allow them to reach a Pareto optimal outcome (e.g. Nash bargaining solution) rather than e.g. Nash equilibrium even acausally. It has nothing to do with "respecting agency".
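To make the bargaining claim concrete, here is a minimal sketch in an ordinary prisoner's dilemma (my illustration: textbook payoffs, the search restricted to pure outcomes for simplicity, and mutual defection taken as the disagreement point):

    from itertools import product

    ACTIONS = ("cooperate", "defect")
    # Standard prisoner's dilemma payoffs: (row player, column player).
    PAYOFF = {
        ("cooperate", "cooperate"): (3, 3),
        ("cooperate", "defect"):    (0, 5),
        ("defect",    "cooperate"): (5, 0),
        ("defect",    "defect"):    (1, 1),
    }

    # Mutual defection is the Nash equilibrium; treat it as the disagreement point.
    d1, d2 = PAYOFF[("defect", "defect")]

    # Nash bargaining (over pure outcomes): among outcomes at least as good as
    # disagreement for both players, maximize the product of the gains.
    candidates = [o for o in product(ACTIONS, repeat=2)
                  if PAYOFF[o][0] >= d1 and PAYOFF[o][1] >= d2]
    best = max(candidates, key=lambda o: (PAYOFF[o][0] - d1) * (PAYOFF[o][1] - d2))
    print(best, PAYOFF[best])  # ('cooperate', 'cooperate') (3, 3)

Sufficiently good bargainers end up at mutual cooperation while equilibrium play lands on mutual defection; no notion of agency enters anywhere.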

[-][anonymous]

Is the government within their rights to collect tax from the entire population to keep the machines operating?

It's about what you tax, not what for.

If the children don't have agency the parents cannot defect against them therefore the government has no right to defect against the parents.

The children have potential agency. I didn't account for that in my original post but I consider it relevant.

If moral decision =/= right decision, how do you define "moral"? Why is this concept interesting at all?

It is interesting precisely because it is not already covered by some other concept. In my original phrasing, morality is about determining that someone is a defector, while the right decision is about whether or not defecting against the defector is the dominant strategy. Killing one guy to save millions is the right decision because I can safely assume that no one will defect against me in return. Killing one to save five is not so clear-cut. In that case people might kill me in order not to be killed by me.

it would also mean that e.g. collecting tax can be the right thing to do even if it violates agency.

That would be the 'necessary evil' argument. However since I believe taxes can be raised morally I don't consider the evil that is current forms of taxation to be necessary.

It's about what you tax, not what for.

But then the tax can exceed the actual value of the land, in which case the net value of the land becomes negative. This is troubling. Imagine for example that, due to taxes increasing or your income decreasing, you no longer have the means to pay for your land. But you can't sell it either, because its value is negative! So you have to pay someone to take it away, but you might not have enough money. Moreover, if the size of the tax is disconnected from the actual value of the land, your "moral" justification for the tax falls apart.

The children have potential agency. I didn't account for that in my original post but I consider it relevant.

OK, so you need to introduce new rules about interaction with "potential agents".

It is interesting precisely because it is not already covered by some other concept...

I don't object to the concept of "violating agency is bad", I'm objecting to equating it with "morality" since this use of terminology is confusing. On the other hand, names are not a matter of great importance.

However since I believe taxes can be raised morally I don't consider the evil that is current forms of taxation to be necessary.

Even if taxes can be raised consistently with your agency rule (assuming it receives a more precise formulation), it doesn't follow it is the correct way to raise taxes since there are other considerations that have to be taken into account, which might be stronger.

Your second and third objections to value ethics assume your conclusion. If utilitarianism is false, then it may be wrong to push a fat man to his death, but you can't assert that pushing a fat man to his death is wrong and therefore utilitarianism is false.

[-][anonymous]

It was not meant as a premise to derive a conclusion, but rather to show how my moral intuitions oppose value ethics.

Maybe I should have written "Three things bother me about value ethics"?

Who says that ethics has to be intuitive?

[-][anonymous]

If what you call 'ethics' is unrelated to what we intuitively understand as 'ethics', why call it that?

It's important to distinguish between the definition of the word "ethics" and the content of ethics. The definition of "ethics" is roughly something like "How one should act", "How one should be" , "What rules (if any) one should adopt", etc - it's an intuitive definition. The content of ethics is the specifics of how one should act, which need not be intuitive. If how one should act happens to be counterintuitive, that doesn't prevent it from being part of ethics because "ethics" means "how one should act".

For example, suppose it is true that you should push the fat man in front of a trolley, but your moral intuitions tell you otherwise. In that case, pushing the fat man is ethical, because ethics is what one should do, not what one intuits to be ethical.

[-][anonymous]

I'm not so sure about that. However, I'm also unsure about how to properly express my objection. I will try and maybe then we need to accept that we don't have the same information and Aumann's agreement theorem doesn't apply.

Our brain recognizes a pattern and informs us about that in the form of an intuitive understanding of a part of reality. Our language system then slaps a name onto that concept. However the content of the concept is the part that is formed by our intuition, not the name. If different people use different names for the same concept, that is preferable to different people using the same name to refer to different concepts.

We intuitively understand that people ought or ought not to do something and label that concept ethics or morality. If you think you discovered another concept, it should be named differently, or we could take these as the distinction between ethics and morality and agree that I am referring to whatever word we choose to mean what I intended it to mean.

The other side of that is that I may be mistaken about what people think when they talk about utilitarian ethics or morality, in which case we might have stumbled upon the point where we could dissolve a disagreement, which is a good thing. In that case I would like to ask: is it, in your opinion, relevant to the discussion of what people should or should not do that I consider certain actions as defecting, to which I feel a strong inclination to defect in return, including hurting or killing these people in order to stop their defecting? If no, then we are talking about different things and don't actually disagree. If yes, then we probably talk about the same thing, and I maintain that my moral intuitions do play a significant role in the argument.

We intuitively understand that people ought or ought not to do something and label that concept ethics or morality.

We intuitively have ideas that people ought or ought not to do something, and some people end their investigation of morality there, without further looking into what people ought to do, but that doesn't mean that ethics is limited to what people intuitively think people ought to do - ethics is what people should actually do, whether it's intuitive or not. For example, it may seem counterintuitive to push an innocent man in front of a trolley - he's innocent, so it's wrong to kill him, right? But assuming that the fat man and each person tied to the track have equal value, by not pushing the fat man, you're choosing the preservation of lesser value over the preservation of greater value, so even though pushing the fat man may seem unethical (because of your intuitions), it's actually the thing that you should do, and therefore it's ethical.

Is it, in your opinion, relevant to the discussion of what people should or should not do that I consider certain actions as defecting, to which I feel a strong inclination to defect in return, including hurting or killing these people in order to stop their defecting?

Maybe? I think I need an example to understand what you're asking.

[-][anonymous]

But assuming that the fat man and each person tied to the track have equal value, by not pushing the fat man, you're choosing the preservation of lesser value over the preservation of greater value,

I understand that is the definition of utilitarianism. But I'm saying that is not how we decide what should or should not be done. So how do we decide which of us is right? I'd say by going back to our intuitions. You seem to be assuming your conclusion by taking maximization of value as an axiom.

I think I need an example to understand what you're asking.

The fat man happens to be my friend. I see a guy trying to push him and shoot that guy (edit: fully aware that he tried to save the five). Would you call me a murderer (in the moral sense not the legal)? Or would you acknowledge that I acted defending my friend and just point out that the five who are now dead might have made this outcome less preferable?

You seem to be assuming your conclusion by taking maximization of value as an axiom.

It's contradictory to say that something of lesser value is what should be preferred - if so, then what does "lesser value" mean? If something has greater value, we prefer it - that's what it means for something to have greater value.

The fat man happens to be my friend. I see a guy trying to push him and shoot that guy (edit: fully aware that he tried to save the five). Would you call me a murderer (in the moral sense not the legal)? Or would you acknowledge that I acted defending my friend and just point out that the five who are now dead might have made this outcome less preferable?

I wouldn't call you a murderer because you didn't kill the five people, and only killed in the defense of another. As for whether the five being dead makes the outcome less preferable, it's important to remember that value is agent-relative, and "less preferable" presumes an agent who has those preferences. Assuming everyone involved is a stranger to me, I have no reason to value any of them more highly than the others, and since I assign a positive value to human life, I would prefer the preservation of the greater number of lives. On the other hand, you value the life of your friend more than you value the lives of five strangers (and the person trying to save them), so you act based on what you value more highly. There is no requirement that we should agree about what is more valuable - what is more valuable for me may be less valuable for you. (This is why I'm not a utilitarian, despite being a consequentialist.) Since I value the lives of five strangers more highly, I should save them. You value your friend more highly, so you should save him.

[-][anonymous]

I wouldn't call you a murderer because you didn't kill the five people, and only killed in the defense of another.

Then I guess we actually agree.

We agree on this point. But suppose that the fat man is a stranger to you, and the five people tied to the tracks are strangers as well. If you assign a positive value to strangers' lives, the five people have a greater value than the one person. So in this case you should push the fat stranger, even though you shouldn't push your friend.

[-][anonymous]

So if the fat man was not my friend but just as much a stranger as the five, would you call me a murderer? Because if not, I guess on some level you acknowledge that I operate under a different moral framework, which I tried to explicate as agency ethics.

Whether you're a murderer depends on whether you caused the situation, i.e. tied the five to the tracks. If you discover the situation (not having caused it) and then do nothing and don't save the five, you're not a murderer. Once you discover the situation, you should save whomever you value more. If the fat man is your friend, you should save him, if everyone is a stranger, then you should save the five and kill the fat man.

[-][anonymous]

What if there are no five people on the track but a cat and I just happen to value the cat more than the fat man? Should I push him? If not, what makes that scenario different, i.e. why does it matter if a human life is at stake?

You should save whatever you value more, whether it's a human, a cat, a loaf of bread (if you're a kind of being who really really likes bread and/or doesn't care about human life), or whatever.

The trolley version of the problem isn't the strongest argument of its kind against utilitarianism. The unwilling organ donor is better.

Suppose that a fat man wanders into a hospital in the middle of nowhere with 5 patients terminally ill, each of whom needs a different organ to live. Is it moral to kill the fat man in order to save the 5 patients? Assume that the fat man is a homeless orphan who has no relatives or connections to other people. Also assume that the hospital is very good at keeping secrets (this takes care of the objection that it would be terrible if everyone going to a hospital was afraid of being killed and having their organs stolen).

[-]Jiro

My answer is that if instead of killing a fat man we were levying taxes, the tax probably incrementally harms a lot of people enough to cause at least one death when summed up over the whole tax base. If you're not permitted to kill one innocent person to save five, you're also not permitted to have nontrivial taxes. It's just a case of seen versus unseen--the fat person is standing in front of you and you probably can't ever figure out which person is killed by the tax--but seen versus unseen is a bias, not an ethical standard.
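To put rough numbers on that (purely illustrative, not real figures):

    tax_base = 100_000_000        # people paying the tax
    extra_risk_per_person = 1e-7  # assumed added annual chance of death each
    expected_deaths = tax_base * extra_risk_per_person
    print(expected_deaths)        # 10.0 statistical deaths per year

Even a risk increase far too small to trace to any individual sums to several statistical deaths across the tax base.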

(My answer is also that in order to get this answer from me, you have to make enough unrealistic assumptions about the fat man scenario that I would never do that in real life.)

You're still biting the bullet? Then let's do it. Replace the fat man with farmed babies. This doesn't satisfy the secrecy condition, but since only farmed babies are used, no one has to worry about getting kidnapped and having their organs harvested - so no worries about society collapsing into anarchy over fear. The utilitarian credentials of this world are impeccable - the loss of utility from killing farmed humans is way overshadowed by the advances in biomedical technology and research.

If you're still willing to bite this bullet, I'm curious what you would say about my other objection. Even if utilitarianism is accepted as a global theory of ethics (evaluating which worlds are better than others), no utilitarian is self-consistent enough to apply this theory to himself/herself. Does anyone actually think that the morally ideal life is one spent barely subsisting while focusing almost all of one's efforts on others? It's quite easy to say yes from the comfort of your current life, to affirm a belief in belief, but to actually believe in this ideal and try to live up to it is much harder - not even Peter Singer comes close.

[-]Jiro

I'm not arguing for utilitarianism, I'm arguing against a specific objection to utilitarianism. I would object to the farmed babies scenario, but I wouldn't object to it on grounds that would apply to the fat man scenario.

(If you think I'm arguing for utilitarianism, see my recent comment on immigration. I pointed out that letting in unlimited immigrants to increase their utility basically is the scenario of barely subsisting and spending everything to help others, except you're demanding that the nation spend it instead of yourself.)

Okay, I'm not arguing that utilitarianism is self-defeating or anything like that. It's perfectly self-consistent, and I find its basic conclusions repellent.

My answer is that if instead of killing a fat man we were levying taxes, the tax probably incrementally harms a lot of people enough to cause at least one death when summed up over the whole tax base.

That seems iffy; money is fungible in a way that organs are not, and taxes are usually set up to be progressive. If you take a few more dollars from someone, they don't forgo a life-saving surgery to save money, they purchase one less Starbucks latte or the like.

It's possible someone is spending 100% of their income on life-sustaining needs, but not that plausible.

[-]Jiro

It still works at the margins. There are people below some boundary who can't afford some operation. There are also people above the boundary. With a large population, there must also be people at the boundary. Saying "they can just spend less on frivolous items" is equivalent to saying "there's nobody at the boundary", and there is.

Not to mention that even if nobody just cancels an operation because of a $50 tax bill, they might postpone a doctor's visit a week, leading to a 0.1% greater chance of death, or postpone a car repair with the same result, or decide to buy the slightly more rickety ladder because their balance between price and safety comes at a slightly different point, etc. Over a large enough population, someone will fall victim to that 0.1% chance and die.

It still works at the margins.

Only if there are people who are spending less than the tax increase on leisure goods, which seems unlikely to me.

Not to mention that even if nobody just cancels an operation because of a $50 tax bill, they might postpone a doctor's visit a week, leading to an 0.1% greater chance of death, or postpone a car repair with the same result, or decide to buy the slightly more rickety ladder because their balance between price and safety comes at a slightly different point, etc

But now the causal link is weaker: if people have less money they may choose to be riskier, sure. But now they are choosing the risk, which is a very different case than killing the man for his organs.

[-]Jiro

Only if there are people who are spending less than the tax increase on leisure goods, which seems unlikely to me.

The point is that there are people who aren't spending anything at all. If there's a gradation between people who aren't spending anything at all and people who are spending a small amount, there have to be people in the middle who are just spending a tiny amount.

By your reasoning, even a large tax couldn't possibly have this effect. Just divide the large tax increase into a series of smaller taxes and argue that each individual small increase has no effect. In fact, the smaller taxes will have an effect at some point, after you've added enough of them and the next one goes over some limit. Now, consider that there already are taxes--there will be people for whom this new tax is the one that sends them over the limit.

if people have less money they may choose to be riskier, sure

If people have less money, then choosing a riskier option may be the rational thing to do. If it wouldn't have been a rational thing to do if you hadn't taken away their money, then the increase in risk is your fault. You can't launder the effect of taking the money by saying that they chose to take the risk--you're the one who changed the balance to favor a higher risk.