All of Breakfast's Comments + Replies

No kidding! Haligonian lurker here too.

fburnaby
Very cool! I figured Canadians wouldn't be very well represented here, let alone Haligonians!

What I'm saying is that when you say the word "ought", you mean something. Even if you can't quite articulate it, you have some sort of standard for saying "you ought to do this, you ought not do that" that is basically the definition of ought.

I'd object to this simplification of the meaning of the word (I'd argue that 'ought' means lots of different things in different contexts, most of which aren't only reducible to categorically imperative moral claims), but I suppose it's not really relevant here.

I'm pretty sure we agree and are jus...

Psy-Kosh
What's failing? "What is 2+3?" has an objectively true answer. The fact that some other creature might instead want to know the answer to the question "what is 6*7?" (which also has an objectively true answer) is irrelevant. How does that make "what is 2+3?" less real? Similarly, how does the fact that some other beings might care about something other than morality make questions of the form "what is moral? what should I do?" non-objective? It's nothing to do with agreement.

When you ask "ought I do this?", well... to the extent that you're not speaking empty words, you're asking SOME specific question. There is some criterion by which "oughtness" can be judged... that is, the defining criterion. It may be hard for you to articulate, it may only be implicitly encoded in your brain, but to the extent that the word is a label for some concept, it means something. I do not think you'd argue too much against this.

I make an additional claim: that that which we commonly refer to in these contexts by words like "should", "ought" and so on is the same thing we're referring to when we say stuff like "morality". To me, "what should I do?" and "what is the moral thing to do?" are basically the same question. "Ought I be moral?" thus would translate to "ought I be the sort of person that does what I ought to do?" I think the answer to that is yes.

There may be beings that agree with that completely but take the view of "but we simply don't care about whether or not we ought to do something. It is not that we disagree with your claims about whether one ought to be moral. We agree we ought to be moral. We simply place no value in doing what one 'ought' to do. Instead we value certain other things." But screw them... I mean, they don't do what they ought to do! (EDIT: minor changes to last paragraph.)

Well, my "moral reasons are to be..." there was kind of slippery. The 'strong moral realism' Roko outlined seems to be based on a factual premise ("All...beings...will agree..."), which I'd agree most moral realists are smart enough not to hold. The much more commonly held view seems to amount instead to a sort of ... moral imperative to accept moral imperatives -- by positing a set of knowable moral facts that we might not bother to recognize or follow, but ought to. Which seems like more of the same circular reasoning that Psy-Kosh has been talking about/defending.

Psy-Kosh
What I'm saying is that when you say the word "ought", you mean something. Even if you can't quite articulate it, you have some sort of standard for saying "you ought to do this, you ought not do that" that is basically the definition of ought.

I'm saying "this oughtness, whatever it is, is the same thing that you mean when you talk about 'morality'." So "ought I be moral?" directly translates to "is it moral to be moral?" I'm not saying "only morality has the authority to answer this question" but rather "uh... 'is X moral?' is kind of what you actually mean by ought/should/etc, isn't it? I.e., if I do a bit of a trace in your brain and follow the word back to its associated concepts, isn't it going to be pointing at/labeling the same algorithms that 'morality' labels in your brain?"

So basically it amounts to "yes, there are things that one ought to do... and there can exist beings that know this but simply don't care about whether or not they 'ought' to do something." It's not that another being refuses to recognize this so much as that they'd be saying "So what? We don't care about this 'oughtness' business." It's not a disagreement, it's simply failing to care about it.

What do you mean by "should" in this context other than a moral sense of it? What would count as a "good reason"?

By that I mean rationally motivating reasons. But I'd be willing to concede, if you pressed, that 'rationality' is itself just another set of action-directing values. The point would still stand: if the set of values I mean when I say 'rationality' is incongruent with the set of values you mean when you say 'morality,' then it appears you have no grounds on which to persuade me to be directed by morality.

This is a very uns...

Psy-Kosh
Rationality is basically "how to make an accurate map of the world... and how to WIN" (where "win" basically means getting what you "want", and "want" includes all your preferences, stuff like morality, etc.). Before rationality can tell you what to do, you have to tell it what it is you're trying to do. If your goal is to save lives, rationality can help you find ways to do that. If your goal is to turn stuff into paperclips, rationality can help you find ways to do that too.

I'm not sure I quite understand what you mean by "rationally motivating" reasons. As far as something objectively compelling to any sentient (let me generalize that to any intelligent being)... why should there be any such thing? "Doing this will help ensure your survival." "But... what if I don't care about this?" "Doing this will bring joy." "So?" Etc., etc... There are No Universally Compelling Arguments.
Douglas_Knight
According to the original post, strong moral realism (the above) is not held by most moral realists.

Well, that's not necessarily a moral sense of 'should', I guess -- I'm asking whether I have any sort of good reason to act morally, be it an appeal to my interests or to transcendent moral reasons or whatever.

It's generally the contention of moralists and paperclipists that there's always good reason for everyone to act morally or paperclippishly. But proving that this contention itself just boils down to yet another moral/paperclippy claim doesn't seem to help their case any. It just demonstrates what a tight circle their argument is, and what little reason someone outside of it has to care about it if they don't already.

Psy-Kosh
What do you mean by "should" in this context other than a moral sense of it? What would count as a "good reason"?

As far as your statement about both moralists and paperclippers thinking there are "good reasons"... the catch is that the phrase "good reasons" is being used to refer to two distinct concepts. When a human/moralist uses it, they mean, well... good, as opposed to evil. A paperclipper, however, is not concerned at all with that standard. A paperclipper cares about what, well, maximizes paperclips. It's not that it should do so, but simply that it doesn't care what it should do. Being evil doesn't bother it any more than failing to maximize paperclips bothers you.

Being evil is clearly worse (where by "worse" I mean, well, immoral, bad, evil, etc...) than being good. But the paperclipper doesn't care. But you do (as far as I know. If you don't, then... I think you scare me). What sort of standard other than morality would you want to appeal to for this sort of issue in the first place?

But why should I feel obliged to act morally instead of paperclippishly? Circles seem all well and good when you're already inside of them, but being inside of them already is kind of not the point of discussing meta-ethics.

Psy-Kosh
"should" What do you mean by "should"? Do you actually mean anything by it other than an appeal to morality in the first place?

I'm newish here too, JenniferRM!

Sure, I have an impact on the behaviour of people who encounter me, and we can even grant that they are more likely to imitate/approve of how I act than disapprove and act otherwise -- but I likely don't have any more impact on the average person's behaviour than anyone else they interact with does. So, on balance, my impact on the behaviour of the rest of the world is still something like 1/6.5 billion.

And, regardless, people tend to invoke this "What if everyone ___" argument primarily when there are no clear ill...

Err... I suspect our priors on this subject are very different.

From my perspective you seem to be quibbling over an unintended technical meaning of the word "everyone" while not tracking consequences clearly. I don't understand how you think littering is a coherent example of how people's actions do not affect the rest of the world via social signaling. In my mind, littering is the third most common example of a "signal crime" after window breaking and graffiti.

The only way your comments are intelligible to me is that you are enmeshed i...

[anonymous]
Err... I suspect our priors on this subject are very different.

From my perspective you seem to be quibbling over an unintended technical meaning of the word "everyone" while not tracking consequences clearly. I don't understand how you think littering is a coherent example of how people's actions do not affect the rest of the world via social signaling. In my mind, littering is the third most common example of a "signal crime" after window breaking and graffiti.

The only way your comments are intelligible to me is that you are enmeshed in a social context where people regularly free ride on community goods or even outright ruin them... and they may even be proud to do so as a sign of their "rationality"?!? These circumstances might provide background evidence that supports what you seem to be saying - hence the inference.

If my inference about your circumstances is correct, you might try to influence your RL community, as an experiment, and if that fails an alternative would be to leave and find a better one. However, if you are in such a context, and no one around you is particularly influenced by your opinions or actions, and you can't get out of the context, then I agree that your small contribution to the ruin of the community may be negligible (because the people near you are already ruining the broader community, so their "background noise" would wash out your potentially positive signal). In that case, rule breaking and crime may be the only survival tactic available to you, and you have my sympathy.

In contrast, when I picture littering, I imagine someone in a relatively pristine place who throws the first piece of garbage. Then they are scolded by someone nearby for harming the community in a way that will have negative long-term consequences. If the litterbug walks away without picking up their own litter, the scolder takes it upon themselves to pick up the litter and dispose of it properly on behalf of the neighborhood. In this scenario, the cos...

It suits some intuitions very nicely.

I suppose that's about as good as we're going to get with moral theories!

Well, I hope I haven't caused you too much corner-sobbing; thanks for explaining.

Actually, Kant only defended the duty not to lie out of philanthropic concerns.

Huh! Okay, good to know. ... So not-lying-out-of-philanthropic-concerns isn't a mere context-based variation?

A counterfactual telling you that your action is un-universalizeable can be informative to a deontic evaluation of an act even if you perform the act in complete secrecy. It can be informative even if etc.

Okay, I get that. But what does it inform you of? Why should one care in particular about the universalizability of one's actions?

I don't want to just come down to asking "Why should I be moral?", because I already think there is no good answer to that question. But why this particular picture of morality?

Alicorn
I don't have an arsenal with which to defend the universalizeability thing; I don't use it, as I said. Kant seems to me to think that performing only universalizeable actions is a constraint on rationality; don't ask me how he got to that - if I had to use a CI formulation I'd go with the "treat people as ends in themselves" one. It suits some intuitions very nicely. If it doesn't suit yours, fine; I just want people to stop trying to cram mine into boxes that are the wrong shape.

Huh? To be fair, I don't think you were setting out to make the case for deontology here. All I am saying about its "use" is that I don't see any appeal. I think you gave a pretty good description of what deontologists are thinking; the North Pole - reindeer - haunting paragraph was handily illustrative.

Anyway, I think Kant may be to blame for employing arguments that consider "what would happen if others performed similar acts more frequently than they actually do". People say similar things all the time -- "What if everyone did that?" -- as though there were a sort of magical causal linkage between one's individual actions and the actions of the rest of the world.

There is a "magical causal connection" between one's individual actions and the actions of the rest of the world.

Other people will observe you acting and make reasonable inferences on the basis of their observation. Depending on your scientific leanings, it's plausible to suppose that these inferences have been so necessary to human survival that we may have evolutionary optimizations that make moral reasoning more effective than general reasoning.

For example, if they see you "get away with" an act they will infer that if they repeat...

Alicorn
I wasn't trying to make the case for deontology, no - just trying to clear up the worst of the misapprehensions about it. Which is that it's not just consequentialism in Kantian clothing, it's a whole other thing that you can't properly understand without getting rid of some consequentialist baggage. There does not have to be a causal linkage between one's individual actions and those of the rest of the world. (Note: my ethics don't include a counterfactual component, so I'm representing a generalized picture of others' views here.) It's simply not about what your actions will cause! A counterfactual telling you that your action is un-universalizeable can be informative to a deontic evaluation of an act even if you perform the act in complete secrecy. It can be informative even if the world is about to end and your act will have no consequences at all beyond being the act it is. It can be informative even if you'd never have dreamed of performing the act were it a common act type (in fact, especially then!). The counterfactual is a place to stop. It is, if justificatory at all, inherently justificatory.

I'm (obviously) no Kant scholar, but I wonder if there is any possible way to flesh out a consistent and satisfactory set of such context-invariant ethical injunctions.

For example, he infamously suggests not lying to a murderer who asks where your friend is, even if you reasonably expect him to go murder your friend, because lying is wrong. Okay -- even if we don't follow our consequentialist intuitions and treat that as a reductio ad absurdum for his whole system -- that's your 'not lying' principle satisfied. But what about your 'not betraying your friends' principle? How many principles have we got in the first place, and how can we weigh them against one another?

bogus
Actually, Kant only defended the duty not to lie out of philanthropic concerns. But if the person inquired of was actually a friend, then one might reasonably argue that you have a positive duty not to reveal his location to the murderer, since to do otherwise would be inconsistent with the implied contract between you and your friend. To be fair, you might also have a duty to make sure that your friend is not murdered, and this might create an ethical dilemma. But ethical dilemmas are not unique to deontology. ETA: It has also been argued that Kant's reasoning in this case was flawed since the murderer engages in a violation of a perfect duty, so the maxim of "not lying to a known murderer" is not really universalizable. But the above reasoning would go through if you replaced the murderer with someone else whom you wished to keep away from your friend out of philanthropic concerns.

Sorry. But then I said:

Maybe this is to beg the question of consequences mattering in the first place.

And added,

But I suppose I have no idea what use deontology is if it doesn't boil down to consequentialism at some level.

?

Alicorn
Yeah, if you have no idea what "use" deontology is unless it's secretly just tarted-up consequentialism, I have failed.

Certainly, many theists immediately lump atheism, utilitarianism and nihilism together. There are heaps of popular depictions framing utilitarian reasoning as being too 'cold and calculating' and not having 'real heart'. Which follows from atheists 'not having any real values' and from accepting the nihilistic, death-obsessed Darwinian worldview, etc.

What has never stopped bewildering me is the question of why anyone should consider such a possible world relevant to their individual decision-making. I know Kant has some... tangled, Kantian argument regarding this, but does anyone who isn't a die-hard Kantian have any sensible reason on hand for considering the counterfactual "What if everyone did the same"?

Everyone doing X is not even a remotely likely consequence of me doing X. Maybe this is to beg the question of consequences mattering in the first place. But I suppose I have no idea what u...

Kaj_Sotala
I thought of one possible reason that would make deontology "justifiable" in consequentialist terms: those classic "my decision has negligible effect by itself, but if everyone made the same decision, it would be good/bad" situations, like "should I bother voting?" or "is it okay if I shoplift?". If everyone were a consequentialist, each might individually decide that the effect of their action is negligible, and thus end up not voting or deciding that shoplifting was okay, with disastrous effects for society. In contrast, if more people were deontologists, they'd do the right thing even if their individual decision probably didn't change anything.
bogus
Kant's point is not that "everyone doing X" matters, it's that ethical injunctions should be indexically invariant, i.e. "universal". If an ethical injunction is affected by where in the world you are, then it's arguably no ethical injunction at all. Wei_Dai and EY have done some good work in reformulating decision theory to account for these indexical considerations, and the resulting theories (UDT and TDT) have some intuitively appealing features, such as cooperating in the one-shot PD under some circumstances. Start with this post.
Alicorn

Everyone doing X is not even a remotely likely consequence of me doing X.

AAAAAAAAAAAAH

*ahem* Excuse me.

I meant: Wow, have I ever failed at my objective here! Does anyone want me to keep trying, or should I give up and just sob quietly in a corner for a while?

That sounds pretty confusing. You might as well just not have officially sanctioned factions in the first place, right? People who agree on a given issue will naturally band together on it, but they won't be so afflicted with the bias, or the pressure that comes of being on a well-defined Side, to have their whole range of opinions cohere with those held by the group. There are already de facto 'factions' on any issue we might discuss, and everyone already feels continually obliged to examine the rationality of their positions, so it kind of seems like we're already there!

Jack
I took bogus's point to be that we can avoid some of the harms of bad faith arguments if we make motivations explicit with clearly defined factions. That would be a reason to prefer official factions to de facto factions. But my proposal might be too convoluted a solution for a problem that I haven't really noticed here. And I'm not sure how much officially sanctioned factions actually would prevent bad faith arguments.

Hrm. Well, if politics itself is any example to judge by, that may make for a resilient institution -- but the mess of allegiances and biases created by splitting people into well-defined factions probably entails that the institution would be much worse off in terms of truth-finding, because it would devote too much of its energies to internecine squabbling.

I suppose you need to strike a balance between unproductive antagonism, and ending up as a group of like-minded folks just patting each other on the back. Thankfully, LW seems to have a strong dose of "Let's get to the bottom of this"-type norms, and the appropriately rigorous/persnickety personalities, to stop it from getting too back-patty.

Jack
Still, I think we'd need some measure to prevent becoming permanently entrenched in factions. Maybe have an artificial time limit for clearly defined factions. Every two weeks we tell everyone to give up factional loyalties and consider the evidence given. Then after a couple of days re-form the factions along new boundaries.

Well, thank you again!

Done, thanks. (That was my first ever comment here.)

wedrifid
Welcome, Breakfast.

But parents — probably the vast majority of them — routinely make tremendous sacrifices in every area of their lives for their children, which seems to come pretty darn close.

taw
Evidence of these "tremendous sacrifices" being... ?
Breakfast

"But people are sometimes authoritarian and cruel! Just for fun! And the only people who you can be consistently cruel to without them slugging you, shunning you, suing you, or calling the police on you are your children. This is a reason for more than the usual amount of skepticism of arguments that say that strict parenting is necessary."

That's a very good point. But there may be a parallel counterpoint: "Sometimes parents are indulgent and too lazy or exhausted or undisciplined to enforce an appropriate degree of discipline in their ow...

Jonathan_Graehl
I was going to make exactly that point. There are biases in both directions; the author's argument should be that the bias toward harshness dominates. Also, it's likely that much seemingly frivolous cruelty actually increases the status of its perpetrator. I don't think there's much gain when the victim is so far from you in status as your child, but it's quite believable to me that at least a few million adults are broken enough that it's a possibility.
David_J_Balan
I guess people who can't control their kids might make a virtue of necessity and say that they did it on purpose b/c it's good for the kid. Nice twist. But the amount of harm that comes from this strikes me as way smaller than what comes from "it's for their own good." Abusing context slightly, I will quote The Source:

Bart: No offense, Homer, but your half-assed underparenting was a lot more fun than your half-assed overparenting.
Homer: But I'm using my whole ass.
DanArmak
This is a good point. One problem with legal oppression of young people is that the age of majority varies from 16 to 21, but most people stop adoring their parents (and, technically, stop being children) in adolescence, at ages 11-13.
wedrifid
To enhance the reading experience, quote slabs of text by placing a '>' at the start of the line.