Comments

No kidding! Haligonian lurker here too.

What I'm saying is that when you say the word "ought", you mean something. Even if you can't quite articulate it, you have some sort of standard for saying "you ought do this, you ought not do that" that is basically the definition of ought.

I'd object to this simplification of the meaning of the word (I'd argue that 'ought' means lots of different things in different contexts, most of which can't be reduced to categorically imperative moral claims), but I suppose it's not really relevant here.

I'm pretty sure we agree and are just playing with the words differently.

There are certain things one ought to do -- and by 'ought' I mean you will be motivated to do those things, provided you already agree that they are among the 'things one ought to do'

and

There is no non-circular answer to the question "Why should I be moral?", so the moral realists' project is sunk

seem to amount to about the same thing from where I sit. But it's a bit misleading to phrase your admission that moral realism fails (and it does, just as paperclip realism fails) as an affirmation that "there are things one ought to do".

Well, my "moral reasons are to be..." there was kind of slippery. The 'strong moral realism' Roko outlined seems to be based on a factual premise ("All...beings...will agree..."), which I'd agree most moral realists are smart enough not to hold. The much more commonly held view seems to amount instead to a sort of ... moral imperative to accept moral imperatives -- by positing a set of knowable moral facts that we might not bother to recognize or follow, but ought to. Which seems like more of the same circular reasoning that Psy-Kosh has been talking about/defending.

What do you mean by "should" in this context other than a moral sense of it? What would count as a "good reason"?

By that I mean rationally motivating reasons. But I'd be willing to concede, if you pressed, that 'rationality' is itself just another set of action-directing values. The point would still stand: if the set of values I mean when I say 'rationality' is incongruent with the set of values you mean when you say 'morality,' then it appears you have no grounds on which to persuade me to be directed by morality.

This is a very unsatisfactory conclusion for most moral realists, who believe that moral reasons are to be inherently objectively compelling to any sentient being. So I'm not sure if the position you're espousing is just a complicated way of expressing surrender, or an attempt to reframe the question, or what, but it doesn't seem to get us any more traction when it comes to answering "Why should I be moral?"

But you do (as far as I know. If you don't, then... I think you scare me).

Duly noted, but is what I happen to care about relevant to this issue of meta-ethics?

Well, that's not necessarily a moral sense of 'should', I guess -- I'm asking whether I have any sort of good reason to act morally, be it an appeal to my interests or to transcendent moral reasons or whatever.

It's generally the contention of moralists and paperclipists that there's always good reason for everyone to act morally or paperclippishly. But proving that this contention itself just boils down to yet another moral/paperclippy claim doesn't seem to help their case any. It just demonstrates what a tight circle their argument is, and what little reason someone outside of it has to care about it if they don't already.

But why should I feel obliged to act morally instead of paperclippishly? Circles seem all well and good when you're already inside of them, but being inside of them already is kind of not the point of discussing meta-ethics.

I'm newish here too, JenniferRM!

Sure, I have an impact on the behaviour of people who encounter me, and we can even grant that they are more likely to imitate/approve of how I act than disapprove and act otherwise -- but I likely don't have any more impact on the average person's behaviour than anyone else they interact with does. So, on balance, my impact on the behaviour of the rest of the world is still something like 1/6.5 billion.

And, regardless, in my experience people tend to invoke this "What if everyone ___" argument primarily when there are no clear ill effects to point out, or when the effects are private. If I were to throw my litter in someone's face, they would go "Hey, asshole, don't throw your litter in my face, that's rude." Whereas, if I tossed it on the ground, they might go "Hey, you shouldn't litter," and if I pressed them for reasons why, they might go "If everyone littered here this place would be a dump." This also gets trotted out for voting, or for any other collective action problem where it's simply not in an individual's interest to 'do their part' (even if you add in the 1/6.5-billion quantity of positive impact they will have on the human race through their effect on others).

"You may think it was harmless, but what if everyone cheated on their school exams like you did?" -- "Yeah, but, they don't; it was just me that did it. And maybe I have made it look slightly more appealing to whoever I've chosen to tell about it who wasn't repelled by my doing so. But that still doesn't nearly get us to 'everyone'."

It suits some intuitions very nicely.

I suppose that's about as good as we're going to get with moral theories!

Well, I hope I haven't caused you too much corner-sobbing; thanks for explaining.

Actually, Kant only defended the duty not to lie out of philanthropic concerns.

Huh! Okay, good to know. ... So not-lying-out-of-philanthropic-concerns isn't a mere context-based variation?

A counterfactual telling you that your action is un-universalizeable can be informative to a deontic evaluation of an act even if you perform the act in complete secrecy. It can be informative even if etc.

Okay, I get that. But what does it inform you of? Why should one care in particular about the universalizability of one's actions?

I don't want to just come down to asking "Why should I be moral?", because I already think there is no good answer to that question. But why this particular picture of morality?
