Comment author: fburnaby 19 April 2010 03:35:46PM *  7 points [-]

Hi.

I'm a 24 yo male grad student (in Halifax, Nova Scotia) studying ecological math modelling.

This site is a gold-mine for clear thinking on the relationship between maps (models) and territories (systems). I'm interested in understanding and dealing with the trade-off between fidelity of the map to the territory and its 'legibility'. I've been lurking for about a year after coming across an article by Eliezer via Hacker News and got hooked.

Comment author: Breakfast 19 April 2010 09:03:21PM 2 points [-]

No kidding! Haligonian lurker here too.

Comment author: Psy-Kosh 01 February 2010 03:18:49PM 2 points [-]

What I'm saying is that when you say the word "ought", you mean something. Even if you can't quite articulate it, you have some sort of standard for saying "you ought to do this, you ought not do that" that is basically the definition of ought.

I'm saying "this oughtness, whatever it is, is the same thing that you mean when you talk about 'morality'." So "ought I be moral?" directly translates to "is it moral to be moral?"

I'm not saying "only morality has the authority to answer this question" but rather "uh... 'is X moral?' is kind of what you actually mean by ought/should/etc, isn't it? ie, if I do a bit of a trace in your brain and follow the word back to its associated concepts, isn't it going to be pointing at/labeling the same algorithms that 'morality' labels in your brain?"

So basically it amounts to "yes, there're things that one ought to do... and there can exist beings that know this but simply don't care about whether or not they 'ought' to do something."

It's not that another being refuses to recognize this so much as they'd be saying "So what? We don't care about this 'oughtness' business." It's not a disagreement, it's simply failing to care about it.

Comment author: Breakfast 01 February 2010 04:29:11PM *  0 points [-]

What I'm saying is that when you say the word "ought", you mean something. Even if you can't quite articulate it, you have some sort of standard for saying "you ought to do this, you ought not do that" that is basically the definition of ought.

I'd object to this simplification of the meaning of the word (I'd argue that 'ought' means lots of different things in different contexts, most of which aren't reducible to categorical moral imperatives), but I suppose it's not really relevant here.

I'm pretty sure we agree and are just playing with the words differently.

There are certain things one ought to do -- and by 'ought' I mean you will be motivated to do those things, provided you already agree that they are among the 'things one ought to do'

and

There is no non-circular answer to the question "Why should I be moral?", so the moral realists' project is sunk

seem to amount to about the same thing from where I sit. But it's a bit misleading to phrase your admission that moral realism fails (and it does, just as paperclip realism fails) as an affirmation that "there are things one ought to do".

Comment author: Douglas_Knight 01 February 2010 07:44:32AM 1 point [-]

This is a very unsatisfactory conclusion for most moral realists, who believe that moral reasons are to be inherently objectively compelling to any sentient being.

According to the original post, strong moral realism (the above) is not held by most moral realists.

Comment author: Breakfast 01 February 2010 02:41:15PM *  1 point [-]

Well, my "moral reasons are to be..." there was kind of slippery. The 'strong moral realism' Roko outlined seems to be based on a factual premise ("All...beings...will agree..."), which I'd agree most moral realists are smart enough not to hold. The much more commonly held view seems to amount instead to a sort of ... moral imperative to accept moral imperatives -- by positing a set of knowable moral facts that we might not bother to recognize or follow, but ought to. Which seems like more of the same circular reasoning that Psy-Kosh has been talking about/defending.

Comment author: Psy-Kosh 01 February 2010 06:46:51AM *  3 points [-]

What do you mean by "should" in this context other than a moral sense of it? What would count as a "good reason"?

As far as your statement about both moralists and paperclippers thinking there are "good reasons"... the catch is that the phrase "good reasons" is being used to refer to two distinct concepts. When a human/moralist uses it, they mean, well... good, as opposed to evil.

A paperclipper, however, is not concerned at all about that standard. A paperclipper cares about what, well, maximizes paperclips.

It's not that it should do so, but simply that it doesn't care what it should do. Being evil doesn't bother it any more than failing to maximize paperclips bothers you.

Being evil is clearly worse (where by "worse" I mean, well, immoral, bad, evil, etc...) than being good. But the paperclipper doesn't care. But you do (as far as I know. If you don't, then... I think you scare me). What sort of standard other than morality would you want to appeal to for this sort of issue in the first place?

Comment author: Breakfast 01 February 2010 07:05:18AM *  0 points [-]

What do you mean by "should" in this context other than a moral sense of it? What would count as a "good reason"?

By that I mean rationally motivating reasons. But I'd be willing to concede, if you pressed, that 'rationality' is itself just another set of action-directing values. The point would still stand: if the set of values I mean when I say 'rationality' is incongruent with the set of values you mean when you say 'morality,' then it appears you have no grounds on which to persuade me to be directed by morality.

This is a very unsatisfactory conclusion for most moral realists, who believe that moral reasons are to be inherently objectively compelling to any sentient being. So I'm not sure if the position you're espousing is just a complicated way of expressing surrender, or an attempt to reframe the question, or what, but it doesn't seem to get us any more traction when it comes to answering "Why should I be moral?"

But you do (as far as I know. If you don't, then... I think you scare me).

Duly noted, but is what I happen to care about relevant to this issue of meta-ethics?

Comment author: Psy-Kosh 01 February 2010 06:05:32AM 4 points [-]

"should"

What do you mean by "should"? Do you actually mean anything by it other than an appeal to morality in the first place?

Comment author: Breakfast 01 February 2010 06:37:15AM *  0 points [-]

Well, that's not necessarily a moral sense of 'should', I guess -- I'm asking whether I have any sort of good reason to act morally, be it an appeal to my interests or to transcendent moral reasons or whatever.

It's generally the contention of moralists and paperclipists that there's always good reason for everyone to act morally or paperclippishly. But proving that this contention itself just boils down to yet another moral/paperclippy claim doesn't seem to help their case any. It just demonstrates what a tight circle their argument is, and what little reason someone outside of it has to care about it if they don't already.

Comment author: Psy-Kosh 01 February 2010 03:54:53AM 6 points [-]

It's more, well, I hate to say this, but it's almost a matter of definitions.

ie, what do you MEAN by the term "right"?

Just keep poking your brain about that, and keep poking your brain about what you mean by "should" and what you actually mean by terms like "morality" and I think you'll find that all those terms are pointing at the same thing.

It's not so much "there's this criterion of 'rightness' that only morality has the ability to measure" but rather that an appeal to morality is what we mean when we say stuff like "'should' we do this? is it 'right'?" etc...

The situation is more, well, like this:

Humans: "Morality says that, among other things, it's better and moral to be, well, moral. It is also moral to save lives, help people, bring joy, and a whole lot of other things."

Paperclippers: "Having scanned your brains to see what you mean by these terms, we agree with your statement."

Paperclippers: "Converting all the matter in your system into paperclips is paperclipish. Further, it is better and paperclipish to be paperclipish."

Humans: "Having scanned your minds to determine what you actually mean by those terms, we agree with your statement."

Humans: "However, we don't care about paperclipishness. We care about morality. Turning all the matter of our solar system (including the matter we are composed of) into paperclips is bad, so we will try to stop you."

Paperclippers: "We do not care about morality. We care about paperclipishness. Resisting the conversion to paperclips is unpaperclipish. Therefore we will try to crush your resistance."

This is very different from what we normally think of as circular arguments, which would be of the form "A, therefore B, therefore A, QED", with the other side saying "no! not A".

Here, all sides agree about stuff. It's just that they value different things. But the fact of humans valuing the stuff isn't the justification for valuing that stuff. The justification is that it's moral. But the fact is that we happen to be moved by arguments like "it's moral", rather than the wicked paperclippers that only care about whether it's paperclipish or not.

Comment author: Breakfast 01 February 2010 05:56:54AM *  0 points [-]

But why should I feel obliged to act morally instead of paperclippishly? Circles seem all well and good when you're already inside of them, but being inside of them already is kind of not the point of discussing meta-ethics.

Comment author: JenniferRM 01 February 2010 12:16:28AM *  15 points [-]

There is a "magical causal connection" between one's individual actions and the actions of the rest of the world.

Other people will observe you acting and make reasonable inferences on the basis of their observation. Depending on your scientific leanings, it's plausible to suppose that these inferences have been so necessary to human survival that we may have evolutionary optimizations that make moral reasoning more effective than general reasoning.

For example, if they see you "get away with" an act, they will infer that if they repeat your action they will also avoid reprisal (especially if you and they are in similar social reference classes). If they see you act proudly and in the open, they will infer that you've already done the relevant social calculations to determine that no one will object and apply sanctions. If they see you defend the act with words, they will assume that they can cite you as an authority and you'll support them in a factional debate in order not to look like a hypocrite... and so on ad nauseam.

There are various reasons people might deny that they function as role models in society. Perhaps they are hermits? Or perhaps they are not paying attention to how social processes actually happen? Or it may also be the case that they are momentarily confabulating excuses because they've been caught with blood on their hands?

Not that I'm a big deontologist, but I think deontologists say things that are interesting, worthwhile, and seem unlikely to be noticed from other theoretical perspectives. Several apologists for deontology who I've known from a distance (mostly in speech and debate contexts) were super big brains.

Their pitch, to get people into the relevant deliberative framework, frequently involved an epistemic argument at the beginning. Basically they pointed out that it was silly to make moral judgments with instantaneous behavioral consequences based on things you can't see or measure or know in the present. There is more to it than that (like there are nice ways to update and calculate deontic moral theories based on morality estimates, subsequent acts, and independent "retrospective moral feelings" about how the things turned out) but we're just in the comment section, and I'd rather not have my fourth post in this community spend a lot of time articulating the upsides of a moral theory that I don't "fully endorse" :-)

Comment author: Breakfast 01 February 2010 02:38:45AM *  1 point [-]

I'm newish here too, JenniferRM!

Sure, I have an impact on the behaviour of people who encounter me, and we can even grant that they are more likely to imitate/approve of how I act than disapprove and act otherwise -- but I likely don't have any more impact on the average person's behaviour than anyone else they interact with does. So, on balance, my impact on the behaviour of the rest of the world is still something like 1/6.5 billion.

And, regardless, in my experience people tend to invoke this "What if everyone _" argument primarily when there are no clear ill effects to point out, or when the ill effects are private. If I were to throw my litter in someone's face, they would go "Hey, asshole, don't throw your litter in my face, that's rude." Whereas, if I tossed it on the ground, they might go "Hey, you shouldn't litter," and if I pressed them for reasons why, they might go "If everyone littered here this place would be a dump." This also gets trotted out in voting, or in any other similar collective action problem where it's simply not in an individual's interests to 'do their part' (even if you add in the 1/6.5-billion quantity of positive impact they will have on the human race by their effect on others).

"You may think it was harmless, but what if everyone cheated on their school exams like you did?" -- "Yeah, but, they don't; it was just me that did it. And maybe I have made it look slightly more appealing to whoever I've chosen to tell about it who wasn't repelled by my doing so. But that still doesn't nearly get us to 'everyone'."

Comment author: Alicorn 31 January 2010 05:41:47PM *  5 points [-]

I don't have an arsenal with which to defend the universalizeability thing; I don't use it, as I said. Kant seems to me to think that performing only universalizeable actions is a constraint on rationality; don't ask me how he got to that - if I had to use a CI formulation I'd go with the "treat people as ends in themselves" one.

But why this particular picture of morality?

It suits some intuitions very nicely. If it doesn't suit yours, fine; I just want people to stop trying to cram mine into boxes that are the wrong shape.

Comment author: Breakfast 31 January 2010 06:00:36PM 3 points [-]

It suits some intuitions very nicely.

I suppose that's about as good as we're going to get with moral theories!

Well, I hope I haven't caused you too much corner-sobbing; thanks for explaining.

Comment author: bogus 31 January 2010 05:18:48PM *  1 point [-]

For example, he infamously suggests not lying to a murderer who asks where your friend is

Actually, Kant only defended the duty not to lie out of philanthropic concerns. But if the person inquired of was actually a friend, then one might reasonably argue that you have a positive duty not to reveal his location to the murderer, since to do otherwise would be inconsistent with the implied contract between you and your friend.

To be fair, you might also have a duty to make sure that your friend is not murdered, and this might create an ethical dilemma. But ethical dilemmas are not unique to deontology.

ETA: It has also been argued that Kant's reasoning in this case was flawed since the murderer engages in a violation of a perfect duty, so the maxim of "not lying to a known murderer" is not really universalizable. But the above reasoning would go through if you replaced the murderer with someone else whom you wished to keep away from your friend out of philanthropic concerns.

Comment author: Breakfast 31 January 2010 05:36:17PM *  1 point [-]

Actually, Kant only defended the duty not to lie out of philanthropic concerns.

Huh! Okay, good to know. ... So not-lying-out-of-philanthropic-concerns isn't a mere context-based variation?

Comment author: Alicorn 31 January 2010 05:23:18PM 4 points [-]

I wasn't trying to make the case for deontology, no -- just trying to clear up the worst of the misapprehensions about it: namely, that it's just consequentialism in Kantian clothing. It's a whole other thing that you can't properly understand without getting rid of some consequentialist baggage.

There does not have to be a causal linkage between one's individual actions and those of the rest of the world. (Note: my ethics don't include a counterfactual component, so I'm representing a generalized picture of others' views here.) It's simply not about what your actions will cause! A counterfactual telling you that your action is un-universalizeable can be informative to a deontic evaluation of an act even if you perform the act in complete secrecy. It can be informative even if the world is about to end and your act will have no consequences at all beyond being the act it is. It can be informative even if you'd never have dreamed of performing the act were it a common act type (in fact, especially then!). The counterfactual is a place to stop. It is, if justificatory at all, inherently justificatory.

Comment author: Breakfast 31 January 2010 05:32:05PM *  0 points [-]

A counterfactual telling you that your action is un-universalizeable can be informative to a deontic evaluation of an act even if you perform the act in complete secrecy. It can be informative even if etc.

Okay, I get that. But what does it inform you of? Why should one care in particular about the universalizability of one's actions?

I don't want to just come down to asking "Why should I be moral?", because I already think there is no good answer to that question. But why this particular picture of morality?
