I quit trying to read the post halfway through. Don't spend at least half of a long post explaining how you came to think about the topic in question!
If it's true that moral decisions cannot be made on a rational basis, then it should be impossible for me to find an example of a moral decision which was made rationally, right?
All decisions are in a sense "moral decisions". You should distinguish the process of decision-making from the question of figuring out your values. You can't define values "on a rational basis", but you can use a rational process to figure out what your values actually are, and to construct a plan for achieving given values (based, in particular, on an epistemically rational understanding of the world).
I'm not sure I follow. Are you using "values" in the sense of "terminal values"? Or "instrumental values"? Or perhaps something else?
I gave you an upvote because the topics you consider are important ones, things I have been thinking about myself recently. But I have to agree with the other commenters that you might have made the posting a bit shorter and the reasoning a bit tighter. But that is enough about you and your ideas. Let's talk about me and my ideas. :)
The remainder of this comment deals with my take on a couple of issues you raise.
The first issue is whether moral-value opinions, judgments, and reasonings can be evaluated as "rational" vs "irrational". I think they can be. Compare to epistemic opinions, judgments, and reasonings. We define a collection of probability assignments to be rational if they are consistent: that is, if they are Bayesian updates, based on evidence, from a fairly arbitrary set of priors. We may suspect, with Jaynes, that there is some rational objective methodology for choosing priors, but since we don't yet know of any perfect such methodology, we don't insist upon it.
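To make the analogy concrete, here is a toy sketch (my own illustration, not from the comment) of what "consistency" of probability assignments amounts to: the numbers obey the probability axioms, and new evidence is incorporated by Bayes' rule. The rain/clouds example and its numbers are assumptions chosen for illustration.

```python
def bayes_update(prior, likelihood):
    """Update a discrete prior {hypothesis: p} using likelihoods
    {hypothesis: P(evidence | hypothesis)} via Bayes' rule."""
    unnormalized = {h: prior[h] * likelihood[h] for h in prior}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}

# A fairly arbitrary prior...
prior = {"rain": 0.3, "no_rain": 0.7}
# ...but the update on evidence (say, dark clouds) is constrained:
likelihood = {"rain": 0.9, "no_rain": 0.2}
posterior = bayes_update(prior, likelihood)

# Consistency checks: the posterior stays normalized, and evidence
# favoring "rain" must raise its probability, not lower it.
assert abs(sum(posterior.values()) - 1.0) < 1e-9
assert posterior["rain"] > prior["rain"]
```

The point of the analogy: we can audit the *update process* for consistency even while treating the prior itself as unjustified, which is exactly the move proposed for "moral priors" below.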
Similarly, in the field of values (even moral values) we can define moral rationality as a kind of consistency of moral judgments, even if we do not yet know of a valid and objective methodology for choosing "moral priors" or "fundamental moral preferences". That is, we may not yet be able to recognize moral rationality, but, like Potter Stewart regarding pornography, we certainly know moral irrationality when we see it.
Your second major theme seems to be whether we can criticize conversations as rational or irrational. My opinion is that if we want to extend "rational" from agents and their methods to conversations, then maybe we need to view a conversation as a method of some agent. That is, we need to see the conversation as part of the decision-making methodology of some collective entity. And then we need to ask whether the conversation does, in fact, lead to the consequence that the collective entity in question makes good decisions.
Although this approach forces us into a long and difficult research program regarding the properties of collectives and their decision making (Hmmm. Didn't they give Ken Arrow a Nobel prize for doing something related to this?), I think that it is the right direction to go on this question, rather than just putting together lists of practices that might improve public policy debate in this country. As much as I agree that public policy debate sorely needs improvement.
I don't think I have anything to add to your non-length-related points. Maybe that's just because you seem to be agreeing with me. You've spun my points out a little further, though, and I find myself in agreement with where you ended up, so that's a good sign that my argument is at least coherent enough to be understandable and possibly in accordance with reality. Yay. Now I have to go read the rest of the comments and find out why at least seven people thought it sucked...
Yes, it could have been shorter, and that would probably have been clearer.
It also could have been a lot longer; I was somewhat torn by the apparent inconsistency of demanding documentation of thought-processes while not documenting my own -- but I did manage to convince myself that if anyone actually questioned the conclusions, I could go into more detail. I cut out large chunks of it after deciding that this was a better strategy than trying to Explain All The Things.
It could probably have been shorter still, though -- I ended up arriving at some fairly simple conclusions after a very roundabout process, and perhaps I didn't need to leave as much of the scaffolding and detritus in place as I did. I was already on the 4th major revision, though, having used up several days of available-focus-time on it, and after a couple of peer-reviews I figured it was time to publish, imperfections or no... especially when a major piece of my argument is about the process of error-correction through rational dialogue.
Will comment on your content-related points separately.
I propose that the key elements of a rational conversation are (where "you" refers collectively to all participants):
- you must use only documented reasoning processes:
  - using the best known process(es) for a given class of problem
  - stating clearly which particular process(es) you use
  - documenting any new processes you use
- you must make every reasonable effort to verify that:
  - your inputs are reasonably accurate, and
  - there are no other reasoning processes which might be better suited to this class of problem, and
  - there are no significant flaws in your application of the reasoning processes you are using, and
  - there are no significant inputs you are ignoring
This definition seems to imply that something can only be rational if an immense amount of time and research is dedicated to it. But I can say something off the cuff, with no more of a reasoning process than "this was the output of my black-box intuition", and be rational. All that's required is that my intuition was accurate in that particular instance, and I reasonably expected it to be accurate with high enough probability relative to the importance of the remark. See How Much Thought.
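The "How Much Thought" point can be stated as a small decision rule. This formalization is mine, not the commenter's, and the names and numbers are illustrative assumptions: an off-the-cuff remark is rational when the expected cost of the intuition being wrong is smaller than the cost of deliberating further.

```python
def off_the_cuff_is_rational(p_intuition_correct, cost_of_error,
                             cost_of_deliberation):
    """True if relying on black-box intuition beats thinking harder:
    expected cost of error < cost of further deliberation."""
    expected_error_cost = (1.0 - p_intuition_correct) * cost_of_error
    return expected_error_cost < cost_of_deliberation

# A casual remark: intuition is 90% reliable and the stakes are tiny,
# so answering off the cuff is rational.
assert off_the_cuff_is_rational(0.9, cost_of_error=1.0,
                                cost_of_deliberation=5.0)

# Same intuition, but huge stakes: now it pays to think harder.
assert not off_the_cuff_is_rational(0.9, cost_of_error=1000.0,
                                    cost_of_deliberation=5.0)
```

This is how "reasonable effort" can scale with the importance of the remark rather than demanding immense research for everything.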
"Immense" wouldn't be "reasonable" unless the problem was of such magnitude as to call for an immense amount of research. That's why I qualify pretty much every requirement with that word.
I'm fine with that distinction but it doesn't change my point. Why do external terminal values have to be rational? What does it mean for a value to be rational?
Can you just answer those two questions?
Here's my answer, finally... or a more complete answer, anyway.
I'm not sure I understand your issue. If this response doesn't work you may have to reexplain.
If you have some values -- say happiness -- then there can be irrational ways of evaluating actions in terms of those values. So if I'm concerned with happiness but only look at the effects of the action on my sneakers, and not the emotions of people, well, that seems irrational if happiness is really what I care about. Certainly there are actions which can be either consistent or inconsistent with some set of values, and taking actions that are inconsistent with your values is irrational. What I don't see is what it could mean for those values to be rational or irrational in the first place. I don't think people "decide" on terminal values in the way they decide on breakfast or to give to one charity over another.
Does that address your concern?
See my comment about "internal" and "external" terminal values -- I think possibly that's where we're failing to communicate.
Internal terminal values don't have to be rational -- but external ones (goals for society) do, and need to take individual ones into account. Violating an individual internal TV causes suffering, which violates my proposed universal external TV.
For instance... if I'm a heterosexual male, then one of my terminal values might be to form a pair-bond with a female of my species. That's an internal terminal value. This doesn't mean that I think everyone should do this; I can still support gay rights. "Supporting gay rights" is an external value, but not a terminal one for me. For a gay person, it probably would be a terminal value -- so prohibiting gays from marrying would be violating their internal terminal values, which causes suffering, which violates my proposed universal external terminal value of "minimizing suffering / maximizing happiness" -- and THAT is why it is wrong to prohibit gays from marrying, not because I personally happen to think it is wrong (i.e. not because of my external intermediate value of supporting gay rights).
So how do you rationally decide if an action is right or wrong? -- or are you saying you can't do this?
There is no such thing as "rationally deciding if an action is right or wrong". This has nothing to do with particularism. It's just a metaethical position. I don't know what can be rational or irrational about morality.
Again though, I'm not a particularist, I do have principles I can apply if I don't have strong intuitions. A particularist only has her intuitions.
Also, just to be clear: you are saying that you do not believe rightness or wrongness of an action ultimately derives from whether or not it does harm? ("Harm" being the more common term; I tried to refine it a bit as "personally-defined suffering", but I think you're disagreeing with the larger idea -- not my refinement of it.)
I don't believe my own morality can be reduced to language about harm. I'm not sure what "ultimately derives" means, but I suspect my answer is no. My morality happens to have a lot to do with harm (again, I'm a Haidtian liberal). But I don't think that makes my morality more rational than a morality that is less about harm. There is no such thing as a "rational" or "irrational" morality, only moralities I find silly or abhorrent.
I tried to make it quite clear that I do care about the rest of the world; the fact that I don't yet have a solution for them (and am therefore not offering one) does not negate this.
If it's the case that you care about the rest of the world then I don't think you realize how non-ideal your prescriptions are. You're basically advocating for redistributing wealth from part of the global upper class to part of the global middle class and ignoring those experiencing the most pain and the most injustice.
I also tried to make it quite clear that my solution for Americans must not come at the price of harming others in the world, and that (further) I believe that as long as it avoids this, it may be of some benefit to the rest of the world as we will not be allowing unused resources to languish in the hands of the very richest people (who really don't need them) -- leaving the philanthropists among us free to focus on poverty worldwide rather than domestically.
But of course it comes at the price of harming the rest of the world. You're advocating sacrificing political resources to pass legislation. Those resources are to some extent limited which means you're decreasing the chances of or at least delaying changes in policy which would actually benefit the poorest. Moreover, social entitlements are notoriously impossible to overturn which means you're putting all this capital in a place we can't take it from to give to the people who really need it. Shoot, at least the mega-rich are sometimes using their money to invest in developing countries.
This doesn't even get us into preventing existential risk. Whenever you have a utility-like morality, using resources inefficiently is about as bad as actively doing harm.
You seem to be arguing, however, that actions can be wrong without causing any demonstrable harm. Can you give an example?
None you'll agree with! You've already said your morality is about preventing harm! But like it or not, there are people who really don't care about suffering outside their own country. There are people who think gay marriage is wrong no matter what effects it has on society (just as there are those, like me, who think it should be legal even if it damages society). There are those who do not believe we should criticize our leader under certain circumstances. There are those who believe our elders deserve respect above and beyond what they deserve as humans. There are those who believe sex outside of marriage is wrong. There are those who believe eating cow is immoral; there are others who believe eating cow is delicious. None of these people are necessarily rational or irrational.
I'll reiterate one question: What do you mean by rational in "rational morality"?
You're basically advocating for redistributing wealth from part of the global upper class to part of the global middle class and ignoring those experiencing the most pain and the most injustice.
I've explained repeatedly -- perhaps not in this subthread, so I'll reiterate -- that I'm only proposing reallocating domestic resources within the US, not resources which would otherwise be spent on foreign aid of any kind. I don't see how that can be harmful to anyone except (possibly) the extremely rich people from whom the resources are being reallocated.
(Will respond to your other points in separate comments, to maximize topic-focus of any subsequent discussion.)
The classic one is euthanasia.
Your example exposes the flaw in the "destroy everything instantly and painlessly" pseudo-solution: the latter assumes that life is more suffering than pleasure. (Euthanasia is only performed -- or argued for, anyway -- when the gain from continuing to live is believed to be outweighed by the suffering.)
I think this shows that there needs to be a term for pleasure/enjoyment in the formula...
...or perhaps a concept or word which equates to either suffering and pleasure depending on signage (+/-), and then we can simply say that we're trying to maximize that term -- where the exact aggregation function has yet to be determined, but we know it has a positive slope.
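A toy sketch of that signed term (the names, numbers, and the choice of a weighted sum as the aggregator are all my assumptions; the comment only commits to the aggregation function having a positive slope): each person gets one hedonic value, positive for pleasure and negative for suffering, and any aggregator that increases when any individual term increases will do.

```python
def aggregate(hedonic_values, weight=1.0):
    # Placeholder aggregator: a weighted sum. Any function that is
    # monotone increasing (positive slope) in each term would serve.
    return weight * sum(hedonic_values)

status_quo = [+2.0, -1.0, +0.5]        # mixed pleasure and suffering
destroy_everything = [0.0, 0.0, 0.0]   # no suffering, but no pleasure either

# With a pleasure term included, "destroy everything painlessly"
# scores worse whenever life is net-positive -- which is the flaw in
# that pseudo-solution identified above.
assert aggregate(status_quo) > aggregate(destroy_everything)
```

Under a suffering-only formula the comparison would flip, since the status quo contains some suffering and annihilation contains none; the signed term is what blocks that conclusion.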
This post was long and winding, and didn't seem to deliver much. This might just be because I was tired. Either way, it certainly didn't deliver on either of its titles.
My main conclusions are, oddly enough, in the final section:
[paste]
I propose that the key elements of a rational conversation are (where "you" refers collectively to all participants):
P.S. The list refuses to format nicely in comment mode; I did what I could.