Morality should be Moral
This article poses some major questions concerning morality, each broken into sub-questions intended to help in answering the major question. It's not a criticism of any morality in particular, but rather what I hope is a useful way to consider any moral system, and a way to help people challenge their assumptions about their own moral systems. I don't expect responses to try to answer these questions; indeed, I'd prefer you don't. My preferred responses would be changes, additions, clarifications, or challenges to the questions or to the objective of this article.
First major question: Could you morally advocate other people adopt your moral system?
This isn't as trivial a question as it seems on its face. Take a strawman hedonism, for a very simple example. Is a hedonist's pleasure maximized by encouraging other people to pursue -their- pleasure? Or would it be better served by convincing them to pursue other people's (a class of people of which our strawman hedonist is a member) pleasure?
It's not merely selfish moralities which suffer meta-moral problems. I've encountered a few near-Comtean altruists who will readily admit their morality makes them miserable; the idea that other people are worse off than them fills them with a deep guilt which they cannot resolve. If their goal is truly the happiness of others, spreading their moral system is a short-term evil. (It may be a long-term good, depending on how they do their accounting, but non-moral altruism isn't actually a rare quality, so I think an honest accounting would suggest their moral system doesn't add much additional altruism to the system, only a lot of guilt about the fact that not much altruistic action is taking place.)
Note: I use the word "altruism" here in its modern, non-Comtean sense. Altruism is that which benefits others.
Does your moral system make you unhappy, on the whole? Does it, like most moral systems, place a value on happiness? Would it make the average person less or more happy, if they and they alone adopted it? Are your expectations of the moral value of your moral system predicated on an unrealistic scenario of universal acceptance? Maybe your moral system isn't itself very moral.
Second: Do you think your moral system makes you a more moral person?
Does your moral system promote moral actions? How much of the effort you spend on your morality goes into feeling good because you feel you've effectively promoted the moral system itself, rather than into promoting the values inherent in it?
Do you behave any differently than you would if you operated under a "common law" morality, such as social norms and laws? That is, does your ethical system make you behave differently than if you didn't possess it? Are you evaluating the merits of your moral system solely on how it answers hypothetical situations, rather than how it addresses your day-to-day life?
Does your moral system promote behaviors you're uncomfortable with and/or could not actually do, such as pushing people in the way of trolleys to save more people?
Third: Does your moral system promote morality, or itself as a moral system?
Is the primary contribution of your moral system to your life adding outrage that other people -don't- follow your moral system? Do you feel that people who follow other moral systems are immoral even if they end up behaving in exactly the same way you do? Does your moral system imply complex calculations which aren't actually taking place? Is the primary purpose of your moral system encouraging moral behavior, or defining what the moral behavior would have been after the fact?
Considered as a meme or memeplex, does your moral system seem better suited to propagating itself than to encouraging morality? Do you think "The primary purpose of this moral system is ensuring that these morals continue to exist" could be an accurate description of your moral system? Does the moral system promote the belief that people who don't follow it are completely immoral?
Fourth: Is the major purpose of your morality morality itself?
This is a rather tough question to elaborate with further questions, so I suppose I should try to clarify a bit first: Take a strawman utilitarianism where "utility" -really is- what the morality is all about, where somebody has painstakingly gone through and assigned utility points to various things (this is kind of common in game-based moral systems, where you're just accumulating some kind of moral points, positive or negative). Or imagine (tough, I know) a religious morality where the sole objective of the moral system is satisfying God's will. That is, does your moral system define morality to be about something abstract and immeasurable, defined only in the context of your moral system? Is your moral system a tautology, which must be accepted to even be meaningful?
This one can be difficult to identify from the inside, because to some extent -all- human morality is tautological; you have to identify it with respect to other moralities, to see if it's a unique island of tautology, or whether it applies to human moral concerns in the general case. With that in mind, when you argue with other people about your ethical system, do they -always- seem to miss the point? Do they keep trying to reframe moral questions in terms of other moral systems? Do they bring up things which have nothing to do with (your) morality?
Personal Evidence - Superstitions as Rational Beliefs
I'll start with a confession:
The evidence I have personally seen suggests haunted houses are, in fact, real, without giving any particular credence to any particular explanation of what the haunting is. In particular, I own a house in which bizarre crap has happened, persistently, since I first moved into it. I've since moved into another house, and have been making repairs to the old one in preparation to sell it; most recently, in a room with almost no furniture, I dropped a key in a space with absolutely no furniture. Four people searched the area for significant periods of time on three different occasions with no luck. I found it on the floor a week or two ago, on top of something that wasn't there when it fell. That was the straw that broke the camel's back in terms of my skepticism.
Other bizarre things that have happened include such things as my waking up to discover my recently-purchased bottle of key lime juice had been placed in the oven, and the oven turned on; the plastic bottle had just started to melt when I made the discovery. Another situation involved my sister, who one morning (while home alone) walked into the living room and discovered on a previously empty floor three sonograms of the previous occupant's baby. (There were -many- other things; I'm choosing for the purposes of this post the most unusual and least prone-to-outside-explanation occurrences. Night terrors, for example, are easily explained.)
Up until the last incident, the key, I was inclined to attribute the events to, say, sleepwalking and confirmation bias. At this point, I do not think the evidence really supports that conclusion anymore. My skepticism has been broken by personal experience; I'm not going to attribute anything to any -particular- explanation, but there is definitely something -not normal- about that house, whatever it may be; it has been the (nearly) sole repository of such experiences in my life. (The only other such experience was the day my grandfather (with whom I was extremely close) died, and given the mental turmoil I was experiencing, I'm disinclined to give that particular experience too much credit. For the curious, I was taking a shower, and the hot water repeatedly (3 times) turned off. As in, the knob was completely rotated to shut off the flow of hot water to the faucet.)
A key point of rationality is that evidence can in fact change your mind. Well, the evidence has changed my mind.
From a reader's perspective, this is all anecdotal evidence. So I don't expect to change anybody -else's- mind - indeed, you're probably making a mistake if you -do- change your mind, because out of millions of people, you -should- expect to see a few weird things being related by other people. The odds of somebody else relating an entirely factual series of anecdotes that suggest something unlikely are probably significantly higher than the odds of that unlikely thing being true. However, the odds of such things happening to you personally are considerably -lower- than the odds of hearing about the events from somebody else. Which all leads into a central conclusion: It's possible for the evidence to support one person believing something, while at the same time -not- supporting that anybody else believe that thing. If you win the lottery, that may be evidence for you believing you're living in a simulation or that some other mechanism "forced" the outcome - while at the same time the evidence doesn't suggest anything for somebody -else- winning the lottery.
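The arithmetic behind this asymmetry is easy to make concrete. The numbers below are mine, chosen purely for illustration (a one-in-a-million event and a reporting population of five million):

```python
# Probability that a rare event happens to *somebody* vs. to *you*.
# Illustrative numbers only; not taken from the article.

p_event = 1e-6        # chance the event happens to a given person
n_people = 5_000_000  # people who could relate such an anecdote

p_you = p_event
p_someone = 1 - (1 - p_event) ** n_people  # at least one person experiences it

print(f"odds it happens to you:      {p_you:.6f}")
print(f"odds it happens to somebody: {p_someone:.3f}")
```

With these numbers, hearing at least one entirely factual weird anecdote from somebody is nearly guaranteed (about 99.3%), while the odds of it being your anecdote remain one in a million - which is the asymmetry the paragraph above turns on.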
I have a different purpose in mind: making the claim that objectively irrational beliefs can, in fact, be subjectively rational. Prior to these experiences, I regarded the idea of a haunted house - I use the idea without prejudice as to what "haunted" is or refers to - as just superstitious people scaring themselves. At this point I'm forced by the evidence I've seen to conclude that there's something to the idea, even if it's not what people think it is. Maybe EMFs subtly messing with my brain (there is some weak evidence for the idea that electromagnetic fluctuations can induce metabolic changes in neurons - see http://jama.jamanetwork.com/article.aspx?articleid=645813 ), maybe something else.
If a pattern-recognition algorithm doesn't produce false positives, it's probably producing false negatives; and given that we can test false positives but often don't even know to test for false negatives, pattern recognition should favor false positives over false negatives. What does this have to do with anything? Well, it means superstitions aren't the product of a poor mind; only -untested- superstitions are. A good intelligence should develop superstitions. It should, when capable, discard them.
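A toy signal-detection model makes the point concrete. All of the costs and distributions here are my own illustrative assumptions, not anything from the text: when missing a real threat is far more expensive than a wasted reaction, the cost-minimizing detector deliberately runs a high false-positive rate - that is, it is "superstitious" by design.

```python
import random

random.seed(0)

COST_FN = 100.0  # cost of ignoring a real threat (false negative)
COST_FP = 1.0    # cost of reacting to nothing (false positive)
P_THREAT = 0.05  # threats are rare

def expected_cost(threshold, trials=20_000):
    """Average cost per encounter for a detector firing when cue > threshold."""
    total = 0.0
    for _ in range(trials):
        threat = random.random() < P_THREAT
        # noisy cue: real threats shift the cue distribution upward
        cue = random.gauss(1.0 if threat else 0.0, 1.0)
        react = cue > threshold
        if threat and not react:
            total += COST_FN
        elif react and not threat:
            total += COST_FP
    return total / trials

costs = {t: expected_cost(t) for t in [-1.0, 0.0, 1.0, 2.0]}
best = min(costs, key=costs.get)
print("expected cost per encounter:", {t: round(c, 2) for t, c in costs.items()})
print("cost-minimizing threshold:", best)
```

The lowest threshold wins: it reacts to most encounters (nearly all of them false alarms) because the occasional missed threat dwarfs the accumulated cost of flinching at shadows. Under these assumptions, an agent that never produced false positives would be paying dearly in false negatives.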
But it should only discard such superstitions as it has evidence to do so.
Now, the skeptical reader might ask what odds I place on each of these events occurring. My answer is as follows: Each event was highly unlikely in itself, explainable as an independent event only by positing pretty unlikely circumstances (what odds would I place on me or my housemate sleepwalking multiple times when neither of us has any history of such behavior, and such behavior has entirely ceased since leaving that house? Keep in mind that neither I nor my sister was initially inclined to regard such events as even needing explanation; it wasn't until the most recent episode that I decided the evidence suggests anything at all, so the possible explanation that the sleepwalking was a product of disturbance at the first few unusual events seems unlikely). Further evidence has rendered each event less likely as an independent phenomenon - since moving to a different house, the occurrences have ceased. When returning to the house, occurrences resume within its context. My control, while hardly blind, is controlling. But meaningfully, the same evidence doesn't mean the same thing if it is coming from somebody else; out of millions of people, I would expect such things to occur. I simply cannot expect them to occur -to me-. (And I wasn't the only one who found the house to be... off. There's a sense of not-quite-rightness to one basement room which I cannot explain without resorting to Lovecraftian cliches about alien geometries. The house was burgled several times; the only room that was left completely untouched, even when the copper piping was stolen (and subsequently the water meter - I got a waterfall in my basement!), was that room, which is conveniently where I left a thousand or so dollars worth of building materials for a project I hadn't finished yet.)
Evidence is personal. The odds of something happening are not equal to the odds of that something happening to you. Therefore, while we should not be surprised if miracles (that is, really unlikely and contextually significant events) occur, it is still legitimate to be surprised when they occur to us individually. The qualitative rationality of an individual belief is not equal to the qualitative rationality of the same belief on a social scale; individuals get different evidence than society, even when the same evidence is apparently present both for the individual and the community.
And just as it is a mistake for people to judge the beliefs of others based on the community standard of evidence, rather than the individual standard of evidence, it is likewise a mistake for an individual to judge society based on the individual standard of evidence, rather than the community. Just as it is possible for the individual to rationally believe something that society should not rationally believe as a whole, it is possible for society to rationally reject something the individual has overwhelming personal evidence for.
Aumann was, in short, wrong, because Aumann updating is based on the premise that two individuals -can- fully share their evidence. Evidence is incompletely transferable.
(Note: Anthropic reasoning can potentially remedy this at least to some extent for -past- experiences; reproducible and continuing experiences somewhat less.)
Why Politics are Important to Less Wrong...
...and no, it's not because of potential political impact on its goals. Although that's also a thing.
The Politics problem is, at its root, about forming a workable set of rules by which society can operate, which society can agree with.
The Friendliness Problem is, at its root, about forming a workable set of values which are acceptable to society.
Politics as a process (I will use "politics" to refer to the process of politics henceforth) doesn't generate values; values are strictly an input, which politics converts into rules intended to maximize them. Politics is value-agnostic; it doesn't care what the values are, or where they come from. Which is to say, provided you solve the Friendliness Problem, its solution provides a valuable input into politics.
Politics is also an intelligence. Not in the "self aware" sense, or even in the "capable of making good judgments" sense, but in the sense of an optimization process. We're each nodes in this alien intelligence, and we form what looks, to me, suspiciously like a neural network.
The Friendliness Problem is equally applicable to Politics as it is to any other intelligence. Indeed, provided we can provably solve the Friendliness Problem, we should be capable of creating Friendly Politics. Friendliness should, in principle, be equally applicable to both. Now, there are some issues with this - politics is composed of unpredictable hardware, namely, people. And it may be that the neural architecture is fundamentally incompatible with Friendliness. But that is discussing the -output- of the process. Friendliness is first an input, before it can be an output.
More, we already have various political formations, and can assess their Friendliness levels, merely in terms of the values that went -into- them.
Which is where I think politics offers a pretty strong hint to the possibility that the Friendliness Problem has no resolution:
We can't agree on which political formations are more Friendly. That's what "Politics is the Mindkiller" is all about; our inability to come to an agreement on political matters. It's not merely a matter of the rules - which is to say, it's not a matter of the output: We can't even come to an agreement about which values should be used to form the rules.
This is why I think political discussion is valuable here, incidentally. Less Wrong, by and large, has been avoiding the hard problem of Friendliness, by labeling its primary functional outlet in reality as a mindkiller, not to be discussed.
Either we can agree on what constitutes Friendly Politics, or not. If we can't, I don't see much hope of arriving at a Friendliness solution more broadly. Friendly to -whom- becomes the question, if it was ever anything else. Which suggests a division in types of Friendliness; Strong Friendliness, which is a fully generalized set of human values, and acceptable to just about everyone; and Weak Friendliness, which isn't fully generalized, and perhaps acceptable merely to a plurality. Weak Friendliness survives the political question. I do not see that Strong Friendliness can.
(Exemplified: When I imagine a Friendly AI, I imagine a hands-off benefactor who permits people to do anything they wish to which won't result in harm to others. Why, look, a libertarian/libertine dictator. Does anybody envisage a Friendly AI which doesn't correspond more or less directly with their own political beliefs?)
Politics Discussion Thread February 2013
- Top-level comments should introduce arguments; responses should be responses to those arguments.
- Upvote and downvote based on whether or not you find an argument convincing in the context in which it was raised. This means if it's a good argument against the argument it is responding to, not whether or not there's a good/obvious counterargument to it; if you have a good counterargument, raise it. If it's a convincing argument, and the counterargument is also convincing, upvote both. If both arguments are unconvincing, downvote both.
- A single argument per comment would be ideal; as MixedNuts points out here, it's otherwise hard to distinguish between one good and one bad argument, which makes the upvoting/downvoting difficult to evaluate.
- In general try to avoid color politics; try to discuss political issues, rather than political parties, wherever possible.
As Multiheaded added, "Personal is Political" stuff like gender relations, etc also may belong here.
Politics Discussion Thread January 2013
- Top-level comments should introduce arguments; responses should be responses to those arguments.
- Upvote and downvote based on whether or not you find an argument convincing in the context in which it was raised. This means if it's a good argument against the argument it is responding to, not whether or not there's a good/obvious counterargument to it; if you have a good counterargument, raise it. If it's a convincing argument, and the counterargument is also convincing, upvote both. If both arguments are unconvincing, downvote both.
- A single argument per comment would be ideal; as MixedNuts points out here, it's otherwise hard to distinguish between one good and one bad argument, which makes the upvoting/downvoting difficult to evaluate.
- In general try to avoid color politics; try to discuss political issues, rather than political parties, wherever possible.
As Multiheaded added, "Personal is Political" stuff like gender relations, etc also may belong here.
Politics Discussion Thread December 2012
I skipped October and November owing to election season, but opening back up:
- Top-level comments should introduce arguments; responses should be responses to those arguments.
- Upvote and downvote based on whether or not you find an argument convincing in the context in which it was raised. This means if it's a good argument against the argument it is responding to, not whether or not there's a good/obvious counterargument to it; if you have a good counterargument, raise it. If it's a convincing argument, and the counterargument is also convincing, upvote both. If both arguments are unconvincing, downvote both.
- A single argument per comment would be ideal; as MixedNuts points out here, it's otherwise hard to distinguish between one good and one bad argument, which makes the upvoting/downvoting difficult to evaluate.
- In general try to avoid color politics; try to discuss political issues, rather than political parties, wherever possible.
As Multiheaded added, "Personal is Political" stuff like gender relations, etc also may belong here.
Subsuming Purpose, Part 1
Summary:
The purpose of this entry is to establish the existence of local equilibria which can transform an ends-driven organization (an organization whose primary focus is a particular purpose) into a means-driven organization (an organization whose primary focus is the means of achieving its purpose, rather than the purpose itself).
Imagine you run a charity, and you have two star employees; one shares your goals without any emphasis on a means, the other believes in the cause but believes firmly in fundraising as the best means to that end. Both contribute to your charity, but the fundraiser does more good overall. The fundraiser enables your organization. Who do you set as your successor?
Who will your successor choose as their successor?
The person who believes in the purpose will choose the best person for achieving that purpose. The person who believes in a specific means to achieve that end will choose the best person for those means. The means will subsume the ends. A person who values specific means, say, fundraising, is more likely to promote fellow fundraisers; he values their contributions more. Specialists, and in particular the lines of thinking which lead to specialization, create rigidity in the organization.
Suppose that you choose the fundraiser. The fundraiser, by dint of having chosen to specialize in fundraising, probably believes that fundraising is more important than the alternative means of supporting the organization: he will probably choose to promote other effective fundraisers over their alternatives.
And now the people who don't agree that fundraising is the best means will start protesting, seeing their charity becoming increasingly subverted; fundraising is rewarded over the charitable purpose of the organization. They will protest, or leave; if their protests aren't heeded - for example, because fundraisers who believe in fundraising already run the organization - they may be marginalized. Such individuals may be selected out, either self-selectively, or by explicit opposition from a management wary of introducing people likely to cause trouble for them in the future.
Generalized:
In the example above, I made one particular assumption: that somebody who possesses some choice-driven characteristic X (competency at fundraising, in the example) is more likely to believe that X is important, and will favor X over alternative characteristics. It's not necessary that this always be the case; a generalist may also possess some characteristic X. It's only necessary that p(X∧Y) > p(X∧¬Y), where X is possession of characteristic X, and Y is belief that X is an important characteristic to have (belief that fundraising is the most valuable pursuit for the charitable organization, in the example).
Any preference which follows a tendency such that p(X∧Y) > p(X∧¬Y) will, once given a foothold, cement itself into the organization; those who are selected for X will also have, on average, a preference for X. They will in turn select individuals with X.
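A toy model of this entrenchment can be sketched as a succession chain. The base rate and bias numbers below are my own illustrative assumptions: leaders who hold the Y belief preferentially pick Y-believing successors, while purpose-driven leaders draw from the general pool. The long-run fraction of Y-believing leaders settles well above the pool's base rate.

```python
import random

random.seed(1)

BASE_RATE = 0.3  # P(Y) in the general pool of candidates
Y_BIAS = 0.9     # P(successor believes Y | current leader believes Y)

def run(generations=100_000):
    """Simulate successive leadership picks; return fraction of Y-believing leaders."""
    leader_y = False
    y_count = 0
    for _ in range(generations):
        if leader_y:
            # a Y-believing leader preferentially promotes fellow believers
            leader_y = random.random() < Y_BIAS
        else:
            # a purpose-driven leader draws from the general pool
            leader_y = random.random() < BASE_RATE
        y_count += leader_y
    return y_count / generations

frac = run()
print("long-run fraction of Y-believing leaders:", round(frac, 3))
```

With these numbers, the chain's stationary fraction of Y-believing leaders is BASE_RATE / (BASE_RATE + 1 - Y_BIAS) = 0.75 - more than double the 30% base rate - which is the "foothold" effect described above: the bias only needs to run in one direction for the means-believers to dominate over time.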
The danger of organizational specialization, as opposed to individual specialization, arises when the preference for X becomes a preference for the preference itself: when, given two people who both possess X, the one who also has a preference for X (that is, who has characteristic Y) is preferred over the one who does not. This is the point at which selecting people for X and Y becomes a runaway process, one which may subsume the original purpose of the organization.
When those who do not have a preference for X begin to believe that X has already overtaken the original purpose of the organization, the meaningful possibilities are that they will either fight it or leave. If they simply leave, they harden the preference for X; there are fewer individuals in the organization who oppose Y. If they fight it and win, they've won for a day; an equilibrium has not yet been reached. If they fight it and lose, they establish a preference for preference; people who disagree with the orthodoxy of X begin to be seen as potential conflict creators in the organization, and just as problematically, revealing the preference for X may alter the decisions of those who might enter the organization otherwise; a non-Y individual may choose another organization which better suits their preferences.
Every Cause Wants to be a Cult. Every belief wants to be an orthodoxy. Orthodoxy is a stable equilibrium, the pit surrounding the gently sloped hill of idea diversity.
Politics Discussion Thread August 2012
In line with the results of the poll here, a thread for discussing politics. Incidentally, folks, I think downvoting the option you disagree with in a poll is generally considered poor form.
1.) Top-level comments should introduce arguments; responses should be responses to those arguments.
2.) Upvote and downvote based on whether or not you find an argument convincing in the context in which it was raised. This means if it's a good argument against the argument it is responding to, not whether or not there's a good/obvious counterargument to it; if you have a good counterargument, raise it. If it's a convincing argument, and the counterargument is also convincing, upvote both. If both arguments are unconvincing, downvote both.
3.) A single argument per comment would be ideal; as MixedNuts points out here, it's otherwise hard to distinguish between one good and one bad argument, which makes the upvoting/downvoting difficult to evaluate.
4.) In general try to avoid color politics; try to discuss political issues, rather than political parties, wherever possible.
If anybody thinks the rules should be dropped here, now that we're no longer conducting a test - I already dropped the upvoting/downvoting limits I tried, unsuccessfully, to put in - let me know. The first rule is the only one I think is strictly necessary.
Debiasing attempt: If you haven't yet read Politics is the Mindkiller, you should.
Is Politics the Mindkiller? An Inconclusive Test
Or is the convention against discussing politics here silly?
I propose a test. I'm going to try to lay down some rules on voting on comments for the test here (not that I can force anybody to abide by them):
1.) Top-level comments should introduce arguments (or ridicule me and/or this test); responses should be responses to those arguments.
2.) Upvote and downvote based on whether or not you find an argument convincing in the context in which it was raised. This means if it's a good argument against the argument it is responding to, not whether or not there's a good/obvious counterargument to it; if you have a good counterargument, raise it. If it's a convincing argument, and the counterargument is also convincing, upvote both. If both arguments are unconvincing, downvote both.
3.) Try not to downvote particular comments excessively, if they're legitimate lines of argument. A faulty line of argument provides opportunity for rebuttal, and so for our test has value even then; that is, I want some faulty lines of argument here. If you disagree, please downvote me, instead of the faulty comments, because this post is what you want less of, not those comments. This necessarily implies, for balance, that we not excessively upvote comments. I'd suggest fairly arbitrary limits of 3/-3?
Edit: 4.) A single argument per comment would be ideal; as MixedNuts points out here, it's otherwise hard to distinguish between one good and one bad argument, which makes the upvoting/downvoting difficult to evaluate. (My apologies about missing this, folks.)
I'm going to try really hard not to get personally involved, except to lay down a leading comment posing an argument against abortion - a position I don't hold, for the record. The core of the argument isn't disingenuous; I hold that the basic argument is true, it just doesn't lead me to oppose abortion, because I don't hold the moral axiom by which the basic argument is extended into an argument against abortion. I'm playing devil's advocate to try to keep myself from getting sucked into the argument while providing an initial point of discussion.
Which leads me to the next point: If you see a hole in an argument, even if it's an argument for a perspective you agree with, poke through it. The goal is to see whether we can have a constructive political argument here.
The fact that this is a test, and known to be a test, means this isn't a blind study. Uh, try to act as if you're not being tested?
After it's gone on a little while, if this post hasn't been hopelessly downvoted and ridiculed (and thus the premise and test discarded as undesirable to begin with), we can put up a poll to see whether people found the political debates helpful, not helpful, and so on.
In Defense of Tone Arguments
Suppose, for a moment, you're a strong proponent of Glim, a fantastic new philosophy of ethics that will maximize truth, happiness, and all things good, just as soon as 51% of the population accepts it as the true way; once it has achieved majority status, careful models in game theory show that Glim proponents will be significantly more prosperous and happy than non-proponents (although everybody will benefit on average, according to its models), and it will take over.
Glim has stalled, however; it's stuck at 49% belief, and a new countermovement, antiGlim, has arisen, claiming that Glim is a corrupt moral system with fatal flaws which will destroy the country if it has its way. Belief is starting to creep down, and those who accepted the ideas as plausible but weren't ready to commit are starting to turn away from the movement.
In response, a senior researcher of Glim ethics has written a scathing condemnation of antiGlim as unpatriotic, evil, and determined to keep the populace in a state of perpetual misery to support its own hegemony. He vehemently denies that there are any flaws in the moral system, and refuses to entertain antiGlim in a public debate.
In response to this, belief creeps slightly up, but acceptance goes into a freefall.
You immediately ascertain that the negativity was worse for the movement than the criticisms were; you write a response, and are accused of attacking the tone and ignoring the substance of the arguments. Glim and antiGlim leadership proceed into protracted and nasty arguments, until both are highly marginalized and ignored by the general public. Belief in Glim continues, but when the leaders of antiGlim and Glim finally arrive at a bitterly agreed-upon conclusion - the arguments having centered on an actual error in the original formulation of Glim philosophy - they're unable either to get their remaining supporters to cooperate, or to get any of the public to listen. Truth, happiness, and all things good never arise, and things get slightly worse, as a result of the error.
Tone arguments are not necessarily logical errors; they may be invoked by those who agree with the substance of an argument but nevertheless feel that the argument, as posed, is counterproductive to its intended purpose.
I have stopped recommending Dawkins's work to people who are on the fence about religion. The God Delusion utterly destroyed his effectiveness at convincing people against religion. (In a world in which they couldn't do an internet search on his name, it might not matter; we don't live in that world, and I assume other people are as likely to investigate somebody as I am.) It doesn't even matter whether his facts are right or not; the way he presents them will put most people on the intellectual defensive.
If your purpose is to convince people, it's not enough to have good arguments, or good facts; these things can only work if people are receptive to those arguments and those facts. Your first move is your most important - you must try to make that person receptive. And if somebody levels a tone argument at you, your first consideration should not be "Oh! That's DH2, it's a fallacy, I can disregard what this person has to say!" It should be - why are they leveling a tone argument at you to begin with? Are they disagreeing with you on the basis of your tone, or disagreeing with the tone itself?
Or, in short, the categorical assessment of "Responding to Tone" as either a logical fallacy or a poor argument is incorrect, as it starts from an unfounded assumption that the purpose of a tone response is, in fact, to refute the argument. In the few cases where I have seen responses to tone actually used against an argument, they were in fact ad hominems, of the formulation "This person clearly hates [x], and thus can't be expected to have an unbiased perspective." Note that this is a particularly persuasive ad hominem, particularly for somebody looking to rationalize their beliefs against an argument - and that this inoculation against argument is precisely the reason you should, in fact, moderate your tone.