Less Wrong is a community blog devoted to refining the art of human rationality.

Pancritical Rationalism Can Apply to Preferences and Behavior

1 TimFreeman 25 May 2011 12:06PM

ETA: As stated below, criticizing beliefs is trivial in principle: either they were arrived at with an approximation to Bayes' rule, starting with a reasonable prior and then updated with actual observations, or they weren't. Subsequent conversation made it clear that criticizing behavior is also trivial in principle, since someone is either taking the action that they believe will best suit their preferences, or not. Finally, criticizing preferences became trivial too -- the relevant question is "Does/will agent X behave as though they have preferences Y", and that's a belief, so go back to Bayes' rule and a reasonable prior. So the entire issue that this post was meant to solve has evaporated, in my opinion. Here's the original article, in case anyone is still interested:
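The Bayes'-rule criterion the ETA appeals to can be made concrete. A minimal sketch of a single update (the prior, hit rate, and false-positive rate below are illustrative numbers, not anything from the post):

```python
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Return P(H|E) given P(H), P(E|H), and P(E|~H) via Bayes' rule."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# Hypothetical numbers: a 1% prior, evidence with a 90% hit rate
# and a 5% false-positive rate.
print(bayes_update(0.01, 0.9, 0.05))  # ≈ 0.1538
```

A belief "arrived at with an approximation to Bayes' rule" is one whose credence tracks this computation as evidence comes in; a criticizable belief is one that doesn't.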

Pancritical rationalism is a fundamental value in Extropianism that has only been mentioned in passing on LessWrong. I think it deserves more attention here. It's an approach to epistemology, that is, to the question "How do we know what we know?", that avoids the contradictions inherent in some of the alternative approaches.

The fundamental source document for it is William Bartley's Retreat to Commitment. He describes three approaches to epistemology, along with the dissatisfying aspects of the first two:

  • Nihilism. Nothing matters, so it doesn't matter what you believe. This path is self-consistent, but it gives no guidance.
  • Justificationism. Your belief is justified because it is a consequence of other beliefs. This path is self-contradictory. Eventually you'll go in circles trying to justify the other beliefs, or you'll find beliefs you can't justify. Justificationism itself cannot be justified.
  • Pancritical rationalism. You have taken the available criticisms of the belief into account and still feel comfortable with the belief. This path gives guidance about what to believe, although it does not uniquely determine one's beliefs. Pancritical rationalism can be criticized, so it is self-consistent in that sense.

Read on for a discussion about emotional consequences and extending this to include preferences and behaviors as well as beliefs.


Metacontrarian Metaethics

2 Will_Newsome 20 May 2011 05:36AM

Designed to gauge responses to some parts of the planned “Noticing confusion about meta-ethics” sequence, which should intertwine with or be absorbed by Lukeprog’s meta-ethics sequence at some point.

Disclaimer: I am going to leave out many relevant details. If you want, you can bring them up in the comments, but in general meta-ethics is still very confusing and thus we could list relevant details all day and still be confused. There are a lot of subtle themes and distinctions that have thus far been completely ignored by everyone, as far as I can tell.

Problem 1: Torture versus specks

Imagine you’re at a Less Wrong meetup when out of nowhere Eliezer Yudkowsky proposes his torture versus dust specks problem. Years of bullet-biting make this a trivial dilemma for any good philosopher, but suddenly you have a seizure during which you vividly recall all of those history lessons where you learned about the horrible things people do when they feel justified in being blatantly evil because of some abstract moral theory that is at best an approximation of sane morality and at worst an obviously anti-epistemic spiral of moral rationalization. Temporarily humbled, you decide to think about the problem a little longer:

"Considering I am deciding the fate of 3^^^3+1 people, I should perhaps not immediately assert my speculative and controversial meta-ethics. Instead, perhaps I should use the averaged meta-ethics of the 3^^^3+1 people I am deciding for, since it is probable that they have preferences that implicitly cover edge cases such as this, and disregarding the meta-ethical preferences of 3^^^3+1 people is certainly one of the most blatantly immoral things one can do. After all, even if they never learn anything about this decision taking place, people are allowed to have preferences about it. But... that the majority of people believe something doesn't make it right, and that the majority of people prefer something doesn't make it right either. If I expect that these 3^^^3+1 people are mostly wrong about morality and would not reflectively endorse their implicit preferences being used in this decision instead of my explicitly reasoned and reflected upon preferences, then I should just go with mine, even if I am knowingly arrogantly blatantly disregarding the current preferences of 3^^^3 currently-alive-and-not-just-hypothetical people in doing so and thus causing negative utility many, many, many times more severe than the 3^^^3 units of negative utility I was trying to avert. I may be willing to accept this sacrifice, but I should at least admit that what I am doing largely ignores their current preferences, and there is some chance it is wrong upon reflection regardless, for though I am wiser than those 3^^^3+1 people, I notice that I too am confused."
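The 3^^^3 above is written in Knuth's up-arrow notation, where each extra arrow iterates the previous operation: 3^^3 is a tower of three 3s, and 3^^^3 is 3^^(3^^3). A minimal sketch of the hyperoperation, for small arguments only (3^^^3 itself is far too large to ever compute):

```python
def up(a, n, b):
    """Knuth's up-arrow a ↑^n b; n=1 is plain exponentiation."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return up(a, n - 1, up(a, n, b - 1))

print(up(3, 1, 3))  # 3^3 = 27
print(up(3, 2, 3))  # 3^^3 = 3^(3^3) = 3^27 = 7625597484987
```

Already at two arrows and arguments of 3 the result has thirteen digits; 3^^^3 is a tower of 3s of height 7,625,597,484,987, which is why the thought experiment treats it as "more people than could ever physically exist."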

You hesitantly give your answer and continue to ponder the analogies to Eliezer’s document “CEV”, and this whole business about “extrapolation”...

(Thinking of people as having coherent non-contradictory preferences is very misleadingly wrong, not taking into account preferences at gradient levels of organization is probably wrong, not thinking of typical human preferences as implicitly preferring to update in various ways is maybe wrong (i.e. failing to see preferences as processes embedded in time is probably wrong), et cetera, but I have to start somewhere and this is already glossing over way too much.)

Bonus problem 1: Taking trolleys seriously

"...Wait, considering how unlikely this scenario is, if I ever actually did end up in it then that would probably mean I was in some perverse simulation set up by empirical meta-ethicists with powerful computers, in which case they might use my decision as part of a propaganda campaign meant to somehow discredit consequentialist reasoning or maybe deontological reasoning, or maybe they'd use it for some other reason entirely, but at any rate that sure complicates the problem...” (HT: Steve Rayhawk)

Conceptual Analysis and Moral Theory

60 lukeprog 16 May 2011 06:28AM

Part of the sequence: No-Nonsense Metaethics. Also see: A Human's Guide to Words.

If a tree falls in the forest, and no one hears it, does it make a sound?

Albert:  "Of course it does.  What kind of silly question is that?  Every time I've listened to a tree fall, it made a sound, so I'll guess that other trees falling also make sounds.  I don't believe the world changes around when I'm not looking."

Barry:  "Wait a minute. If no one hears it, how can it be a sound?"

Albert and Barry are not arguing about facts, but about definitions:

...the first person is speaking as if 'sound' means acoustic vibrations in the air; the second person is speaking as if 'sound' means an auditory experience in a brain.  If you ask "Are there acoustic vibrations?" or "Are there auditory experiences?", the answer is at once obvious. And so the argument is really about the definition of the word 'sound'.

Of course, Albert and Barry could argue back and forth about which definition best fits their intuitions about the meaning of the word. Albert could offer this argument in favor of using his definition of sound:

My computer's microphone can record a sound without anyone being around to hear it, store it as a file, and it's called a 'sound file'. And what's stored in the file is the pattern of vibrations in air, not the pattern of neural firings in anyone's brain. 'Sound' means a pattern of vibrations.

Barry might retort:

Imagine some aliens on a distant planet. They haven't evolved any organ that translates vibrations into neural signals, but they still hear sounds inside their own head (as an evolutionary byproduct of some other evolved cognitive mechanism). If these creatures seem metaphysically possible to you, then this shows that our concept of 'sound' is not dependent on patterns of vibrations.

If their debate seems silly to you, I have sad news. A large chunk of moral philosophy looks like this. What Albert and Barry are doing is what philosophers call conceptual analysis.1


The trouble with conceptual analysis

I won't argue that everything that has ever been called 'conceptual analysis' is misguided.2 Instead, I'll give examples of common kinds of conceptual analysis that corrupt discussions of morality and other subjects.

The following paragraph explains succinctly what is wrong with much conceptual analysis:

Analysis [had] one of two reputations. On the one hand, there was sterile cataloging of pointless folk wisdom - such as articles analyzing the concept VEHICLE, wondering whether something could be a vehicle without wheels. This seemed like trivial lexicography. On the other hand, there was metaphysically loaded analysis, in which ontological conclusions were established by holding fixed pieces of folk wisdom - such as attempts to refute general relativity by holding fixed allegedly conceptual truths, such as the idea that motion is intrinsic to moving things, or that there is an objective present.3


On Being Okay with the Truth

33 lukeprog 02 May 2011 12:17AM

On January 11, 2007, I timidly whispered to myself: "There is no God."

And with that, all my Christian dreams and hopes and purposes and moral systems came crashing down.

I wrote a defiant email to the host of an atheist radio show I'd been listening to:

I was coming from a lifetime high of surrendering… my life to Jesus, releasing myself from all cares and worries, and filling myself and others with love. Then I began an investigation of the historical Jesus… and since then I’ve been absolutely miserable. I do not think I am strong enough to be an atheist. Or brave enough. I have a broken leg, and my life is much better with a crutch… I’m going to seek genuine experience with God, to commune with God, and to reinforce my faith. I am going to avoid solid atheist arguments, because they are too compelling and cause for despair. I do not WANT to live in an empty, cold, ultimately purposeless universe in which I am worthless and inherently alone.

I was not okay with the truth. I had been taught that meaning and morality and hope depended on God. If God didn't exist, then life was meaningless.

My tongue felt like cardboard for a week.

But when I pulled my head out of the sand, I noticed that millions of people were living lives of incredible meaning and morality and hope without gods. The only thing I had 'lost' was a lie, anyway.

This crisis taught me a lesson: that I could be okay with the truth.

When I realized that I am not an Unmoved Mover of my own actions, I was not much disturbed. I realized that 'moral responsibility' still mattered, because people still had reasons to condemn, praise, punish, and reward certain actions in others. And I realized that I could still deliberate about which actions were likely to achieve my goals, and that this deliberation would affect my actions. Apples didn't stop falling from trees when Einstein's equations replaced Newton's, and humans didn't stop making conscious choices that have consequences when we discovered that we are fully part of nature.

I didn't freak out when I gave up moral absolutism, either. I had learned to be okay with the truth. Whatever is meant by 'morality', it remains the case that agents have reasons to praise and condemn certain desires and actions in other agents, and that there are more reasons to praise and condemn some actions than others.

I've gone through massive reversals in my metaethics twice now, and guess what? At no time did I spontaneously acquire the urge to rape people. At no time did I stop caring about the impoverished. At no time did I want to steal from the elderly. At no time did people stop having reasons to praise or condemn certain desires and actions of mine, and at no time did I stop having reasons to praise or condemn the desires and actions of others.

We humans have a tendency to 'freak out' when our model of the world changes drastically. But we get over it.

The love a mother has for her child does not disappear when we explain the brain processes that instantiate that love. Explaining something is not explaining it away. Showing that love and happiness and moral properties are made of atoms does not mean they are just atoms. They are also love and happiness and moral properties. Water was still water after we discovered which particular atoms it was made of.

When you understand this, you need not feel the threat of nihilism as science marches on. Instead, you can jump with excitement as science locates everything we care about in the natural world and tells us how it works. Along the way, you can take joy in the merely real.

Whenever you 'lose' something as a result of getting closer to the truth, you've only lost a lie. You can face reality, even the truth about morality.



People can stand what is true,
for they are already enduring it.

- Eugene Gendlin

What is Metaethics?

31 lukeprog 25 April 2011 04:53PM

Part of the sequence: No-Nonsense Metaethics

When I say I think I can solve (some of) metaethics, what exactly is it that I think I can solve?

First, we must distinguish the study of ethics or morality from the anthropology of moral belief and practice. The first one asks: "What is right?" The second one asks: "What do people think is right?" Of course, one can inform the other, but it's important not to confuse the two. One can correctly say that different cultures have different 'morals' in that they have different moral beliefs and practices, but this may not answer the question of whether or not they are behaving in morally right ways.

My focus is metaethics, so I'll discuss the anthropology of moral belief and practice only when it is relevant for making points about metaethics.

So what is metaethics? Many people break the field of ethics into three sub-fields: applied ethics, normative ethics, and metaethics.

Applied ethics: Is abortion morally right? How should we treat animals? What political and economic systems are most moral? What are the moral responsibilities of businesses? How should doctors respond to complex and uncertain situations? When is lying acceptable? What kinds of sex are right or wrong? Is euthanasia acceptable?

Normative ethics: What moral principles should we use in order to decide how to treat animals, when lying is acceptable, and so on? Is morality decided by what produces the greatest good for the greatest number? Is it decided by a list of unbreakable rules? Is it decided by a list of character virtues? Is it decided by a hypothetical social contract drafted under ideal circumstances?

Metaethics: What does moral language mean? Do moral facts exist? If so, what are they like, and are they reducible to natural facts? How can we know whether moral judgments are true or false? Is there a connection between making a moral judgment and being motivated to abide by it? Are moral judgments objective or subjective, relative or absolute? Does it make sense to talk about moral progress?


Heading Toward: No-Nonsense Metaethics

38 lukeprog 24 April 2011 12:42AM

Part of the sequence: No-Nonsense Metaethics

A few months ago, I predicted that we could solve metaethics in 15 years. To most people, that was outrageously optimistic. But I've updated since then. I think much of metaethics can be solved now (depending on where you draw the boundary around the term 'metaethics'). My upcoming sequence 'No-Nonsense Metaethics' will solve the part that can be solved, and make headway on the parts of metaethics that aren't yet solved. Solving the easier problems of metaethics will give us a clear and stable platform from which to solve the hard questions of morality.

Metaethics has been my target for a while now, but first I had to explain the neuroscience of pleasure and desire, and how to use intuitions for philosophy.

Luckily, Eliezer laid most of the groundwork when he explained couldness, terminal and instrumental values, the complexity of human desire and happiness, how to dissolve philosophical problems, how to taboo words and replace them with their substance, how to avoid definitional disputes, how to carve reality at its joints with our words, how an algorithm feels from the inside, the mind projection fallacy, how probability is in the mind, reductionism, determinism, free will, evolutionary psychology, how to grasp slippery things, and what you would do without morality.

Of course, Eliezer wrote his own metaethics sequence. Eliezer and I seem to have similar views on morality, but I'll be approaching the subject from a different angle, I'll be phrasing my solution differently, and I'll be covering a different spread of topics.

Why do I think much of metaethics can be solved now? We have enormous resources not available just a few years ago. The neuroscience of pleasure and desire didn't exist two decades ago. (Well, we thought dopamine was 'the pleasure chemical', but we were wrong.) Detailed models of reductionistic meta-ethics weren't developed until the 1980s and 90s (by Peter Railton and Frank Jackson). Reductionism has been around for a while, but there are few philosophers who relentlessly play Rationalist's Taboo. Eliezer didn't write How an Algorithm Feels from the Inside until 2008.

Our methods will be familiar ones, already used to dissolve problems ranging from free will to disease. We will play Taboo with our terms, reducing philosophical questions into scientific ones. Then we will examine the cognitive algorithms that make it feel like open questions remain.

Along the way, we will solve or dissolve the traditional problems of metaethics: moral epistemology, the role of moral intuition, the is-ought gap, matters of moral psychology, the open question argument, moral realism vs. moral anti-realism, moral cognitivism vs. non-cognitivism, and more. 

You might respond, "Sure, Luke, we can do the reduce-to-algorithm thing with free will or disease, but morality is different. Morality is fundamentally normative. You can't just dissolve moral questions with Taboo-playing and reductionism and cognitive science."

Well, we're going to examine the cognitive algorithms that generate that intuition, too.

And at the end, we will see what this all means for the problem of Friendly AI.

I must note that I didn't exactly invent the position I'll be defending. After sharing my views on metaethics with many scientifically-minded people in private conversation, many have said something like "Yeah, that's basically what I think about metaethics, I've just never thought it through in so much detail and cited so much of the relevant science [e.g. recent work in neuroeconomics and the science of intuition]."

But for convenience I do need to invent a name for my theory of metaethics. I call it pluralistic moral reductionism.


Next post: What is Metaethics?



Is Kiryas Joel an Unhappy Place?

20 gwern 23 April 2011 12:08AM

I was browsing my RSS feed, as one does, and came across a New York Times article, "A Village With the Numbers, Not the Image, of the Poorest Place", about the Satmar Hasidic Jews of Kiryas Joel (NY).

Their interest lies in their extraordinarily high birthrate & population growth, and their poverty - which are connected. From the article:

"...officially, at least, none of the nation’s 3,700 villages, towns or cities with more than 10,000 people has a higher proportion of its population living in poverty than Kiryas Joel, N.Y., a community of mostly garden apartments and town houses 50 miles northwest of New York City in suburban Orange County.

About 70 percent of the village’s 21,000 residents live in households whose income falls below the federal poverty threshold, according to the Census Bureau. Median family income ($17,929) and per capita income ($4,494) rank lower than any other comparable place in the country. Nearly half of the village’s households reported less than $15,000 in annual income. About half of the residents receive food stamps, and one-third receive Medicaid benefits and rely on federal vouchers to help pay their housing costs.

Kiryas Joel’s unlikely ranking results largely from religious and cultural factors. Ultra-Orthodox Satmar Hasidic Jews predominate in the village; many of them moved there from Williamsburg, Brooklyn, beginning in the 1970s to accommodate a population that was growing geometrically. Women marry young, remain in the village to raise their families and, according to religious strictures, do not use birth control. As a result, the median age (under 12) is the lowest in the country and the household size (nearly six) is the highest. Mothers rarely work outside the home while their children are young. Most residents, raised as Yiddish speakers, do not speak much English. And most men devote themselves to Torah and Talmud studies rather than academic training — only 39 percent of the residents are high school graduates, and less than 5 percent have a bachelor’s degree. Several hundred adults study full time at religious institutions.

...Because the community typically votes as a bloc, it wields disproportionate political influence, which enables it to meet those challenges creatively. A luxurious 60-bed postnatal maternal care center was built with $10 million in state and federal grants. Mothers can recuperate there for two weeks away from their large families. Rates, which begin at $120 a day, are not covered by Medicaid, although, Mr. Szegedin said, poorer women are typically subsidized by wealthier ones.

...The village does aggressively pursue economic opportunities. A kosher poultry slaughterhouse, which processes 40,000 chickens a day, is community owned and considered a nonprofit organization. A bakery that produces 800 pounds of matzo daily is owned by one of the village’s synagogues.

Most children attend religious schools, but transportation and textbooks are publicly financed. Several hundred handicapped students are educated by the village’s own public school district, which, because virtually all the students are poor and disabled, is eligible for sizable state and federal government grants.

... Still, poverty is largely invisible in the village. Parking lots are full, but strollers and tricycles seem to outnumber cars. A jeweler shares a storefront with a check-cashing office. To avoid stigmatizing poorer young couples or instilling guilt in parents, the chief rabbi recently decreed that diamond rings were not acceptable as engagement gifts and that one-man bands would suffice at weddings. Many residents who were approached by a reporter said they did not want to talk about their finances.

...Are as many as 7 in 10 Kiryas Joel residents really poor? “It is, in a sense, a statistical anomaly,” Professor Helmreich said. “They are clearly not wealthy, and they do have a lot of children. They spend whatever discretionary income they have on clothing, food and baby carriages. They don’t belong to country clubs or go to movies or go on trips to Aruba.

...David Jolly, the social services commissioner for Orange County, also said that while the number of people receiving benefits seemed disproportionately high, the number of caseloads — a family considered as a unit — was much less aberrant. A family of eight who reports as much as $48,156 in income is still eligible for food stamps, although the threshold for cash assistance ($37,010), which relatively few village residents receive, is lower....“You also have no drug-treatment programs, no juvenile delinquency program, we’re not clogging the court system with criminal cases, you’re not running programs for AIDS or teen pregnancy,” he [Mr. Szegedin, the village administrator] said. “I haven’t run the numbers, but I think it’s a wash.”

From Wikipedia:

The land for Kiryas Joel was purchased in 1977, and fourteen Satmar families settled there. By 2006, there were over 3,000...In 1990, there were 7,400 people in Kiryas Joel; in 2000, 13,100, nearly doubling the population. In 2005, the population had risen to 18,300, a rate of growth suggesting it will double again in the ten years between 2000 and 2010.
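As a sanity check on the "double again in ten years" claim, the 2000 and 2005 population figures quoted above imply an annual growth rate of roughly 7 percent and a doubling time of a little over ten years:

```python
import math

# Population figures from the Wikipedia excerpt above.
p2000, p2005 = 13100, 18300

r = (p2005 / p2000) ** (1 / 5) - 1        # implied annual growth rate
doubling = math.log(2) / math.log(1 + r)  # years for the population to double

print(round(r * 100, 1))   # ≈ 6.9 (percent per year)
print(round(doubling, 1))  # ≈ 10.4 (years)
```

So the quoted projection is consistent: at the 2000-2005 rate, the village doubles roughly every decade.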

Robin Hanson has argued that uploaded/emulated minds will establish a new Malthusian/Darwinian equilibrium in "IF UPLOADS COME FIRST: The crack of a future dawn" - an equilibrium in comparison to which our own economy will look like a delusive dreamtime of impossibly unfit and libertine behavior. The demographic transition will not last forever. But despite our own distaste for countless lives living at near-subsistence rather than our own extreme per-capita wealth (see the Repugnant Conclusion), those many lives will be happy ones (even amidst disaster).

So. Are the inhabitants of Kiryas Joel unhappy?

Guilt: Another Gift Nobody Wants

67 Yvain 31 March 2011 12:27AM

Evolutionary psychology has made impressive progress in understanding the origins of morality. Along with the many posts about these origins on Less Wrong I recommend Robert Wright's The Moral Animal for an excellent introduction to the subject.

Guilt does not naturally fall out of these explanations. One can imagine a mind design that although often behaving morally for the same reasons we do, sometimes decides a selfish approach is best and pursues that approach without compunction. In fact, this design would have advantages; it would remove a potentially crippling psychological burden, prevent loss of status from admission of wrongdoing, and allow more rational calculation of when moral actions are or are not advantageous. So why guilt?

In one of the few existing writings I could find on the subject, Tooby and Cosmides theorize that "guilt functions as an emotion mode specialized for recalibration of regulatory variables that control trade-offs in welfare between self and other."

If I understand their meaning, they are saying that when an action results in a bad outcome, guilt is a byproduct of updating your mental processes so that it doesn't happen again. In their example, if you don't share food with your sister, and your sister starves and becomes sick, your brain gives you a strong burst of negative emotion around the event so that you reconsider your decision not to share. It is generally a bad idea to disagree with Tooby and Cosmides, but this explanation doesn't satisfy me for several reasons.

First, guilt is just as associated with good outcomes as bad outcomes. If I kill my brother so I can inherit the throne, then even if everything goes according to plan and I become king, I may still feel guilt. But why should I recalibrate here? My original assumptions - that fratricide would be easy and useful - were entirely correct. But I am still likely to feel bad about it. In fact, some criminals report feeling "relieved" when caught, as if a negative outcome decreased their feelings of guilt instead of exacerbating them.

Second, guilt is not only an emotion, but an entire complex of behaviors. Our modern word self-flagellation comes from the old practice of literally whipping one's self out of feelings of guilt or unworthiness. We may not literally self-flagellate anymore, but when I feel guilty I am less likely to do activities I enjoy and more likely to deliberately make myself miserable.

Third, although guilt can be very private it has an undeniable social aspect. People have messaged me at 3 AM just to tell me how guilty they feel about something they did to someone I've never met; this sort of outpouring of emotion can even be therapeutic. The aforementioned self-flagellators would parade around town in their sackcloth and ashes, just in case anyone didn't know how guilty they felt. And we expect guilt in certain situations: a criminal who feels guilty about what ey has done may get a shorter sentence.

Fourth, guilt sometimes occurs even when a person has done nothing wrong. People who through no fault of their own are associated with disasters can nevertheless report "survivor's guilt" and feel like events were partly their fault. If this is a tool for recalibrating choices, it is a very bad one. This is not a knockdown argument - a lot of mental adaptations are very bad at what they do - but it should at least raise suspicion that there is another part to the puzzle besides recalibration.


Secure Your Beliefs

40 lukeprog 12 February 2011 04:53PM

When I was 12, my cousin Salina was 15. She was sitting in the back seat of a car with the rest of her family when a truck carrying concrete pipes came around the turn. The trucker had failed to secure his load properly, and the pipes broke loose. One of them smashed into Salina's head. My family has never wept as deeply as we did during the slideshow at her funeral.

The trucker didn't want to kill Salina. We can't condemn him for murder. Instead, we condemn him for negligence. We condemn him for failing to care enough for others' safety to properly secure his load. We give out the same condemnation to the aircraft safety inspector who skips important tests on his checklist because it's cold outside. That kind of negligence can kill people, and people who don't want their loved ones harmed have strong reasons to condemn such a careless attitude.

Social tools like praise and condemnation can change people's attitudes and desires. I was still a fundamentalist Christian when I went to college, but well-placed condemnation from people I respected changed my attitude toward gay marriage pretty quickly. Most humans care what their peers think of them. That's why public praise for those who promote a good level of safety, along with public condemnation for those who are negligent, can help save lives.

Failure to secure a truck load can be deadly. But failure to secure one's beliefs can be even worse.

Again and again, people who choose to trust intuition and anecdote instead of the replicated scientific evidence about vaccines have caused reductions in vaccination rates, which are then followed by deadly epidemics of easily preventable disease. Anti-vaccination activists are negligent with their beliefs. They fail to secure their beliefs in an obvious and clear-cut case. People who don't want their loved ones to catch polio or diphtheria from a neighbor who didn't vaccinate their children have reasons to condemn - and thereby decrease - such negligence.

People often say of false or delusional beliefs: "What's the harm?" The answer is "lots." WhatsTheHarm.net collects incidents of harm from obvious products of epistemic negligence like AIDS denial, homeopathy, exorcism, and faith healing. As of today they've counted up more than 300,000 injuries, 300,000 deaths, and $2 billion in economic damages due to intellectual recklessness. Very few of those harmed by such epistemic negligence have been listed by WhatsTheHarm.net, so the problem is actually much, much worse than that.

Failure to secure one's beliefs can lead to misery on a massive scale. That is why your rationality is my business.

Folk grammar and morality

20 Emile 17 December 2010 09:20PM

If you've spent any time with foreigners learning your language, you may have been in conversations like this:

Mei: I'm a bit confused ... what's the difference between "even though" and "although"?

Albert: Um, I think they're mostly equivalent, but "even though" is a bit more emphatic.

Barry: Are you sure? I remember something about one being for positives, and the other for negatives. For example, let's see, these sentences sound a bit weird: "He refused to give me the slightest clue, although I begged on my knees", and "Although his car broke down on the first mile, he still won the rally".

People can't automatically state the rules underlying language, even though they follow them perfectly in their daily speech. I became especially aware of this when teaching French to Chinese students: I frequently had to revise my explanations, or just say "sorry, I don't know what the rule is for this case, you'll just have to memorize it". You learn separately how to speak the language and how to state its rules.

Morality is similar: we feel what's wrong and what's right, but may not be able to formulate the underlying rules. And when we do, we're likely to get it wrong the first time. For example you might say:

It has been suggested that animals have less subjective experience than people. For example, it would be possible to have an animal that counts as half a human for the purposes of morality.

But unlike with grammar, people don't always agree on right and wrong: if Alfred unintentionally harms Barry, Barry is more likely to think that what Alfred did was morally wrong, even if both started off with similar moral intuitions. So if you come up with an explanation and insist it's the definition of morality, you can't be "proven wrong" nearly as easily as with grammar. You may even insist your explanation is true, and adjust your behavior accordingly, as some religious fanatics seem to do ("what is moral is what God said" being a quite common rule people come up with to explain morality).

So: beware of your own explanations. Morality is a complex topic; you're even more likely to shoot yourself in the foot than with grammar, and even less likely to realize that you're wrong.

(edit) Related posts by Eliezer: Fake Justification, Fake Selfishness, Fake Morality.
