What is Metaethics?

31 lukeprog 25 April 2011 04:53PM

Part of the sequence: No-Nonsense Metaethics

When I say I think I can solve (some of) metaethics, what exactly is it that I think I can solve?

First, we must distinguish the study of ethics or morality from the anthropology of moral belief and practice. The first one asks: "What is right?" The second one asks: "What do people think is right?" Of course, one can inform the other, but it's important not to confuse the two. One can correctly say that different cultures have different 'morals' in that they have different moral beliefs and practices, but this may not answer the question of whether or not they are behaving in morally right ways.

My focus is metaethics, so I'll discuss the anthropology of moral belief and practice only when it is relevant for making points about metaethics.

So what is metaethics? Many people break the field of ethics into three sub-fields: applied ethics, normative ethics, and metaethics.

Applied ethics: Is abortion morally right? How should we treat animals? What political and economic systems are most moral? What are the moral responsibilities of businesses? How should doctors respond to complex and uncertain situations? When is lying acceptable? What kinds of sex are right or wrong? Is euthanasia acceptable?

Normative ethics: What moral principles should we use in order to decide how to treat animals, when lying is acceptable, and so on? Is morality decided by what produces the greatest good for the greatest number? Is it decided by a list of unbreakable rules? Is it decided by a list of character virtues? Is it decided by a hypothetical social contract drafted under ideal circumstances?

Metaethics: What does moral language mean? Do moral facts exist? If so, what are they like, and are they reducible to natural facts? How can we know whether moral judgments are true or false? Is there a connection between making a moral judgment and being motivated to abide by it? Are moral judgments objective or subjective, relative or absolute? Does it make sense to talk about moral progress?

continue reading »

Avoid inflationary use of terms

74 lsparrish 30 May 2012 08:31PM

Inflationary terms! You see them everywhere. And for those who actually know and care about the subject matter they can be very frustrating. These terms are notorious for being used in contexts where:

  1. They are only loosely applicable at best.
  2. There exists a better word that is more specific.
  3. The topic has a far bias.

Some examples:

  • Rational
  • Evolution
  • Singularity
  • Emergent
  • Nanotech
  • Cryogenics
  • Faith

The problem is not that these words are meaningless in their original form, nor that you shouldn't ever use them. The problem is that they often get used in sloppy ways that make them much less meaningful -- by which I mean less useful for keeping focus on the topic and understanding what the person is really talking about.

For example, terms like Nanotech (or worse, "Nanobot") do apply in a certain loose sense to several kinds of chemical and biological innovations that are currently in vogue. Nonetheless, each time the term is used for these things, it becomes that much harder to tell whether a speaker means Drexlerian Mechanosynthesis. Hint: If you get your grant money by convincing someone you are working on one thing while you are really working on something completely different, that's fraud.

Similarly, Cryogenics is the science of keeping things really cold, and Cryonics is of course one application of it. But saying "Cryogenics" when you mean Cryonics specifically is an incredibly harmful practice that actual Cryonicists generally avoid. Most people who work in Cryogenics have nothing to do with Cryonics, and this kind of confusion in popular culture has apparently engendered animosity toward Cryonics among Cryogenics specialists.

Recently I fell prey to something like this with respect to the term "Rational". I wanted to know in general terms what the best programming language for a newbie would be and why. I wanted some in-depth analysis from a group I trust to provide it. (And I wasn't disappointed -- we have some very knowledgeable programmers whose opinions were most helpful to me.) However, the reaction of some lesswrongers to the title I initially chose for the post was distinctly negative. The title was "Most rational programming language?"

After thinking about it for a while I realized what the problem was: this way of using the term, despite being more or less valid, makes the term less meaningful in the long run. And I don't want to be the person who makes Rational a less meaningful word. Nobody here wants that to happen. Thus it would have been better to use a term such as "Best" or "Optimal" instead.

Another example that comes to mind is when people (usually outsiders) refer to Transhumanism, Bayesianism, the Singularity, or even skepticism, as a "Faith" or "Belief". Well yeah, trivially, if you are willing to stretch that word to its broadest possible meaning you can feel free to apply it to the likes of us. But... for crying out loud! What meaning does the word have if Faith is something absolutely everyone has? We're really referring to something like "Confidence" here.

Then there's Evolution. Is Transhumanism really about the next stage in human Evolution? Perhaps in a certain loose sense it is -- but let's not lose sight of the mutilation of the language (and consequent increase in noise relative to signal) that occurs when you say such a thing. Human Evolution is an existing scientific specialty with absolutely zilch to do with cybernetic body modification or genetic engineering, and everything to do with the effects of natural selection and mutation on the development of humans in the past.

Co-opting terms isn't always bad. If you are brand-new to a topic, seeing an analogy to something with which you are already familiar may reduce the inferential distance and help you click the idea in your brain. But this gets more hazardous the closer the terms actually are in meaning. Distant terms are safer  -- when I say "Avoid inflationary use of terms" you can instantly see that I'm definitely not talking about money, nor rubber objects with compressed air inside of them, but about words and phrases.

On the other hand, with pairs such as Rational versus Optimal, we are taking two superficially similar words and blurring them in such a way that one cannot meaningfully talk about either without accidentally importing baggage from the other. Rational is best used in contrast with clear examples of irrationality -- cognitive biases, say, or drug addiction -- and is a rather unabashedly idealistic term. Optimal, on the other hand, doesn't require a specific contrast, because pretty much everything is suboptimal by default to some degree or another -- optimizing is understood as an ongoing and very relative process.

To sum up: Avoid making words cheaper and less effective for their specialized tasks. Don't use them for things where a better and more appropriate term exists. As your brain gets used to an idea, be prepared to discard old terms you have co-opted from other domains that were really just useful placeholders to get you started. Specialized jargon exists for a reason!

Rational Toothpaste: A Case Study

68 badger 31 May 2012 12:31AM

Inspired by Konkvistador's comment

Posts titled "Rational ___-ing" or "A Rational Approach to ____" induce groans among a sizeable contingent here, myself included. However, inflationary use of "rational" and its transformation into an applause light is only one part of the problem. These posts tend to revolve around specific answers, rather than the process of how to find answers. I claim a post on "rational toothpaste buying" could be on-topic and useful, if correctly written to illustrate determining goals, assessing tradeoffs, and implementing the final conclusions. A post detailing the pros and cons of various toothpaste brands is for a dentistry or personal hygiene forum; a post about algorithms for how to determine the best brands or whether to do so at all is for a rationality forum. This post is my shot at showing what this would look like.

continue reading »

When None Dare Urge Restraint, pt. 2

56 Jay_Schweikert 30 May 2012 03:28PM

In the original When None Dare Urge Restraint post, Eliezer discusses the dangers of the "spiral of hate" that can develop when saying negative things about the Hated Enemy trumps saying accurate things. Specifically, he uses the example of how the 9/11 hijackers were widely criticized as "cowards," even though this vice in particular was surely not on their list. Over this past Memorial Day weekend, however, it seems like the exact mirror-image problem played out in nearly textbook form.

The trouble began when MSNBC host Chris Hayes noted* that he was uncomfortable with how people use the word "hero" to describe those who die in war -- in particular, because he thinks this sort of automatic valor attributed to the war dead makes it easier to justify future wars. And as you might expect, people went crazy in response, calling Hayes's comments "reprehensible and disgusting," something that "commie grad students would say," and that old chestnut, apparently offered without a hint of irony, "unAmerican." If you watch the video, you can tell that Hayes himself is really struggling to make the point, and by the end he definitely knew he was going to get in trouble, as he started backpedaling with a "but maybe I'm wrong about that." And of course, he apologized the very next day, basically stating that it was improper to have "opine[d] about the people who fight our wars, having never dodged a bullet or guarded a post or walked a mile in their boots."

This whole episode struck me as particularly frightening, mostly because Hayes wasn't even offering a criticism. Soldiers in the American military are, of course, an untouchable target, and I would hardly expect any attack on soldiers to be well received, no matter how grounded. But what genuinely surprised me in this case was that Hayes was merely saying "let's not automatically apply the single most valorizing word we have, because that might cause future wars, and thus future war deaths." But apparently anything less than maximum praise was not only incorrect, but offensive.

Of course, there's no shortage of rationality failures in political discourse, and I'm obviously not intending this post as a political statement about any particular war, policy, candidate, etc. But I think this example is worth mentioning, for two main reasons. First, it's just such a textbook example of the exact sort of problem discussed in Eliezer's original post, in a purer form than I can recall seeing since 9/11 itself. I don't imagine many LW members need convincing in this regard, but I do think there's value in being mindful of this sort of problem on the national stage, even if we're not going to start arguing politics ourselves.

But second, I think this episode says something not just about nationalism, but about how people approach death more generally. Of course, we're all familiar with afterlifism/"they're-in-a-better-place"-style rationalizations of death, but labeling a death as "heroic" can be a similar sort of rationalization. If a death is "heroic," then there's at least some kind of silver lining, some sense of justification, if only partial justification. The movie might not be happy, but it can still go on, and there's at least a chance to play inspiring music. So there's an obvious temptation to label death as "heroic" as much as possible -- I'm reminded of how people tried to call the 9/11 victims "heroes," apparently because they had the great courage to work in buildings that were targeted in a terrorist attack.

If a death is just a tragedy, however, you're left with a more painful situation. You have to acknowledge that yes, really, the world isn't fair, and yes, really, thousands of people -- even the Good Guy's soldiers! -- might be dying for no good reason at all. And even for those who don't really believe in an afterlife, facing death on such a large scale without the "heroic" modifier might just be too painful. The obvious problem, of course -- and Hayes's original point -- is that this sort of death-anesthetic makes it all too easy to numb yourself to more death. If you really care about the problem, you have to face the sheer tragedy of it. Sometimes, all you can say is "we shall have to work faster." And I think that lesson's as appropriate on Memorial Day as any other.

*I apologize that this clip is inserted into a rather low-brow attack video. At the time of posting it was the only link on Youtube I could find, and I wanted something accessible.

Some potential dangers of rationality training

18 lukeprog 21 January 2012 04:50AM

Taylor & Brown (1988) argued that several kinds of irrationality are good for you — for example that overconfidence, including the planning fallacy, protects you from depression and gives you greater motivation because your expectancy of success is higher.

One can imagine other examples. Perhaps the sunk cost fallacy is useful because without it you're prone to switch projects as soon as a higher-value project comes along, leaving an ever-growing heap of abandoned projects behind you.

This may be one reason that many people's lives aren't much improved by rationality training. Perhaps the benefits of having more accurate models of the world and making better decisions are swamped by the negative effects of losing out on the benefits of overconfidence and the sunk costs fallacy and other "positive illusions." Yes, I read "Less Wrong Probably Doesn't Cause Akrasia," but there were too many methodological weaknesses to give that study much weight, I think. 

Others have argued against Taylor & Brown's conclusion, and at least one recent study suggests that biases are not inherently positive or negative for mental health and motivation because the effect depends on the context in which they occur. There seems to be no expert consensus on the matter.

(Inspired by a conversation with Louie.)

Fundamentals of kicking anthropic butt

18 Manfred 26 March 2012 06:43AM


[Image: Galactus]

Introduction

An anthropic problem is one where the very fact of your existence tells you something. "I woke up this morning, therefore the earth did not get eaten by Galactus while I slumbered." Applying your existence to certainties like that is simple - if an event would have stopped you from existing, your existence tells you that it hasn't happened. If something would only kill you 99% of the time, though, you have to use probability instead of deductive logic. Usually, it's pretty clear what to do. You simply apply Bayes' rule: the probability of the world getting eaten by Galactus last night is equal to the prior probability of Galactus-consumption, times the probability of me waking up given that the world got eaten by Galactus, divided by the probability that I wake up at all. More exotic situations also show up under the umbrella of "anthropics," such as getting duplicated or forgetting which person you are. Even if you've been duplicated, you can still assign probabilities. If there are a hundred copies of you in a hundred-room hotel and you don't know which one you are, don't bet too much that you're in room number 68.
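The Bayes'-rule update described above can be sketched numerically. The specific probabilities below are invented purely for illustration (the post gives no actual numbers):

```python
def posterior(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Bayes' rule: P(hypothesis | evidence)."""
    p_evidence = (p_evidence_given_h * prior
                  + p_evidence_given_not_h * (1 - prior))
    return p_evidence_given_h * prior / p_evidence

# Hypothesis: the earth got eaten by Galactus last night.
# Evidence: I woke up this morning.
# Suppose (hypothetically) consumption kills me 99% of the time,
# so I wake up with probability 0.01 given consumption, and 0.99 otherwise.
p = posterior(prior=1e-6, p_evidence_given_h=0.01, p_evidence_given_not_h=0.99)
# Waking up makes Galactus-consumption roughly 99 times less likely than the prior.
```

If the evidence is certain to be absent given the hypothesis (a consumption that kills you every time), the posterior drops to exactly zero, recovering the deductive case from the paragraph above.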

But this last sort of problem is harder, since it's not just a straightforward application of Bayes' rule. You have to determine the probability just from the information in the problem. Thinking in terms of information and symmetries is a useful problem-solving tool for getting probabilities in anthropic problems, which are simple enough to use it and confusing enough to need it. So first we'll cover what I mean by thinking in terms of information, and then we'll use this to solve a confusing-type anthropic problem.

continue reading »

Making Beliefs Pay Rent (in Anticipated Experiences)

110 Eliezer_Yudkowsky 28 July 2007 10:59PM

Thus begins the ancient parable:

If a tree falls in a forest and no one hears it, does it make a sound? One says, "Yes it does, for it makes vibrations in the air." Another says, "No it does not, for there is no auditory processing in any brain."

Suppose that, after the tree falls, the two walk into the forest together. Will one expect to see the tree fallen to the right, and the other expect to see the tree fallen to the left? Suppose that before the tree falls, the two leave a sound recorder next to the tree. Would one, playing back the recorder, expect to hear something different from the other? Suppose they attach an electroencephalograph to any brain in the world; would one expect to see a different trace than the other? Though the two argue, one saying "No," and the other saying "Yes," they do not anticipate any different experiences.  The two think they have different models of the world, but they have no difference with respect to what they expect will happen to them.

continue reading »

Attention control is critical for changing/increasing/altering motivation

174 kalla724 11 April 2012 12:48AM

I’ve just been reading Luke’s “Crash Course in the Neuroscience of Human Motivation.” It is a useful text, although there are a few technical errors and a few bits of outdated information (see [1], updated information about one particular quibble in [2] and [3]).

There is one significant missing piece, however, which is of critical importance for our subject matter here on LW: the effect of attention on plasticity, including the plasticity of motivation. Since I don’t see any other texts addressing it directly (certainly not from a neuroscientific perspective), let’s cover the main idea here.

Summary for impatient readers: focus of attention physically determines which synapses in your brain get stronger, and which areas of your cortex physically grow in size. The implications provide direct guidance for altering behaviors and motivational patterns, and the mechanism is already exploited extensively for this purpose: many benefits of the Cognitive-Behavioral Therapy approach, for instance, rely on it.

continue reading »

References & Resources for LessWrong

90 XiXiDu 10 October 2010 02:54PM

A list of references and resources for LW

Updated: 2011-05-24

  • F = Free
  • E = Easy (adequate for a low educational background)
  • M = Memetic Hazard (controversial ideas or works of fiction)

Summary

Do not flinch: most of LessWrong can be read and understood by people with less than a secondary-school education. (And Khan Academy followed by BetterExplained, plus the help of Google and Wikipedia, ought to be enough to let anyone read anything directed at the scientifically literate.) Most of these references aren't prerequisites, and only a small fraction are pertinent to any particular post on LessWrong. Do not be intimidated; just go ahead and start reading the Sequences if all this sounds too long. It's much easier to understand than this list makes it look.

Nevertheless, as it says in the Twelve Virtues of Rationality, scholarship is a virtue, and in particular:

It is especially important to eat math and science which impinges upon rationality: Evolutionary psychology, heuristics and biases, social psychology, probability theory, decision theory.

continue reading »

Trying to Try

42 Eliezer_Yudkowsky 01 October 2008 08:58AM

"No!  Try not!  Do, or do not.  There is no try."
        —Yoda

Years ago, I thought this was yet another example of Deep Wisdom that is actually quite stupid.  SUCCEED is not a primitive action.  You can't just decide to win by choosing hard enough.  There is never a plan that works with probability 1.

But Yoda was wiser than I first realized.

The first elementary technique of epistemology—it's not deep, but it's cheap—is to distinguish the quotation from the referent.  Talking about snow is not the same as talking about "snow".  When I use the word "snow", without quotes, I mean to talk about snow; and when I use the word ""snow"", with quotes, I mean to talk about the word "snow".  You have to enter a special mode, the quotation mode, to talk about your beliefs.  By default, we just talk about reality.

If someone says, "I'm going to flip that switch", then by default, they mean they're going to try to flip the switch.  They're going to build a plan that promises to lead, by the consequences of its actions, to the goal-state of a flipped switch; and then execute that plan.

No plan succeeds with infinite certainty.  So by default, when you talk about setting out to achieve a goal, you do not imply that your plan exactly and perfectly leads to only that possibility.  But when you say, "I'm going to flip that switch", you are trying only to flip the switch—not trying to achieve a 97.2% probability of flipping the switch.

So what does it mean when someone says, "I'm going to try to flip that switch?"

continue reading »
