Part of the sequence: No-Nonsense Metaethics

A few months ago, I predicted that we could solve metaethics in 15 years. To most people, that was outrageously optimistic. But I've updated since then. I think much of metaethics can be solved now (depending on where you draw the boundary around the term 'metaethics'). My upcoming sequence 'No-Nonsense Metaethics' will solve the parts that can be solved, and make headway on the parts that can't yet be solved. Solving the easier problems of metaethics will give us a clear and stable platform from which to tackle the hard questions of morality.

Metaethics has been my target for a while now, but first I had to explain the neuroscience of pleasure and desire, and how to use intuitions for philosophy.

Luckily, Eliezer laid most of the groundwork when he explained couldness, terminal and instrumental values, the complexity of human desire and happiness, how to dissolve philosophical problems, how to taboo words and replace them with their substance, how to avoid definitional disputes, how to carve reality at its joints with our words, how an algorithm feels from the inside, the mind projection fallacy, how probability is in the mind, reductionism, determinism, free will, evolutionary psychology, how to grasp slippery things, and what you would do without morality.

Of course, Eliezer wrote his own metaethics sequence. Eliezer and I seem to have similar views on morality, but I'll be approaching the subject from a different angle, I'll be phrasing my solution differently, and I'll be covering a different spread of topics.

Why do I think much of metaethics can be solved now? We have enormous resources not available just a few years ago. The neuroscience of pleasure and desire didn't exist two decades ago. (Well, we thought dopamine was 'the pleasure chemical', but we were wrong.) Detailed models of reductionistic metaethics weren't developed until the 1980s and '90s (by Peter Railton and Frank Jackson). Reductionism has been around for a while, but few philosophers relentlessly play Rationalist's Taboo. Eliezer didn't write How an Algorithm Feels from the Inside until 2008.

Our methods will be familiar ones, already used to dissolve problems ranging from free will to disease. We will play Taboo with our terms, reducing philosophical questions to scientific ones. Then we will examine the cognitive algorithms that make it feel like open questions remain.

Along the way, we will solve or dissolve the traditional problems of metaethics: moral epistemology, the role of moral intuition, the is-ought gap, matters of moral psychology, the open question argument, moral realism vs. moral anti-realism, moral cognitivism vs. non-cognitivism, and more.

You might respond, "Sure, Luke, we can do the reduce-to-algorithm thing with free will or disease, but morality is different. Morality is fundamentally normative. You can't just dissolve moral questions with Taboo-playing and reductionism and cognitive science."

Well, we're going to examine the cognitive algorithms that generate that intuition, too.

And at the end, we will see what this all means for the problem of Friendly AI.

I must note that I didn't exactly invent the position I'll be defending. After I shared my views on metaethics with scientifically minded people in private conversation, many of them said something like "Yeah, that's basically what I think about metaethics, I've just never thought it through in so much detail or cited so much of the relevant science [e.g. recent work in neuroeconomics and the science of intuition]."

But for convenience I do need to invent a name for my theory of metaethics. I call it pluralistic moral reductionism.

Next post: What is Metaethics?

Comments

Back when Eliezer was writing his metaethics sequence, it would have been great to know where he was going, i.e., if he had posted ahead of time a one-paragraph technical summary of the position he set out to explain. Can you post such a summary of your position now?

Hmmmm. What do other people think of this idea?

I suspect one reason Eliezer did not do this is that when you make a long list of claims without any justification for them, it sounds silly and people don't pay attention to the rest of the sequence. But if you had first stepped them through the entire argument, they would have found no place at which they could really disagree. That's a concern, anyway.

I endorse the summary idea. It will help me decide whether and how carefully to read your posts.

I would like to know what you think; depending on what it is, I may or may not be interested in the details of why you think it. For example, I'd rather not spend lots of time reading detailed arguments for a position I already find obvious, in a subject (like metaethics) that I have comparatively little interest in. On the other hand, if your position is something I find counterintuitive, then I may be interested in reading your arguments carefully, to see if I need to update my beliefs.

This is Less Wrong, and you have 5-digit karma. We're not going to ignore your arguments because your conclusion sounds silly.

Furthermore, you don't necessarily have to post the summary prominently, if you really don't want to. You could bury it in these comments right here, for example.

My reaction to this idea depends a lot on how the sequence gets written.

If at every step along the way you appear to be heading towards a known goal, I'm happy to go along for the ride.

If you start to sound like you're wandering, or have gotten lost in the weeds, or make key assertions I reject and don't dispose of my objections, or I otherwise lose faith that you know where you're going, then having a roadmap becomes important.

Also, if your up-front list of claims turns out to be a bunch of stuff I think is unremarkable, I'll be less interested in reading the posts.

OTOH, if I follow you through the entire argument for an endpoint I think is unremarkable, I'll still be less interested, but it would be too late to act on that basis.

I would vote against the summary idea. Just in general, I like it better if a writer starts off with observations, builds their way up with chains of reasoning, and gives the reader everything they need to draw the author's conclusion, as opposed to telling the reader what position they hold and then providing arguments for it. In terms of rationality, it's probably better to build to your conclusion.

In addition, if you are proposing anything controversial, posting a summary will spark debates before you have given the requisite background knowledge.

Agreed on all counts. Plus it would just feel like a spoiler, knowing that there was supposed to be a lot building up to it.

(Maybe, to get the best of both options, Luke could post the summary in Discussion, marking it as containing philospoilers; that way people can read through the sequence unspoiled if they prefer, while those who want to see a summary in advance can do so, and discuss and inquire about it, with the understanding that "That question/argument will be answered/addressed in the sequence" will always be an acceptable response.)

I am more motivated to read the rest of your sequence if the summary sounds silly than if I can easily see the arguments myself.

Agreed. I know from experience how hard it is to convince someone to change their position on metaethics. The reason is that if you post any specific example or claim that people disagree with, they will look for reasons to reject your metaethics on that basis alone. Posting only abstract principles prevents this. It's the same dynamic as "politics is the mind-killer", or any other topic that is both complex and something people feel strongly about (ideal breeding grounds for motivated reasoning).

Nonetheless, I would be very interested in seeing such a list.

I disagree with the grandparent and endorse not giving a summary.

I vote for writing a summary, and including it with the last post of the sequence. That way, extra-skeptical people can wait until the sequence has been posted in its entirety, then decide whether to read it based on the summary, without losing much expected value.

There will without a doubt at least be a summary toward the end of the sequence.

I think in practice what would happen is the skeptical people would disagree on each post, and then when presented with the summary would be compelled to disagree with it in order to remain consistent.

You're right; that sounds like a likely failure mode, unless skeptics could proactively choose to hide the sequence until they could read the summary, which the current LW codebase doesn't support.

I suspect one reason Eliezer did not do this is that when you make a long list of claims without any justification for them, it sounds silly and people don't pay attention to the rest of the sequence.

Why doesn't that apply to abstracts?

That's sort of like reading the end of a novel before you buy it. If you do include a summary, please announce what you're doing and make it something we can skip.

Novels are meant to be entertaining. Luke's metaethics post(s) would be meant to be useful, so the analogy isn't valid. Even so, novels frequently have a summary on the inside flap of the dust cover. I hope to see the summary.

I suspect one reason Eliezer did not do this is that when you make a long list of claims without any justification for them, it sounds silly and people don't pay attention to the rest of the sequence.

Yeah, I am still waiting for someone to thoroughly cite all the relevant science to back up AI going FOOM.

Oh, and...

Yes, the next post in this sequence will be "What the hell is metaethics?" :)


More to the point: what are the problems of metaethics?

It's a pretty bold claim that you can solve metaethics!

I think one important thing to make explicit up front is what you mean by 'solving', and how we can see that your particular solution is the correct one. I mean, it might be possible to show that a system is consistent (which is non-trivial...), maybe even that it is practical, but apart from that, I have a hard time seeing metaethics as a problem with a definite solution.

...and how we can see that your particular solution is the correct one.

I think this is a very important point and would like to see it being addressed.

After I shared my views on metaethics with scientifically minded people in private conversation, many of them said something like "Yeah, that's basically what I think about metaethics"

You can add me to the list. Except for the part about not citing the science. Your claims appear to be not-insane, a rather unusual feature when people are talking about this subject.

The only thing I have consistently rejected on LW is the metaethics. I find that a much simpler Friedmanite explanation of agents pursuing their separate interests fits my experience.

For example, I would pay a significant amount of money to preserve the life of a friend, and practically zero money to preserve the life of an unknown stranger. I would spend more money to preserve the life of a successful scientist or entrepreneur, than I would to preserve the life of a third world subsistence farmer.

This is simply because I value those persons differently. I recognize that some people have an altruistic terminal value of something like:

"See as many agents as possible having their preferences fulfilled."

... and I can see how the metaethics sequence and discussions are necessary for reducing that terminal value to a scientific, physical metric by which to judge possible futures (especially if one wants to use an AI). But, since I don't share that terminal value, I'm consistently left underwhelmed by the metaethics discussions.

That said, this looks like an ambitious sequence. Good luck!

You are talking about ethics, not metaethics.

I am confused about metaethics, and I anticipate becoming less confused as I read this planned sequence.

So thanks, man!

I think metaethics can be solved now. This solution will be the topic of my upcoming sequence 'No-Nonsense Metaethics.'

Good luck. I think you're going to need it.

I am looking forward to reading the sequence! Metaethics is one of the areas where I'm foggy; a lot of the stuff I read confuses me immensely. A reductionist explanation sounds very helpful.

Any plans to turn this sequence (or parts of it, or things like it) into a philosophy journal article?

It's too long. It would have to be a monograph.

Any plans for that? :P

Yes, possibly.

Looking forward to reading this. :)

In "we thought dopamine was 'the pleasure chemical', but we were wrong" the link is no longer pointing to a topic-relevant page.

The dopamine - 'pleasure chemical' link doesn't seem to work. Could you fix it?

I have a bad feeling about this.

Can you unpack the feeling to get more detail about what you intuit the problem is?

As one self-contained point (which doesn't bear most of my intuition, and isn't strong in itself), I don't see how finer details about the way the brain actually works (e.g. the roles of pleasure and desire) can be important for this question. The fact that this is apparently going to be important in the planned sequence tells me that it'll probably go in the wrong direction. Similarly for the emphasis on science, where the sheer load of empirical facts can distract from the way they should be interpreted.

Just as a preview: I don't think the neuroscience of pleasure and desire is crucial for metaethics either, but it is useful for illustrating what possible moral reductions could mean. It can bring some clarity to our thinking about such matters. But yes, of course it matters hugely how one interprets the cognitive science relevant to metaethics.

Wait... did you switch from Desire Utilitarianism? Or is Desirism within pluralistic moral reductionism? Or should I just wait for "What the hell is metaethics?"

Desirism fits within pluralistic moral reductionism. It is one possible reduction of moral terms, but there are others. But yeah, I'm basically gonna ask you to wait for the sequence. :)

That's fine -- your answer was enough to get a rough sketch of a Venn diagram :)


Somehow I have managed to live for 50 years without realizing that metaethics needs solving :)