
In response to Shut Up and Divide?
Comment author: PhilGoetz 09 February 2010 08:50:04PM 33 points

how much of what we think our values are, is actually the result of not thinking things through, and not realizing the implications and symmetries that exist?

A very, very large portion.

When I was a child, I read a tract published by Inter-Varsity Press called "The Salvation of Zachary Baumkletterer". It's a story about a Christian who tries to actually live according to Christian virtues. Eventually, he concludes that he can't; in a world in which so many people are starving and suffering, he can't justify spending even the bare minimum of food and money on himself that would be necessary to keep him alive.

It troubled me for years, even after I gave up religion. It's stressful living in America when you realize that every time you get your hair cut, or go to a movie, or drink a Starbucks latte, you're killing someone. (It's even more stressful now that I can actually afford to do these things regularly.)

You can rationalize that allowing yourself little luxuries will enable you to do enough more good to make up for the lives you could have saved. (Unlikely; the best you can do is buy yourself "offsets"; but you'd usually save more lives with more self-denial.) You can rationalize that saving lives today inevitably leads to losing more lives in the future. (This carried me for a long time.) But ultimately, the only way I find to cope is not caring.

Recently, Michael Vassar told me I was one of the nicest people he knows. And yet I know that every day, I make decisions that would horrify almost everyone in America with their callousness. Other people act the same way; they just avoid making the decisions, by not thinking about the consequences of their actions.

I'm not a nice person inside, by any stretch of the imagination. I just have less of a gap between how nice my morals tell me to be and how nice I act. This gap is so large in most people that, although I have morals that are "worse" than those of everyone around me, I act "nicer" than most of them by trying to follow mine.

Comment author: Toby_Ord 10 February 2010 11:56:23AM 11 points


It's not actually that hard to make a commitment to give away a large fraction of your income. I've done it, my wife has done it, several of my friends have done it etc. Even for yourself, the benefits of peace of mind and lack of cognitive dissonance will be worth the price, and by my calculations you can make the benefits for others at least 10,000 times as big as the costs for yourself. The trick is to do some big thinking and decision making about how to live very rarely (say once a year) then limit your salary through regular giving. That way you don't have to agonise at the hairdresser's etc, you just live within your reduced means. Check out my site on this, http://www.givingwhatwecan.org -- if you haven't already.

Comment author: timtyler 06 February 2010 05:41:07PM 3 points

The conjunction fallacy is a subset of Occam's razor? Hmm. Yes - I had never thought of it like that before.

Comment author: Toby_Ord 07 February 2010 11:20:07AM 2 points

I didn't watch the video, but I don't see how that could be true. Occam's razor is about complexity, while the conjunction fallacy is about logical strength.

Sure, 'P & Q' is more complex than 'P', but 'P' is simpler than '(P or ~Q)' despite being stronger in the same way ('P' is equivalent to '(P or ~Q) & (P or Q)').
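Both halves of this claim can be checked mechanically with a truth table; a minimal sketch (the helper names here are mine, nothing standard):

```python
from itertools import product

# All four truth assignments to (P, Q).
ASSIGNMENTS = list(product([False, True], repeat=2))

def equivalent(f, g):
    """Two formulas are equivalent iff they agree on every assignment."""
    return all(f(p, q) == g(p, q) for p, q in ASSIGNMENTS)

def entails(f, g):
    """f entails g iff g is true on every assignment where f is true."""
    return all(g(p, q) for p, q in ASSIGNMENTS if f(p, q))

P = lambda p, q: p
P_or_notQ = lambda p, q: p or not q

# P is equivalent to (P or ~Q) & (P or Q), despite the extra syntax.
assert equivalent(P, lambda p, q: (p or not q) and (p or q))

# And P is strictly stronger than (P or ~Q): it entails it, not conversely.
assert entails(P, P_or_notQ)
assert not entails(P_or_notQ, P)
```

So syntactic complexity and logical strength can move in opposite directions, which is the point against assimilating the conjunction fallacy to Occam's razor.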

(Another way to see this is that violating Occam's razor does not make things fallacies).

Comment author: JamesAndrix 31 January 2010 12:24:49AM 25 points

Disagreeing positions don't add up just because they share a feature. On the contrary, if people offer lots of different contradictory reasons for a conclusion (even if each individual has consistent beliefs), it is a sign that they are rationalizing their position.

If two-thirds of experts support proposition G (one third because of reason A while rejecting B, and one third because of reason B while rejecting A), and the remaining third rejects both A and B, then a majority rejects A and a majority rejects B. G should not be treated as a reasonable majority view.
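A quick tally makes the arithmetic concrete; a sketch with a hypothetical nine-expert panel matching the split above (the panel is illustrative, not survey data):

```python
from collections import Counter

# Nine hypothetical experts: 3 support G because of A (rejecting B),
# 3 support G because of B (rejecting A), 3 reject A, B, and G.
experts = (
    [{"G": True,  "A": True,  "B": False}] * 3 +
    [{"G": True,  "A": False, "B": True}]  * 3 +
    [{"G": False, "A": False, "B": False}] * 3
)

def majority_accepts(claim):
    """True iff more experts accept the claim than reject it."""
    votes = Counter(e[claim] for e in experts)
    return votes[True] > votes[False]

print(majority_accepts("G"))  # True  -- 6 of 9 support the conclusion G
print(majority_accepts("A"))  # False -- yet 6 of 9 reject reason A
print(majority_accepts("B"))  # False -- and 6 of 9 reject reason B
```

The conclusion commands a majority while every reason offered for it is rejected by a majority, which is the pattern being flagged as rationalization.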

This should be clear if A is the Koran and B is the Bible.

If we're going to add up expert views, we need to add up what experts consider important about a question, not features of their conclusions.

You shouldn't add up two experts if they would consider each other's arguments irrational. That's ignoring their expertise.

Comment author: Toby_Ord 31 January 2010 07:41:03PM 5 points

This certainly doesn't work in all cases:

There is a hidden object which is either green, red or blue. Three people have conflicting opinions about its colour, based on different pieces of reasoning. If you are the one who believes it is green, you have to add up the opponents who say not-green, despite the fact that there is no single not-green position (think of the symmetry -- otherwise everyone could have too great confidence). The same holds true if these are expert opinions.
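The symmetry argument can be put in numbers; a sketch with an illustrative 0.9 credence figure (my number, chosen only to make the point):

```python
# Three mutually exclusive hypotheses about the hidden object's colour.
colours = ["green", "red", "blue"]

# Suppose each believer could dismiss the two disagreeing experts (on the
# grounds that they share no single position) and keep 0.9 credence in
# their own colour.  Taken together, the three credences would be:
credence_if_dismissing = {c: 0.9 for c in colours}

total = sum(credence_if_dismissing.values())
print(round(total, 2))  # 2.7
# But the colours are mutually exclusive, so well-calibrated credences in
# them can sum to at most 1.  By symmetry, the "dismiss the heterogeneous
# opposition" move must leave at least some believers badly overconfident.
```

This is why the not-green votes have to be added up even though there is no single not-green position.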

The above example is basically as general as possible, so in order for your argument to work it will need to add specifics of some sort.

Also, the Koran/Bible case doesn't work. By symmetry, the Koran readers can say that they don't need to add up the Bible readers and the atheists, since they are heterogeneous, so they can keep their belief in the Koran...

Comment author: CarlShulman 31 January 2010 01:24:13PM 0 points

Agreed. Perhaps Toby or David Pearce can be persuaded.

Comment author: Toby_Ord 31 January 2010 07:27:58PM -1 points

I don't think I can be persuaded.

I have many good responses to the comments here, and I suppose I could sketch out some of the main arguments against anti-realism, but there are also many serious demands on my time and sadly this doesn't look like a productive discussion. There seems to be very little real interest in finding out more (with a couple of notable exceptions). Instead the focus is on how to justify what is already believed without finding out anything else about what the opponents are saying (which is particularly alarming given that many commenters are pointing out that they don't understand what the opponents are saying!).

Given all of this, I fear that writing a post would not be a good use of my time.

Comment author: CarlShulman 30 January 2010 11:59:53PM 13 points

Atheism doesn't get 80% support among philosophers, and most philosophers of religion reject it because of a selection effect where few wish to study what they believe to be non-subjects (just as normative and applied ethicists are more likely to reject anti-realism).

Comment author: Toby_Ord 31 January 2010 06:13:49PM 4 points

You are correct that it is reasonable to assign high confidence to atheism even if it doesn't have 80% support, but we must be very careful here. Atheism is presumably the strongest example of such a claim here on Less Wrong (i.e. one for which you can tell a great story about why so many intelligent people would disagree, and so hold high confidence in the face of that disagreement). However, this does not mean that we can say that any other given view is just like atheism in this respect and thus hold beliefs in the face of expert disagreement; that would be far too convenient.

Comment author: Roko 30 January 2010 08:15:08PM 11 points

Toby, I spent a while looking into the meta-ethical debates about realism. When I thought moral realism was a likely option on the table, I meant:

Strong Moral Realism: All (or perhaps just almost all) beings, human, alien or AI, when given sufficient computing power and the ability to learn science and get an accurate map-territory distinction, will agree on what physical state the universe ought to be transformed into, and therefore they will assist you in transforming it into this state.

But modern philosophers who call themselves "realists" don't mean anything nearly this strong. They mean that there are moral "facts". But what use is that if the paperclipper agrees that it is a "moral fact" that human rights ought to be respected, yet then goes on to say it has no desire to act according to the prescriptions of moral facts, and the moral facts can't somehow revoke it?

The force of "scientific facts" is that they constrain the world. If an alien wants to get from Andromeda to here, it has to take at least 2.5 million years; the physical fact of the finite speed of light literally stops the alien from getting here sooner, whether it likes it or not.
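The constraint is just arithmetic over the distance; a sketch (the 2.5-million-light-year Andromeda distance is the commonly cited round figure):

```python
ANDROMEDA_DISTANCE_LY = 2.5e6  # approximate distance in light-years

def min_travel_time_years(fraction_of_c):
    """Minimum travel time, in years, at a given fraction of light speed."""
    assert 0 < fraction_of_c <= 1, "nothing travels faster than light"
    return ANDROMEDA_DISTANCE_LY / fraction_of_c

print(min_travel_time_years(1.0))  # 2500000.0 -- even at c, 2.5 million years
print(min_travel_time_years(0.5))  # 5000000.0 -- slower travel only adds time
```

No preference of the alien's enters the calculation, which is exactly the sense in which the fact constrains it.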

The 56.3/27.7% split on philpapers seems to me to be an argument about whether you should be allowed to attach the word "fact" to your preferences, kind of as a shiny badge of merit, without actually disagreeing on any physical prediction about the world. The debate between weak moral realists and antirealists sounds like the debate where two people ask "if a tree falls in the forest, does it really make a sound?" - they're not arguing about anything substantive.

So, I ask, how many philosophers are strong moral realists, in the sense I defined?

EDIT: After seeing Carl's comment, it seems likely to me that there probably are a bunch of theists who would, in fact, support the strong moral realism position; but they're clowns, so who cares.

Comment author: Toby_Ord 30 January 2010 10:15:09PM 1 point

Roko, you make a good point that it can be quite murky just what realism and anti-realism mean (in ethics or in anything else). However, I don't agree with what you write after that. Your Strong Moral Realism is a claim that is outside the domain of philosophy, as it is an empirical claim in the domain of exo-biology or exo-sociology or something. No matter what the truth of a meta-ethical claim, smart entities might refuse to believe it (the same goes for other philosophical claims or mathematical claims).

Pick your favourite philosophical claim. I'm sure there are very smart possible entities that don't believe this and very smart ones that do. There are probably also very smart entities without the concepts needed to consider it.

I understand why you introduced Strong Moral Realism: you want to be able to see why the truth of realism would matter and so you came up with truth conditions. However, reducing a philosophical claim to an empirical one never quite captures it.

For what it's worth, I think that the empirical claim Strong Moral Realism is false, but I wouldn't be surprised if there were considerable agreement among radically different entities on how to transform the world.

Comment author: CarlShulman 30 January 2010 08:59:40PM 32 points

Among target faculty listing meta-ethics as their area of study moral realism's lead is much smaller: 42.5% for moral realism and 38.2% against.

Looking further through the philpapers data, a big chunk of the belief in moral realism seems to be coupled with theism, where anti-realism is coupled with atheism and knowledge of science. The more a field is taught at Catholic or other religious colleges (medieval philosophy, bread-and-butter courses like epistemology and logic) the more moral realism, while philosophers of science go the other way. Philosophers of religion are 87% moral realist, while philosophers of biology are 55% anti-realist.

In general, only 61% of respondents "accept" rather than lean towards atheism, and a quarter don't even lean towards atheism. Among meta-ethics specialists, 70% accept atheism, indicating that atheism and subject knowledge both predict moral anti-realism. If we restricted ourselves to the 70% of meta-ethics specialists who also accept atheism, I would bet at odds of at least 3:1 that moral anti-realism comes out on top.

Since the Philpapers team will be publishing correlations between questions, such a bet should be susceptible to objective adjudication within a reasonable period of time.

A similar pattern shows up for physicalism.

In general, those interquestion correlations should help pinpoint any correct contrarian cluster.

Comment author: Toby_Ord 30 January 2010 09:53:52PM 2 points

Thanks for looking that up, Carl -- I didn't know they had the break-downs. This is the more relevant result for this discussion, but it doesn't change my point much. Unless it was 80% or so in favour of anti-realism, I think holding something like 95% credence in anti-realism is far too high for non-experts.

Comment author: JamesAndrix 30 January 2010 04:38:40PM 6 points

From your SEP link on Moral Realism: "It is worth noting that, while moral realists are united in their cognitivism and in their rejection of error theories, they disagree among themselves not only about which moral claims are actually true but about what it is about the world that makes those claims true. "

I think this is good cause for breaking up that 56%. We should not take them as a block merely because (one component of) their conclusions match, if their justifications are conflicting or contradictory. It could still be the case that 90% of expert philosophers reject any given argument for moral realism. (This would be consistent with my view that those arguments are silly.)

I may have noticed this because the post on Logical Rudeness is fresh in my mind.

Comment author: Toby_Ord 30 January 2010 09:48:39PM 3 points

You are entirely right that the 56% would split up into many subgroups, but I don't really see how this weakens my point: more philosophers support realist positions than anti-realist ones. For what it's worth, the anti-realists are also fragmented in a similar way.

Comment author: ciphergoth 30 January 2010 11:59:36AM 6 points

Could you direct us to the best arguments for moral realism, or against anti-realism? Thanks!

Comment author: Toby_Ord 30 January 2010 02:35:43PM 8 points

In metaethics, there are typically very good arguments against all known views, and only relatively weak arguments for each of them. For anything in philosophy, a good first stop is the Stanford Encyclopedia of Philosophy. Here are some articles on the topic at SEP:

I think the best book to read on metaethics is:

Comment author: Toby_Ord 30 January 2010 11:45:41AM 17 points

There are a lot of posts here that presuppose some combination of moral anti-realism and value complexity. These views go together well: if value is not fundamental, but dependent on characteristics of humans, then it can derive complexity from this and not suffer due to Occam's Razor.

There are another pair of views that go together well: moral realism and value simplicity. Many posts here strongly dismiss these views, effectively allocating near-zero probability to them. I want to point out that this is a case of non-experts being very much at odds with expert opinion and being clearly overconfident. In the Phil Papers survey for example, 56.3% of philosophers lean towards or believe realism, while only 27.7% lean towards or accept anti-realism.


Given this, and given comments from people like me in the intersection of the philosophical and LW communities who can point out that it isn't a case of stupid philosophers supporting realism and all the really smart ones supporting anti-realism, there is no way that the LW community should have anything like the confidence that it does on this point.

Moreover, I should point out that most of the realists lean towards naturalism, which allows a form of realism that is very different to the one that Eliezer critiques. I should also add that within philosophy, the trend is probably not towards anti-realism, but towards realism. The high tide of anti-realism was probably in the middle of the 20th Century, and since then it has lost its shiny newness and people have come up with good arguments against it (which are never discussed here...).

Even for experts in meta-ethics, I can't see how their confidence can get outside the 30%-70% range given the expert disagreement. For non-experts, I really can't see how one could even get to 50% confidence in anti-realism, much less the kind of 98% confidence that is typically expressed here.
