
The Fallacy of Gray

Post author: Eliezer_Yudkowsky, 07 January 2008 06:24AM

Followup to: Tsuyoku Naritai, But There's Still A Chance, Right?

    The Sophisticate:  "The world isn't black and white.  No one does pure good or pure bad. It's all gray.  Therefore, no one is better than anyone else."
    The Zetet:  "Knowing only gray, you conclude that all grays are the same shade.  You mock the simplicity of the two-color view, yet you replace it with a one-color view..."
      —Marc Stiegler, David's Sling

I don't know if the Sophisticate's mistake has an official name, but I call it the Fallacy of Gray.  We saw it manifested in yesterday's post—the one who believed that odds of two to the power of seven hundred and fifty million to one, against, meant "there was still a chance".  All probabilities, to him, were simply "uncertain", and that meant he was licensed to ignore them if he pleased.

"The Moon is made of green cheese" and "the Sun is made of mostly hydrogen and helium" are both uncertainties, but they are not the same uncertainty.

Everything is shades of gray, but there are shades of gray so light as to be very nearly white, and shades of gray so dark as to be very nearly black.  Or even if not, we can still compare shades, and say "it is darker" or "it is lighter".
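The quantitative point can be made concrete. On a log-odds scale, two claims that are both merely "uncertain" can sit enormously far apart; the probabilities below are illustrative numbers, not figures from the post:

```python
import math

def log_odds(p):
    """Log-odds, in bits, of a probability p."""
    return math.log2(p / (1 - p))

# Two "uncertain" claims with very different shades of gray
# (illustrative numbers only):
p_sun = 0.999999     # "the Sun is made mostly of hydrogen and helium"
p_cheese = 1e-12     # "the Moon is made of green cheese"

gap = log_odds(p_sun) - log_odds(p_cheese)
print(f"{gap:.0f} bits apart")  # roughly 60 bits of evidence apart
```

Sixty bits is the difference between the two "grays" here: it would take about sixty independent coin-flip-strength pieces of evidence to move belief from one to the other.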

Years ago, one of the strange little formative moments in my career as a rationalist was reading this paragraph from The Player of Games by Iain M. Banks, especially the sentence in bold:

"A guilty system recognizes no innocents.  As with any power apparatus which thinks everybody's either for it or against it, we're against it.  You would be too, if you thought about it.  The very way you think places you amongst its enemies.  This might not be your fault, because every society imposes some of its values on those raised within it, but the point is that some societies try to maximize that effect, and some try to minimize it.  You come from one of the latter and you're being asked to explain yourself to one of the former.  Prevarication will be more difficult than you might imagine; neutrality is probably impossible.  You cannot choose not to have the politics you do; they are not some separate set of entities somehow detachable from the rest of your being; they are a function of your existence.  I know that and they know that; you had better accept it."

Now, don't write angry comments saying that, if societies impose fewer of their values, then each succeeding generation has more work to start over from scratch.  That's not what I got out of the paragraph.

What I got out of the paragraph was something which seems so obvious in retrospect that I could have conceivably picked it up in a hundred places; but something about that one paragraph made it click for me.

It was the whole notion of the Quantitative Way applied to life-problems like moral judgments and the quest for personal self-improvement.  That, even if you couldn't switch something from on to off, you could still tend to increase it or decrease it.

Is this too obvious to be worth mentioning?  I say it is not too obvious, for many bloggers have said of Overcoming Bias:  "It is impossible, no one can completely eliminate bias."  I don't care if the one is a professional economist; it is clear that they have not yet grokked the Quantitative Way as it applies to everyday life and matters like personal self-improvement.  That which I cannot eliminate may be well worth reducing.

Or consider this exchange between Robin Hanson and Tyler Cowen.  Robin Hanson said that he preferred to put at least 75% weight on the prescriptions of economic theory versus his intuitions:  "I try to mostly just straightforwardly apply economic theory, adding little personal or cultural judgment".  Tyler Cowen replied:

In my view there is no such thing as "straightforwardly applying economic theory"... theories are always applied through our personal and cultural filters and there is no other way it can be.

Yes, but you can try to minimize that effect, or you can do things that are bound to increase it.  And if you try to minimize it, then in many cases I don't think it's unreasonable to call the output "straightforward"—even in economics.

"Everyone is imperfect."  Mohandas Gandhi was imperfect and Joseph Stalin was imperfect, but they were not the same shade of imperfection.  "Everyone is imperfect" is an excellent example of replacing a two-color view with a one-color view.  If you say, "No one is perfect, but some people are less imperfect than others," you may not gain applause; but for those who strive to do better, you have held out hope.  No one is perfectly imperfect, after all.

(Whenever someone says to me, "Perfectionism is bad for you," I reply:  "I think it's okay to be imperfect, but not so imperfect that other people notice.")

Likewise the folly of those who say, "Every scientific paradigm imposes some of its assumptions on how it interprets experiments," and then act like they'd proven science to occupy the same level with witchdoctoring.  Every worldview imposes some of its structure on its observations, but the point is that there are worldviews which try to minimize that imposition, and worldviews which glory in it.  There is no white, but there are shades of gray that are far lighter than others, and it is folly to treat them as if they were all on the same level.

If the moon has orbited the Earth these past few billion years, if you have seen it in the sky these last years, and you expect to see it in its appointed place and phase tomorrow, then that is not a certainty.  And if you expect an invisible dragon to heal your daughter of cancer, that too is not a certainty.  But they are rather different degrees of uncertainty—this business of expecting things to happen yet again in the same way you have previously predicted to twelve decimal places, versus expecting something to happen that violates the order previously observed.  Calling them both "faith" seems a little too un-narrow.

It's a most peculiar psychology—this business of "Science is based on faith too, so there!"  Typically this is said by people who claim that faith is a good thing.  Then why do they say "Science is based on faith too!" in that angry-triumphal tone, rather than as a compliment?  And a rather dangerous compliment to give, one would think, from their perspective.  If science is based on 'faith', then science is of the same kind as religion—directly comparable.  If science is a religion, it is the religion that heals the sick and reveals the secrets of the stars.  It would make sense to say, "The priests of science can blatantly, publicly, verifiably walk on the Moon as a faith-based miracle, and your priests' faith can't do the same."  Are you sure you wish to go there, oh faithist?  Perhaps, on further reflection, you would prefer to retract this whole business of "Science is a religion too!"

There's a strange dynamic here:  You try to purify your shade of gray, and you get it to a point where it's pretty light-toned, and someone stands up and says in a deeply offended tone, "But it's not white!  It's gray!"  It's one thing when someone says, "This isn't as light as you think, because of specific problems X, Y, and Z."  It's a different matter when someone says angrily "It's not white!  It's gray!" without pointing out any specific dark spots.

In this case, I begin to suspect psychology that is more imperfect than usual—that someone may have made a devil's bargain with their own mistakes, and now refuses to hear of any possibility of improvement.  When someone finds an excuse not to try to do better, they often refuse to concede that anyone else can try to do better, and every mode of improvement is thereafter their enemy, and every claim that it is possible to move forward is an offense against them.  And so they say in one breath proudly, "I'm glad to be gray," and in the next breath angrily, "And you're gray too!"

If there is no black and white, there is yet lighter and darker, and not all grays are the same.

Addendum: G points us to Asimov's The Relativity of Wrong:  "When people thought the earth was flat, they were wrong.  When people thought the earth was spherical, they were wrong.  But if you think that thinking the earth is spherical is just as wrong as thinking the earth is flat, then your view is wronger than both of them put together."

 

Part of the Overly Convenient Excuses subsequence of How To Actually Change Your Mind

Next post: "Absolute Authority"

Previous post: "But There's Still A Chance, Right?"

Comments (72)

Comment author: Tiiba2 07 January 2008 07:40:38AM 28 points [-]

I suggest this post for the "start here" list. It's unusually close to perfection.

Comment author: James_Bach 07 January 2008 08:31:31AM -1 points [-]

It sounds like you are trying to rescue induction from Hume's argument that it has no basis in logic. "The future will be like the past because in the past the future was like the past" is a circular argument. He was the first to really make that point. Immanuel Kant spent years spinning elaborate philosophy to try to defeat that argument. Immanuel Kant, like lots of people, had a deep need for universal closure.

An easier way to go is to overcome your need for universal closure.

Induction is not logically justified, but you can make a different argument. You could point out that creatures who ignore the apparent patterns in nature tend to die pretty quick. Induction is a behavior that seems to help us stay alive. That's pretty good. That's why people can't just wave their hands and claim reality is whatever anyone believes-- if they do that, they will discover that acting on that belief won't necessarily, say, win them the New York lottery.

My concern with your argument is, again, structural. You are talking about "gray", and then you link that to probability. Wait a minute, that oversimplifies the metaphor. You present the idea of gray as a one-dimensional quantity, similar to probability. But when people invoke "gray" in rhetoric they are simply trying to say that there are potentially many ways to see something, many ways to understand and analyze it. It's not a one-dimensional gray, it's a many dimensional gray. You can't reduce that to probability, in any actionable way, without specifying your model.

Here's the tactic I use when I'm trying to stand up for a distinction that I want other people to accept (notice that I don't need to invoke "reality" when I say that, since only theories of reality are available to me). I ask them to specify in what way the issue is gray. Let's distinguish between "my spider senses are telling me to be cautious" and "I can think of five specific factors that must be included in a competent analysis. Here they are..."

In other words, don't deny the gray, explore it.

A second tactic I use is to talk about the practical implications of acting-as-if a fact is certain: "I know that nothing can be known for sure, but if we can agree, for the moment, that X, Y, and Z are 'true' then look what we can do... Doesn't that seem nice?"

I think you can get what you want without ridiculing people who don't share your precise worldview, if that sort of thing matters to you.

Comment author: robertskmiles 16 November 2012 01:40:03PM 14 points [-]

Induction is a behavior that seems to help us stay alive.

Well, it has helped us to stay alive in the past, though there's no reason to expect that to continue...

Comment author: Elver 07 January 2008 08:35:11AM 5 points [-]

This post is unusually white. The two arguments -- all shades of gray being seen as the same shade and science being a demonstrably better "religion" -- have seriously expanded my mind. Thank you!

Comment author: Dan_Burfoot 07 January 2008 09:56:30AM 10 points [-]

That which I cannot eliminate may be well worth reducing.

I wish this basically obvious point was more widely appreciated. I've participated in dozens of conversations which go like this:

Me: "Government is based on the principle of coercive violence. Coercive violence is bad. Therefore government is bad." Person: "Yeah, but we can't get rid of government, because we need it for roads, police, etc." Me: " $%&*@#!! Of course we can't get rid of it entirely, but that doesn't mean it isn't worth reducing!"

Great post. I encourage you to expand on the idea of the Quantitative Way as applied to areas such as self improvement and everyday life.

Comment author: ChristroperRobin 19 July 2012 10:33:58AM -2 points [-]

Seeing Dan_Burfoot's comment from four years ago, I felt compelled to join the discussion.

I've participated in dozens of conversations which go like this:

Me: "Government is based on the principle of coercive violence. Coercive violence is bad. Therefore government is bad." Person: "Yeah, but we can't get rid of government, because we need it for roads, police, etc." Me: " $%&*@#!! Of course we can't get rid of it entirely, but that doesn't mean it isn't worth reducing!"

I would put it like this

Libertarian: "Government is based on the principle of coercive violence. Coercive violence is bad. Therefore government is bad."

Me: "Coercive violence is dissuading me from killing you. So maybe coercive violence is not so bad, after all."

Seriously, what some people call "government" is the ground upon which civilization, and ultimately all rationality, rests. "Government" is not "coercive violence", it is the agreement between rational people that they will allow their

Comment author: wedrifid 19 July 2012 11:01:43AM *  11 points [-]

Seriously, what some people call "government" is the ground upon which civilization, and ultimately all rationality, rests.

I was nodding along until: "The ground upon which all rationality rests".

You seem to have fallen into the same trap of self-defeating hyperbole that the quoted straw-libertarian has fallen into. It is enough to make your point that government, and the implied threat of violence, is not all bad and is even useful. Don't try to make ridiculous claims about "all rationality". Apart from being a distasteful abuse of 'rational' as an applause light, it is also false. With actual rational agents, all sorts of alternative arrangements not fitting the label "government" would be just as good---it is the particular quirks of humans that make government more practical for us right now.

Comment author: ChristroperRobin 19 July 2012 12:28:30PM 2 points [-]

I am embarrassed that I accidentally clicked "close" before I was done writing my comment. While I was off composing it in the sandbox, you saw the first draft and commented on it. And you are correct, I think. Is my face red, or what? I have retracted my original comment. My browser shows it as struck out, anyway.

So, yeah, saying that government is "coercive violence" is a straw argument. I think we agree.

I think we agree. What are "actual rational agents"? I am new here, so maybe I should do some more reading. I'm sure Eliezer has published extensively on defining that term. My prejudice would be that "actual rational agents" are entities which "rationally" would want to protect their own existence. I mean, they may be "rational", but they still have self-interest.

So what I'm saying is that "government" is a system for settling claims between competing rational agents. It's a set of game rules. Game rules enshrined by rational agents, for the purpose of protecting their own rational self-interests, are rational.

Rational debate, without the existence of these game rules, which is what government is, is impossible. That's what I'm saying.

Here's another way to look at it. The Laws of Logic (A is A, etc.) are also game rules. We don't think of them that way because we don't accept the Laws of Logic voluntarily. We are forced to accept them because they are necessarily true. Additional rules, which we call government, are also necessary. We write our own Constitution, but we still need to have one.

Comment author: wedrifid 19 July 2012 01:43:45PM 2 points [-]

I think we agree. What are "actual rational agents"? I am new here, so maybe I should do some more reading. I'm sure Eliezer has published extensively on defining that term. My prejudice would be that "actual rational agents" are entities which "rationally" would want to protect their own existence. I mean, they may be "rational", but they still have self-interest.

We are using approximately the same meaning. (I would only insist that they value something, it doesn't necessarily have to be their own existence but that'll do as an example.)

So what I'm saying is that "government" is a system for settling claims between competing rational agents. It's a set of game rules. Game rules enshrined by rational agents, for the purpose of protecting their own rational self-interests, are rational.

Rational debate, without the existence of these game rules, which is what government is, is impossible. That's what I'm saying.

I'm disagreeing that government is actually necessary. It is a solution to cooperation problems but not the only one. It just happens to be the one most practical for humans.

Comment author: TheOtherDave 19 July 2012 02:35:50PM 1 point [-]

Well, for sufficiently large groups of humans.

Comment author: TheLooniBomber 26 January 2013 11:29:57PM 0 points [-]

Bringing party politics into a discussion about rationality makes you the straw man, my friend. Attacking a philosophy of limited government would imply that every government action is the same shade of grey, and all must be necessary, because a group of people voted on a policy, therefore it must be thought out. Politics in itself is not the product of careful examination and rational thinking about public issues, but rather a way of conveying one's interests in a manner that appears to benefit the target audience and gain support. Not all rules are necessary or of the same necessity, simply because they are written.

I would also add that we do, in fact accept the Laws of Logic voluntarily, but only if we are not indoctrinated to do otherwise. To believe that we don't, would suggest that the first philosophers had to have been taught, perhaps by some supernatural or extraterrestrial deity, or perhaps the first logical thought was triggered by a concussion.

Comment author: Roxton 29 May 2013 03:26:54PM 3 points [-]

Doesn't "coercive violence is bad" beg the question in a way that would only be deemed natural if one were implicitly invoking the noncentral fallacy?

Comment author: Larks 29 May 2013 05:27:48PM 1 point [-]

No, many people think coercion qua coercion is wrong - for example, philosophers of a Kantian bent, which is very common in political philosophy.

Comment author: Roxton 29 May 2013 05:57:58PM *  0 points [-]

Point taken, but I would advance the view that the popularity of such a categorical point stems from the fallacy. It seems to be the backbone that makes deontological ethics intuitive.

In any event, it's still clearly an instance of begging the question.

But my goal was to cast a shadow on the off-topic point, not to derail the thread.

Comment author: Larks 29 May 2013 10:05:24PM 1 point [-]

it's still clearly an instance of begging the question.

I'm not sure it is; that government involves coercion is a substantive premise.

But my goal was to cast a shadow on the off-topic point, not to derail the thread.

Unfortunately, people who agree with the off-topic point can hardly accept such behaviour without response.

Comment author: Juno_Watt 29 May 2013 06:59:15PM 0 points [-]

Many libertarians think that. I'm not so sure about that. I don't think he would have wished "no criminals should be captured" or "Everyone should dodge taxes" to be the Universal Law.

Comment author: Larks 29 May 2013 10:02:05PM 0 points [-]

I'm not referring to Kant, I mean contemporary philosophers, like Michael Blake, who is not a libertarian.

Comment author: Ben_Jones 07 January 2008 12:39:36PM 0 points [-]

Agreed - best post in ages, many thanks. That is all.

Comment author: RobinHanson 07 January 2008 02:23:47PM 16 points [-]

All who love this post, do you love it because it told you something you didn't know before, or because you think it would be great to show others who you don't think understand this point? I worry when our reader's favorite posts are based on how much they agree with the post, instead of how much they learned from it.

Comment author: eyelidlessness 29 July 2012 06:32:33AM 3 points [-]

It's possible both are true: that the reader understood the point already, but learned a better way to articulate it in an effort to advance another conversation.

Comment author: redlizard 01 May 2014 06:09:45PM 2 points [-]

I already knew it, but this post made me understand it.

Comment author: Mike_Kenny 07 January 2008 02:42:17PM 0 points [-]

For me, the main point is that incremental advancement towards perfection means expending resources and creating other consequences. The questions ultimately have to be 'how much is it worth to move closer to perfection? What other consequences probably will happen?' This question obviously depends on your context. It appears that some kinds of perfectionism, as far as I can tell, have negative effects on the holder of perfectionistic standards (in the view of psychologists, the relevant experts on the matter), and that costs have to be considered when moving in the direction of perfection--and it might even be worthwhile to move away from perfection in one context if the costs are too great and the benefits too small.

That said, I think the ethos of this blog seems to be "We're too comfortable with our imperfections in thinking," which I think is true enough. On the other hand, emphasizing how bad or dopey we are is depressing or off-putting, true though it may be in many cases, and focusing on how we'd be happier and more powerful with less bias is exciting, and it can be fun (lots of people like betting, which can help us see our biases, for example).

Comment author: LG 07 January 2008 02:54:02PM 19 points [-]

Robin, I think people tend to be enthusiastic when an idea they've known on a more or less intuitive level for a long time is laid out eloquently, and in a way they could see relaying to their particular audience. It's a form of relief, maybe.

So it's not so much "I like it because I agree with it," it's more "I like it because I knew it before but I could never explain it that well."

/unscientific guessing

Comment author: Ben_Jones 07 January 2008 03:32:10PM 1 point [-]

Robin,

I'm with LG, the answer to your question is 'neither'. I also enjoy posts which reinform my way of thinking, but a straight account of what I already think myself wouldn't draw praise. Crystallization of a hitherto-unclear concept can be invaluable - I quote:

"What I got out of the paragraph was something which seems so obvious in retrospect that I could have conceivably picked it up in a hundred places; but something about that one paragraph made it click for me."

Mike, any action or updating of beliefs will have a net effect on 'whiteness' (or 'blackness'). If you're worried that improving in manner x will lead to worsening in manner y, weigh one against the other and take action. 'Perfectionism' holds negative connotations that Tsuyoku Naritai seems not to.

Comment author: Utilitarian2 07 January 2008 03:59:32PM 1 point [-]

Then why do they say "Science is based on faith too!" in that angry-triumphal tone, rather than as a compliment?

When used appropriately, the "science is based on faith too" point is meant to cast doubt upon specific non-falsifiable conclusions that scientists take for granted: for instance, that the only things that exist are matter (rather than, say, an additional immaterial spirit) or that evolution happens by itself (rather than, say, being directed by an intelligent designer). Scientific evidence doesn't distinguish between these hypotheses; it's taken on faith that the first of these is "simpler" and deserves higher prior probability. Maybe these priors are derived from Kolmogorov complexity or something similar, but it still must be taken on faith that those measures are meaningful. (This is, of course, what you recognized when you said, "Every worldview imposes some of its structure on its observations [...].")

Induction is not logically justified, but you can make a different argument. You could point out that creatures who ignore the apparent patterns in nature tend to die pretty quick. Induction is a behavior that seems to help us stay alive.

Isn't this argument premised on induction, i.e., things that helped organisms stay alive in the past will help them stay alive in the future?

Comment author: Peter_de_Blanc 07 January 2008 04:22:17PM 2 points [-]

Utilitarian, you said:

non-falsifiable conclusions that scientists take for granted: for instance, that the only things that exist are matter (rather than, say, an additional immaterial spirit) or that evolution happens by itself (rather than, say, being directed by an intelligent designer).

How much time did you spend trying to come up with predictions from these hypotheses before declaring them unfalsifiable?

Comment author: Utilitarian2 07 January 2008 05:24:08PM 0 points [-]

How much time did you spend trying to come up with predictions from these hypotheses before declaring them unfalsifiable?

Not much; it's possible that these hypotheses are falsifiable (in the sense of having a likelihood ratio < 1 compared against the other corresponding hypothesis). I was assuming this wasn't true given only the evidence currently available, but I'd be glad to hear if you think otherwise.

Comment author: Nick_Tarleton 07 January 2008 06:26:30PM 0 points [-]

It's easy to think of potential observations that would very strongly favor dualism or intelligent design, and the absence of those observations counts as falsifying evidence.

Comment author: steven 07 January 2008 06:39:40PM 0 points [-]

I think it's worth keeping the distinction between falsification (a likelihood ratio of 0) and disconfirmation (a likelihood ratio < 1). Usually when people say "unfalsifiable" they really mean "undisconfirmable" or "unstronglydisconfirmable".
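steven's distinction between falsification and disconfirmation can be sketched as a Bayesian update on odds. A minimal illustration with made-up numbers (the prior and likelihood ratios below are not from the thread):

```python
def update(prior, likelihood_ratio):
    """Posterior probability of H after evidence E with
    likelihood ratio P(E|H) / P(E|~H)."""
    prior_odds = prior / (1 - prior)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

prior = 0.5
print(update(prior, 0.0))   # falsification: posterior drops to exactly 0
print(update(prior, 0.25))  # disconfirmation: posterior drops to 0.2
```

A likelihood ratio of exactly zero kills the hypothesis outright; any ratio between zero and one merely pushes its probability down, which is the usual situation for real evidence.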

Comment author: Peter_Kim 07 January 2008 06:42:40PM 3 points [-]

Dan Burfoot, permit me to join in those conversations:

Me: "No, coercive violence is merely a shade of gray. Another harm of the status quo, like sick children, may be a darker shade of gray, in which case I'm willing to become a little darker so I can gain more lightness overall. For example, I don't think there's much opposition to using coercive violence to protect the life of infants (criminalizing infanticide, taxation to support wards of state, etc.). Of course, opinions on the relative light/darkness of coercive violence vs. other 'bads' differ, and therein lies the popular contention between 'big govt' vs. 'small govt'—not whether government is based on coercive violence, or whether coercive violence is bad."

Comment author: G2 07 January 2008 07:12:31PM 8 points [-]

This post reminds me of Isaac Asimov's The Relativity of Wrong, which is excellent. Wikipedia page

Comment author: Hul-Gil 20 April 2012 06:24:41PM *  2 points [-]

It reminded me of that as well. Here is the full article; I'm glad it's online, because the errors he (and Yudkowsky, above) clears up are astonishingly prevalent. I've had cause to link to it many times.

Comment author: josh 07 January 2008 09:50:42PM 0 points [-]

LG, doesn't that mean you like the post specifically because it appeals to confirmation bias, one of the known biases we should be seeking to overcome?

Comment author: Nathan_Myers 08 January 2008 11:29:21PM 0 points [-]

In other words, "numbers matter". But I suppose mentioning numbers eliminates most of your audience.

Comment author: Zander 09 January 2008 05:23:31PM 0 points [-]

Ah, I love the way the cheap shots just keep on coming...

Comment author: david_foster 11 January 2008 04:16:32PM 1 point [-]

Arthur Koestler has some thoughts that are relevant here.

Comment author: ksvanhorn 21 January 2011 07:41:27PM *  3 points [-]

Thanks, Eliezer, for an excellent article. Some of my favorite quotables:

  • the Quantitative Way

  • Everything is shades of gray, but there are shades of gray so light as to be very nearly white, and shades of gray so dark as to be very nearly black.

  • If science is a religion, it is the religion that heals the sick and reveals the secrets of the stars.

  • "Everyone is imperfect" is an excellent example of replacing a two-color view with a one-color view.

Comment author: dspeyer 17 November 2011 03:38:19AM 8 points [-]

Then there's the fallacy of shades of gray: that every space can be reasonably modeled as 1-dimensional.

Comment author: Hul-Gil 20 April 2012 06:30:24PM *  0 points [-]

I'm trying to imagine the other dimension we could add to this. If we have "more right" and "less right" along one axis, what's orthogonal to it?

I initially felt this comment was silly (the post isn't saying every space can be reasonably modeled as one-dimensional, is it?), but my brain is telling me we actually could come up with a more precise way to represent the article's concept with a Cartesian plane... but I'm not actually able to think of one. False intuition based on my experience with the "Political Compass" graph, perhaps.

Comment author: dlthomas 20 April 2012 06:50:42PM 1 point [-]

Direction of divergence?

Neither (1, 5) nor (5, 1) may be "more wrong" when the answer is (2, 2), but may still be quite meaningfully distinct for some purposes.
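dlthomas's example checks out numerically: the two guesses are exactly as far from the truth, yet they err in different directions. A quick sketch:

```python
import math

def distance(a, b):
    """Euclidean distance between two 2-D points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

truth = (2, 2)
guess_a, guess_b = (1, 5), (5, 1)

# Equally "wrong" in magnitude (both sqrt(10) from the truth)...
print(distance(guess_a, truth) == distance(guess_b, truth))  # True

# ...but wrong in different directions:
print((guess_a[0] - truth[0], guess_a[1] - truth[1]))  # (-1, 3)
print((guess_b[0] - truth[0], guess_b[1] - truth[1]))  # (3, -1)
```

A single "shade of gray" axis collapses the two error vectors to the same point; whether that loss matters depends, as the thread says, on your purposes.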

Comment author: Hul-Gil 20 April 2012 07:39:15PM *  0 points [-]

That's true. They could be wrong in different ways (or "different directions", in our example), which could be important for some purposes. But as you say, that depends on said purposes; I'm still uncertain as to the fallacy that dspeyer refers to. If our only purpose is determining some belief's level of correctness, absent other considerations (like in which way it's incorrect), isn't the one dimension of the "shades of grey" model sufficient?

Although -- come to think of it, I could be misunderstanding his criticism. I took it to mean he had an issue with the original post, but he could just be providing an example of how the shades-of-grey model could be used fallaciously, rather than saying it is fallacious, as I initially interpreted.

Comment author: dspeyer 26 April 2012 05:25:55AM *  2 points [-]

I meant my comment more as a warning to readers than as a criticism of the article. When you've upgraded your mental model, don't stop and be satisfied -- see if there are more low-hanging upgrades. This is especially important if having recently improved your model biases you toward overconfidence (which I suspect is common).

To address your actual challenge...

Probability of correctness may actually be one dimensional. Though in practice it's worth keeping around what the big hunks of uncertainty are so you can update them easily if needed (i.e. P(my_understanding) = P(I_understood_what_I_read) * P(the_author_was_honest) * ... is easier to update if you later learn the author was a troll).
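The factored bookkeeping dspeyer describes can be sketched directly; the factor names follow his example, and the numbers are hypothetical:

```python
# A factored confidence estimate (hypothetical numbers):
factors = {
    "I_understood_what_I_read": 0.9,
    "the_author_was_honest": 0.95,
}

def p_understanding(factors):
    """Multiply the independent factors into one probability."""
    p = 1.0
    for v in factors.values():
        p *= v
    return p

print(round(p_understanding(factors), 3))  # 0.855

# Later learning the author was a troll is a one-factor update:
factors["the_author_was_honest"] = 0.05
print(round(p_understanding(factors), 3))  # 0.045
```

Keeping the big hunks of uncertainty separate is what makes the later update cheap: only one factor changes, and the product is recomputed.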

Degrees of correctness are more complex. "The geography of the Earth is as shown on a Mercator map" and "The geography of the Earth is as shown on a Peters map" are both false. They are both useful approximations. Is one more useful than the other? That depends on what you want to do with it.

There were other examples in the article besides correctness. "Every society imposes some of its values on those raised within it, but the point is that some societies try to maximize that effect, and some try to minimize it" and some maximize it with regard to their perspective on murder and minimize it with regard to their perspective on shellfish. "No one is perfect, but some people are less imperfect than others" and some people are imperfect in different ways from others, which are more or less harmful in different circumstances.

Comment author: [deleted] 19 February 2012 05:14:49PM 0 points [-]

This was a very useful post, and one I will be adding to my daily dossier. I agree this is a good "start post" because it is lucid, clear, and useful. There's little I feel I can add at the moment, as doing so would simply be glorifying the item itself rather than using the knowledge gained, so thank you for the post.

Comment author: wobster109 26 February 2012 07:27:46PM 1 point [-]

I'm glad this post is here! Today, I came across this lovely little statement on Xanga: "Richard Dawkins admitted recently that he can't be sure that God does not exist. He is generally considered the World's most famous Atheist. So this question is for Atheists. Can you be sure that God does not exist?"

It made me cranky right away (I promise, I was more patient many many instances of this sentiment ago), and my first response was to link here in a comment. Well, I'm glad this post is here to link to. Grr.

Comment author: David_Gerard 02 December 2012 02:36:45PM 3 points [-]

Surprised no-one's yet noted that the proper name for this is the continuum fallacy or sorites fallacy.

Comment author: non-expert 08 January 2013 08:52:20AM 0 points [-]

i don't follow the relevance of article, as it seems quite obvious. the real problem with the black and white in the world of rationality is the assumption there is a universal answer to all questions. the idea of "grey" helps highlight that many answers have no one correct universal answer. what i dont understand about rationalists (LW rationalists) is that the live in a world in which everything is either right or wrong. this simplifies a world that is not so simple. what am i missing?

Comment author: MugaSofer 08 January 2013 10:58:58AM -1 points [-]

Offtopic: Have you considered running your comments through a spell- and grammar-checker? It might help with legibility and signalling competence.

Ontopic:

what i dont understand about rationalists (LW rationalists) is that the live in a world in which everything is either right or wrong.

Rationalists, or at least Bayesians, use probabilities, not binary right-or-wrong judgments. There is, mathematically, only one "correct" probability given the data; is that what you mean?

Comment author: non-expert 09 January 2013 04:15:20AM 1 point [-]

Ok, yes, the idea of using probabilities raises two issues -- knowing you have the right inputs, and having the right perspective. Knowing and valuing the proper inputs to most questions seems impossible because of the subjectivity of most issues -- while Bayesian judgements may still hold in the abstract, they are often not practical to use (or so I would argue). Second, what do you think about the idea of "perspectivism" -- that there is only subjective truth in the world? You don't have to sign on completely to Nietzsche's theory to see its potential application, even if limited in scope. For example, a number of communication techniques employ a type of perspectivism because different people view issues through an "individual lens". In either case, seeing the world as constructed of shades of grey seems more practical and accurate relative to using probabilities. This seems at odds with Bayesian judgments that assume that probabilities yield one correct answer AND that a person can and should be able to derive that correct answer.

The point i raise about communication techniques relates to your "offtopic" point. I assume you are a rationalist, and thus believe yourself to have superior decision making skills (at least relative to those that are not students (or masters) of rationality). If so, what is the value of your "off topic" point -- you clearly were able to answer my question despite its shortcomings -- why belittle someone that is trying to understand an article that is well-received by LW? Is the petty victory of pointing out my mistakes, from your perspective, the most rational way to answer my comment? I'm not insulted personally (this type of pettiness always makes me smile), but I'm interested in understanding the logic of your comments. From my perspective, rationality failed you in communicating in an effective way. It seems your arrogance could keep many from following and learning from LW -- unless of course the goal is to limit the ranks of those that employ rationality. What am I missing? (and the answer is no, i haven't considered using a spell or grammar checker other than the one provided by this site).

Comment author: MugaSofer 09 January 2013 10:34:31AM *  0 points [-]

Ok, yes, the idea of using probabilities raises two issues -- knowing you have the right inputs, and having the right perspective. Knowing and valuing the proper inputs to most questions seems impossible because of the subjectivity of most issues -- while Bayesian judgements may still hold in the abstract, they are often not practical to use (or so I would argue).

Unreliable evidence, biased estimates etc. can, in fact, be taken into account.

Second, what do you think about the idea of "perspectivism" -- that there is only subjective truth in the world?

This.

You don't have to sign on completely to Nietzsche's theory to see its potential application, even if limited in scope. For example, a number of communication techniques employ a type of perspectivism because different people view issues through an "individual lens". In either case, seeing the world as constructed of shades of grey seems more practical and accurate relative to using probabilities. This seems at odds with Bayesian judgments that assume that probabilities yield one correct answer AND that a person can and should be able to derive that correct answer.

Throwing your hands in the air and saying "well we can never know for sure" is not as accurate as giving probabilities of various results. We can never know for sure which answer is right, but we can assign our probabilities so that, on average, we are always as confident as we should be. Of course, humans are ill-suited to this task, having a variety of suboptimal heuristics and downright biases, but they're all we have. And we can, in fact, assign the correct probabilities / choose the correct choice when we have the problem reduced to a mathematical model and apply the math without making mistakes.

The point i raise about communication techniques relates to your "offtopic" point. I assume you are a rationalist, and thus believe yourself to have superior decision making skills (at least relative to those that are not students (or masters) of rationality). If so, what is the value of your "off topic" point -- you clearly were able to answer my question despite its shortcomings -- why belittle someone that is trying to understand an article that is well-received by LW? Is the petty victory of pointing out my mistakes, from your perspective, the most rational way to answer my comment? I'm not insulted personally (this type of pettiness always makes me smile), but I'm interested in understanding the logic of your comments. From my perspective, rationality failed you in communicating in an effective way. It seems your arrogance could keep many from following and learning from LW -- unless of course the goal is to limit the ranks of those that employ rationality. What am I missing? (and the answer is no, i haven't considered using a spell or grammar checker other than the one provided by this site).

Oh, I'm not going to downvote your comments or anything. I just thought you might prefer your comments to be easier to read and avoid signalling ... well, disrespect, ignorance, crazy-ranting-on-the-internet-ness, and all the other low status and undesirable signals given off. Of course, I'm giving you the benefit of the doubt, but people are simply less likely to do so when you give off signals like that. This isn't necessarily irrational, since these signals are, indeed, correlated with trolls and idiots. Not perfectly, but enough to be worth avoiding (IMHO.)

Comment author: non-expert 09 January 2013 02:22:36PM 1 point [-]

Throwing your hands in the air and saying "well we can never know for sure" is not as accurate as giving probabilities of various results. We can never know for sure which answer is right, but we can assign our probabilities so that, on average, we are always as confident as we should be. Of course, humans are ill-suited to this task, having a variety of suboptimal heuristics and downright biases, but they're all we have. And we can, in fact, assign the correct probabilities / choose the correct choice when we have the problem reduced to a mathematical model and apply the math without making mistakes.

If all you're looking for is confidence, why must you assign probabilities? I'm pushing you in hopes of understanding, not necessarily disagreeing. If I'm very religious and use that as my life-guide, I could be extremely confident in a given answer. In other words, the value of using probabilities must extend beyond confidence in my own answer -- confidence is just a personal feeling. Being "right" in a normative sense is also relevant, but as you point out, we often don't actually know what answer is correct. If your point instead is that probabilities will result in the right answer more often then not, fine, then accurately identifying the proper inputs and valuing them correctly is of utmost importance -- this is simply not practical in many situations precisely because the world is so complex. I guess it boils down to this -- what is the value of being "right" if what is "right" cannot be determined? I think there are decisions where what is right can be determined -- and rationality and the bayesian model works quite well. I think far more decisions (social relationships, politics, economics -- particularly decisions that do not directly affect the decision maker) are too subjective to know what is "right" or accurately model inputs. In those cases, I think rationality falls short, and the attempt to assign probabilities can give false confidence that the derived answer has a greater value than simply providing confidence that it is the best one.

I think I'm the only one on LessWrong that finds EY's writing maddening -- mostly the style -- I keep screaming to myself, "get to the point!" -- as noted, perhaps its just me. His examples from the cited article miss the point of perspectivism I think. Perspectivism (or at least how I am using it) simply means that truth can be relative, not that it is relative in all cases. Rationality does not seem to account for the possibility that it could be relative in any case.

Comment author: TheOtherDave 09 January 2013 02:56:47PM 2 points [-]

I suspect that the word "confidence" is not being used consistently in this exchange, and you might do well to replace it with a more explicit description of what you intend for it to refer to.

Yes, this community is generally concerned with methods for, as you say, getting "the right answer more often than not."

And, sure, sometimes a marginal increase in my chance of getting the right answer isn't worth the cost of securing that increase -- as you say, sometimes "accurately identifying the proper inputs and valuing them correctly [...] is simply not practical" -- so I accept a lower chance of having the right answer. And, sure, complex contexts such as social relationships, politics, and economics are often cases where the cost of a greater chance of knowing the right answer is prohibitive, so we go with the highest chance of it we can profitably get.

To say that "rationality falls short" in these cases suggests that it's being compared to something. If you're saying it falls short compared to perfect knowledge, I absolutely agree. If you're saying it falls short compared to something humans have access to, I'm interested in what that something is.

I agree that expressing beliefs numerically (e.g., as probabilities) can lead people to assign more value to the answer than it deserves. But saying that it's "the best answer" has that problem, too. If someone tells me that answer A is the best answer I will likely assign more value to it than if they tell me they are 40% confident in answer A, 35% confident in answer B, and 25% confident in answer C.

I have no idea what you mean by the truth being "relative".

Comment author: non-expert 10 January 2013 07:49:24AM 0 points [-]

I suspect that the word "confidence" is not being used consistently in this exchange, and you might do well to replace it with a more explicit description of what you intend for it to refer to.

i referenced confidence only because Mugasofer did. What was your understanding of how Mugasofer used "confident as we should be"? Regardless, I am still wondering what the value of being "right" is if we can't determine what is in fact right? If it gives confidence/ego/comfort that you've derived the right answer, being "right" in actuality is not necessary to have those feelings.

To say that "rationality falls short" in these cases suggests that it's being compared to something.

Fair. The use of rationality and the belief in its merits generally biases the decision maker to form a belief that rationality will yield a correct answer, even if it does not -- it seems rationality always errs on applying probabilities (and forming a judgment), even if they are flawed (or you don't know they are accurate). To say it differently, to the extent a question has no clear answer (for example, because we don't have enough information or it isn't worth the cost), I think we'd be better off withholding judgment altogether than forming a judgment for the sake of having an opinion. Rumsfeld had this great quote -- "we dont know what we don't know" -- we also don't know the importance of what we don't know relative to what we do know when forming judgments. From this perspective, having an awareness of how little we know seems far more important than creating judgments based on what we know. Rationality cannot take into account information that is not known to be relevant -- what is the value of forming a judgment in this case? To be clear, I'm not "throwing my hands up" for all of life's questions and saying we don't know anything -- I'm trying to see how far LW is willing to push rationality as a universal theory (or the best theory in all cases short of perfect knowledge, whatever that means).

Truth is relative because its relevance is limited to the extent other people agree with that truth, or so I would argue. This is because our notions of truth are man-made, even if we account for the possibility that there are certain universal truths (what relevance do those truths have if only you know them?). Despite the logic underlying probability theory/science in general, truths derived therefrom are accepted as such only because people value and trust probability theory and science. All other matters of truth are even more subjective -- this does not mean that contradicting beliefs are equally true or equally valid, instead, truth is subjective precisely because we cannot even attempt prove anything as true outside of human comprehension. We're stuck debating and determining truth only amongst ourselves. Its the human paradox of freedom of expression/reasoning trapped within an animal form that is fallible and will die. From my perspective, determining universal truth, if it exists, requires transcending the limitations of man -- which of course i cannot do.

Comment author: MugaSofer 10 January 2013 10:36:08AM -1 points [-]

i referenced confidence only because Mugasofer did. What was your understanding of how Mugasofer used "confident as we should be"? Regardless, I am still wondering what the value of being "right" is if we can't determine what is in fact right?

Because it helps us make decisions.

Incidentally, replacing words that may be unclear or misunderstood (by either party) with what we mean by those words is generally considered helpful 'round here for producing fruitful discussions - there's no point arguing about whether the tree in the forest made a sound if I mean "auditory experience" and you mean "vibrations in the air". This is known as "Rationalist's Taboo", after a game with similar rules, and replacing a word with (your) definition is known as "tabooing" it.

Comment author: non-expert 14 January 2013 07:55:45AM 0 points [-]

I actually don't think we're using the word differently -- the issue was premised solely for issues where the answer cannot be known after the fact. In that case, our use of "confidence" is the same -- it simply helps you make decisions. Once the value of the decision is limited to the belief in its soundness, and not ultimate "correctness" of the decision (because it cannot be known), rationality is important only if you believe it to be correct way to make decisions.

Comment author: MugaSofer 14 January 2013 09:31:30AM 0 points [-]

Indeed. And probability is confidence, and Bayesian probability is the correct amount of confidence.
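The claim that Bayesian probability is the "correct amount of confidence" can be made concrete with Bayes' theorem. The prior and likelihoods below are an invented example, not anything from the thread:

```python
# Bayes' theorem: the posterior confidence in a hypothesis H after
# seeing evidence E.  The input numbers are made-up example values.

def posterior(prior_h, p_e_given_h, p_e_given_not_h):
    """P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]"""
    joint_h = p_e_given_h * prior_h
    joint_not_h = p_e_given_not_h * (1.0 - prior_h)
    return joint_h / (joint_h + joint_not_h)

# Start at 50/50; the evidence is four times likelier if H is true.
p = posterior(prior_h=0.5, p_e_given_h=0.8, p_e_given_not_h=0.2)
print(p)  # 0.8
```

Given those inputs, 0.8 is the confidence one should hold: any other number is either over- or under-confident relative to the evidence.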

Comment author: TheOtherDave 10 January 2013 06:44:52PM 1 point [-]

What was your understanding of how Mugasofer used "confident as we should be"?

Roughly speaking, I understood Mugasofer to be referring to a calculated value with respect to a proposition that ought to control my willingness to expose myself to penalties contingent on the proposition being false.

what the value of being "right" is if we can't determine what is in fact right?

I'm not quite sure what "right" means, but if nothing will happen differently depending on whether A or B is true, either now or in the future, then there's no value in knowing whether A or B is true.

it seems rationality always errs on applying probabilities (and forming a judgment), even if they are flawed (or you don't know they are accurate).

Yes, pretty much. I wouldn't say "errs", but semantics aside, we're always forming probability judgments, and those judgments are always flawed (or at least incomplete) for any interesting problem.

to the extent a question has no clear answer (for example, because we don't have enough information or it isn't worth the cost), I think we'd be better off withholding judgment altogether than forming a judgment for the sake of having an opinion.

There are many decisions I'm obligated to make where the effects of that decision for good or ill will differ depending on whether the world is A or B, but where the question "is the world A or B?" has no clear answer in the sense you mean. For those decisions, it is useful to make the procedure I use as reliable as is cost-effective.

But sure, given a question on which no such decision depends, I agree that withholding judgment on it is a perfectly reasonable thing to do. (Of course, the question arises of how sure I am that no such decision depends on it, and how reliable the process I used to arrive at that level of sureness is.)

From this perspective, having an awareness of how little we know seems far more important than creating judgments based on what we know.

Yes, absolutely. Forming judgments based on a false idea of how much or how little we know is unlikely to have reliably good results.

Rationality cannot take into account information that is not known to be relevant -- what is the value of forming a judgment in this case?

As above, there are many situations where I'm obligated to make a decision, even if that decision is to sit around and do nothing. If I have two decision procedures available, and one of them is marginally more reliable than the other, I should use the more reliable one. The value is that I will make decisions with better results more often.

I'm trying to see how far LW is willing to push rationality as a universal theory (or the best theory in all cases short of perfect knowledge, whatever that means).

I'd say LW is willing to push rationality as the best "theory" in all cases short of perfect knowledge right up until the point that a better one comes along, where "better" and "best" refer to their ability to reliably obtain benefits.

That's why I asked you what you're comparing it to; what it falls short relative to.

Truth is relative because its relevance is limited to the extent other people agree with that truth, or so I would argue.

So, I have two vials in front of me, one red and one green, and a thousand people are watching. All thousand-and-one of us believe that the red vial contains poison and the green vial contains yummy fruit juice.
You are arguing that this is all I need to know to make a decision, because the relevance of the truth about which vial actually contains poison is limited to the extent to which other people agree that it does.

Did I understand that correctly?

Comment author: non-expert 14 January 2013 07:48:20AM 0 points [-]

Roughly speaking, I understood Mugasofer to be referring to a calculated value with respect to a proposition that ought to control my willingness to expose myself to penalties contingent on the proposition being false.

How is this different than being "comfortable" on a personal level? If it isn't, the only value of rationality where the answer cannot be known is simply the confidence it gives you. Such a belief only requires rationality if you believe rationality provides the best answer -- the "truth" is irrelevant. For example, as previously noted in the thread, if I'm super religious, I could use scripture to guide a decision and have the same confidence (on a subjective, personal way). Once the correctness of the belief cannot be determined as right or wrong, the manner in which the belief is created becomes irrelevant, EXCEPT to the extent laws/norms change because other people agree. I've taken the idea of absolute truth and simply converted it social truth because I think its a more appropriate term (more below).

You are suggesting that rationality provides the "best way" to get answers short of perfect knowledge. Reflecting on your request for a comparatively better system, I realized you are framing the issue differently than I am. You are presupposing the world has certainty, and only are concerned with our ability to derive that certainty (or answers). In that model, looking for the "best system" to find answers makes sense. In other words, you assume answers exist, and only the manner in which to derive them is unknown. I am proposing that there are issues for which answers do not necessarily exist, or at least do not exist within world of human comprehension. In those cases, any model by which someone derives an answer is equally ridiculous. That is why I cannot give you a comparison. Again, this is not to throw up my hands, its a different way of looking at things. Rationality is important, but a smaller part of the bigger picture in my mind. Is my characterization of your position fair? If so, what is your basis for your position that all issues have answers?

So, I have two vials in front of me, one red and one green, and a thousand people are watching. All thousand-and-one of us believe that the red vial contains poison and the green vial contains yummy fruit juice. You are arguing that this is all I need to know to make a decision, because the relevance of the truth about which vial actually contains poison is limited to the extent to which other people agree that it does.

I am only talking about the relevance of truth, not the absolute truth, because the absolute truth cannot be necessarily be known beforehand (as in your example!). Immediately before the vial is chosen, the only relevance of the Truth (referring to actual truth) is the extent to which the people and I believe something consistent. Related to the point I made above, if you presuppose Truth exists, it is easy to question or point out how people could be wrong about what it is. I don't think we have the luxury to know the Truth in most cases. Until future events prove otherwise, truth is just what we humans make of it, whether or not it conforms with the Truth -- thus I am arguing that the only relevance of Truth is the extent to which humans agree with it.

In your example, immediately after the vial is taken -- we find out we're right or wrong -- and our subjective truths may change. They remain subjective truths so long as future facts could further change our conclusions.

Comment author: TheOtherDave 14 January 2013 02:10:00PM 1 point [-]

You are presupposing the world has certainty, and only are concerned with our ability to derive that certainty (or answers).

Yes. The vial is either poisoned or it isn't, and my task is to decide whether to drink it or not. Do you deny that?

In that model, looking for the "best system" to find answers makes sense.

Yes, I agree. Indeed, looking for systems to find answers that are better than the one I'm using makes sense, even if they aren't best, even if I can't ever know whether they are best or not.

I am proposing that there are issues for which answers do not necessarily exist,

Sure. But "which vial is poisoned?" isn't one of them. More generally, there are millions of issues we face in our lives for which answers exist, and productive techniques for approaching those questions are worth exploring and adopting.

Immediately before the vial is chosen, the only relevance of the Truth (referring to actual truth) is the extent to which the people and I believe something consistent.

This is where we disagree.

Which vial contains poison is a fact about the world, and there are a million other contingent facts about the world that go one way or another depending on it. Maybe the air around the vial smells a little different. Maybe it's a different temperature. Maybe the poisoned vial weighs more, or less. All of those contingent facts means that there are different ways I can approach the vials, and if I approach the vials one way I am more likely to live than if I approach the vials a different way.

And if you have a more survival-conducive way of approaching the vials than I and the other 999 people in the room, we do better to listen to you than to each other, even though your opinion is inconsistent with ours.

thus I am arguing that the only relevance of Truth is the extent to which humans agree with it.

Again, this is where we disagree. The relevance of "Truth" (as you're referring to it... I would say "reality") is also the extent to which some ways of approaching the world (for example, sniffing the two vials, or weighing them, or a thousand other tests) reliably have better results than just measuring the extent to which other humans agree with an assertion.

In your example, immediately after the vial is taken -- we find out we're right or wrong -- and our subjective truths may change.

Sure, that's true.

But it's far more useful to better entangle our decisions (our "subjective truths," as you put it) with reality ("Truth") before we make those decisions.

Comment author: Peterdjones 09 January 2013 02:57:48PM 3 points [-]

Second, what do you think about the idea of "perspectivism" -- that there is only subjective truth in the world?

Perspectivism (or at least how I am using it) simply means that truth can be relative, not that it is relative in all cases

Inasmuch as subjectivism is a form of relativism, those comments seem to contradict each other.

Comment author: non-expert 10 January 2013 06:45:44AM 1 point [-]

Perspectivism provides that all truth is subjective, but in practice, this characterization has no relevance to the extent there is agreement on any particular truth. For example, "Murder is wrong," even if a subjective truth, is not so in practice because there is collective agreement that murder is wrong. That is all I meant, but agree that it was not clear.

Comment author: MugaSofer 10 January 2013 10:37:35AM *  -1 points [-]

Wait, does this "truth is relative" stuff only apply to moral questions? Because if it does then, while I personally disagree with you, there's a sizable minority here who won't.

Comment author: non-expert 14 January 2013 08:01:43AM 0 points [-]

What do you disagree with? That "truth is relative" applies to only moral questions? or that it applies to more than moral questions?

If instead your position is that moral truths are NOT relative, what is the basis for that position? No need to dive deep if you know of something i can read...even EY :)

Comment author: MugaSofer 14 January 2013 09:38:56AM *  2 points [-]

My position is that moral truths are not relative, exactly, but agents can of course have different goals. We can know what is Right, as long as we define it as "right according to human morals." Those are an objective (if hard to observe) part of reality. If we build an AI that tries to figure those out, then we get an ethical AI - so I would have a hard time calling them "subjective".

Of course, an AI with limited reasoning capacity might judge wrongly, but then humans do likewise - see e.g. Nazis.

EDIT: Regarding EY's writings on the subject, he wrote a whole Metaethics Sequence, much of which is leading up to or directly discussing this exact topic. Unfortunately, I'm having trouble with the filters on this library computer, but it should be listed on the sequences page (link at top right) or in a search for "metaethics sequence".

Comment author: Peterdjones 10 January 2013 12:50:27PM 0 points [-]

Thanks for the clarification.

Comment author: MugaSofer 10 January 2013 10:26:53AM *  -1 points [-]

If your point instead is that probabilities will result in the right answer more often then not, fine, then accurately identifying the proper inputs and valuing them correctly is of utmost importance -- this is simply not practical in many situations precisely because the world is so complex.

Indeed. One of the purposes of this site is to help people become more rational - closer to a mathematical perfect reasoner - in everyday life. In math problems, however - and every real problem can, eventually, be reduced to a math problem - we can always make the right choice (unless we make a mistake with the math, which does happen.)

I think I'm the only one on LessWrong that finds EY's writing maddening -- mostly the style -- I keep screaming to myself, "get to the point!" -- as noted, perhaps its just me.

Unfortunately for you, most of the basic introductory-level stuff - and much of the really good stuff generally - is by him. So I'm guessing there's a certain selection effect for people who enjoy/tolerate his style of writing.

His examples from the cited article miss the point of perspectivism I think. Perspectivism (or at least how I am using it) simply means that truth can be relative, not that it is relative in all cases. Rationality does not seem to account for the possibility that it could be relative in any case.

I'm still not sure how truth could be "relative" - could you perhaps expand on what you mean by that? - although obviously it can be obscured by biases and simple lack of data. In addition, some questions may actually have no answer, because people are using different meanings for the same word or the question itself is contradictory (how many sides does a square triangle have?)

EDIT:

In those cases, I think rationality falls short, and the attempt to assign probabilities can give false confidence that the derived answer has a greater value than simply providing confidence that it is the best one.

A lot of people here - myself included - practice or advise testing how accurate your estimates are. There are websites and such dedicated to helping people do this.
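The kind of estimate-testing mentioned here is often scored with the Brier score. This sketch and its sample predictions are illustrative, not data from any real calibration exercise:

```python
# Illustrative calibration check using the Brier score: the mean
# squared gap between stated confidence and what actually happened.
# Lower is better; always guessing 50% scores 0.25.
# The prediction records below are made up.

def brier_score(predictions):
    """Mean squared error between confidence and outcome (1 or 0)."""
    return sum((p - o) ** 2 for p, o in predictions) / len(predictions)

# (stated confidence that the event happens, whether it happened)
predictions = [(0.9, 1), (0.8, 1), (0.7, 0), (0.6, 1)]
print(brier_score(predictions))
```

Tracking a score like this over many predictions is one simple way to learn whether one's "90% confident" really comes true about nine times in ten.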

Comment author: alanf 13 December 2013 08:52:46AM 0 points [-]

Science is not based on faith, nor on anything else. Scientific knowledge is created by conjecture and criticism. See Chapter I of "Realism and the Aim of Science" by Karl Popper.