It may be impossible to eliminate bias completely, but it is still worth reducing.
Being aware of this (and of the fallacy of gray in general) is one of the things that really sets LessWrong apart as a community. A huge number of arguments and discussions are stopped in their tracks by arguments of the form, "Well, your side does X too!" or "We both have biases" or "Both policies have positives and negatives." It's difficult to route around these stopsigns because of the extra inferential step it takes to explain the fallacy of gray; it's refreshing to talk to people who realize that grayness shouldn't be a conversation-halter.
I took the quote differently, more as an anti-nihilist comment.
Yes, everything is gray, but there are shades of gray, and different shades really are different. There is still better and worse, even when all you have are shades of gray. Don't throw out the baby with the bathwater.
I don't think the intended meaning of the title The Fallacy of Grey is "grey is a fallacy". I think it's just a much nicer-sounding name than "the fallacy of concluding that because things are shades of grey instead of black and white, they are all equivalent".
Along the lines of the 'ambiguity of gray'? Or that something classified as gray can be said to be inherently undefined? To think about anything, it seems that we have to categorize it in some way. The category we choose, unless it is a category of that one item, will be a model also used to describe things or concepts that differ in significant ways from the 'it' we are trying to think about. The fallacy of black and white might then be described as confusing the category with the item itself. The fallacy of gray would be a failure to recognize that gray is a non-category, used for 'its' we have not yet been able to usefully categorize as properly belonging with other 'its' on one side of the spectrum or the other.
Putting it another way: bias cannot be eliminated, since it provides the mental structure the brain uses to organize data. Bias can be described as the operating system built by the brain as it functions. From what I have read, certain responses are hardwired, so to speak, into our brains by selective adaptation. We each have to have a point of view: a place where we receive initial, limited sets of data, and a system to turn those data into thoughts, measurements, reactions, or opinions. As we learn to recognize our biases and how they can lead us to serious errors in our interpretation of data, we hope to be able to make better decisions. I think most people registered on this website would agree that the goal of better decisions is both worthy and possible.
Looking at this statement from a different point of view, all measurements are seemingly on a continuum that regresses to some theoretical limit, depending upon how finely grained your measuring rod is. My understanding of modern realism is that the absolute or limit – be it infinity, the concept of a point particle, or perfect blackness – does exist in some independent real world. Does the lead statement refer to our perceptions of black and white, or does it refute the possibility of perfect black or white in an independent real world? On another level, does the possibility of perfect black or white in an independent real world even matter? Most people agree that at some point on the spectrum, gray can usefully be called black. Shouldn't the focus of our moral judgment be aimed at the shifting line dividing more black from more white?
Putting it another way: bias cannot be eliminated since it provides the mental structure used by the brain to organize data.
I assume you mean that not all bias can be eliminated? Obviously we can eliminate most of it. We just need to keep inductive bias and our predilection for satisfying our own preferences.
Yes, I mean all bias. My working definition of bias is the set of assumptions we more or less rely on for most of our daily activity. In most of what I do, I don't have the time, or it's not worth the energy, to scrutinize all the underlying assumptions that shape my reactions. But I can develop methodologies to identify when I need to be more critical of my assumptions and think before I act. I can also, I hope, learn to be a better analytical thinker and problem solver.
"When people thought the earth was flat, they were wrong. When people thought the earth was spherical, they were wrong. But if you think that thinking the earth is spherical is just as wrong as thinking the earth is flat, then your view is wronger than both of them put together."
I don't disagree with the point in the post, but I do have a problem with this quote.
They were not just as wrong in an absolute sense, but they were just as wrong in the sense of how much it mattered at the time. Asimov said this in reply to someone objecting to his claim that science had most everything figured out. For all that it matters to us, the Earth may as well be spherical, but it certainly isn't flat. For all it mattered to them, the Earth was flat. Someday, the fact that the Earth isn't perfectly spherical will matter as much as the fact that it isn't flat matters to us now.
When technology advances to the point that physics matters, physics can advance. When that happens, it will advance. As such, our knowledge and the importance of its accuracy always match.
I can't possibly disagree more. Knowledge of the earth's roundness was ALWAYS of huge importance for oceangoing navigation and trade, and EVEN if I allow you to dismiss that, there are plenty of other examples where our knowledge is or was woefully inadequate even for what we wanted to do. You might say to a peasant that it doesn't matter what shape the earth is, but you could never say he didn't need to know about the disease vectors of effluent in the water, EVEN if he didn't know. We are, as we speak, trying to understand cancer in greater and greater detail because we want to cure it, but it's not hugely more important today to cure disease than it was in the past.
Knowledge of the earth's roundness was ALWAYS of huge importance for oceangoing navigation and trade
I find it unlikely that they didn't notice if it was hugely important.
but you could never say he didn't need to know about the disease vectors of effluent in the water
Useful, but difficult to find out, not unlike economics, psychology, and several other things we still don't fully understand. I suppose we are making progress in these areas: we're learning more, yet it's not becoming more important, and we are still nowhere near done. I hadn't thought of this, and I adjust my position towards yours, but I still disagree with the original quote.
but it's not hugely more important today to cure disease than it was in the past.
They knew what they could do about it with their level of technology: absolutely nothing. This is no different from everything about nanotechnology that we don't know and won't learn until we can actually make nanobots.
...it's not hugely more important today to cure disease than it was in the past.
Well, cancer used to be quite rare when few people lived past forty.
Today's post, The Fallacy of Gray was originally published on 07 January 2008. A summary (taken from the LW wiki):
Discuss the post here (rather than in the comments to the original post).
This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was But There's Still A Chance, Right?, and you can use the sequence_reruns tag or rss feed to follow the rest of the series.
Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.