
"What Is Wrong With Our Thoughts"

Post author: Eliezer_Yudkowsky 17 May 2009 07:24AM
"But let us never forget, either, as all conventional history of philosophy conspires to make us forget, what the 'great thinkers' really are: proper objects, indeed, of pity, but even more, of horror."

David Stove's "What Is Wrong With Our Thoughts" is a critique of philosophy that I can only call epic.

The astute reader will of course find themselves objecting to Stove's notion that we should be cataloguing every possible way to do philosophy wrong.  It's not like there's some originally pure mode of thought, being tainted by only a small library of poisons.  It's just that there are exponentially more possible crazy thoughts than sane thoughts, cf. entropy.

But Stove's list of 39 different classic crazinesses applied to the number three is absolute pure epic gold.  (Scroll down about halfway through if you want to jump there directly.)

I especially like #8:  "There is an integer between two and four, but it is not three, and its true name and nature are not to be revealed."

Comments (103)

Comment author: brian_jaress 17 May 2009 10:47:35AM 9 points [-]

Stove himself concludes that his "nosology" is probably not worth compiling. I think he's actually just using it to make the same point you've made by mentioning entropy. He considers it in order to justify rejecting it.

He then does something similar with the possibility of figuring out individual cases, rejecting it because the findings won't be generalizable.

Then he gets to what seems like his main point: getting rid of almost all philosophy because it's crazy.

(I thought the piece as a whole was much funnier than the list. It's a tongue-in-cheek version of bending over backwards to avoid accusations of dismissing something crazy out of hand.)

Comment author: RolfAndreassen 18 May 2009 05:03:57PM 6 points [-]

Possibly Stove intended this only as an extended Take That to philosophers he dislikes; but it seems to me that he is a bit too dismissive of his own project, the 'nosology'. Without wanting a Fully General Counterargument, I think it might be useful to have a set of, say, five or six different classes of erroneous statements; and I also think Stove is too eager to insist on the singularity of each of his examples. For example, he states that the objection "not verifiable" cannot be applied to his example 8; I don't see why not. Anything whose "name and nature are not to be revealed" has just been declared unverifiable, no? Similarly 3 through 7 look pretty unverifiable to me.

Then he has some examples further down the list which look reasonably testable, such as 13: "3 is a lucky number". One could easily do an experiment on this by submitting lottery tickets with and without 3's filled in; and as for 14, I think a simple "false-to-fact" would suffice to dismiss it.

So far then there are three classifications: False to fact, contradiction, meaningless through having no connection to observation. We may need a fourth to cover such statements as 26: "The tie which unites the number three to its properties (such as primeness) is inexplicable". This seems somehow vaguely related to observation, in that there does seem to be something called three which has the property of primeness, and nobody has really explained the tie between triples of objects and these properties. (It is perhaps not strongly coupled to observation, but I hesitate to dismiss it completely on that ground.) I suggest a fourth classification of 'uninteresting' or 'unfruitful': A proposition which, when adopted as an axiom, yields few or no deductions, is unfruitful. One might also call it the 'So What' error: Making statements which even if true are not useful to know.

There does seem to be some overlap here; for example, Stove's 25: "Five is of the same substance as three, co-eternal with three, very three of three: it is only in their attributes that three and five are different." This looks to me quite unverifiable, but even if it were true, So What? What conclusions or prediction would you draw from this?

Contrary to Stove, I think these four will cover all his list: False to fact, contradiction, meaningless, and So What. I am not certain, however, whether this insight is useful.

Comment author: cousin_it 19 May 2009 11:33:38AM *  0 points [-]

I'd unify your "So What" with "meaningless" into a single category "does not constrain observations". Math passes the test inasmuch as it constrains observations about outcomes of proof checking.

But now some people will complain (are already complaining) that we reject the majority of humanity's thought.

Comment author: RolfAndreassen 27 May 2009 03:57:57PM 0 points [-]

Again, it does seem observable that nobody has explained why three is prime and four isn't. (I'm not sure you can actually use 'why' in an intelligible way here; possibly I'm being confused by non-mathematical language applied to math.) It's not an observation I would expect anyone to care about, and possibly it may be the equivalent of nobody having seen something invisible; but it does seem to make a statement that could in principle have gone the other way.

Comment author: thomblake 27 May 2009 04:29:05PM 0 points [-]

I agree that I'm not sure how you're intending to use 'why' here, and I'm pretty sure there's a good answer for any particular meaning.

To answer the question in a possibly unsatisfactory way, 3 is prime because it is a natural number which has exactly two distinct natural number factors, whereas 4 is not prime because it has more than two distinct natural number factors.

Comment author: Annoyance 19 May 2009 02:25:14PM 0 points [-]

What humanity does isn't "thought", by and large. Not in any meaningful sense. It's mostly the expression of prejudices combined with associational triggers and repeating what others say.

Part of becoming an effective thinker is recognizing that unpleasant realities need to be acknowledged even when we'd prefer they weren't the case. For people living in this time, in this place, one of those truths is that we're surrounded by blatant stupidity. Even worse, we're blatantly stupid a lot of the time.

Deriving those conclusions from the evidence, and then acknowledging their validity, is one of the basic necessary steps to becoming better. No problem can be (expected to be) solved if we deny its reality.

Comment author: Matt_Simpson 17 May 2009 10:53:41PM 5 points [-]

I have to say that the positivist critique that "it's all meaningless" is seductive and it may well be correct - it feels like the words have meaning, but when you try to parse the sentence the feeling quickly disappears.

The problem is, this isn't very useful for talking about specific errors and how to avoid them. Many of the statements on that list looked rather meaningless to me, but to someone who believes in one of these statements, there are some underlying beliefs or confusions that need to be addressed before the "meaningless critique" will have any effect. At this point, pointing out the meaninglessness of their pet statement becomes entirely superfluous.

Comment author: Jack 18 May 2009 01:33:03AM 9 points [-]

There is a pretty innocent reason why those passages look meaningless: they're all jargon-filled, and when you don't know what the jargon means you will likely fail to understand what the passages mean. A paper on quantum chromodynamics is going to look meaningless to someone who doesn't know what quarks, quanta, flavor symmetry, gluons, hadrons, chirality etc. refer to. Similarly, I assume most people here have no idea what Plotinus means by "Being", "Essence", "Intellectual-Principle", "form" etc. I've done course work on Neo-Platonism and I don't remember what all of that was about. The same goes for the other passages.

Now Plotinus in particular might still be meaningless, since some of that jargon is actually meant to refer to real things that he thinks exist. And insofar as he is referring to non-existentials, whether or not the passage is meaningful depends on your philosophy of language (it is either false, meaningless or non-propositional).

Occasionally you find an analytically trained philosopher working on continental subject matter and they tend to assure me that the jargon and unconventional usage actually DO mean things. What does happen, I think, is that the jargon and unconventional language gets abused by stupid people who don't really understand the original philosopher but try to use their language. Since the language is so hard to parse in the first place it ends up being pretty easy for a charlatan to survive. Particularly if the charlatan isn't actually working in a philosophy department where there are people to challenge her.

In that vein, I don't think "bad continental philosophy" consists in Foucault and leading figures like him but many of their insipid followers on the continent and off who were never trained to express themselves clearly and logically.

This is why all philosophers should be trained in the analytical tradition, even if they want to work in other areas.

Comment author: jimrandomh 18 May 2009 01:51:42AM 1 point [-]

There is a pretty innocent reason why those passages look meaningless: they're all jargon-filled, and when you don't know what the jargon means you will likely fail to understand what the passages mean.

No, the passages given in the article have much deeper problems than just the jargon. The jargon only serves to defend these texts from criticism; because they're difficult to understand, anyone who says that these passages are wrong or mere gibberish can be accused of not understanding them. This defense works even if the critic understands the text perfectly.

Comment author: Jack 18 May 2009 02:33:25AM 4 points [-]

Uh, maybe. I'm willing to hear arguments to that effect. But you didn't give one.

I think Plotinus is definitely wrong, I don't know enough about Hegel to form an opinion, and I disagree with what I know of Foucault. But that doesn't make what they wrote meaningless.

Comment author: jimrandomh 18 May 2009 02:48:11AM 0 points [-]

Arguments to what effect? Are you objecting to my claim that "you don't understand" is used inappropriately to defend bad philosophy, to the claim that jargon makes it easier to do so, or to my claim that the passages have deeper problems?

Comment author: Jack 18 May 2009 03:17:50AM *  5 points [-]

Sorry, I should be specific. I don't think the passages, or the writing of these philosophers and the well-known continental philosophers generally, is gibberish. I think the reason people think they are gibberish is the jargon. I would like to see an argument for why I should consider them gibberish for reasons other than jargon I don't understand.

And since I hold that the jargon is meaningful, I don't think that the jargon "only" serves to defend the texts from criticism (did you really mean "only"?). I also deny that a critic who understands the text perfectly would argue that the text is meaningless– but that issue will be addressed by the argument I ask for above.

(Note: Of course there are deeper problems with these passages. But those problems don't have anything to do with the syntactic rules for sentence formation or semantic rules for word usage. In other words, the problem isn't that they're gibberish.)

Comment author: jimrandomh 18 May 2009 03:38:37AM *  1 point [-]

I define "gibberish" to mean "difficult to understand and entirely or almost entirely false or meaningless". Since you have said you think Plotinus and Foucault are wrong, and I think we can agree that they're at least somewhat obfuscated, then we must have different definitions. What's yours?

Comment author: Jack 18 May 2009 04:50:02AM 4 points [-]

I define gibberish as "difficult to understand and entirely or almost entirely meaningless". I think Plotinus and Foucault are "difficult to understand and entirely or almost entirely false". A statement is meaningless if it either fails to follow rules of syntax, i.e. "Running the the snacks on quickly!" or semantics, i.e. "Green ideas sleep furiously."

The distinction is actually pretty important. If you know something is meaningless then you can move on, but you can't decide something is false without first considering the argument, obfuscated or not.

There is some middle ground when it comes to arguments about things that don't exist. The trinity argument (and probably Plotinus) appeals to something that doesn't exist, and so it says things that would be meaningful if the holy trinity were real but can't really be evaluated since there is no such thing. Obviously there is no reason for you to care much about this argument. But I don't think Hegel, Foucault or Heidegger and the other usual suspects are talking about things that don't exist.

Comment author: saturn 18 May 2009 08:50:13PM *  6 points [-]

Syntax does rules necessarily broken imply meaninglessness not.

Comment author: Jack 19 May 2009 01:24:29AM 2 points [-]

Semantic rules aren't holding knives to the throat of meaning either.

So yeah, it is more complicated than what I said before because our brain is pretty good at fixing broken sentences with context. Rules for context and pragmatics should also be included in requirements of meaningfulness. My bad for missing that.

Comment author: cousin_it 18 May 2009 02:31:34PM *  3 points [-]

The word "exist" confuses you. Does three exist? Maybe yes, maybe no; what real-world consequences would arise from three existing or not? If a tree falls in the forest, etc.

Humanity to date knows two families of statements that appear to possess truth values independent of the listener's psychology:

1) Experimental results, objectively verifiable by repeating the experiment.

2) Axiom-based mathematics, objectively verifiable e.g. by proof checking software.

Of course people can make personally or culturally meaningful statements that don't fall into type 1 or 2. Just don't delude yourself about their universal applicability or call them "science".
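cousin_it's second category — mathematics objectively verifiable by proof checking — can be illustrated with a toy checker. This is a minimal sketch, not any real proof assistant: formulas are nested tuples, `('->', A, B)` stands for "A implies B", and the only inference rule supported is modus ponens.

```python
# Toy proof checker: a proof is a list of formulas, each of which must be
# an assumed axiom or follow by modus ponens from two earlier lines.

def check_proof(lines, axioms):
    """Return True iff every line is an axiom or follows by modus ponens."""
    proved = []
    for formula in lines:
        ok = formula in axioms
        if not ok:
            # Modus ponens: from A and ('->', A, formula), conclude formula.
            for a in proved:
                if ('->', a, formula) in proved:
                    ok = True
                    break
        if not ok:
            return False
        proved.append(formula)
    return True

P, Q = 'P', 'Q'
axioms = [P, ('->', P, Q)]
print(check_proof([P, ('->', P, Q), Q], axioms))  # True: a valid derivation
print(check_proof([Q], axioms))                   # False: Q alone doesn't follow
```

The point is only that this kind of verification is mechanical: the checker needs no opinion about what 'P' means.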

Comment author: Jack 18 May 2009 04:24:51PM *  5 points [-]

First, the word exist does not confuse me any more than it confuses anyone else. If you think it does, you should say why, since it wasn't explained in the previous post. The ontological status of numbers is a classic and ongoing philosophical dispute. Whether there are real-world consequences to the question, I don't know, but even if there aren't it does not follow that the question has no truth value.

Experimental results don't verify anything, they either falsify or fail to falsify huge sets of different scientific propositions. When an experimental test of a hypothesis comes up false one can dismiss the hypothesis or one can dismiss any number of auxiliary assumptions that you had when you made your hypothesis. It is the job of scientists to find the best interpretation of experimental results according to criteria such as parsimony, consistency, usefulness, etc. But scientific theories are better understood as best working interpretations not objectively verified truths that exist independent of human interpretation. Metaphysics uses the exact same criteria to try and figure out the best interpretations with regard to other issues for which experiments are sometimes relevant but often not.

Also, axiom-based math can't really be addressed by proof checking software since you can't program proof-checking software before discovering some axiom based mathematics. Plus it isn't like we started believing math was true 60 years ago. We figured it out because our vulnerable, biased, human brains happen to have considerable abilities for ascertaining the truth.

Anyway, we also know things based on non-experimental observation and data gathering. This includes non-scientific things like whether or not there is a car on the street, as well as the less experimental sciences like astronomy, linguistics and economics. Knowledge in linguistics and economics is certainly somewhat more precarious than in physics, since in the former fields it is by turns often impossible or unethical to run experiments. But that doesn't mean the insights in these fields aren't useful. I have no problem calling them sciences.

Of course there are the other so-called analytic truths- the whole set of possible tautologies one can make with natural language and entailment relations between categories. Altogether, I think there are quite a few more statements that possess truth values than just experimental science and axiomatic mathematics and they all involve human interpretation.

This isn't a reason to be frustrated, it just means we don't get to take an aerial picture of the terrain in making our map, we've got to figure it out by making best guesses according to limited information.

Finally, so what if some philosophy is simply personally and culturally meaningful statements? That isn't a reason to reject them as bad thinking.

Comment author: jimrandomh 18 May 2009 03:41:28PM 0 points [-]

So you maintain that anything which follows a few syntactic and semantic laws cannot be gibberish? I disagree; text can have meaning and still be gibberish. Consider a sequence of words drawn uniformly at random from a dictionary, then slotted into a repeating template like (noun) (verb) (article) (adjective) (noun). The template ensures that no rules of syntax are violated. A few constraints on the vocabulary can ensure there are no egregious violations of semantic rules, like green ideas and furious sleeping. Restrict the vocabulary to a few hundred concrete words and you can even ensure that every sentence makes a testable prediction. But it's definitely gibberish.
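The construction described above is easy to realize. A minimal sketch, with invented vocabulary lists purely for illustration — every output sentence is syntactically well-formed and semantically unobjectionable, yet the stream as a whole is gibberish:

```python
import random

# Random words slotted into a fixed (noun) (verb) (article) (adjective)
# (noun) template, as in the construction above.
nouns = ["dog", "stone", "river", "lamp", "bird"]
verbs = ["strikes", "carries", "follows", "touches"]
articles = ["the", "a"]
adjectives = ["red", "heavy", "wet", "small"]

def gibberish_sentence(rng):
    """Return one grammatical but randomly assembled sentence."""
    return " ".join([
        rng.choice(nouns).capitalize(),
        rng.choice(verbs),
        rng.choice(articles),
        rng.choice(adjectives),
        rng.choice(nouns),
    ]) + "."

rng = random.Random(0)
for _ in range(3):
    print(gibberish_sentence(rng))
```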

Comment author: Jack 18 May 2009 04:56:52PM 2 points [-]

Well, there are a lot of semantic rules and plenty that we haven't formalized. So I'm not convinced anyone now alive could write such a program. But I'm not a programmer, so maybe someone has proved me wrong. However, if they were successful I don't think I would consider the result gibberish– especially if each sentence made a testable prediction. In this case wouldn't some of the predictions be true? If so then it is clear that your definition is not broad enough.

That's troubling, since I had already concluded your definition was too broad because it seemed to include important but complex and falsified scientific claims.

Comment author: ShardPhoenix 18 May 2009 04:17:57AM *  2 points [-]

While I mostly agree with the article, I don't think the Foucault example given at the start is entirely bad - it just seems like a long-winded warning against confusing the map with the territory (or more specifically against trying to hammer a square territory into a pre-conceived round map).

Comment author: PhilGoetz 17 May 2009 05:13:27PM *  3 points [-]

The history of philosophy can't really have been one of thousands of years of nearly unrelenting adoration of stupidity. What probably happened is that philosophers became popular only if their ideas were simple enough and appealing enough. There is a bandpass filter on philosophy, and it has both a low and a high cutoff.

We propagate knowledge by collective judgements about it. In fields where we can't eliminate bad ideas by experiment, both the very worst and the very best ideas must be rejected. The requirement that an influential philosopher appeal to a large group of philosophers guarantees that relatively simplistic, self-aggrandizing or at least inoffensive crap with enough fuzziness to give one leeway in how to interpret it will be favored over careful, complex, a-polite ideas.

I recently looked at a bunch of my grad-school AI textbooks. It made me ill to think how many years I wasted studying an entire discipline filled with almost nothing but knowledge that has so far proven useless to me across a wide range of problems and disciplines for anything other than writing computer games - and useful there only because you can scale the game down and restrict its environment until the techniques work. Is this a different way of going wrong than the philosophers, or is it the same thing? Many of the bad-old-fashioned-AI (BOFAI) ways of doing things are quite difficult: You can't accuse Kripke or Quine of being simplistic.

I wonder if the internet can provide a way for thinkers of the highest quality to find each other, and pass on ideas to each other that would go over the head of the larger professional bodies. I wonder if these ideas would influence the world, or remain useless in the hands of their brilliant but uninfluential custodians.

However, my experience on LW has shown that the best and brightest people are still very bad at conveying even relatively simple ideas to each other.

I have also seen instances where nearly an entire field is making some elementary error, which people outside that field can see more clearly, but which they can't communicate to people in that field because they would have to spend years learning enough about the field to write a paper, probably with half a year's worth of experimental work, and not get rejected, even if their insight is something that could be communicated in a single sentence. I wish there were some Twitter version of Science, that published only pithy, insightful comments, unsubstantiated by experiment. But since I've also seen cases where researchers spent decades gathering data and publishing critiques in their field and getting no traction, this alone is not enough.

How can we use the internet to recognize good ideas and get them to the people who can use them? Cross-discipline reputation brokers could be part of the solution.

Comment author: jimrandomh 17 May 2009 08:34:24PM 6 points [-]

What probably happened is that philosophers became popular only if their ideas were simple enough and appealing enough.

On the contrary, philosophers became popular only if their ideas were complicated enough to fill a book. The ideas that were simple enough to be true were also too short to publish.

Comment author: PhilGoetz 17 May 2009 08:50:06PM 1 point [-]

An interesting possibility. (Nitpick: "Simple enough to be true" implies that complex ideas can't be true. This is wrong.)

Can you give an example of a simple but non-obvious truth that was available but passed over in philosophy?

Comment author: AllanCrossman 17 May 2009 08:59:58PM 0 points [-]

What do you mean by "available"?

Comment author: PhilGoetz 17 May 2009 10:33:44PM 0 points [-]

E.g., I'm not interested in hearing that medieval philosophers ignored the idea that the motion of the planets is governed by the same laws that govern the motion of bodies on earth.

Comment author: AllanCrossman 18 May 2009 12:01:02PM *  1 point [-]

So, are we looking for something which is:

  • Simple,
  • True,
  • Not obvious,
  • Was claimed as true by someone or other,
  • But mostly ignored?

Perhaps Aristarchus and his heliocentrism would fit the bill (while not strictly true, it was truer than the alternative).

Comment author: ChrisG 18 May 2009 07:51:34AM 4 points [-]

I have also seen instances where nearly an entire field is making some elementary error, which people outside that field can see more clearly, but which they can't communicate to people in that field because they would have to spend years learning enough about the field to write a paper, probably with half a year's worth of experimental work, and not get rejected, even if their insight is something that could be communicated in a single sentence.

I for one would be interested in hearing these sentences, and also which fields you feel are being held back by simple errors of logic. The margins here are quite large ;).

Comment author: PhilGoetz 18 May 2009 11:00:24PM *  7 points [-]

Some examples off the top of my head:

Rodney Brooks and others published many papers in the 1980s on reactive robotics. (Yes, reactive robotics are useful for some tasks; but the claims being made around 1990 were that non-symbolic, non-representational AI was better than representational AI at just about everything and could now replace it.) Psychologists and linguists could immediately see that the reactive behavior literature was chock-full of all the same mistakes that were pointed out with behavioral psychology in the decade after 1956 (see eg. Noam Chomsky's article on Skinner's Verbal Behavior).

To be fair, I'll give an example involving Chomsky on the receiving end: Chomsky prominently and repeatedly claims that children are not exposed to enough language to get enough information to learn a grammar. This claim is the basis of an entire school of linguistic thought that says there must be a universal human grammar built into the human brain at birth. It is trivial to demonstrate that it is wrong, by taking a large grammar, such as one used by any NLP program (and, yes, they can handle most of the grammar of a 6-year-old), and computing the amount of information needed to specify that grammar; and also computing the amount of information present in, say, a book. Even before you adjust your estimate of the information needed to specify a grammar by dividing by the number of adequate, nearly-equivalent grammars (which reduces the information needed by orders of magnitude), you find you only need a few books-worth of information. But linguists don't know information theory very well.
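The back-of-envelope argument above can be made concrete. Every number below is an assumed placeholder chosen for illustration, not a measurement, but the orders of magnitude show the shape of the argument:

```python
import math

# Suppose a grammar adequate for a six-year-old needs ~10,000 rules,
# each specifiable in ~40 bits (categories, ordering, features).
# Both figures are illustrative assumptions.
rules = 10_000
bits_per_rule = 40
grammar_bits = rules * bits_per_rule          # 400,000 bits

# English text carries very roughly 1 bit of entropy per character
# (Shannon's classic estimate was around 1 bit/char); assume a
# children's book of ~100,000 characters.
bits_per_char = 1.0
chars_per_book = 100_000
book_bits = bits_per_char * chars_per_book    # 100,000 bits per book

books_needed = math.ceil(grammar_bits / book_bits)
print(books_needed)  # a few books-worth, even before dividing by the
                     # number of adequate, nearly-equivalent grammars
```

Under these assumed figures, a handful of books already carries as much information as the grammar requires.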

Chomsky also claims that, based on the number of words children learn per day, they must be able to learn a word on a single exposure to it. This assumes that a child can work on only one word at a time, and not remember anything about any other words it hears until it learns that word. As far as I know, no linguist has yet noticed this assumption.

In the field of sciencology?, or whatever you call the people who try to scientify science (eg., "We must make science more efficient, and only spend money discovering those things that can be successfully utilized"), there was an influential paper in 1969 on Project Hindsight, which studied the major discoveries contributing to a large number of US weapons systems, and asked whether each discovery was done via basic research (often at a university), or by a DoD-directed applied R+D program specific to that weapon system. They found that most of the contributions, numerically, came from applied engineering specific to that weapon system. They concluded that basic research is basically a waste of money and should not have its funding increased anymore. Congress has followed their advice since then. They ignored 2 factors: 1) According to their own statistics, universities accounted for 12% of the discoveries, but only 1% of the cost. This by itself shows basic research to be more cost-effective than applied research. 2) They did not factor in the fact that the results of each basic research project were applied to many different engineering projects; but the results of each applied project were often applied only to one project.
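The cost-effectiveness point in factor 1 follows directly from the two shares cited above (12% of discoveries for 1% of the cost):

```python
# Discoveries-per-unit-cost implied by the Project Hindsight shares
# quoted in the comment above.
uni_share_of_discoveries = 0.12
uni_share_of_cost = 0.01
applied_share_of_discoveries = 1 - uni_share_of_discoveries  # 0.88
applied_share_of_cost = 1 - uni_share_of_cost                # 0.99

uni_yield = uni_share_of_discoveries / uni_share_of_cost              # 12 per unit cost
applied_yield = applied_share_of_discoveries / applied_share_of_cost  # ~0.89

print(uni_yield / applied_yield)  # basic research ~13.5x more discoveries per dollar
```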

NASA has had some projects to try to notify ETs of our presence on Earth. AFAIK they're still doing it? They should have asked transhumanists what the expected value of being contacted by ET is.

Comment author: PhilGoetz 19 May 2009 12:54:20AM *  5 points [-]

Though you also see cases where people from the outside do get their message across, repeatedly, and fail to make an impact. Something more is going wrong then.

The FDA, in its decision whether to allow a drug on the market, doesn't do an expected-value computation. They would much rather avoid one person dying from a reaction than save one person's life. They know this. It's been pointed out many times, sometimes by people in the FDA. Yet nothing changes.

EDIT: Probably a bad example. The FDA's motivational structure is usually claimed to be the cause of this.

Maybe when one particular stupidity thrives in a field, it's because it's a really robust meme for reasons other than accuracy. There are false memes that can't be killed, because they're so appealing to some people. For example, "Al Gore said he invented the Internet" - a lie repeated 3 times by Wired that simply can't be killed, because Republicans love it. "You only use 1/10th of your brain" - people love to imagine they have tremendous untapped potential. "Einstein was bad at math" - reassures people that being good at math isn't important for physics, so it's probably not important for much.

So, for example, NASA keeps trying to get ET's attention, not because it's rational, but because they read too many 1950s science fiction novels. The people behind project Hindsight and Factors in the Transfer of Technology wanted to conclude that basic research was ineffective, because they were all about making research efficient and productive, and undirected exploratory research was the enemy of everything they stood for. Saying that humans have a universal grammar is a reassuring story about the unity of humanity, and also about how special and different humans are. And the FDA doesn't picture themselves as bureaucrats optimizing expected outcome; they picture themselves as knights in armor defending Americans from menacing drugs.

Comment author: Nick_Tarleton 19 May 2009 04:49:16AM 3 points [-]

This, and your comment below, should be top-level posts IMO.

Comment author: Douglas_Knight 19 May 2009 03:05:00PM 1 point [-]

These are interesting examples, but they're not what I envisioned from your original comment. (The Brooks example might be, but it's the vaguest.)

A problem is that people gain status in high-level fights, so there is a lot of screening of who is allowed to make them. But the screening is pretty lousy and, I think, most high-level fights are fake. Are Chomsky's followers so different from other linguists? Similarly, Brooks may have been full of bluster for status reasons that were not going to affect how the actual robots worked. It may be hard for outsiders to tell what's really going on. But the bluster may have tricked insiders, too.

Also, "You don't understand information theory," while one sentence, is not a very effective one.

Comment author: steven0461 19 May 2009 10:57:02AM *  1 point [-]

NASA has had some projects to try to notify ETs of our presence on Earth. AFAIK they're still doing it? They should have asked transhumanists what the expected value of being contacted by ET is.

People are still doing it, not NASA though. Their rationalizations can get pretty funny. It seems stupid but rather harmless; it's hard to find a set of assumptions under which there's a nontrivial probability that it matters.

Comment author: Douglas_Knight 18 May 2009 02:43:57AM 3 points [-]

I have also seen instances where nearly an entire field is making some elementary error, which people outside that field can see more clearly, but which they can't communicate to people in that field because they would have to spend years learning enough about the field to write a paper, probably with half a year's worth of experimental work, and not get rejected, even if their insight is something that could be communicated in a single sentence.

I think that you're saying that the outsiders can't be published without learning the jargon and doing experiments. But publication is not the only avenue. If it really only takes a single sentence, the outsider should be able to find an insider who will look past jargon and data and listen to the sentence. Then the insider can tell other insiders, or tack it onto a publication, or do the new experiments.

If jargon is not just a barrier to publication, but also to communication it's a lot harder to find a sympathetic insider, but it hardly seems impossible. Also, in that situation, how can outsiders be sure they understand?

These situations sound like there is a much bigger problem than the elementary error, perhaps that the people involved just don't care about seeking truth, only about having a routine.

Comment author: MrShaggy 18 May 2009 09:55:56PM 0 points [-]

"These situations sound like there is a much bigger problem than the elementary error, perhaps that the people involved just don't care about seeking truth, only about having a routine."

Well, a large part of it is funding/bureaucracy/grants. I tend to think that's the main part in many of these fields. Look at Taubes's Good Calories, Bad Calories for a largely correct history of how the field of nutrition went wrong and is still going at it pretty badly. There is a growing number of insiders doing research not on the "wrong" path, and there has been all along, but they never got strong enough to challenge the "consensus", and that's due not just to the field itself but to forces outside the field (think tanks, government agencies, media reports). So even being published and well-known isn't enough to change a field.

Comment author: RichardKennaway 19 May 2009 02:28:00PM 3 points [-]

I wonder if the internet can provide a way for thinkers of the highest quality to find each other, and pass on ideas to each other that would go over the head of the larger professional bodies. I wonder if these ideas would influence the world, or remain useless in the hands of their brilliant but uninfluential custodians.

TED.

Comment author: CannibalSmith 17 May 2009 07:18:38PM 2 points [-]

[..] has so far proven useless to me [..]

It's just you.

Comment author: PhilGoetz 17 May 2009 08:41:52PM 2 points [-]

I was talking about the content of artificial intelligence books published in the 1980s. None of the examples you gave involved anything from the GOFAI school of artificial intelligence; nothing that would have been in those books.

Comment author: PhilGoetz 17 May 2009 10:30:09PM 2 points [-]

Comment author: hrishimittal 17 May 2009 12:36:09PM 2 points [-]

Genetic engineering aside, given a large aggregation of human beings, and a long time, you cannot reasonably expect rational thought to win. You could as reasonably expect a thousand unbiased dice, all tossed at once, all to come down 'five,' say. There are simply far too many ways, and easy ways, in which human thought can go wrong. Or, put it the other way round: anthropocentrism cannot lose.

That's the same argument against rationalists winning that has been seen many times on LW. However, it is based on hopelessness and fear, rather than on knowledge of even a single failure of an organised attempt at large-scale rational winning. So, while Stove recognises the obviously wrong thoughts of philosophers, he himself goes wrong in the passage above by making a bad probability estimate.

So just to be clear, we are saying that the probability of a significant number of people turning to rational thinking is greater than the probability of winning a lottery, right?

Comment author: Annoyance 17 May 2009 06:28:59PM 2 points [-]

The history of philosophy can't really have been one of thousands of years of nearly unrelenting adoration of stupidity.

I often see statements like that. "This couldn't possibly be the case", "that can't really happen", etc.

The first question we should ask ourselves when we see such statements: Why?

Usually, the person speaking is dismissing possibilities and potentialities out of hand for one of a variety of reasons, rather than having a valid and justifiable reason for discarding the contingency.

And even when there are good reasons, it's important to remember that we can always be wrong. Conservation of mass-energy is an incredibly useful and extraordinarily broad-in-application principle, and showing that a proposed idea in physics or engineering violates it is a powerful critique, but it's possible that it's not really the case.

Comment author: phane 17 May 2009 11:56:17AM 1 point [-]

I don't like this paper. It's wholly scathing for no reason other than to justify ignoring all of philosophy. Some philosophy is valuable and some is not, and of his 40 statements about three, I'd say 6 of them are claims I would take seriously and would hear arguments for, were I interested in the nature of three.

Generally, continental philosophy is trash, but I wouldn't throw out the baby with the bathwater.

Comment author: PhilGoetz 17 May 2009 05:54:12PM *  6 points [-]

Analytical philosophy is a quest for truth; continental philosophy is a way to get laid. (I hear it works better in France.)

Comment author: Tyrrell_McAllister 17 May 2009 07:27:24PM 1 point [-]

But it bears noting explicitly that many of his examples represent positions from within analytic philosophy. For example, "23 The proposition that 3 is the fifth root of 243 is a tautology, just like 'An oculist is an eye-doctor.'"

Comment author: Eliezer_Yudkowsky 17 May 2009 09:51:14PM 2 points [-]

I'd agree with that, actually; I'd just note that tautologies have to be empirically observed somehow, and also that the case of the oculist and the eye-doctor is nowhere near as clear-cut.

Comment author: Jack 18 May 2009 01:09:33AM 3 points [-]

The piece about tautologies having to be empirically observed is one of the most bizarre posts I've ever read by you. It is so strange that I'm not really sure there is anything I could say that would change your mind, if you really think you could be convinced that 2+2=3 in that way. I can't even tell where you went wrong. Do you also hold that the identity relation has to be empirically observed? Could you be convinced that 4=3? That 3 doesn't = 3? Do you believe you could be convinced that triangles on Euclidean planes are round? Do you not trust modus ponens and modus tollens? How does one even empirically observe tautologies in symbolic logic?

Comment author: JGWeissman 18 May 2009 09:47:36PM 2 points [-]

That 2+2=4 is a fact about a mathematical system that exists independently of the physical universe, including us humans that decided to use those symbols to express that fact. That fact is in the territory. But, in order to interact with the physical universe, it has to be discovered by some physical system that explores logical conclusions, such as our brains. This exploration builds our map of the territory. Our uncertainty about the tautological statement does not reflect some vagueness in the territory of logic, but our uncertainty about the workings of our physical brains, and their ability to build maps that reflect the territory.

Problems of logic have 100% correct answers, but our physical brains cannot become 100% entangled with those correct answers. It is observation, which can include abstract observation of our own logical reasoning, that gives us increasing entanglement, approaching but never reaching 100%.

Comment author: Vladimir_Nesov 18 May 2009 11:03:05PM 1 point [-]

Whatever you could possibly know and value about reality can only exist independently of the physical universe. (Huh?) If your uncertainty about math doesn't indicate uncertainty of the math, and it's an argument for math being otherworldly, it's also an argument for the territory being otherworldly, which is clearly a confusion of terms.

And so you should bring the math back where it belongs, an aspect of the territory.

Comment author: JGWeissman 18 May 2009 11:21:36PM 0 points [-]

Whatever you could possibly know and value about reality can only exist independently of the physical universe.

That is not what I am saying. I mean that things that we think of as tautologies, or purely logical truths, which are true no matter what universe we are in, exist independently of the physical universe. Facts about the physical universe are not in this class. Indeed, the entanglement of our physical brains with these logical truths is an example of a fact about the physical universe that, of course, depends on the universe.

If your uncertainty about math doesn't indicate uncertainty of the math, and it's an argument for math being otherworldly...

You have my argument backwards. I first make the point that facts about math are not facts about the physical universe, to support the claim that the uncertainty we have about math, which exists in our heads, in our physical universe, does not exist in math itself. The argument does not work the other way: there are plenty of instances of uncertainty in our minds that are not uncertainty in the things elsewhere in the physical universe that they are about.

My comment was an attempt to explain why we need observation to believe things that are objectively true regardless of the world we exist in. Basically, we need evidence that our brains, existing in the physical world, are suitable for representing the logical truths.

Comment author: Jack 19 May 2009 12:16:55AM 0 points [-]

This is really helpful and I think I agree with all of it. I've just never understood "observation" to include my logical reasoning. If your position is that we know 2+2=4 by virtue of observing our own reasoning and not by virtue of any sensory data (information about the outside world) then I don't think that position is any different from the one I already hold. But is this Eliezer's position? His OB post made it sound like he could be swayed to think 2+2=3 as a result of external events mediated by his sensory perception of those events. That is what I objected to.

Comment author: JGWeissman 19 May 2009 05:24:32AM 1 point [-]

Well, I think that observations can be both our reasoning and sensory data.

Suppose you have a model* of your own accuracy at addition of integers, which is that you are 95% likely to get the correct answer, 2% likely to be one high, 2% likely to be one low, with the remaining 1% divided somehow amongst other possibilities. Then, when you actually observe that adding 2 + 2 gives you 4, this is Bayesian evidence with a likelihood ratio of 47.5 : 1 in favor of the theory that 2 + 2 = 4 compared to the theory that 2 + 2 = 3.

Now suppose you have a collection of pebbles, and your model of the pebbles claims that if you count out 2 distinct collections of pebbles, and then combine them and count the total, that the sum of the counts of the distinct collections is 90% likely to be the count of the combined collection, and is 4% likely to be one high, 4% to be one low, and 2% to be something else. And then you actually count out a collection of 2 pebbles, and another collection of 2 pebbles, and combine them, and when you count the combined collection you count 4 pebbles. This is Bayesian evidence with a likelihood ratio of 22.5 : 1 in favor of 2 + 2 = 4 as opposed to 2 + 2 = 3.
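
The Bayes-factor arithmetic in these two examples is easy to check mechanically. A minimal Python sketch, using only the made-up error models from the comment (note the mental-arithmetic ratio comes out to 0.95/0.02 = 47.5):

```python
# Likelihood ratios for "2 + 2 = 4" vs "2 + 2 = 3", given that we
# observed the answer 4, under the comment's hypothetical error models.

# Mental addition: 95% correct, 2% one too high, 2% one too low.
p4_if_sum_is_4 = 0.95  # probability of observing "4" if the true sum is 4
p4_if_sum_is_3 = 0.02  # probability of observing "4" (one too high) if the true sum is 3
mental_ratio = p4_if_sum_is_4 / p4_if_sum_is_3

# Pebble counting: 90% correct, 4% one too high, 4% one too low.
p4_if_count_is_4 = 0.90
p4_if_count_is_3 = 0.04
pebble_ratio = p4_if_count_is_4 / p4_if_count_is_3

print(round(mental_ratio, 6))  # 47.5
print(round(pebble_ratio, 6))  # 22.5
```

The rounding only papers over floating-point noise; the exact ratios are 47.5 and 22.5.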

In both cases, belief in a logical proposition results from our belief that an observable system has some probability of reflecting logical truth. If, as in the example numbers that I made up just now, we believe that our reasoning process is more reliable than observations of our environment, then the result of our reasoning is stronger evidence, but it is still the same class of evidence.

* I have neglected the harder problem of simultaneously updating propositions about additions and propositions about a given system's probability of representing addition. That is, I have not explained where the models I asked you suppose you have really should come from.

Comment author: komponisto 18 May 2009 02:26:24AM 2 points [-]

It may be worth noting that Quine had a view similar to Eliezer's -- which Stove alludes to (dismissively) in the essay.

Comment author: Jack 18 May 2009 02:56:31AM *  1 point [-]

Thanks. That is worth noting. My recollection is that Quine denies the existence of analytic statements but doesn't go as far as to hold that tautological statements are just like regular empirical statements. Logical truths still have some kind of special status for Quine. Plus, I think his reasons for denying analytic truths had very little to do with actually being able to imagine a series of experiences that could change his mind about them: it is one thing to claim that such experiences are possible, and another thing to claim you have just described that set of experiences.

Finally, I remember thinking Quine was being silly, but it has been a while so I'm going to go read and come back.

Comment author: randallsquared 18 May 2009 04:05:29PM 0 points [-]

I don't think I'd read Eliezer's piece about tautologies having to be observed before, but it matches my pre-existing beliefs about its topic, and it seems so obvious that I'm left wondering how you think you got the understanding that 2+2=4, or that triangles on Euclidean planes are not round. Given that you got that understanding somehow, couldn't the same process give you the new understanding, assuming (for this argument) it was true?

Comment author: Jack 18 May 2009 05:09:30PM *  0 points [-]

This is certainly a strange divergence of intuitions. I think the story of how I came to know 2+2=4 goes like this: someone taught me that 2 meant -oo- and 4 meant -oooo-. Then someone probably told me that 2+2=4, but I don't think they would have needed to. I think I could easily have come to the conclusion myself, since given -oo- and -oo- I can count four dots. If pushing four objects together meant one of the objects disappeared, I would probably just stop pushing objects together and count in my head. If counting the objects made one of them disappear, I would be pretty damn frustrated, but I'm pretty confident I could realize that reality was changing as a result of a mental operation, not that I was counting wrong. Aside from being tortured with rats or Cardassian pain sticks, I don't see what would make me think that 2+2 didn't equal 4.

I'm not sure how to explain my thinking any better except to say that it is the same thinking that led generations of philosophers and mathematicians to conclude that mathematical knowledge is a different kind of knowledge than knowledge of our surroundings and the natural world. My reason is the reason Kant distinguished the analytic from the synthetic: a sense that a rational mind could figure these things out without sensory input.

Comment author: orthonormal 18 May 2009 06:29:42PM 4 points [-]

The trouble there is the claim of a rational mind, in my opinion. It's not logically necessary that our evolved brains, hacked by culture, are going to mirror reality in their most basic perceptions and intuitions.

The space of all possible minds includes some which have a notion of number and counting and an intuitive mental arithmetic, but for which 2 and 2 really do seem to make 3 when they think of it. These minds, of course, would notice empirical contradictions everywhere: they would put two objects together with two more, count them, and count four instead of three, when it's obvious by visualizing in their heads that two and two make three instead. Eventually, a sufficiently reflective mind of this type would entertain the possibility that maybe two and two do actually make four, and that its system of visualization and mental arithmetic are in fact wrong, as obvious as they seem from the inside. Switching "three" and "four" in this paragraph just illustrates how difficult accepting that hypothesis might actually be for such a mind.

The thing is, we ourselves are in this situation, not with arithmetic (fortunately, we receive constant empirical reinforcement that 2+2=4 and that our mental faculties for arithmetic work properly) but with our biases of thought. Things like our preferences and valuations seem to be rational and coherent, in that we can usually defend them all with arguments that look solid and persuasive to us. But occasionally this fiction becomes untenable, as when we are shown to have circular preferences in situations of risk and reward. As Eliezer put it in Zut Allais:

You want to scream, "Just give up already! Intuition isn't always right!"

Or, in this case, "Don't start by assuming that our minds work rationally whenever they see something as obvious! If this is true, it is an empirical fact; and you should be able to see the alternative as possible!"

Comment author: byrnema 18 May 2009 09:21:32PM *  0 points [-]

If this is true, it is an empirical fact; and you should be able to see the alternative as possible!

Indeed, 2+2=4 is only true in some contexts. For example, sometimes 1+1=1 -- in contexts where separate objects lose their distinct identity as soon as they are grouped. (Think of a particular object several times. How many times did you think of it? But how many objects did you think of?)
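
One way to make the "1+1=1" context concrete: if "grouping" means set union, identity collapses on grouping. A tiny Python illustration (the pebble labels are just stand-ins):

```python
# Grouping where separate mentions lose distinct identity: set union.
same_object = {"the one pebble I keep thinking of"}
combined = same_object | same_object  # "1 + 1"
print(len(combined))  # 1 -- one object, however many times it was grouped

# Ordinary grouping, where copies stay distinct: list concatenation.
print(len(["pebble"] + ["pebble"]))  # 2
```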

Later edit: It is interesting that such a benign comment would get 4 downvotes. Perhaps I understand this group well enough to guess why: the experiment I suggested is an entirely "internal" one; it provides no external proof of what I am suggesting. I think that a common reader here feels dismissive of, if not entirely antagonistic towards, knowledge that is internally generated. Personally, I have a preference for the knowledge that arises from internal experience.

Comment author: komponisto 19 May 2009 03:46:46AM 6 points [-]

I agree that the downvoting of this comment was overly harsh. My theory on why it occurred is different, and best illustrated by an example: if someone posted a comment saying "2+2=4 is only true in some contexts; in arithmetic modulo 3, 2+2=1", that comment would have been similarly downvoted.

However, let me be so bold as to say a word in defense of even that hypothetical commenter. Anyone mathematically sophisticated (including our downvoters) will agree that it is possible to construct a mathematical system in which 2+2 equals anything you like -- or, more precisely, for any symbol x, a system can be constructed in which the formula (string of symbols) "2+2 = x" is given the label "TRUE". Mod 3 arithmetic is an example for x = "1".
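
The mod-3 case is trivially checkable; a Python one-liner, just to make the "TRUE"-labelled formula concrete:

```python
# Arithmetic modulo 3: the formula "2 + 2 = 1" gets the label TRUE,
# because 4 leaves remainder 1 on division by 3.
print((2 + 2) % 3)  # 1
```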

Now, it is at this point that the downvoters protest: "But this is not the same thing as saying 2+2=1! All you've done is change the meaning of the symbols in the formula, such as '2' and '1'. Two plus two is still four, for the original meaning of those words. You're confusing the map and the territory. Downvoted!"

Well, the downvoters do have a point. But, at the same time, let me suggest that they're also making the same mistake as our poor beleaguered commenter!

What they've done, you see, is to make a leap from "Ordinary (i.e. non mod-3, etc.) Arithmetic accurately models certain physical phenomena" to something like "Ordinary Arithmetic is true in (or of) the physical world". Instead of saying what they mean, which is "the physical world is best modeled by a system that has '2+2=4' as a 'TRUE' formula", they say "2+2 is in fact equal to 4".

Small wonder that confusion arises about whether mathematical statements are "empirical" or not! "The physical world is best modeled by a system that has '2+2=4' as a 'TRUE' formula" is clearly an empirical claim. But what about 2+2 = 4, all by itself? When a mathematician at a blackboard proves that 2+2=4 in Ordinary Arithmetic (or, for Eliezer's benefit, that infinite sets exist in standard set theory), has he or she made a claim about physics? No! Not without the additional assumption that the formal system being used is in fact an accurate map of the territory! But the mathematician makes no such assumption; he or she (acting as a mathematician) is interested only in the properties of formal systems. (Yes, that's right: I'm advocating the view known as formalism here. The other well-known positions in the philosophy of mathematics, namely Platonism and intuitionism, suffer from map-territory confusion!)

Mathematical systems, like Ordinary Arithmetic or Mod-3 Arithmetic, are part of the map, not the territory. The facts of mathematics are, so to speak, cartographic, rather than geographic.

Comment author: Alicorn 18 May 2009 09:56:14PM 1 point [-]

How many words are in this list?

  • Duck
  • Duck
  • Goose
Comment author: steven0461 18 May 2009 09:28:17PM 1 point [-]

Doesn't that just mean that grouping doesn't always correspond to addition?

Comment author: byrnema 18 May 2009 09:17:12PM *  1 point [-]

Saying that 2+2=4 is a tautology in a certain axiomatic system defined with '+' means that you couldn't have anything but 2+2=4 in that system. It's simply mandatory, and a rational person could not wake up one day and be convinced that 2+2=3 within a self-consistent system that deduces 2+2=4.

While tautological truth is independent of observation (let's call it mathematical truth), it is dependent upon context (i.e., a self-consistent axiomatic system). Some mathematical truths in one axiomatic system are false in another. When we talk about whether a mathematical statement is true, we need to specify the context, and, in my opinion, in the most demanding definition of truth, the context is the real, actual, empirical world. So I agree with Eliezer that a mathematical tautology must be observed in order to be true.

When we humans talk about "2+2=4", it is because we have chosen arithmetic from an infinite number of possible axiomatic systems and given it a name and a set of agreed-upon symbols. Why did we do that? Because we observed arithmetic empirically. Obviously, addition is just one operation of infinitely many operations. The ones we have defined (multiplication, subtraction, addition mod n, taking the cardinality of subsets, etc.) usually have some empirical relevance. While we don't feel very comfortable thinking of those that don't (and this says something about the way we think), I have faith that if we were presented with a very strange set of observations, it would take a pretty short amount of time to train ourselves to think of the new operation as a "natural" one.

... I idly wonder if there is such a thing as a mathematical truth that could not be realized empirically, in any context, and if there would be any way of deducing its non-feasibility.

Comment author: Jack 19 May 2009 01:04:37AM 1 point [-]

Is saying "we could have a different axiomatic system" different from saying "2, 4, +, and = could all mean different things"? Of course we've only defined the operations and terms that are useful to us. I don't care about the naturalness of '+', only that once I know the meaning of the operations and terms the answer is obvious and indisputable.

Math isn't my field, so by all means show me how I'm wrong.

Comment author: RichardKennaway 18 May 2009 11:52:26AM 2 points [-]

Some philosophy is valuable and some is not

Can you give some examples of valuable philosophy, and why you judge it valuable? I incline to the view that ignoring all of philosophy is, to a first approximation, the right thing to do, and that there are very few exceptions worth making.

Comment author: Vladimir_Golovin 18 May 2009 02:01:08PM *  0 points [-]

Off the top of my head: Karl Popper, due to his influence on the scientific method. (Perhaps it's not as valuable today as it was back then, due to bayesianism, but still.)

Edit: Also, epistemology is a branch of philosophy.

Comment author: RichardKennaway 18 May 2009 03:39:54PM *  5 points [-]

Yes, historically, Popper performed a valuable service, by showing (imperfectly) what distinguishes science from nonsense. (Stove characterises Popper as someone who overreacted to the fact that scientists sometimes make mistakes, but that is less than his due.)

But it's interesting that one can say that about Popper, and a few other philosophers -- that they were at least partly right, and where they were wrong, they were at least wrong, rather than "not even wrong". They created something to be corrected and improved on, not trash to be thrown out.

A colleague in theoretical computer science once showed me a Ph.D. thesis that a logician of his acquaintance had sent him. He found it rather strange in form, compared with the sort of mathematical thesis he was accustomed to reading. I looked at it and laughed. It followed precisely the standard form for a thesis in philosophy. (I think Pirsig describes this in "Zen and the Art...") In chapter 1, the author states the subject he is going to address. In chapters 2 to 8 he writes a detailed history of everything of significance that has ever been written on the subject. In chapter 9 he introduces his own modest contribution, and in chapters 10 to 12 indicates how it relates to the history. A scientific thesis, on the other hand, begins with a similar chapter 1, surveys the previous literature in chapter 2, going back only far enough to establish the context for his work, and the remainder is all about the author's own work.

No subject is worth anything whose entry qualification is a thesis of the first form. It would be interesting to write press-release style digests of current papers in philosophy, summarising their findings in bite-sized chunks:

  • What we studied.

  • What we discovered.

  • How we discovered it.

  • Why it matters.

I don't think it could be done other than as a work of satire. Maybe it should be. Any iconoclastic grad students in philosophy want to give it a go?

Comment author: kim0 17 May 2009 09:14:28PM -1 points [-]

Interesting, but too verbose.

The author is clearly not aware of the value of the K.I.S.S. principle, or Ockham's razor, in this context.

Comment author: cousin_it 17 May 2009 12:16:50PM *  1 point [-]

I've long loved this piece, but today would file most of its examples simply under "getting carried away".

Items on the list that reminded me of Eliezer's writings: #19, #22, #32, #35. Indictment not intended.

Comment author: pdf23ds 18 September 2009 09:18:12PM 0 points [-]

Your link is now broken. Is there some other web archive of the chapter? I've saved a copy from the google cache, in case it matters to anyone.

Comment author: Vladimir_Nesov 18 September 2009 09:26:15PM 1 point [-]

The Internet Archive has a copy.

Comment author: nazgulnarsil 17 May 2009 01:48:03PM 0 points [-]

does #23 have a quick explanation or does it require a serious delve into abstract math?

Comment author: cousin_it 17 May 2009 02:29:17PM *  0 points [-]

One is a provable theorem in an axiomatic system, the other isn't.

Comment author: Drahflow 17 May 2009 09:10:19PM -1 points [-]

Regarding most of the lengthy examples of "philosophy" given by Stove:

Reading a text takes time, and time can be spent acquiring utilons. Hence reading a text is only worthwhile if the expected utilon gain from the additional knowledge is greater than the expected utilons from using the time differently. This approach kills most of his examples dead in their tracks for me. It also implies positivism: if a text does not generate utilons directly (e.g. the fun of reading fiction), then it needs to provide knowledge, in the form of testable statements about the world; otherwise, how would I generate utilons from the "knowledge"?
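
This decision rule amounts to a simple expected-value comparison. A sketch, with made-up names and numbers (nothing here comes from the comment beyond the rule itself):

```python
# Read a text only if the expected utilons from the knowledge gained,
# plus any direct enjoyment, beat the expected utilons of the best
# alternative use of the same time.
def worth_reading(expected_knowledge_utilons: float,
                  direct_enjoyment_utilons: float,
                  best_alternative_utilons: float) -> bool:
    return (expected_knowledge_utilons + direct_enjoyment_utilons
            > best_alternative_utilons)

print(worth_reading(0.0, 5.0, 3.0))  # True: fun fiction can pay its way
print(worth_reading(0.0, 0.0, 3.0))  # False: no fun and no testable knowledge
```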

Possibly, some thoughts are only valuable when more efficient methods of communication become available.