The Neglected Virtue of Scholarship

Eliezer Yudkowsky identifies scholarship as one of the Twelve Virtues of Rationality:

Study many sciences and absorb their power as your own. Each field that you consume makes you larger... It is especially important to eat math and science which impinges upon rationality: Evolutionary psychology, heuristics and biases, social psychology, probability theory, decision theory. But these cannot be the only fields you study...

I think he's right, and I think scholarship doesn't get enough praise - even on Less Wrong, where it is regularly encouraged.

First, consider the evangelical atheist community to which I belong. There is a tendency for lay atheists to write "refutations" of theism without first doing a modicum of research on the current state of the arguments. This can get atheists into trouble when they go toe-to-toe with a theist who did do his homework. I'll share two examples:

  • In a debate with theist Bill Craig, agnostic Bart Ehrman paraphrased David Hume's argument that we can't demonstrate the occurrence of a miracle in the past. Craig responded with a PowerPoint slide showing Bayes' Theorem, and explained that Ehrman was only considering prior probabilities, when of course he needed to consider the relevant conditional probabilities as well. Ehrman failed to respond to this, and looked as though he had never seen Bayes' Theorem before. Had Ehrman practiced the virtue of scholarship on this issue, he might have noticed that much of the scholarly work on Hume's argument in the past two decades has involved Bayes' Theorem (see the sketch after this list). He might also have discovered that the correct response to Craig's use of Bayes' Theorem can be found in pages 298-341 of J.H. Sobel’s Logic and Theism.

  • In another debate with Bill Craig, atheist Christopher Hitchens gave this objection: "Who designed the Designer? Don’t you run the risk… of asking 'Well, where does that come from? And where does that come from?' and running into an infinite regress?" But this is an elementary misunderstanding in philosophy of science. Why? Because every successful scientific explanation faces the exact same problem. It’s called the “why regress” because no matter what explanation is given of something, you can always still ask “Why?” Craig pointed this out and handily won that part of the debate. Had Hitchens had a passing understanding of science or explanation, he could have avoided looking foolish, and also spent more time on substantive objections to theism. (One can give a "Who made God?" objection to theism that has some meat, but that's not the one Hitchens gave. Hitchens' objection concerned an infinite regress of explanations, which is just as much a feature of science as it is of theism.)
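To see what was at stake in the Ehrman example, here is the odds form of Bayes' Theorem, the form in which most of the recent literature on Hume's argument is cast. (A generic textbook sketch, not a reconstruction of Craig's actual slide.)

```latex
\frac{P(M \mid E)}{P(\neg M \mid E)}
= \frac{P(M)}{P(\neg M)} \cdot \frac{P(E \mid M)}{P(E \mid \neg M)}
```

However small the prior odds of a miracle M, the posterior odds given evidence E also depend on the likelihood ratio on the right; the live scholarly question is whether testimony can ever make that ratio large enough to overcome the prior.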

The lesson I take from these and a hundred other examples is to employ the rationality virtue of scholarship. Stand on the shoulders of giants. We don't each need to cut our own path into a subject right from the point of near-total ignorance. That's silly. Just catch the bus on the road of knowledge paved by hundreds of diligent workers before you, and get off somewhere near where the road finally fades into fresh jungle. Study enough to have a view of the current state of the debate so you don't waste your time on paths that have already dead-ended, or on arguments that have already been refuted. Catch up before you speak up.

This is why, in more than 1000 posts on my own blog, I've said almost nothing that is original. Most of my posts instead summarize what other experts have said, in an effort to bring myself and my readers up to the level of the current debate on a subject before we try to make new contributions to it.

The Less Wrong community is a particularly smart and well-read bunch, but of course it doesn't always embrace the virtue of scholarship.

Consider the field of formal epistemology, an entire branch of philosophy devoted to (1) mathematically formalizing concepts related to induction, belief, choice, and action, and (2) arguing about the foundations of probability, statistics, game theory, decision theory, and algorithmic learning theory. These are central discussion topics at Less Wrong, and yet my own experience suggests that most Less Wrong readers have never heard of the entire field, let alone read any works by formal epistemologists, such as In Defense of Objective Bayesianism by Jon Williamson or Bayesian Epistemology by Luc Bovens and Stephan Hartmann.

Or, consider a recent post by Yudkowsky: Working hurts less than procrastinating, we fear the twinge of starting. The post attempts to make progress against procrastination by practicing single-subject phenomenology, rather than by first catching up with a quick summary of scientific research on procrastination. The post's approach to the problem looks inefficient to me. It's not standing on the shoulders of giants.

This post probably looks harsher than I mean it to be. After all, Less Wrong is pretty damn good at scholarship compared to most communities. But I think it could be better.

Here's my suggestion. Every time you're tempted to tackle a serious question in a subject on which you're not already an expert, ask yourself: "Whose giant shoulders can I stand on, here?"

Usually, you can answer the question by doing the following:

  1. Read the Wikipedia article on the subject, and glance over the references.
  2. Read the article on the subject in a field-specific encyclopedia. For example, if you're probing a philosophical concept, find the relevant essay(s) in The Routledge Encyclopedia of Philosophy or the Internet Encyclopedia of Philosophy or the Stanford Encyclopedia of Philosophy. Often, the encyclopedia you want is at your local library or can be browsed at Google Books.
  3. Read or skim-read an entry-level university textbook on the subject.
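Step 1 is even scriptable. Here is a minimal sketch, assuming Python with the third-party requests package and Wikipedia's public REST summary endpoint (which may change; illustration only):

```python
import requests

def wiki_summary(topic: str) -> str:
    """Fetch the lead summary of a Wikipedia article (step 1 above)."""
    # Endpoint assumed from Wikipedia's public REST API; subject to change.
    url = "https://en.wikipedia.org/api/rest_v1/page/summary/" + topic.replace(" ", "_")
    resp = requests.get(url, headers={"User-Agent": "scholarship-sketch/0.1"}, timeout=10)
    resp.raise_for_status()
    return resp.json().get("extract", "(no summary found)")

print(wiki_summary("Formal epistemology"))
```

The references themselves you will still want to glance over by hand; the point of the script is only to lower the activation energy of step 1.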

There are so many resources for learning available today that the virtue of scholarship has never in human history been so easy to practice.

155 comments

There are so many resources for learning available today that the virtue of scholarship has never in human history been so easy to practice.

Indeed.

I followed the links to In Defense of Objective Bayesianism by Jon Williamson and Bayesian Epistemology by Luc Bovens and Stephan Hartmann. They were expensive and unreviewed, and my book-reading heuristics generally require three independent suggestions before I start taking a book seriously.

A cheaper trick was to search the Stanford Encyclopedia of Philosophy for Bovens, Hartmann, and Williamson, which led to a nest of articles, some of which mentioned several of them. I listed and prioritized them using ad hoc scoring (points for mentioning each person and a good title). Hartmann jumped out because he had wider-ranging interests and was tapped to co-author the encyclopedia article "Models In Science". To reduce the trivial inconvenience of starting to read, I reproduce the suggested reading list with my ad hoc numerical priorities right here:

...
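The ad hoc scoring described in the comment above amounts to something like the following. This is a hypothetical reconstruction with invented weights and keywords, not the commenter's actual tally:

```python
# One point per target author an article's text mentions, plus a bonus
# point for an on-topic title. Authors from the comment; keywords invented.
AUTHORS = ("Bovens", "Hartmann", "Williamson")
KEYWORDS = ("bayes", "probability", "epistemology")

def priority(title: str, text: str) -> int:
    points = sum(1 for author in AUTHORS if author in text)
    if any(word in title.lower() for word in KEYWORDS):
        points += 1  # bonus for "a good title"
    return points

print(priority("Bayesian Epistemology", "... Bovens ... Hartmann ..."))  # -> 3
```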

I see that you have some experience applying the virtue of scholarship... :)

In general I'm very sympathetic to this point of view, and there are some good examples in your post.

One bad example, in my opinion, is Eliezer's recent procrastination post vs. the survey of "scientific research on procrastination." I read the chapter, and it appears to mostly cite studies that involved casual surveys, subjective description, and fuzzy labeling. Although there are many valid scientific endeavors that involve nothing but categorization (it is interesting to know how many species of tree frog there are and what they look and sound like even if we do not make any predictions beyond what is observed), categorization should at least be rigorous enough that we can specify what we expect to see with a modicum of precision.

When a biologist says that frogus neonblueicus has neon blue spots and chirps at 500 Hz, she will give you enough information that you can go to Costa Rica and check for yourself whether you have found one of the rare neonblueicus specimens. Although there will be some controversies around the edges, your identification of any particular frog will not correlate with your political biases or personal problems, and repeated observation of the ...

I very much agree with your final sentence.

Do you think Eliezer's post is more precise and useful than the controlled experiments published in peer-reviewed journals described in the book I linked to? I find that most writing on psychology is necessarily pretty soft, because the phenomena it is trying to describe are vastly more complicated than those of the hard sciences.

Now, that link is a must-read. I got through the whole first chapter before I could look away, and I'll be going back for the rest.

I have nothing against psychology or psychologists or social science in general -- AP Psych was my second favorite class in high school, my mom has a master's degree in it, my bachelor's degree is in political science, etc. It's noble, hard work, and we even have a little bit of knowledge to show for it.

As for the "controlled experiments" described in the book you linked to, I'm afraid I missed them, for which I apologize. I only saw descriptive papers. Maybe a page reference when you get a chance? Or just link directly to one or two of the studies or the abstracts?

lukeprog
Oops, you're right that my link does not mention controlled experiments. A few controlled experiments are instead mentioned in other sections of the book on techniques applicable to a greater variety of behavior change goals. Unfortunately, the author of Psychological Self-Help died last year, and his book has not been updated much in the past decade. Of course, more work on procrastination has been done in recent years, though I'm not sure if it is collected nicely anywhere.
shokwave
Is there one more step in there? Vastly more complicated -> science happens at much higher levels of abstraction -> high level abstract science is necessarily pretty soft? Because it seems to me psychology is necessarily soft because it doesn't want to turn into thirty years of neurobiology before it can talk about human behaviour.

Because it seems to me psychology is necessarily soft because it doesn't want to turn into thirty years of neurobiology before it can talk about human behaviour.

I hear this sentiment echoed a lot, and I have to admit to either not understanding it or strongly disagreeing with it.

Claiming that psychology has nothing useful to say about human behavior until it can be fully cashed out in neurobiology strikes me as mistaken in many of the same ways as claiming that ballistics has nothing useful to say about missile trajectories until it can be fully cashed out in a relativistic understanding of gravity.

Yes, our missiles don't always hit where we want them to, even after thousands of years of work in ballistics. But a deeper understanding of gravity won't help with that. If we want to improve our practical ability to hit a target, we have to improve our mastery of ballistics at the level of ballistics.

That isn't quite as true for psychology and neurobiology, granted: the insights afforded by neurobiology often do improve our practical ability to "hit a target." (Most strikingly in the last few decades, they have allowed us to develop an enormously powerful medical te...

shokwave
Ha, no, I'm on your side. Psychology can say useful things precisely because it isn't cashed out in neurobiology. The point I was making was that in order to have simple rules for brains, in all their hundred-billion-neuron complexity, you need to have softer edges on your predictions. I don't mean soft in any derogatory way. The concept I was aiming for was something like Eliezer's "well, you could simulate an aeroplane prototype from the quark-level up, but that's inefficient. The field of aerodynamics has good approximations for that macro-scale behaviour": even if you did drop psychology for neurobiology, describing human behaviour from the neuron-level up is inefficient. Psychology is soft in that it uses approximations of human behaviour, in order to be useful on human timescales. This is a good thing, made no worse by the fact that it necessitates some level of 'soft'ness. (I think the concern some people have with psychology is that they perceive it as too soft. Availability bias has them drawing generalisations from the describes-everything Freudian analysis, and so forth.)
TheOtherDave
(nods) Fair enough, and agreed throughout. I stand by my response in and of itself, but I sheepishly admit that it's not actually a response to you at all. Rereading your comment, I conclude that I was overtrained on the kind of objections I responded to, which you didn't actually make... sorry about that.
shokwave
Doesn't bother me in the slightest. In fact, I almost included another parenthetical: (Hard scientists probably do think hard is good and soft is bad, but that's because they're hard scientists. Soft scientists are probably sensitive to the negative connotations the hard scientists attach to these terms, because there is something of a rivalry between hard and soft science.) I guess you've studied some kind of soft science at a college or university? (I feel like I have overused the terms, though. I make it sound as if there is a strict divide, when in my mind it's an evenly distributed spectrum.)
billswift
I think it is more: Complication allows the researchers' biases to slip in more easily, since among other things any sort of cross-check is nearly impossible, which leads to softer results, especially when being evaluated by someone with different biases.
[anonymous]
The book you linked to is mostly irrelevant to the problem Eliezer was addressing. The author writes, "Both types of procrastinators dislike the chores they are avoiding." Eliezer's hypothesis is a contribution even if (like me) you don't think it true. Eliezer recognized that ordinary hyperbolic discounting can't explain procrastination such as he experiences, where he decidedly does not dislike the activities, which can't be described as "chores." His clever solution is to apply hyperbolic-discounting considerations to mental acts. I don't think it's accurate to say Eliezer posted in ignorance of the literature on procrastination. Everything the book you linked to mentions is well-known, truistic by now, except the distinction between relaxed and tense procrastinators--a dispensable classification. Hyperbolic discounting is pretty much clearly the correct overarching framework for the kind of procrastination the author of the linked book discusses—but you don't learn that from the linked book (unless I missed it).
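For readers who have not met it, the standard one-parameter hyperbolic discount function (Mazur's form) values a reward of amount A at delay D as:

```latex
V = \frac{A}{1 + kD}
```

where k measures impatience. A quick illustrative check with k = 1 per day: $100 at a 10-day delay is worth 100/11 ≈ 9.1, while $20 at a 1-day delay is worth 20/2 = 10, so the small-but-soon reward wins; push both a week further out (delays of 17 and 8 days) and the values become about 5.6 and 2.2, so the large reward wins again. That preference reversal is the usual hyperbolic-discounting account of procrastination, which the comment above extends to mental acts.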

It is dangerous to assume that casually studying the leading textbook in a soft field will usually make you smarter.

However, enough rationality training will have alarm bells ringing when reading soft textbooks and studies. That in itself - "this field is overpopulated with concepts and undermeasured" - is marginally more useful than knowing nothing about the field.

If you haven't already, you should try reading postmodern philosophy. An uninterrupted wall of alarm bells. :)

I was a philosophy student for my brief attempt at tertiary education - I know what you mean. Our lecturer would describe the text as 'dense' - more aptly, I thought, the author is dense.

An anecdote from that class: after a lecture on Wittgenstein, a student asked the lecturer if the rest of the semester's lectures were to be canceled.

Will_Sawin
I cannot think of a single obvious interpretation for why this occurred, but I can think of a few possible ones. Could you please clarify?
gwern

There is an obvious one, actually - a frequent (perhaps inaccurate) interpretation of the last parts of the Tractatus is as a denial of the possibility of any real philosophy (including Wittgenstein's).

Since one would naturally cover the Tractatus before The Philosophical Investigations or other works, a rather juvenile response would be exactly that anecdote.

shokwave
Yep. The lecture presented the view that Wittgenstein had explained away most of philosophy - in his own words, that he had resolved all philosophical problems.

How silly of Wittgenstein! Didn't he know that Hegel had already completed philosophy?

Oh, Hegel. I remember a lecture where the professor read from Hegel's Wissenschaft der Logik like it was holy scripture. When he was finished, he looked up and said: "With this, everything is said". I didn't understand anything; it was a jungle of words like being and not-being and becoming and how one thing becomes the other. I said that I didn't understand anything, and what did the lecturer reply, with a smile? "It's good you don't understand it!" I seriously had the intense urge to shout at him, but instead I just didn't show up anymore.

insigniff
A perhaps equally juvenile concern of mine is whether Wittgenstein himself failed to stand on the shoulders of giants (at least in the Tractatus), by essentially starting from scratch with his own propositions, drawing logical conclusions from them rather than using or at least referring to previous work.
lukeprog
Perhaps my choosing a recent Eliezer article as one example of an underuse of scholarship is an instance of "people trying to show off how willing they are to disagree with" Eliezer Yudkowsky!

Though I agree with you strongly, I think we should throw the easy objection to this out there: high-quality, thorough scholarship takes a lot of time. Even for people who are dedicated to self-improvement, knowledge and truth-seeking (which I speculate this community has many of), for some subjects, getting to the "state of the art"/minimum level of knowledge required to speak intelligently, avoid "solved problems", and not run into "already well refuted ideas" is a very expensive process. So much so that some might argue that communities like this wouldn't even exist (or would be even smaller than they are) if we all attempted to get to that minimum level in the voluminous, ever-growing list of subjects that one could know about.

This is a roundabout way of saying that our knowledge-consumption abilities are far too slow. We can and should attempt to be widely, broadly read knowledge-generalists and stand on the shoulders of giants; climbing even one, though, can take a dauntingly long time.

We need Matrix-style insta-learning. Badly.

scav

getting to the "state of the art"/minimum level of knowledge required to speak intelligently, avoid "solved problems", and not run into "already well refuted ideas" is a very expensive process.

So is spending time and effort on solved problems and already well refuted ideas.

FiftyTwo
True. But there are also personal benefits to working on problems (increased cognitive ability, familiarity with useful methods, etc.) that arise even if the problem itself is already 'solved.'
Davidmanheim
And worse, by spending time on solved problems and refuted ideas in public, you can easily destroy your credibility with those who could help you. This is a serious issue with how people like us, who have interdisciplinary interests, interact with and are respected by experts in fields touching on our own. Those who study, for instance, epistemology, view those who study, say, probability theory, fairly negatively, because they keep hearing uninformed and stupid opinions about things they know more about. This is especially bad because it happens instead of gaining from the knowledge of those experts, who are in a great position to help with thorny issues.
greim
Hear, hear! Arguably, resources like Wikipedia, the LW sequences, and SEP (heck even Google and the internet in general) are steps in that general direction.
duck_master
In fact, organized resources like Wikipedia, LW sequences, SEP, etc. are basically amortized scholarship. (This is particularly true for Wikipedia; its entire point is that we find vaguely-related content from around - or beyond - the web and then paraphrase it into a mildly-coherent article. Source: am wikipedia editor.)

In my experience, Ph.D. dissertations can be a wonderful resource for getting an overview of a particular academic topic. This is because the typical -- and expected -- pattern for a dissertation is to first survey the existing literature before diving into one's own research. This both shows that the doctoral candidate has done his/her homework, and, just as importantly, brings his/her committee members up to speed on the necessary background. For example, a lot of my early education in Bayesian methods came from reading the doctoral dissertations of Wray Buntine, David J. C. MacKay, and Radford Neal on applications of Bayesian methods to machine learning. Michael Kearns' dissertation helped me learn about computational learning theory. A philosophy dissertation helped me learn about temporal logic.

Of course, this requires that you already have some background in some related discipline. My background was in computer science when I read the above-mentioned dissertations, along with a pretty good foundation in mathematics.

passive_fist
Research moves fast though; a dissertation just 3 or 4 years old may already be hopelessly out of date. Also, they are written by PhD students who, while masters in their own field of expertise, are really only 'apprentices' in training and may not be very knowledgeable about areas only slightly outside their domain. Scientific journals often publish 'review' articles where people with decades of intimate knowledge about a field summarize recent developments. They are usually more concise than dissertations, and often written much better too. They are also peer-reviewed, just like dissertations and other papers.

LessWrong often makes pretty impressive progress in its discussions; I would be thrilled to see that progress made beginning at the edge of a field.

I sincerely doubt that the discussions which began on the leading edge would return anywhere near the same amount of progress as those which start in the scholarly middle. After all, those problems are on the edge because they're difficult to solve given the intellectual tools we have today. Though Less Wrong is often insightful, I suspect it's the result not of discovering genuinely new tools, but of applying known tools in ways most readers haven't seen them used. For Less Wrong to make progress with a problem that a lot of smart people have been thinking about in detail for a long time either requires that the entire field is so confused that no one has been able to think as clearly about it as we can (probably hubristic), or that we have developed genuinely new intellectual techniques that no one has tried yet.

Voted up, but I think that there are LOTS of fields that are that confused. Possibly every field without regular empirical tests (unlike, say, chemistry, engineering, computer science, applied physics, or boxing) is that confused.

I ghost-write papers for lazy rich undergrads at prestigious institutions, and my experience has been that the soft sciences are a muddle of garbage, with obscenely little worth given the billions of dollars poured into them that could be saving lives.

Costanza
I, for one, am dying to hear more about this "ghost write [soft science] papers for lazy rich undergrads at prestigious institutions" business. Probably far more than you would be willing to tell, especially if you plan to keep this gig for any length of time.

It's not exactly a novel business model. You can read the testimony of a worker in that field here.

Vladimir_M
Another pseudonymous confession from a worker in that field: http://www.eacfaculty.org/pchidester/Eng%20102f/Plagiarism/This%20Pen%20for%20Hire.pdf

Comments by gwern and Desrtopa upvoted, their links read.

How long can this go on before the whole thing* comes crashing down? Those of us who are Americans are ruled mostly by people who were "lazy rich undergrads at prestigious institutions" and then became lazy rich graduate students getting J.D.s or M.B.A.s from prestigious institutions, having been admitted based on their supposed undergraduate accomplishments.

The only thing to hope for, it seems, is that our supposed leaders are still getting cheat sheets from underpaid, unknown smart people.

* My impression is that higher education in the hard sciences in America is still excellent.

SilasBarta
I've complained before about the same thing. My only answer is "it'll pass eventually, the only question is how much we'll have to suffer in the interim". Fortunately, these ghost writers basically give us a rosetta stone for identifying the lost and valueless fields: anything they consistently produce work on and which can escape detection is such a field. (Btw, change your last * to a \*.)
Will_Sawin
With the caveat that low-level undergraduate assignment substance levels are not the same as cutting-edge research substance levels, though they are related.
SilasBarta
See my reply to Desrtopa: A non-lost field should have a large enough inferential distance from a layshadow that the layshadow shouldn't be able to show proficiency from a brief perusal of the topic, even at the undergraduate levels.
Costanza
I think you've started to identify an empirical test to sort the wheat from the chaff in universities. I've read your post from June, and agree. My guess would be that the proportions would turn out to show a lot of very expensive (and heavily subsidized) chaff for every unit of worthwhile wheat. This is a big issue, and I think you've called it correctly.
Costanza
Thanks! Fixed! [NOTE SilasBarta's point about formatting is right and appreciated -- too meta to warrant a whole new comment.]
SilasBarta
That's not what I said! ;-) Just so you know: the backslash escapes you out of Markdown, so to produce what you quoted, I put a double-backslash wherever you see a \.
Desrtopa
I don't think any field in which they can produce an essay without being detected is necessarily valueless. At an undergraduate level, students in hard sciences are often assigned essays that could reasonably be written by a layperson who takes the time to properly search through the available peer reviewed articles. That may be an indictment of how the classes are taught and graded, but it's not a demonstration that the fields themselves are lacking worth.
SilasBarta
All true, but the shadow authors:

  • don't mention doing work for the hard sciences or reading peer-reviewed articles in such fields
  • are able to learn all they need from a day or so of self-study, showing low inferential distance in the fields and thus low knowledge content
  • mention high involvement in graduate level work, where the implications of their success are much more significant.
Desrtopa
But they do mention googling sources and doing literature review, and "Ed Dante" says he will write about anything that does not require him to do any math (or animal husbandry.) For original research in hard sciences, there's probably not going to be much of anything that doesn't at least require some statistics, but for undergraduate literature review papers, it probably wouldn't be hard to get away with.
gwern
You may enjoy this: http://chronicle.com/article/The-Shadow-Scholar/125329/ EDIT: whoops, Desrtopa beat me to it by a minute. Serves me right for trying to refind that article through my Evernote clippings instead of just googling for it!
nazgulnarsil
Not much to tell really as I don't do it for a living. I have a lot of free time at my normal job so this just lets me pick up a little extra. It started off with people I knew directly attending those schools and traveled via word of mouth from there. But the standards at these places really are a joke. I skim the class material, write them in one sitting, and have only had 1 paper get a "B" out of 50 or so. The part that stops most people is probably the ability to imitate "voice". Read a paper then try to write a few paragraphs in the same style. Ask a neutral judge if they look like they are written by the same author. It's a learned skill and if you don't enjoy writing you'll probably hate it.
shokwave
I would subtract the 'intellectual' there. This is true of empirical sciences (cf ever-larger particle colliders), but not anywhere near as true for softer sciences (cf schools of thought). While not strictly inventing 'genuinely new' tools, I think LessWrong is definitely one of the first communities where everyone is required to use high-quality power tools instead of whatever old hammer is lying around.
John_Maxwell
That doesn't seem obvious to me. If you were to look at a map of the known world drawn by a member of an ancient civilization, I don't think all the edges of the map would be regions that were particularly hard to traverse. Maybe they'd be the edges just because they were far from the civilization's population centers and no explorer had wandered that far yet. In a similar way, perhaps the boundaries of our knowledge are what they are just because to reach the boundary and make progress, you first have to master a lot of prerequisite concepts.

I think that most people just don't believe that philosophy has any value. I used to believe that it didn't, gradually concluded that it did, but then gradually concluded that yes, 99.9% of it really is worthless such that even reading contemporary famous people or summaries of their arguments (though not discussing such arguments with your epistemic peers who are familiar with them, and not reading pre-WWII philosophers) really is a waste of time.

I agree that 99.9% of philosophy is very close to worthless. Its signal-to-noise ratio is much lower than in the sciences or in mathematics.

This brings to mind Eliezer's comment that "...if there’s any centralized repository of reductionist-grade naturalistic cognitive philosophy, I’ve never heard mention of it."

But reductionist-grade naturalistic cognitive philosophy is probably an even larger sub-field of philosophy than the formal epistemology I mentioned above. Names that come immediately to mind are: John Bickle, Pat & Paul Churchland, Paul Thagard, Tim Schroeder, William Calvin, Georg Northoff, Thomas Metzinger.

There's some good philosophy out there. Unfortunately, you normally only encounter it after you've spent quite a while studying bad philosophy. Most people are introduced to philosophy through Plato, Aristotle, Aquinas, Kant, and Hegel, and might never suspect a neurophilosopher like John Bickle exists.

Which reminds me of the old Bertrand Russell line:

Hitherto the people attracted to philosophy have been mostly those who loved the big generalizations, which were all wrong, so that few people with exact minds have taken up the subject.

diegocaleiro
Having been one of the exceptions, I wonder if there are enough exceptions to create critical mass for philosophy to take off, or if we will always be condemned (in a good sense) to merge with fields that enjoy precision, such as cog psy, chemistry, physics, maths, neuroscience, ethology, evo psy and so on... Not that I mind being partly neuro/psycho/evo... it's just that there are, summing all these fields, too many papers to read in a lifetime...
Desrtopa
I think that the state of the field is still something of a barrier to the sort of people who would be of most benefit to it. I personally dropped my double major in philosophy after becoming fed up with how much useless and vacuous material I was being required to cover.
Jack

I don't know what percentage of writing that gets called "philosophy" is worthwhile, but it isn't that hard to narrow your reading material down to relevant and worthwhile texts. It's really weird to see comments like this here, because so much of what I've found on Less Wrong is ideas I've seen previously in philosophy I've read. Moreover, a large fraction of my karma I got just by repeating or synthesizing things I learned doing philosophy - and I'm not the only one who's gotten karma this way.

I find it particularly perplexing that you think it's a good idea to only read pre-WWII philosophers, as their ideas are almost always better said by contemporary authors. One of my major problems with the discipline is that it is mostly taught by doing history of philosophy - forcing students to struggle with the prose of a Plato translation and distilling the philosophy from the mysticism instead of just reading Bertrand Russell on universals.

Kaj_Sotala
Examples would probably make your point much more persuasive (not that I'm saying that it's unpersuasive, just that it feels a bit abstract right now).
Jack

Agreed. I was just being lazy.

I already didn't believe in the Copenhagen Interpretation because of a Philosophy of Physics course where my professor took Copenhagen to be the problem statement instead of a possible solution. That whole sequence is more or less something one could find in a philosophy of physics book - though I don't myself think it is Eliezer's best series.

Before coming here my metaethics were already subjectivist/anti-realist. There's about a century's worth of conceptual distinctions that would make the Metaethics Sequence clearer - a few of which I've made in comments leading to constructive discussion. I feel like I'm constantly paraphrasing Hume in these discussions where people try to reason their way to a terminal value.

There is Philosophy of Math, where there was a +12 comment suggesting the discussion be better tied to academic work on the subject. My comments were well upvoted and I was mostly just prodding Silas with the standard Platonist line plus a little Quine.

History and Philosophy of Science comes up. That discussion was basically a combination of Kuhn and Quine (plus a bunch of less recognizable names who talk about the same things).

Bayesian epistem...

Will_Sawin
I would not underestimate the value of synthesizing the correct parts of philosophy vs. being exposed to a lot of philosophy. The Bayesian epistemology stuff looks like something I should look into. The central logic of Hume was intuitively obvious to me; philosophy of math doesn't strike me as important once you convince yourself that you're allowed to do math; philosophy of science isn't important once you understand epistemology; personal identity isn't important except as it plays into ethics, which is too hard. I'm interested in the fact that you seem to suggest that the decision theory stuff is cutting-edge level. Since that is the part I spend the most time thinking and talking about, is my activity relatively less susceptible to the scholastic critique? Is there academic philosophy that has things to say to TDT, UDT, and so on?
Jack
No, it makes you more susceptible- if you're actually working on a problem in the field that's all the more reason to know the scholarly work. Obviously, since TDT and UDT were invented like two years ago and haven't been published, academic philosophy says nothing directly about them. But there is a pretty robust literature on Causal vs. Evidential Decision theory and Newcomb's problem. You've read Eliezer's paper haven't you? He has a bibliography. Where did you think the issue came from? The whole thing is a philosophy problem. Also see the SEP.
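For concreteness, here is a minimal sketch of how EDT and CDT come apart on Newcomb's problem. The predictor accuracy and payoffs are assumed for illustration, not taken from any published formulation:

```python
# Newcomb's problem: a predictor puts $1M in an opaque box iff it predicted
# you will take only that box; a transparent box always holds $1K.
ACCURACY = 0.99            # assumed predictor reliability
M, K = 1_000_000, 1_000

# EDT treats the act as evidence about the prediction:
edt_one_box = ACCURACY * M               # one-boxing is evidence of $1M
edt_two_box = (1 - ACCURACY) * M + K     # two-boxing is evidence of an empty box

# CDT holds the already-fixed contents constant; for any probability p that
# the $1M is present, two-boxing gains exactly K (dominance):
p = 0.5
cdt_one_box = p * M
cdt_two_box = p * M + K

print(f"EDT: one-box {edt_one_box:,.0f} vs two-box {edt_two_box:,.0f}")
print(f"CDT: one-box {cdt_one_box:,.0f} vs two-box {cdt_two_box:,.0f}")
```

So EDT one-boxes (990,000 vs 11,000 in expectation) and CDT two-boxes; much of the literature mentioned above is about which response to that divergence is correct.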
Will_Sawin
"To say to" means something different than "to talk about". For example, if someone makes epistemological claim XYZ, even if no Bayesian epistemologist has refuted that exact claim, their general arguments can be used in evaluating the claim. If mainstream philosophers had come up with a decision theory better than evidential and causal (which are both wrong), then people who had already surpassed EDT and CDT would be forced to read them. But if they haven't, then lesswrong has already surpassed the limit of the philosophical literature. That's what I'm asking. I will look at the SEP when I next have time. You think that the one who ignores the literature while working on a problem that is unsolved in the literature is more blameworthy than one who ignores the literature while working on a problem that is solved in the literature?
Jack
I suppose it is about the same. I think anyone working on a problem while not knowing if it has been solved, partly solved or not solved at all in the literature is very blameworthy. Right, I don't know the field nearly well enough to answer this question. I would be surprised if nothing in the literature was a generalizable concern that TDT/UDT should deal with. There have been lots of attempts to solve Newcomb's problem - by amending EDT or CDT, or inventing a new decision theory. Many, perhaps most of these, use concepts related to TDT/UDT - possible worlds, counterfactuals, and Jeffrey's notion of ratifiability (all three of these concepts are mentioned in Eliezer's paper). Again, I don't know the details of the major proposals, though skimming the literature it looks like none have been conclusive or totally convincing. But it seems plausible that the arguments which sink those theories might also sink the Less Wrong-developed ones. It also seems very plausible that the theoretical innovations involved in those theories might be fruitful things for LW decision theorists to consider. There have also been lots of things written about Newcomb's problem - papers that don't claim to solve anything but which claim to point out interesting features of this problem. I don't really understand the resistance to reading the literature. Why would you think insight in this subject area would be restricted to a cloistered little internet community (wonderful though we are)?
Will_Sawin
I was previously aware that Newcomb's problem was somewhere between partly solved and not solved at all, which is at least something. With the critique brought to my attention, I attempted cheap ways of figuring it out, first asking you and then reading the SEP article on your recommendation. That is a point. I also didn't say what I think I really wanted to say, which is that: If I read someone advocating a non-Bayesian epistemology, I react: "This is gibberish. Come back to me once you've understood Bayesian epistemology and adopted it or come up with a good counterargument." The same thing is true of the is-ought distinction: an insight which is obviously fundamental to further analysis in its field. Reflective consistency, the question of why you build an agent with a Could-Should Architecture, Updateless decision theory - these seem like those kinds of insights in decision theory. Nothing on the SEP page (most of which I'd seen before, in the TDT paper or wikipedia or whatever) seemed like that. I presume that if philosophers had insights like that, they would put them on the page. I conclude (with two pretty big ifs) that while philosophers have insights, they don't have very good insights.

  1. I freely admit to some motivated cognition here. Reading papers is not fun, or, at least, less fun than thinking about problems, while believing that insight is restricted to a cloistered community is fun.
  2. You make claim X, I see possible counterargument Y, responding argumentatively with Y is a good way to see whether you have any data on Y that sheds light on the specifics of X.
  3. Knowing what I know about academic philosophy and the minds behind lesswrong's take on decision theory, that strikes me as totally possible.
Jack
Well, presumably you find Nozick's work formulating Newcomb's and Solomon's problems insightful. Less Wrong's decision theory work isn't sui generis. I suspect a number of things on that page are insightful solutions to problems you hadn't considered. That some of them are made in the context of CDT might make them less useful to those seeking to break from CDT, but it doesn't make them less insightful. Keep in mind - this is the SEP page on Causal Decision Theory, not Newcomb's problem or any other decision theory problem. It's going to be a lot of people defending two-boxing. And it's an encyclopedia article, which means there isn't a lot of room to motivate or explain in detail the proposals. To see Eliezer's insights into decision theory it really helps to read his paper, not just his blog posts. Same goes for other philosophers. I just linked to the SEP because it was convenient and I was trying to show that yes, philosophers do have things to say about this. If you want more targeted material you're gonna have to get access to an article database and do a few searches. Also, keep in mind that if you don't care about AI, decision theory is a pretty parochial concern. If Eliezer published his TDT paper it wouldn't make him famous or anything. Expecting all the insights on a subject to show up in an online encyclopedia article about an adjacent subject is unrealistic. From what I see on the SEP page, ratification in particular seems insightful and capable of doing some of the same things TDT does. The Death in Damascus/decision instability problem is something for TDT/UDT to address. In general, I'm not at all equipped to give you a guided tour of the philosophical literature. I know only the vaguest outline of the subfield. All I know is that if I was really interested in a problem and someone told me "Look, over here there's a bunch of papers written by people from the moderately intelligent to the genius on your subject and closely related subjects" I'd be...
Will_Sawin
I'm not really interested in decision theory. It is one of several fun things I like to think about. To demonstrate an extreme version of this attitude: I am thinking about a math problem right now. I know that there is a solution in the literature - someone told me. I do not plan to find that solution in the literature. Now, I am more interested in getting the correct answer vs. finding the answer myself in decision theory than that. But the primary reason I think about decision theory is not because I want to know the answer. So if someone was like, "here's a paper that I think contains important insights on this problem," I'd read it, but if they were like, "here's a bunch of papers written by a community whose biases you find personally annoying and do not think are conducive to solving this particular problem, some of which probably contain some insights," I'll be more wary. It should be noted that I do agree with your point to some extent, which is why we are having this discussion. Indeed. That did not appear to be the case when I looked at the SEP page on Causal Decision Theory - which you linked to, AFAICT, because it is one of only three SEP pages that mention Newcomb's Problem, two of which I have read the relevant parts of and one of which I will read soon. To see that he has insights, you just need to read his blog posts, although to be fair many of the ideas get less than a lesswrong-length post of explanation. I'd expect the best ones to. It seems like, once I exhaust your limited but easily-accessible knowledge, which seems like about now, I should look up philosophical decision theory papers at the same leisurely pace I think about decision theory. My university should have some sort of database. It seems like ratification does just the wrong thing to me. For example, it two-boxes on Newcomb's problem. However, the amount of sense it seems to make leads me to suspect that I don't understand it. When I have time, I will read the appropriate paper(s?) until I'm certain I understand what he means.
PhilGoetz
Curious about the WW2 comment. Trouble parsing it. Do you think pre-WW2, or post-WW2, philosophers are more worthwhile? I would say pre-Nietzsche philosophers are no longer very worthwhile for helping you solve contemporary problems of philosophy, although some (like Berkeley, Hume, and Spinoza) were worthwhile for a time. (This is partly because I think causation and epistemology are not as important as issues like values, ethics, categorization, linguistic meaning, and self-identity.) Some, like Kant, provide definitions that may help clarify things for you, and that you will need if you want to talk to philosophers. Ancient Greek and Roman poets and orators are worthwhile, because they describe an ethical system that contrasts dramatically with ours. But I read (pre-20th century) Native American speeches for the same reason, and lend them the same credence.
Will_Newsome
Really? Who is 'ours'? I've agreed with most of what I've seen of Greek ethical philosophy, and I thought most Less Wrong people would too. (I'm thinking of arete, eudaimonia, et cetera... their ethical ontology always seemed pretty reasonable to me, which is to be expected since we're all pretty Greek memetically speaking.)

Classical Greek ethicists propounded values that were in many ways similar to modern ones. Ancient Greece is the time period in which works like the Iliad were put to writing, and those demonstrate some values that are quite foreign to us.

Nietzsche gives one take on this distinction, when he contrasts "good vs. bad" or "master" moralities with "good vs. evil" or "slave" moralities. An evil man is one with evil goals; a bad man is one who is inept at achieving his goals.

Another contrast is that if the Greeks or the Romans had been utilitarians, they would never have been average utilitarians, and I don't think they would even have been total utilitarians. They might have been maximum utilitarians, believing that a civilization's measure was the greatness of its greatest achievements and its greatest people. Americans must have at least briefly believed something like this when they supported the Apollo program.

(I must be overgeneralizing any time I am speaking of the morals of both Athens and Sparta.)

NancyLebovitz
Philosophy seems to offer a very low chance of doing something extremely valuable. I suspect it's the valuable human activity with the most extreme odds against success.

In another debate with Bill Craig, atheist Christopher Hitchens gave this objection: "Who designed the Designer? Don’t you run the risk… of asking 'Well, where does that come from? And where does that come from?' and running into an infinite regress?" But this is an elementary misunderstanding in philosophy of science. Why? Because every successful scientific explanation faces the exact same problem. It’s called the “why regress” because no matter what explanation is given of something, you can always still ask “Why?”

IMO, it is perfectly reasonable to object with: "Who designed the Designer?".

The logic being objected to is: it takes a big complex thing to create another big complex thing. Observing that Darwinian evolution makes big complex things from scratch is the counter-example. The intuition that a complex thing (humans) requires another complex thing to create it (god) is wrong - and it does tend to lead towards an escalator of ever-more-complex creators.

Simplicity creating complexity needs to happen somewhere, to avoid an infinite regress - and if such a principle has to be invoked somewhere, then before the very first god is conjured seems like a good place.

Checking with the "common sense atheism" link, quite a few people are saying similar things in the comments.

timtyler,

Hitchens did not mention complexity or simplicity as you propose, and he did not mention evolution. If you read the Hitchens quote, you will see he gave only the why-regress objection, which is just as valid against any scientific hypothesis as it is against a theistic one.

There are ways to make the "Who designed the Designer?" objection stick, but Hitchens did not use one of them.

Here, let's play Quick Word Substitution. Let's say a physicist gives a brilliant demonstration of why his theory of quarks does a great job explaining a wide variety of observed subatomic phenomena. Now, Hitchens objects:

"But what explains the quarks? Don’t you run the risk… of asking 'Well, where does that come from? And where does that come from?' and running into an infinite regress?"

Hitchens explicitly gave the why-regress objection that is just as potent against scientific explanations as it is against theistic explanations.

PhilGoetz
The regress down into smaller and smaller particles may be a special case. Can we throw out particle physics, and still say we have science? I think so.
lukeprog
PhilGoetz, The why-regress is not concerned with ontological reduction into smaller and smaller bits. It is concerned with explanatory reduction into more and more fundamental explanations. The why-regress is not limited to particle physics. It is just as present in the higher-level sciences. When neuroscientists successfully explain certain types of pleasure in terms of the delivery of dopamine and endorphins to certain parts of the brain, it does not defeat this explanation to say, "But what explains this particular way of sending dopamine and endorphins to certain parts of the brain? Don't you run the risk of asking 'Well, where does that come from? And where does that come from?' and running into an infinite regress?" The point is that all explanations are subject to the why-regress, whether they are theistic or scientific explanations.
lukeprog
Also, see the part of Yudkowsky's Technical Explanation of Technical Explanation that begins with "Beware of checklist thinking..."

More specifically, it is completely rational to use that argument against theists, because one of their arguments for god is that the world is too complex not to have been designed; so in that circumstance you are just pointing out that their claim pushes the complexity back one step. If the world is so complex that it needs a designer, then so is god.

h-H
I think tighter definitions are needed here; some theistic traditions consider all existence to be 'god', etc.
Vaniver
Unless God is too complex to be designed :P
Polymeron
Ooh, I like that one. Call it the "sweet spot" theory of intelligent design - things of high enough complexity must be designed, but only if they are under a certain complexity, at which point they must be eternal. (And apparently also personal and omnibenevolent, for some reason). At any rate, this would all be nice and dandy were it not completely arbitrary... Though if we had an agreed-upon measure for complexity and could measure enough relevant objects, we might actually be able to devise a test of sorts for this. Well, at least for the lower bound. Seeing as we can't actually show that something is eternal, the upper bound can always be pushed upwards à la the invisible dragon's permeability to flour.

(And apparently also personal and omnibenevolent, for some reason).

Well, if it's eternal and sufficiently powerful, a kind of omnibenevolence might follow, insofar as it exerts a selection pressure on the things it feels benevolent towards, which over time will cause them to predominate.

After all, even humans might (given enough time in which to act) cause our environment to be populated solely with things towards which we feel benevolent, simply by wiping out or modifying everything else.

The canonical Christian Hell might also follow from this line of reasoning as the last safe place, where all the refugees from divine selection pressure ended up.

Granted, most Christians would be horrified by this model of divine omnibenevolence; the canonical version presumes an in-principle universal benevolence, not a contingent one.

andrew sauer
Unless it decides that it wants to keep things it hates around to torture them.
Liron
Or God is in the first Quine-capable level of some designer hierarchy, like a Universal Turing Machine among lesser models of computation.
Paul Crowley
If God is complex, then I guess he's not real :-)
bentarm
ObNitpick - actually, R is a subset of C, so this doesn't follow.
Tiiba
God = 3.
jimrandomh
There is an upper bound to the complexity of things designed by humans, but why would there be an upper bound on the complexity of things that are designed, in general?
Polymeron
Indeed. Pointing out that setting a rule leads to infinite regress is not the same as requiring that everything being used to explain must also be explained. In fact, this is a flaw in Intelligent Design, not in its critics. Now, the theists have a loophole to answer the question ("only physical complex things require a designer" special pleading), but it does not render the question "who designed the designer" - which should be rephrased "why doesn't necessitating a designer lead to infinite regress" - meaningless under the rules of science. Not the greatest example in this, Luke. Especially jarring since you just recently quoted Maitzen on the "so what" infinite regress argument against Ultimate Purpose.
lukeprog
Polymeron, Which part of my example do you disagree with? Do you disagree with my claim that Hitchens' objection concerned the fact that the theistic explanation is subject to the why-regress? Do you disagree with my claim that all scientific explanations are also subject to the why-regress? The discussion of Maitzen and Craig did not involve a why-regress of causal explanations. I'm not sure why you think that discussion is relevant here.
Polymeron
lukeprog, I disagree with the claim that Hitchens' objection invokes the why-regress as it applies to science. It invokes an infinite regression that is a consequence of the Intelligent Design claim (things above a certain threshold necessitate a designer); much like Maitzen invoking an infinite regress that might be entailed by applying the "so what" question to every purpose statement.

To make this clearer: The problem with Intelligent Design is precisely that it demands an explanation exist, and that the explanation be a designer. Hitchens' objection is in line with us not requiring an explanation for the fundamentals. Science is not subject to the same infinite regress, because science does not set a rule that everything must have an explanation, and certainly not an explanation of a certain kind. Science may define a certain class of phenomena as having a certain explanation, but it never sets the explanation as necessarily requiring the same explanation to explain it. Hitchens points out this flaw as a logical consequence of the ID claim.

Surprised no one has linked to Don't Revere The Bearer of Good Info yet.

lukeprog
A great post; thanks for pointing me to it!

This post reminded me of this quote from Bertrand Russell's epic polemic A History of Western Philosophy:

It is noteworthy that modern Platonists, almost without exception, are ignorant of mathematics, in spite of the immense importance that Plato attached to arithmetic and geometry, and the immense influence that they had on his philosophy. This is an example of the evils of specialization: a man must not write on Plato unless he has spent so much of his youth on Greek as to have had no time for the things that Plato thought important.

djcb

I'm not going to argue that scholarship is not tremendously valuable, but in the kind of live discussions that are mentioned here, I'm not sure it helps that much against the kind of 'dark arts' techniques that are employed. In live discussions, someone can always refer to some information or knowledge that the opponent may not have handy ('Historians have established this fact...'), and only some of that can be counteracted by scholarship.

Benquo
The examples in the post were debate-specific, but I would suggest that the virtue of scholarship is more broadly applicable. Like many parts of rationality, the most important thing is not to use scholarship to win arguments, but to use scholarship to find the right thing to argue for, or more generally, to become correct.
[anonymous]

This is why, in more than 1000 posts on my own blog, I've said almost nothing that is original. Most of my posts instead summarize what other experts have said, in an effort to bring myself and my readers up to the level of the current debate on a subject before we try to make new contributions to it.

Foundational knowledge is more vital in the hard sciences than in psychology, which confronts you immediately with questions about what is the foundation. You have to make at least a tentative decision about which framework you're going to get up to speed o...

David_Gerard
You haven't noted the most horrible thing about this: that the fields are still valuable, even still necessary. Us being no good at them doesn't change this. Cf. medicine before germ theory and cell theory. Cf. postmodernism, which is notoriously BS-ridden, but anyone who aspires to write good fiction needs a working knowledge of postmodernist techniques, whether they call them that or not.
6bentarm
So Poe was an instinctive postmodernist?

On the topic of scholarship, I'd like to mention that if one takes the notion of surviving cryopreservation seriously, it's probably a good idea to read up on cryobiology. Have at least a basic understanding of what's going to happen to your cells when your time comes. There is a rich and complex field behind it, one that very few individuals have much grasp of.

If the bug bites you to do so, you may even be able to go into the field and make some breakthroughs. Huge advances have been made in recent decades by very small numbers of cryonics-motivated scientist... (read more)

Evolutionary psychology, heuristics and biases, social psychology, probability theory, decision theory. But these cannot be the only fields you study...

Charlie Munger speaks of a "latticework of mental models". A good mix, though somewhat skewed toward investing, is found here:

http://www.focusinvestor.com/FocusSeriesPart3.pdf

Scholarship: Thumbs up.

Classic Scholarship: Thumbs down http://brainstormers.wordpress.com/2010/03/03/sobre-ler-os-classicos/

Just in case someone forgot all the Teacher's Password, Cached Thoughts, and related posts, from which I got the link to the above text.

diegocaleiro:

Classic Scholarship: Thumbs down http://brainstormers.wordpress.com/2010/03/03/sobre-ler-os-classicos/

That article is very poorly argued. Your argument is more or less correct in those fields where the progress of scholarship has a monotonic upward trend, in the sense that knowledge is accumulated without loss and all existing insights are continuously improved. This is true for e.g. Newtonian physics, and indeed, nobody would ever read Newton's original works instead of a modern textbook except for historical interest.

What you fail to understand, however, is that in many fields there is no such monotonic upward trend. This means that in the old classics you'll often find insights that have been neglected and forgotten, and you'll also find ideas that have fallen out of fashion and ideological favor, and been replaced with less accurate (and sometimes outright delusional) ones. Almost invariably, these insights and ideas are absent from modern texts, even those dealing specifically with the old authors, and there is often nothing comparable being written nowadays that could open your eyes to the errors of the modern consensus.

As a rule of thumb, the softer and more ... (read more)

5David_Gerard
Yes. Anyone who thinks Chaucer and Shakespeare are valueless for being old has misunderstood the field. As long as humans are savannah apes, they will find their works of value. We still read Chaucer and Shakespeare not because they are antecedents, but because they're good now.
9PhilGoetz
Are Shakespeare's comedies - containing mainly sexual innuendo, mistaken identities, abuse, and puns, and using the same extremely improbable plot devices repeatedly - really great works of art? They're good, but are they really first-tier? Do any of Shakespeare's tragedies contain insights into human nature that are as important or as difficult for you to discover on your own as those you would find in a Jhumpa Lahiri novel? I think not. (Honestly, is King Lear deep? No; just dramatic and well-written. Any idiot knows by Act II what will happen.) We still read Shakespeare today partly because Shakespeare was great when he wrote; but partly because Shakespeare was a master of individual phrases and of style, and literature departments today are dominated by postmodernists who believe there is no such thing as substance, and therefore style is all that matters. (Or perhaps the simpler explanation is that people who make and critique films tend to be more impressed by visual effects than by content; and people who make and critique books tend to be more impressed by verbal effects than by content.) (Don Quixote, though, is golden. :)
[-]Jack150

We still read Shakespeare today partly because Shakespeare was great when he wrote; but partly because Shakespeare was a master of individual phrases and of style, and literature departments today are dominated by postmodernists who believe there is no such thing as substance, and therefore style is all that matters.

Shakespeare's centrality in English Lit curricula comes from its historic place in the Western canon. Postmodernists are distinguished in particular by their opposition to any kind of canon.

5PhilGoetz
Good point! And yet, I know English lit people who simultaneously love postmodernism and Shakespeare. There is a pervasive emphasis on style over content, which I have been attributing to postmodernism; but maybe I oversimplify.
8Jack
Postmodernism isn't really characterized by a position on which works should be read so much as how they should be read. While postmodern thinking opposes canons, it also supports reading culturally relevant texts with a critical/subversive eye. Shakespeare is rich with cultural context while also being complex and ambiguous enough to provide a space for lit critics to play with meanings and interpretations and get interesting results. Hamlet, which is far and away Billy Shake's best work, is particularly conducive to this. They do the same thing with Chaucer, actually, particularly the Wife of Bath's tale. I don't think it is about style over substance but about the freedom to play with cultural meaning and interpretation. You can't say Hamlet is short on substance, anyway. But the extent to which authors like Chaucer and Shakespeare have become less central in lit departments is almost entirely due to this crowd: it's archetypal postmodernism that gives genre films and television the same importance as the historical Western canon. Rosencrantz and Guildenstern are Dead probably boosts the Bard's popularity in the pro-postmodern scene.

Another reason to be familiar with the canonical works in a culture is precisely because they're canonical. It's like a common currency. By now, English-speaking culture is so rooted in Shakespeare that you'd be missing out if you didn't recognize the references.

Any idiot knows by Act II what will happen.

We do now! But apparently, the original Elizabethan audiences went in expecting a happy ending -- and were shocked when it turned out to be a tragedy. Tricky fellow, that Willy S.

3David_Gerard
Yes. Same reason some familiarity with the King James Version of the Bible is culturally useful.
3Paul Crowley
Cf. Richard Dawkins on his lifelong love of the King James Bible.
0PhilGoetz
I didn't mean they would know how it would end - I meant they would know that Lear used shallow indicators to judge character, and that Cordelia would turn out to be the faithful daughter.
4Costanza
It looks like audiences since before Shakespeare's time would have gone in knowing the outline of the story. But I'm mostly replying to confess: the same Wikipedia article that I myself quoted makes it clear that there was no really happy ending to King Lear until 1681. I wasn't paying close enough attention.
[-][anonymous]160

Reading the masters (the little I've done of it) has taught me the following things:

  1. Almost no ideas are good
  2. Almost no ideas are new

Plato's ideas were, at least, new. And (per 2) they're the most influential ideas ever to be put on paper. There's value in seeing that for yourself.

9David_Gerard
This counts as vast insight. When looking at the output of lots of ridiculously smart people, you discover that most intelligence is used to justify stupidity, and the most important thing about most new ideas is that they are wrong.
5Jayson_Virissimo
Much of Plato's thought comes from Pythagoras, Parmenides, Heraclitus, and Socrates. If I were to pick an ancient philosopher that didn't have obvious intellectual antecedents, I would choose Thales.

One counterpoint:

In The Failures of Eld Science, Eliezer's character points out that most scientists were never trained to recognize and navigate a genuine scientific controversy; instead, we hand our undergraduates the answers on a silver platter and have them do textbook problems. He proposes that if scientists had first had to think through and defeat phlogiston themselves, they would have been less stymied by the interpretation of quantum mechanics.

Similarly, I think I'm better off for having encountered some of the grand old systems of philosophy in their earliest and most viral forms, without all the subsequent criticisms and rebuttals attached. Of course I ran the risk of getting entrapped permanently in Plato or Nietzsche, but I learned things about rationality and about myself this way, and I don't think I would have learned those had I started by reading a modern digest of one or the other (with all the mistakes pointed out). (Of course, I have since read modern critiques and profited from them.)

On the other hand, some Great Books schools like to teach higher mathematics by having the students read Euclid, and I agree that's insane and not worth all the extra effort.

6mwengler
Interesting point about pushing students through phlogiston. Without it being required of physics majors, I took "philosophy of science" as an undergrad philosophy minor and read, among others, Popper. It has stuck with me like one of those viruses; let me know if I have much to gain by finally dropping some of what I think I learned from him. I personally loved looking at all science afterwards, listening in on all discussions, and thinking: "is this a difference that makes a difference?" Is there a testable difference here, or can I just skip it? In a graduate course on superconducting electronics I once taught a wildly simple theory of electron pairing, treating the electron wave functions as 1-d sine waves in the metal. I told the students: "the theory I am teaching you is wrong, but it illustrates many of the true features of the superconducting wave function. If you don't understand why it is wrong, you will be better off thinking this than not thinking this, while if you get to the point where you see why it is wrong, you will really understand superconductivity pretty well." It never occurred to me to try to insert Popper into any of the classes I was teaching. I was not a very imaginative professor. By the way, on your name, orthonormal: on what basis did you choose it? :)
2Benquo
On the Euclid point, it depends on where you're starting from and what you're trying to do. I've seen people who thought they hated math get converted by going through some of Euclid. The geometrical method of exposition is beautiful in itself, and very different from the analytical approach most modern math follows. If you're already a math enthusiast, it would not benefit you quite as much.
7orthonormal
But there are more readable modern textbooks which use the geometrical method of exposition; I just taught out of one last semester.
6Benquo
I envy your students.
[-]Benquo100

On the whole I'd agree that most of the time it's better to focus on high-quality up-to-date summaries/textbooks than high-quality classical sources.

But I'd suggest a few caveats:

1) It is much easier to find high-quality classics than it is to find high-quality contemporary stuff. Everyone knows who Darwin was; I don't even know how to find a good biology textbook. And I personally got a lot more out of reading and thinking about Darwin than out of reading my high school biology textbook. This is a consideration for students and autodidacts, less so for smart and well-informed teachers who know how to find the good stuff.

2) Many summarizers are simply not as smart as the greats, and don't pick up on a lot of good stuff the classics contain. This is less important for a survey that has only a small amount of time to spend on each topic, but if you want deep understanding of a discipline, you will sometimes have to go beyond the available summaries.

3) The ancients are the closest we have to space aliens: people who live in a genuinely different world with different preconceptions.

9PhilGoetz
That post says, "You might find it more enjoyable to read Plato rather than more modern work just as someone else might prefer to have their philosophical arguments interspersed in Harry Potter slash," and was posted February 25, 2010. The first chapter of Harry Potter and the Methods of Rationality was posted Feb. 28, 2010. Coincidence?

in Harry Potter slash

Upvoted because it gives us hope that we'll see those Harry/Draco scenes in MoR after all.

1Will_Newsome
Eliezer actually did mention the allegedly preposterous idea of getting some kind of wisdom (philosophical? ethical?) from Harry Potter in a comment reply back in the OB days. I'm too busy/lazy to find a link though.
4Costanza
Is this it?
4lukeprog
Completely agreed. I wrote very much the same thing in How to Do Philosophy Better.
3jsalvatier
Indeed. If you're asking students to read the initial source material, there's a 90% chance you're doing it wrong.
1MichaelVassar
Wow. I disagree exactly.

I think some justification would be helpful for your readers, especially those who don't know about your relatively high personal efficacy :-)

You asserted something similar, with more original content, right next door, and I think your implicit justification was spelled out a while ago in the article For progress to be by accumulation and not by random walk, read great books. I'm curious whether these links capture the core justification well, or whether more is necessary to derive your conclusions?

It feels like many of the details deployed to justify your advice to "read the classics" and many of the details deployed to justify the advice to "avoid the classics" are basically compatible. Some more nuanced theory should be available that is consistent with the totality of the facts, like "in cases X and Y read the classics, and in cases N and M avoid them". Perhaps the real disagreement is about the nature of the readership, and about which case better describes the majority of them... or the most important among them?

For example, I think maybe people in their late 20's or older who were clicky while young and are already polymaths might be helped reading the classics in d... (read more)

3MichaelVassar
I think those links are about right, as is the analysis. Thanks.
6Desrtopa
Could you elaborate?

Why did my post appear correctly in the editor, but when posted to the site, lose the spaces just before an apparently random selection of my hyperlinks?

This happens when <div> tags are included, anywhere. I've deleted them.
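
For illustration, here is a minimal sketch of that fix: stripping <div> tags out of pasted HTML while keeping their contents. The helper name and regex are assumptions for the example, not LessWrong's actual code.

```python
# Minimal sketch: remove <div ...> and </div> tags from pasted HTML while
# keeping the text inside them. Illustrative only; not LessWrong's code.
import re

def strip_div_tags(html: str) -> str:
    """Delete opening and closing div tags, leaving the enclosed text intact."""
    return re.sub(r'</?div\b[^>]*>', '', html, flags=re.IGNORECASE)

if __name__ == "__main__":
    pasted = '<div class="post">I destroy all such tags</div> and laugh.'
    print(strip_div_tags(pasted))  # -> I destroy all such tags and laugh.
```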

2[anonymous]
I will vouch for the fact that div-tags are the work of the devil. They impede my formatting on Blogger too. Very evil. I destroy all such tags and then I do my victory laugh.
0lukeprog
Sweet! Thanks. I certainly didn't add div tags on purpose, so I'll be sure to watch out for them in the future.
2Clippy
One way to avoid formatting problems is to write your article in an external word processor (such as Microsoft Word) and then copy/paste it into the Website:LessWrong.com article entry field. (You cannot add the summary break this way, so that must be done afterward.)
0Randaly
This seems to always occur around hyperlinks for me - and some other formatting too (e.g. "I've said almostnothing that is original.") I don't know why; I usually manually input an extra space.
2diegocaleiro
That happened to me as well......... Lots of extra effort...... isn't this meta-fixable?
0Mass_Driver
I don't know. It's a great post, though: go fix it!
2lukeprog
For now, I simply added two spaces where Less Wrong wanted to collapse my single space into nothing. Hopefully someone will be able to figure out a more elegant solution. I'm on Snow Leopard, Google Chrome.
0Dreaded_Anomaly
You could try using the HTML character entity reference for a non-breaking space, &*nbsp; (remove the asterisk). It's not really more elegant, but it will look nicer.
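
To make that suggestion concrete, here is a minimal sketch, assuming the bug is a space being collapsed just before a hyperlink, that swaps such spaces for the non-breaking-space entity. The helper and regex are illustrative assumptions, not site code.

```python
# Minimal sketch: replace a plain space immediately before an <a ...> tag
# with the &nbsp; entity, so the editor cannot collapse it. Illustrative only.
import re

def protect_spaces_before_links(html: str) -> str:
    """Swap each space directly preceding an anchor tag for &nbsp;."""
    return re.sub(r' (?=<a\b)', '&nbsp;', html)

if __name__ == "__main__":
    sample = 'I have said almost <a href="#">nothing</a> that is original.'
    print(protect_spaces_before_links(sample))
    # -> I have said almost&nbsp;<a href="#">nothing</a> that is original.
```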
[-]Ben10

I am not going to dispute the contention that "knowing more is good". However, I think people's opinions are often worth hearing, and I think people with interesting ideas about politics, philosophy, or religion often don't say what they are thinking because they have some idea that "I need to complete several volumes of recommended reading before my opinions matter" - which they don't. I have had really interesting conversations with younger people (18ish) who produce fascinating ideas - they don't know that their pet theory of a better political system is ... (read more)

This discussion has been largely philosophy-based, which is understandable given the site's focus. But are people interested in knowing something about many different fields? Below is my attempt at different levels of a liberal arts education. I have been working on either taking a class or reading a textbook in each of these areas, preferably a textbook for people who will be majoring in the subject (I have 3 more to do). Then, if I can retain it, I can know the basic vocabulary to communicate with people in almost any field, and also look for common t... (read more)

Seems to me you're asking the wrong question. I say, don't ask if there is an omnipotent God; that is making an unwarranted narrowing assumption. Why should it be either/or? Lots and lots of room in between the 'omnipotent God' theory and the 'no god at all' theory for 'medium potent god(s)' theories.

And 'medium potent god' theories are not only inherently more likely than either of the extremes, they seem a lot more fruitful and interesting to think about, in terms of possible consequences.

I say, ask if there are beings of ANY kind that are mor... (read more)

Updated link to Piers Steel's meta-analysis on procrastination research (at least I think it's the correct paper): http://studiemetro.au.dk/fileadmin/www.studiemetro.au.dk/Procrastination_2.pdf

In another debate with Bill Craig, atheist Christopher Hitchens gave this objection: "Who designed the Designer? Don’t you run the risk… of asking 'Well, where does that come from? And where does that come from?' and running into an infinite regress?" But this is an elementary misunderstanding in philosophy of science.

I agree that Hitchens should have looked to see what answers theists give to that question. (And he might have; since theists usually respond instead by saying that God is eternal, meaning outside of time and cause and effect, a... (read more)

4lukeprog
PhilGoetz, And I'll give the same reply as I gave to Tim Tyler. :) Hitchens did not mention entropy or complexity. He mentioned exactly and only the why-regress, the exact same why-regress that all scientific hypotheses are subject to. Perhaps the objection you raise to theism would have been good for Hitchens to give, but it is not the objection Hitchens gave. It looks to me like people are trying to make Hitchens look good by putting smarter words in his mouth than the ones he actually spoke.
7Kaj_Sotala
I think it's more the principle of charity. Unless the other person has been mentally designated as an enemy, people tend to look for the most charitable plausible interpretation of his words. People are pointing out that what you gave is a poor example, because your wording doesn't do enough to exclude the most charitable interpretation of Hitchens' words from the set of plausible interpretations. Therefore people will, upon hearing your example, automatically assume that this is actually what Hitchens was trying to say. (I've been known to take this even further. Sometimes I'll point a friend to an article, have the friend ruthlessly criticize it, and then I'll go "oh, of course the thing that the author is actually saying is pretty dreadful, but why would you care about that? If you read it as being about [this semi-related insightful thing he could have been saying instead if he'd thought about it a bit more], then it's a great article!")
8lukeprog
Kaj_Sotala, If Hitchens meant what people are charitably attributing to him, why didn't he make those points in the following rebuttal periods or during the Q&A? Craig gave the exact rebuttal that I just gave, so if Hitchens had intended to make a point about complexity or entropy rather than the point about infinite regress he explicitly made, he had plenty of opportunity to do so. You are welcome to say that there are interesting objections to theism related to the question "Who designed the designer?" What confuses me is when people say I gave a bad example of non-scholarship because I represented Hitchens by what he actually said, rather than by what he did not say, not even when he had an opportunity to respond to Craig's rebuttal. The argument people here are attributing to Hitchens is not the argument he gave. Hitchens gave an objection concerning an infinite regress of explanations. The argument being attributed to Hitchens is a different argument, given in one form by Richard Dawkins as The Ultimate Boeing 747 Gambit. Dawkins' argument is unfortunately vague, though it has been reformulated with more precision (for example, in terms of Kolmogorov complexity) over here.
7Kaj_Sotala
I didn't suggest that he meant that; I suggested that what you said didn't do enough to exclude it from the class of reasonable interpretations of what he might have meant. Suppose someone says to me, like you did, "there's this guy Hitchens, he said the following: 'Who designed the Designer? Don’t you run the risk… of asking "Well, where does that come from? And where does that come from?" and running into an infinite regress?'". The very first thing that comes to mind, and which came to my mind even before I'd read the next sentence, is "oh, I've used that argument myself, when some religious person was telling me 'but the Big Bang had to come from somewhere', that must be what Hitchens meant". That's the default interpretation that will come to the mind of anyone who's willing to give Hitchens the slightest benefit of the doubt. Yes, if people click on the links you provided they will see that the interpretation is wrong, but most people aren't going to do that. And people shouldn't need to click on a link to see that the most plausible-seeming interpretation of what they've read is, in fact, incorrect. If it's important for conveying your message correctly, then you should state it outright. If you give an example about a person's non-scholarship and people start saying "oh, but that doesn't need to be an example of non-scholarship", then it's a much worse example than one that doesn't prompt that response.
3Sly
Another thing to think about is that Hitchens was in a debate. The Christians in the audience he was trying to convince would not be charitable.
1PhilGoetz
You are technically correct. Your initial remarks misled me, for the reasons given by Kaj Sotala below. But it's a good example, if I read it carefully and literally, so don't take that as a criticism.
1lukeprog
Thanks.
2insigniff
Whether or not the first cause argument should be a concern in science, I think Bertrand Russell summarizes its problems quite well: "Perhaps the simplest and easiest to understand is the argument of the First Cause. It is maintained that everything we see in this world has a cause, and as you go back in the chain of causes further and further you must come to a First Cause, and to that First Cause you give the name of God. That argument, I suppose, does not carry very much weight nowadays, because, in the first place, cause is not quite what it used to be. The philosophers and the men of science have got going on cause, and it has not anything like the vitality that it used to have; but apart from that, you can see that the argument that there must be a First Cause is one that cannot have any validity. I may say that when I was a young man, and was debating these questions very seriously in my mind, I for a long time accepted the argument of the First Cause, until one day, at the age of eighteen, I read John Stuart Mill's Autobiography, and I there found this sentence: "My father taught me that the question, Who made me? cannot be answered, since it immediately suggests the further question, Who made God?" That very simple sentence showed me, as I still think, the fallacy in the argument of the First Cause. If everything must have a cause, then God must have a cause. If there can be anything without a cause, it may just as well be the world as God, so that there cannot be any validity in that argument. It is exactly of the same nature as the Hindu's view, that the world rested upon an elephant, and the elephant rested upon a tortoise; and when they said, "How about the tortoise?" the Indian said, "Suppose we change the subject." The argument is really no better than that. There is no reason why the world could not have come into being without a cause; nor, on the other hand, is there any reason why it should not have always existed. There is no reason to suppose that the world had a beginning at all."
0[anonymous]
I think the logical incoherence of theism is a stronger knock-down argument. The most devastating criticism of theism relates not to what caused God but to what causes his actions. God is conceived as an all-powerful will, subjecting him to the same simple argument that disposes of libertarian "free will": either God's conduct is random or it is determined. But conceiving of God as something other than a will makes God otiose. If God acts randomly, the description is indistinguishable from the universe simply being random; if God is determined, that is indistinguishable from the universe simply being determined.

"Who created the creator?" is a good argument, but it isn't decisive. To say God must be more complex than the universe 1) is denied by theists, who call God uniquely simple; and 2) leaves the theist with one (weak) counterargument, inasmuch as it means treating God as a mechanism rather than something that is, well, supernatural. The theist says the causal requirements that govern matter don't apply, and that we're unwarranted in generalizing our observations about the material world to the characteristics of God.

Ultimately, you can't avoid getting down to the really basic question: what is this God? If he's not a deterministic entity, what's the alternative to his behavior being random? [Actually, I'm not sure raw randomness is coherent either, but you don't have to take the argument that far.]