
The Neglected Virtue of Scholarship

166 Post author: lukeprog 05 January 2011 07:22AM

Eliezer Yudkowsky identifies scholarship as one of the Twelve Virtues of Rationality:

Study many sciences and absorb their power as your own. Each field that you consume makes you larger... It is especially important to eat math and science which impinges upon rationality: Evolutionary psychology, heuristics and biases, social psychology, probability theory, decision theory. But these cannot be the only fields you study...

I think he's right, and I think scholarship doesn't get enough praise - even on Less Wrong, where it is regularly encouraged.

First, consider the evangelical atheist community to which I belong. There is a tendency for lay atheists to write "refutations" of theism without first doing a modicum of research on the current state of the arguments. This can get atheists into trouble when they go toe-to-toe with a theist who did do his homework. I'll share two examples:

  • In a debate with theist Bill Craig, agnostic Bart Ehrman paraphrased David Hume's argument that we can't demonstrate the occurrence of a miracle in the past. Craig responded with a PowerPoint slide showing Bayes' Theorem, and explained that Ehrman was only considering prior probabilities, when of course he needed to consider the relevant conditional probabilities as well. Ehrman failed to respond to this, and looked as though he had never seen Bayes' Theorem before. Had Ehrman practiced the virtue of scholarship on this issue, he might have noticed that much of the scholarly work on Hume's argument in the past two decades has involved Bayes' Theorem. He might also have discovered that the correct response to Craig's use of Bayes' Theorem can be found in pages 298-341 of J.H. Sobel’s Logic and Theism.
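For readers who haven't seen the theorem applied this way, here is a toy sketch of the Bayesian point at issue (the numbers are invented purely for illustration): a vanishingly small prior for a miracle M can still be overcome if testimony T is far more probable given M than given not-M, which is why considering only the prior, as Ehrman did, is incomplete.

```python
# Toy Bayes' Theorem sketch (numbers invented for illustration):
#   P(M | T) = P(T|M) P(M) / [ P(T|M) P(M) + P(T|~M) P(~M) ]
# Hume's worry lives in the tiny prior P(M); Craig's rejoinder is that
# the likelihood ratio P(T|M) / P(T|~M) matters just as much.

def posterior(prior, p_t_given_m, p_t_given_not_m):
    """Posterior probability of a miracle M given testimony T."""
    numerator = p_t_given_m * prior
    return numerator / (numerator + p_t_given_not_m * (1 - prior))

# Ordinary testimony (likelihood ratio 9) barely moves a 1-in-a-million prior:
weak = posterior(1e-6, 0.9, 0.1)
# Testimony that would be wildly improbable absent a miracle can overcome it:
strong = posterior(1e-6, 0.9, 1e-9)
```

With these made-up numbers, `weak` comes out around 9 × 10⁻⁶ while `strong` comes out near 0.999: both Hume's point and Craig's fall out of the same formula.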

  • In another debate with Bill Craig, atheist Christopher Hitchens gave this objection: "Who designed the Designer? Don’t you run the risk… of asking 'Well, where does that come from? And where does that come from?' and running into an infinite regress?" But this is an elementary misunderstanding in philosophy of science. Why? Because every successful scientific explanation faces the exact same problem. It’s called the “why regress” because no matter what explanation is given of something, you can always still ask “Why?” Craig pointed this out and handily won that part of the debate. Had Hitchens had a passing understanding of science or explanation, he could have avoided looking foolish, and also spent more time on substantive objections to theism. (One can give a "Who made God?" objection to theism that has some meat, but that's not the one Hitchens gave. Hitchens' objection concerned an infinite regress of explanations, which is just as much a feature of science as it is of theism.)

The lesson I take from these and a hundred other examples is to employ the rationality virtue of scholarship. Stand on the shoulders of giants. We don't each need to cut our own path into a subject right from the point of near-total ignorance. That's silly. Just catch the bus on the road of knowledge paved by hundreds of diligent workers before you, and get off somewhere near where the road finally fades into fresh jungle. Study enough to have a view of the current state of the debate so you don't waste your time on paths that have already dead-ended, or on arguments that have already been refuted. Catch up before you speak up.

This is why, in more than 1000 posts on my own blog, I've said almost nothing that is original. Most of my posts instead summarize what other experts have said, in an effort to bring myself and my readers up to the level of the current debate on a subject before we try to make new contributions to it.

The Less Wrong community is a particularly smart and well-read bunch, but of course it doesn't always embrace the virtue of scholarship.

Consider the field of formal epistemology, an entire branch of philosophy devoted to (1) mathematically formalizing concepts related to induction, belief, choice, and action, and (2) arguing about the foundations of probability, statistics, game theory, decision theory, and algorithmic learning theory. These are central discussion topics at Less Wrong, and yet my own experience suggests that most Less Wrong readers have never heard of the entire field, let alone read any works by formal epistemologists, such as In Defense of Objective Bayesianism by Jon Williamson or Bayesian Epistemology by Luc Bovens and Stephan Hartmann.

Or, consider a recent post by Yudkowsky: Working hurts less than procrastinating, we fear the twinge of starting. The post attempts to make progress against procrastination by practicing single-subject phenomenology, rather than by first catching up with a quick summary of scientific research on procrastination. The post's approach to the problem looks inefficient to me. It's not standing on the shoulders of giants.

This post probably looks harsher than I mean it to be. After all, Less Wrong is pretty damn good at scholarship compared to most communities. But I think it could be better.

Here's my suggestion. Every time you're tempted to tackle a serious question in a subject on which you're not already an expert, ask yourself: "Whose giant shoulders can I stand on, here?"

Usually, you can answer the question by doing the following:

  1. Read the Wikipedia article on the subject, and glance over the references.
  2. Read the article on the subject in a field-specific encyclopedia. For example, if you're probing a philosophical concept, find the relevant essay(s) in The Routledge Encyclopedia of Philosophy or the Internet Encyclopedia of Philosophy or the Stanford Encyclopedia of Philosophy. Often, the encyclopedia you want is at your local library or can be browsed at Google Books.
  3. Read or skim-read an entry-level university textbook on the subject.

There are so many resources for learning available today that the virtue of scholarship has never in human history been so easy to practice.

Comments (150)

Comment author: ksvanhorn 10 March 2012 10:10:37PM 11 points [-]

In my experience, Ph.D. dissertations can be a wonderful resource for getting an overview of a particular academic topic. This is because the typical -- and expected -- pattern for a dissertation is to first survey the existing literature before diving into one's own research. This both shows that the doctoral candidate has done his/her homework, and, just as importantly, brings his/her committee members up to speed on the necessary background. For example, a lot of my early education in Bayesian methods came from reading the doctoral dissertations of Wray Buntine, David J. C. MacKay, and Radford Neal on applications of Bayesian methods to machine learning. Michael Kearns' dissertation helped me learn about computational learning theory. A philosophy dissertation helped me learn about temporal logic.

Of course, this requires that you already have some background in some related discipline. My background was in computer science when I read the above-mentioned dissertations, along with a pretty good foundation in mathematics.

Comment author: passive_fist 12 December 2013 09:39:17AM 2 points [-]

Research moves fast, though; a dissertation just 3 or 4 years old may already be hopelessly out of date. Also, dissertations are written by PhD students who, while masters in their own field of expertise, are really only 'apprentices' in training and may not be very knowledgeable about areas only slightly outside their domain.

Scientific journals often publish 'review' articles where people with decades of intimate knowledge about a field summarize recent developments. They are usually more concise than dissertations, and often written much better too. They are also peer-reviewed, just like dissertations and other papers.

Comment author: JenniferRM 05 January 2011 04:22:17PM *  39 points [-]

There are so many resources for learning available today, the virtue of scholarship has never in human history been so easy to practice.

Indeed.

I followed the links to In Defense of Objective Bayesianism by Jon Williamson and Bayesian Epistemology by Luc Bovens and Stephan Hartmann. They were expensive and unreviewed and my book reading heuristics generally require three independent suggestions before I start taking a book seriously.

A cheaper trick was to search the Stanford Encyclopedia of Philosophy for Bovens, Hartmann, and Williamson, which led to a nest of articles, some of which mentioned several of them. I listed and prioritized them using ad hoc scoring (points for mentioning each person and a good title). Hartmann jumped out because he had wider ranging interests and was tapped to co-author the encyclopedia article "Models In Science". To reduce the trivial inconvenience of starting to read, I reproduce the suggested reading list with my ad hoc numerical priorities right here:

Comment author: lukeprog 05 January 2011 04:55:57PM 6 points [-]

I see that you have some experience applying the virtue of scholarship... :)

Comment author: Mass_Driver 05 January 2011 08:36:00AM 23 points [-]

In general I'm very sympathetic to this point of view, and there are some good examples in your post.

One bad example, in my opinion, is Eliezer's recent procrastination post vs. the survey of "scientific research on procrastination." I read the chapter, and it appears to mostly cite studies that involved casual surveys, subjective description, and fuzzy labeling. Although there are many valid scientific endeavors that involve nothing but categorization (it is interesting to know how many species of tree frog there are and what they look and sound like even if we do not make any predictions beyond what is observed), categorization should at least be rigorous enough that we can specify what we expect to see with a modicum of precision.

When a biologist says that frogus neonblueicus has neon blue spots and chirps at 500 Hz, she will give you enough information that you can go to Costa Rica and check for yourself whether you have found one of the rare neonblueicus specimens. Although there will be some controversies around the edges, your identification of any particular frog will not correlate with your political biases or personal problems, and repeated observation of the same frog population by a few different researchers will tend to decrease error.

When a psychologist says that procrastinators can be divided into "relaxed" types and "tense-afraid" types, the "science" being done is not merely descriptive, but also horrifyingly vague. What does it mean for a human to be "tense-afraid" when "procrastinating"? The three paragraphs or so of context on the topic give you enough of an idea of what the researcher is saying to conjure up a mental image, but not nearly enough to carve thing-space at the joints.

In my experience, this is a very serious problem in social and human sciences -- there are whole subfields where the authors do not know how little they know, and proceed to wax eloquent about all of the empty concepts they have coined. There are other subfields where the researchers suspect that they might not have done very good research, and they cover their tracks with advanced statistics and jargon. After you dig through a few of these booby-trapped caves of wonder, you start to lose, if not respect for scholarship, at least some of the urge to do the moderately hard work of digesting literature reviews yourself on a regular basis. It is dangerous to assume that casually studying the leading textbook in a soft field will usually make you smarter.

Comment author: shokwave 05 January 2011 09:23:35AM *  7 points [-]

It is dangerous to assume that casually studying the leading textbook in a soft field will usually make you smarter.

However, enough rationality training will have alarm bells ringing when reading soft textbooks and studies. That in itself - "this field is overpopulated with concepts and undermeasured" - is marginally more useful than knowing nothing about the field.

Comment author: lukeprog 05 January 2011 10:24:56AM *  10 points [-]

If you haven't already, you should try reading postmodern philosophy. An uninterrupted wall of alarm bells. :)

Comment author: shokwave 05 January 2011 01:12:24PM 12 points [-]

I was a philosophy student for my brief attempt at tertiary education - I know what you mean. Our lecturer would describe the text as 'dense' - more aptly, I thought, the author is dense.

An anecdote from that class: after a lecture on Wittgenstein, a student asked the lecturer if the rest of the semester's lectures were to be canceled.

Comment author: Will_Sawin 05 January 2011 03:06:00PM 1 point [-]

I cannot think of a single obvious interpretation for why this occurred, but I can think of a few possible ones. Could you please clarify?

Comment author: gwern 05 January 2011 04:06:57PM *  5 points [-]

There is an obvious one, actually - a frequent (perhaps inaccurate) interpretation of the last parts of the Tractatus is as a denial of the possibility of any real philosophy (including Wittgenstein's).

Since one would naturally cover the Tractatus before The Philosophical Investigations or other works, a rather juvenile response would be exactly that anecdote.

Comment author: shokwave 05 January 2011 04:17:39PM 2 points [-]

Yep. The lecture presented the view that Wittgenstein had explained away most of philosophy - in his own words, that he had resolved all philosophical problems.

Comment author: PhilGoetz 05 January 2011 06:26:42PM *  5 points [-]

How silly of Wittgenstein! Didn't he know that Hegel had already completed philosophy?

Comment author: PatrickAchtelik 05 January 2011 08:54:45PM 15 points [-]

Oh, Hegel. I remember a lecture where the professor read from Hegel's Wissenschaft der Logik like it was a holy scripture. When he was finished, he looked up and said: "With this, everything is said". I didn't understand anything, it was a jungle of words like being and not-being and becoming and how one thing becomes the other. I said that I didn't understand anything, and what did the lecturer reply with a smile? "It's good you don't understand it!" I seriously had the intense urge to shout at him, but instead I just didn't show up anymore.

Comment author: insigniff 05 July 2013 09:18:44AM 0 points [-]

A perhaps equally juvenile concern of mine, is whether Wittgenstein himself failed to stand on the shoulders of giants (at least in the Tractatus), by essentially starting from scratch with his own propositions, drawing logical conclusions from them rather than using or at least referring to previous work.

Comment author: lukeprog 05 January 2011 08:50:26AM *  12 points [-]

I very much agree with your final sentence.

Do you think Eliezer's post is more precise and useful than the controlled experiments published in peer-reviewed journals described in the book I linked to? I find that most writing on psychology is necessarily pretty soft, because the phenomena it is trying to describe are vastly more complicated than those of the hard sciences.

Comment author: Mass_Driver 05 January 2011 10:04:06AM 6 points [-]

Now, that link is a must-read. I got through the whole first chapter before I could look away, and I'll be going back for the rest.

I have nothing against psychology or psychologists or social science in general -- AP Psych was my second favorite class in high school, my mom has a master's degree in it, my bachelor's degree is in political science, etc. It's noble, hard work, and we even have a little bit of knowledge to show for it.

As for the "controlled experiments" described in the book you linked to, I'm afraid I missed them, for which I apologize. I only saw descriptive papers. Maybe a page reference when you get a chance? Or just link directly to one or two of the studies or the abstracts?

Comment author: lukeprog 05 January 2011 10:22:24AM 2 points [-]

Oops, you're right that my link does not mention controlled experiments. A few controlled experiments are instead mentioned in other sections of the book on techniques applicable to a greater variety of behavior change goals.

Unfortunately, the author of Psychological Self-Help died last year, and his book has not been updated much in the past decade. Of course, more work on procrastination has been done in recent years, though I'm not sure if it is collected nicely anywhere.

Comment author: shokwave 05 January 2011 09:31:37AM 6 points [-]

I find that most writing on psychology is necessarily pretty soft, because the phenomena it is trying to describe are vastly more complicated than those of the hard sciences.

Is there one more step in there? Vastly more complicated -> science happens at much higher levels of abstraction -> high level abstract science is necessarily pretty soft? Because it seems to me psychology is necessarily soft because it doesn't want to turn into thirty years of neurobiology before it can talk about human behaviour.

Comment author: TheOtherDave 05 January 2011 04:39:38PM 17 points [-]

Because it seems to me psychology is necessarily soft because it doesn't want to turn into thirty years of neurobiology before it can talk about human behaviour.

I hear this sentiment echoed a lot, and I have to admit to either not understanding it or strongly disagreeing with it.

Claiming that psychology has nothing useful to say about human behavior until it can be fully cashed out in neurobiology strikes me as mistaken in many of the same ways as claiming that ballistics has nothing useful to say about missile trajectories until it can be fully cashed out in a relativistic understanding of gravity.

Yes, our missiles don't always hit where we want them to, even after thousands of years of work in ballistics. But a deeper understanding of gravity won't help with that. If we want to improve our practical ability to hit a target, we have to improve our mastery of ballistics at the level of ballistics.

That isn't quite as true for psychology and neurobiology, granted: the insights afforded by neurobiology often do improve our practical ability to "hit a target." (Most strikingly in the last few decades, they have allowed us to develop an enormously powerful medical technology for achieving psychological results, which is nothing short of awesome.)

But I think it's a mistake to conclude from that, that everything about human cognition and behavior can be more usefully described at the level of neurobiology than psychology. There's a difference between reductionism and greedy reductionism.

If the state of the art in psychology is too soft, too vague, or too contingent, then the goal to strive for is to make it more rigorous, more specific, or more reliable... not to give it up altogether and work exclusively in neurobiology instead.

Comment author: shokwave 05 January 2011 05:01:22PM *  5 points [-]

Claiming that psychology has nothing useful to say about human behavior until it can be fully cashed out in neurobiology strikes me as mistaken

Ha, no, I'm on your side. Psychology can say useful things precisely because it isn't cashed out in neurobiology. The point I was making was that in order to have simple rules for brains, in all their hundred-billion-neuron complexity, you need to have softer edges on your predictions.

I don't mean soft in any derogatory way. The concept I was aiming for was something like Eliezer's "well, you could simulate an aeroplane prototype from the quark-level up, but that's inefficient. The field of aerodynamics has good approximations for that macro-scale behaviour": even if you did drop psychology for neurobiology, describing human behaviour from the neuron-level up is inefficient. Psychology is soft in that it uses approximations of human behaviour, in order to be useful on human timescales. This is a good thing, made no worse by the fact that it necessitates some level of 'soft'ness.

(I think the concern some people have with psychology is that they perceive it as too soft. Availability bias has them drawing generalisations from the describes-everything Freudian analysis, and so forth.)

Comment author: TheOtherDave 05 January 2011 05:07:13PM 2 points [-]

(nods) Fair enough, and agreed throughout.

I stand by my response in and of itself, but I sheepishly admit that it's not actually a response to you at all. Rereading your comment, I conclude that I was overtrained on the kind of objections I responded to, which you didn't actually make... sorry about that.

Comment author: shokwave 05 January 2011 05:14:32PM *  0 points [-]

Doesn't bother me in the slightest. In fact, I almost included another parenthetical:

(Hard scientists probably do think hard is good and soft is bad, but that's because they're hard scientists. Soft scientists are probably sensitive to the negative connotations the hard scientists attach to these terms, because there is something of a rivalry between hard and soft science.)

I guess you've studied some kind of soft science at a college or university?

(I feel like I have overused the terms, though. I make it sound as if there is a strict divide, when in my mind it's an evenly distributed spectrum.)

Comment author: billswift 05 January 2011 02:19:32PM 4 points [-]

I think it is more: Complication allows the researchers' biases to slip in more easily, since among other things any sort of cross-check is nearly impossible, which leads to softer results, especially when being evaluated by someone with different biases.

Comment author: [deleted] 05 January 2011 10:31:01PM *  0 points [-]

The book you linked to is mostly irrelevant to the problem Eliezer was addressing. The author writes, "Both types of procrastinators dislike the chores they are avoiding." Eliezer's hypothesis is a contribution even if (like me) you don't think it true. Eliezer recognized that ordinary hyperbolic discounting can't explain procrastination such as he experiences, where he decidedly does not dislike the activities, which can't be described as "chores." His clever solution is to apply hyperbolic-discounting considerations to mental acts.

I don't think it's accurate to say Eliezer posted in ignorance of the literature on procrastination. Everything the book you linked to mentions is well-known, truistic by now, except the distinction between relaxed and tense procrastinators--a dispensable classification.

Hyperbolic discounting is pretty clearly the correct overarching framework for the kind of procrastination the author of the linked book discusses—but you don't learn that from the linked book (unless I missed it).
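For readers unfamiliar with it, hyperbolic discounting's signature prediction is preference reversal: seen from a distance, the larger-later reward is preferred, but as the smaller-sooner one draws near, the preference flips. A toy sketch with invented numbers:

```python
# Hyperbolic discounting (illustrative numbers only):
# present value = V / (1 + k*d) for reward V at delay d.
# Unlike exponential discounting, this produces preference
# reversals as a deadline approaches.

def hyperbolic_value(reward, delay, k=1.0):
    return reward / (1 + k * delay)

# $60 sooner vs. $100 a day later, judged from two vantage points:
far_small = hyperbolic_value(60, delay=10)    # larger-later wins from afar...
far_large = hyperbolic_value(100, delay=11)
near_small = hyperbolic_value(60, delay=0)    # ...but smaller-sooner wins
near_large = hyperbolic_value(100, delay=1)   #    once it is imminent
```

With exponential discounting the ranking of the two rewards never changes with the vantage point, which is why the hyperbolic curve is the one usually invoked to explain procrastination.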

Comment author: lukeprog 18 January 2011 11:17:20PM 2 points [-]

Perhaps my choosing a recent Eliezer article as one example of an underuse of scholarship is an instance of "people trying to show off how willing they are to disagree with" Eliezer Yudkowsky!

Comment author: MichaelVassar 05 January 2011 05:55:15PM 13 points [-]

I think that most people just don't believe that philosophy has any value. I used to believe that it didn't, gradually concluded that it did, but then gradually concluded that yes, 99.9% of it really is worthless such that even reading contemporary famous people or summaries of their arguments (though not discussing such arguments with your epistemic peers who are familiar with them, and not reading pre-WWII philosophers) really is a waste of time.

Comment author: lukeprog 05 January 2011 07:11:06PM *  29 points [-]

I agree that 99.9% of philosophy is very close to worthless. Its signal-to-noise ratio is much lower than in the sciences or in mathematics.

This brings to mind Eliezer's comment that "...if there’s any centralized repository of reductionist-grade naturalistic cognitive philosophy, I’ve never heard mention of it."

But reductionist-grade naturalistic cognitive philosophy is probably an even larger sub-field of philosophy than the formal epistemology I mentioned above. Names that come immediately to mind are: John Bickle, Pat & Paul Churchland, Paul Thagard, Tim Schroeder, William Calvin, Georg Northoff, Thomas Metzinger.

There's some good philosophy out there. Unfortunately, you normally only encounter it after you've spent quite a while studying bad philosophy. Most people are introduced to philosophy through Plato, Aristotle, Aquinas, Kant, and Hegel, and might never suspect a neurophilosopher like John Bickle exists.

Which reminds me of the old Bertrand Russell line:

Hitherto the people attracted to philosophy have been mostly those who loved the big generalizations, which were all wrong, so that few people with exact minds have taken up the subject.

Comment author: diegocaleiro 07 January 2011 05:38:59AM 0 points [-]

Having been one of the exceptions, I wonder if there are enough exceptions to create critical mass for philosophy to take off, or if we will always be condemned (in a good sense) to merge with fields that enjoy precision, such as cog psy, chemistry, physics, maths, neuroscience, ethology, evo psy and so on...

Not that I mind being partly neuro/psycho/evo... it's just that there are, summing all these fields, too many papers to read in a lifetime...

Comment author: Desrtopa 07 January 2011 06:24:23AM 0 points [-]

I think that the state of the field is still something of a barrier to the sort of people who would be of most benefit to it. I personally dropped my double major in philosophy after becoming fed up with how much useless and vacuous material I was being required to cover.

Comment author: Jack 06 January 2011 03:49:02AM 14 points [-]

I don't know what percentage of writing that gets called "philosophy" is worthwhile, but it isn't that hard to narrow your reading material down to relevant and worthwhile texts. It's really weird to see comments like this here because so much of what I've found on Less Wrong are ideas I've seen previously in philosophy I've read. Moreover, a large fraction of my karma I got just by repeating or synthesizing things I learned doing philosophy - and I'm not the only one who's gotten karma this way.

I find it particularly perplexing that you think it's a good idea to only read pre-WWII philosophers as their ideas are almost always better said by contemporary authors. One of my major problems with the discipline is that it is mostly taught by doing history of philosophy- forcing students to struggle with the prose of a Plato translation and distilling the philosophy from the mysticism instead of just reading Bertrand Russell on universals.

Comment author: Kaj_Sotala 06 January 2011 09:57:07AM 3 points [-]

so much of what I've found on Less Wrong are ideas I've seen previously in philosophy I've read. Moreover, a large fraction of my karma I got just by repeating or synthesizing things I learned doing philosophy - and I'm not the only one who's gotten karma this way.

Examples would probably make your point much more persuasive (not that I'm saying that it's unpersuasive, just that it feels a bit abstract right now).

Comment author: Jack 06 January 2011 09:08:57PM *  21 points [-]

Agreed. I was just being lazy.

I already didn't believe in the Copenhagen Interpretation because of a Philosophy of Physics course where my professor took Copenhagen to be the problem statement instead of a possible solution. That whole sequence is more or less something one could find in a philosophy of physics book- though I don't myself think it is Eliezer's best series.

Before coming here my metaethics were already subjectivist/anti-realist. There's about a century's worth of conceptual distinctions that would make the Metaethics Sequence clearer- a few of which I've made in comments leading to constructive discussion. I feel like I'm constantly paraphrasing Hume in these discussions where people try to reason their way to a terminal value.

There is Philosophy of Math, where there was a +12 comment suggesting the post be better tied to academic work on the subject. My comments were well upvoted and I was mostly just prodding Silas with the standard Platonist line plus a little Quine.

History and Philosophy of Science comes up. That discussion was basically a combination of Kuhn and Quine (plus a bunch of less recognizable names who talk about the same things).

Bayesian epistemology is, itself, a subfield of philosophy but people here seem mostly unfamiliar with the things academics consider to be open problems. Multiple times I've seen comments that take a couple paragraphs to hint at the fact that logical fallibility is an open problem for Bayesian epistemology- which suggests the author hadn't even read the SEP entry on the subject. The Dutch Book post I made recently (which I admittedly didn't motivate very well) was all philosophy.
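(For readers who haven't met the term: a Dutch book is a set of bets, each individually fair by the agent's own credences, that jointly guarantee a loss. A minimal sketch with invented numbers:)

```python
# Dutch book sketch (toy numbers): an agent whose credences violate the
# probability axioms -- say P(Rain) = 0.6 and P(NoRain) = 0.6, summing
# to 1.2 -- treats each bet below as fair, yet taking both guarantees a loss.

def net_from_bet(credence, event_occurs, stake=1.0):
    """Pay credence * stake for a ticket worth stake if the event occurs."""
    return (stake if event_occurs else 0.0) - credence * stake

# Whatever the weather does, the combined bets lose 0.2 of the stake:
outcomes = [net_from_bet(0.6, rain) + net_from_bet(0.6, not rain)
            for rain in (True, False)]
```

This is the standard pragmatic argument that credences should obey the probability axioms; the open problems mentioned above concern how far such arguments extend.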

Eliezer's posts on subjective probability are all Philosophy of Probability. Eliezer and others have written more generally about epistemology, and in these cases they've almost always been repeating or synthesizing things said by people like Popper and Carnap.

On the subject of personal identity much of what I've said comes from a few papers I wrote on the subject and many times I've thought the discussion here would be clearer if supplemented by the concepts invented by people like Nozick and Parfit. In any case, this is a well developed subfield.

The decision theory stuff on this site, were it to be published, would almost certainly be published in a philosophy journal.

Causality hasn't been discussed here much except for people telling other people to read Judea Pearl (and sometimes people trying to summarize him, though often poorly). I heard about Pearl's book because it argues much the same thing as Making Things Happen which I read for a philosophy class. Woodward's book is a bit less mathy and more concerned with philosophical and conceptual issues. Nonetheless, both are fairly categorized as contemporary philosophy. Pearl may not hold a teaching position in philosophy- but he's widely cited as one and Causality won numerous awards from philosophical institutions.

The creator of the Sleeping Beauty Problem is a philosopher.

I'm certain I'll think of more examples after I publish this.

Comment author: Will_Sawin 07 January 2011 10:35:03PM 3 points [-]

I would not underestimate the value of synthesizing the correct parts of philosophy vs. being exposed to a lot of philosophy.

The Bayesian epistemology stuff looks like something I should look into. The central logic of Hume was intuitively obvious to me; philosophy of math doesn't strike me as important once you convince yourself that you're allowed to do math; philosophy of science isn't important once you understand epistemology; personal identity isn't important except as it plays into ethics, which is too hard.

I'm interested in the fact that you seem to suggest that the decision theory stuff is cutting-edge level. Since that is the part I spend the most time thinking and talking about, is my activity relatively less susceptible to the scholastic critique? Is there academic philosophy that has things to say to TDT, UDT, and so on?

Comment author: Jack 18 January 2011 02:36:25PM *  0 points [-]

Since that is the part I spend the most time thinking and talking about, is my activity relatively less susceptible to the scholastic critique?

No, it makes you more susceptible- if you're actually working on a problem in the field that's all the more reason to know the scholarly work.

Is there academic philosophy that has things to say to TDT, UDT, and so on?

Obviously, since TDT and UDT were invented like two years ago and haven't been published, academic philosophy says nothing directly about them. But there is a pretty robust literature on Causal vs. Evidential Decision theory and Newcomb's problem. You've read Eliezer's paper haven't you? He has a bibliography. Where did you think the issue came from? The whole thing is a philosophy problem. Also see the SEP.

Comment author: Will_Sawin 18 January 2011 04:27:56PM 1 point [-]

Obviously, since TDT and UDT were invented like two years ago and haven't been published, academic philosophy says nothing directly about them. But there is a pretty robust literature on Causal vs. Evidential Decision theory and Newcomb's problem. You've read Eliezer's paper haven't you? He has a bibliography. Where did you think the issue came from? The whole thing is a philosophy problem. Also see the SEP.

"To say to" means something different then "to talk about". For example, if someone makes epistemological claim XYZ, even if no Bayesian epistemologist has refuted that exact claim, their general arguments can be used in evaluating the claim.

If mainstream philosophers had come up with a decision theory better than evidential and causal (which are both wrong), then people who had already surpassed EDT and CDT would be forced to read them. But if they haven't, then lesswrong has already surpassed the limit of the philosophical literature. That's what I'm asking.

I will look at the SEP when I next have time.

No, it makes you more susceptible: if you're actually working on a problem in the field, that's all the more reason to know the scholarly work.

You think that the one who ignores the literature while working on a problem that is unsolved in the literature is more blameworthy than one who ignores the literature while working on a problem that is solved in the literature?

Comment author: Jack 18 January 2011 06:08:59PM 2 points [-]

You think that the one who ignores the literature while working on a problem that is unsolved in the literature is more blameworthy than one who ignores the literature while working on a problem that is solved in the literature?

I suppose it is about the same. I think anyone working on a problem while not knowing if it has been solved, partly solved or not solved at all in the literature is very blameworthy.

For example, if someone makes epistemological claim XYZ, even if no Bayesian epistemologist has refuted that exact claim, their general arguments can be used in evaluating the claim.

Right, I don't know the field nearly well enough to answer this question. I would be surprised if nothing in the literature was a generalizable concern that TDT/UDT should deal with.

If mainstream philosophers had come up with a decision theory better than evidential and causal (which are both wrong), then people who had already surpassed EDT and CDT would be forced to read them. But if they haven't, then lesswrong has already surpassed the limit of the philosophical literature. That's what I'm asking.

There have been lots of attempts to solve Newcomb's problem, by amending EDT or CDT or inventing a new decision theory. Many, perhaps most, of these use concepts related to TDT/UDT: possible worlds, counterfactuals, and Jeffrey's notion of ratifiability (all three of these concepts are mentioned in Eliezer's paper). Again, I don't know the details of the major proposals, though skimming the literature it looks like none have been conclusive or totally convincing. But it seems plausible that the arguments which sink those theories might also sink the Less Wrong-developed ones. It also seems very plausible that the theoretical innovations involved in those theories might be fruitful things for LW decision theorists to consider.

There have also been lots of things written about Newcomb's problem- papers that don't claim to solve anything but which claim to point out interesting features of this problem.

I don't really understand the resistance to reading the literature. Why would you think insight in this subject area would be restricted to a cloistered little internet community (wonderful though we are)?

Comment author: Will_Sawin 18 January 2011 09:58:12PM *  0 points [-]

I suppose it is about the same. I think anyone working on a problem while not knowing if it has been solved, partly solved or not solved at all in the literature is very blameworthy.

I was previously aware that Newcomb's problem was somewhere between partly solved and not solved at all, which is at least something. With the critique brought to my attention, I attempted cheap ways of figuring it out, first asking you and then reading the SEP article on your recommendation.

Right, I don't know the field nearly well enough to answer this question. I would be surprised if nothing in the literature was a generalizable concern that TDT/UDT should deal with.

That is a point.

I also didn't say what I think I really wanted to say, which is that: If I read someone advocating a non-Bayesian epistemology, I react: "This is gibberish. Come back to me once you've understood Bayesian epistemology and adopted it or come up with a good counterargument." The same thing is true of the is-ought distinction: An insight which is obviously fundamental to further analysis in its field.

Reflective consistency, the question of why you build an agent with a Could-Should Architecture, Updateless decision theory - these seem like those kinds of insights in decision theory. Nothing on the SEP page (most of which I'd seen before, in the TDT paper or Wikipedia or whatever) seemed like that. I presume that if philosophers had insights like that, they would put them on the page.

I conclude (with two pretty big ifs) that while philosophers have insights, they don't have very good insights.

I don't really understand the resistance to reading the literature. Why would you think insight in this subject area would be restricted to a cloistered little internet community (wonderful though we are)?

  1. I freely admit to some motivated cognition here. Reading papers is not fun, or, at least, less fun than thinking about problems, while believing that insight is restricted to a cloistered community is fun.

  2. You make claim X, I see possible counterargument Y, responding argumentatively with Y is a good way to see whether you have any data on Y that sheds light on the specifics of X.

  3. Knowing what I know about academic philosophy and the minds behind lesswrong's take on decision theory, that strikes me as totally possible.

Comment author: Jack 19 January 2011 09:28:12PM 7 points [-]

Reflective consistency, the question of why you build an agent with a Could-Should Architecture, Updateless decision theory - these seem like those kinds of insights in decision theory. Nothing on the SEP page (most of which I'd seen before, in the TDT paper or Wikipedia or whatever) seemed like that. I presume that if philosophers had insights like that, they would put them on the page.

Well, presumably you find Nozick's work formulating Newcomb's and Solomon's problems insightful. Less Wrong's decision theory work isn't sui generis. I suspect a number of things on that page are insightful solutions to problems you hadn't considered. That some of them are made in the context of CDT might make them less useful to those seeking to break from CDT, but it doesn't make them less insightful. Keep in mind: this is the SEP page on Causal Decision Theory, not Newcomb's problem or any other decision theory problem. It's going to be a lot of people defending two-boxing. And it's an encyclopedia article, which means there isn't a lot of room to motivate or explain in detail the proposals. To see Eliezer's insights into decision theory it really helps to read his paper, not just his blog posts. Same goes for other philosophers. I just linked to the SEP because it was convenient and I was trying to show that yes, philosophers do have things to say about this. If you want more targeted material you're gonna have to get access to an article database and do a few searches.

Also, keep in mind that if you don't care about AI, decision theory is a pretty parochial concern. If Eliezer published his TDT paper it wouldn't make him famous or anything. Expecting all the insights on a subject to show up in an online encyclopedia article about an adjacent subject is unrealistic.

From what I see on the SEP page, ratification in particular seems insightful and capable of doing some of the same things TDT does. The Death in Damascus/decision instability problem is something for TDT/UDT to address.

In general, I'm not at all equipped to give you a guided tour of the philosophical literature. I know only the vaguest outline of the subfield. All I know is that if I was really interested in a problem and someone told me "Look, over here there's a bunch of papers written by people from the moderately intelligent to the genius on your subject and closely related subjects" I'd be like "AWESOME! OUT OF MY WAY!". Even if you don't find any solutions to your problems, the way other people formulate the problems is likely to provoke insights.

I conclude (with two pretty big ifs) that while philosophers have insights, they don't have very good insights.

Concluding anything about philosophers' insights when you haven't read any papers and two days ago you weren't aware there were any papers is a bit absurd.

Knowing what I know about academic philosophy and the minds behind lesswrong's take on decision theory, that strikes me as totally possible.

As far as I can tell you don't know much at all about academic philosophy. As for the minds behind the LW take on decision theory, I'm not sure what it is they've accomplished besides writing some insightful things about decision theory.

I mean, Christ, consider the outside view!

Comment author: PhilGoetz 05 January 2011 06:13:27PM *  5 points [-]

Curious about the WW2 comment. Trouble parsing it. Do you think pre-WW2, or post-WW2, philosophers are more worthwhile?

I would say pre-Nietzsche philosophers are no longer very worthwhile for helping you solve contemporary problems of philosophy, although some (like Berkeley, Hume, and Spinoza) were worthwhile for a time. (This is partly because I think causation and epistemology are not as important as issues like values, ethics, categorization, linguistic meaning, and self-identity.) Some, like Kant, provide definitions that may help clarify things for you, and that you will need if you want to talk to philosophers.

Ancient Greek and Roman poets and orators are worthwhile, because they describe an ethical system that contrasts dramatically with ours. But I read (pre-20th century) Native American speeches for the same reason, and lend them the same credence.

Comment author: Will_Newsome 06 January 2011 02:28:56AM 4 points [-]

Ancient Greek and Roman poets and orators are worthwhile, because they describe an ethical system that contrasts dramatically with ours.

Really? Who is 'ours'? I've agreed with most of what I've seen of Greek ethical philosophy, and I thought most Less Wrong people would too. (I'm thinking of arete, eudaimonia, et cetera... their ethical ontology always seemed pretty reasonable to me, which is to be expected since we're all pretty Greek memetically speaking.)

Comment author: PhilGoetz 06 January 2011 03:44:19AM *  8 points [-]

Nietzsche gives one take on this distinction, when he contrasts "good vs. bad" or "master" moralities with "good vs. evil" or "slave" moralities. An evil man is one with evil goals; a bad man is one who is inept at achieving his goals.

Another contrast is that if the Greeks or the Romans had been utilitarians, they would never have been average utilitarians, and I don't think they would even have been total utilitarians. They might have been maximum utilitarians, believing that a civilization's measure was the greatness of its greatest achievements and its greatest people. Americans must have at least briefly believed something like this when they supported the Apollo program.

(I must be overgeneralizing any time I am speaking of the morals of both Athens and Sparta.)
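The contrast among average, total, and "maximum" utilitarianism comes apart even on tiny examples. A sketch, with utility numbers invented purely for illustration:

```python
small_happy = [9, 9]       # two flourishing individuals
large_mixed = [6] * 10     # ten moderately happy individuals

def average(utilities):
    return sum(utilities) / len(utilities)

# Average utilitarianism prefers the small, happy world...
assert average(small_happy) > average(large_mixed)
# ...total utilitarianism prefers the larger population...
assert sum(small_happy) < sum(large_mixed)
# ...and a "maximum" utilitarian, as described above, would judge
# a world by its greatest achievement.
assert max(small_happy) > max(large_mixed)
```

The three aggregation rules rank these two worlds differently, which is what makes attributing one of them to a whole civilization such a substantive claim.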

Comment author: Desrtopa 06 January 2011 02:56:19AM 6 points [-]

Classical Greek ethicists propounded values that were in many ways similar to modern ones. Ancient Greece is the time period in which works like the Iliad were put to writing, and those demonstrate some values that are quite foreign to us.

Comment author: NancyLebovitz 06 January 2011 04:00:54PM 0 points [-]

Philosophy seems to offer a very low chance of doing something extremely valuable.

I suspect it's the valuable human activity with the most extreme odds against success.

Comment author: spencerth 06 January 2011 11:09:59AM *  15 points [-]

Though I agree with you strongly, I think we should throw the easy objection to this out there: high-quality, thorough scholarship takes a lot of time. Even for people who are dedicated to self-improvement, knowledge and truth-seeking (which I speculate this community has many of), for some subjects, getting to the "state of the art"/minimum level of knowledge required to speak intelligently, avoid "solved problems", and not run into "already well refuted ideas" is a very expensive process. So much so that some might argue that communities like this wouldn't even exist (or would be even smaller than they are) if we all attempted to get to that minimum level in the voluminous, ever-growing list of subjects that one could know about.

This is a roundabout way of saying that our knowledge-consumption abilities are far too slow. We can and should attempt to be widely, broadly read knowledge-generalists and stand on the shoulders of giants; climbing even one, though, can take a dauntingly long time.

We need Matrix-style insta-learning. Badly.

Comment author: scav 06 January 2011 03:55:02PM 19 points [-]

getting to the "state of the art"/minimum level of knowledge required to speak intelligently, avoid "solved problems", and not run into "already well refuted ideas" is a very expensive process.

So is spending time and effort on solved problems and already well refuted ideas.

Comment author: FiftyTwo 10 October 2011 12:57:55AM 4 points [-]

True. But there are also personal benefits to working on problems (increased cognitive ability, familiarity with useful methods, etc.) that arise even if the problem itself is already 'solved.'

Comment author: Davidmanheim 28 January 2013 08:07:48PM 0 points [-]

And worse, by spending time on solved problems and refuted ideas in public, you can easily destroy your credibility with those that could help you.

This is a serious issue with how people like us, who have interdisciplinary interests, interact with and are respected by experts in fields touching on our own. Those who study, for instance, epistemology view those who study, say, probability theory fairly negatively, because they keep hearing uninformed and stupid opinions about things they know more about. This is especially bad because it happens instead of gaining from the knowledge of those experts, who are in a great position to help with thorny issues.

Comment author: greim 09 January 2011 08:21:54PM 2 points [-]

need Matrix-style insta-learning. Badly.

Hear, hear! Arguably, resources like Wikipedia, the LW sequences, and SEP (heck even Google and the internet in general) are steps in that general direction.

Comment author: shokwave 05 January 2011 07:36:50AM *  12 points [-]

LessWrong often makes pretty impressive progress in its discussions; I would be thrilled to see that progress made beginning at the edge of a field.

Comment author: Tesseract 05 January 2011 04:28:24PM *  21 points [-]

I sincerely doubt that discussions which began on the leading edge would yield anywhere near as much progress as those which start in the scholarly middle. After all, those problems are on the edge because they're difficult to solve given the intellectual tools we have today. Though Less Wrong is often insightful, I suspect it's the result not of discovering genuinely new tools, but of applying known tools in ways most readers haven't seen them used. For Less Wrong to make progress with a problem that a lot of smart people have been thinking about in detail for a long time requires either that the entire field is so confused that no one has been able to think as clearly about it as we can (probably hubristic), or that we have developed genuinely new intellectual techniques that no one has tried yet.

Comment author: MichaelVassar 05 January 2011 05:56:42PM 7 points [-]

Voted up, but I think that there are LOTS of fields that are that confused. Possibly every field without regular empirical tests (unlike, say, chemistry, engineering, computer science, applied physics, or boxing) is that confused.

Comment author: nazgulnarsil 05 January 2011 06:14:02PM 12 points [-]

I ghostwrite papers for lazy rich undergrads at prestigious institutions, and my experience has been that the soft sciences are a muddle of garbage, with obscenely little worth given the billions of dollars poured into them that could be saving lives.

Comment author: Costanza 05 January 2011 06:30:40PM *  2 points [-]

I, for one, am dying to hear more about this "ghost write [soft science] papers for lazy rich undergrads at prestigious institutions" business. Probably far more than you would be willing to tell, especially if you plan to keep this gig for any length of time.

Comment author: Desrtopa 05 January 2011 06:41:11PM *  9 points [-]

It's not exactly a novel business model. You can read the testimony of a worker in that field here.

Comment author: Vladimir_M 05 January 2011 07:57:18PM *  5 points [-]

Another pseudonymous confession from a worker in that field:

http://www.eacfaculty.org/pchidester/Eng%20102f/Plagiarism/This%20Pen%20for%20Hire.pdf

Comment author: gwern 05 January 2011 06:42:53PM *  5 points [-]

You may enjoy this: http://chronicle.com/article/The-Shadow-Scholar/125329/

EDIT: whoops, Desrtopa beat me to it by a minute. Serves me right for trying to refind that article through my Evernote clippings instead of just googling for it!

Comment author: Costanza 05 January 2011 07:10:29PM *  7 points [-]

Comments by gwern and Desrtopa upvoted, their links read.

How long can this go on before the whole thing* comes crashing down? Those of us who are Americans are ruled mostly by people who were "lazy rich undergrads at prestigious institutions" and then became lazy rich graduate students getting J.D.s or M.B.A.s from prestigious institutions, having been admitted based on their supposed undergraduate accomplishments.

The only thing to hope for, it seems, is that our supposed leaders are still getting cheat sheets from underpaid, unknown smart people.

* My impression is that higher education in the hard sciences in America is still excellent.

Comment author: SilasBarta 05 January 2011 08:27:28PM *  6 points [-]

I've complained before about the same thing. My only answer is "it'll pass eventually, the only question is how much we'll have to suffer in the interim".

Fortunately, these ghost writers basically give us a Rosetta Stone for identifying the lost and valueless fields: anything they consistently produce work on and which can escape detection is such a field.

(Btw, change your last * to a \*.)

Comment author: Will_Sawin 06 January 2011 04:44:16PM 3 points [-]

With the caveat that low-level undergraduate assignment substance levels are not the same as cutting-edge research substance levels, though they are related.

Comment author: SilasBarta 06 January 2011 04:51:46PM 4 points [-]

See my reply to Desrtopa: A non-lost field should have a large enough inferential distance from a layshadow that the layshadow shouldn't be able to show proficiency from a brief perusal of the topic, even at the undergraduate levels.

Comment author: Costanza 06 January 2011 06:01:11PM 3 points [-]

I think you've started to identify an empirical test to sort the wheat from the chaff in universities. I've read your post from June, and agree. My guess would be that the proportions would turn out to show a lot of very expensive (and heavily subsidized) chaff for every unit of worthwhile wheat. This is a big issue, and I think you've called it correctly.

Comment author: Costanza 05 January 2011 09:11:10PM *  1 point [-]

(Btw, change your last * to a *.)

Thanks! Fixed!

[NOTE SilasBarta's point about formatting is right and appreciated -- too meta to warrant a whole new comment.]

Comment author: SilasBarta 05 January 2011 09:14:02PM 4 points [-]

That's not what I said! ;-)

Just so you know: the backslash escapes you out of Markdown, so to produce what you quoted, I put a double-backslash wherever you see a \.

Comment author: Desrtopa 05 January 2011 09:08:20PM *  1 point [-]

I don't think any field in which they can produce an essay without being detected is necessarily valueless. At an undergraduate level, students in hard sciences are often assigned essays that could reasonably be written by a layperson who takes the time to properly search through the available peer reviewed articles. That may be an indictment of how the classes are taught and graded, but it's not a demonstration that the fields themselves are lacking worth.

Comment author: SilasBarta 05 January 2011 09:20:29PM *  3 points [-]

All true, but the shadow authors:

  • don't mention doing work for the hard sciences or reading peer-reviewed articles in such fields
  • are able to learn all they need from a day or so of self-study, showing low inferential distance in the fields and thus low knowledge content
  • mention high involvement in graduate level work, where the implications of their success are much more significant.

Comment author: Desrtopa 05 January 2011 09:38:01PM *  2 points [-]

But they do mention googling sources and doing literature review, and "Ed Dante" says he will write about anything that does not require him to do any math (or animal husbandry). For original research in hard sciences, there's probably not going to be much of anything that doesn't at least require some statistics, but for undergraduate literature review papers, it probably wouldn't be hard to get away with.

Comment author: nazgulnarsil 05 January 2011 08:37:58PM *  3 points [-]

Not much to tell really as I don't do it for a living. I have a lot of free time at my normal job so this just lets me pick up a little extra. It started off with people I knew directly attending those schools and traveled via word of mouth from there.

But the standards at these places really are a joke. I skim the class material, write them in one sitting, and have only had 1 paper get a "B" out of 50 or so.

The part that stops most people is probably the ability to imitate "voice". Read a paper then try to write a few paragraphs in the same style. Ask a neutral judge if they look like they are written by the same author. It's a learned skill and if you don't enjoy writing you'll probably hate it.

Comment author: shokwave 07 January 2011 04:37:30AM 0 points [-]

After all, those problems are on the edge because they're difficult to solve given the intellectual tools we have today

I would subtract the 'intellectual' there. This is true of empirical sciences (cf. ever-larger particle colliders), but not anywhere near as true for softer sciences (cf. schools of thought). While not strictly inventing 'genuinely new' tools, I think LessWrong is definitely one of the first communities where everyone is required to use high-quality power tools instead of whatever old hammer is lying around.

Comment author: John_Maxwell_IV 07 January 2011 04:06:54AM 0 points [-]

After all, those problems are on the edge because they're difficult to solve given the intellectual tools we have today.

That doesn't seem obvious to me. If you were to look at a map of the known world drawn by a member of an ancient civilization, I don't think all the edges of the map would be regions that were particularly hard to traverse. Maybe they'd be the edges just because they were far from the civilization's population centers and no explorer had wandered that far yet.

In a similar way, perhaps the boundaries of our knowledge are what they are just because to reach the boundary and make progress, you first have to master a lot of prerequisite concepts.

Comment author: Nick_Tarleton 18 January 2011 06:34:36PM 3 points [-]

Surprised no one has linked to Don't Revere The Bearer of Good Info yet.

Comment author: lukeprog 13 March 2011 09:20:57PM 1 point [-]

A great post; thanks for pointing me to it!

Comment author: djcb 06 January 2011 05:53:39PM 3 points [-]

I'm not going to argue that scholarship is not tremendously valuable, but in the kind of live discussions that are mentioned here, I'm not sure it helps that much against the kind of 'dark arts' techniques that are employed. In live discussions, someone can always refer to some information or knowledge that the opponent may not have handy ('Historians have established this fact ...'), and only some of that can be counteracted by scholarship.

Comment author: Benquo 09 January 2011 03:29:56PM 4 points [-]

The examples in the post were debate-specific, but I would suggest that the virtue of scholarship is more broadly applicable. Like many parts of rationality, the most important thing is not to use scholarship to win arguments, but to use scholarship to find the right thing to argue for, or more generally, to become correct.

Comment author: timtyler 05 January 2011 10:40:51AM *  12 points [-]

In another debate with Bill Craig, atheist Christopher Hitchens gave this objection: "Who designed the Designer? Don’t you run the risk… of asking 'Well, where does that come from? And where does that come from?' and running into an infinite regress?" But this is an elementary misunderstanding in philosophy of science. Why? Because every successful scientific explanation faces the exact same problem. It’s called the “why regress” because no matter what explanation is given of something, you can always still ask “Why?”

IMO, it is perfectly reasonable to object with: "Who designed the Designer?".

The logic being objected to is: it takes a big complex thing to create another big complex thing. Observing that Darwinian evolution makes big complex things from scratch is the counter-example. The intuition that a complex thing (humans) requires another complex thing to create it (god) is wrong - and it does tend to lead towards an escalator of ever-more-complex creators.

Simplicity creating complexity needs to happen somewhere, to avoid an infinite regress - and if such a principle has to be invoked somewhere, then before the very first god is conjured seems like a good place.

Checking with the "common sense atheism" link, quite a few people are saying similar things in the comments.

Comment author: billswift 05 January 2011 02:09:03PM 9 points [-]

More specifically, it is completely rational to use that argument against theists, because one of their arguments for god is that the world is too complex not to have been designed; so in that circumstance you are just pointing out that their claim pushes the complexity back one step. If the world is so complex that it needs a designer, then so is god.

Comment author: h-H 09 January 2011 12:46:37AM *  2 points [-]

I think tighter definitions are needed here, some theistic traditions consider all existence to be 'god' etc.

Comment author: Vaniver 05 January 2011 02:14:26PM 1 point [-]

If the world is so complex that it needs a designer, then so is god.

Unless God is too complex to be designed :P

Comment author: Liron 05 January 2011 03:58:26PM *  4 points [-]

Or God is in the first Quine-capable level of some designer hierarchy, like a Universal Turing Machine among lesser models of computation.
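For readers unfamiliar with the reference: a quine is a program whose output is exactly its own source code, so "Quine-capable" here means expressive enough to self-reproduce. A minimal Python illustration of the idea (an aside, not tied to any claim in the thread):

```python
# A classic two-line quine: s is a template that, formatted with its
# own repr (%r) and with %% unescaping to a single %, prints the
# complete program, including this definition of s.
s = 's = %r\nprint(s %% s)'
print(s % s)
```

Running it prints the two lines above verbatim. The analogy is that a Universal Turing Machine sits at the first level of computational power where this kind of self-reproduction, and the simulation of other machines, becomes possible.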

Comment author: Polymeron 05 January 2011 03:35:56PM 4 points [-]

Ooh, I like that one. Call it the "sweet spot" theory of intelligent design - things of high enough complexity must be designed, but only if they are under a certain complexity, at which point they must be eternal. (And apparently also personal and omnibenevolent, for some reason).

At any rate, this would all be nice and dandy were it not completely arbitrary... Though if we had an agreed upon measure for complexity and could measure enough relevant objects, we might possibly actually be able to devise a test of sorts for this.

Well, at least for the lower bound. Seeing as we can't actually show that something is eternal, the upper bound can always be pushed upwards, à la the invisible dragon's permeability to flour.

Comment author: TheOtherDave 05 January 2011 04:55:38PM 10 points [-]

(And apparently also personal and omnibenevolent, for some reason).

Well, if it's eternal and sufficiently powerful, a kind of omnibenevolence might follow, insofar as it exerts a selection pressure on the things it feels benevolent towards, which over time will cause them to predominate.

After all, even humans might (given enough time in which to act) cause our environment to be populated solely with things towards which we feel benevolent, simply by wiping out or modifying everything else.

The canonical Christian Hell might also follow from this line of reasoning as the last safe place, where all the refugees from divine selection pressure ended up.

Granted, most Christians would be horrified by this model of divine omnibenevolence; the canonical version presumes an in-principle universal benevolence, not a contingent one.

Comment author: jimrandomh 05 January 2011 04:04:58PM 2 points [-]

There is an upper bound to the complexity of things designed by humans, but why would there be an upper bound on the complexity of things that are designed, in general?

Comment author: ciphergoth 05 January 2011 06:19:22PM 0 points [-]

If God is complex, then I guess he's not real :-)

Comment author: bentarm 06 January 2011 11:33:07AM 1 point [-]

ObNitpick - actually, R is a subset of C, so this doesn't follow.

Comment author: Tiiba 11 January 2011 05:03:39AM 1 point [-]

God = 3.

Comment author: lukeprog 05 January 2011 04:53:05PM *  12 points [-]

timtyler,

Hitchens did not mention complexity or simplicity as you propose. And he did not mention evolution as you propose. If you read the Hitchens quote, you will see he only gave the why-regress objection, which is just as valid against any scientific hypothesis as it is against a theistic one.

There are ways to make the "Who designed the Designer?" objection stick, but Hitchens did not use one of them.

Here, let's play Quick Word Substitution. Let's say a physicist gives a brilliant demonstration of why his theory of quarks does a great job explaining a wide variety of observed subatomic phenomena. Now, Hitchens objects:

"But what explains the quarks? Don’t you run the risk… of asking 'Well, where does that come from? And where does that come from?' and running into an infinite regress?"

Hitchens explicitly gave the why-regress objection that is just as potent against scientific explanations as it is against theistic explanations.

Comment author: PhilGoetz 05 January 2011 06:20:15PM 0 points [-]

The regress down into smaller and smaller particles may be a special case. Can we throw out particle physics, and still say we have science? I think so.

Comment author: lukeprog 05 January 2011 06:51:29PM 6 points [-]

PhilGoetz,

The why-regress is not concerned with ontological reduction into smaller and smaller bits. It is concerned with explanatory reduction into more and more fundamental explanations.

The why-regress is not limited to particle physics. It is just as present at higher-level sciences. When neuroscientists successfully explain certain types of pleasure in terms of the delivery of dopamine and endorphins to certain parts of the brain, it does not defeat this explanation to say, "But what explains this particular way of sending dopamine and endorphins to certain parts of the brain? Don't you run the risk of asking 'Well, where does that come from? And where does that come from?' and running into an infinite regress?"

The point is that all explanations are subject to the why-regress, whether they are theistic or scientific explanations.

Comment author: lukeprog 04 July 2011 09:30:37PM 0 points [-]

Also, see the part of Yudkowsky's Technical Explanation of Technical Explanation that begins with "Beware of checklist thinking..."

Comment author: Polymeron 05 January 2011 11:15:23AM *  2 points [-]

Indeed.

Pointing out that setting a rule leads to infinite regress is not the same as requiring that everything used to explain must itself be explained. In fact, this is a flaw with Intelligent Design, not with its critics.

Now, the theists have a loophole to answer the question ("only physical complex things require a designer" special pleading), but it does not render the question "who designed the designer" - which should be rephrased "why doesn't necessitating a designer lead to infinite regress" - meaningless under the rules of science.

Not the greatest example in this, Luke. Especially jarring since you just recently quoted Maitzen on the "so what" infinite regress argument against Ultimate Purpose.

Comment author: lukeprog 05 January 2011 06:53:22PM *  3 points [-]

Polymeron,

Which part of my example do you disagree with? Do you disagree with my claim that Hitchens' objection concerned the fact that the theistic explanation is subject to the why-regress? Do you disagree with my claim that all scientific explanations are also subject to the why-regress?

The discussion of Maitzen and Craig did not involve a why-regress of causal explanations. I'm not sure why you think that discussion is relevant here.

Comment author: Polymeron 06 January 2011 10:29:17AM *  3 points [-]

lukeprog,

I disagree with the claim that Hitchens' objection invokes the why-regress as it applies to science. It invokes an infinite regression that is a consequence of the Intelligent Design claim (things above a certain threshold necessitate a designer); much like Maitzen invoking an infinite regress that might be entailed by applying the "so what" question to every purpose statement.

To make this clearer: The problem with Intelligent Design is precisely that it demands an explanation exist, and that the explanation be a designer. Hitchens' objection is in-line with us not requiring an explanation for the fundamentals.

Science is not subject to the same infinite regress, because science does not set a rule that everything must have an explanation, and certainly not an explanation of a certain kind. Science may define a certain class of phenomena as having a certain explanation, but it never requires that the explanation itself be explained in the same way. Hitchens points out this flaw as a logical consequence of the ID claim.

Comment author: [deleted] 05 January 2011 10:42:22PM 2 points [-]

This is why, in more than 1000 posts on my own blog, I've said almost nothing that is original. Most of my posts instead summarize what other experts have said, in an effort to bring myself and my readers up to the level of the current debate on a subject before we try to make new contributions to it.

Foundational knowledge is more vital in the hard sciences than in psychology, which immediately confronts you with questions about what the foundation is. You have to make at least a tentative decision about which framework you're going to get up to speed on. This requires doing some original (not necessarily novel) thinking from the start, unless the (unrealistic) plan is to thoroughly learn each of the competing frameworks. Also, in the softer fields, it's possible to contribute precisely on account of one's ignorance of what went before, too much knowledge of existing theories' assumptions sometimes standing in the way of real progress.

Comment author: David_Gerard 05 January 2011 11:24:22PM *  5 points [-]

Also, in the softer fields, it's possible to contribute precisely on account of one's ignorance of what went before, too much knowledge of existing theories' assumptions sometimes standing in the way of real progress.

You haven't noted the most horrible thing about this: that the fields are still valuable, even still necessary. Us being no good at them doesn't change this.

cf. medicine before germ theory and cell theory.

cf. postmodernism, which is notoriously BS-ridden, but anyone who aspires to write good fiction needs a working knowledge of postmodernist techniques, whether they call them that or not.

Comment author: bentarm 06 January 2011 01:03:14AM 4 points [-]

anyone who aspires to write good fiction needs a working knowledge of postmodernist techniques, whether they call them that or not

So Poe was an instinctive postmodernist?

Comment author: icebrand 08 January 2011 05:32:11AM *  3 points [-]

On the topic of scholarship, I'd like to mention that if one takes the notion of surviving cryopreservation seriously, it's probably a good idea to read up on cryobiology. Have at least a basic understanding of what's going to happen to your cells when your time comes. There is a rich and complex field behind it which very few individuals have much grasp on.

If the bug bites you, you may even be able to go into the field and make some breakthroughs. Huge advances have been made in recent decades by a very small number of cryonics-motivated scientists, suggesting that there is probably a lot of low-hanging fruit remaining. Even if there isn't, it seems like relatively small amounts of incremental progress in this field could have a large total utility if cryonics somehow catches on and becomes widespread in the near future.

Note that Aschwin de Wolf has published a good deal of high quality technical information on his blog Depressed Metabolism, which is a good starting point. Leading cryobiologist Brian Wowk has also been answering all kinds of questions over on the Immortality Institute Cryonics Forum. Many of his publications are to be found here.

Comment author: diegocaleiro 05 January 2011 09:26:31AM 3 points [-]

Scholarship: Thumbs up.

Classic Scholarship: Thumbs down http://brainstormers.wordpress.com/2010/03/03/sobre-ler-os-classicos/

Just in case someone forgot all the Teacher's Password, Cached Thoughts, and related posts from which I got the link to the above text.

Comment author: Sewing-Machine 05 January 2011 09:11:53PM 11 points [-]

Reading the masters (the little I've done of it) has taught me the following things:

  1. Almost no ideas are good
  2. Almost no ideas are new

Plato's ideas were, at least, new. And (per 2) they're the most influential ideas ever to be put on paper. There's value in seeing that for yourself.

Comment author: David_Gerard 05 January 2011 11:31:51PM *  7 points [-]
  1. Almost no ideas are good
  2. Almost no ideas are new

This counts as vast insight. When looking at the output of lots of ridiculously smart people, you discover that most intelligence is used to justify stupidity, and the most important thing about most new ideas is that they are wrong.

Comment author: Jayson_Virissimo 06 January 2011 12:30:11AM *  3 points [-]

Plato's ideas were, at least, new.

Much of Plato's thought comes from Pythagoras, Parmenides, Heraclitus, and Socrates. If I were to pick an ancient philosopher that didn't have obvious intellectual antecedents, I would choose Thales.

Comment author: PhilGoetz 05 January 2011 06:24:13PM 7 points [-]

That post says, "You might find it more enjoyable to read Plato rather than more modern work just as someone else might prefer to have their philosophical arguments interspersed in Harry Potter slash," and was posted February 25, 2010. The first chapter of Harry Potter and the Methods of Rationality was posted Feb. 28, 2010. Coincidence?

Comment author: Kaj_Sotala 05 January 2011 08:28:11PM *  7 points [-]

in Harry Potter slash

Upvoted because it gives us hope that we'll see those Harry/Draco scenes in MoR after all.

Comment author: Will_Newsome 06 January 2011 02:32:57AM 1 point [-]

Eliezer actually did mention the allegedly preposterous idea of getting some kind of wisdom (philosophical? ethical?) from Harry Potter in a comment reply back in the OB days. I'm too busy/lazy to find a link though.

Comment author: Costanza 06 January 2011 02:37:26AM 3 points [-]

Is this it?

Comment author: Benquo 05 January 2011 06:18:05PM *  7 points [-]

On the whole I'd agree that most of the time it's better to focus on high-quality up-to-date summaries/textbooks than high-quality classical sources.

But I'd suggest a few caveats:

1) It is much easier to find high-quality classics than it is to find high-quality contemporary stuff. Everyone knows who Darwin was; I don't even know how to find a good biology textbook, and I personally got a lot more out of reading and thinking about Darwin than from reading my high school biology textbook. This is a consideration for students and autodidacts, less so for smart and well-informed teachers who know how to find the good stuff.

2) Many summarizers are simply not as smart as the greats, and don't pick up on a lot of good stuff the classics contain. This is less important for a survey that has only a small amount of time to spend on each topic, but if you want deep understanding of a discipline, you will sometimes have to go beyond the available summaries.

3) The ancients are the closest we have to space aliens; people who live in a genuinely different world with different preconceptions.

Comment author: Vladimir_M 05 January 2011 07:42:04PM *  15 points [-]

diegocaleiro:

Classic Scholarship: Thumbs down http://brainstormers.wordpress.com/2010/03/03/sobre-ler-os-classicos/

That article is very poorly argued. Your argument is more or less correct in those fields where the progress of scholarship has a monotonic upward trend, in the sense that knowledge is accumulated without loss, and all existing insights are continuously improved. This is true for e.g. Newtonian physics, and indeed, nobody would ever read Newton's original works instead of a modern textbook except for historical interest.

What you fail to understand, however, is that in many fields there is no such monotonic upward trend. This means that in the old classics you'll often find insight that has been neglected and forgotten, and you'll also find ideas that have fallen out of fashion and ideological favor, and been replaced with less accurate (and sometimes outright delusional) ones. Almost invariably, these insights and ideas are absent from modern texts, even those dealing specifically with the old authors, and there is often nothing comparable being written nowadays that could open your eyes to the errors of the modern consensus.

As a rule of thumb, the softer and more ideologically charged a field is, the more such cases you'll find where the modern range of mainstream opinion has in fact regressed away from reality relative to old authors. In economics, for example, you'll find a lot of important insight in The Wealth of Nations that modern economics textbooks, and even modern treatments of Adam Smith, are silent about.

Even in hard sciences, when it comes to questions that raise deeper philosophical issues, revisiting classics can be a fruitful source of ideas. For example, Julian Barbour developed his ideas by studying the history of mechanics and relativity, and Arthur Ekert claims that the idea of quantum cryptography first occurred to him due to an insight he gathered from the classic EPR paper. (Ekert writes, "I guess I was lucky to read it in this particular way. The rest was just about rephrasing the subject in cryptographic terms.")

Another point you're neglecting is that truly good writers are extremely rare. Many classic works have remained in print after so many years exactly because people who wrote them were such good writers that virtually none of the modern authors working in the same field are able to produce anything as readable.

Comment author: David_Gerard 05 January 2011 11:36:24PM *  4 points [-]

Yes. Anyone who thinks Chaucer and Shakespeare are valueless for being old has misunderstood the field. As long as humans are savannah apes, they will find their works of value. We still read Chaucer and Shakespeare not because they are antecedents, but because they're good now.

Comment author: PhilGoetz 06 January 2011 12:52:40AM *  6 points [-]

Are Shakespeare's comedies - containing mainly sexual innuendo, mistaken identities, abuse, and puns, and using the same extremely improbable plot devices repeatedly - really great works of art? They're good, but are they really first-tier?

Do any of Shakespeare's tragedies contain insights into human nature that are as important or as difficult for you to discover on your own as those you would find in a Jhumpa Lahiri novel? I think not. (Honestly, is King Lear deep? No; just dramatic and well-written. Any idiot knows by Act II what will happen.)

We still read Shakespeare today partly because Shakespeare was great when he wrote; but partly because Shakespeare was a master of individual phrases and of style, and literature departments today are dominated by postmodernists who believe there is no such thing as substance, and therefore style is all that matters. (Or perhaps the simpler explanation is that people who make and critique films tend to be more impressed by visual effects than by content; and people who make and critique books tend to be more impressed by verbal effects than by content.)

(Don Quixote, though, is golden. :)

Comment author: Jack 06 January 2011 03:56:33AM 9 points [-]

We still read Shakespeare today partly because Shakespeare was great when he wrote; but partly because Shakespeare was a master of individual phrases and of style, and literature departments today are dominated by postmodernists who believe there is no such thing as substance, and therefore style is all that matters.

Shakespeare's centrality in English Lit curricula comes from its historic place in the Western canon. Post-modernists are distinguished in particular by their opposition to any kind of canon.

Comment author: PhilGoetz 06 January 2011 05:34:13AM 3 points [-]

Good point!

And yet, I know English lit people who simultaneously love postmodernism and Shakespeare. There is a pervasive emphasis on style over content, which I have been attributing to postmodernism; but maybe I oversimplify.

Comment author: Jack 06 January 2011 06:19:45AM *  5 points [-]

Postmodernism isn't really characterized by a position on which works should be read so much as how they should be read. While postmodern thinking opposes canons it also supports reading culturally relevant texts with a critical/subversive eye. Shakespeare is rich with cultural context while also being complex and ambiguous enough to provide a space for lit critics to play with meanings and interpretations and get interesting results. Hamlet, which is far and away Billy Shake's best work, is particularly conducive to this. They do the same thing with Chaucer, actually, particularly the Wife of Bath's tale. I don't think it is about style over substance but about the freedom to play with cultural meaning and interpretation. You can't say Hamlet is short on substance, anyway.

But the extent to which authors like Chaucer and Shakespeare have become less central in lit departments is almost entirely due to this crowd- it's archetypal postmodernism which gives genre films and television the same importance as the historical Western canon.

Rosencrantz and Guildenstern are Dead probably boosts the Bard's popularity in the pro-postmodern scene.

Comment author: Costanza 06 January 2011 01:41:13AM 7 points [-]

Another reason to be familiar with the canonical works in a culture is precisely because they're canonical. It's like a common currency. By now, English-speaking culture is so rooted in Shakespeare that you'd be missing out if you didn't recognize the references.

Any idiot knows by Act II what will happen.

We do now! But apparently, the original Elizabethan audiences went in expecting a happy ending -- and were shocked when it turned out to be a tragedy. Tricky fellow, that Willy S.

Comment author: David_Gerard 06 January 2011 10:06:17AM *  2 points [-]

Another reason to be familiar with the canonical works in a culture is precisely because they're canonical. It's like a common currency. By now, English-speaking culture is so rooted in Shakespeare that you'd be missing out if you didn't recognize the references.

Yes. Same reason some familiarity with the King James Version of the Bible is culturally useful.

Comment author: ciphergoth 06 January 2011 03:44:36PM 2 points [-]

Comment author: PhilGoetz 06 January 2011 04:01:42AM 0 points [-]

I didn't mean they would know how it would end - I meant they would know that Lear used shallow indicators to judge character, and Cordelia would turn out to be the faithful daughter.

Comment author: Costanza 06 January 2011 04:19:44AM *  2 points [-]

It looks like audiences since before Shakespeare's time would have gone in knowing the outline of the story. But I'm mostly replying to confess - the same Wikipedia article that I myself quoted makes it clear that there was no really happy ending to King Lear until 1681. I wasn't paying close enough attention.

Comment author: lukeprog 05 January 2011 09:50:23AM 4 points [-]

Completely agreed. I wrote very much the same thing in How to Do Philosophy Better.

Comment author: orthonormal 05 January 2011 06:07:52PM *  9 points [-]

One counterpoint:

In The Failures of Eld Science, Eliezer's character points out that most scientists were never trained to recognize and navigate a genuine scientific controversy; instead, we hand our undergraduates the answers on a silver platter and have them do textbook problems. He proposes that if scientists had first had to think through and defeat phlogiston themselves, they would have been less stymied by the interpretation of quantum mechanics.

Similarly, I think I'm better off for having encountered some of the grand old systems of philosophy in their earliest and most viral forms, without all the subsequent criticisms and rebuttals attached. Of course I ran the risk of getting entrapped permanently in Plato or Nietzsche, but I learned things about rationality and about myself this way, and I don't think I would have learned those had I started by reading a modern digest of one or the other (with all the mistakes pointed out). (Of course, I have since read modern critiques and profited from them.)

On the other hand, some Great Books schools like to teach higher mathematics by having the students read Euclid, and I agree that's insane and not worth all the extra effort.

Comment author: mwengler 06 January 2011 08:09:53PM 6 points [-]

Interesting about pushing students through phlogiston. Without it being required of physics majors, I took "philosophy of science" as an undergrad philosophy minor and read, among others, Popper. It has stuck with me like one of those viruses; let me know if I have much to gain by finally dropping some of what I think I learned from him. I personally loved looking at all science afterwards and listening in all discussions and thinking: "Is this a difference that makes a difference? Is there a testable difference here, or can I just skip it?"

In a graduate course on superconducting electronics I once taught a wildly simple theory of electron pairing treating the electron wave functions as 1-d sine waves in the metal. I told the students: "the theory I am teaching you is wrong, but it illustrates many of the true features of the superconducting wave function. If you don't understand why it is wrong, you will be better off thinking this than not thinking this, while if you get to the point where you see why it is wrong, you will really understand superconductivity pretty well."

It never occurred to me to try to insert Popper into any of the classes I was teaching. I was not a very imaginative professor.

By the way, on your name orthonormal, on what basis did you choose it? :)

Comment author: Benquo 05 January 2011 06:21:09PM *  2 points [-]

On the Euclid point, it depends on where you're starting from and what you're trying to do. I've seen people who thought they hated math, converted by going through some of Euclid. The geometrical method of exposition is beautiful in itself, and very different from the analytical approach most modern math follows. If you're already a math enthusiast, it would not benefit you quite as much.

Comment author: orthonormal 05 January 2011 06:25:34PM 5 points [-]

But there are more readable modern textbooks which use the geometrical method of exposition; I just taught out of one last semester.

Comment author: Benquo 05 January 2011 06:59:03PM 4 points [-]

I envy your students.

Comment author: MichaelVassar 05 January 2011 06:00:20PM 0 points [-]

Wow. I disagree exactly.

Comment author: JenniferRM 05 January 2011 07:25:48PM 13 points [-]

I think some justification would be helpful for your readers, especially those who don't know about your relatively high personal efficacy :-)

You asserted something similar and with more original content right next door and I think your implicit justification was spelled out a while ago in the article For progress to be by accumulation and not by random walk, read great books. I'm curious if these links capture the core justification well, or is more necessary to derive your conclusions?

It feels like many of the details deployed to justify the advice "read the classics" and many of the details deployed to justify the advice "avoid the classics" are basically compatible. Some more nuanced theory should be available that is consistent with the totality of the facts, like "In cases X and Y read the classics, and in cases N and M avoid them." Perhaps the real disagreement is about the nature of the readership and which case better describes the majority of them... or the most important among them?

For example, I think maybe people in their late 20's or older who were clicky while young and are already polymaths might be helped by reading the classics in domains where they want to do creative work, while most 17-year-olds would do better to get summaries of the main issues and spend some time arguing with peers about them. For example, I've heard Mandelbrot had a knack for digging up neglected gems and resurrecting citation trees with 90-year gaps where all the authors in the tree except for him were dead. This seems like a useful technique for boosting a career as a specialized intellectual, but I wouldn't suggest the trick to a 12-year-old.

Comment author: MichaelVassar 06 January 2011 08:57:36AM 2 points [-]

I think those links are about right, as is the analysis. Thanks.

Comment author: Desrtopa 05 January 2011 06:32:52PM 4 points [-]

Could you elaborate?

Comment author: jsalvatier 05 January 2011 04:48:00PM *  1 point [-]

Indeed. If you're asking students to read the initial source material, there's a 90% chance you're doing it wrong.

Comment author: PhilGoetz 05 January 2011 06:05:11PM *  1 point [-]

In another debate with Bill Craig, atheist Christopher Hitchens gave this objection: "Who designed the Designer? Don’t you run the risk… of asking 'Well, where does that come from? And where does that come from?' and running into an infinite regress?" But this is an elementary misunderstanding in philosophy of science.

I agree that Hitchens should have looked to see what answers theists give to that question. (And he might have; since theists usually respond instead by saying that God is eternal, meaning outside of time and cause and effect, and therefore in no need of having a cause.) But I disagree that there are any more substantive objections to theism. "Who designed the designer?" is the best single knockdown argument against theism.

The question "where did God come from?" is not qualitatively the same as the question "how do you know your observation that a dropped bowling ball falls is correct?" In science, the answer to every "why" is something that is known with more certainty. Entropy decreases as you trace the epistemological/causal chain back through its causes. Theism, by contrast, boils down to the claim that entropy always increases as you trace back the causal chain. A being X must have been created by some being Y with greater entropy (complexity). The scientific epistemological chain converges; the theistic one diverges.

ADDED: This is basically the same as Tim Tyler's comment below.

Comment author: lukeprog 05 January 2011 07:01:43PM 4 points [-]

PhilGoetz,

And I'll give the same reply as I gave to Tim Tyler. :)

Hitchens did not mention entropy or complexity. He mentioned exactly and only the why-regress, the exact same why-regress that all scientific hypotheses are subject to. Perhaps the objection you raise to theism would have been good for Hitchens to give, but it is not the objection Hitchens gave.

It looks to me like people are trying to make Hitchens look good by putting smarter words in his mouth than the ones he actually spoke.

Comment author: Kaj_Sotala 05 January 2011 08:44:02PM *  5 points [-]

It looks to me like people are trying to make Hitchens look good by putting smarter words in his mouth than the ones he actually spoke.

I think it's more the principle of charity. Unless the other person has been mentally designated as an enemy, people tend to look for the most charitable plausible interpretation of his words. People are pointing out that what you gave as an example is a poor example to give, because your wording doesn't do enough to exclude the most charitable interpretation of Hitchens' words from the set of plausible interpretations. Therefore people will, upon hearing your example, automatically assume that this is actually what Hitchens was trying to say.

(I've been known to take this even further. Sometimes I'll point an article to a friend, have the friend ruthlessly criticize the article, and then I'll go "oh, of course the thing that the author is actually saying is pretty dreadful, but why would you care about that? If you read it as being about [this semi-related insightful thing he could have been saying instead if he'd thought about it a bit more], then it's a great article!")

Comment author: lukeprog 05 January 2011 08:50:52PM *  5 points [-]

Kaj_Sotala,

If Hitchens meant what people are charitably attributing to him, why didn't he make those points in the following rebuttal periods or during the Q&A? Craig gave the exact rebuttal that I just gave, so if Hitchens had intended to make a point about complexity or entropy rather than the point about infinite regress he explicitly made, he had plenty of opportunity to do so.

You are welcome to say that there are interesting objections to theism related to the question "Who designed the designer?" What confuses me is when people say I gave a bad example of non-scholarship because I represented Hitchens for what he actually said, rather than for what he did not say, not even when he had an opportunity to respond to Craig's rebuttal.

The argument people here are attributing to Hitchens is not the argument he gave. Hitchens gave an objection concerning an infinite regress of explanations. The argument being attributed to Hitchens is a different argument that was given in one form by Richard Dawkins as The Ultimate Boeing 747 Gambit. Dawkins' argument is unfortunately vague, though it has been reformulated with more precision (for example, Kolmogorov complexity) over here.
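(A rough illustration of that complexity-based reformulation, for the curious: Kolmogorov complexity itself is uncomputable, but compressed size gives a crude, computable upper-bound proxy for it. The sketch below is only my own toy demonstration of that proxy, not anything Dawkins or the linked reformulation actually uses; the alphabet and string lengths are arbitrary choices.)

```python
import random
import zlib

def compressed_size(s: str) -> int:
    # zlib-compressed length: a crude, computable upper-bound proxy
    # for the (uncomputable) Kolmogorov complexity of s.
    return len(zlib.compress(s.encode("utf-8"), 9))

random.seed(0)
regular = "ab" * 500  # highly patterned, 1000 characters
scrambled = "".join(random.choice("abcdefghij") for _ in range(1000))

# The patterned string admits a much shorter description, so its
# complexity proxy is far smaller even though both strings are the
# same length.
print(compressed_size(regular) < compressed_size(scrambled))  # True
```

The intuition this makes concrete: "simple" and "complex" can be given a precise, measurable sense, which is what the more careful versions of the "Ultimate Boeing 747" argument rely on.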

Comment author: Kaj_Sotala 05 January 2011 10:05:53PM *  6 points [-]

I didn't suggest that he meant that, I suggested that what you said didn't do enough to exclude it from the class of reasonable interpretations of what he might have meant.

Suppose someone says to me, like you did, "there's this guy Hitchens, he said the following: "Who designed the Designer? Don’t you run the risk… of asking 'Well, where does that come from? And where does that come from?' and running into an infinite regress?'". The very first thing that comes to mind, and which came to my mind even before I'd read the next sentence, is "oh, I've used that argument myself, when some religious person was telling me 'but the Big Bang had to come from somewhere', that must be what Hitchens meant". That's the default interpretation that will come to the mind of anyone who's willing to give Hitchens the slightest benefit of doubt.

Yes, if people click on the links you provided they will see that the interpretation is wrong, but most people aren't going to do that. And people shouldn't need to click on a link to see that the most plausible-seeming interpretation of what they've read is, in fact, incorrect. If it's important for conveying your message correctly, then you should state it outright. If you give an example about a person's non-scholarship and people start saying "oh, but that doesn't need to be an example of non-scholarship", then it's a much worse example than one that doesn't prompt that response.

Comment author: Sly 06 January 2011 03:51:15AM 2 points [-]

Another thing to think about was that Hitchens was in a debate. The Christians in the audience that he is trying to convince will not be charitable.

Comment author: PhilGoetz 06 January 2011 08:45:39PM *  1 point [-]

You are technically correct. Your initial remarks misled me, for the reasons given by Kaj Sotala below. But it's a good example, if I read it carefully and literally, so don't take that as a criticism.

Comment author: lukeprog 06 January 2011 09:12:17PM 1 point [-]

Thanks.

Comment author: insigniff 05 July 2013 09:43:30AM 1 point [-]

Whether or not the first cause argument should be a concern in science, I think Bertrand Russell summarized its problems quite well:

"Perhaps the simplest and easiest to understand is the argument of the First Cause. It is maintained that everything we see in this world has a cause, and as you go back in the chain of causes further and further you must come to a First Cause, and to that First Cause you give the name of God. That argument, I suppose, does not carry very much weight nowadays, because, in the first place, cause is not quite what it used to be. The philosophers and the men of science have got going on cause, and it has not anything like the vitality that it used to have; but apart from that, you can see that the argument that there must be a First Cause is one that cannot have any validity. I may say that when I was a young man, and was debating these questions very seriously in my mind, I for a long time accepted the argument of the First Cause, until one day, at the age of eighteen, I read John Stuart Mill's Autobiography, and I there found this sentence: "My father taught me that the question, Who made me? cannot be answered, since it immediately suggests the further question, Who made God?" That very simple sentence showed me, as I still think, the fallacy in the argument of the First Cause. If everything must have a cause, then God must have a cause. If there can be anything without a cause, it may just as well be the world as God, so that there cannot be any validity in that argument. It is exactly of the same nature as the Hindu's view, that the world rested upon an elephant, and the elephant rested upon a tortoise; and when they said, "How about the tortoise?" the Indian said, "Suppose we change the subject." The argument is really no better than that. There is no reason why the world could not have come into being without a cause; nor, on the other hand, is there any reason why it should not have always existed. There is no reason to suppose that the world had a beginning at all. The idea that things must have a beginning is really due to the poverty of our imagination. 
Therefore, perhaps, I need not waste any more time upon the argument about the First Cause." http://www.positiveatheism.org/hist/russell0.htm

Comment author: [deleted] 05 January 2011 10:19:16PM *  0 points [-]

I think the logical incoherence of theism is a stronger knock-down argument. The most devastating criticism of theism relates not to what caused god but to what causes his actions. God is conceived as an all-powerful will, subjecting him to the same simple argument that disposes of libertarian "free will": either God's conduct is random or it is determined. But conceiving of god as something other than a will makes god otiose. If god acts randomly, the description is indistinguishable from the universe simply being random; if god is determined, that is indistinguishable from the universe simply being determined.

"Who created the creator?" is a good argument, but it isn't decisive. To say god must be more complex than the universe 1) is denied by theists, who call god uniquely simple; and 2) leaves the theist with one (weak) counterargument, inasmuch as it means treating god as a mechanism rather than something that is, well, supernatural. The theist says the causal requirements that govern matter don't apply, and that we're unwarranted in generalizing our observations about the material world to the characteristics of god.

Ultimately, you can't avoid getting down to the really basic question: what is this god? If he's not a deterministic entity, what's the alternative to his behavior being random? [Actually, I'm not sure raw randomness is coherent either, but you don't have to take the argument that far.]

Comment author: lukeprog 05 January 2011 07:27:39AM 1 point [-]

Why did my post appear correctly in the editor, but when posted to the site, lose the spaces just before an apparently random selection of my hyperlinks?

Comment author: Eliezer_Yudkowsky 05 January 2011 09:20:41AM 6 points [-]

This happens when <div> tags are included, anywhere. I've deleted them.

Comment author: [deleted] 15 January 2011 06:41:20AM 2 points [-]

I will vouch for the fact that div-tags are the work of the devil. They impede my formatting on Blogger too. Very evil. I destroy all such tags and then I do my victory laugh.

Comment author: lukeprog 05 January 2011 09:48:35AM 0 points [-]

Sweet! Thanks.

I certainly didn't add div tags on purpose, so I'll be sure to watch out for them in the future.

Comment author: Clippy 05 January 2011 03:52:41PM 1 point [-]

One way to avoid formatting problems is to write your article in an external word processor (such as Microsoft Word) and then copy/paste it into the Website:LessWrong.com article entry field. (You cannot add the summary break this way, so that must be done afterward.)
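(Pasting from a word processor is itself a common source of stray `<div>` tags. A minimal sketch of a cleanup step you could run on the pasted HTML before submitting it — this assumes the offending markup is literal `<div>` tags, as Eliezer describes above, and the function name is just illustrative:

```python
import re

def strip_div_tags(html: str) -> str:
    """Remove <div ...> and </div> tags, a common artifact of
    pasting from word processors, while leaving the content
    inside them intact."""
    return re.sub(r'</?div[^>]*>', '', html)

sample = '<div class="WordSection1"><p>Hello, <a href="#">world</a>.</p></div>'
print(strip_div_tags(sample))  # <p>Hello, <a href="#">world</a>.</p>
```

Note this deliberately keeps the divs' inner content, so only the wrapper tags — the part that triggers the spacing bug — are lost.)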

Comment author: Randaly 05 January 2011 08:26:03AM *  0 points [-]

This seems to always occur around hyperlinks for me- and some other formatting too (e.g. "I've said almostnothing that is original.") I don't know why; I usually manually input an extra space.

Comment author: diegocaleiro 05 January 2011 09:21:07AM 1 point [-]

That happened to me as well.........

Lots of extra effort...... isn't this meta-fixable?

Comment author: Mass_Driver 05 January 2011 08:15:34AM 0 points [-]

I don't know. It's a great post, though: go fix it!

Comment author: lukeprog 05 January 2011 08:27:16AM 1 point [-]

For now, I simply added two spaces where Less Wrong wanted to collapse my single space into nothing. Hopefully someone will be able to figure out a more elegant solution.

I'm on Snow Leopard, Google Chrome.

Comment author: Dreaded_Anomaly 05 January 2011 08:39:46AM 0 points [-]

You could try using the HTML character entity reference for a non-breaking space, &*nbsp; (remove the asterisk). It's not really more elegant, but it will look nicer.

Comment author: Dr_Manhattan 05 January 2011 05:28:32PM 1 point [-]

Evolutionary psychology, heuristics and biases, social psychology, probability theory, decision theory. But these cannot be the only fields you study...

Charlie Munger speaks of a "latticework of mental models". A good mix, though somewhat skewed to investing, is found here

http://www.focusinvestor.com/FocusSeriesPart3.pdf

Comment author: recumbent 21 October 2011 04:01:53AM 0 points [-]

This discussion has been largely philosophy-based, which is understandable given the site's focus. But are people interested in knowing something about many different fields? Below is my attempt at different levels of liberal arts education. I have been working on either taking a class or reading a textbook in each of these areas, preferably a textbook for people who will be majoring in the subject (I have 3 more to do). Then, if I can retain it, I will know the basic vocabulary to communicate with people in almost any field, and can also look for common themes and fruitful areas at the margins of fields. The basic liberal arts education is one course in each of the four fundamental areas; the super liberal arts education is a course in each of the fundamental fields; and the ultra liberal arts education is a course in each of the applied fields as well. I did not include the ludicrous liberal arts education, which would be a course in each of the minor fields — ones for which there would be a department somewhere.

Levels of Liberal Arts

Fundamental Areas

  1. Arts
  2. Humanities
  3. Social sciences
  4. Natural sciences

Fundamental fields: “Super Liberal Arts Education”

  1. Visual arts
  2. Performing arts
  3. Philosophy
  4. History
  5. Literature
  6. Writing
  7. Another language
  8. Religion
  9. Geography
  10. Anthropology
  11. Sociology
  12. Political Science
  13. Psychology
  14. Economics
  15. Biology
  16. Chemistry
  17. Physics
  18. Math

Trade/applied fields (along with the Fundamental fields, these compose the “Ultra Liberal Arts Education”)

  1. Law
  2. Journalism
  3. Business
  4. Finance
  5. Medicine
  6. Geology (might cover oceanography)
  7. Atmospheric science
  8. Chemical Engineering
  9. Civil Engineering (covers environmental engineering)
  10. Computer science
  11. Electrical Engineering
  12. Mechanical Engineering
  13. Environmental science/studies