Full disclosures below.*
I agree with much of Glymour's manifesto, but I think the passage quoted would have been better left on the cutting-room floor. One reason is given in the critique you link: lots of philosophy gets grants and citations and employment in diverse areas around the academy and elsewhere. Not all of it gets noticed in science or furthers a scientific project, even broadly construed. For example, John Hawthorne just won a multi-million dollar grant to do work in epistemology of religion, and a couple of years ago, Alfred Mele won a multi-million dollar grant to do more work on free will. I doubt that Glymour thinks either of these projects has the virtues of the work of his CMU colleagues. But by the "grant-winning" standard, administrators should love this sort of philosophy. By a sales or readership standard, administrators ought to be encouraging more pop-culture and philosophy schlock.
Another reason is given by Glymour in the same manifesto:
A real use of philosophy departments is to provide shelter for such thinkers [who are, at least initially, outsiders to the science of the day, people who will take up questions that may have been made invisible to scientists because of disciplinary blinkers], and in the long run they may be the salvation of philosophy as an academic discipline.
So, a good use for philosophy departments is to shelter iconoclastic thinkers who are not going to be either understood or appreciated by contemporary scientists. How are such people going to be successful grant-winners? I can see how they might successfully publish within philosophy, given a certain let-every-flower-bloom attitude in philosophy. And I can see how some philosophers might end up convincing some scientists to take their work seriously enough to fund it ... eventually. But surely, some of Glymour's iconoclasts will be missed or ignored in the grant-giving process. Better, I think, to have some places for people to think whatever they want to think and be supported in that thinking so that they do not have to panic about meeting the basic necessities of life. If that means having to put up with literary criticism, then so be it.
The manifesto has a nice paragraph where Glymour lists the contributions of many mathematical philosophers. This might be relevant to UDT:
Philosophers and statisticians alike want to posit probabilities over sentences, but how would that work with a language adequate to science and mathematics, say first order logic? Haim Gaifman told us, and worked out the implications for what is and what is not learnable.
Yup. This is why I was so surprised in January 2011 that Less Wrong had never before mentioned formal philosophy, which is the branch of philosophy most relevant to the open research problems of Friendly AI. See, for example, Self-Reference and the Acyclicity of Rational Choice or Reasoning with Bounded Resources and Assigning Probabilities to Arithmetical Statements.
Thanks for the links. I just read those two papers and they don't seem to be saying anything new to me :-(
In your linked piece, you were talking about formal epistemology. Here you say "formal philosophy." Is that a typo, or do you think that formal epistemology exhausts formal philosophy? (I would hope not the latter, since lots of formal work gets done in philosophy outside epistemology!)
This is pretty much unrelated, but do you think maybe you could write a short post about the relevance of algorithmic probability for human rationality? There's this really common error 'round these parts where people say a hypothesis (e.g. God, psi, etc.) is a priori unlikely because it is a "complex" hypothesis according to the universal prior. Obviously the "universal prior" says no such thing; people are just taking whatever cached category of hypotheses they think are more probable for other, unmentioned reasons and then labeling that category "simple", which might have to do with coding theory but has nothing to do with algorithmic probability. Considering that this appeal to simplicity is one of the most common attempted argument stoppers, it might benefit the local sanity waterline to discourage this error. Fewer "priors", more evidence.
ETA: I feel obliged to say that though algorithmic probability isn't that useful for describing humans' epistemic states, it's very useful for talking about FAI ideas; it's basically a tool for transforming indexical information about observations into logical information about programs (and also proofs, thanks to the Curry-Howard isomorphism), which is pretty cool, among other reasons.
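To make the machine-relativity point above concrete, here is a minimal sketch, assuming a toy prefix-free machine that I invented purely for illustration (a real universal prior would use a universal machine, which cannot be enumerated like this). The point it shows: each halting program p contributes 2^-len(p) of prior mass to its output, so "simplicity" is a fact about a particular machine's encoding, not a free-floating property of hypotheses.

```python
from fractions import Fraction

def toy_universal_prior(max_len=8):
    """Solomonoff-style weighting over a toy machine (illustrative only).

    A "program" is a bitstring read left to right; this toy machine halts
    at the first 0 and outputs the number of 1s read so far. Its halting
    programs are exactly 1^k 0, which form a prefix-free set, so the
    weights 2^-len(p) sum to (at most) 1, as a semimeasure should.
    """
    mass = {}
    for k in range(max_len):
        program = "1" * k + "0"
        output = k  # number of 1-bits before the terminating 0
        mass[output] = mass.get(output, Fraction(0)) + Fraction(1, 2 ** len(program))
    return mass

m = toy_universal_prior()
# Outputs produced by shorter programs get exponentially more prior mass:
# m[0] == 1/2, m[3] == 1/16, so m[0] > m[3] under THIS machine's encoding.
assert m[0] == Fraction(1, 2) and m[3] == Fraction(1, 16)
```

Relative to a different machine the complexity ordering can look very different, which is exactly why calling God or psi "complex" requires actually exhibiting an encoding rather than gesturing at the universal prior.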
As Luke recently pointed out,
As Bellman (1961) said, "the very construction of a precise mathematical statement of a verbal problem is itself a problem of major difficulty."
But verbal restatements of verbal problems often, even typically, precede and facilitate the construction of this golden mathematical trophy. These portions of philosophy, which are the bulk of it, might easily fail to impress computer scientists. But without them, progress in formal philosophy would be slower.
For those interested, the CMU philosophy department organizes an annual summer school in logic and formal epistemology.
It's interesting that some of the humanities -- and particularly some areas of philosophy -- are constantly defending their research programs, or the value of the discipline as a whole. Apparently, folks in other segments of academia want to see something useful. But it's not all bad: in some cases the dialogue can happen -- for example, in formal epistemology, the attempt to combine Bayesianism with conceptual analysis by formalizing concepts like 'coherence'.
the measure of value for philosophy departments is whether they are taken seriously by computer scientists
I would roughly generalize that to "scientists". There is a need for people armed with both the tools of philosophy and the tools of science to discuss the meaning of many discoveries of the 20th and 21st centuries: usually scientists are too narrowly focused, and philosophers are not sufficiently well prepared. Nice to know that there are some exceptions (trusting you on this; I still have to go through the links).
My upcoming book, 1-Page-Classics, gives examples of a kind of "reduced" Bayesianism in the form of a one-pager called "Traditional Claims" and another called "Modal Realism."
The book might also be of interest for virtue ethics, in the form of abbreviations of the famous scroll "The Mandate of Heaven," Confucius's "Analects," and Lao Tzu's "Tao Te Ching."
I also abbreviate Epictetus's "Enchiridion" in a creative fashion, and "Republic of Plato" includes an excellent form of sophist criticism of that project (poetry, the ring of Gyges, etc.).
I've long held CMU's philosophy department in high regard. One of their leading lights, Clark Glymour, recently published a short manifesto, which Brian Leiter summed up as saying that "the measure of value for philosophy departments is whether they are taken seriously by computer scientists."
Selected quote from Glymour's manifesto:
Also see the critique here, but I'd like to have Glymour working on FAI.