
Philosophy Needs to Trust Your Rationality Even Though It Shouldn't

Post author: lukeprog 29 November 2012 09:00PM

Part of the sequence: Rationality and Philosophy

Philosophy is notable for the extent to which disagreements with respect to even those most basic questions persist among its most able practitioners, despite the fact that the arguments thought relevant to the disputed questions are typically well-known to all parties to the dispute.

Thomas Kelly

The goal of philosophy is to uncover certain truths... [But] philosophy continually leads experts with the highest degree of epistemic virtue, doing the very best they can, to accept a wide array of incompatible doctrines. Therefore, philosophy is an unreliable instrument for finding truth. A person who enters the field is highly unlikely to arrive at true answers to philosophical questions.

Jason Brennan

 

After millennia of debate, philosophers remain heavily divided on many core issues. According to the largest-ever survey of philosophers, they're split 25-24-18 on deontology / consequentialism / virtue ethics, 35-27 on empiricism vs. rationalism, and 57-27 on physicalism vs. non-physicalism.

Sometimes, they are even divided on psychological questions that psychologists have already answered: Philosophers are split evenly on the question of whether it's possible to make a moral judgment without being motivated to abide by that judgment, even though we already know that this is possible for some people with damage to their brain's reward system, for example many Parkinson's patients, and patients with damage to the ventromedial frontal cortex (Schroeder et al. 2012).1

Why are physicists, biologists, and psychologists more prone to reach consensus than philosophers?2 One standard story is that "the method of science is to amass such an enormous mountain of evidence that... scientists cannot ignore it." Hence, religionists might still argue that Earth is flat or that evolutionary theory and the Big Bang theory are "lies from the pit of hell," and philosophers might still be divided about whether somebody can make a moral judgment they aren't themselves motivated by, but scientists have reached consensus about such things.

In its dependence on masses of evidence and definitive experiments, science doesn't trust your rationality:

Science is built around the assumption that you're too stupid and self-deceiving to just use [probability theory]. After all, if it was that simple, we wouldn't need a social process of science... [Standard scientific method] doesn't trust your rationality, and it doesn't rely on your ability to use probability theory as the arbiter of truth. It wants you to set up a definitive experiment.

Sometimes, you can answer philosophical questions with mountains of evidence, as with the example of moral motivation given above. But for many philosophical problems, overwhelming evidence simply isn't available. Or maybe you can't afford to wait a decade for definitive experiments to be done. Thus, "if you would rather not waste ten years trying to prove the wrong theory," or if you'd like to get the right answer without overwhelming evidence, "you'll need to [tackle] the vastly more difficult problem: listening to evidence that doesn't shout in your ear."

This is why philosophers need rationality training even more desperately than scientists do. Philosophy asks you to get the right answer without evidence that shouts in your ear. The less evidence you have, or the harder it is to interpret, the more rationality you need to get the right answer. (As likelihood ratios get smaller, your priors need to be better and your updates more accurate.)
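The relationship between evidence strength and required accuracy can be made concrete with the odds form of Bayes' theorem. A minimal sketch (the specific numbers are illustrative, not from the post): with strong evidence, a couple of observations overwhelm a bad prior, but with the weak likelihood ratios typical of philosophical argument, dozens of updates are needed, so errors in the prior and in each update compound.

```python
import math

def posterior_odds(prior_odds, likelihood_ratio, n_updates):
    """Posterior odds after n independent pieces of evidence,
    each carrying the same likelihood ratio (odds form of Bayes' theorem)."""
    return prior_odds * likelihood_ratio ** n_updates

def updates_needed(prior_odds, likelihood_ratio, target_odds):
    """How many independent observations of a given strength it takes
    to move prior_odds up to target_odds."""
    return math.ceil(math.log(target_odds / prior_odds) / math.log(likelihood_ratio))

prior = 1 / 9   # prior probability 0.1, expressed as odds
target = 99     # posterior probability 0.99, expressed as odds

# Evidence that "shouts in your ear": likelihood ratio 100 per observation.
print(updates_needed(prior, 100, target))  # 2 observations suffice

# Weak evidence: likelihood ratio 1.1 per observation.
print(updates_needed(prior, 1.1, target))  # 72 observations needed
```

With a likelihood ratio of 1.1, a small systematic bias in how each piece of evidence is weighed gets multiplied across all seventy-odd updates, which is the quantitative sense in which less-shouty evidence demands more rationality.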

Because it tackles so many questions that can't be answered by masses of evidence or definitive experiments, philosophy needs to trust your rationality even though it shouldn't: we generally are as "stupid and self-deceiving" as science assumes we are. We're "predictably irrational" and all that.

But hey! Maybe philosophers are prepared for this. Since philosophy is so much more demanding of one's rationality, perhaps the field has built top-notch rationality training into the standard philosophy curriculum?

Alas, it doesn't seem so. I don't see much Kahneman & Tversky in philosophy syllabi — just light-weight "critical thinking" classes and lists of informal fallacies. But even classes in human bias might not improve things much due to the sophistication effect: someone with a sophisticated knowledge of fallacies and biases might just have more ammunition with which to attack views they don't like. So what's really needed is regular habits training for genuine curiosity, motivated cognition mitigation, and so on.

(Imagine a world in which Frank Jackson's famous reversal on the knowledge argument wasn't news — because established philosophers changed their minds all the time. Imagine a world in which philosophers were fine-tuned enough to reach consensus on 10 bits of evidence rather than 1,000.)

We might also ask: How well do philosophers perform on standard tests of rationality, for example Frederick's (2005) Cognitive Reflection Test (CRT)? Livengood et al. (2010) found, via an internet survey, that subjects with graduate-level philosophy training had a mean CRT score of 1.32. (The best possible score is 3.)

A score of 1.32 isn't radically different from the mean CRT scores found for psychology undergraduates (1.5), financial planners (1.76), Florida Circuit Court judges (1.23), Princeton undergraduates (1.63), and people who happened to be sitting along the Charles River during a July 4th fireworks display (1.53). It is also noticeably lower than the mean CRT scores found for MIT students (2.18) and for attendees of a LessWrong.com meetup group (2.69).

Moreover, several studies show that philosophers are just as prone to particular biases as laypeople (Schulz et al. 2011; Tobia et al. 2012), for example order effects in moral judgment (Schwitzgebel & Cushman 2012).

People are typically excited about the Center for Applied Rationality because it teaches thinking skills that can improve one's happiness and effectiveness. That excites me, too. But I hope that in the long run CFAR will also help produce better philosophers, because it looks to me like we need top-notch philosophical work to secure a desirable future for humanity.3

 

Next post: Train Philosophers with Pearl and Kahneman, not Plato and Kant

Previous post: Intuitions Aren't Shared That Way

 

 

Notes

1 Clearly, many philosophers have advanced versions of motivational internalism that are directly contradicted by these results from psychology. However, we don't know exactly which version of motivational internalism is defended by each survey participant who said they "accept" or "lean toward" motivational internalism. Perhaps many of them defend weakened versions of motivational internalism, such as those discussed in section 3.1 of May (forthcoming).

2 Mathematicians reach even stronger consensus than physicists, but they don't appeal to what is usually thought of as "mountains of evidence." What's going on, there? Mathematicians and philosophers almost always agree about whether a proof or an argument is valid, given a particular formal system. The difference is that a mathematician's premises consist in axioms and in theorems already strongly proven, whereas a philosopher's premises consist in substantive claims about the world for which the evidence given is often very weak (e.g. that philosopher's intuitions).

3 Bostrom (2000); Yudkowsky (2008); Muehlhauser (2011).

Comments (169)

Comment author: IlyaShpitser 29 November 2012 09:07:20PM 26 points [-]

A minor (but important) nitpick:

[Standard scientific method] doesn't trust your rationality, and it doesn't rely on your ability to use probability theory as the arbiter of truth. It wants you to set up a definitive experiment.

Science sets up experiments not just because it does not trust you, but because even if you were a perfect Bayesian, you could not determine cause-effect relationships just from using Bayes' theorem a lot.

Comment author: lukeprog 30 November 2012 03:59:00AM 11 points [-]

Sure. A good clarification.

Comment author: Eliezer_Yudkowsky 30 November 2012 06:33:40PM 4 points [-]

Right! Besides just Bayes's Theorem, you'd also need Occam's Razor as a simplicity prior over causal structures. And, to drive the probability of a causal structure high enough, confidence that you'd observed in sufficient detail to drive down the probability of extra confounding or intervening variables.

Since the latter part is sometimes difficult though not theoretically impossible to achieve in fields like medicine, a randomized experiment in which you trust that your random numbers will probably have the Markov condition relative to other background variables, can more quickly give you confidence about some directions on causal arrows when the combination of effect size and sample size is large enough. Naturally, all of this is a mere special case of Bayesian reasoning on possible causal structures where (1) you start out very confident that some random numbers are conditionally independent of all their non-descendants in the graph, and (2) you start out very confident that your randomized experimental procedure causally connects to a single descendant node in that graph (the independent variable).
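The point about randomization severing confounding can be shown in a toy simulation (all numbers here are illustrative, not from the thread): a hidden confounder biases the observational estimate of x's effect on y, while setting x by a random number generator, which cuts the arrow from the confounder into x, recovers the true coefficient.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
true_effect = 2.0

# Observational world: a hidden confounder u drives both x and y.
u = rng.normal(size=n)
x_obs = u + rng.normal(size=n)
y_obs = true_effect * x_obs + 3.0 * u + rng.normal(size=n)

# Naive regression coefficient of y on x is biased by the open
# back-door path x <- u -> y.
naive = np.cov(x_obs, y_obs)[0, 1] / np.var(x_obs)

# Randomized experiment: x is set by a random number generator,
# so x is independent of u and the regression recovers the true effect.
x_rand = rng.normal(size=n)
y_rand = true_effect * x_rand + 3.0 * u + rng.normal(size=n)
experimental = np.cov(x_rand, y_rand)[0, 1] / np.var(x_rand)

print(round(naive, 2))         # ~3.5, biased upward by the confounder
print(round(experimental, 2))  # ~2.0, the true causal effect
```

Analytically, the observational coefficient here is cov(x, y)/var(x) = 7/2 = 3.5 rather than 2.0, which is exactly the kind of gap that "trusting your random numbers to have the Markov condition" closes.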

Comment author: IlyaShpitser 30 November 2012 10:07:28PM 7 points [-]

(a) You don't need to observe confounders to learn structure from data. In fact, sometimes you don't need any standard conditional independence at all. (Luke gave me the impression SI wasn't very interested in that point -- maybe it should be).

(b) Occam's razor / faithfulness gives you enough to learn the structure of statistical models, not causal ones. You need additional assumptions to equate the statistical models you learn with causal models. Bayesian networks are not causal models. Causality is not about conditional independence, it is about counterfactual invariance, that is causality expresses what changes or stays the same after a hypothetical 'wiggle.'

There is no guarantee that even given Occam's razor and faithfulness being true that the graph you obtain is such that if I wiggle a parent, the child will change. To verify your causal assumptions, you have to run an experiment, or no scientist will believe your graph is causal. This is what real causal discovery papers do, for example:

http://www.sciencemag.org/content/308/5721/523.abstract

Here they learned a protein signaling network, but then implemented an experiment where they changed the protein level of a parent via an RNA molecule, and verified that the child changed, but parent of a parent did not change.


I am sure you can set up a Bayesian story for this entire enterprise, if you wanted. But, firstly, this Bayesian story would not be expressed purely in probability theory but in the language that can express counterfactual invariance and talk about experiments (for example language of potential outcomes or do(.)). And secondly, giving something a Bayesian story is sort of equivalent to re-expressing some complicated program as a vi macro. Could be done (vi is turing-complete!) but why? People don't write practical code in vi macros.

Comment author: Eliezer_Yudkowsky 01 December 2012 01:01:02AM 5 points [-]

This sounds like we're talking past each other somehow. Your point (a) is not clear to me - I was saying that to learn a sufficiently high-probability causal model from non-intervention data, you need to have observed the data in sufficient detail to rule out confounders (except at some low probability) (via simplicity priors, which otherwise can't drive down the probability of an untestable invisible confounder by all that far). This can certainly be done in principle, e.g. if you put the system under a microscope with a higher resolution than the system, and verified there were only X kinds of stuff in it and no others.

Your point (b) sounds just plain wrong to me. If you have a simplicity prior over causal models, and you can derive testable probable predictions from causal models, then you can do Bayesian updating and get a posterior over causal models. Substituting the word "flammable fizzbins" for "causal models" in the preceding sentence will produce another true sentence. I think you mean something different by "Bayesian" and "Occam's Razor" than I do.

Comment author: IlyaShpitser 01 December 2012 06:10:06AM *  11 points [-]

By (a) I mean that you can sometimes get the true graph exactly even without having to observe confounders. Actually this was sort of known already (see the FCI algorithm, or even the IC* algorithm in Pearl's book), but we can do a lot better than that. For example, if we have the true graph:

a -> b -> c -> d, with a <- u1 -> c, and a <- u2 -> d, where we do not observe u1,u2, and u1,u2 are very complicated, then we can figure out the true graph exactly by independence type techniques without having to observe u1 and u2. Note: the marginal distribution p(a,b,c,d) that came from this graph has no conditional independences at all (checkable by d-separation on a,b,c,d), so typical techniques fail.


(b) is I guess "a subtle issue" -- but my point is about careful language use and keeping causal and statistical issues clear and separate.

A "Bayesian network" (or "belief network" -- I don't like the word Bayesian here because it is confusing the issue, you can use frequentist techniques with belief networks if you wanted, in fact a lot of folks do) is a joint distribution that factorizes as a DAG. That's it. Nothing about causality. If there is a joint density representing a causal process where a is a direct cause of b is a direct cause of c, then this joint density will factorize with respect to both

a -> b -> c

and

a <- b <- c

but only the former graph is causal, the latter is not. Both graphs form a "Bayesian network" with the joint density (since the density factorizes with respect to both graphs), but only one graph is a causal graph. If you want to talk about causal models, in addition to saying that there is a Markov factorization you also need to say something else -- something that makes parents into direct causes. Usually people say something like:

for every x, p(x | pa(x)) = p(x | do(pa(x))), or mention the g-formula, or the truncated factorization of do(.), or "the causal Markov condition."

But this is something that (a) you need to say explicitly, and (b) involves language beyond standard probability theory because there is a do(.), and (c) is controversial to some people. What is do(.)? It refers to a hypothetical experiment/intervention.


If all you are learning is a graph that gives you a Markov factorization you have no business making claims about interventions -- interventions are a separate magisterium. You can assume that the unknown graph from which the data came is causal -- but you need to say this explicitly, this assumption will be controversial to some people, and by making that assumption you are I think committing yourself to the use of interventionist/potential outcome language (just to describe what it means for a data generating graph to be causal).

I have no problems with you doing Bayesian updating and getting posteriors over causal models -- I just wanted to get more precision on what a causal model is. A causal model is not a density factorizing with respect to a DAG -- that's a statistical model. A causal model makes assertions that relate hypothetical experiments like p(x | do(pa(x))) with observed data like p(x | pa(x)). So your Bayesian updating is operating in a world that contains more than just probability theory (which is a theory of standard joint densities, without the mention of do(.) or hypothetical experiments). You can in fact augment probability theory with a logical description of interventions, see for example this paper:

http://www.jair.org/papers/paper648.html


If your notion of causal model does not relate do(.) to observed data, then I don't know what you mean by a causal model. It's certainly not what I mean by it.

Comment author: Eliezer_Yudkowsky 01 December 2012 07:29:20PM 6 points [-]

Well, this is very rapidly getting us into complex territory that future decision-theory posts will hopefully explore, but a very brief answer would be that I am unwilling to define anything fundamental in terms of do() operations because our universe does not contain any do() operations, and counterfactuals are not allowed to be part of our fundamental ontology because nothing counterfactual actually exists and no counterfactual universes are ever observed. There are quarks and electrons, or rather amplitude distributions over joint quark and lepton fields; but there is no do() in physics.

Causality seems to exist, in the sense that the universe seems completely causally structured - there is causality in physics. On a microscopic level where no "experiments" ever take place and there are no uncertainties, the microfuture is still related to the micropast with a neighborhood-structure whose laws would yield a continuous analogue of D-separation if we became uncertain of any variables.

Counterfactuals are human hypothetical constructs built on top of high-level models of this actually-existing causality. Experiments do not perform actual interventions and access alternate counterfactual universes hanging alongside our own, they just connect hopefully-Markov random numbers into a particular causal arrow.

Another way of saying this is that a high-level causal model is more powerful than a high-level statistical model because it can induct and describe switches, as causal processes, which behave as though switching arrows around, and yields predictions for this new case even when the settings of the switches haven't been observed before. This is a fancypants way of saying that a causal model lets you throw a bunch of rocks at trees, and then predict what happens when you throw rocks at a window for the first time.

Comment author: Wei_Dai 01 December 2012 08:32:02PM 7 points [-]

As an additional data point, I also still do not have a very good understanding of your ideas about causality (although I did note earlier that it seems rather different from Pearl's (which are similar to Ilya's)). I also note that nobody else seems to have a good understanding of your ideas, at least not enough to try to build upon them either here on LW or on the decision theory mailing list or try to explain them to me when I asked.

Comment author: Eliezer_Yudkowsky 01 December 2012 09:17:11PM 3 points [-]

Interesting. Sorry to bother you further, but can I ask you to quote a particular sentence or paragraph above that seems unclear? Or was the above clear, but it implies other questions that aren't clear, or the motivations aren't clear?

Comment author: Wei_Dai 01 December 2012 11:19:27PM *  7 points [-]

On second thought, the main problem may not be lack of clarity but that your ideas about causality are too speculative and people either lack confidence that your research program (try to reduce Pearl's do()-based causality to lower-level "causality in physics") is the right one, or do not see how to proceed.

Both apply for me but the former is perhaps more relevant at this point. Basically I'm not sure that "do()-based causality" will actually end up playing a role in the ultimate "correct" decision theory (I guess if there is lack of clarity, it's why you think that it will), and in the mean time there are other problems that definitely need to be solved and also seem more approachable.

(To explain why I think "do()-based causality" may not end up playing a role, it seems plausible that in an AI or at least decision theory (I wanted to say theoretical decision theory but that seems redundant :), cognition about "high-level causality" just ends up being handled as a special case by a more general algorithm, similar to how an AI programmed to maximize expected utility wouldn't specifically need to be hand-coded with natural language processing if it was running on a sufficiently powerful computer.)

ETA: BTW, can you comment on whether my understanding in this comment was correct, and whether they still apply to Eliezer_2012?

Comment author: Eliezer_Yudkowsky 02 December 2012 01:15:53AM 3 points [-]

You realize I'm arguing against do()-based causality? If not, I was very much unclearer than I thought.

I have never tried to reduce causal arrows to similarity; Barbour does, I don't. I take causality to be, or be the epistemic conjugate of, something physical and real which was involved in manufacturing this oddly-well-modeled-by-causality universe that we actually live in. They are presently primitive in my model; I have not yet reduced them, except in the obvious sense that they are also formal mathematical relations between points, i.e., causal relations are a special case of logical relations (and yet we still live in a causal universe rather than a merely logical one). I do indeed reduce consciousness to computation and computation to causality, though there's a step here involving magical reality-fluid about which I am still confused - I have no idea why or what it means for a causal process to be more or less real, either as a result of having more or less Born measure, being instantiated in many places, or for any other reason.

Comment author: Benja 02 December 2012 12:46:38AM 5 points [-]

As a third data point, I used to be very confused about your ideas about causality, but your recent writing has helped a lot. To make embarrassingly clear how very wrong I've been able to be, some years ago when you'd told us about TDT but not given details, I thought you had a fully worked-out and justified theory about how a decision agent could use causal graphs to model its uncertainty about the output of platonic computations, and use do() on its own output to compute the utility of different courses of action, and I got very frustrated when I simply couldn't figure out how to fill in the details of that...

...hmm. (I should probably clarify: when I say "use causal graphs to reason about", I don't mean in the 'trivial' sense you are actually using where the platonic computations cause other things but are themselves uncaused in the model; I mean some sort of system where different computations and/or logical facts about computations form a non-degenerate graph, and where do() severs one node somewhere in the middle of that graph from its parents.) "And", I was going to say, "when you finally did tell us more, I had a strong oh moment when you said that you still weren't able to give a completely satisfying theory/justification, but were reasonably satisfied with the version you had. But I still continued to think that my picture of what you had been trying to do had been correct, only you didn't have a fully worked-out theory of it, either." The actual quote that turned into this memory of things seems to be,

Note that this does not solve the remaining open problems in TDT (though Nesov and Dai may have solved one such problem with their updateless decision theory). Also, although this theory goes into much more detail about how to compute its counterfactuals than classical CDT, there are still some visible incompletenesses when it comes to generating causal graphs that include the uncertain results of computations, computations dependent on other computations, computations uncertainly correlated to other computations, computations that reason abstractly about other computations without simulating them exactly, and so on.

But there's also this:

The three-sentence version is: Factor your uncertainty over (impossible) possible worlds into a causal graph that includes nodes corresponding to the unknown outputs of known computations; condition on the known initial conditions of your decision computation to screen off factors influencing the decision-setup; compute the counterfactuals in your expected utility formula by surgery on the node representing the logical output of that computation.

And later:

Those of you who've read the quantum mechanics sequence can extrapolate from past experience that I'm not bluffing.

Huh. In retrospect I can see how this matches my current understanding of what you're doing, but comparing this to what I wrote in the first paragraph above (before searching for that post), it's actually surprisingly nonobvious where the difference is between what you wrote back then and what I wrote just now to explain the way in which I had horribly misunderstood you...

Anyway. As for what you wrote in the great-grandparent, I had to read it slowly, but most of it makes perfect sense to me; the last paragraph I'm not quite as sure about, but there too I think I understand what you mean.

There is, however, one major point on which I currently feel confused. You seem to be saying that causal reasoning should be seen as a very fundamental principle of epistemology, and on your list of open problems, you have "Better formalize hybrid of causal and mathematical inference." But it seems to me that if you just do inference about logical uncertainty, and the mathematical object you happen to be interested in is a cellular automaton or the PDE giving the time evolution of some field theory, then your probability distribution over the state at different times will necessarily happen to factor in such a way that it can be represented as a causal model. So why treat causality as something fundamental in your epistemology, and then require deep thinking about how to integrate it with the rest of your reasoning system, rather than treating it as an efficient way to compress some probability distributions, which then just automatically happens to apply to the mathematical objects representing our actual physics? (At this point, I ask this question not as a criticism, but simply to illustrate my current confusion.)

Comment author: IlyaShpitser 02 December 2012 04:33:22AM 1 point [-]

So why treat causality as something fundamental in your epistemology, and then require deep thinking about how to integrate it with the rest of your reasoning system, rather than treating it as an efficient way to compress some probability distributions, which then just automatically happens to apply to the mathematical objects representing our actual physics?

Because causality is not about efficiently encoding anything. A causal process a -> b -> c is equally efficiently encoded via c -> b -> a.

But it seems to me that if you just do inference about logical uncertainty, and the mathematical object you happen to be interested in is a cellular automaton or the PDE giving the time evolution of some field theory, then your probability distribution over the state at different times will necessarily happen to factor in such a way that it can be represented as a causal model.

This is not true, for lots of reasons, one of them having to do with "observational equivalence." A given causal graph has many different graphs with which it agrees on all observable constraints. All these other graphs are not causal. The 3 node chain above is one example.
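The observational equivalence of the two chains is easy to verify numerically. A sketch with made-up conditional probability tables: the joint built from a -> b -> c factorizes exactly as c -> b -> a as well (because a is independent of c given b in a chain), so nothing in the density itself distinguishes the causal direction.

```python
import numpy as np

# A joint over binary a, b, c generated by the chain a -> b -> c,
# with illustrative conditional probability tables.
p_a = np.array([0.7, 0.3])
p_b_given_a = np.array([[0.9, 0.1], [0.2, 0.8]])    # rows indexed by a
p_c_given_b = np.array([[0.6, 0.4], [0.25, 0.75]])  # rows indexed by b

# joint[a, b, c] = p(a) p(b|a) p(c|b)
joint = np.einsum('a,ab,bc->abc', p_a, p_b_given_a, p_c_given_b)

# Now factorize the *same* joint the other way: p(c) p(b|c) p(a|b).
p_c = joint.sum(axis=(0, 1))
p_b = joint.sum(axis=(0, 2))
p_bc = joint.sum(axis=0)          # [b, c]
p_ab = joint.sum(axis=2)          # [a, b]
p_b_given_c = p_bc / p_c          # columns sum to 1
p_a_given_b = p_ab / p_b          # columns sum to 1

joint_reversed = np.einsum('c,bc,ab->abc', p_c, p_b_given_c, p_a_given_b)

print(np.allclose(joint, joint_reversed))  # True: same distribution, both graphs
```

Both graphs are "Bayesian networks" for this density, but only one of them tells you what happens if you wiggle b.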

Comment author: IlyaShpitser 01 December 2012 11:08:39PM *  5 points [-]

I would be interested in reading about this. A few points:

(a) I agree that causality is a "useful fiction" (like real numbers or derivatives).

(b) If you are going to be writing posts about "causal diagrams" you need to be clear about what you mean. Usually by causal diagrams people mean Pearl's stuff, or closely related stuff (agnostic causal models, minimal causal models, etc.) All these models are defined via either do(.) or stronger notation. If you do not mean that by causal diagrams, that's fine! But please explain what you do mean to avoid confusing people. You have a paper on TDT that seems to use causal diagrams. Which ones did you mean in there?

edit: I should say that if your project has "defining actual cause" as a special case, it's probably a black hole from which no one returns (it's the analytic philosophy version of the P/NP problem).

edit 2: I think the derivation of "do(.)" ought to be not dissimilar to the derivation of "+", if you worry about induction problems. "+" is a mathematical fiction very useful for representing regularities with handling objects, "do(.)" is a mathematical fiction very useful for representing regularities involved with algorithms with actuators running around.

Comment author: Eliezer_Yudkowsky 02 December 2012 01:11:45AM 0 points [-]

If causality is a useful fiction, it's conjugate to some useful nonfiction; I should like to know what the latter is.

I don't think Pearl's diagrams are defined via do(). I think I disagree with that statement even if you can find Pearl making it. Even if do() - as shorthand for describing experimental procedures involving switches on arrows - does happen to be a procedure you can perform on those diagrams, that's a consequence of the definition, it is not actually part of the representation of the actual causal model. You can write out causal models, and they give predictions - this suffices to define them as hypotheses.

More importantly: How can you possibly make the truth-condition be a correspondence to counterfactual universes that don't actually exist? That's the point of my whole epistemology sequence - truth-conditions get defined relative to some combination of physical reality that actually exists, and valid logical consequences pinned down by axioms. So yes, I would definitely derive do() rather than have it being primitive, and I wouldn't ever talk about the truth-condition of causal models relative to a do() out there in the environment - we talk about the truth-condition of causal models relative to quarks and electrons and quantum fields, to reality.

I'm a bit worried (from some of his comments about causal decision theory) that Pearl may actually believe in free will, or did when he wrote the first edition of Causality. In reality nothing is without parents, nothing is physically uncaused - that's the other problem with do().

Comment author: IlyaShpitser 02 December 2012 04:38:37AM *  6 points [-]

I don't think Pearl's diagrams are defined via do(). I think I disagree with that statement even if you can find Pearl making it.

Well, the author is dead, they say.

There are actually two separate causal models in Pearl's book: "causal Bayesian networks" (chapter 1), and "functional models" aka "non-parametric structural equation models" (chapter 7). These models are not the same, in fact functional models are a lot stronger logically (that is they make many more assumptions).

The first is defined via do(.), you can check the definition. The second can be defined either via a set of functions, or via a set of axioms. The two definitions are, I believe, equivalent. The axiomatic approach is valuable in statistics, where we often cannot exhibit the functions that make up the model, and must resort to enumerating assumptions. If you want to take the axiomatic approach you need a language stronger than do(.). In particular you need to be able to express counterfactual statements of the form "I have a headache. Would I have a headache had I taken an aspirin one hour ago?" Pearl's model in chapter 7 actually makes assumptions about counterfactuals like that. If you think talking about counterfactual worlds that don't actually exist is dubious, then you join a large chorus of folks who are critical of Pearl's functional models.

If you want to learn more about different kinds of causal models people look at, and the criticisms of models that make assumptions on counterfactuals, the following is a good read:

http://events.iq.harvard.edu/events/sites/iq.harvard.edu.events/files/wp100.pdf


Some folks claim that a model is not causal unless it assumes consistency: an axiom stating that if, for a person u, we intervene on X and set it to a value x that naturally occurs in u, then for any Y in u, the value of Y under that intervention equals the value of Y in that same person had we not intervened on X at all. Or, concisely:

Y(x,u) = Y(u), if X(u) = x

or even more concisely:

Y(X) = Y

This assumption is actually counterfactual. Without this assumption it's not possible to do causal inference.
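In a functional (chapter 7) model, consistency is a theorem rather than an assumption: plugging X's natural value into Y's equation is exactly how the non-intervened Y is defined. A minimal sketch, with made-up structural equations:

```python
# Toy structural equations; the particular functions are invented for illustration.
def X(u):
    # X's natural (non-intervened) value, determined by background u.
    return u % 2

def Y(x, u):
    # Y's value under the intervention do(X = x), for background u.
    return (x + u) % 3

def Y_natural(u):
    # Y's value when nobody intervenes: plug in X's natural value.
    return Y(X(u), u)

# Consistency: Y(x, u) = Y(u) whenever X(u) = x, i.e. forcing X to the value
# it would have taken anyway changes nothing about Y.
for u in range(20):
    for x in (0, 1):
        if X(u) == x:
            assert Y(x, u) == Y_natural(u)
```

Consistency only becomes a substantive axiom when the model is specified by listing assumptions about counterfactuals rather than by exhibiting the functions.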

Comment author: thomblake 03 December 2012 05:36:29PM 1 point [-]

Reading this whole thread, I'm interested to know what your thoughts on causality are. Do you have existing posts on the subject that I should re-read? I was under the impression you pretty much agreed with Pearl, but now that seems not to be the case.

By the way, Pearl certainly wasn't arguing from a "free will" perspective - rather, I think he'd agree with "there is no do() in physics" but disagree that "there is causality in physics".

Comment author: Eliezer_Yudkowsky 01 December 2012 09:06:00PM 1 point [-]

a -> b -> c -> d, with a <- u1 -> c, and a <- u2 -> d, where we do not observe u1,u2, and u1,u2 are very complicated, then we can figure out the true graph exactly by independence type techniques without having to observe u1 and u2. Note: the marginal distribution p(a,b,c,d) that came from this graph has no conditional independences at all (checkable by d-separation on a,b,c,d), so typical techniques fail.

Irrelevant question: Isn't (b || d) | a, c?

Comment author: IlyaShpitser 01 December 2012 10:42:26PM 9 points [-]

No, because b -> c <-> a <-> d is an open path if you condition on c and a.
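This can be checked mechanically with the moralized-ancestral-graph criterion for d-separation: X and Y are d-separated given Z iff they are disconnected after restricting to ancestors of X, Y, Z, marrying co-parents, dropping edge directions, and deleting Z. A self-contained sketch, with Eliezer's graph encoded by hand:

```python
# D-separation via the moralized ancestral graph.
from itertools import combinations

def d_separated(dag, xs, ys, zs):
    # dag maps each node to the list of its parents.
    relevant = set(xs) | set(ys) | set(zs)
    # 1. Restrict to ancestors of the variables involved.
    anc = set()
    def visit(n):
        if n in anc:
            return
        anc.add(n)
        for p in dag.get(n, []):
            visit(p)
    for n in relevant:
        visit(n)
    # 2. Moralize: connect co-parents, then drop edge directions.
    edges = set()
    for child in anc:
        for p in dag.get(child, []):
            edges.add(frozenset((child, p)))
        for p, q in combinations(dag.get(child, []), 2):
            edges.add(frozenset((p, q)))
    # 3. Delete the conditioning set and test undirected connectivity.
    frontier, seen = set(xs), set(xs)
    while frontier:
        n = frontier.pop()
        for e in edges:
            if n in e:
                (m,) = tuple(e - {n})
                if m not in set(zs) and m not in seen:
                    seen.add(m)
                    frontier.add(m)
    return not (seen & set(ys))

# Eliezer's graph: a -> b -> c -> d, with a <- u1 -> c and a <- u2 -> d.
dag = {"a": ["u1", "u2"], "b": ["a"], "c": ["b", "u1"], "d": ["c", "u2"]}
print(d_separated(dag, {"b"}, {"d"}, {"a", "c"}))  # False: b and d dependent given a, c
```

Conditioning on the colliders a and c is exactly what opens the path; conditioning on u1 and u2 as well would close it again.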

Comment author: Eliezer_Yudkowsky 02 December 2012 01:07:13AM 2 points [-]

Ah, right.

Comment author: lukeprog 02 December 2012 01:31:49AM *  3 points [-]

Luke gave me the impression SI wasn't very interested in that point

How? I find myself very interested in this point, just not enough to schedule a lecture about it in the next month, since we have a lot of other things going on, and we're out of town, and so on.

Comment author: IlyaShpitser 02 December 2012 02:24:48AM 4 points [-]

Fair enough, retracted. Sorry!

Comment author: pengvado 30 November 2012 11:55:38PM *  3 points [-]

On your account, how do you learn causal models from observing someone else perform an experiment? That doesn't involve any interventions or counterfactuals. You only see what actually happens, in a system that includes a scientist.

Comment author: IlyaShpitser 01 December 2012 12:11:33AM *  4 points [-]

That depends what you mean by an "experiment." If you divide a set of patients into a control group and a test group, and then have the test group smoke a pack of cigarettes per day, that is an "experiment" to me, one that is represented by an intervention (because we are forcing the test group to smoke regardless of what they would naturally want to do).

Observing that the test group is much more likely to develop cancer would lead me to conclude that the graph

smoking -> cancer

is a causal graph rather than merely a statistical graph.


If we do not perform the above experiment due to ethical reasons, but instead use observational data on smokers, we have to worry about confounders, like Fisher did. We also have to worry because we are implicitly linking that data with counterfactual situations (what would have happened if those guys we observed were forced to smoke). This linking isn't "free"; there are assumptions operating in the background - assumptions expressed in a language that can talk about counterfactual situations.
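The gap between the two designs can be simulated directly. The numbers below are invented for illustration, but they show how P(cancer | smoking) and P(cancer | do(smoking)) come apart once a hidden confounder drives both variables:

```python
# Observational vs. interventional data under confounding; all probabilities invented.
import random
random.seed(0)

def sample(intervene_smoke=None):
    gene = random.random() < 0.5                          # hidden confounder
    if intervene_smoke is None:
        smoke = random.random() < (0.8 if gene else 0.2)  # natural behavior
    else:
        smoke = intervene_smoke                           # do(smoke)
    p_cancer = (0.3 if smoke else 0.1) + (0.3 if gene else 0.0)
    cancer = random.random() < p_cancer
    return smoke, cancer

N = 100_000
obs = [sample() for _ in range(N)]
smokers = [c for s, c in obs if s]
p_obs = sum(smokers) / len(smokers)               # P(cancer | smoke), confounded
exp = [sample(intervene_smoke=True) for _ in range(N)]
p_do = sum(c for s, c in exp) / N                 # P(cancer | do(smoke))
print(f"observed {p_obs:.2f} vs intervened {p_do:.2f}")  # observed risk is inflated
```

Here natural smokers disproportionately carry the gene, so the observational risk exceeds the risk under forced smoking; an analysis that treats the observational number as causal overstates the effect.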

Comment author: thomblake 29 November 2012 09:46:36PM 7 points [-]

I'm so glad you post here.

Comment author: [deleted] 02 December 2012 12:55:09AM 8 points [-]

We might also ask: How well do philosophers perform on standard tests of rationality, for example Frederick (2005)'s CRT?...

Your presentation here seems misleading to me. You imply that philosophers are merely average scorers on the CRT relative to the rest of the (similarly educated) population.

This claim is misleading for several reasons: 1) The study from which you get the philosophers' score reports a mean score for people who have had some graduate-level philosophical training. This is a set that will overlap with many of the other groups you mention. While it will include all professional philosophers, I don't think a majority of the set will be professional philosophers. Graduate-level courses in logic, political philosophy, etc., are pretty standard in graduate educations across the board.

2) Frederick takes scores from a variety of different schools, trying to capture people who are, evidently, undergraduates, graduate students, or faculty. Frederick comes up with a mean score of 1.24 for respondents who are members of a university. In contrast, Livengood (from which you get the philosophers' mean score) gets mean scores of 0.65 and 0.82 for people with undergraduate and graduate/professional education, respectively. If these two studies were using similar tests and methodologies, we should expect these scores to converge more. It seems likely that the Frederick study is not using comparable methodology or controls, making the straight comparison of scores misleading.

3) The Livengood study actually argues that people with some philosophical training tend to do significantly better than the rest of the population on the CRT, even when one controls for education. You do not mention this. You really ought to, especially since, unlike the Frederick study, the Livengood study is the only one you cite that uses a methodology relevant to the question you're asking.

Comment author: RobbBB 29 November 2012 10:27:59PM *  18 points [-]

Your previous post was good, but this one seems to be eliding a few too many issues. If you took a poll of physicists asking them to explain what their fundamental model — quantum mechanics — actually tells us about the world (surely a simple enough question), there would be disagreement comparable to that regarding the philosophical questions you mentioned. The survey you cite is also obviously unhelpful, in that the questions on that survey were chosen because they're controversial. Most philosophical questions are not very controversial, but for that very reason you don't hear much about them. If we hand-picked all the foundational questions physicists disagreed about and conducted a popularity poll, would we be rightly surprised to find that the poll results were divided?

(It's also worth noting that some of the things being measured by the poll are attitudinal and linguistic variation between different philosophical schools and programs, not just doctrinal disagreements. Why should we expect ethicists and philosophers of mathematics to completely agree in methodology and terminology, when we do not expect the same from physicists and biologists?)

There are three reasons philosophers disagree about foundational issues:

(1) Almost everyone disagrees, at least tacitly, about foundational issues. Foundational issues are hard, and our ordinary methods of acquiring truth and resolving disagreements often short-circuit when we arrive at them. Scientific realism is controversial among scientists. Platonism is controversial among mathematicians. Moral realism is controversial among politicians and voters. Philosophers disagree about these matters for the same basic reasons that everyone else does; the only difference is that philosophers do not follow the social conventions the rest of us follow, conventions that dictate bracketing and ignoring foundational disagreements as much as possible. In other words...

(2) ... philosophy is about foundational disagreement. There is no one worldly content or subject matter or methodology shared between all the things we call 'philosophy.' The only thing we really use to distinguish philosophers from non-philosophers is how foundational and controversial the things they talk about are. When you put all the deep controversies in a box and call that box Philosophy, you should not be surprised upon opening the box to see that it is clogged with disagreement.

(3) Inasmuch as philosophical issues are settled, they stop getting talked about. So there's an obvious selection bias effect. Philosophical progress occurs; but that progress gets immediately imported into our political systems, our terminological choices and conceptual distinctions, our scientific theories and practices, our logical and mathematical toolboxes. And then it stops being philosophy.

That said, I agree with a lot of your criticisms of a lot of philosophers' practices. They need more cognitive science and experimentalism. Desperately. But we should be a lot more careful and sophisticated in making this criticism, because most philosophers these days (even the most metaphysically promiscuous) do not endorse the claim 'our naive, unreflective intuitions automatically pick out the truth,' and because we risk alienating the Useful Philosophers when we make our target of attack simply Philosophy, rather than a more carefully constructed group.

LessWrong: Start tabooing the word 'philosophy.' See how it goes.

Comment author: Vladimir_Nesov 29 November 2012 10:37:32PM *  7 points [-]

If you took a poll of physicists asking them to explain what their fundamental model — quantum mechanics — is actually asserting about the world (surely a simple enough question), there would be disagreement comparable to that regarding the philosophical questions you mentioned.

A major problem with modern physics is that there are almost no known phenomena that work in a way that disagrees with how modern physics predicts they would work (in principle; there are lots of inferential/computational difficulties). What physics asserts about the world coincides, to the best of anyone's knowledge, with what's known about most of the world in all detail. Physicists have to build billion-dollar monstrosities like the LHC just to get their hands on something they don't already thoroughly understand. This doesn't resemble the situation in philosophy in the slightest.

Comment author: RobbBB 29 November 2012 10:51:15PM 2 points [-]

You're speaking in very general terms, and you're not directly answering my question, which was 'what is quantum mechanics asserting about the world?' I take it that what you're asserting amounts to just "It all adds up to normality." But that doesn't answer questions concerning the correct interpretation of quantum mechanics. "x + y + z . . . = normality." That's a great sentiment, but I'm asking about what physics' "x" and "y" and "z" are, not questioning whether the equation itself holds.

Comment author: Vladimir_Nesov 29 November 2012 11:02:01PM *  2 points [-]

you're not directly answering my question, which was 'what is quantum mechanics asserting about the world?'

I'm pointing out that in particular it's asserting all those things that we know about the world. That's a lot, and the fact that there is consensus and not much arguing about this shouldn't make this achievement a trivial detail. This seems like a significant distinction from philosophy that makes simple analogies between these disciplines extremely suspect.

(I agree that I'm not engaging with the main points of your comment; I'm focusing only on this particular aside.)

Comment author: RobbBB 29 November 2012 11:07:26PM *  -2 points [-]

So your response to my pointing out that physicists too disagree about basic things, is to point out that physicists don't disagree about everything. In particular, they agree that the world around us exists.

Uh... good for them? Philosophers too have been known to harbor a strong suspicion that there is a world, and that it harbors things like chairs and egg timers and volcanoes. Physicists aren't special in that respect. (In particular, see the philosophical literature on Moorean facts.)

Comment author: Vladimir_Nesov 29 November 2012 11:11:43PM *  2 points [-]

physicists don't disagree about everything. In particular, they agree that the world around us exists. ... Philosophers too have been known to harbor a strong suspicion that there is a world

Physicists agree about almost everything. In particular, they agree about all specific details about how the world works relevant (in principle) to most things that have ever been observed (this is a lot more detail than "the world exists").

Comment author: RobbBB 29 November 2012 11:25:04PM *  1 point [-]

They agree about the most useful formalisms for modeling and predicting observations. But 'formalism' and 'observation' are not themselves concepts of physics; they are to be analyzed away in the endgame. My request is not for you to assert (or deny) that physicists have very detailed formalisms, or very useful ones; it is for you to consider how much agreement there is about the territory ultimately corresponding to these formalisms.

A simple example is the disagreement about which many-worlds-style interpretation is best; about whether many-worlds-style interpretations are the best interpretations at all; and about whether, if they are the best, they're best enough to dominate the probability space. Since the final truth-conditions and referents of all our macro- and micro-physical discourse depend on this interpretation, one cannot duck the question 'what are chairs?' or 'what are electrons?' simply by noting 'chairs are something or other that's real and fits our model.' It's true, but it's not the question under dispute. I said physicists disagree about many things; I never said that physicists fail to agree about anything, so changing the topic to the latter risks confusing the issue.

Comment author: prase 30 November 2012 07:24:57PM 3 points [-]

You are basically saying that physicists disagree about philosophical questions.

Comment author: RobbBB 30 November 2012 07:38:10PM 1 point [-]

Is the truth of many-worlds theory, or of non-standard models, a purely 'philosophical' matter? If so, then sure. But that's just a matter of how we choose to use the word 'philosophy;' it doesn't change the fact that these are issues physicists, specifically, care and disagree about. To dismiss any foundational issue physicists disagree about as for that very reason 'philosophical' is merely to reaffirm my earlier point. Remember, my point was that we tend to befuddle ourselves by classifying issues as 'philosophical' because they seem intractable and general, then acting surprised when all the topics we've classified in this way are, well, intractable and general.

It's fine if you think that humanity should collectively and universally give up on every topic that has ever seemed intractable. But you can make that point much more clearly in those simple words than by bringing in definitions of 'philosophy.'

Comment author: Desrtopa 01 December 2012 04:41:38PM 3 points [-]

It seems that the matters you're arguing that scientists disagree on are all ones where we cannot, at least by means anyone's come up with yet, discriminate between options by use of empiricism.

The questions they disagree on may or may not be "philosophical," depending on how you define your terms, but they're questions that scientists are not currently able to resolve by doing science to them.

The observation that scientists disagree on matters that they cannot resolve with science doesn't detract from the argument that the process of science is useful for building consensuses. If anything it supports it, since we can see that scientists do not tend to converge on consensuses on questions they aren't able to address with science.

Comment author: prase 01 December 2012 06:25:07PM *  0 points [-]

I think you are reading too much into my comment. It totally wasn't about what humanity should collectively give up on, or even what anybody should. And I agree that philosophy is effectively defined as a collection of problems which are not yet understood well enough to even be investigated by standard scientific methods.

I was only pointing out (perhaps not very clearly, but I hadn't time for a lengthier comment) that the core of physics is formalisms and modelling and predictions (and perhaps engineering issues, since experimental apparatuses today are often more complex than the phenomena they are used to observe). That is, almost all knowledge needed to be a physicist is ordinary "non-philosophical" knowledge that everybody agrees upon, and almost all talks at physics conferences are about formalism and observations, while the questions you label "foundational" are given a relatively small amount of attention. It may seem that asking "what is the true nature of the electron" is a question of physics, since it is about electrons, but actually most physicists would find the question uninteresting and/or confused, while the same question might sound truly interesting to a philosopher. (And it isn't due to lack of agreement on the correct answer, but more likely because physicists like more specific / less vague questions than philosophers do.)

One can get a false impression about that, since the most famous physicists tend to talk significantly more about philosophical questions than the average physicist does. But if Feynman speaks about the interpretation of quantum mechanics, that's not proof that the interpretation of quantum mechanics is an extremely important question of physics (because otherwise a Nobel laureate wouldn't talk about it); it's rather proof that Feynman has really high status and can get away with giving a talk on a less-than-usually rigorous topic (and it is much easier to make an interesting lecture from philosophical stuff than from more technical stuff).

Of course, my point is partly about definitions - not so much the definition of philosophy but rather the definition of physics - but once we are comparing two disciplines having common definitions of those disciplines is unavoidable.

Comment author: Pablo_Stafforini 30 November 2012 06:34:04AM 3 points [-]

Most philosophical questions are not very controversial, but for that very reason you don't hear much about them.

Really? Can you name a few philosophical questions whose answers are uncontroversial?

Comment author: Viliam_Bur 30 November 2012 11:55:01AM 5 points [-]

Inasmuch as philosophical issues are settled, they stop getting talked about.

Why exactly? I mean, there is no controversy in mathematics about whether 2+2=4, and yet we continue teaching this knowledge in schools. Uncontroversial, yet necessary to be taught, because humans don't get it automatically, and because it is necessary for more complicated calculations.

Why exactly don't philosophers do an equivalent of this? Is it because once a topic has been settled at a philosophical conference, the next generations of humans are automatically born with this knowledge? Or because the answer is published so widely that it becomes more known than the knowledge of 2+2=4? Or what?

Start tabooing the word 'philosophy.' See how it goes.

First approximation: Pretended ability to make specific conclusions concerning ill-defined but high-status topics. :(

Comment author: RobbBB 30 November 2012 07:23:47PM 6 points [-]

I mean, there is no controversy in mathematics about whether 2+2=4, and yet we continue teaching this knowledge in schools.

Yes, and we continue teaching modus ponens and proof by reductio in philosophy classrooms. (Not to mention historical facts about philosophy.) Here we're changing the subject from 'do issues keep getting talked about equally after they're settled?' to 'do useful facts get taught in class?' The philosopher certainly has plenty of simple equations to appeal to. But the mathematician also has foundational controversies, both settled and open.

Pretended ability to make specific conclusions concerning ill-defined but high-status topics. :(

So if I pretend to be able to make specific conclusions about capital in macroeconomics, I'm doing philosophy?

Comment author: Wei_Dai 30 November 2012 12:02:21PM 6 points [-]

I'm not sure that more rationality in philosophy would help enough as far as FAI is concerned. I expect that if philosophers became more rational, they would mainly just become more uncertain about various philosophical positions, rather than reach many useful (for building FAI) consensuses.

If you look at the most interesting recent advances in philosophy, it seems that most of them were made by non-philosophers. For example, Turing, Church, and others' work on understanding the nature of computation, von Neumann and Morgenstern's decision theory, Tegmark's Ultimate Ensemble, and algorithmic information theory / Solomonoff Induction. (Can anyone think of a similarly impressive advance made by professional philosophers, in this same time frame?) Based on this, I think appropriate background knowledge and raw intellectual firepower (most of the smartest humans probably go into math/science instead of philosophy) are perhaps more important than rationality for making philosophical progress.

Comment author: Peterdjones 30 November 2012 01:01:19PM *  8 points [-]

(Can anyone think of a similarly impressive advance made by professional philosophers, in this same time frame?)

  • Quine's attack on apriority and analyticity.
  • Kuhn's and Popper's philosophy of science.
  • Rawls' and Nozick's political philosophy.
  • Kripke's new metaphysical necessity.

ETA:

  • Austin's speech act theory
  • Ryle's critique of Cartesianism
  • HOT theory (various)
  • Tarski's Convention T
  • Gettier's counterexamples
  • Parfit on personal identity
  • Parfit on ethics
  • Wittgenstein's PLA
Comment author: Wei_Dai 30 November 2012 06:08:31PM 6 points [-]

I'm only familiar with about a third of these (not counting Tarski, who I agree with JoshuaZ is more of a mathematician than a philosopher), but the ones that I am familiar with do not seem as interesting/impressive/fruitful/useful as the advances I mentioned in the grandparent comment. If you could pick one or two items on your list for me to study in more detail, which would you suggest?

Comment author: BerryPick6 30 November 2012 06:12:26PM 1 point [-]

I know you aren't asking me, but my choices to answer this question would be Popper's philosophy of science, Rawls' and Nozick's political philosophy, and Quine.

Comment author: Peterdjones 30 November 2012 07:00:39PM 0 points [-]

Interesting to whom? Fruitful for what?

Comment author: Wei_Dai 30 November 2012 09:18:03PM 5 points [-]

Interesting to whom? Fruitful for what?

According to my own philosophical interests, which as it turned out (i.e., apparently by coincidence) also seem well aligned with what's useful for building FAI. I guess one thing that might be causing us to talk a bit past each other is that I read the opening post as talking about philosophy in the context of building FAI (since I know that's what the author is really interested in), but you may be seeing it as talking about philosophy in general (and looking at the post again, I notice that it doesn't actually mention Friendly AI at all, except by linking to a post about it).

Anyway, if you think any of the examples you gave might be especially interesting to someone like me, please let me know. Or, if you want, tell me which is most interesting to you and why.

Comment author: JoshuaZ 30 November 2012 05:38:45PM 3 points [-]

Most of your examples seem valid but this one is strongly questionable:

Tarski's convention T

This example doesn't work. Tarski was a professional mathematician. There was a lot of interplay at the time between math and philosophy, but it seems he was closer to the math end of things. He did at times apply for philosophy positions, but for the vast majority of his life he was doing work as a mathematician. He was a mathematician/logician when he was at the Institute for Advanced Study, and he spent most of his professional career as a professor in the math department at Berkeley. Moreover, while he did publish some papers in philosophy proper, he was in general a very prolific writer, and the majority of his work (like his work on quantifier elimination in the real numbers, or the Banach-Tarski paradox) is unambiguously mathematical.

Similarly, the people who studied under him are all thought of as mathematicians (like Julia Robinson) or mathematician-philosophers (Feferman), with most in the first category.

Overall, Tarski was much closer to being a professional mathematician whose work sometimes touched on philosophy than a professional philosopher who sometimes did math.

Comment author: BerryPick6 30 November 2012 02:14:25PM *  3 points [-]
  • Mackie's Argument from Queerness
  • Hare and Ayers' work on Expressivism
  • Goodman's New Riddle of Induction
  • Wittgenstein
  • Frankfurt on Free Will
  • The Quine-Putnam indispensability thesis
  • Causal Theory of Reference
Comment author: TimS 30 November 2012 09:20:54PM 2 points [-]

Kuhn's' and Popper's philosophy of science.

Made me laugh for a second seeing those two on the same line, because Popper (falsifiability) and Kuhn (The Structure of Scientific Revolutions) are not particularly related.

Comment author: Peterdjones 30 November 2012 11:05:28PM 1 point [-]

Not at all. I should probably have put them on separate lines.

Comment author: paper-machine 30 November 2012 12:49:55PM 3 points [-]

(Can anyone think of a similarly impressive advance made by professional philosophers, in this same time frame?)

I think the canonical example would be Thomas Metzinger's model of the first-person perspective.

Comment author: Wei_Dai 01 December 2012 12:27:18PM 1 point [-]

I think the canonical example would be Thomas Metzinger's model of the first-person perspective.

Wouldn't there be at least one reference to his book in the SEP if that were true?

Comment author: gwern 01 December 2012 04:49:07PM 2 points [-]
Comment author: Wei_Dai 01 December 2012 08:11:39PM *  0 points [-]

Yeah, I did the same search, but none of those results reference his main work, the book that paper-machine cited (or any other papers/books that, judging from the titles, are about his main ideas).

Comment author: gwern 01 December 2012 08:37:22PM 1 point [-]

They're still citations to his body of work, which is all on pretty much the same topic. SEP is good, but it is just an encyclopedia, after all, and Being No One is a very challenging book (I still haven't read it because it's too hard for me). A general citation search would be more useful; I see 647 citations to it in Google Scholar. (I don't know of a citation engine specializing in philosophy - Philpapers shows a fair bit of activity related to Metzinger but doesn't give me how many philosophy papers cite it, much less philosophy of mind.)

Comment author: Kawoomba 01 December 2012 11:20:19PM 2 points [-]

Being No One is a very challenging book

This lecture he gives about the very same topic is much more accessible.

Comment author: fubarobfusco 02 December 2012 02:15:07AM 1 point [-]

Thank you for posting this.

Comment author: NancyLebovitz 02 December 2012 05:34:35AM 0 points [-]

He suggests that the reason we don't have awareness that our sensory experiences are created by a detailed internal process is that it wasn't evolutionarily worthwhile. However, we're currently in an environment where at least our emotional experiences are more and more likely to be hacked by other people who aren't necessarily on our side, which means that self-awareness is becoming more valuable. At this point, the evolution is more likely to be memetic (parents teaching their children to notice what's going on in advertisements) than physiological, though it's also plausible that some people find it innately easier to track what is going on with their emotions than others.

Has anyone read The Book of Not Knowing by Peter Ralston? I've only read about half of it, but it looks like it's heading into the same territory.

Comment author: Wei_Dai 01 December 2012 08:57:38PM 1 point [-]

I didn't even try to read the book, but went through a bunch of review papers (which of course all try to summarize the main ideas of the book) and feel like I got a general understanding that way. I wanted to see how his ideas compare to his peers (so as to judge how much of an advance they are upon the state of the art), and that's when I found the SEP lacking any discussion of them (which still seems fairly damning to me).

Comment author: BerryPick6 01 December 2012 08:50:34PM 0 points [-]

Being No One is a very challenging book (I still haven't read it because it's too hard for me).

Apparently, his follow-up book "The Ego Tunnel" deals with mostly the same stuff and is not as impenetrable. Have you read it? I'd be interested in hearing your thoughts on it.

Comment author: gwern 01 December 2012 09:16:10PM 0 points [-]

Ironically, my problem with that book was that it was too easy and simple.

Comment author: paper-machine 01 December 2012 04:07:56PM *  1 point [-]

No idea why this would be true.

(For example, despite being a reasonably well-known mathematician, there is only one reference to S. S. Abhyankar in the MacTutor history of mathematicians.)

Comment author: BerryPick6 30 November 2012 01:33:42PM 1 point [-]

Nick Bostrom?

Comment author: Wei_Dai 01 December 2012 12:08:25AM *  9 points [-]

I think Nick is actually an example of how rationality isn't that useful for making philosophical progress. I'm a bit reluctant to say this (for obvious social reasons, which I'm judging to be outweighed by the strategic importance of this issue) but his work (PhD thesis) on anthropic reasoning wasn't actually very good. I know that at least one SI Research Associate agrees with my assessment.

ETA: I should qualify this by saying that while his proposed solution wasn't very good (which you can also infer from the fact that nobody ever talks about or builds upon it around here, despite strong interest in the topic), he did come up with arguments/considerations/thought experiments, such as the Presumptuous Philosopher, that we still discuss.

Comment author: BerryPick6 01 December 2012 12:11:13AM 3 points [-]

I'll freely admit that I haven't actually read any of his work, and I was mainly making the comment due to the generally fanboyish response he gets 'round these parts. I found your comment very interesting, and may investigate further.

Comment author: cousin_it 02 December 2012 04:04:26AM 1 point [-]

I know that at least one SI Research Associate agrees with my assessment.

Just in case this refers to me: I agree with your assessment of Bostrom's thesis, but I'm no longer a SI research associate :-)

Comment author: Peterdjones 30 November 2012 01:42:44PM 0 points [-]

As an example of what?

Comment author: BerryPick6 30 November 2012 01:48:14PM 2 points [-]

A straight-up philosopher who is useful to FAI (more X-Risk, but it's probably still applicable.) Obviously, your examples are the ones that immediately occurred to me, so I didn't want to repeat them.

Comment author: Peterdjones 30 November 2012 01:11:15PM 0 points [-]

For example, Turing, Church, and others' work on understanding the nature of computation,

Why does that count as phil?

von Neumann and Morgenstern's decision theory,

or that?

and algorithmic information theory / Solomonoff Induction.

or that?

Tegmark's Ultimate Ensemble,

OK. That resembles modal realism, which is definitely philosophy, although it is routinely condemned here as bad philosophy.

Comment author: IlyaShpitser 30 November 2012 08:39:13PM *  4 points [-]

Look, everything counts as phil: (http://en.wikipedia.org/wiki/Natural_philosophy). Philosophy gets credit for launching science in the 19th century.

Philosophers were the first to invent the AI effect, apparently (http://en.wikipedia.org/wiki/AI_effect).

If you want to look at interesting advances in philosophy, read the stuff by the CMU causality gang (Spirtes/Scheines/Glymour, philosophy department, also Kelly). Of course you will probably say that is not really philosophy but theoretical statistics or something. Pearl's stuff can be considered philosophy too (certainly his stuff on actual cause is cited a lot in phil papers).

Comment author: Peterdjones 30 November 2012 11:13:27PM 0 points [-]

Look, everything counts as phil: Old science may also have counted as phil. in the days when they weren't distinct. However WD's examples were of contemporary developments that seem to be considered not-phil by contemporary philosophers.

certainly his stuff on actual cause is cited a lot in phil papers

Science in general is quoted quite a lot. But there is a difference between phils. discussing phil. and phils. discussing non-phil. as something that can be philosophised about, if only in tone and presentation.

Comment author: IlyaShpitser 01 December 2012 07:41:53AM 2 points [-]

Your quoting is confusing.

Comment author: Wei_Dai 30 November 2012 04:33:35PM 2 points [-]

Why does that count as phil?

Perhaps a more relevant question, in the context of the OP, is whether those problems are representative of the types of foundational (as opposed to engineering, logistical, strategic, etc.) problems that need to be solved in order to build an FAI.

But we could talk about "philosophy" as well, since, to be honest, I'm not sure why some topics count as "philosophy" and others don't. It seems to me that my list of advances do fall under Wikipedia's description of philosophy as "the study of general and fundamental problems, such as those connected with reality, existence, knowledge, values, reason, mind, and language." Do you disagree, or have an alternative definition?

Comment author: RichardKennaway 30 November 2012 05:04:49PM *  4 points [-]

It seems to me that my list of advances do fall under Wikipedia's description of philosophy

I agree. But there are also some systematic differences between what the people you cited did and what (other) philosophers do.

  • The former didn't merely study fundamental problems, they solved them.

  • They did stuff that now exists and can be studied independently of the original works. You don't have to read a single word of Turing to understand Turing machines and their importance. You need not study Solomonoff to understand Solomonoff induction.

  • Their works are generally not shelved with philosophy in libraries. Are they studied in undergraduate courses on philosophy?

Comment author: novalis 30 November 2012 06:23:21PM 3 points [-]

Turing's work on AI (and Searle's response) was discussed in my undergrad intro phil course. But that is not quite the same thing.

Comment author: BerryPick6 30 November 2012 05:07:42PM 1 point [-]

Their works are generally not shelved with philosophy in libraries. Are they studied in undergraduate courses on philosophy?

Not in my undergraduate program, at least.

Comment author: DaFranker 30 November 2012 04:52:36PM *  3 points [-]

I think the criticism is indeed pointed towards the scientific "field" of Philosophy, AKA people working in Philosophy Departments or similar.

I doubt many here are targeting the activity of philosophy, nor the people who would identify as "philosophers", but rather specifically towards Philosophy academics with a specialization in Philosophy, who work in a Philosophy Department and produce Philosophy papers to be published in a Journal of Philosophical Writings (and possibly give the occasional Philosophy class or seminar, depending on the local supply of TAs).

IME, a large fraction of real, practicing philosophers are actively publishing papers on arXiv or equivalent.

Comment author: Peterdjones 30 November 2012 05:17:27PM 2 points [-]

I think the criticism is indeed pointed towards the scientific "field" of Philosophy

Did you mean academic field?

I doubt many here are targeting the activity of philosophy, nor the people who would identify as "philosophers", but rather specifically towards Philosophy academics with a specialization in Philosophy, who work in a Philosophy Department and produce Philosophy papers to be published in a Journal of Philosophical Writings (and possibly give the occasional Philosophy class or seminar, depending on the local supply of TAs).

You mean professional phil. bad, amateur phil. good. Or not so much amateur phil. as the sort of sciencey-philly cross-disciplinary stuff done by EY and Robin and Bostrom and Tegmark. Maybe. But actually some of it is quite bad for reasons which are evident if you know phil.

Comment author: DaFranker 30 November 2012 05:30:18PM *  1 point [-]

Did you mean academic field?

Yes, my bad.

You mean professional phi. bad, amateur phil good.

A good professional study of philosophy itself is to me indistinguishable from someone doing metaresearch, i.e. figuring out how to make the standards of the scientific method even better and the techniques of all scientists more efficient. IME, this is not what the majority of academics working in Philosophy Departments are doing.

OTOH, good applied philosophy, i.e. the sort of stuff you do once you've studied the result of the above metaresearch, is basically just doing science. In other words, doing research in any field that is not about how to do research.

So yes, in a sense, most academics categorized as "professional phil" are less good than most academics categorized as "amateur phil" who mainly work in other disciplines. The latter are also almost exclusively "sciencey-philly cross-disciplinary".

I'm guessing we both agree that non-academic-nor-scientist amateur philosophers are less likely to produce meaningful research than any of the above, and yet that is pretty much the stereotype that most people (in the general North American population) assign to "philosophers". Then again, the exclusion of "scientists" from that category feels like begging the question.

Comment author: Peterdjones 30 November 2012 05:47:34PM *  0 points [-]

So yes, in a sense, most academics categorized as "professional phil" are less good than most academics categorized as "amateur phil" who mainly work in other disciplines

Is the "so" meant to imply that that follows from the foregoing? I don't see how it does.

Comment author: Peterdjones 30 November 2012 05:08:59PM *  0 points [-]

I was responding to the sentence: "If you look at the most interesting recent advances in philosophy, it seems that most of them were made by non-philosophers."

..which does not mention "advances in philosophy useful to FAI".

Do you disagree, or have a alternative definition?

None of them have been much discussed by phils. (except possibly Bostrom, the Diane Hsieh of LessWrongism).

Comment author: Wei_Dai 30 November 2012 05:37:15PM 2 points [-]

None of them have been much discussed by phils.

Theory of computation is obviously used by the computational theory of mind as well as philosophy of language and of mathematics and logic. Decision theorists are commonly employed by philosophy departments and all current decision theories descend from vNM's. AIT actually doesn't seem to be much discussed by philosophers (a search found only a couple of references in the SEP, and even the entry on "simplicity" only gives a brief mention of it) which is a bit surprising. (Oh, there's a more substantial discussion in the entry for "information".)

Comment author: Peterdjones 30 November 2012 07:11:36PM 0 points [-]

Theory of computation is obviously used by the computational theory of mind

Surely that is the other way round. Early computer theorists just wanted to solve mathematical problems mechanically.

Theory of computation is obviously used by the computational theory of mind

What is your point? His day job was physicist.

Comment author: kip1981 29 November 2012 08:42:43PM 13 points [-]

Although I'm a lawyer, I've developed my own pet meta-approach to philosophy. I call it the "Cognitive Biases Plus Semantic Ambiguity" approach (CB+SA). Both prongs (CB and SA) help explain the amazing lack of progress in philosophy.

First, cognitive biases - or (roughly speaking) cognitive illusions - are persistent by nature. That cognitive illusions (like visual illusions) are persistent, and that philosophy problems are persistent, is not a coincidence. Philosophy problems cluster around those that involve cognitive illusions (positive outcome bias, the just-world phenomenon, the Lake Wobegon effect, the fundamental attribution error, etc.). I see this in my favorite topic area (the free will problem), but I believe that it likely applies broadly across philosophy.

Second, semantic ambiguity creates persistent problems if not identified and fixed. The solutions to several of Hilbert's 23 problems are "no answer - problem statement is not well defined." That approach is unsexy, and emotionally dissatisfying (all of this work, yet we get no answer!). Perhaps for that reason, philosophers (but not mathematicians) seem completely incapable of doing it. On only the rarest occasions do philosophers suggest that some term ("good," "morality," "rationalism," "free will," "soul," "knowledge") might not possess a definition that is precise enough to do the work that we ask of it. In fact, as with CB, philosophy problems tend to cluster around problems that persist because of SA. (If the problems didn't persist, they might be considered trivial or boring.)

Comment author: Peterdjones 30 November 2012 12:24:01AM 6 points [-]

On only the rarest occasions do philosophers suggest that some term ("good", "morality," "rationalism", "free will", "soul", "knowledge") might not possess a definition that is precise enough to do the work that we ask of it.

And they never expend any effort in establishing clear meanings for such terms. Oh wait....they expend far too much effort arguing about definitions...no, too little...no, too much.

OK: the problem with philosophers is that they are contradictory.

Comment author: khafra 30 November 2012 06:14:52PM 0 points [-]

And they never expend any effort in establishing clear meanings for such terms. Oh wait....they expend far too much effort arguing about definitions

If philosophers were strongly biased toward climbing the ladder of abstraction instead of descending it, they could expend a great deal of effort, flailing uselessly about definitions.

Comment author: Bruno_Coelho 02 December 2012 04:09:20PM -1 points [-]

What sort of people do you have in mind? The generalization apparently considers academic philosophers in their current state, but not past people. Sure, someone without a strong science background will miss the point, focusing on the words. But arguing "by definitions" is not something done exclusively by philosophers.

Comment author: BerryPick6 30 November 2012 06:20:16PM *  0 points [-]

On only the rarest occasions do philosophers suggest that some term ("good", "morality," "rationalism", "free will", "soul", "knowledge") might not possess a definition that is precise enough to do the work that we ask of it.

At least when it comes to the concepts "Good," "Morality" and "Free Will," I'm familiar with some fairly prominent suggestions that they are in dire need of redefinition and other attempts to narrow or eliminate discussions about such loose ideas altogether.

Comment author: bryjnar 29 November 2012 11:51:18PM 4 points [-]

Sometimes, they are even divided on psychological questions that psychologists have already answered: Philosophers are split evenly on the question of whether it's possible to make a moral judgment without being motivated to abide by that judgment, even though we already know that this is possible for some people with damage to their brain's reward system, for example many Parkinson's patients, and patients with damage to the ventromedial frontal cortex...

Huh?

Examples like that are the bread and butter of discussions about motivational internalism: precisely the argument that tends to get made is that because it's not motivating it's not a real moral judgement. You may think that's stupid in other ways, but it's not that philosophers are ignorant of what psychology tells us, some of them just disagree about how to interpret it.

Comment author: nigerweiss 29 November 2012 08:32:12PM 15 points [-]

Another extremely serious problem is that there is next to no particularly effective effort in philosophical academia to disregard confused questions, and to move away from naive linguistic realism. The number of philosophical questions of the form 'is x y' that can be resolved by 'depends on your definition of x and y' is deeply depressing. There does not seem to be a strong understanding of how important it is to remember that not all words correspond to natural, or even (in some cases) meaningful categories.

Comment author: bryjnar 29 November 2012 11:18:12PM 5 points [-]

I strongly disagree. Almost every question in philosophy that I've ever studied has some camp of philosophers who reject the question as ill-posed, or want to dissolve it, or some such. Wittgensteinians sometimes take that attitude towards every question. Such philosophers are often not discussed as much as those who propose "big answers", but there's no question that they exist and that any philosopher working in the field is well aware of them.

Also, there's a selection effect: people who think question X isn't a proper question tend not to spend their careers publishing on question X!

Comment author: siodine 29 November 2012 11:55:06PM 1 point [-]

I agree, but the problems remain and the arguments flourish.

Comment author: nigerweiss 30 November 2012 12:47:02AM -1 points [-]

Sure, there are absolutely philosophers who aren't talking about absolute nonsense. But as an industry, philosophy has a miserably bad signal-to-noise ratio.

Comment author: bryjnar 30 November 2012 02:52:19AM 1 point [-]

I'd mostly agree, but the particular criticism that you levelled isn't very well-founded. Questioning the way we use language and the way that philosophical questions are put is not the unheard of idea that you portray it as. In fact, it's pretty standard. It's just not necessarily the stuff that people choose to put into most "Intro to the Philosophy of X" textbooks, since there's usually more discussion to be had if the question is well-posed!

Comment author: RobbBB 29 November 2012 11:01:24PM *  5 points [-]

Not only are pretty much all contemporary philosophers attentive to this fact, but there's an active philosophical literature about the naturalness of some terms as opposed to others, and about how one can reasonably distinguish natural kinds from non-natural ones. Particularly interesting is some of the recent work in metaphilosophy and in particular metametaphysics, which examines whether (or when) ontological disputes are substantive, what is the function of philosophical disputes, when one can be justified in believing a metaphysical doctrine, etc. (Note: This field is not merely awesome because it has a hilarious name.)

Don't confuse disagreements about which natural kinds exist, and hence about which disputes are substantive, with disagreements about whether there's a distinction between substantive and non-substantive disputes at all.

Comment author: Mitchell_Porter 29 November 2012 09:01:39PM *  10 points [-]

Please list as many examples of these questions as you can muster. (I mean questions, seriously discussed by philosophers, which you claim can be resolved in this way.)

Comment author: nigerweiss 29 November 2012 09:48:29PM 16 points [-]

Any discussion of what art is. Any discussion of whether or not the universe is real. Any conversation about whether machines can truly be intelligent. More specifically, the ship of Theseus thought experiment and the related sorites paradox are entirely definitional, as is Edmund Gettier's problem of knowledge. The (appallingly bad, by the way) swamp man argument by Donald Davidson hinges entirely on the belief that words actually refer to things. Shades of this pop up in Searle's Chinese room and other bad thought experiments.

I could go on, but that would require me to actually go out and start reading philosophy papers, and goodness knows I hate that.

Comment author: Bugmaster 30 November 2012 04:08:27AM 6 points [-]

Your examples include:

(1) Any discussion of what art is.
(2) Any discussion of whether or not the universe is real.
(3) Any conversation about whether machines can truly be intelligent.

I agree that the answers to these questions depend on definitions, but then, so does the answer to the question, "how long is this stick?". Depending on your definition, the answer may be "this many meters long", "depends on which reference frame you're using", "the concept of a fixed length makes no sense at this scale and temperature", or "it's not a stick, it's a cube". That doesn't mean that the question is inherently confused, only that you and your interlocutor have a communication problem.

That said, I believe that questions (1) and (3) are, in fact, questions about humans. They can be rephrased as "what causes humans to interpret an object or a performance as art", and "what kind of things do humans consider to be intelligent". The answers to these questions would be complex, involving multi-modal distributions with fuzzy boundaries, etc., but that still does not necessarily imply that the questions are confused.

Which is not to say that confused questions don't exist, or that modern philosophical academia isn't riddled with them; all I'm saying is that your examples are not convincing.

Comment author: JackV 30 November 2012 11:37:31AM 7 points [-]

I agree that the answers to these questions depend on definitions

I think he meant that those questions depend ONLY on definitions.

As in, there's a lot of interesting real-world knowledge that goes into getting a submarine to propel itself, but now that we have that knowledge, people asking "can a submarine swim" are only interested in deciding "should the English word 'swim' apply to the motion of a submarine, which is somewhat like the motion of swimming, but not entirely". That example sounds stupid, but people waste a lot of time on the similar case of "think" instead of "swim".

Comment author: Bugmaster 30 November 2012 04:59:51PM 0 points [-]

Ok, that's a good point; inserting the word "only" in there does make a huge difference.

I also agree with BerryPick6 on this sub-thread.

Comment author: BerryPick6 30 November 2012 12:03:33PM 4 points [-]

"What causes humans to interpret an object or a performance as art" and "What is art?" may be seen as two entirely different questions to certain philosophers. I'm skeptical that people who frequent this site would make such a distinction, but we aren't talking about LWers here.

Comment author: Peterdjones 30 November 2012 12:19:22PM *  0 points [-]

People who frequent this site already do make parallel distinctions about more LW-friendly topics. For instance, the point of the Art of Rationality is that there is a right way to do thinking and persuading, which is not to say that Reason "just is" whatever happens to persuade or convince people, since people can be persuaded by bad arguments. If that can be made to work, then "it's hanging in a gallery, but it isn't art" can be made to work.

ETA:

That said, I believe that questions (1) and (3) are, in fact, questions about humans.

Rationality is about humans, in a sense, too. The moral is that being "about humans" doesn't imply that the search for norms, real meanings, or genuine/pseudo distinctions is fruitless.

Comment author: Bugmaster 30 November 2012 04:57:59PM 1 point [-]

Agreed, but my point was that questions about humans are questions about the Universe (since humans are part of it), and therefore they can be answerable and meaningful. Thus, you could indeed come up with an answer that sounds something like, "it's hanging in a gallery, but our model predicts that it's only 12.5% art".

But I agree with BerryPick6 when he says that not all philosophers make that distinction.

Comment author: nigerweiss 30 November 2012 08:46:38AM *  2 points [-]

I agree that the answers to these questions depend on definitions, but then, so does the answer to the question, "how long is this stick?"

There's a key distinction that I feel you may be glossing over here. In the case of the stick question, there is an extremely high probability that you and the person you're talking to, though you may not be using exactly the same definitions, are using definitions that are closely enough entangled with observable features of the world to be broadly isomorphic.

In other words, there is a good chance that, without either of you adjusting your definitions, you and the neurotypical human you're talking to are likely to be able to come up with some answer that both of you will find satisfying, and will allow you to meaningfully predict future experiences.

With the three examples I raised, this isn't the case. There are a host of different definitions, which are not closely entangled with simple, observable features of the world. As such, even if you and the person you're talking to have similar life experiences, there is no guarantee that you will come to the same conclusions, because your definitions are likely to be personal, and the outcome of the question depends heavily upon those definitions.

Furthermore, in the three cases I mentioned, unlike the stick, if you hold a given position, it's not at all clear what evidence could persuade you to change your mind, for many possible (and common!) positions. This is a telltale sign of a confused question.

Comment author: Bugmaster 30 November 2012 05:03:58PM 0 points [-]

There are a host of different definitions, which are not closely entangled with simple, observable features of the world.

I believe that at least two of those definitions could be something like, "what kinds of humans would consider this art?", or "will machines ever pass the Turing test". These questions are about human actions which express human thoughts, and are indeed observable features of the world. I do agree that there are many other, more personal definitions that are of little use.

Comment author: RobbBB 30 November 2012 01:29:28AM 2 points [-]

I think we need a clearer idea of what we mean by a 'bad' thought experiment. Sometimes thought experiments are good precisely because they make us recognize (sometimes deliberately) that one of the concepts we imported into the experiment is unworkable. Searle's Chinese room is a good example of this, since it (and a class of similar thought experiments) helps show that our intuitive conceptions of the mental are, on a physicalist account, defective in a variety of ways. The right response is to analyze and revise the problem concepts. The right response is not to simply pretend that the thought experiment was never proposed; the results of thought experiments are data, even if they're only data about our own imaginative faculties.

Comment author: siodine 29 November 2012 10:08:14PM *  2 points [-]

My first thought was "every philosophical thought experiment ever", and to my surprise Wikipedia says there aren't that many thought experiments in philosophy (although they are huge topics of discussion). I think the violinist experiment is uniquely bad. The floating man experiment is another good example, but very old.

Comment author: RobbBB 30 November 2012 01:24:42AM 2 points [-]

What's your objection to the violinist thought experiment? If you're a utilitarian, perhaps you don't think the waters here are very deep. It's certainly a useful way of deflating and short-circuiting certain other intuitions that block scientific and medicinal progress in much of the developed world, though.

Comment author: siodine 30 November 2012 04:07:12PM *  3 points [-]

From SEP:

Judith Thomson provided one of the most striking and effective thought experiments in the moral realm (see Thomson, 1971). Her example is aimed at a popular anti-abortion argument that goes something like this: The foetus is an innocent person with a right to life. Abortion results in the death of a foetus. Therefore, abortion is morally wrong. In her thought experiment we are asked to imagine a famous violinist falling into a coma. The society of music lovers determines from medical records that you and you alone can save the violinist's life by being hooked up to him for nine months. The music lovers break into your home while you are asleep and hook the unconscious (and unknowing, hence innocent) violinist to you. You may want to unhook him, but you are then faced with this argument put forward by the music lovers: The violinist is an innocent person with a right to life. Unhooking him will result in his death. Therefore, unhooking him is morally wrong.

However, the argument, even though it has the same structure as the anti-abortion argument, does not seem convincing in this case. You would be very generous to remain attached and in bed for nine months, but you are not morally obliged to do so.

The thought experiment depends on your intuitions or your definition of moral obligations and wrongness, but the experiment doesn't make these distinctions. It just pretends that everyone has the same intuition, and as such the experiment should remain analogous regardless (probably because Judith didn't think anyone else could have different intuitions), and so then you have all these other philosophers and people arguing about these minutiae and adding on further qualifications and modifications to the point where they may as well be talking about actual abortion.

Comment author: RobbBB 30 November 2012 08:10:48PM 7 points [-]

The thought experiment functions as an informal reductio ad absurdum of the argument 'Fetuses are people. Therefore abortion is immoral.' or 'Fetuses are conscious. Therefore abortion is immoral.' That's all it's doing. If you didn't find the arguments compelling in the first place, then the reductio won't be relevant to you. Likewise, if you think the whole moral framework underlying these anti-abortion arguments is suspect, then you may want to fight things out at the fundaments rather than getting into nitty-gritty details like this. The significance of the violinist thought experiment is that you don't need to question the anti-abortionist's premises in order to undermine the most common anti-abortion arguments; they yield consequences all on their own that most anti-abortionists would find unacceptable.

That is the dialectical significance of the above argument. It has nothing to do with assuming that everyone found the original anti-abortion argument plausible. An initially implausible argument that's sufficiently popular may still be worth analyzing and refuting.

Comment author: Mitchell_Porter 30 November 2012 12:26:58AM 2 points [-]

I am unimpressed by your examples.

Can we first agree that some questions are not dissolved by observing that meanings are conventional? If I run up to you and say "My house is on fire, what should I do?", and you tell me "The answer depends, in part, on what you mean by 'house' and 'fire'...", that will not save my possessions from destruction.

If I take your preceding comment at face value, then you are telling me

  • there is nothing to think about in pondering the nature of art, it's just a matter of definition
  • there is nothing to think about regarding whether the universe exists, it's just a matter of definition
  • there's no question of whether artificial intelligence is the same thing as natural intelligence, it's just a matter of definition

and that there's no "house-on-fire" real issue lurking anywhere behind these topics. Is that really what you think?

Comment author: nigerweiss 30 November 2012 12:45:16AM *  2 points [-]

Well, I'm sorry. Please fill out a conversational complaint form and put it in the box, and an HR representative will mail you a more detailed survey in six to eight weeks.

I agree entirely that meaningful questions exist, and made no claim to the contrary. I do not believe, however, that as an institution, modern philosophy is particularly good at identifying those questions.

In response to your questions,

  • Yes, absolutely.

  • Yes, mostly. There are different kinds of existence, but the answer you get out will depend entirely on your definitions.

  • Yes, mostly. There are different kinds of possible artificial intelligence, but the question of whether machines can -truly- be intelligent depends exclusively upon your definition of intelligence.

As a general rule, if you can't imagine any piece of experimental evidence settling a question, it's probably a definitional one.

Comment author: Mitchell_Porter 30 November 2012 03:44:51AM 2 points [-]

The true nature of art, existence, and intelligence are all substantial topics - highly substantial! In each case, like the physical house-on-fire, there is an object of inquiry independent of the name we give it.

With respect to art - think of the analogous question concerning science. Would you be so quick to claim that whether something is science is purely a matter of definition?

With respect to existence - whether the universe is real - we can distinguish possibilities such as: there really is a universe containing billions of light-years of galaxies full of stars; there is a brain in a vat being fed illusory stimuli, with the real world actually being quite unlike the world described by known physics and astronomy; and even solipsistic metaphysical idealism - there is no matter at all, just a perceiving consciousness having experiences.

If I ponder whether the universe is real, I am trying to choose between these and other options. Since I know that the universe appears to be there, I also know that any viable scenario must contain "apparent universe" as an entity. To insist that the reality of the universe is just a matter of definition, you must say that "apparent universe" in all its forms is potentially worthy of the name "actual universe". That's certainly not true to what I would mean by "real". If I ask whether the Andromeda galaxy is real, I mean whether there really is a vast tract of space populated with trillions of stars, etc. A data structure providing a small part of the cosmic backdrop in a simulated experience would not count.

With respect to intelligence - I think the root of the problem here is that you think you already know what intelligence in humans is - that it is fundamentally just computation - and that the boundary between smart computation and dumb computation is obviously arbitrary. It's like thinking of a cloud as "water vapor". Water vapor can congregate on a continuum of scales from invisibly small to kilometers in size, and a cloud is just a fuzzy naive category employed by humans for the water vapor they can see in the sky.

Intelligence, so the argument goes, is similarly a fuzzy naive category employed by humans for the computation they can see in human behavior. There would be some truth to that analysis of the concept... except that, in the longer run, we may find ourselves wanting to say that certain highly specific refinements of the original concept are the only reasonable ways of making it precise. Intelligence implies something like sophisticated insight; so it can't apply to anything too simple (like a thermostat), and it can't apply to algorithms that work through brute force.

And then there is the whole question of consciousness and its role in human intelligence. We may end up wishing to say that there is a fundamental distinction between conscious intelligence - sophisticated cognition which employs genuine insight, i.e. conscious insight, conscious awareness of salient facts and relations - and unconscious intelligence - where the "insight" is really a matter of computational efficiency. The topic of intelligence is the one where I would come closest to endorsing your semantic relativism, but that's only because in this case, the "independent object of inquiry" appears to include heterogeneous phenomena (e.g. sophisticated conscious cognition, sophisticated unconscious cognition, sophisticated general problem-solving algorithms), and how we end up designating those phenomena once we obtain a mature understanding of their nature, might be somewhat contingent after all.

Comment author: John_Maxwell_IV 30 November 2012 09:07:11AM *  1 point [-]

As a general rule, if you can't imagine any piece of experimental evidence settling a question, it's probably a definitional one.

So what's the difference between philosophy and science then?

Comment author: nigerweiss 30 November 2012 09:25:15PM 0 points [-]

Err... science deals with questions you can settle with evidence? I'm not sure what you're getting at here.

Comment author: John_Maxwell_IV 30 November 2012 09:27:50PM 2 points [-]

How does your use of the label "philosophical" fit in with your uses of the categories "definitional" and "can be settled by experimental evidence"?

Comment author: Rune 30 November 2012 12:52:25AM 9 points [-]

I once met a philosophy professor who was at the time thinking about the problem "Are electrons real?" I asked her what her findings had shown thus far, and she said she thinks they're not real. I then asked her to give me examples of things that are real. She said she doesn't know any examples of such things.

Comment author: Peterdjones 30 November 2012 12:17:16AM 0 points [-]

Please name some contemporary philosophers who are naive linguistic realists.

Comment author: timtyler 30 November 2012 01:41:55AM *  3 points [-]

Luke quoted:

Science is built around the assumption that you're too stupid and self-deceiving to just use [probability theory]. After all, if it was that simple, we wouldn't need a social process of science... [Standard scientific method] doesn't trust your rationality, and it doesn't rely on your ability to use probability theory as the arbiter of truth. It wants you to set up a definitive experiment.

That's a pretty irritatingly-wrong quote. Of course the scientific method is social for reasons other than the stupidity and self-deceiving nature of scientists. For example, the scientists doing the cigarette-company-funded science probably weren't stupid or self-deceiving. Other scientists doubted their science for a different set of reasons.

Comment author: Yvain 01 December 2012 08:13:46PM 2 points [-]

A score of 1.32 isn't radically different from the mean CRT scores found for psychology undergraduates (1.5), financial planners (1.76), Florida Circuit Court judges (1.23), Princeton Undergraduates (1.63), and people who happened to be sitting along the Charles River during a July 4th fireworks display (1.53). It is also noticeably lower than the mean CRT scores found for MIT students (2.18) and for attendees to a LessWrong.com meetup group (2.69).

I found this by far the most interesting part of this (very good) post. I am surprised I had to learn it hidden inside a mostly unrelated essay. I would certainly like to hear more about this test.

Comment author: John_Maxwell_IV 30 November 2012 09:03:04AM *  2 points [-]

Are some philosophical questions questions about reality? If so, what does it take for a question about reality to count as "philosophical" as opposed to "scientific"? Are these just empirical clusters?

And if it's not a fact about reality, what does it mean to get it right?

Comment author: ygert 01 December 2012 07:24:58PM 0 points [-]

I think the point is not to think of questions as philosophical or not, but rather to look at the people trying to solve these questions. This post is talking about how the people called "philosophers" are not effective at solving these problems, and that they should therefore change their approach. In fact, a large part of the Sequences are attempts to solve questions which you might think of as "philosophical" and which have in the past been worked on by philosophers. But what this post says is that the correct way to look at these (or any other) problems is to look at them in a rational way (like EY did in writing the Sequences) and not in the way most people (specifically the class of people known as "philosophers") have tried to solve them in the past.

Comment author: Peterdjones 30 November 2012 12:02:26AM *  2 points [-]

But] philosophy continually leads experts with the highest degree of epistemic virtue, doing the very best they can, to accept a wide array of incompatible doctrines. Therefore, philosophy is an unreliable instrument for finding truth. A person who enters the field is highly unlikely to arrive at true answers to philosophical questions.

Philosophy hasn't been very successful at finding the truth about the kind of questions philosophy typically considers. What's better...at answering those kinds of questions? You can only condemn philosophy for having worse methods than science, based on results, if they are both applied to the same problems.

Comment author: [deleted] 01 December 2012 11:32:02PM 1 point [-]

Sometimes, they are even divided on psychological questions that psychologists have already answered...

I think you've misunderstood the debate: philosophers are arguing in this case over whether or not moral judgements are intrinsically motivating. If they are, then the brain-damaged people you make reference to are (according to moral judgement internalists) not really making moral judgements. They're just mouthing the words.

This is just to say that psychology has answered a certain question, but not the question that philosophers debating this point are concerned about.

Comment author: Manfred 02 December 2012 07:26:37AM *  2 points [-]

This pattern-matches an awful lot to "if a tree falls in a forest..."

Comment author: [deleted] 02 December 2012 04:32:20PM 0 points [-]

Yeah, but at a sufficiently low resolution (such as my description), lots of stuff pattern-matches, so: http://plato.stanford.edu/entries/moral-motivation/#MorJudMot

I'm not saying the philosophical debate is interesting or important (or that it's not), but the claim that psychologists have settled the question relies on an equivocation on 'moral judgement': in the psychological study, giving an answer to a moral question which comports with answers given by healthy people is a sufficient condition on moral judgement. For philosophers, it is neither necessary nor sufficient. Clearly, they are not talking about the same thing.

Comment author: Qiaochu_Yuan 02 December 2012 12:24:52AM *  2 points [-]

How do I know whether anyone is making moral judgments as opposed to mouthing the words?

Comment author: [deleted] 02 December 2012 01:33:20AM 0 points [-]

That sounds like an interesting question! If you'll forgive me answering your question with another, do you think that this is the kind of question psychology can answer, and if so, what kind of evidential result would help answer it?

Comment author: Qiaochu_Yuan 02 December 2012 06:39:27AM 2 points [-]

Well, I was hoping you would answer with at least a definition of what constitutes a moral judgment. A tentative definition might come from the following procedure: ask a wide selection of people to make what would colloquially be referred to as moral judgments and see what parts of their brains light up. If there's a common light-up pattern to basic moral judgments about things like murder, then we might call that neurological event a moral judgment. Part of this light-up pattern might be missing in the brain-damaged people.

Comment author: [deleted] 02 December 2012 04:41:14PM *  1 point [-]

Well, I was hoping you would answer with at least a definition of what constitutes a moral judgment.

But that's the philosophical debate!

As to your definition, notice the following problem: suppose you get a healthy person answering a moral question. Region A and B of their brain lights up. Now you go to the brain damaged person, and in response to the same moral question only region A lights up. You also notice that the healthy person is motivated to act on the moral judgement, while the brain damaged person is not. So you conclude that B has something to do with motivation.

So do you define a moral judgement as 'the lighting up of A and B' or just 'the lighting up of A'? Notice that nothing about the result you've observed seems to answer or even address that question. You can presuppose that it's A, or both A and B, but then you've assumed an answer to the philosophical debate. There's a big difference between assuming an answer, and answering.

Comment author: Qiaochu_Yuan 02 December 2012 09:00:44PM 3 points [-]

Neither. You taboo "moral judgment." From there, as far as I can tell, the question is dissolved.

Comment author: [deleted] 02 December 2012 10:32:10PM *  1 point [-]

Okay, good idea, let's taboo moral judgement. So your definition from the great-grandparent was (I'm paraphrasing) "the activity of the brain in response to what are colloquially referred to as moral judgements." What should we replace 'moral judgement' with in this definition?

I assume it's clear that we can't replace it with 'the activity of the brain...'

(ETA: For the record, if tabooing in this way is your strategy, I think you're with me in rejecting Luke's claim that psychology has settled the externalism vs. internalism question. At the very best, psychology has rejected the question, not solved it. But much more likely, since philosophers probably won't taboo 'moral judgement' the way you have (i.e. in terms of brain states), psychology is simply discussing a different topic.)

Comment author: Qiaochu_Yuan 02 December 2012 10:59:26PM 0 points [-]

"...in response to questions about whether it is right to kill people in various situations, or take things from people in various situations, or more generally to impose one's will on another person in a way that would have had significance in the ancestral environment." (This is based on my own intuition that people process judgments about ancestral-environment-type things like murder differently from the way people process judgments about non-ancestral-environment-type things like copyright law. I could be wrong about this.)

How would a philosopher taboo "moral judgment"?

Comment author: [deleted] 02 December 2012 11:40:10PM 0 points [-]

That's fine, but it doesn't address the problem I described in the great-great-grandparent of this reply. Either you mean the brain activity of a healthy person, or the brain activity common to healthy and brain-damaged people. Even if philosophers intend to be discussing brain processes (which, in almost every case, they do not), then you've assumed an answer, not given one.

But in any case, this way of tabooing 'moral judgement' makes it very clear that the question the psychologist is discussing is not the question the philosopher is discussing.

Comment author: Qiaochu_Yuan 03 December 2012 12:26:23AM 2 points [-]

In that case I don't understand the question the philosopher is discussing. Can you explain it to me without using the phrase "moral judgment"?

Comment author: Decius 10 December 2012 02:10:51AM 1 point [-]

What would evidence of deontology / consequentialism / virtue ethics, empiricism vs. rationalism, or physicalism vs. non-physicalism look like?

Comment author: Vaniver 30 November 2012 06:18:49AM 1 point [-]

(As likelihood ratios get smaller, your priors need to be better and your updates more accurate.)

It seems to me that rationality is more about updating the correct amount, which is primarily calculating the likelihood ratio correctly. Most of the examples of philosophical errors you've discussed come from not calculating that ratio correctly, not from starting out with a bizarre prior.

For example, consider Yvain and the Case of the Visual Imagination:

Upon hearing this, my response was "How the stars was this actually a real debate? Of course we have mental imagery. Anyone who doesn't think we have mental imagery is either such a fanatical Behaviorist that she doubts the evidence of her own senses, or simply insane."

This looks like having the same prior as many other people; the rationality was in actually running the experiment and calculating the likelihood ratio, which was able to overcome the extreme prior. You could say that Galton only considered this because he had a non-extreme prior, and that if people trusted their intuitions less and had more curious agnosticism, their beliefs would converge faster. But it seems to me that the curiosity (i.e. looking for evidence that favors one hypothesis over another) is more important than the agnosticism- the goal is not "I could be wrong" but "I could be wrong if X."

Comment author: Benito 29 November 2012 10:59:32PM 1 point [-]

Just to point out: your 3rd footnote all links to the same page. Enjoyed the post. Perhaps a case study of a big philosophy problem fully dissolved here?

Comment author: lukeprog 30 November 2012 06:45:12PM 0 points [-]

Fixed, thanks.

Comment author: shminux 29 November 2012 09:22:36PM -2 points [-]

So, your account basically implies that philosophy is less reliable than astrology, but is not as useful? Then why even bother talking to the philosophical types, to begin with?

Comment author: Peterdjones 30 November 2012 12:36:54AM 0 points [-]

Because no one has better approaches to those questions.

Comment author: crazy88 04 December 2012 12:20:00AM 0 points [-]

Sometimes, they are even divided on psychological questions that psychologists have already answered: Philosophers are split evenly on the question of whether it's possible to make a moral judgment without being motivated to abide by that judgment, even though we already know that this is possible for some people with damage to their brain's reward system, for example many Parkinson's patients, and patients with damage to the ventromedial frontal cortex (Schroeder et al. 2012).1

This isn't an area I know very much about, but my understanding is that very few philosophers actually hold to a version of internalism which is disproven by these sorts of cases (even fewer than you might expect, because the people who do hold such a view tend to get commented on more often: "look how empirical evidence disproves this philosophical view" is a popular paper-writing strategy, so people hunt for a target and then attack it, even if that target is not a good representation of the general perspective). As I said, not my area of expertise, so I'm happy to be proven wrong on this.

I know you mention this sort of issue in the footnote, but I think that still runs the risk of being misleading and making it seem that philosophers en masse hold a view that they (AFAIK) don't. This is particularly likely to happen because you cite a survey of philosophers in the same breath.

In general, I find that academic philosophy is far less bad than people on LW seem to think it is, in a large part because of a tendency on LW to focus on fringe views instead of mainstream views amongst philosophers and to misinterpret the meaning of words used by philosophers in a technical manner.

Comment author: aaronsw 01 December 2012 02:12:32PM 0 points [-]

Typo: But or many philosophical problems

Comment author: DanArmak 30 November 2012 06:37:07PM 0 points [-]

they're split 25-24-18 on deontology / consequentialism / virtue ethics,

Does that mean they're all moral realists? Otherwise it's like being split on the "true" human skin color.

Comment author: BerryPick6 30 November 2012 06:39:41PM 2 points [-]

There's a separate question for Moral Realism vs. Moral Anti-Realism. It's an often accepted position among philosophers that one can hold Normative Ethical positions totally removed from their Meta-Ethics, which may account for some of the confusion.

Comment author: CCC 30 November 2012 09:28:35AM 0 points [-]

According to the largest-ever survey of philosophers, they're split 25-24-18 on deontology / consequentialism / virtue ethics,

???

I am confused. I lean towards virtue ethics, and I can certainly see the appeal of consequentialism; but as I understand it, deontology is simply "follow the rules", right?

I fail to see the appeal of that as a basis for ethics. (As a basis for avoiding confrontation, yes, but not as a basis for deciding what is right or wrong). It doesn't seem to stand up well on inspection (who makes the rules? Surely they can't be decided deontologically?)

So... what am I missing? Why is deontology more favoured than either of the other two options?

Comment author: Peterdjones 30 November 2012 09:53:01AM *  5 points [-]

Deontology doesn't mean "follow any rules" or "follow given rules" or "be law abiding". A deontologist can reject purported moral rules, just as a virtue theorist does not have to accept that copulating with as many women as possible is "manly virtue", just as a value theorist does not have to value blind patriotism. Etc.

ETA:

Surely they can't be decided deontologically?

Meta-ethical systems usually don't supply their own methodology. Deontologists usually work out rules based on some specific deontological meta-rule or "maxim", such as "follow only that rule one would wish to be universal law". Deontologies may vary according to the selection of maxim.

Comment author: BerryPick6 30 November 2012 10:55:46AM *  3 points [-]

Further, many philosophers think that Meta-Ethics and Normative Ethics can have sort of a "hard barrier" between them, so that one's meta-ethical view may have no impact at all upon one's acceptance of Deontology or Deontological systems.

EDIT: For the record, I think this is pretty ridiculous, but it's worth noting that people believe it.

Comment author: CCC 02 December 2012 08:40:13AM 0 points [-]

Meta-ethical systems usually don't supply their own methodology. Deontologists usually work out rules based on some specific deontological meta-rule or "maxim", such as "follow only that rule one would wish to be universal law". Deontologies may vary according to the selection of maxim.

Ah, thank you. This was the point that I was missing; that the choice of maxim to follow may be via some non-deontological method.

Now it makes sense. Many thanks.

Comment author: roland 30 November 2012 12:27:51AM -1 points [-]

sophistication effect

The name of this bias is Bias blind spot.

Comment author: lukeprog 30 November 2012 04:02:29AM 3 points [-]

That's part of it. The sophistication effect specifically calls out the fact that due to the bias blind spot, sophisticated arguers have more ammunition with which to avoid noticing their own biases, and to see biases in others.