A minor (but important) nitpick:
[Standard scientific method] doesn't trust your rationality, and it doesn't rely on your ability to use probability theory as the arbiter of truth. It wants you to set up a definitive experiment.
Science sets up experiments not just because it does not trust you, but because even if you were a perfect Bayesian, you could not determine cause-and-effect relationships just from using Bayes' theorem a lot.
Right! Besides just Bayes's Theorem, you'd also need Occam's Razor as a simplicity prior over causal structures. And, to drive the probability of a causal structure high enough, confidence that you'd observed in sufficient detail to drive down the probability of extra confounding or intervening variables.
Since the latter part is sometimes difficult, though not theoretically impossible, to achieve in fields like medicine, a randomized experiment, in which you trust that your random numbers will probably satisfy the Markov condition relative to other background variables, can more quickly give you confidence about the directions of some causal arrows when the combination of effect size and sample size is large enough. Naturally, all of this is a mere special case of Bayesian reasoning on possible causal structures where (1) you start out very confident that some random numbers are conditionally independent of all their non-descendants in the graph, and (2) you start out very confident that your randomized experimental procedure causally connects to a single descendant node in that graph (the independent variable).
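(Spelling out that special case, as a sketch in notation that does not appear in the comment above: with a simplicity prior over causal graphs, ordinary conditioning gives

P(G \mid D) \;\propto\; P(D \mid G)\, P(G), \qquad P(G) \;\propto\; 2^{-\ell(G)},

where \ell(G) is the description length of the graph G. Assumptions (1) and (2) then act as hard constraints on the support of the prior: only graphs in which the randomizer node has no parents and exactly one child, the independent variable, receive nonzero P(G).)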
(a) You don't need to observe confounders to learn structure from data. In fact, sometimes you don't need any standard conditional independence at all. (Luke gave me the impression SI wasn't very interested in that point -- maybe it should be).
(b) Occam's razor / faithfulness gives you enough to learn the structure of statistical models, not causal ones. You need additional assumptions to equate the statistical models you learn with causal models. Bayesian networks are not causal models. Causality is not about conditional independence; it is about counterfactual invariance, that is, causality expresses what changes or stays the same after a hypothetical 'wiggle.'
There is no guarantee, even if Occam's razor and faithfulness hold, that the graph you obtain is such that if I wiggle a parent, the child will change. To verify your causal assumptions, you have to run an experiment, or no scientist will believe your graph is causal. This is what real causal discovery papers do, for example:
http://www.sciencemag.org/content/308/5721/523.abstract
Here they learned a protein signaling network, but then implemented an experiment where they changed the protein level of a paren...
This sounds like we're talking past each other somehow. Your point (a) is not clear to me - I was saying that to learn a sufficiently high-probability causal model from non-intervention data, you need to have observed the data in sufficient detail to rule out confounders (except at some low probability) (via simplicity priors, which otherwise can't drive down the probability of an untestable invisible confounder by all that far). This can certainly be done in principle, e.g. if you put the system under a microscope with a higher resolution than the system, and verified there were only X kinds of stuff in it and no others.
Your point (b) sounds just plain wrong to me. If you have a simplicity prior over causal models, and you can derive testable probable predictions from causal models, then you can do Bayesian updating and get a posterior over causal models. Substituting the word "flammable fizzbins" for "causal models" in the preceding sentence will produce another true sentence. I think you mean something different by "Bayesian" and "Occam's Razor" than I do.
By (a) I mean that you can sometimes get the true graph exactly even without having to observe confounders. Actually this was sort of known already (see the FCI algorithm, or even the IC* algorithm in Pearl's book), but we can do a lot better than that. For example, if we have the true graph:
a -> b -> c -> d, with a <- u1 -> c and a <- u2 -> d, where we do not observe u1 and u2, and u1, u2 are very complicated, then we can figure out the true graph exactly by independence-type techniques without having to observe u1 and u2. Note: the marginal distribution p(a,b,c,d) that came from this graph has no conditional independences at all (checkable by d-separation on a,b,c,d), so typical techniques fail.
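To make the "no conditional independences" claim concrete, here is a minimal, self-contained check (plain Python, no causal-inference library; the graph encoding and helper names are mine, not anything from the thread). It enumerates every statement of the form x independent of y given z over the observed variables a, b, c, d and tests it with the standard moral-ancestral-graph criterion for d-separation; the resulting list of independences comes out empty.

from itertools import combinations

# Directed edges of the full graph, including the latent confounders u1, u2.
EDGES = [("a", "b"), ("b", "c"), ("c", "d"),
         ("u1", "a"), ("u1", "c"),
         ("u2", "a"), ("u2", "d")]

def parents(v):
    return {p for p, c in EDGES if c == v}

def ancestors(vs):
    # All ancestors of the set vs, including vs itself.
    result = set(vs)
    changed = True
    while changed:
        changed = False
        for v in list(result):
            for p in parents(v):
                if p not in result:
                    result.add(p)
                    changed = True
    return result

def d_separated(x, y, z):
    # Lauritzen's criterion: x and y are d-separated by z iff they are
    # disconnected, after deleting z, in the moralized graph of the
    # subgraph induced by the ancestors of {x, y} together with z.
    anc = ancestors({x, y} | set(z))
    undirected = set()
    for p, c in EDGES:                      # keep ancestral edges, drop direction
        if p in anc and c in anc:
            undirected.add(frozenset((p, c)))
    for v in anc:                           # moralize: marry co-parents
        for p1, p2 in combinations(sorted(parents(v) & anc), 2):
            undirected.add(frozenset((p1, p2)))
    frontier, seen = [x], {x} | set(z)      # conditioning nodes are blocked
    while frontier:
        cur = frontier.pop()
        if cur == y:
            return False                    # connected => not d-separated
        for edge in undirected:
            if cur in edge:
                (other,) = edge - {cur}
                if other not in seen:
                    seen.add(other)
                    frontier.append(other)
    return True

observed = ["a", "b", "c", "d"]
found = []
for x, y in combinations(observed, 2):
    rest = [v for v in observed if v not in (x, y)]
    for k in range(len(rest) + 1):
        for z in combinations(rest, k):
            if d_separated(x, y, z):
                found.append((x, y, set(z)))
print(found)   # prints [] -- no conditional independence holds among a, b, c, d

Checking any pair by hand shows why: for instance, for b and d given {a, c}, the path b -> c <- u1 -> a <- u2 -> d is opened by conditioning on the colliders c and a, so no conditioning set among the observed variables blocks everything.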
(b) is I guess "a subtle issue" -- but my point is about careful language use and keeping causal and statistical issues clear and separate.
A "Bayesian network" (or "belief network" -- I don't like the word Bayesian here because it is confusing the issue, you can use frequentist techniques with belief networks if you wanted, in fact a lot of folks do) is a joint distribution that factorizes as a DAG. That's it. Nothing about causality. ...
Well, this is very rapidly getting us into complex territory that future decision-theory posts will hopefully explore, but a very brief answer would be that I am unwilling to define anything fundamental in terms of do() operations because our universe does not contain any do() operations, and counterfactuals are not allowed to be part of our fundamental ontology because nothing counterfactual actually exists and no counterfactual universes are ever observed. There are quarks and electrons, or rather amplitude distributions over joint quark and lepton fields; but there is no do() in physics.
Causality seems to exist, in the sense that the universe seems completely causally structured - there is causality in physics. On a microscopic level where no "experiments" ever take place and there are no uncertainties, the microfuture is still related to the micropast with a neighborhood-structure whose laws would yield a continuous analogue of D-separation if we became uncertain of any variables.
Counterfactuals are human hypothetical constructs built on top of high-level models of this actually-existing causality. Experiments do not perform actual interventions and access alternat...
As an additional data point, I also still do not have a very good understanding of your ideas about causality (although I did note earlier that it seems rather different from Pearl's (which are similar to Ilya's)). I also note that nobody else seems to have a good understanding of your ideas, at least not enough to try to build upon them either here on LW or on the decision theory mailing list or try to explain them to me when I asked.
As a third data point, I used to be very confused about your ideas about causality, but your recent writing has helped a lot. To make embarrassingly clear how very wrong I've been able to be, some years ago when you'd told us about TDT but not given details, I thought you had a fully worked-out and justified theory about how a decision agent could use causal graphs to model its uncertainty about the output of platonic computations, and use do() on its own output to compute the utility of different courses of action, and I got very frustrated when I simply couldn't figure out how to fill in the details of that...
...hmm. (I should probably clarify: when I say "use causal graphs to reason about", I don't mean in the 'trivial' sense you are actually using where the platonic computations cause other things but are themselves uncaused in the model; I mean some sort of system where different computations and/or logical facts about computations form a non-degenerate graph, and where do() severs one node somewhere in the middle of that graph from its parents.) "And", I was going to say, "when you finally did tell us more, I had a strong oh moment when you said that y...
On second thought, the main problem may not be lack of clarity but that your ideas about causality are too speculative and people either lack confidence that your research program (try to reduce Pearl's do()-based causality to lower-level "causality in physics") is the right one, or do not see how to proceed.
Both apply for me but the former is perhaps more relevant at this point. Basically I'm not sure that "do()-based causality" will actually end up playing a role in the ultimate "correct" decision theory (I guess if there is a lack of clarity, it's about why you think that it will), and in the meantime there are other problems that definitely need to be solved and also seem more approachable.
(To explain why I think "do()-based causality" may not end up playing a role, it seems plausible that in an AI or at least decision theory (I wanted to say theoretical decision theory but that seems redundant :), cognition about "high-level causality" just ends up being handled as a special case by a more general algorithm, similar to how an AI programmed to maximize expected utility wouldn't specifically need to be hand-coded with natural language processing if it was running on a sufficiently powerful computer.)
ETA: BTW, can you comment on whether my understanding in this comment was correct, and whether it still applies to Eliezer_2012?
If causality isn't a special kind of logic, why is everything in the known universe made out of (a continuous analogue of) causality instead of logic in general?
Wait, if causality is a special kind of logic, how does that help answer the question? Don't we still have to answer why the universe is made of this kind of logic instead of some other?
Why not Time-Turners or a zillion other possibilities?
I don't understand how lack of Time-Turners makes you think causality is a special kind of logic or why you want to incorporate causality into decision theory (which is still my bigger question). Similar questions could be asked about other features of the universe:
But we're not concerned about these questions at the level of decision theory, since it seems possible to have a decision theory that works with an arbitrary number of dimensions, and with both kinds of laws of physics. Similarly, I don't see why we can't have a "causality-agnostic" decision theory that works in universes both with and without Time-Turners.
I don't think Pearl's diagrams are defined via do(). I think I disagree with that statement even if you can find Pearl making it.
Well, the author is dead, they say.
There are actually two separate causal models in Pearl's book: "causal Bayesian networks" (chapter 1), and "functional models" aka "non-parametric structural equation models" (chapter 7). These models are not the same, in fact functional models are a lot stronger logically (that is they make many more assumptions).
The first is defined via do(.), you can check the definition. The second can be defined either via a set of functions, or via a set of axioms. The two definitions are, I believe, equivalent. The axiomatic approach is valuable in statistics, where we often cannot exhibit the functions that make up the model, and must resort to enumerating assumptions. If you want to take the axiomatic approach you need a language stronger than do(.). In particular you need to be able to express counterfactual statements of the form "I have a headache. Would I have a headache had I taken an aspirin one hour ago?" Pearl's model in chapter 7 actually makes assumptions about counte...
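(For reference, and as a standard formulation rather than a quotation from the thread: the chapter 1 object is characterized by the truncated factorization

p(v_1, \dots, v_n \mid do(X = x)) \;=\; \prod_{i \,:\, V_i \notin X} p\bigl(v_i \mid \mathrm{pa}(v_i)\bigr) \quad \text{for values consistent with } X = x,

which speaks only the do(.) language, whereas the chapter 7 functional models posit equations v_i = f_i(\mathrm{pa}(v_i), \epsilon_i) and therefore also assign truth values to counterfactuals such as Y_x(u), "the value Y would have taken in situation u, had X been set to x," which do(.) alone cannot express.)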
Another extremely serious problem is that there is next to no particularly effective effort in philosophical academia to disregard confused questions, and to move away from naive linguistic realism. The number of philosophical questions of the form 'is x y' that can be resolved by 'depends on your definition of x and y' is deeply depressing. There does not seem to be a strong understanding of how important it is to remember that not all words correspond to natural, or even (in some cases) meaningful categories.
Please list as many examples of these questions as you can muster. (I mean questions, seriously discussed by philosophers, which you claim can be resolved in this way.)
Any discussion of what art is. Any discussion of whether or not the universe is real. Any conversation about whether machines can truly be intelligent. More specifically, the ship of Theseus thought experiment and the related sorites paradox are entirely definitional, as is Edmund Gettier's problem of knowledge. The (appallingly bad, by the way) swamp man argument by Donald Davidson hinges entirely on the belief that words actually refer to things. Shades of this pop up in Searle's Chinese room and other bad thought experiments.
I could go on, but that would require me to actually go out and start reading philosophy papers, and goodness knows I hate that.
I agree that the answers to these questions depend on definitions
I think he meant that those questions depend ONLY on definitions.
As in, there's a lot of interesting real-world knowledge that goes into getting a submarine to propel itself, but now that we have that knowledge, asking "can a submarine swim" is only interesting for deciding "should the English word 'swim' apply to the motion of a submarine, which is somewhat like the motion of swimming, but not entirely". That example sounds stupid, but people waste a lot of time on the similar case of "think" instead of "swim".
The thought experiment functions as an informal reductio ad absurdum of the argument 'Fetuses are people. Therefore abortion is immoral.' or 'Fetuses are conscious. Therefore abortion is immoral.' That's all it's doing. If you didn't find the arguments compelling in the first place, then the reductio won't be relevant to you. Likewise, if you think the whole moral framework underlying these anti-abortion arguments is suspect, then you may want to fight things out at the fundaments rather than getting into nitty-gritty details like this. The significance of the violinist thought experiment is that you don't need to question the anti-abortionist's premises in order to undermine the most common anti-abortion arguments; they yield consequences all on their own that most anti-abortionists would find unacceptable.
That is the dialectical significance of the above argument. It has nothing to do with assuming that everyone found the original anti-abortion argument plausible. An initially implausible argument that's sufficiently popular may still be worth analyzing and refuting.
I once met a philosophy professor who was at the time thinking about the problem "Are electrons real?" I asked her what her findings had shown thus far, and she said she thinks they're not real. I then asked her to give me examples of things that are real. She said she doesn't know any examples of such things.
Your previous post was good, but this one seems to be eliding a few too many issues. If you took a poll of physicists asking them to explain what their fundamental model — quantum mechanics — actually tells us about the world (surely a simple enough question), there would be disagreement comparable to that regarding the philosophical questions you mentioned. The survey you cite is also obviously unhelpful, in that the questions on that survey were chosen because they're controversial. Most philosophical questions are not very controversial, but for that very reason you don't hear much about them. If we hand-picked all the foundational questions physicists disagreed about and conducted a popularity poll, would we be rightly surprised to find that the poll results were divided?
(It's also worth noting that some of the things being measured by the poll are attitudinal and linguistic variation between different philosophical schools and programs, not just doctrinal disagreements. Why should we expect ethicists and philosophers of mathematics to completely agree in methodology and terminology, when we do not expect the same from physicists and biologists?)
There are three reasons philosop...
If you took a poll of physicists asking them to explain what their fundamental model — quantum mechanics — is actually asserting about the world (surely a simple enough question), there would be disagreement comparable to that regarding the philosophical questions you mentioned.
A major problem with modern physics is that there are almost no known phenomena that work in a way that disagrees with how modern physics predicts they would work (in principle; there are lots of inferential/computational difficulties). What physics asserts about the world, to the best of anyone's knowledge, coincides with what's known about most of the world in all detail. The physicists have to build billion-dollar monstrosities like the LHC just to get their hands on something they don't already thoroughly understand. This doesn't resemble the situation with philosophy in the slightest.
Inasmuch as philosophical issues are settled, they stop getting talked about.
Why exactly? I mean, there is no controversy in mathematics about whether 2+2=4, and yet we continue teaching this knowledge in schools. Uncontroversial, yet necessary to be taught, because humans don't get it automatically, and because it is necessary for more complicated calculations.
Why exactly don't philosophers do an equivalent of this? Is it because once a topic has been settled at a philosophical conference, the next generations of humans are automatically born with that knowledge? Or is the answer published so widely that it becomes better known than the fact that 2+2=4? Or what?
Start tabooing the word 'philosophy.' See how it goes.
First approximation: Pretended ability to make specific conclusions concerning ill-defined but high-status topics. :(
Although I'm a lawyer, I've developed my own pet meta-approach to philosophy. I call it the "Cognitive Biases Plus Semantic Ambiguity" approach (CB+SA). Both prongs (CB and SA) help explain the amazing lack of progress in philosophy.
First, cognitive biases - or (roughly speaking) cognitive illusions - are persistent by nature. The fact that cognitive illusions (like visual illusions) are persistent and that philosophy problems are also persistent is not a coincidence. Philosophy problems cluster around those that involve cognitive illusions (positive outcome bias, the just-world phenomenon, the Lake Wobegon effect, the fundamental attribution error, etc.). I see this in my favorite topic area (the free will problem), but I believe that it likely applies broadly across philosophy.
Second, semantic ambiguity creates persistent problems if not identified and fixed. The solutions to several of Hilbert's 23 problems are "no answer - problem statement is not well defined." That approach is unsexy, and emotionally dissatisfying (all of this work, yet we get no answer!). Perhaps for that reason, philosophers (but not mathematicians) seem completely incapabl...
We might also ask: How well do philosophers perform on standard tests of rationality, for example Frederick (2005)'s CRT?...
Your presentation here seems misleading to me. You imply that philosophers are merely average scorers on the CRT relative to the rest of the (similarly educated) population.
This claim is misleading for several reasons: 1) The score you cite for philosophers is a mean score for people who have had some graduate-level philosophical training. This is a set that will overlap with many of the other groups you mention. While it will include all professional philosophers, I don't think a majority of the set will be professional philosophers. Graduate-level courses in logic, political philosophy, etc. are pretty standard in graduate educations across the board.
2) Frederick takes scores from a variety of different schools, trying to capture, evidently, people who are undergraduates, graduate students, or faculty. Frederick comes up with a mean score of 1.24 for respondents who are members of a university. In contrast, Livengood (from which you get the philosophers' mean score) gets mean scores of 0.65 and 0.82 for people with undergraduate or gradu...
I'm not sure that more rationality in philosophy would help enough as far as FAI is concerned. I expect that if philosophers became more rational, they would mainly just become more uncertain about various philosophical positions, rather than reach many useful (for building FAI) consensuses.
If you look at the most interesting recent advances in philosophy, it seems that most of them were made by non-philosophers. For example, Turing, Church, and others' work on understanding the nature of computation, von Neumann and Morgenstern's decision theory, Tegmark's Ultimate Ensemble, and algorithmic information theory / Solomonoff Induction. (Can anyone think of a similarly impressive advance made by professional philosophers, in this same time frame?) Based on this, I think appropriate background knowledge and raw intellectual firepower (most of the smartest humans probably go into math/science instead of philosophy) are perhaps more important than rationality for making philosophical progress.
(Can anyone think of a similarly impressive advance made by professional philosophers, in this same time frame?)
ETA:
I'm only familiar with about a third of these (not counting Tarski, who I agreed with JoshuaZ is more of a mathematician than a philosopher), but the ones that I am familiar with do not seem as interesting/impressive/fruitful/useful as the advances I mentioned in the grandparent comment. If you could pick one or two on your list for me to study in more detail, which would you suggest?
I think Nick is actually an example of how rationality isn't that useful for making philosophical progress. I'm a bit reluctant to say this (for obvious social reasons, which I'm judging to be outweighed by the strategic importance of this issue) but his work (PhD thesis) on anthropic reasoning wasn't actually very good. I know that at least one SI Research Associate agrees with my assessment.
ETA: I should qualify this by saying that while his proposed solution wasn't very good (which you can also infer from the fact that nobody ever talks about or builds upon it around here despite strong interest in the topic), he did come up with arguments/considerations/thought experiments, such as the Presumptuous Philosopher, that we still discuss.
Sometimes, they are even divided on psychological questions that psychologists have already answered: Philosophers are split evenly on the question of whether it's possible to make a moral judgment without being motivated to abide by that judgment, even though we already know that this is possible for some people with damage to their brain's reward system, for example many Parkinson's patients, and patients with damage to the ventromedial frontal cortex...
Huh?
Examples like that are the bread and butter of discussions about motivational internalism: prec...
Are some philosophical questions questions about reality? If so, what does it take for a question about reality to count as "philosophical" as opposed to "scientific"? Are these just empirical clusters?
And if it's not a fact about reality, what does it mean to get it right?
Luke quoted:
Science is built around the assumption that you're too stupid and self-deceiving to just use [probability theory]. After all, if it was that simple, we wouldn't need a social process of science... [Standard scientific method] doesn't trust your rationality, and it doesn't rely on your ability to use probability theory as the arbiter of truth. It wants you to set up a definitive experiment.
That's a pretty irritatingly-wrong quote. Of course the scientific method is social for reasons other than the stupidity and self-deceiving nature of sc...
A score of 1.32 isn't radically different from the mean CRT scores found for psychology undergraduates (1.5), financial planners (1.76), Florida Circuit Court judges (1.23), Princeton Undergraduates (1.63), and people who happened to be sitting along the Charles River during a July 4th fireworks display (1.53). It is also noticeably lower than the mean CRT scores found for MIT students (2.18) and for attendees to a LessWrong.com meetup group (2.69).
I found this by far the most interesting part of this (very good) post. I am surprised I had to learn it hidden inside a mostly unrelated essay. I would certainly like to hear more about this test.
What would evidence of deontology / consequentialism / virtue ethics, empiricism vs. rationalism, or physicalism vs. non-physicalism look like?
[But] philosophy continually leads experts with the highest degree of epistemic virtue, doing the very best they can, to accept a wide array of incompatible doctrines. Therefore, philosophy is an unreliable instrument for finding truth. A person who enters the field is highly unlikely to arrive at true answers to philosophical questions.
Philosophy hasn't been very successful at finding the truth about the kind of questions philosophy typically considers. What's better...at answering those kinds of questions? You can only condemn philosophy for having worse methods than science, based on results, if they are both applied to the same problems.
Sometimes, they are even divided on psychological questions that psychologists have already answered...
I think you've misunderstood the debate: philosophers are arguing in this case over whether or not moral judgements are intrinsically motivating. If they are, then the brain-damaged people you make reference to are (according to moral judgement internalists) not really making moral judgements. They're just mouthing the words.
This is just to say that psychology has answered a certain question, but not the question that philosophers debating this point are concerned about.
According to the largest-ever survey of philosophers, they're split 25-24-18 on deontology / consequentialism / virtue ethics,
???
I am confused. I lean towards virtue ethics, and I can certainly see the appeal of consequentialism; but as I understand it, deontology is simply "follow the rules", right?
I fail to see the appeal of that as a basis for ethics. (As a basis for avoiding confrontation, yes, but not as a basis for deciding what is right or wrong). It doesn't seem to stand up well on inspection (who makes the rules? Surely they can't be decided deontologically?)
So... what am I missing? Why is deontology more favoured than either of the other two options?
(As likelihood ratios get smaller, your priors need to be better and your updates more accurate.)
It seems to me that rationality is more about updating the correct amount, which is primarily calculating the likelihood ratio correctly. Most of the examples of philosophical errors you've discussed come from not calculating that ratio correctly, not from starting out with a bizarre prior.
For example, consider Yvain and the Case of the Visual Imagination:
...Upon hearing this, my response was "How the stars was this actually a real debate? Of course we h...
Just to point out: the links in your 3rd footnote all point to the same page. Enjoyed the post. Perhaps a case study of a big philosophy problem fully dissolved here?
Sometimes, they are even divided on psychological questions that psychologists have already answered: Philosophers are split evenly on the question of whether it's possible to make a moral judgment without being motivated to abide by that judgment, even though we already know that this is possible for some people with damage to their brain's reward system, for example many Parkinson's patients, and patients with damage to the ventromedial frontal cortex (Schroeder et al. 2012).1
This isn't an area about which I know very much, but my understanding i...
they're split 25-24-18 on deontology / consequentialism / virtue ethics,
Does that mean they're all moral realists? Otherwise it's like being split on the "true" human skin color.
So, your account basically implies that philosophy is less reliable than astrology, but not as useful? Then why even bother talking to the philosophical types, to begin with?
Part of the sequence: Rationality and Philosophy
After millennia of debate, philosophers remain heavily divided on many core issues. According to the largest-ever survey of philosophers, they're split 25-24-18 on deontology / consequentialism / virtue ethics, 35-27 on empiricism vs. rationalism, and 57-27 on physicalism vs. non-physicalism.
Sometimes, they are even divided on psychological questions that psychologists have already answered: Philosophers are split evenly on the question of whether it's possible to make a moral judgment without being motivated to abide by that judgment, even though we already know that this is possible for some people with damage to their brain's reward system, for example many Parkinson's patients, and patients with damage to the ventromedial frontal cortex (Schroeder et al. 2012).1
Why are physicists, biologists, and psychologists more prone to reach consensus than philosophers?2 One standard story is that "the method of science is to amass such an enormous mountain of evidence that... scientists cannot ignore it." Hence, religionists might still argue that Earth is flat or that evolutionary theory and the Big Bang theory are "lies from the pit of hell," and philosophers might still be divided about whether somebody can make a moral judgment they aren't themselves motivated by, but scientists have reached consensus about such things.
In its dependence on masses of evidence and definitive experiments, science doesn't trust your rationality:

Science is built around the assumption that you're too stupid and self-deceiving to just use [probability theory]. After all, if it was that simple, we wouldn't need a social process of science... [Standard scientific method] doesn't trust your rationality, and it doesn't rely on your ability to use probability theory as the arbiter of truth. It wants you to set up a definitive experiment.
Sometimes, you can answer philosophical questions with mountains of evidence, as with the example of moral motivation given above. But for many philosophical problems, overwhelming evidence simply isn't available. Or maybe you can't afford to wait a decade for definitive experiments to be done. Thus, "if you would rather not waste ten years trying to prove the wrong theory," or if you'd like to get the right answer without overwhelming evidence, "you'll need to [tackle] the vastly more difficult problem: listening to evidence that doesn't shout in your ear."
This is why philosophers need rationality training even more desperately than scientists do. Philosophy asks you to get the right answer without evidence that shouts in your ear. The less evidence you have, or the harder it is to interpret, the more rationality you need to get the right answer. (As likelihood ratios get smaller, your priors need to be better and your updates more accurate.)
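(In odds form, and as a standard identity rather than anything new to the post: posterior odds equal prior odds times the likelihood ratio,

\frac{P(H_1 \mid E)}{P(H_2 \mid E)} \;=\; \frac{P(H_1)}{P(H_2)} \times \frac{P(E \mid H_1)}{P(E \mid H_2)}.

With a likelihood ratio of 1000:1, almost any sane prior odds get dragged to the same conclusion; with a ratio of 2:1, the quality of your prior odds and the accuracy with which you estimate that ratio do most of the work.)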
Because it tackles so many questions that can't be answered by masses of evidence or definitive experiments, philosophy needs to trust your rationality even though it shouldn't: we generally are as "stupid and self-deceiving" as science assumes we are. We're "predictably irrational" and all that.
But hey! Maybe philosophers are prepared for this. Since philosophy is so much more demanding of one's rationality, perhaps the field has built top-notch rationality training into the standard philosophy curriculum?
Alas, it doesn't seem so. I don't see much Kahneman & Tversky in philosophy syllabi — just lightweight "critical thinking" classes and lists of informal fallacies. But even classes in human bias might not improve things much due to the sophistication effect: someone with a sophisticated knowledge of fallacies and biases might just have more ammunition with which to attack views they don't like. So what's really needed is regular habits training for genuine curiosity, motivated cognition mitigation, and so on.
(Imagine a world in which Frank Jackson's famous reversal on the knowledge argument wasn't news — because established philosophers changed their minds all the time. Imagine a world in which philosophers were fine-tuned enough to reach consensus on 10 bits of evidence rather than 1,000.)
We might also ask: How well do philosophers perform on standard tests of rationality, for example Frederick (2005)'s CRT? Livengood et al. (2010) found, via an internet survey, that subjects with graduate-level philosophy training had a mean CRT score of 1.32. (The best possible score is 3.)
A score of 1.32 isn't radically different from the mean CRT scores found for psychology undergraduates (1.5), financial planners (1.76), Florida Circuit Court judges (1.23), Princeton Undergraduates (1.63), and people who happened to be sitting along the Charles River during a July 4th fireworks display (1.53). It is also noticeably lower than the mean CRT scores found for MIT students (2.18) and for attendees to a LessWrong.com meetup group (2.69).
Moreover, several studies show that philosophers are just as prone to particular biases as laypeople (Schulz et al. 2011; Tobia et al. 2012), for example order effects in moral judgment (Schwitzgebel & Cushman 2012).
People are typically excited about the Center for Applied Rationality because it teaches thinking skills that can improve one's happiness and effectiveness. That excites me, too. But I hope that in the long run CFAR will also help produce better philosophers, because it looks to me like we need top-notch philosophical work to secure a desirable future for humanity.3
Next post: Train Philosophers with Pearl and Kahneman, not Plato and Kant
Previous post: Intuitions Aren't Shared That Way
Notes
1 Clearly, many philosophers have advanced versions of motivational internalism that are directly contradicted by these results from psychology. However, we don't know exactly which version of motivational internalism is defended by each survey participant who said they "accept" or "lean toward" motivational internalism. Perhaps many of them defend weakened versions of motivational internalism, such as those discussed in section 3.1 of May (forthcoming).
2 Mathematicians reach even stronger consensus than physicists, but they don't appeal to what is usually thought of as "mountains of evidence." What's going on, there? Mathematicians and philosophers almost always agree about whether a proof or an argument is valid, given a particular formal system. The difference is that a mathematician's premises consist in axioms and in theorems already strongly proven, whereas a philosopher's premises consist in substantive claims about the world for which the evidence given is often very weak (e.g. that philosopher's intuitions).
3 Bostrom (2000); Yudkowsky (2008); Muehlhauser (2011).