The survey results page also lists "Strongest correlations" with other questions. If I'm reading the tables correctly for the Newcomb's Problem results, there were 17 groups (respondents in the target population who gave a particular answer to one of the other survey questions) among whom one-boxing was at least as common as two-boxing. In order of one-boxers minus two-boxers:
Political philosophy: communitarianism (84 vs 67)
Semantic content: radical contextualism (most or all) (49 vs 34)
Analysis of knowledge: justified true belief (49 vs 34)
Response to external-world skepticism: pragmatic (51 vs 37)
Normative ethics: virtue ethics (112 vs 100)
Philosopher: Quine (33 vs 22)
Arguments for theism: moral (22 vs 12)
Hume: skeptic (72 vs 63)
Aim of philosophy: wisdom (96 vs 88)
Philosophical knowledge: none (12 vs 5)
Philosopher: Marx (8 vs 2)
Aim of philosophy: goodness/justice (73 vs 69)
A priori knowledge: [no] (62 vs 58)
Consciousness: panpsychism (16 vs 13)
External world: skepticism (20 vs 18)
Eating animals and animal products: omnivorism (yes and yes) (168 vs 168)
Truth: epistemic (26 vs 26)
This is fantastic.
... Virtue ethicists one-box?!
I'm guessing this is a case of "views that correlate with being less socially/culturally linked to analytic philosophers will tend to correlate more with one-boxing". But it would be wild if something were going on like:
Consequentialists two-box, because thinking about "consequences" primes you to accept the CDT argument that you should maximize your direct causal impact.
Deontologists two-box, because thinking about duties/principles primes you to accept the "I should do the capital-r Rational thing even if it's not useful" argument.
Virtue ethicists one-box, because (a subset of) one-boxers are the ones talking about 'making yourself into the right kind of agent'. (Or, more likely, virtue ethicists one-box just because they lack the other views' reasons to two-box.)
Virtue ethicists one-box, because (a subset of) one-boxers are the ones talking about 'making yourself into the right kind of agent'.
This seems sort of obvious to me, and I'm kind of surprised that only a bit over half of the virtue ethicists one-box.
[EDIT] I think I was giving the virtue ethicists too much credit and shouldn't have been that surprised -- this is actually a challenging situation to map onto traditional virtues, and 'invent the new virtue for this situation' is not all that standard a piece of virtue ethics. I would be surprised if only a bit over half of virtue ethicists pay up in Parfit's Hitchhiker, even though the problems are pretty equivalent.
53% of virtue ethicists one-box (out of those who picked a side).
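(Checking the arithmetic against the table above: 112 one-boxers vs. 100 two-boxers, and 112/212 ≈ 52.8%, which rounds to 53%.)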
Seems plausible that it's for kinda-FDT-like reasons, since virtue ethics is about 'be the kind of person who' and that's basically what matters when other agents are modeling you. It also fits with Eliezer's semi-joking(?) tweet "The rules say we must use consequentialism, but good people are deontologists, and virtue ethics is what actually works."
Whereas people who give the pragmatic response to external-world skepticism seem more likely to have "join the millionaires club" reasons for one-boxing.
Taking the second box is greedy, and greed is a vice. This might also explain one-boxing by Marxists.
By a very crude method (ignoring "skip" and "insufficiently familiar" answers), scoring each field as 'a point for every 1% of the field that endorses theism, plus a point for every 1% that endorses A-theory, minus a point for every 1% that endorses B-theory', the seven highest-scoring fields (i.e., the ones I'd expect to be least reasonable and least science-literate) are:
The seven lowest-scoring (aka most promising) fields are:
(Philosophy of computing and information only has data for the target group, which is less comparable, so I leave it out here. It would get -25 points.)
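(For concreteness, here's a minimal sketch of that scoring rule in Python; the field names and percentages below are invented placeholders, not the survey's numbers.)

```python
# Crude field score, per the method above: +1 point per 1% of the field
# endorsing theism, +1 per 1% endorsing A-theory, -1 per 1% endorsing
# B-theory (computed after dropping "skip" and "insufficiently familiar"
# answers). Higher scores = fields I'd expect to be less science-literate.

def crude_score(theism_pct, a_theory_pct, b_theory_pct):
    return theism_pct + a_theory_pct - b_theory_pct

# Hypothetical percentages, purely for illustration:
fields = {
    "hypothetical field A": (20.0, 30.0, 40.0),
    "hypothetical field B": (5.0, 10.0, 60.0),
}

for name, pcts in sorted(fields.items(), key=lambda kv: crude_score(*kv[1]), reverse=True):
    print(f"{name}: {crude_score(*pcts):+.0f} points")
```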
Are we assuming affirming A-theory is indicative of science illiteracy because it is incompatible with special relativity or for some other reason?
Basically, though with two wrinkles:
(Or, if they’ve literally just never heard of relativity in spite of being a faculty member at a university, they should refrain from weighing in on the A-theory vs. B-theory debate until they’ve done some googling.)
But if they’ve never heard of relativity (or, more likely, have essentially no idea of the content of the theory of relativity), how the heck should they know that relativity has any connection to the “A-theory vs. B-theory” debate, and that they therefore should refrain from weighing in, on the basis of that specific bit of ignorance?
Maybe there are all sorts of “unknown unknowns”—domains of knowledge of whose existence or content I am unaware—which materially affect topics on which I might, if asked, feel that I may reasonably express a view (but would not think this if my ignorance of those domains were to be rectified). But I don’t know what they might be, or on what topics they might bear—that’s what makes them unknown unknowns!—and yet any view I currently hold (or might assent to express, if asked) might be affected by them!
Should I therefore refrain from weighing in on anything, ever, until such time as I have familiarized myself with literally all human knowledge?
I do not think that this is a reasonable standard to which we should hold anyone…
I think that's a very fair point, and after thinking about it more I'm guessing I'd want to exclude the people who voted 'lean A-theory' and 'lean B-theory', and just include the people who strongly endorsed one or the other (since the survey distinguishes those options).
I think it's a very useful signal either way, even the 'unfair' version. And I think it really does take very little googling (or very little physics background knowledge) to run into the basic issues; e.g., SEP is a standard resource I'd expect a lot of philosophers to make a beeline to if they wanted a primer. But regardless, saying it's as unambiguous or extreme a sign as theism is too harsh.
(Though in some ways it's a more extreme sign than theism, because theism is a thing a lot of people were raised in and have strong emotional attachments to -- it's epistemic 'easy mode' in the sense that it's an especially outrageous belief, but it can be 'hard mode' emotionally. I wanted something like A-theory added to the mix partly because it's a lot harder to end up 'A-theorist by default' than to end up 'theist by default'.)
Ehh, I think I want to walk back my 'go easy on A-theorists' update a bit. Even if I give someone a pass on the physics-illiteracy issue, I feel like I should still dock a lot of points for Bayes-illiteracy. What evidence does someone think they've computed that allows them to update toward A-theory? How did the 'mysterious Nowness of Now' causally impact your brain so as to allow you to detect its existence, relative to the hypothetical version of you in a block universe?
Crux: If there's a reasonable argument out there for having a higher prior on A-theory, in a universe that looks remotely like our universe, then I'll update.
Heck, thinking in terms of information theory and physical Bayesian updates at all is already most of what I'm asking for here. What I'm expecting instead is 'well, A-theory felt more intuitive to me, and the less intuitive-to-me view gets the burden of proof'.
Counterpoint: how much of the literature on the philosophical presentism vs. eternalism debate have you acquainted yourself with?
For example, you linked to the SEP entry on time, so surely you noticed the philosophers’ extant counterarguments to the “relativity entails B-theory” argument? What is your opinion on papers like this one or this one?
Now, if your response is “I have not read these papers”, then are you not in the same situation as the philosopher who is unfamiliar with relativity?
(But in fact a quick survey of the literature on the “nature of time” debate—including, by the way, the SEP entry you linked!—shows that philosophers involved in this debate are familiar with relativity’s implications in this context… perhaps more familiar, on average, than are physicists—or rationalists—with the philosophical positions in question.)
I haven't read the papers you link, and am not very familiar with the presentism debate, though I'm not 0% familiar. If there's a specific argument that you think is a good argument for the A-theory view (e.g., if you endorse one of the papers), I'm happy to check it out and see if it updates me.
But the mere fact that "papers exist", "people who disagree with me exist", and "people publishing papers in favor of A-theory nowadays are all familiar with special relativity (at least to some degree)" doesn't move me, no -- I was taking all of those things for granted when I wrote the OP.
Having interacted with a decent number of analytic-philosophy metaphysics papers and arguments at college, I think I already know the rough base rate of 'crazy and poorly-justified views on reality' in this area, and I think it's very high by LW standards (though I think metaphysics is unusually healthy for a philosophy field).
Since I'm not a domain expert, it might turn out that I'm missing something crucial and the A-theory has some subtle important argument favoring it; but I don't treat this as nontrivially likely merely because professional metaphysicians who have heard of special relativity disagree with me, no. (If I'm wrong, most of my probability mass would be on 'I'm defining the A-theory wrong, actually A-theorists are totally fine with there being no unique special Present Moment.' Though then I'd wonder what the content of the theory is!)
If there’s a specific argument that you think is a good argument for the A-theory view (e.g., if you endorse one of the papers), I’m happy to check it out and see if it updates me.
No, not particularly. Actually, I do not have an opinion on the matter one way or the other!
As for the rest of your comment… it is understandable, as far as it goes; but note that a philosopher could say just the same thing, but in reverse.
He might say: “The mere fact that ‘a physics theory exists’, ‘physicists think that their theory has some bearing on this philosophical argument’, and ‘physicists have some familiarity with the state of the philosophical debate on the matter’ doesn’t move me.”
Our philosopher might say, further: “I think I already know the rough base rate of ‘physical scientists with delusions of philosophy’; I have interacted with many such folks, who think that they do not need to study philosophy in order to have an opinion on philosophers’ debates.”
And he might add, in all humility: “Since I’m not a domain expert, it might turn out that I’m missing something crucial, and the theory of relativity has some important consequence that bears on the argument; but I don’t treat this as nontrivially likely merely because professional physicists who have heard of the ‘eternalism vs. presentism’ debate disagree with me.”
Now, suppose I am a curious, though reasonably well-informed, layman—neither professionally a philosopher nor yet a physicist—and I observe this back-and-forth. What should I conclude from this exchange, about which one of you is right?
… and that would be the argument that I would make, if it were the case that you dismissed the philosophers’ arguments without reading them, while the philosophers dismissed your arguments (and/or those of the physicists) without reading them. But that’s not the case! Instead, what we have is a situation where you dismiss their arguments without reading them, while they have read your arguments, and are disagreeing with them on that, informed, basis.
Now what should I (the hypothetical well-informed layman) conclude?
Of course, the matter is more complicated than even that, because philosophers hardly agree with each other on this matter. But let’s not lose sight of the point of this discussion thread, which is: should a philosopher who endorses A-theory be docked “rationality points” (on the reasoning that any such philosopher must surely be suffering from “science illiteracy”—because if they had done any “basic research” [i.e., five minutes of web-searching], they would have learned about the special relativity issue, and would—we are meant to assume—immediately and reliably conclude that they had no business having an opinion about the nature of time, at least not without gaining a thorough technical understanding of special relativity)?
I think the answer to that question is “no, definitely not”. It’s obvious from a casual literature search that philosophers who are familiar with the “eternalism vs. presentism” debate at all, are also familiar with the question of special relativity’s implications for that debate. Whatever is causing some of them to still favor A-theory, it ain’t “science illiteracy”, inability to use Google, or any other such simple foolishness.
it is understandable, as far as it goes; but note that a philosopher could say just the same thing, but in reverse.
Sure! And similarly, if you were an agnostic, and I were an atheist making all the same statements about theism, you could say 'philosophers of religion could say just the same thing, but in reverse'.
Yet this symmetry wouldn't be a good reason for me to doubt atheism or put more time into reading theology articles.
I think the specific symmetry you're pointing at doesn't quite work (special relativity doesn't have the same standing as A-theory arguments, either in fact or in philosophers' or physicists' eyes), but it's not cruxy in any case.
Instead, what we have is a situation where you dismiss their arguments without reading them, while they have read your arguments, and are disagreeing with them on that, informed, basis.
At a minimum, you should say that I'm making a bizarrely bold prediction (at least from an outside-view perspective that thinks philosophers have systematically accurate beliefs about their subject matter). If I turn out to be right, after having put so little work in, it suggests I have surprisingly 'efficient' heuristics -- ones that can figure out truth on at least certain classes of question, without putting in a ton of legwork first. (Cf. skills like 'being able to tell whether certain papers are shit based on the abstract'.)
You're free to update toward the hypothesis that I'm overconfident; the point of my sharing my views is to let you consider hypotheses like that, rather than hiding any arrogant-sounding beliefs of mine from view. I'm deliberately stating my views in a bold, stick-my-neck-out way because those are my actual views -- I think we do for-real live in the world where A-theory is obviously false.
I'm not saying any of this to shut down discussion, or say I'm unwilling to hear arguments for A-theory. But I do think there's value in combating underconfidence just as much as overconfidence, and in trying to reach conclusions efficiently rather than going through ritualistic doubts.
If you think I'm going too fast, then that's a testable claim, since we can look at the best A-theory arguments and see if they change my mind, the minds of people we both agree are very sane, etc. But I'd probably want to delegate that search for 'best arguments' to someone who's more optimistic that it will change anything.
Whatever is causing some of them to still favor A-theory, it ain’t “science illiteracy”, inability to use Google, or any other such simple foolishness.
Depending on how much we're talking about 'philosophers who don't work on the metaphysics of time professionally, but have a view on this debate' (the main group I discussed in the OP) vs. 'A-theorists who write on the topic professionally', I'd say it's mostly a mix of (a) not using Google / not having basic familiarity with the special relativity argument; (b) misunderstanding the force of the special relativity argument; and (c) misunderstanding/rejecting the basic Bayesian idea of how evidence, burdens of proof, updating, priors, and thermodynamic-work-that-makes-a-map-reflect-a-territory work, in favor of epistemologies that put more weight on 'metaphysical intuitions that don't make Bayesian sense but feel really compelling when I think them'.
I'd say much the same thing about professional theologians who argue that God must be real (in order for us to know stuff at all) because there's no reason for evolution to give humans accurate cognition; or, for that matter, about theologians who argue that God must be real because speciation isn't real. There are huge industries of theist scholars who have spent their whole lives arguing such things. Can they really be so wrong, when the counter-argument is so obvious, so strong, and so googlable?
Apparently, they can.
To put it in simpler terms: is a physicist who believes an invisible, undetectable dragon lives in their garage 'science-illiterate'?
I'd say that they're at best science-illiterate, if not outright unhinged. If you want to say that it's impossible to be science-illiterate while knowing a bunch of physics facts or while being able to do certain forms of physics lab work, then I assume we're defining the word 'science-illiterate' differently. But hopefully this example clarifies in what basic sense I'm using the term.
Or B-theory just isn't that good. If physical reductionism is the correct theory of mind, so that the mind is just another part of the block, it's difficult to see where so much as an illusion of temporal flow comes from.
Some versions of the A-theory might technically be compatible with special relativity.
Well, Copenhagen is compatible with SR (collapse is nonlocal, but cannot be used for nonlocal signaling), and it allows you to identify a moving present moment as where collapse is occurring.
If physical reductionism is the correct theory of mind, so that the mind is just another part of the block, it's difficult to see where so much as an illusion of temporal flow comes from.
No? Seems trivially easy to see, and I don't think reductionism matters here. If I were an immaterial Cartesian soul plugged in to the Matrix, and the Matrix ran on block-universe physics, the same arguments for 'it feels like there's an objective Now but really this is just an implication of my being where I am in the block universe' would hold. The argument is about your relationship to other spatial and temporal slices of the Matrix, not about the nature of your brain or mind.
Well, Copenhagen is compatible with SR (collapse is nonlocal, but cannot be used for nonlocal signaling), and it allows you to identify a moving present moment as where collapse is occurring.
On the (false) collapse interpretation of QM, I can cause collapses; but so can my brother Fred who lives on Mars. Which of our experiences coincide with 'the present', if there is a single unique reference-frame-independent Present Moment?
The numbers of moral realists, and especially of non-naturalist moral realists, strike me as indications that there is something wrong with contemporary academic philosophy. It almost seems like philosophers reliably hold one of the less defensible positions across many issues.
We've talked about this a bit, but to restate my view on LW: I think there's enormous variation in what people mean by "moral realism", enough to make it a mostly useless term (and, as a corollary, I claim we shouldn't update much about philosophers' competence based on how realist or anti-realist they are).
Even 'morality: subjective or objective?' seems almost optimized to be confusing, for the same reason it would be confusing to ask "are the rules of baseball subjective, or objective?".
Baseball's rules are subjective in the sense that humans came up with them (/ located them in the space of all possible game rules), but objective in the sense that you can't just change the rules willy-nilly (without in effect changing the topic away from "baseball" and to a new game of your making). The same is true for morality.
(Though there's an additional sense in which morality might be called 'subjective', namely that there isn't a single agreed-upon 'rule set' corresponding to the word 'morality', and different people favor different rule sets -- like if soccer fans and American-football fans were constantly fighting about which of the two games is the 'real football'.
And there's an additional sense in which morality might be called 'objective', namely that the rules of morality don't allow you to stop playing the game. With baseball, you can choose to stop playing, and no one will complain about it. With morality, we rightly treat the 'game' as one that everyone is expected to play 24/7, at least to the degree of obeying game rules like 'don't murder'.)
This is also why I don't care how many philosophers think aesthetic value is subjective vs. objective. The case is quantitatively stronger for 'aesthetic value is subjective' than for 'morality is subjective' (because it's more likely that Bob and Alice's respective Aesthetics Rules will disagree about 'Mozart is one of the best musicians' than that Bob and Alice's Morality Rules will disagree about 'it's fine to kill Mozart for fun'), but qualitatively the same ambiguities are involved.
It would be better to ask questions like 'is it a supernatural miracle that morality exists / that humans happen to endorse the True Morality' and 'if all humans were ideally rational and informed, how much would they agree about what's obligatory, impermissible, worthy of praise, worthy of punishment, etc.?', rather than asking about 'subjective', 'objective', 'real', or 'unreal'.
Copying over a comment I left on Facebook:
I claim that there are basically four positions here:
1. Magical-thinking anti-realist: There's nothing special about morality, it's just like the rules of chess. So let's stop being moral!
2. Reasonable anti-realist: There's nothing special about morality, it's just like the rules of chess. It's important to emphasize that the magical-thinking realists are wrong, though, so let's say 'moral statements aren't mind-independently true', even though there's a sense in which they are mind-independently true (e.g., the same sense in which statements about chess rules are mind-independently true).
3. Reasonable realist: There's nothing special about morality, it's just like the rules of chess. It's important to emphasize that the magical-thinking anti-realists are wrong, though, so let's say 'moral statements are mind-independently true', even though there's a sense in which they aren't mind-independently true (e.g., the same sense in which statements about chess rules aren't mind-independently true).
4. Magical-thinking realist: Morality has to be incredibly magically physics-transcendingly special, otherwise (the magical-thinking anti-realist is right / God doesn't exist / etc.). So I hereby assert that it is indeed special in that way!
Terminology choices aside, views 2 and 3 are identical, and the whole debate gets muddled and entrenched because people fixate on the 'realism' rather than on the thing anyone actually cares about.
Cf. people who say 'we can't say non-realism is true, that would give aid and comfort to crazy cultural relativists who (incoherently) think we can't ban female genital mutilation because there are no grounds for imposing any standards across cultural divides'.
Whether you're more scared of crazy cultural relativists or of crypto-religionists isn't a good way of dividing up the space of views about the metaphysics of morality! But somehow here (I claim) we are.
My own view is the one endorsed in Eliezer Yudkowsky's By Which It May Be Judged (roughly Frank Jackson's analytic descriptivism). This is what I mean when I use moral language.
When it comes to describing moral discourse in general, I endorse semantic pluralism / 'different groups are talking about wildly different things, and in some cases talking about nothing at all, when they use moral language'.
You could call these views "anti-realist" in some senses. In other senses, you could call them realist (as I believe Frank Jackson does). But ultimately the labels are unimportant; what matters is the actual content of the view, and we should only use the labels if they help with understanding that content, rather than concealing it under a pile of ambiguities and asides.
When it comes to describing moral discourse in general, I endorse semantic pluralism / 'different groups are talking about wildly different things, and in some cases talking about nothing at all, when they use moral language'.
I agree, but this is orthogonal to whether moral realism is true. Questions about moral realism generally concern whether there are stance-independent moral facts. Whether or not there are such facts does not directly depend on the descriptive status of folk moral thought and discourse. Even if it did, it's unclear to me how such an approach would vindicate any substantive account of realism.
You could call these views "anti-realist" in some senses. In other senses, you could call them realist (as I believe Frank Jackson does).
I'd have to know more about what Jackson's specific position is to address it.
But ultimately the labels are unimportant; what matters is the actual content of the view, and we should only use the labels if they help with understanding that content, rather than concealing it under a pile of ambiguities and asides.
I agree with all that. I just don't agree that this is diagnostic of debates in metaethics about realism versus antirealism. I don't consider the realist label to be unhelpful; I think it has a sufficiently well-understood meaning that its use isn't wildly confused or unhelpful in contemporary debates, and I suspect most people who say that they're moral realists endorse a sufficiently similar cluster of views that there's nothing too troubling about using the term as a central distinction in the field. There is certainly wiggle room and quibbling, but there isn't nearly enough actual variation in how philosophers understand realism for it to be plausible that a substantial proportion of realists don't endorse the kinds of views I'm objecting to and claiming are indicative of problems in the field.
I don't know enough about Jackson's position in particular, but I'd be willing to bet I'd include it among those views I consider objectionable.
I think there's enormous variation in what people mean by "moral realism", enough to make it a mostly useless term
I disagree with this claim, and I don't think that, even if there were lots of variation in what people meant by moral realism, this would undermine my claim that the large proportion of respondents who favor realism indicates a problem in the profession. The term is not “useless,” and even if the term were useless, I am not talking about the term. I am talking about the actual substantive positions held by philosophers: whatever their conception of “realism,” I am claiming that enough of that 60% endorse indefensible positions that it is a problem.
I have a number of objections to the claim you’re making, but I’d like to be sure I understand your position a little better, in case those objections are misguided. You outline a number of ways we might think of objectivity and subjectivity, but I am not sure what work these distinctions are doing. It is one thing to draw a distinction, or identify a way one might use particular terms, such as “objective” and “subjective.” It is another to provide reasons or evidence to think these particular conceptions of the terms in question are driving the way people responded to the PhilPapers survey.
I’m also a bit puzzled at your focus on the terms “objective” and “subjective.” Did they ask whether morality was objective/subjective in the 2009 or 2020 versions of the survey?
It would be better to ask questions like 'is it a supernatural miracle that morality exists / that humans happen to endorse the True Morality
I doubt that such questions would be better.
Both of these questions are framed in ways that are unconventional with respect to existing positions in metaethics, both are a bit vague, and both are generally hard to interpret.
For instance, a theist could believe that God exists and that God grounds moral truth, but not think that it is a “supernatural miracle” that morality exists. It's also unclear what it means to say morality "exists." Even a moral antirealist might agree that morality exists. That just isn't typically a way that philosophers, especially those working in metaethics, would talk about putative moral claims or facts.
I’d have similar concerns about the unconventionality of asking about “the True Morality.” I study metaethics, and I’m not entirely sure what this would even mean. What does capitalizing it mean?
It also seems to conflate questions about the scope and applicability of moral concerns with questions about what makes moral claims true. More importantly, it seems to conflate descriptive claims about the beliefs people happen to hold with metaethical claims, and may arbitrarily restrict morality to humans in ways that would concern respondents.
I don't know how much this should motivate you to update away from what you're proposing here, but I can offer some relevant background. My primary area of specialization, and the focus of my dissertation research, concerns the empirical study of folk metaethics (that is, the metaethical positions nonphilosophers hold). In particular, my focus is on the methodology of paradigms designed to assess what people think about the nature of morality. Much of my work focuses on identifying ways in which questions about metaethics could be ambiguous, confusing, or otherwise difficult to interpret (see here). This also extends, to a lesser extent, to questions about aesthetics (see here). Much of this work focuses on presenting evidence of interpretative variation specifically in how people respond to questions about metaethics. Interpretative variation refers to the degree to which respondents in a study interpret the same set of stimuli differently from one another. I have amassed considerable evidence of interpretative variation in lay populations specifically with respect to how they respond to questions about metaethics.
While I am confident there is interpretative variation in how philosophers responded to the questions in the PhilPapers survey, I'm skeptical that such variation would encompass such radically divergent conceptions of moral realism that the number of respondents who endorsed what I'd consider unobjectionable notions of realism would be anything more than a very small minority.
I say all this to make a point: there may be literally no person on the planet more aware of, and sensitive to, concerns about how people would interpret questions about metaethics. And I am still arguing that you are very likely missing the mark in this particular case.
I also wanted to add that I am generally receptive to the kind of approach you are taking. My approach to many issues in philosophy is roughly aligned with quietists and draws heavily on identifying cases in which a dispute turns out to be a pseudodispute predicated on imprecision in language or confusion about concepts. More generally, I tend to take a quietist or a "dissolve the problem away" kind of approach. I say this to emphasize that it is generally in my nature to favor the kind of position you're arguing for here, and that I nevertheless think it is off the mark in this particular case. Perhaps the closest analogy I could make would be to theism: there is enough overlap in what theism refers to that the most sensible stance to adopt is atheism.
The combination of the two proposed explanations for why certain fields have a higher rate of one-boxing than others seems kind of plausible, but also very suspicious. Being more like decision theorists than like normies (and thus possibly getting more exposure to pro-two-boxing arguments that are popular among decision theorists) seems very similar to being more predisposed to good critical thinking on these sorts of topics (and thus possibly more likely to support one-boxing for correct reasons). So, by combining these two effects, we can explain why people in some subfield might be more likely than average to one-box, and also why people in that same subfield might be more likely than average to two-box, and just pick whichever of these explanations correctly predicts whatever people in that field end up answering.
Of course, this complaint makes it seem especially strange that two-boxing ended up being so popular among decision theorists.
Yeah, I don't think that combo of hypotheses is totally unfalsifiable (e.g., normative ethicists doing so well is IMO a strike against my hypotheses), but it's definitely flexible enough that it has to get a lot less credit for correct predictions. It's harder to falsify, so it doesn't win many points when it's verified.
Fortunately, both parts of the hypothesis can be tested in some ways separately. E.g., maybe I'm wrong about 'most non-philosophers one-box' and the Guardian poll was a fluke; I haven't double-checked yet, and don't feel that confident in a single Guardian survey.
With Newcomb's Problem, I always wonder how much the issue is confounded by formulations like "Omega predicted correctly in 99% of past cases", where given some normally reasonable assumptions (even really good predictors probably aren't running a literal copy of your mind), it's easy to conclude you're being reflective enough about the decision to be in a small minority of unpredictable people. I would be interested in seeing statistics on a version of Newcomb's Problem that explicitly said Omega predicts correctly all of the time because it runs an identical copy of you and your environment.
I also wonder if anyone has argued that you-the-atoms should two-box, you-the-algorithm should one-box, and which entity "you" refers to is just a semantic issue.
For what it's worth, I know of at least one decision theorist who is very familiar with and closely associated with the LessWrong community and who, at least at one point not long ago, leaned toward two-boxing. I think he may have changed his mind since then, but this is at least a data point showing that it's not a given that philosophers closely aligned with LessWrong-style thinking will one-box.
Yeah, I see possible signs of this in the survey data itself -- decision theorists strongly favor two-boxing, but a lot of their other answers are surprisingly LW-like if there's no causal connection like 'decision theorists are unusually likely to read LW'. It's one reasonable explanation, anyway.
I'm confused what “hidden-variable” interpretation this survey is referring to. “Hidden-variable,” to me, sounds inconsistent with Bell's theorem, in which case I would say that shows just as much physics illiteracy as the A-theory of time. But maybe they just mean “hidden-variable” to refer to pilot wave theory or something?
Hey Rob, on the question of God, you wrote: “This question is 'philosophy in easy mode', so seems like a decent proxy for field health / competence”
Saying that this is philosophy in easy mode implies that the answer is obvious, and the way you phrased it above makes it seem like atheism is obviously the correct answer.
How would you answer a question I asked about a year ago: Besides implementation details, what differences are there between rationalists' conception of benevolent AGI and the monotheistic conception of an omnipotent, omniscient, and benevolent God? (source tweet)
Besides implementation details, what differences are there between rationalists' conception of benevolent AGI and the monotheistic conception of an omnipotent, omniscient, and benevolent God?
We could distinguish belief in something with hope that it will exist. For example, one could hope that they won't get a disease without committing to the belief that they won't get that disease.
If by "rationalist conception of a benevolent AGI" you are referring to a belief that such an entity will come into existence, then I think one of the primary differences between this and the monotheistic conception of God, is that rationalists don't necessarily claim that such a benevolent entity will come into existence. At most, they claim it would simply be good if one (or many) were developed. But it does not seem inevitable, hence the efforts to ensure that AI is developed safely.
That’s a good distinction between hoping something will exist and believing that something exists! Thanks.
I don't know what you meant to set aside by saying "Besides implementation details", but it seems worth noting that the most important difference is that AGI (if it existed today) would be a naturalistic posit, not a supernatural or magical hypothesis.
To my eye, your question sounds like 'What's the difference between believing sorcerers exist who can conjure arbitrarily large fireballs, and believing engineers exist who can build flamethrowers?' One is magical (seems strongly contrary to the general character of physical law, treats human-psychology-ish concepts as fundamental rather than physics-ish concepts, etc.), the other isn't.
Rationalists may conceive of an AGI with great power, knowledge, and benevolence, and even believe that such a thing could exist in the future, but they do not currently believe it exists, nor that it would be maximal in any of those traits. If it has those traits to some degree, such a fact would need to be determined empirically based on the apparent actions of this AGI, and only then believed.
Such a being might come to be worshipped by rationalists, as they convert to AGI-theism. However, AGI-atheism is the obviously correct answer for the time being, for the same reason monotheistic-atheism is.
What empirical evidence would someone need to observe to believe that such an AGI, that is maximal in any of those traits, exists?
Maximality of those traits? I don't think that's empirically determinable at all, and certainly not practically measurable by humans.
One can certainly have beliefs about comparative levels of power, knowledge, and benevolence. The types of evidence for and against them should be pretty obvious under most circumstances. Evidence against those traits being greater than some particular standard is also evidence against maximality of those traits. However, evidence for reaching some particular standard is only evidence for maximality if you already believe that the standard in question is the highest that can possibly exist.
I don't see any reason why we should believe that any standard that we can empirically determine is maximal, so I don't think that one can rationally believe some entity to be maximal in any such trait. At best, we can have evidence that they are far beyond human capability.
The most likely scenario for human-AGI contact is some group of humans creating an AGI themselves, in which case all we need to do is confirm its general intelligence to verify the existence of it as an AGI. If we have no information about a general intelligence's origins, or its implementation details, I doubt we could ever empirically determine that it is artificial (and therefore an AGI). We could empirically determine that a general intelligence knows the correct answer to every question we ask (great knowledge), can do anything we ask it to (great power), and does do everything we want it to do (great benevolence), but it could easily have constraints on its knowledge and abilities that we as humans cannot test.
I will grant you this: just as sufficiently advanced technology would be indistinguishable from magic, a sufficiently advanced AGI would be indistinguishable from a god. However, "there exists some entity that is omnipotent, omniscient, and omnibenevolent" is not well-defined enough to be truth-apt; there are no empirical consequences of its being true rather than false.
I am put off by the repeated implications that one-boxing in Newcomb's problem is correct. I understand that this view is popular here, but it seems unreasonably confident to react to seeing decision theorists two-box with "why are the experts wrong?" rather than "hmm, maybe they are right". Especially when you go on to see that you generally agree with them on many other issues. Of course, as a two-boxer myself I am biased, but without actually discussing Newcomb's paradox, I think that this data is some strong evidence that the view should be treated more seriously than this.
The position that one-boxing is correct is not held so lightly on this site that one survey could shift this position much.
You can find numerous discussions of Newcomb's Problem on Less Wrong here: https://www.lesswrong.com/tag/newcomb-s-problem
I'm pretty sure I understand that perspective, and I'd be happy to discuss any object-level arguments for two-boxing you'd like. :) If I'm wrong, I want to learn I'm wrong!
But I already knew that two-boxing was more popular than one-boxing (going back to the 2009 survey), so this survey isn't a large update on that front. There are plenty of other Qs on the PhilPapers survey I feel super uncertain about; this just doesn't happen to be one of them (at least, not to such a degree that the survey data alone can shift me toward two-boxing). If philosophers of religion agreed with me about MWI, mind uploading, and the B theory of time, I wouldn't update toward theism, either; it's just not a mysterious or unfamiliar enough topic to me, as someone who's gone pretty deep both on the arguments for and against theism, and on the arguments for and against two-boxing.
Another way of putting this is that argument screens off authority.
It's not that appeals to authority are invalid -- it's that I already understand the arguments that make these authorities endorse two-boxing, so (non-negligibly) updating based on the authorities and the arguments would be double-counting that evidence.
In 2009, David Bourget and David Chalmers ran the PhilPapers Survey (results, paper), sending questions to "all regular faculty members" at top "Ph.D.-granting [philosophy] departments in English-speaking countries" plus ten other philosophy departments deemed to have "strength in analytic philosophy comparable to the other 89 departments".
Bourget and Chalmers now have a new PhilPapers Survey out, run in 2020 (results, paper). I'll use this post to pick out some findings I found interesting, and say opinionated stuff about them. Keep in mind that I'm focusing on topics and results that dovetail with things I'm curious about (e.g., 'why do academic decision theorists and LW decision theorists disagree so much?'), not giving a neutral overview of the whole 100-question survey.
The new survey's target population consists of:
In order to make comparisons to the 2009 results, the 2020 survey also looked at a "2009-comparable departments" list selected using similar criteria to the 2009 survey:
Based on this description, I expect the "2009-comparable departments" in the 2020 survey to be more elite, influential, and reasonable than the 2020 "target group", so I mostly focus on 2009-comparable departments below. In the tables below, if the row doesn't say "Target" (i.e., target group), the population is "2009-comparable departments".
Note that in the 2020 survey (unlike 2009), respondents could endorse multiple answers.
1. Decision theory
Newcomb's problem: The following groups (with n noting their size, and skipping people who skipped the question or said they weren't sufficiently familiar with it) endorsed the following options in the 2020 survey:
5% of decision theorists said they "accept a combination of views", and 9% said they were "agnostic/undecided".
I think decision theorists are astonishingly wrong here, so I was curious to see if other philosophy fields did better.
I looked at every field where enough surveyed people gave their views on Newcomb's problem. Here they are in order of 'how much more likely are they to two-box than to one-box':
(Note that many of these groups are small-n. Since philosophers of computing and information were an especially small and weird group, and I expect LWers to be extra interested in this group, I also looked at the target-group version for this field.)
Every field did much better than decision theory (by the "getting more utility in Newcomb's problem" metric). However, the only fields that favored one-boxing over two-boxing were ancient Greek and Roman philosophy, and aesthetics.
After those two fields, the best fields were philosophy of cognitive science, applied ethics, metaphilosophy, philosophy of mathematics, and 17th/18th century philosophy (only 4-5% more likely to two-box than one-box), followed by philosophy of religion, normative ethics, and metaphysics.
My quick post-hoc, low-confidence guess about why these fields did relatively well is (hiding behind a spoiler tag so others can make their own unanchored guesses):
My inclination is to model the aestheticians, historians of philosophy, philosophers of religion, and applied ethicists as 'in-between' analytic philosophers and the general public (who one-box more often than they two-box, unlike analytic philosophers). I think of specialists in those fields as relatively normal people, who have had less exposure to analytic-philosophy culture and ideas and whose views therefore tend to more closely resemble the views of some person on the street.
This would also explain why the "2009-comparable departments", who I expected to be more elite and analytic-philosophy-ish, did so much worse than the "target group" here.
I would have guessed, however, that philosophers of gender/race/sexuality would also have done relatively well on Newcomb's problem, if 'analytic-philosophy-ness' were the driving factor.
I'm pretty confused about this, though the small n for some of these populations means that a lot of this could be pretty random. (E.g., network effects: a single just-for-fun faculty email thread about Newcomb's problem could convince a bunch of philosophers of sexuality that two-boxing is great. Then this would show up in the survey because very few philosophers of sexuality have ever even heard of Newcomb's problem, and the ones who haven't heard of it aren't included.)
At the same time, my inclination is to treat philosophers of cognitive science, mathematics, normative ethics, metaphysics, and metaphilosophy as 'heavily embedded in analytic philosophy land, but smart enough (/ healthy enough as a field) to see through the bad arguments for two-boxing to some extent'.
There's also a question of why cognitive science would help philosophers do better on Newcomb's problem, when computer science doesn't. I wonder if the kinds of debates that are popular in computer science are the sort that attract people with bad epistemics? ('Wow, the Chinese room argument is amazing, I want to work in this field!') I really have no idea, and wouldn't have predicted this in advance.
Normative ethics also surprises me here. And both of my explanations for 'why did field X do well?' are post-hoc, and based on my prior sense that some of these fields are much smarter and more reasonable than others.
It's very plausible that there's some difference between the factors that make aestheticians one-box more, and the factors that make philosophers of cognitive science one-box more. To be confident in my particular explanations, however, we'd want to run various tests and look at various other comparisons between the groups.
The fields that did the worst after decision theory were philosophy of gender/race/sexuality, 20th-century philosophy, philosophy of language, philosophy of law, political philosophy, and philosophy of biology, of social science, and of science-in-general.
A separate question is whether academic decision theory has gotten better since the 2009 survey. Eyeballing the (small-n) numbers, the answer is that it seems to have gotten worse: two-boxing became even more popular (in 2009-comparable departments), and one-boxing even less popular:
n=31 for the 2009 side of the comparison, n=22 for the 2020 side. The numbers above are different from the ones I originally presented because Bourget and Chalmers include "skip" and "insufficiently familiar" answers, and exclude responses that chose multiple options, in order to make the methodology more closely match that of the 2009 survey.
2. (Non-animal) ethics
Regarding "Meta-ethics: moral realism or moral anti-realism?":
Regarding "Moral judgment: non-cognitivism or cognitivism?":
Regarding "Morality: expressivism, naturalist realism, constructivism, error theory, or non-naturalism?":
Regarding "Normative ethics: virtue ethics, consequentialism, or deontology?" (putting in parentheses the percentage that only chose the option in question):
Excluding responses that endorsed multiple options, we can see that normative ethicists have moved away from deontology and towards virtue ethics since 2009, though deontology is still the most popular:
30 normative-ethicist respondents also wrote in "pluralism" or "pluralist" in the 2020 survey.
Regarding "Trolley problem (five straight ahead, one on side track, turn requires switching, what ought one do?): don't switch or switch?":
Regarding "Footbridge (pushing man off bridge will save five on track below, what ought one do?): push or don't push?":
Regarding "Human genetic engineering: permissible or impermissible?":
Regarding "Well-being: hedonism/experientialism, desire satisfaction, or objective list?":
Moral internalism "holds that a person cannot sincerely make a moral judgment without being motivated at least to some degree to abide by her judgment". Regarding "Moral motivation: externalism or internalism?":
One of the largest changes in philosophers' views since the 2009 survey is that philosophers have somewhat shifted toward externalism. In 2009, internalism was 5% more popular than externalism; now externalism is 3% more popular than internalism.
(Again, the 2009-2020 comparisons give different numbers for 2020 in order to make the two surveys' methodologies more similar.)
3. Minds and animal ethics
Regarding "Hard problem of consciousness (is there one?): no or yes?":
Regarding "Mind: non-physicalism or physicalism?":
Regarding "Consciousness: identity theory, panpsychism, eliminativism, dualism, or functionalism?":
Regarding "Zombies: conceivable but not metaphysically possible, metaphysically possible, or inconceivable?" (also noting "agnostic/undecided" results):
+impossible
My understanding is that the "psychological view" of personal identity more or less says 'you're software', the "biological view" says 'you're hardware', and the "further-fact view" says 'you're a supernatural soul'. Regarding "Personal identity: further-fact view, psychological view, or biological view?":
Comparing this to some other philosophy subfields, as a gauge of their health:
Decision theorists come out of this looking pretty great (I claim). This is particularly interesting to me, because some people diagnose the 'academic decision theorist vs. LW decision theorist' disagreement as coming down to 'do you identify with your algorithm or with your physical body?'.
The above is some evidence that either this diagnosis is wrong, or academic decision theorists haven't fully followed their psychological view of personal identity to its logical conclusions.
Regarding "Mind uploading (brain replaced by digital emulation): survival or death?" (adding answers for "the question is too unclear to answer" and "there is no fact of the matter"):
From my perspective, decision theorists do great on this question — very few endorse "death", and a lot endorse "there is no fact of the matter" (which, along with "survival", strike me as good indirect signs of clear thinking given that this is a kind-of-terminological question and, depending on terminology, "death" is at best a technically-true-but-misleading answer).
Also, a respectable 25% of decision theorists say "agnostic/undecided", which is almost always something I give philosophers points for — no one's an expert on everything, a lot of these questions are confusing, and recognizing the limits of your own understanding is a very positive sign.
Regarding "Chinese room: doesn't understand or understands?" (adding "the question is too unclear to answer" responses):
Regarding "Other minds (for which groups are some members conscious?)" (looking only at the "2009-comparable departments", except for philosophy of computing and information because there aren't viewable results for that subgroup):
(Options: adult humans; cats; fish; flies; worms; plants; particles; newborn babies; current AI systems; future AI systems.)
(Respondent groups: philosophers; applied ethicists; decision theorists; meta-ethicists; metaphysicians; normative ethicists; philosophy of biology; philosophers of cognitive science; philosophers of computing and information; philosophers of mathematics; philosophers of mind.)
I am confused, delighted, and a little frightened that an equal (and not-super-large) number of decision theorists think adult humans and cats are conscious. (Though as always, small n.)
Also impressed that they gave a low probability to newborn humans being conscious — it seems hard to be confident about the answer to this, and being willing to entertain 'well, maybe not' seems like a strong sign of epistemic humility beating out motivated reasoning.
Also, 11% of philosophers of cognitive science think PLANTS are conscious??? Friendship with philosophers of cognitive science ended, decision theorists new best friend.
Regarding "Eating animals and animal products (is it permissible to eat animals and/or animal products in ordinary circumstances?): vegetarianism (no and yes), veganism (no and no), or omnivorism (yes and yes)?":
4. Metaphysics, philosophy of physics, and anthropics
Regarding "Sleeping beauty (woken once if heads, woken twice if tails, credence in heads on waking?): one-half or one-third?" (including the answers "this question is too unclear to answer," "accept an alternative view," "there is no fact of the matter," and "agnostic/undecided"):
Regarding "Cosmological fine-tuning (what explains it?): no fine-tuning, brute fact, design, or multiverse?":
Regarding "Quantum mechanics: epistemic, hidden-variables, many-worlds, or collapse?":
From SEP:
Regarding "Causation: process/production, primitive, counterfactual/difference-making, or nonexistent?":
Regarding "Foundations of mathematics: constructivism/intuitionism, structuralism, set-theoretic, logicism, or formalism?":
5. Superstition
Regarding "God: atheism or theism?" (with subfields ordered by percentage that answered "theism"):
This question is 'philosophy in easy mode', so seems like a decent proxy for field health / competence (though the anti-religiosity of Marxism is a confounding factor in my eyes, for fields where Marx is influential).
The "A-theory of time" says that there is a unique objectively real "present", corresponding to "which time seems to me to be right now", that is universal and observer-independent, contrary to special relativity. The "B-theory of time" says that there is no such objective, universal "present".
This provides another good "reasonableness / basic science literacy" litmus test, so I'll order the subfields (where enough people in the field answered at all) by how much more they endorse B-theory over A-theory. Regarding "Time: B-theory or A-theory?":
Decision theorists doing especially well here is surprising to me! Especially since they didn't excel on theism; if they'd hit both out of the park, from my perspective that would have been a straightforward update to "wow, decision theorists are really exceptionally reasonable as analytic philosophers go, even if they're getting Newcomb's problem in particular wrong".
As is, this still strikes me as a reason to be more optimistic that we might be able to converge with working decision theorists in the future. (Or perhaps more so, a reason to be relatively optimistic about persuading decision theorists vs. people working in most other philosophy areas.)
(Added: OK, after writing this I saw decision theorists do great on the 'personal identity' and 'mind uploading' questions, and am feeling much more confident that productive dialogue is possible. I've added those two questions earlier in this post.)
(Added added: OK, decision theorists are also unusually great on "which things are conscious?" and they apparently love MWI. How have we not converged more???)
6. Identity politics topics
Regarding "Race: social, unreal, or biological?":
(Note that many respondents said 'yes' to multiple options.)
7. Metaphilosophy
Regarding "Philosophical progress (is there any?): a little, a lot, or none?":
Regarding "Philosophical knowledge (is there any?): a little, none, or a lot?":
Another interesting result is "Philosophical methods (which methods are the most useful/important?)", which finds (looking at analogous-to-2009 departments):
8. How have philosophers' views changed since 2009?
Bourget and Chalmers' paper has a table for the largest changes in philosophers' views since 2009:
As noted earlier in this post, one of the larger shifts in philosophers' views was a move away from moral internalism and toward externalism.
On 'which do you endorse, classical logic or non-classical?' (a strange question, but maybe this is something like 'what kind of logic is reality's source code written in?'), non-classical logic is roughly as unpopular as ever, but fewer now endorse classical logic, and more give answers like "insufficiently familiar with the issue" and "the question is too unclear to answer":
Epistemic contextualism says that the accuracy of your claim that someone "knows" something depends partly on contextual features — e.g., the standards for "knowledge" can rise "as the stakes rise or the skeptical doubts become more serious".
Here, it was the less popular view (invariantism) that lost favor; and again the view lost favor via an increase in 'other' answers (especially "insufficiently familiar with the issue" and "agnostic/undecided") more so than via increased favor for its rival view (contextualism):
Humeanism (a misnomer, since Hume himself wasn't a Humean, though his skeptical arguments helped inspire the Humeans) says that "laws of nature" aren't fundamentally different from other observed regularities; they're just patterns that humans have given a fancy high-falutin name. Anti-Humeans, by contrast, think there's something deeper about laws of nature: that they in some sense 'necessitate' things to go one way rather than another.
(Maybe Humeans = 'laws of nature are program outputs like any other', non-Humeans = 'laws of nature are part of reality's source code'?)
Once again, one view lost favor (the more popular view, non-Humeanism), but the other didn't gain favor; instead, more people endorsed "insufficiently familiar with the issue", and "agnostic/undecided", etc.:
Philosophers in 2020 are more likely to say that "yes", humans have a priori knowledge of some things (already very much the dominant view):
'Aesthetic value is objective' was favored over 'subjective' (by 3%) in 2009; now 'subjective' is favored over 'objective' (by 4%). "Agnostic/undecided" also gained ground.
Philosophers mostly endorsed "switch" in the trolley dilemma, and still do; but "don't switch" gained a bit of ground, and "insufficiently familiar with the issue" lost ground.
Moral realism also became a bit more popular (was endorsed by 56% of philosophers, now 60%), as did compatibilism about free will (was 59% compatibilism, 14% libertarianism, 12% no free will; now 62%, 13%, and 10%).
The paper also looked at the individual respondents who answered the survey in both 2009 and 2020. Individuals tended to update away from switching in the trolley dilemma, away from consequentialism, and toward virtue ethics and non-cognitivism. They also updated toward Platonism about abstract objects, and away from 'no free will'.
These are all comparisons across 2009-target-population philosophers in general, however. In most (though not all) cases, I'm more interested in the views of subfields specialized in investigating and debating a topic, and how the subfield's view changes over time. Hence my earlier sections largely focused on particular fields of philosophy.