[epistemic status: thinking out loud; reporting high-level impressions based on a decent amount of data, but my impressions might shift quickly if someone proposed a promising new causal story of what's going on]

[context warning: If you're a philosopher whose first encounter with LessWrong happens to be this post, you'll probably be very confused and put off by my suggestion that LW outperforms analytic philosophy.

To that, all I can really say without starting a very long conversation is: the typical Internet community that compares itself favorably to an academic field will obviously be crazy or stupid. And yet academic fields can be dysfunctional, and low-hanging fruit can sometimes go unplucked for quite a while; so while it makes sense to have a low prior probability on this kind of thing, this kind of claim can be true, and it's important to be able to update toward it (and talk about it) in cases where it does turn out to be true.

There are about 6700 US philosophy faculty, versus about 6000 LessWrong commenters to date; but the philosophy faculty are doing this as their day job, while the LessWrong users are almost all doing it in their off time. So the claim that LW outperforms is prima facie interesting, and warrants some explanation.

OK, enough disclaimers.]


A month ago, Chana Messinger said:

Rob says, "as an analytic philosopher, I vote for resolving this disagreement by coining different terms with stipulated meanings."

But I constantly hear people complain that philosophers fail to distinguish the different things they mean by words, and that if they just disambiguated, so many philosophical issues would be solved; I heard this most recently from Sam and Spencer on Spencer's podcast.

What's going on here? Are philosophers good at this or bad at this? Would disambiguation clear up philosophical disputes?

My cards on the table: I understand analytic philosophers to be very into clearly defining their terms, and a lot of what happens in academic philosophy is arguments about which definitions capture which intuitions or have which properties, and to what degree. But I'm very curious to find out if that's wrong.

Sam Rosen replied:

Philosophers are good at coming up with distinctions. They are not good at saying, “the debate about the true meaning of knowledge is inherently silly; let’s collaboratively map out concept space instead.”

An edited version of my reply to Chana and Sam:


Alternative hypothesis: philosophers are OK at saying 'this debate is unimportant'; but...

 

  • (a) ... if that's your whole view, there's not much to say about it.

    Sometimes, philosophers do convince the whole field in one fell swoop. A Bertrand Russell comes along and closes the door on a lot of disputes, and future generations just don't hear about them anymore.

    But if you fail to convince enough of your colleagues, then the people who think this is important will just keep publishing about it, while the people who think the debate is dumb will roll their eyes and work on something else. I think philosophers in a given subfield tend to think that a large number of the disputes in other subfields are silly and/or unimportant.
     
  • (b) ... there's a culture of being relaxed, or something to that effect, in philosophy?

    Philosophical fields are fine with playing around with cute conceptual questions, and largely feel no need to move on to more important things when someone gives a kinda-compelling argument for 'this is unimportant'.

    Prominent 20th-century philosophers like David Lewis and Peter van Inwagen acquired a lot of their positive reputation from the fact that all their colleagues agreed that their view was obviously silly and stupid, but there was some disagreement and subtlety in saying why they were wrong, and they proved to be a useful foil for a lot of alternative views. Philosophers don't get nerd-sniped from their more important work; nerd-sniping just is the effective measure of philosophical importance.

 

We've still ended up with a large literature of philosophers arguing that this or that philosophical dispute is non-substantive.

There's a dizzying variety of different words used for a dizzying variety of different positions to the effect of 'this isn't important' and/or 'this isn't real'.

There are massive literatures drawing out the fine distinctions between different deflationary vs. anti-realist vs. nominalist vs. nihilist vs. reductionist vs. eliminativist vs. skeptical vs. fictionalist vs. ... variants of positions.

Thousands of pages have been written on 'what makes a dispute merely verbal, vs. substantive? and how do we tell the difference?'. Thousands of journal articles cite Goodman's 'grue and bleen' (and others discuss Hirsch's 'incar and outcar', etc.) as classic encapsulations of the problem 'when are concepts joint-carving, and when are words poorly-fitted to the physical world's natural clusters?'. And then there's the legendary "Holes," written by analytic philosophers for analytic philosophers, satirizing and distilling the well-known rhythm of philosophical debates about which things are fundamental or real vs. derived or illusory.

It's obviously not that philosophers have never heard of 'what if this dispute isn't substantive?? what if it's merely verbal??'.

They hear about this constantly. This is one of the most basic and common things they argue about. Analytic philosophers sometimes seem to be trying to one-up each other about how deflationary and anti-realist they can be. (See "the picture of reality as an amorphous lump".) Other times, they seem to relish contrarian opportunities to show how metaphysically promiscuous they can be.

 

I do think LW strikingly outperforms analytic philosophy. But the reason is definitely not 'analytic philosophers have literally never considered being more deflationary'.

Arguably the big story of 20th-century analytic philosophy is precisely 'folks like the logical positivists and behaviorists and Quineans and ordinary language philosophers express tons of skepticism about whether all these philosophical disputes are substantive, and they end up dominating the landscape for many decades, until in the 1980s the intellectual tide starts turning around'.

Notably, I think the tide was right to turn around. I think mid-20th-century philosophers' skepticism (even though it touched on some very LW-y themes!) was coming from a correct place on an intuitive level, but their arguments for rejecting metaphysics were total crap. I consider it a healthy development that philosophy stopped prejudicially rejecting all 'unsciencey' things, and started demanding better arguments.

 

Why does LW outperform analytic philosophy? (Both in terms of having some individuals who have made surprisingly large progress on traditional philosophical questions, and in terms of the community as a whole ending up with a better baseline set of positions and heuristics than you see in analytic philosophy. This is taking into account that LW puts relatively few person-hours into philosophy, that many LWers lack formal training in philosophy, etc.)

I suspect it's a few subtler differences.

  • "Something to protect" is very much in the water here. It's normal and OK to actually care in your bones about figuring out which topics are unimportant—care in a tangible "lives are on the line" sort of way—and to avoid those.

    No one will look at you funny if you make big unusual changes to your life to translate your ideas into practice. If you're making ethics a focus area, you're expected to actually get better results, and if you don't, it's not just a cute self-deprecating story to tell at dinner parties.
     
  • LW has a culture of ambition, audacity, and 'rudeness', and historically (going back to Eliezer's sequence posts) there's been an established norm of 'it's socially OK to dive super deep into philosophical debates' and 'it's socially OK to totally dismiss and belittle philosophical debates when they seem silly to you'.

    I... can't think of another example of a vibrant intellectual community in the last century that made both of those moves 'OK'? And I think this is a pretty damned important combination. You need both moves to be fully available.
     
  • Likewise, LW has a culture of 'we love systematicity and grand Theories of Everything!' combined with the high level of skepticism and fox-ishness encouraged in modern science.

    There are innumerable communities that have one or the other, but I think the magic comes from the combination of the two, which can keep a community from flanderizing in one direction or the other.
     
  • More specifically, LWers are very into Bayesianism, and this actually matters a hell of a lot.

    E.g., I think the lack of a background 'all knowledge requires thermodynamic work' model in the field explains the popularity of epiphenomenalism-like views in philosophy of mind.

    And again, there are plenty of Bayesians in academic philosophy. There's even Good and Real, the philosophy book that independently discovered many of the core ideas in the sequences. But the philosophers of mind mostly don't study epistemology in depth, and there isn't a critical mass of 'enough Bayesians in analytic philosophy that they can just talk to each other and build larger edifices everywhere without constantly having to return to 101-level questions about why Bayes is good'.
     
  • This maybe points at an underlying reason that academic philosophy hasn't converged on more right answers: some of those answers require more technical ability than is typically expected in analytic philosophy. So when someone publishes an argument that's pretty conclusive, but requires strong technical understanding and well-honed formal intuitions, it's a lot more likely the argument will go ignored, or will take decades (rather than months) to change minds. More subtly, the kinds of questions and interests that shape the field are ones that are (or seem!!) easier to tackle without technical intuitions and tools.

    Ten years ago, Marcus Hutter made a focused effort to bring philosophers up to speed on Solomonoff induction and AIXI. But his paper has only been cited 96 times (including self-citations and citations by EAs and non-philosophers), while Schaffer's 2010 paper on whether wholes are metaphysically prior to their parts has racked up 808 citations. This seems to reflect a clear blind spot.
     
  • A meta-explanation: LW was founded by damned good thinkers like Eliezer, Anna, Luke M, and Scott who (a) had lots of freedom to build a new culture from scratch (since they were just casually sharing thoughts with other readers of the same blog, not trying to win games within academia's existing norms), and (b) were smart enough to pick a pretty damned good mix of norms.

    I don't think it's a coincidence that all these good things came together at once. I think there was deliberate reflection about what good thinking-norms and discussion-norms look like, and I think this reflection paid off in spades.

    I think you can get an awful lot of the way toward understanding the discrepancy by positing that communities try to emulate their heroes, and that Anna is a better hero than Leibniz or Kant (if only by virtue of being more recent, and therefore able to build on better edifices of knowledge). And unlike most recent philosophical heroes, LW's heroes were irreverent and status-blind enough to create something closer to a clean break with the errors of past philosophy, keeping the good while thoroughly shunning and stigmatizing the clearly-bad stuff. Otherwise it's too easy for any community that drinks deeply of the good stuff in analytic philosophy to end up imbibing the bad memes too, and recapitulating the things that make analytic philosophy miss the mark pretty often.

 

Weirdly, when I imagine interventions that could help philosophy along, I feel like philosophy's mild academic style gets in the way?

When I think about why LW was able to quickly update toward good decision-theory methods and views, I think of posts like "Newcomb's Problem and Regret of Rationality" that sort of served as a kick in the pants, an emotional reminder "hold on, this line of thinking is totally bonkers." The shortness and informality is good, not just for helping system 1 sit up and pay attention, but for encouraging focus on a simple stand-alone argument that's agnostic to the extra theory and details you could then tack on.

Absent some carefully aimed kicks in the pants, people are mostly happy and content to stick with the easy, cognitively natural grooves human minds find themselves falling into.

Of course, if you just dial up emotional kicks in the pants to 11, you end up with Twitter culture, not LW. So this seems like another smart-founder effect to me: it's important that smart self-aware people chose very specific things to carefully and judiciously kick each other in the pants over.

(The fact that LW is a small community surely helps when it comes to not being Twitter. Larger communities are more vulnerable to ideas getting watered down and/or viral-ized.)

Compare Eliezer's comically uncomplicated "RATIONALISTS SHOULD WIN" argument to the mild-mannered analytic-philosophy version.

(Which covers a lot of other interesting topics! But it's not clear to me that this has caused a course-correction yet. And the field's course-correction should have occurred in 2008–2009, at the latest, not 2018.)

(Also, I hear that the latter paper was written by someone socially adjacent to the rationalists? And they cite MIRI papers. So I guess this progress also might not have happened without LW.)

(Also, Greene's paper of course isn't the first example of an analytic philosopher calling for something like "success-first decision theory". As the paper notes, this tradition has a long history. I'm not concerned with priority here; my point in comparing Greene's paper to Eliezer's blog post is to speak to the sociological question of why, in this case, a community of professionals is converging on truth so much more slowly than a community of mostly-hobbyists.)
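To make the 'winning' framing concrete, here's a minimal back-of-the-envelope sketch (my illustration, not Eliezer's or Greene's formal argument) of expected payoffs in Newcomb's problem as a function of an assumed predictor accuracy p, using the standard $1,000 / $1,000,000 payoffs. It's the straightforward 'condition on the prediction' calculation that the one-boxer appeals to; a CDT theorist would of course dispute the legitimacy of conditioning this way.

```python
# Toy expected-value comparison for Newcomb's problem (illustrative only).
# Assumption: the predictor is correct with probability p, and box contents
# are set according to its prediction before you choose.

def expected_payoff(one_box: bool, p: float) -> float:
    """Expected dollars for a fixed choice, given predictor accuracy p."""
    if one_box:
        # With probability p the predictor foresaw one-boxing and filled the
        # opaque box ($1,000,000); otherwise it's empty and you get nothing.
        return p * 1_000_000 + (1 - p) * 0
    # With probability p the predictor foresaw two-boxing, so you get only the
    # transparent $1,000; otherwise you luck into both boxes.
    return p * 1_000 + (1 - p) * (1_000_000 + 1_000)

for p in (0.5, 0.9, 0.99):
    print(f"p={p}: one-box ${expected_payoff(True, p):,.0f}, "
          f"two-box ${expected_payoff(False, p):,.0f}")
```

With these payoffs, one-boxing pulls ahead as soon as the predictor is even slightly better than chance (just above p = 0.5005), which is the sense in which one-boxers 'win'.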

 

My story is sort of a Thiel-style capitalist account. It was hard to get your philosophy published and widely read/discussed except via academia. But academia had a lot of dysfunction that made it hard to innovate and change minds within that bad system.

The Internet and blogging made it much easier to compete with philosophers; a mountain of different blogs popped up; one happened to have a few unusually good founders; and once their stuff was out there and could compete, a lot of smart people realized it made more sense.

LW is academic philosophy, rebooted with better people than Plato as its Pater Patriae.


A conversation prompted by this post (added: and "What I'd Change About Different Philosophy Fields") on Twitter:

______________________

Ben Levinstein: Hmm. As a professional analytic philosopher, I find myself unable to judge a lot of this. I think philosophers often carve out sub-communities of varying quality and with varying norms. I read LW semi-regularly but don't have an account, and generally wouldn't say it outperforms.

Rob Bensinger: An example of what I have in mind: I think LW is choosing much better philosophical problems to work on than truthmakers, moral internalism, or mereology. I also think it's very bad that most decision theorists two-box, or that anyone worries about whether teleportation is death.

If the philosophical circles you travel in would strongly agree with all that, then I might agree they're on par with LW, and we might just be looking at different parts of a very big elephant.

Ben Levinstein: That could be. I realized I had no idea whether your critique of metaphysics, for instance, was accurate or not because I'm pretty disconnected from most of analytic metaphysics. Just don't know what's going on outside of the work of a very select few.

Rob Bensinger: (At least, most decision theorists two-boxed as of 2009. Maybe things have changed a lot!)

Ben Levinstein: I don't think that's changed, but I also tend not to buy the LW explanations for why decision theorists are thinking along the lines they do. E.g., Joyce and others definitely think they are trying to win but think the reference classes are wrong.

Not taking a side on the merits there, but just saying I have the impression from LW that their understanding of what CDT-defenders take the rules of the game to be tends to be inaccurate.

Rob Bensinger: Sounds like a likely sort of thing for LW to get wrong. Knowing why others think things is a hard problem. Gotta get Joyce posting on LW. :)

Ben Levinstein: I also think every philosopher I know who has looked at Solomonoff just doesn't think it's that good or interesting after a while. We all come away kind of deflated.

Rob Bensinger: I wonder if you feel more deflated than the view A Semitechnical Introductory Dialogue on Solomonoff Induction arrives at? I think Solomonoff is good but not perfect. I'm not sure whether you're gesturing at a disagreement or a different way of phrasing the same position.

Ben Levinstein: I'll take a look! Basically, after working through the technicals I didn't feel like it did much of anything to solve any deep philosophical problems related to induction despite being a very cool idea. Tom Sterkenburg had some good negative stuff, e.g., http://philsci-archive.pitt.edu/12429/

Ben Levinstein:

I guess I have a fair amount to say, but the very quick summary of my thoughts on SI remains the same:

1. Solomonoff Induction is really just subjective Bayesianism + Cromwell's rule + prob 1 that the universe is computable. [A toy sketch of this reading appears after this list.] I could be wrong about the exact details here, but I think this could even be exactly correct. Like, for any subjective Bayesian prior that respects Cromwell's rule and is sure the universe is computable, there exists some UTM that will match it. (Maybe there's some technical tweak I'm missing, but basically, that's right.) So if that's so, then SI doesn't really add anything to the problem of induction aside from saying that the universe is computable.

2. EY makes a lot out of saying you can call shenanigans with ridiculous-looking UTMs. But I mean, you can do the same with ridiculous-looking priors under subjective Bayes. Like, OK, if you just start with a prior of .999999 that Canada will invade the US, I can say you're engaging in shenanigans. Maybe it makes it a bit more obvious if you use UTMs, but I'm not seeing a ton of mileage shenanigans-wise.

3. What I like about SI is that it basically is just another way to think about subjective Bayesianism. Like, you get a cool reframing and conceptual tool, and it is definitely worth knowing about. But I don't at all buy the hype about solving induction or even codifying Ockham's Razor.

4. Man, as usual I'm jealous of some of EY's phrase-turning ability: that line about being a young intelligence with just two bits to rub together is great.
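To make point 1 concrete, here is a toy sketch (my own illustration, not Ben's claim or Hutter's formalism) of Solomonoff-style induction as ordinary Bayesian updating, where each deterministic hypothesis gets prior weight 2^-(description length). The real construction ranges over all programs for a universal Turing machine; the three hand-picked 'hypotheses' and their bit-lengths below are made-up stand-ins, chosen only to show the Bayes mechanics.

```python
# Toy "Solomonoff as Bayes" sketch (illustrative hypothesis class, not real SI/AIXI).
from fractions import Fraction

# Each hypothesis: (name, assumed description length in bits, deterministic bit generator).
HYPOTHESES = [
    ("all zeros",      3, lambda n: 0),
    ("alternating 01", 5, lambda n: n % 2),
    ("all ones",       3, lambda n: 1),
]

def posterior(observed_bits):
    """Bayes update: prior 2^-length, likelihood 1 if the hypothesis reproduces
    the observed prefix and 0 otherwise; returns normalized posterior weights."""
    weights = {}
    for name, length, gen in HYPOTHESES:
        prior = Fraction(1, 2 ** length)
        consistent = all(gen(i) == bit for i, bit in enumerate(observed_bits))
        weights[name] = prior if consistent else Fraction(0)
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}

print(posterior([0]))        # shorter consistent hypothesis ("all zeros") dominates: 4/5 vs 1/5
print(posterior([0, 1, 0]))  # only "alternating 01" survives the data
```

The 'simplicity prior' here is just a particular subjective Bayesian prior over computable hypotheses, which is the spirit of Ben's point 1.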

I think an important piece that's missing here is that LW simply assumes that certain answers to important questions are correct. It's not just that there are social norms that say it's OK to dismiss ideas as stupid if you think they're stupid, it's that there's a rough consensus on which ideas are stupid.

LW has a widespread consensus on Bayesian epistemology, physicalist metaphysics, and consequentialist ethics (not an exhaustive list). And it has good reasons for favoring these positions, but I don't think LW has great responses to all the arguments against these positions. Neither do the alternative positions have great responses to counterarguments from the LW-favored positions.

Analytic philosophy in the academy is stuck with a mess of incompatible views, and philosophers only occasionally succeed in organizing themselves into clusters that share answers to a wide range of fundamental questions.

And they have another problem stemming from the incentives in publishing. Since academic philosophers want citations, there's an advantage to making arguments that don't rely on particular answers to questions where there isn't widespread agreement. Philosophers of science will often avoid invoking causation, for instance, since not everyone believes in it. It takes more work to argue in that fashion, and it constrains what sorts of conclusions you can arrive at.

The obvious pitfalls of organizing around a consensus on the answers to unsolved problems are obvious.

I would draw an analogy like this one: 

Five hundred extremely smart and well-intentioned philosophers of religion (some atheists, some Christians, some Muslims, etc.) have produced an enormous literature discussing the ins and outs of theism and the efficacy of prayer, and there continue to be a number of complexities and unsolved problems related to why certain arguments succeed or fail, even though various groups have strong (conflicting) intuitions to the effect "claim x is going to be true in the end".

In a context like this, I would consider it an important mark in favor of a group if they were 50% better than the philosophers of religion at picking the right claims to say "claim x is going to be true in the end", even if they are no better than the philosophers of religion at conclusively proving to a random human that they're right. (In fact, even if they're somewhat worse.)

To sharpen this question, we can imagine that a group of intellectuals learns that a nearby dam is going to break soon, flooding their town. They can choose to divide up their time between 'evacuating people' and 'praying'. Since prayer doesn't work (I say with confidence, even though I've never read any scholarly work about this), I would score a group in this context based on how well they avoid wasting scarce minutes on prayer. I would give little or no points based on how good their arguments for one allocation or another are, since lives are on the line and the end result is a clearer test. Having compelling-sounding arguments matters, but in the end the physical world judges you on whether you ended up getting the right answer, not on your reasoning per se.

To clarify a few things:

  • Obviously, I'm not saying the difference between LW and analytic philosophy is remotely as drastic as the difference between LW and philosophy of religion. I'm just using the extreme example to highlight a qualitative point.
  • Obviously, if someone comes to this thread saying 'but two-boxing is better than one-boxing', I will reply by giving specific counter-arguments (both formal and heuristic), not by just saying 'my intuition is better than yours!' and stopping there. And obviously I don't expect a random philosopher to instantly assume I'm correct that LWers have good intuitions about this, without spending a lot of time talking with us. I can notice and give credit to someone who has a good empirical track record (by my lights), without expecting everyone on the Internet to take my word for it.
  • Obviously, being a LWer, I care about heuristics of good reasoning. :) And if someone gives sufficiently bad reasons for the right answer, I will worry about whether they're going to get other answers wrong in the future.

But also, I think there's such a thing as having good built-up intuitions about what kinds of conclusions end up turning out to be true, and about what kinds of evidence tend to deserve more weight than other kinds of evidence. This might actually be the big thing LW has over analytic philosophy, so I want to call attention to it and encourage people to poke at what this thing is.

I worry that this doesn't really end up explaining much. We think that our answers to philosophical questions are better than what the analytics have come up with. Why? Because they seem intuitively to be better answers. What explanation do we posit for why our answers are better? Because we start out with better intuitions.

Of course our intuitions might in fact be better, as I (intuitively) think they are. But that explanation is profoundly underwhelming.

This might actually be the big thing LW has over analytic philosophy, so I want to call attention to it and encourage people to poke at what this thing is.

I'm not sure what you mean here, but maybe we're getting at the same thing. Having some explanation for why we might expect our intuitions to be better would make this argument more substantive. I'm sure that anyone can give explanations for why their intuitions are more likely to be right, but it's at least more constraining. Some possibilities:

  • LWers are more status-blind, so their intuitions are less distorted by things that are not about being right
  • Many LWers have a background in non-phil-of-mind cognitive sciences, like AI, neuroscience and psychiatry, which leads them to believe that some ways of thinking are more apt to lead to truth than others, and then adopt the better ones
  • LWers are more likely than analytic philosophers to have extensive experience in a discipline where you get feedback on whether you're right, rather than merely feedback on whether others think you are right, and that might train their intuitions in a useful direction.

I'm not confident that any of these are good explanations, but they illustrate the sort of shape of explanation that I think would be needed to give a useful answer to the question posed in the article.

Those seem like fine partial explanations to me, as do the explanations I listed in the OP. I expect multiple things went right simultaneously; if it were just a single simple tweak, we would expect many other groups to have hit on the same trick.

TAG:

Many LWers have a background in non-phil-of-mind cognitive sciences, like AI, neuroscience and psychiatry, which leads them to believe that some ways of thinking are more apt to lead to truth than others, and then adopt the better ones

LWers are more likely than analytic philosophers to have extensive experience in a discipline where you get feedback on whether you’re right, rather than merely feedback on whether others think you are right, and that might train their intuitions in a useful direction.

It's common for people from other backgrounds to get frustrated with philosophy. But that frustration isn't a good argument that philosophy is being done wrong. Since it is a separate discipline from science, engineering, and so on, there is no particular reason to think that the same techniques will work. If there are reasons why some Weird Trick would work across all disciplines, then it would work in philosophy. But is there one weird trick?


There are about 6700 US philosophy faculty, versus about 6000 LessWrong commenters to date

Ruby from the LW team tells me that there are 5,964 LW users who have made at least 4 (non-spam) comments ever.

The number of users with 10+ karma who have been active in the last 6 months is more like 1,000–1,500.


These are some extraordinary claims. I wonder if there is a metric that mainstream analytical philosophers would agree to use to evaluate statements like 

LW outperforms analytic philosophy

and 

LW is academic philosophy, rebooted with better people than Plato as its Pater Patriae.

Without agreed-upon evaluation criteria, this is just tooting one's own horn, wouldn't you agree?

On the topic of "horn-tooting": see my philosopher-of-religion analogy. It would be hard to come up with a simple metric that would convince most philosophers of religion "LW is better than you at thinking about philosophy of religion". If you actually wanted to reach consensus about this, you'd probably want to start with a long series of discussions about object-level questions and thinking heuristics.

And in the interim, it shouldn't be seen as a status grab for LWers to toot their own horn about being better at philosophy of religion. Toot away! Every toot is an opportunity to be embarrassed later when the philosophers of religion show that they were right all along.

It would be bad to toot if your audience were so credulous that they'd just take your word for it, or if the social consequences of making mistakes were too mild to disincentivize empty boasts. But I don't think LW or analytic philosophy are credulous or forgiving enough to make this a real risk.

If anything, there probably isn't enough horn-tooting in those groups. People are too tempted to false modesty, or too tempted to just steer clear of the topic of relative skill levels. This makes it harder to get feedback about people's rationality and meta-rationality, and it makes a lot of coordination problems harder.

This sounds like a very Eliezer-like approach: "I don't have to convince you, a professional who spent decades learning and researching the subject matter; here is the truth, throw away your old culture and learn from me, even though I never bothered to learn what you learned!" While there are certainly plenty of cases where this is valid, in any kind of evidence-based science the odds of it being successful are slim to none (the infamous QM sequence is one example of a failed foray like that. Well, maybe not failed, just uninteresting). I want to agree with you on the philosophy of religion, of course, because, well, if you start with a failed premise, you can spend all your life analyzing noise, like the writers of the Talmud did. But an outside view says that the Chesterton fence of an existing academic culture is there for a reason, including the philosophical traditions dating back millennia.

An SSC-like approach seems much more reliable in terms of advancing a particular field. Scott spends an inordinate amount of time understanding the existing fences, how they came to be and why they are still there, before advancing an argument for why it might be a good idea to move them, and how to test whether the move is good. I think that leads to him being taken much more seriously by the professionals in the area he writes about.

I gather that both approaches have merit, as there is generally no arguing with someone who is in a "diseased discipline", but one has to be very careful affixing that label on a whole field of research, even if it seems obvious to an outsider. Or to an insider, if you follow the debates about whether string theory is a diseased field in physics.

Still, except for the super-geniuses among us, it is much safer to understand the ins and outs before declaring that the giga-IQ-hours spent by humanity on a given topic are a waste or a dead end. The jury is still out on whether Eliezer and MIRI in general qualify.

Even if the jury's out, it's a poor courtroom that discourages the plaintiff, defendant, witnesses, and attorneys from sharing their epistemic state, for fear of offending others in the courtroom!

It may well be true that sharing your honest models of (say) philosophy of religion is a terrible idea and should never happen in public, if you want to have any hope of convincing any philosophers of religion in the future. But... well, if intellectual discourse is in as grim and lightless a state as all that, I hope we can at least be clear-eyed about how bad that is, and how much better it would be if we somehow found a way to just share our models of the field and discuss those plainly. I can't say it's impossible to end up in situations like that, but I can push for the conditional policy 'if you end up in that kind of situation, be super clear about how terrible this is and keep an eye out for ways to improve on it'.

You don't have to be extremely confident in your view's stability (i.e., whether you expect to change your view a lot based on future evidence) or its transmissibility in order to have a view at all. And if people don't share their views — or especially, if they are happier to share positive views of groups than negative ones, or otherwise have some systemic bias in what they share — the group's aggregate beliefs will be less accurate.

So, see my conversation with Ben Levinstein and my reply to adrusi for some of my reply. An example of what I have in mind by 'LWers outperforming' is the 2009 PhilPapers survey: I'd expect a survey of LW users with 200+ karma to...

  • ... have fewer than 9.1% of respondents endorse "skepticism" or "idealism" about the external world.
  • ... have fewer than 13.7% endorse "libertarianism" about free will (roughly defined as the view "(1) that we do have free will, (2) that free will is not compatible with determinism, and (3) that determinism is therefore false").
  • ... have fewer than 14.6% endorse "theism".
  • ... have fewer than 27.1% endorse "non-physicalism" about minds.
  • ... have fewer than 59.6% endorse "two boxes" in Newcomb's problem, out of the people who gave a non-"Other" answer.
  • ... have fewer than 44% endorse "deontology" or "virtue ethics".
  • ... have fewer than 12.2% endorse the "further-fact view" of personal identity (roughly defined as "the facts about persons and personal identity consist in some further [irreducible, non-physical] fact, typically a fact about Cartesian egos or souls").
  • ... have fewer than 16.9% endorse the "biological view" of personal identity (which says that, e.g., if my brain were put in a new body, I should worry about the welfare of my old brainless body, not about the welfare of my mind or brain).
  • ... have fewer than 31.1% endorse "death" as the thing that happens in "teletransporter (new matter)" thought experiments.
  • ... have fewer than 37% endorse the "A-theory" of time (which rejects the idea of "spacetime as a spread-out manifold with events occurring at different locations in the manifold"), out of the people who gave a non-"Other" answer.
  • ... have fewer than 6.9% endorse an "epistemic" theory of truth (i.e., a view that what's true is what's knowable, or known, or verifiable, or something to that effect).

This is in no way a perfect or complete operationalization, but it at least gestures at the kind of thing I have in mind.

Well, it looks like you declare "outperforming" by your own metric, not by anything generally accepted.

(Also, I take issue with the last two. The philosophical ideas about time are generally not about time, but about "time", i.e. about how humans perceive and understand the passage of time. So distinguishing between A and B is about humans, not about time, unlike, say, Special and General Relativity, which provide a useful model of time and spacetime.

A non-epistemic theory of truth (e.g. there is an objective truth we try to learn) is detrimental in general, because it inevitably deteriorates into debates about untestables, like other branches of a hypothetical multiverse and how to behave morally in an infinite universe.)

Also, most people here, while giving lip service to non-libertarian views of free will, sneak it in anyway, as evidenced by relying on "free choice" in nearly all decision theory discussions.

Well, it looks like you declare "outperforming" by your own metric, not by anything generally accepted.

I am indeed basing my view that philosophers are wrong about stuff on investigating the specific claims philosophers make.

If there were a (short) proof that philosophers were wrong about X that philosophers already accepted, I assume they would just stop believing X and the problem would be solved.

The philosophical ideas about time are generally not about time, but about "time", i.e. about how humans perceive and understand the passage of time.

Nope, the 20th-century philosophical literature discussing time is about time itself, not about (e.g.) human psychological or cultural perceptions of time.

There is also discussion of humans' perception and construction of time (e.g., in Kant), but that's not the context in which the A-theory and B-theory are debated.

The A-theory and B-theory were introduced in 1908, before many philosophers (or even physicists) had heard of special relativity; and 'this view seems unbelievably crazy given special relativity' is in fact one of the main arguments cited in the literature against the A-theory of time.

A non-epistemic theory of truth (e.g. there is an objective truth we try to learn) is detrimental in general, because it inevitably deteriorates into debates about untestables, like other branches of a hypothetical multiverse and how to behave morally in an infinite universe.)

"It's raining" is true even if you can't check. Also, what's testable for one person is different from what's testable for another person. Rather than saying that different things are 'true' or 'false' or 'neither true nor false' depending on which person you are, simpler to just say that "snow is white" is true iff snow is white.

It's not like there's any difficulty in defining a predicate that satisfies the correspondence theory of truth, and this predicate is much closer to what people ordinarily mean by "true" than any epistemic theory of truth's "true" is. So demanding that we abandon the ordinary thing people mean by "truth" just seems confusing and unnecessary.

Doubly so when there's uncertainty or flux about which things are testable. Who can possibly keep track of which things are true vs. false vs. meaningless, when the limits of testability are always changing? Seems exhausting.

Also, most people here, while giving lip service to non-libertarian views of free will, sneak it in anyway, as evidenced by relying on "free choice" in nearly all decision theory discussions.

This is a very bad argument. Using the phrase "free choice" doesn't imply that you endorse libertarian free will.

Well, we may have had this argument before, likely more than once, so probably no point rehashing it. I appreciate you expressing your views succinctly though. 

Just yesterday, a friend commented on the exceptionally high quality of the comments I get by posting on this website. Of your many good points, these are my favorite.

Likewise, LW has a culture of 'we love systematicity and grand Theories of Everything!' combined with the high level of skepticism and fox-ishness encouraged in modern science.

This maybe points at an underlying reason that academic philosophy hasn't converged on more right answers: some of those answers require more technical ability than is typically expected in analytic philosophy.

…unlike most recent philosophical heroes, LW's heroes were irreverent and status-blind enough to create something closer to a clean break with the errors of past philosophy, keeping the good while thoroughly shunning and stigmatizing the clearly-bad stuff.

Does anyone know of any significant effort to collect 'cute conceptual questions' in one place?

I thought you made some excellent points about how many of these ideas are already in the philosophical memespace but just haven't gained dominance.

In Newcomb's Problem and Regret of Rationality, Eliezer's argument is pretty much "I can't provide a fully satisfactory solution, so let's just forget about the theoretical argument which we could never be certain about anyway and use common sense". While I agree that this is a good principle, philosophers who discuss the problem generally aren't trying to figure out what they'd do if they were actually in the situation, but to discover what this problem tells us about the principles of decision theory. The pragmatic solution wouldn't meet this aim. Further, the pragmatic principle would suggest not paying in Counterfactual Mugging.

I guess I have a somewhat interesting perspective on this, given that I don't find the standard LW answers very satisfying for Newcomb's or Counterfactual Mugging, and I've proposed my own approaches which haven't gained much traction but which I consider far more satisfying. Should I take the outside view and assume that I'm way too overconfident about being correct (since I definitely have been in the past, and this is very common among people who propose theories in general)? Or should I take the inside view and downgrade my assessment of how good LW is as a community for philosophy discussion?

Also note that Eliezer's "I haven't written this out yet" was in 2008, and by 2021 I think we have some decent things written on FDT, like Cheating Death in Damascus and Functional Decision Theory: A New Theory of Instrumental Rationality.

You can see some responses here and here. I find them uncompelling.

I think there's something like: LessWrong sometimes tends too hard towards pragmatism and jumps past things that are deserving of closer consideration.

To be fair, though, I think LessWrong does a better job than academic philosophy of being pragmatic enough to be useful for having an impact on the world. I just note that, as with anything, the balance sometimes goes too far: out of a desire to get on with things and say something actionable, LW can fail to carefully consider things that deserve closer consideration.

I think there's something like: LessWrong sometimes tends too hard towards pragmatism and jumps past things that are deserving of closer consideration.

I agree with this. I especially agree that LWers (on average) are too prone to do things like:

  • Hear Eliezer's anti-zombie argument and conclude "oh good, there's no longer anything confusing about the Hard Problem of Consciousness!".
  • Hear about Tegmark's Mathematical Universe Hypothesis and conclude "oh good, there's no longer anything confusing about why there's something rather than nothing!".

On average, I think LWers are more likely to make important errors in the direction of 'prematurely dismissing things that sound un-sciencey' than to make important errors in the direction of 'prematurely embracing un-sciencey things'.

But 'tendency to dismiss things that sound un-sciencey' isn't exactly the dimension I want LW to change on, so I'm wary of optimizing LW in that direction; I'd much rather optimize it in more specific directions that are closer to the specific things I think are true and good.


In short, my position on Newcomb's is as follows: Perfect predictors require determinism, which means that strictly there's only one decision you can make. To talk about choosing between options requires us to construct a counterfactual to compare against. If we construct a counterfactual where you make a different choice, and we want it to be temporally consistent, then given determinism we have to edit the past. Consistency may force us to also edit Omega's prediction and hence the money in the box, but all this is fine since it is a counterfactual. CDT proponents may deny the need for consistency, but then they'd have to justify ignoring changes in past brain state *despite* the presence of a perfect predictor, which may have a way of reading this state.

As far as I'm concerned, the Counterfactual Prisoner's Dilemma provides the most satisfying argument for taking the Counterfactual Mugging seriously. 
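For readers new to the setup, here is a minimal toy calculation (an editorial illustration using the commonly cited $100 / $10,000 payoffs, not the Counterfactual Prisoner's Dilemma argument itself) of why the ex-ante perspective favors paying in Counterfactual Mugging even though, once the coin has landed, paying looks like a pure loss.

```python
# Toy ex-ante evaluation of Counterfactual Mugging (illustrative payoffs only).
# Setup assumed: Omega flips a fair coin; on tails it asks you for $100; on heads
# it would have paid you $10,000 iff it predicts you'd pay on tails.

def expected_value(policy_pays: bool, p_heads: float = 0.5) -> float:
    """Expected dollars, evaluated before the coin flip, for a fixed policy."""
    heads_branch = 10_000 if policy_pays else 0   # rewarded only if you're a payer
    tails_branch = -100 if policy_pays else 0     # the $100 handed over on tails
    return p_heads * heads_branch + (1 - p_heads) * tails_branch

print(expected_value(True))   # 4950.0  (committing to pay is worth ~$4,950 up front)
print(expected_value(False))  # 0.0     (a refuser gets nothing in either branch)
```

After the coin has actually landed tails, paying is a straight $100 loss; the whole dispute is over whether the ex-ante evaluation or the ex-post one should govern the decision.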

TAG:

(b) … there’s a culture of being relaxed, or something to that effect, in philosophy

That is possibly a result of mainstream philosophy being better at metaphilosophy... in the sense of being more skeptical. Once you have rejected the idea that you can converge on The One True Epistemology, you have to give up on the "missionary work" of telling people that they are wrong according to TOTE, and that's your "relaxation".

Philosophers are good at coming up with distinctions. They are not good at saying, “the debate about the true meaning of knowledge is inherently silly; let’s collaboratively map out concept space instead.”

If that means giving up on traditional epistemology, it's not going to help. The thing about traditional terms like "truth" and "knowledge" is that they connect to traditional social moves, like persuasion and agreement. If you can't put down the table stakes of truth and proof, you can't expect the payoff of agreement.

LW is academic philosophy, rebooted with better people than Plato as its Pater Patriae.

LW should not be comparing itself to Plato. It's trying to do something different. The best of what Plato did is, for the most part, orthogonal to what LW does.

You can take the LW worldview totally onboard and still learn a lot from Plato that will not in any way conflict with that worldview.

Or you may find Plato totally useless. But it won't be your adoption of the LW memeplex alone that determines which way you go.
