
lukeprog comments on Less Wrong Rationality and Mainstream Philosophy - Less Wrong

106 Post author: lukeprog 20 March 2011 08:28PM


Comment author: lukeprog 21 March 2011 09:41:26AM *  8 points [-]

What I'm saying is that Less Wrong shouldn't ignore mainstream philosophy.

What I demonstrated above is that, directly or indirectly, Less Wrong has already drawn heavily from mainstream philosophy. It would be odd to suggest that the progress in mainstream philosophy that Less Wrong has already made use of would suddenly stop, justifying a choice to ignore mainstream philosophy in the future.

As for naturalistic philosophy's insights relevant to LW, they are forthcoming. I'll be writing some more philosophical posts in the future.

And actually, my statistical prediction rules post came mostly from me reading a philosophy book (Epistemology and the Psychology of Human Judgment), not from reading psychology books.

Comment author: Eliezer_Yudkowsky 21 March 2011 09:59:59AM 26 points [-]

I'll await your next post, but in retrospect you should have started with the big concrete example of mainstream philosophy doing an LW-style dissolution-to-algorithm not already covered on LW, and then told us that the moral was that we shouldn't ignore mainstream philosophy.

I did the whole sequence on QM to make the final point that people shouldn't trust physicists to get elementary Bayesian problems right. I didn't just walk in and tell them that physicists were untrustworthy.

If you want to make a point about medicine, you start by showing people a Bayesian problem that doctors get wrong; you don't start by telling them that doctors are untrustworthy.

If you want me to believe that philosophy isn't a terribly sick field, devoted to arguing instead of facing real-world tests and admiring problems instead of solving them and moving on, whose poison a novice should avoid in favor of eating healthy fields like settled physics (not string theory) or mainstream AI (not AGI), you're probably better off starting with the specific example first. "I disagree with your decision not to cover terminal vs. instrumental in CEV" doesn't cover it, and neither does "Quineans agree the world is made of atoms". Show me this field's power!

Comment author: lukeprog 21 March 2011 06:10:42PM *  53 points [-]

Eliezer,

When I wrote the post I didn't know that what you meant by "reductionist-grade naturalistic cognitive philosophy" was only the very narrow thing of dissolving philosophical problems to cognitive algorithms. After all, most of the useful philosophy you've done on Less Wrong is not specifically related to that very particular thing... which again supports my point that mainstream philosophy has more to offer than dissolution-to-algorithm. (Unless you think most of your philosophical writing on Less Wrong is useless.)

Also, I don't disagree with your decision not to cover means and ends in CEV.

Anyway. Here are some useful contributions of mainstream philosophy:

  • Quine's naturalized epistemology. Epistemology is a branch of cognitive science: that's where recursive justification hits bottom, in the lens that sees its flaws.
  • Tarski on language and truth. One of Tarski's papers on truth was recently ranked the 4th most important philosophy paper of the 20th century in a survey of philosophers. Philosophers have developed Tarski's account considerably since then, of course.
  • Chalmers' formalization of Good's intelligence explosion argument. Good's 1965 paper was important, but it presented no systematic argument, only hand-waving. Chalmers breaks down Good's argument into parts, examines the plausibility of each part in turn, considers the plausibility of various defeaters and possible paths, and makes a more organized and compelling case for Good's intelligence explosion than anybody at SIAI has.
  • Dennett on belief in belief. Used regularly on Less Wrong.
  • Bratman on intention. Bratman's 1987 book on intention has been a major inspiration to AI researchers working on belief-desire-intention models of intelligent behavior. See, for example, pages 60-61 and 1041 of AIMA (3rd ed.).
  • Functionalism and multiple realizability. The philosophy of mind most natural to AI was introduced and developed by Putnam and Lewis in the 1960s, and more recently by Dennett.
  • Explaining the cognitive processes that generate our intuitions. Both Shafir (1998) and Talbot (2009) summarize and discuss as much as cognitive scientists know about the cognitive mechanisms that produce our intuitions, and use that data to explore which few intuitions can be trusted and which cannot - a conclusion that of course dissolves many philosophical problems generated by conflicts between intuitions. (This is the post I'm drafting, BTW.) Talbot describes the project of his philosophy dissertation at USC this way: "...where psychological research indicates that certain intuitions are likely to be inaccurate, or that whole categories of intuitions are not good evidence, this will overall benefit philosophy. This has the potential to resolve some problems due to conflicting intuitions, since some of the conflicting intuitions may be shown to be unreliable and not to be taken seriously; it also has the potential to free some domains of philosophy from the burden of having to conform to our intuitions, a burden that has been too heavy to bear in many cases..." Sound familiar?
  • Pearl on causality. You acknowledge the breakthrough. While you're right that this is mostly a case of an AI researcher coming in from the outside to solve philosophical problems, Pearl did indeed make use of the existing research in mainstream philosophy (and AI, and statistics) in his book on causality.
  • Drescher's Good and Real. You've praised this book as well, which is the result of Drescher's studies under Dan Dennett at Tufts. And the final chapter is a formal defense of something like Kant's categorical imperative.
  • Dennett's "intentional stance." A useful concept in many contexts, for example here.
  • Bostrom on anthropic reasoning. And global catastrophic risks. And Pascal's mugging. And the doomsday argument. And the simulation argument.
  • Ord on risks with low probabilities and high stakes. Here.
  • Deontic logic. The logic of actions that are permissible, forbidden, obligatory, etc. Not your approach to FAI, but will be useful in constraining the behavior of partially autonomous machines prior to superintelligence, for example in the world's first battlefield robots.
  • Reflective equilibrium. Reflective equilibrium is used in CEV. It was first articulated by Goodman (1965), then by Rawls (1971), and in more detail by Daniels (1996). See also the more computational discussion in Thagard (1988), ch. 7.
  • Experimental philosophy on the biases that infect our moral judgments. Experimental philosophers are now doing Kahneman & Tversky -ish work specific to biases that infect our moral judgments. Knobe, Nichols, Haidt, etc. See an overview in Experiments in Ethics.
  • Greene's work on moral judgment. Joshua Greene is a philosopher and neuroscientist at Harvard whose work using brain scanners and trolley problems (since 2001) is quite literally decoding the algorithms we use to arrive at moral judgments, and helping to dissolve the debate between deontologists and utilitarians (in his view, in favor of utilitarianism).
  • Dennett's Freedom Evolves. The entire book is devoted to explaining the evolutionary processes that produced the cognitive algorithms that produce the experience of free will and the actual kind of free will we do have.
  • Quinean naturalists showing intuitionist philosophers that they are full of shit. See for example, Schwitzgebel and Cushman demonstrating experimentally that moral philosophers have no special expertise in avoiding known biases. This is the kind of thing that brings people around to accepting those very basic starting points of Quinean naturalism as a first step toward doing useful work in philosophy.
  • Bishop & Trout on ameliorative psychology. Much of Less Wrong's writing is about how to use our awareness of cognitive biases to make better decisions and have a higher proportion of beliefs that are true. That is the exact subject of Bishop & Trout (2004), which they call "ameliorative psychology." The book reads like a long sequence of Less Wrong posts, and was the main source of my post on statistical prediction rules, which many people found valuable. And it came about two years before the first Eliezer post on Overcoming Bias. If you think that isn't useful stuff coming from mainstream philosophy, then you're saying a huge chunk of Less Wrong isn't useful.
  • Talbot on intuitionism about consciousness. Talbot (here) argues that intuitionist arguments about consciousness are illegitimate because of the cognitive process that produces them: "Recently, a number of philosophers have turned to folk intuitions about mental states for data about whether or not humans have qualia or phenomenal consciousness. [But] this is inappropriate. Folk judgments studied by these researchers are mostly likely generated by a certain cognitive system - System One - that will ignore qualia when making these judgments, even if qualia exist."
  • "The mechanism behind Gettier intuitions." This upcoming project of the Boulder philosophy department aims to unravel a central (misguided) topic of 20th century epistemology by examining the cognitive mechanisms that produce the debate. Dissolution to algorithm yet again. They have other similar projects ongoing, too.
  • Computational meta-ethics. I don't know if Lokhorst's paper in particular is useful to you, but I suspect that kind of thing will be, and Lokhorst's paper is only the beginning. Lokhorst is trying to implement a meta-ethical system computationally, and then actually testing what the results are.

Of course that's far from all there is, but it's a start.

...also, you occasionally stumble across some neato quotes, like Dennett saying "AI makes philosophy honest." :)

Note that useful insights come from unexpected places. Rawls was not a Quinean naturalist, but his concept of reflective equilibrium plays a central role in your plan for Friendly AI to save the world.

P.S. Predicate logic was removed from the original list for these reasons.

Comment author: [deleted] 21 March 2011 07:11:00PM 7 points [-]

It seems a shame to leave this list with several useful cites as a comment, where it is likely to be missed. Not sure what to suggest - maybe append it to the main article?

Comment author: lukeprog 21 March 2011 07:15:34PM 4 points [-]

I added a link to this list to the end of the original post.

Comment author: Eliezer_Yudkowsky 25 March 2011 07:15:59PM 21 points [-]

Quine's naturalized epistemology. Epistemology is a branch of cognitive science

Saying this may count as staking an exciting position in philosophy, already right there; but merely saying this doesn't shape my expectations about how people think, or tell me how to build an AI, or how to expect or do anything concrete that I couldn't do before, so from an LW perspective this isn't yet a move on the gameboard. At best it introduces a move on the gameboard.

Tarski on language and truth.

I know Tarski as a mathematician and have acknowledged my debt to him as a mathematician. Perhaps you can learn about him in philosophy, but that doesn't imply people should study philosophy if they will also run into Tarski by doing mathematics.
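Either way, the compositional core of Tarski's truth definition - the truth of a compound sentence is determined recursively from the truth of its parts in a model - is concrete enough to sketch in a few lines. A hypothetical toy version for propositional logic only (Tarski's actual work handles quantifiers and satisfaction; the `true_in` function and the tuple encoding are illustrative inventions, not anyone's published formalism):

```python
# A Tarski-style recursive truth definition for propositional formulas.
# Formulas are nested tuples: ('not', f), ('and', f, g), ('or', f, g),
# or an atom name (a string); `v` maps atoms to truth values (a "model").

def true_in(formula, v):
    if isinstance(formula, str):          # atomic sentence
        return v[formula]
    op = formula[0]
    if op == 'not':
        return not true_in(formula[1], v)
    if op == 'and':
        return true_in(formula[1], v) and true_in(formula[2], v)
    if op == 'or':
        return true_in(formula[1], v) or true_in(formula[2], v)
    raise ValueError(f"unknown connective: {op}")

v = {'snow_is_white': True, 'grass_is_red': False}
print(true_in(('and', 'snow_is_white', ('not', 'grass_is_red')), v))  # True
```

The example is the traditional one: "snow is white" is true in the model exactly when the model assigns snow-is-white the value True.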

Chalmers' formalization of Good's intelligence explosion argument...

...was great for introducing mainstream academia to Good, but if you compare it to http://wiki.lesswrong.com/wiki/The_Hanson-Yudkowsky_AI-Foom_Debate then you'll see that most of the issues raised didn't fit into Chalmers's decomposition at all. Not suggesting that he should've done it differently in a first paper, but still, Chalmers's formalization doesn't yet represent most of the debates that have been done in this community. It's more an illustration of how far you have to simplify things down for the sake of getting published in the mainstream, than an argument that you ought to be learning this sort of thing from the mainstream.

Dennett on belief in belief.

Acknowledged and credited. Like Drescher, Dennett is one of the known exceptions.

Bratman on intention. Bratman's 1987 book on intention has been a major inspiration to AI researchers working on belief-desire-intention models of intelligent behavior...

Appears as a citation only in AIMA 2nd edition, described as a philosopher who approves of GOFAI. "Not all philosophers are critical of GOFAI, however; some are, in fact, ardent advocates and even practitioners... Michael Bratman has applied his "belief-desire-intention" model of human psychology (Bratman, 1987) to AI research on planning (Bratman, 1992)." This is the only mention in the 2nd edition. Perhaps by the time they wrote the third edition they read more Bratman and figured that he could be used to describe work they had already done? Not exactly a "major inspiration", if so...

Functionalism and multiple realizability.

This comes under the heading of "things that rather a lot of computer programmers, though not all of them, can see as immediately obvious even if philosophers argue it afterward". I really don't think that computer programmers would be at a loss to understand that different systems can implement the same algorithm if not for Putnam and Lewis.

Explaining the cognitive processes that generate our intuitions... Talbot describes the project of his philosophy dissertation for USC this way: "...where psychological research indicates that certain intuitions are likely to be inaccurate, or that whole categories of intuitions are not good evidence, this will overall benefit philosophy."...

Same comment as for Quine: This might introduce interesting work, but while saying just this may count as an exciting philosophical position, it's not a move on the LW gameboard until you get to specifics. Then it's not a very impressive move unless it involves doing nonobvious reductionism, not just "Bias X might make philosophers want to believe in position Y". You are not being held to a special standard here, Luke; a friend named Kip Werking once did some work arguing that we have lots of cognitive biases pushing us to believe in libertarian free will, which I thought made a nice illustration of the difference between LW-style decomposition of a cognitive algorithm and treating biases as an argument in the war of surface intuitions.

Pearl on causality.

Mathematician and AI researcher. He may have mentioned the philosophical literature in his book. It's what academics do. He may even have read the philosophers before he worked out the answer for himself. He may even have found that reading philosophers getting it wrong helped spur him to think about the problem and deduce the right answer by contrast - I've done some of that over the course of my career, though more in the early phases than the later phases. Can you really describe Pearl's work as "building" on philosophy, when IIRC, most of the philosophers were claiming at this point that causality was a mere illusion of correlation? Has Pearl named a previous philosopher, who was not a mathematician, who Pearl thought was getting it right?
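Whatever its lineage, the seeing-versus-doing distinction Pearl formalized is easy to exhibit in a toy structural model. A hypothetical sketch (the model and variable names are invented for illustration): a confounder U drives both X and Y, so conditioning on X=1 and intervening to set X=1 give different answers:

```python
import random

random.seed(0)

def sample(intervene_x=None):
    """One draw from a toy structural causal model.
    U is a confounder; X := U unless we intervene to set it;
    Y := U, so X has no causal effect on Y at all."""
    u = random.random() < 0.5
    x = u if intervene_x is None else intervene_x
    y = u
    return x, y

# Observational: among draws where X=1, Y is always 1 (via U).
obs = [y for x, y in (sample() for _ in range(10000)) if x]
# Interventional: forcing X=1 leaves Y at its base rate of about 0.5.
do = [y for x, y in (sample(intervene_x=True) for _ in range(10000))]

print(sum(obs) / len(obs))   # exactly 1.0 here - correlation
print(sum(do) / len(do))     # about 0.5    - no causation
```

Conditioning gives P(Y=1 | X=1) = 1, but the intervention gives P(Y=1 | do(X=1)) ≈ 0.5; only the do-notation sees that X is causally inert.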

Drescher's Good and Real.

Previously named by me as good philosophy, as done by an AI researcher coming in from outside for some odd reason. Not exactly a good sign for philosophy when you think about it.

Dennett's "intentional stance."

For a change I actually did read about this before forming my own AI theories. I can't recall ever actually using it, though. It's for helping people who are confused in a way that I wasn't confused to begin with. Dennett is in any case a widely known and named exception.

Bostrom on anthropic reasoning. And global catastrophic risks. And Pascal's mugging. And the doomsday argument. And the simulation argument.

A friend and colleague who was part of the transhumanist community and a founder of the World Transhumanist Association long before he was the Director of the Oxford Future of Humanity Institute. He has done a great deal to precisionize transhumanist ideas about global catastrophic risks and inform academia about them, as well as excellent original work on anthropic reasoning and the simulation argument. Bostrom is familiar with Less Wrong and has even tried to bring some of the work done here into mainstream academia, such as Pascal's Mugging, which was invented right here on Less Wrong by none other than yours truly. Owing to the constraints of academia and their prior unfamiliarity with elementary probability theory and decision theory, though, Bostrom was unable to convey the most exciting part of Pascal's Mugging in his academic writeup: the idea that Solomonoff-induction-style reasoning will explode the size of remote possibilities much faster than their Kolmogorov complexity diminishes their probability.
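The arithmetic behind that last claim can be shown with a toy calculation (the `tower` function below is an invented stand-in for up-arrow notation like 3^^^3): each extra bit of description costs only a factor of 2 in prior probability, yet those bits can name utilities that grow far faster than 2^k, so the expected-value products grow without bound:

```python
# Complexity-penalized priors shrink like 2^-k, but a k-symbol
# description can name numbers that grow much faster than 2^k.

def tower(height, base=2):
    """Iterated exponential: base^(base^(...)), `height` levels deep."""
    result = 1
    for _ in range(height):
        result = base ** result
    return result

for k in range(1, 5):
    prior = 2.0 ** (-k)        # prior weight of a k-bit hypothesis
    utility = tower(k)         # a utility nameable with ~k symbols
    print(k, prior * utility)  # products: 1.0, 1.0, 2.0, 4096.0
```

The penalty is linear in the exponent while the nameable payoff is iterated-exponential, so the product explodes.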

Reading Bostrom is a triumph of the rule "Read the most famous transhumanists" not "Read the most famous philosophers".

The doomsday argument, which was not invented by Bostrom, is a rare case of genuinely interesting work done in mainstream philosophy - anthropic issues are genuinely not obvious, genuinely worth arguing about, and philosophers have done genuinely interesting work on them. Similarly, although LW has gotten further, there has been genuinely interesting work in philosophy on the genuinely interesting problems of Newcomblike dilemmas. There are people in the field who can do good work on the rather rare occasions when there is something worth arguing about that is still classed as "philosophy" rather than as a separate science. But they cannot actually solve those problems (as very clearly illustrated by the Newcomblike case), and the field as a whole is not capable of distinguishing good work from bad work even on the genuinely interesting subjects.

Ord on risks with low probabilities and high stakes.

Argued it on Less Wrong before he wrote the mainstream paper. The LW discussion got further, IMO. (And AFAIK, since I don't know if there was any academic debate or if the paper just dropped into the void.)

Deontic logic

Is not useful for anything in real life / AI. This is instantly obvious to any sufficiently competent AI researcher. See e.g. http://norvig.com/design-patterns/img070.htm, a mention that turned up in passing back when I was doing my own search for prior work on Friendly AI.

...I'll stop there, but do want to note, even if it's out-of-order, that the work you glowingly cite on statistical prediction rules is familiar to me from having read the famous edited volume "Judgment Under Uncertainty: Heuristics and Biases" where it appears as a lovely chapter by Robyn Dawes on "The robust beauty of improper linear models", which quite stuck in my mind (citation from memory). You may have learned about this from philosophy, and I can see how you would credit that as a use of reading philosophy, but it's not work done in philosophy and, well, I didn't learn about it there so this particular citation feels a bit odd to me.
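Dawes' result is easy to reproduce in miniature. A hypothetical simulation (the weights and cue counts below are made up for illustration): an "improper" unit-weight sum of standardized cues correlates with the criterion nearly as well as the environment's true weights do:

```python
import random
import statistics

random.seed(1)

# Criterion y is a weighted sum of three standardized cues plus noise.
n = 5000
true_w = [0.7, 0.5, 0.3]
cues = [[random.gauss(0, 1) for _ in true_w] for _ in range(n)]
y = [sum(w * c for w, c in zip(true_w, row)) + random.gauss(0, 1)
     for row in cues]

def corr(a, b):
    """Pearson correlation of two equal-length sequences."""
    ma, mb = statistics.mean(a), statistics.mean(b)
    cov = sum((p - ma) * (q - mb) for p, q in zip(a, b)) / len(a)
    return cov / (statistics.pstdev(a) * statistics.pstdev(b))

proper = [sum(w * c for w, c in zip(true_w, row)) for row in cues]
improper = [sum(row) for row in cues]   # unit weights: just add the cues

# The unit-weight model lands within a few points of the true weights.
print(round(corr(proper, y), 2), round(corr(improper, y), 2))
```

This mirrors Dawes' finding: once the cues are standardized and signed correctly, equal weights capture most of the achievable predictive validity.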

Comment author: Jack 25 March 2011 08:06:50PM *  15 points [-]

when IIRC, most of the philosophers were claiming at this point that causality was a mere illusion of correlation?

That this isn't at all the case should be obvious even if the only thing you've read on the subject is Pearl's book. The entire counterfactual approach is due to Lewis and Stalnaker. Salmon's theory isn't about correlation either. Also, see James Woodward who has done very similar work to Pearl but from a philosophy department. Pearl cites all of them if I recall.

Comment author: Eliezer_Yudkowsky 25 March 2011 08:15:38PM 4 points [-]

Stalnaker's name sounds familiar from Pearl, so I'll take your word for this and concede the point.

Comment author: komponisto 25 March 2011 07:47:46PM *  1 point [-]

I know Tarski as a mathematician and have acknowledged my debt to him as a mathematician.

As I pointed out before, the same is true for me of Quine. I don't know if lukeprog means to include Mathematical Logic when he keeps saying not to read Quine, but that book was effectively my introduction to the subject, and I still hold it in high regard. It's an elegant system with some important innovations, and features a particularly nice treatment of Gödel's incompleteness theorem (one of his main objectives in writing the book). I don't know if it's the best book on mathematical logic there is (I doubt it), but it appeals to a certain kind of personality, and I would certainly recommend it to a young high-schooler over reading Principia Mathematica, for example.

Comment author: lukeprog 25 March 2011 07:34:30PM *  1 point [-]

Cool. Let me know when you've finished your comment here and I'll respond.

Comment author: Eliezer_Yudkowsky 25 March 2011 08:07:49PM 0 points [-]

Done.

Comment author: lukeprog 25 March 2011 08:40:23PM *  4 points [-]

Quine's naturalized epistemology: agreed.

Tarski: But I thought you said you were not only influenced by Tarski's mathematics but also his philosophical work on truth?

Chalmers' paper: Yeah, it's mostly useful as an overview. I should have clarified that I meant that Chalmers' paper makes a more organized and compelling case for Good's intelligence explosion than anybody at SIAI has in one place. Obviously, your work (and your debate with Robin) goes far beyond Chalmers' introductory paper, but it's scattered all over the place and takes a lot of reading to track down and understand.

And this would be the main reason to learn something from the mainstream: If it takes way less time than tracking down the same arguments and answers through hundreds of Less Wrong posts and other articles, and does a better job of pointing you to other discussions of the relevant ideas.

But we could have the best of both worlds if SIAI spent some time writing well-referenced survey articles on their work in the professional style, instead of telling people to read hundreds of pages of blog posts (that mostly lack references) in order to figure out what you're talking about.

Bratman: I don't know his influence first hand, either - it's just that I've seen his 1987 book mentioned in several books on AI and cognitive science.

Pearl: Jack beat me to the punch on this.

Talbot: I guess I'll have to read more about what you mean by dissolution to cognitive algorithm. I thought the point was that even if you can solve the problem, there's that lingering wonder about why people believe in free will, and once you explain why it is that humans believe in free will, not even a hint of the problem remains. The difference being that your dissolution of free will to cognitive algorithm didn't (as I recall) cite any of the relevant science, whereas Talbot's (and others') dissolutions to cognitive algorithms do cite the relevant science.

Is there somewhere where you explain the difference between what Talbot, and also Kip Werking, have done versus what you think is so special and important about LW-style philosophy?

As for the others: Yeah, we seem to agree that useful work does sometimes come from philosophy, but that it mostly doesn't, and people are better off reading statistics and AI and cognitive science, like I said. So I'm not sure there's anything left to argue.

The one major thing I'd like clarification on (if you can find the time) is the difference between what experimental philosophers are doing (or what Joshua Greene is doing) and the dissolution-to-algorithm that you consider so central to LW-style philosophy.

Comment author: Jack 25 March 2011 09:13:08PM *  9 points [-]

As for the others: Yeah, we seem to agree that useful work does sometimes come from philosophy, but that it mostly doesn't, and people are better off reading statistics and AI and cognitive science, like I said. So I'm not sure there's anything left to argue.

I'd like to emphasize, to no one in particular, that the evaluation that seems to be going on here is about whether or not reading these philosophers is useful for building a Friendly recursively self-improving artificial intelligence. While that's a good criterion for whether or not Eliezer should read them, failure to meet it doesn't render the work of the philosopher valueless (really! it doesn't!). The question "is philosophy helpful for researching AI" is not the same as the question "is philosophy helpful for a rational person trying to better understand the world".

Comment author: Eliezer_Yudkowsky 25 March 2011 08:58:38PM 1 point [-]

Tarski did philosophical work on truth? Apart from his mathematical logic work on truth? Haven't read it if so.

What does Talbot say about a cognitive algorithm generating the appearance of free will? Is it one of the cognitive algorithms referenced in the LW dissolution or a different one? Does Talbot talk about labeling possibilities as reachable? About causal models with separate nodes for self and physics? Can you please take a moment to be specific about this?

Comment author: Jack 25 March 2011 09:27:13PM *  15 points [-]

Tarski did philosophical work on truth? Apart from his mathematical logic work on truth?

Okay, now you're just drawing lines around what you don't like and calling everything in that box philosophy.

Should we just hold a draft? With the first pick the philosophers select... Judea Pearl! What? What's that? The mathematicians have just grabbed Alfred Tarski from right under the noses of the philosophers!

Comment author: lukeprog 25 March 2011 09:05:16PM 5 points [-]

To philosophers, Tarski's work on truth is considered one of the triumphs of 20th century philosophy. But that sort of thing is typical of analytic and especially naturalistic philosophy (including your own philosophy): the lines between mathematics and science and philosophy are pretty fuzzy.

Talbot's paper isn't about free will (though others in experimental philosophy are); it's about the cognitive mechanisms that produce intuitions in general. But anyway this is the post I'm drafting right now, so I'll be happy to pick up the conversation once I've posted it. I might do a post on experimental philosophy and free will, too.

Comment author: Perplexed 27 March 2011 02:51:15AM *  2 points [-]

To philosophers, Tarski's work on truth is considered one of the triumphs of 20th century philosophy.

Yet to Wikipedia, Tarski is a mathematician. Period. Philosophy is not mentioned.

It is true that mathematical logic can be considered as a joint construction by philosophers and mathematicians. Frege, Russell, and Godel are all listed in Wikipedia as both mathematicians and philosophers. So are a couple of modern contributors to logic - Dana Scott and Per Martin-Lof. But just about everyone else who made major contributions to mathematical logic - Peano, Cantor, Hilbert, Zermelo, Skolem, von Neumann, Gentzen, Church, Turing, Kolmogorov, Kleene, Robinson, Curry, Cohen, Lawvere, and Girard - is listed as a mathematician, not a philosopher. To my knowledge, the only pure philosopher who has made a contribution to logic at the level of these people is Kripke, and I'm not sure that should count (because the bulk of his contribution was done before he got to college and picked philosophy as a major. :)

Quine, incidentally, made a minor contribution to mathematical logic with his idea of 'stratified' formulas in his 'New Foundations' version of set theory. Unfortunately, Quine's theory was found to be inconsistent. But a few decades later, a fix was discovered and today some of the most interesting Computer Science work on higher-order logic uses a variant of Quine's idea to avoid Girard's paradox.

Comment author: timtyler 25 March 2011 07:33:15PM *  3 points [-]

Chalmers' formalization of Good's intelligence explosion argument. Good's 1965 paper was important, but it presented no systematic argument; only hand-waving. Chalmers breaks down Good's argument into parts and examines the plausibility of each part in turn, considers the plausibility of various defeaters and possible paths, and makes a more organized and compelling case for Good's intelligence explosion than anybody at SIAI has.

I thought Chalmers was a newbie to all this - and showed it quite a bit. However, a definite step forward from zombies. Next, see if Penrose or Searle can be recruited.

Comment author: Eliezer_Yudkowsky 21 March 2011 07:55:02PM 9 points [-]

When I wrote the post I didn't know that what you meant by "reductionist-grade naturalistic cognitive philosophy" was only the very narrow thing of dissolving philosophical problems to cognitive algorithms.

No, it's more than that, but only things at that level count as useful philosophy. The other things either are not philosophy or are more like background intros.

Amy just arrived and I've got to start book-writing, but I'll take one example from this list, the first one, so that I'm not picking and choosing; later if I've got a moment I'll do some others, in the order listed.

  • Predicate logic.

Funny you should mention that.

There is this incredibly toxic view of predicate logic that I first encountered in Good Old-Fashioned AI. And then this entirely different, highly useful and precise view of the uses and bounds of logic that I encountered when I started studying mathematical logic and learned about things like model theory.

Now considering that philosophers of the sort I inveighed against in "against modal logic" seem to talk and think like the GOFAI people and not like the model-theoretic people, I'm guessing that the GOFAI people made the terrible, horrible, no good, very bad mistake of getting their views of logic from the descendants of Bertrand Russell who still called themselves "philosophers" instead of those descendants who considered themselves part of the thriving edifice of mathematics.

Anyway. If you and I agree that philosophy is an extremely sick field, that there is no standardized repository of the good stuff, that it would be a desperate and terrible mistake for anyone to start their life studying philosophy before they had learned a lot of cognitive science and math and AI algorithms and plain old material science as explained by non-philosophers, and that it's not worth my time to read through philosophy to pick out the good stuff even if there are a few small nuggets of goodness or competent people buried here and there, then I'm not sure we disagree on much - except this post sort of did seem to suggest that people ought to run out and read philosophy-qua-philosophy as written by professional philosophers, rather than this being a terrible mistake.

Will try to get to some of the other items, in order, later.

Comment author: lukeprog 14 May 2011 03:14:06AM 12 points [-]

You may enjoy the following exchange between two philosophers and one mathematician.

Bertrand Russell, speaking of Godel's incompleteness theorem, wrote:

It made me glad that I was no longer working at mathematical logic. If a given set of axioms leads to a contradiction, it is clear that at least one of the axioms must be false.

Wittgenstein dismissed the theorem as trickery:

Mathematics cannot be incomplete; any more than a sense can be incomplete. Whatever I can understand, I must completely understand.

Godel replied:

Russell evidently misinterprets my result; however, he does so in a very interesting manner... In contradistinction Wittgenstein... advances a completely trivial and uninteresting misinterpretation.

According to Gleick (in The Information), the only person who understood Godel's theorem when Godel first presented it was another mathematician, Neumann Janos, who moved to the USA and began presenting it wherever he went, by then calling himself John von Neumann.

The soundtrack for Godel's incompleteness theorem should be, I think, the last couple minutes of 'Ludus' from Tabula Rasa by Arvo Part.

Comment author: Wei_Dai 14 May 2011 08:22:07AM *  13 points [-]

I've been wondering why von Neumann didn't do much work in the foundations of mathematics. (It seems like something he should have been very interested in.) Your comment made me do some searching. It turns out:

John von Neumann was a vain and brilliant man, well used to putting his stamp on a mathematical subject by sheer force of intellect. He had devoted considerable effort to the problem of the consistency of arithmetic, and in his presentation at the Konigsberg symposium, had even come forward as an advocate for Hilbert's program. Seeing at once the profound implications of Godel's achievement, he had taken it one step further—proving the unprovability of consistency, only to find that Godel had anticipated him. That was enough. Although full of admiration for Godel—he'd even lectured on his work—von Neumann vowed never to have anything more to do with logic. He is said to have boasted that after Godel, he simply never read another paper on logic. Logic had humiliated him, and von Neumann was not used to being humiliated. Even so, the vow proved impossible to keep, for von Neumann's need for powerful computational machinery eventually forced him to return to logic.

ETA: Am I the only one who fantasizes about cloning a few dozen individuals from von Neumann's DNA, teaching them rationality, and setting them to work on FAI? There must be some Everett branches where that is being done, right?

Comment author: lukeprog 14 May 2011 08:35:19AM 2 points [-]

We'd need to inoculate the clones against vanity, it appears.

Interesting story. Thanks for sharing your findings.

Comment author: wedrifid 14 May 2011 07:02:26AM -1 points [-]

Russell evidently misinterprets my result; however, he does so in a very interesting manner... In contradistinction Wittgenstein... advances a completely trivial and uninteresting misinterpretation.

Well spoken! :)

Comment author: Oscar_Cunningham 21 March 2011 08:12:34PM 10 points [-]

Of course, since this is a community blog, we can have it both ways. Those of us interested in philosophy can go out and read (and/or write) lots of it, and we'll chuck the good stuff this way. No need for anyone to miss out.

Comment author: lukeprog 21 March 2011 09:15:49PM 5 points [-]

Exactly. Like I did with my statistical prediction rules post.

Comment author: lukeprog 21 March 2011 08:04:03PM *  6 points [-]

Anyway. If you and I agree...

Yeah, we don't disagree much on all those points.

I didn't say in my original post that people should run out and start reading mainstream philosophy. If that's what people got from it, then I'll add some clarifications to my original post.

Instead, I said that mainstream philosophy has some useful things to offer, and shouldn't be ignored. Which I think you agree with if you've benefited from the work of Bostrom and Dennett (including, via Drescher) and so on. But maybe you still disagree with it, for reasons that are forthcoming in your response to my other examples of mainstream philosophy contributions useful to Less Wrong.

But yeah, don't let me keep you from your book!

As for predicate logic, I'll have to take your word on that. I'll 'downgrade it' in my list above.

Comment author: TheOtherDave 21 March 2011 08:15:37PM 11 points [-]

If that's what people got from it, then I'll add some clarifications to my original.

FWIW, what I got from your original post was not "LW readers should all go out and start reading mainstream philosophy," but rather "LW is part of a mainstream philosophical lineage, whether its members want to acknowledge that or not."

Comment author: lukeprog 21 March 2011 08:22:15PM 2 points [-]

Thanks for sharing. That too. :)

Comment author: Eliezer_Yudkowsky 22 March 2011 12:08:11AM 1 point [-]

I'm part of Roger Bacon's lineage too, and not ashamed of it either, but time passes and things improve and then there's not much point in looking back.

Comment author: lukeprog 22 March 2011 12:21:57AM *  15 points [-]

Meh. Historical context can help put things in perspective. You've done that plenty of times in your own posts on Less Wrong. Again, you seem to be holding my post to a different standard of usefulness than your own posts. But like I said, I don't recommend anybody actually read Quine.

Comment author: [deleted] 02 April 2015 05:54:59PM 1 point [-]

Oftentimes you simply can't understand what some theorem or experiment was for without at least knowing about its historical context. Take something as basic as calculus: if you've never heard the slightest thing about classical mechanics, what possible meaning could a derivative, integral, or differential equation have to you?

Comment author: TheAncientGeek 02 April 2015 07:25:10PM 0 points [-]

Does human nature improve, too?

Comment author: ChristianKl 02 April 2015 07:38:45PM 0 points [-]

What's "human nature"?

Comment author: Perplexed 21 March 2011 11:24:09PM *  8 points [-]

There is this incredibly toxic view of predicate logic that I first encountered in Good Old-Fashioned AI.

I'd be curious to know what that "toxic view" was. My GOFAI academic advisor back in grad school swore by predicate logic. The only argument against it I ever heard was that proving or disproving a statement is undecidable (in theory) and frequently intractable (in practice).

And then this entirely different, highly useful and precise view of the uses and bounds of logic that I encountered when I started studying mathematical logic and learned about things like model theory.

Model theory as opposed to proof theory? What is it you think is great about model theory?

Now considering that philosophers of the sort I inveighed against in "against modal logic" seem to talk and think like the GOFAI people and not like the model-theoretic people, I'm guessing that the GOFAI people made the terrible, horrible, no good, very bad mistake of getting their views of logic from the descendants of Bertrand Russell who still called themselves "philosophers" instead of those descendants who considered themselves part of the thriving edifice of mathematics.

I have no idea what you are saying here. That "Against Modal Logic" posting, and some of your commentary following it strike me as one of your most bizarre and incomprehensible pieces of writing at OB. Looking at the karma and comments suggests that I am not alone in this assessment.

Somehow, you have picked up a very strange notion of what modal logic is all about. The whole field of hardware and software verification is based on modal logics. Modal logics largely solve the undecidability and intractability problems that bedeviled GOFAI approaches to these problems using predicate logic. Temporal logics are modal. Epistemic and game-theoretic logics are modal.

Or maybe it is just the philosophical approaches to modal logic that offended you. The classical modal logic of necessity and possibility. The puzzles over the Barcan formulas when you try to combine modality and quantification. Or maybe something bizarre involving zombies or Goedel/Anselm ontological proofs.

Whatever it was that poisoned your mind against modal logic, I hope it isn't contagious. Modal logic is something that everyone should be exposed to, if they are exposed to logic at all. A classic introductory text: Robert Goldblatt: Logics of Time and Computation (pdf) is now available free online. I just got the current standard text from the library. It - Blackburn et al.: Modal Logic (textbook) - is also very good. And the standard reference work - Blackburn et al.: Handbook of Modal Logic - is outstanding (and available for less than $150 as Borders continues to go out of business :)
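For anyone who has never seen it, here is a toy sketch of the Kripke semantics underlying all of these modal logics. The three worlds, the accessibility relation, and the valuation below are invented purely for illustration, not drawn from any of the texts above:

```python
# Toy Kripke-model evaluator: Box-f ("necessarily f") holds at a world w
# iff f holds at every world accessible from w; Dia-f ("possibly f")
# holds iff f holds at some accessible world.

# Three possible worlds; R says which worlds are visible from which.
worlds = {"w1", "w2", "w3"}
R = {"w1": {"w2", "w3"}, "w2": {"w2"}, "w3": set()}

# Valuation: which atomic propositions hold at which worlds.
V = {"p": {"w2", "w3"}, "q": {"w2"}}

def holds(formula, w):
    """Evaluate a formula at world w.

    Formulas are nested tuples:
      ("atom", "p"), ("not", f), ("and", f, g),
      ("box", f), ("dia", f)
    """
    op = formula[0]
    if op == "atom":
        return w in V[formula[1]]
    if op == "not":
        return not holds(formula[1], w)
    if op == "and":
        return holds(formula[1], w) and holds(formula[2], w)
    if op == "box":
        return all(holds(formula[1], v) for v in R[w])
    if op == "dia":
        return any(holds(formula[1], v) for v in R[w])
    raise ValueError(op)

# Box-p holds at w1 because p holds at both accessible worlds w2 and w3.
print(holds(("box", ("atom", "p")), "w1"))  # True
print(holds(("dia", ("atom", "q")), "w1"))  # True
# w3 has no accessible worlds, so Dia-q fails there...
print(holds(("dia", ("atom", "q")), "w3"))  # False
# ...while Box-f is vacuously true at such a dead-end world, for any f.
print(holds(("box", ("atom", "q")), "w3"))  # True
```

Read temporally ("at all future times"), epistemically ("in all states compatible with what I know"), or metaphysically ("in all possible worlds"), the machinery is the same; only the intended accessibility relation changes.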

Comment author: lukeprog 21 March 2011 11:35:45PM 7 points [-]

Reading Plantinga could poison almost anybody's opinion of modal logic. :)

Comment author: Perplexed 21 March 2011 11:50:37PM 3 points [-]

That is entirely possible. A five star review at the Amazon link you provided calls this "The classic work on the metaphysics of modality". Another review there says:

Plantinga's Nature of Necessity is a philosophical masterpiece. Although there are a number of good books in analytic philosophy dealing with modality (the concepts of necessity and possibility), this one is of sufficient clarity and breadth that even non-philosophers will benefit from it. Modal logic may seem like a fairly arcane subject to outsiders, but this book exhibits both its intrinsic interest and its general importance.

Yet among the literally thousands of references in the three books I linked, Plantinga is not even mentioned, a fact which pretty much demonstrates that modal logic has left mainstream philosophy behind. Modal logic (in the sense I am promoting) is a branch of logic, not a branch of metaphysics.

Comment author: PhilGoetz 02 April 2011 03:04:02PM 3 points [-]

There is this incredibly toxic view of predicate logic that I first encountered in Good Old-Fashioned AI. And then this entirely different, highly useful and precise view of the uses and bounds of logic that I encountered when I started studying mathematical logic and learned about things like model theory.

I'd very much like to see a post explaining that.

Comment author: lukeprog 22 March 2011 01:32:08PM *  3 points [-]

it's more than that, but only things of that level are useful philosophy. Other things are not philosophy or more like background intros.

I'm not sure what "of that level" (of dissolving-to-algorithm) means, but I think I've demonstrated that quite a lot of useful stuff comes from mainstream philosophy, and indeed that a lot of mainstream philosophy is already being used by yourself and Less Wrong.

Comment author: DuncanS 22 March 2011 12:26:36AM *  5 points [-]

I believe I understand the warning here. The whole field of philosophy reminds me of the introduction to one of the first books on computer system development - The mythical man-month.

"No scene from prehistory is quite so vivid as that of the mortal struggles of great beasts in the tar pits. In the mind's eye one sees dinosaurs, mammoths, and saber-toothed tigers struggling against the grip of the tar. The fiercer the struggle, the more entangling the tar, and no beast is so strong or so skillful but that he ultimately sinks.

Large-system programming has over the past decade been such a tar pit, and many great and powerful beasts have thrashed violently in it. Most have emerged with running systems—few have met goals, schedules, and budgets. Large and small, massive or wiry, team after team has become entangled in the tar. No one thing seems to cause the difficulty—any particular paw can be pulled away. But the accumulation of simultaneous and interacting factors brings slower and slower motion. Everyone seems to have been surprised by the stickiness of the problem, and it is hard to discern the nature of it. But we must try to understand it if we are to solve it."

The tar pit, as the book goes on to describe, is information complexity, and far too many philosophers seem content to jump right into the middle of that morass, convinced they will be able to smash their way out. The problem is not the strength of their reason, but the lack of a solid foothold - everything is sticky and ill-defined, there is nothing solid to stand on. The result is much thrashing, but surprisingly little progress.

The key to progress, for nearly everyone, is to stay where you know solid ground is. Don't jump in the tar pit unless you absolutely have no other choice. Logic is of very little help when you have no clear foundation to rest it on.

Comment author: lukeprog 22 March 2011 12:55:25AM 1 point [-]

Yup! Most of analytic philosophy's foundation has been intuition, and, well... thar's yer problem right thar!

Comment author: FiftyTwo 22 March 2011 04:13:02AM 2 points [-]

There has been some recent work in tackling the dependence on intuitions. The Experimental Philosophy (X-Phi) movement has been doing some very interesting stuff examining the role of intuition in philosophy, what intuitions are and to what extent they are useful.

One of the landmark experiments was doing surveys that showed cross cultural variation in responses to certain philosophical thought experiments, (for example in what cases someone is acting intentionally) e.g. Weinberg et al (2001). Which obviously presents a problem for any Philosophical argument that uses such intuitions as premises.

The next stage being explaining these variations, and how by acknowledging these issues you can remove biases, without going too far into skepticism to be useful. To caricature the problem, if I can't trust certain of my intuitions I shouldn't trust them in general. But then how can I trust very basic foundations, (such as: a statement cannot be simultaneously true and false) and from there build up to any argument.

This area seems particularly relevant to this discussion, as there has been definite progress in the very recent past, in a manner very consistent with rationalist techniques and goals.

[This is my first LW post, so apologies for any lack of clarity or deviation from accepted practice]

Comment author: lukeprog 22 March 2011 04:17:43AM 0 points [-]

Welcome to LW!

You're right that there has been lots of progress on this issue in the recent past. Other resources include the book Rethinking Intuition, this issue of SPE, Brian Talbot's dissertation, and more.

In fact I'm writing up a post on this subject, so if you have other resources to point me to, please do!

Weinberg is awesome. He's going to be a big deal, I think.

Comment author: Mitchell_Porter 21 March 2011 12:21:04PM 17 points [-]

I did the whole sequence on QM to make the final point that people shouldn't trust physicists to get elementary Bayesian problems right.

Unfortunately for your argument in that sequence, very few actual physicists see the interpretation of quantum mechanics as a choice between "wavefunctions are real, and they collapse" and "wavefunctions are real, and they don't". I think life set you up for that choice because you got some of your early ideas about QM from Penrose, who does advocate a form of objective collapse theory. But the standard interpretation is that the wavefunction is not the objective state of the system; it is a tabulation of dispositional properties (that is philosophical terminology and would be unfamiliar to physicists, but it does express what the Copenhagen interpretation is about).

I might object to a lot of what physicists say about the meaning of quantum mechanics - probably the smartest ones are the informed realist agnostics like Gerard 't Hooft, who know that an observer-independent objectivity ought to be restored but who also know just how hard that will be to achieve. But the interpretation of quantum mechanics is not an "elementary Bayesian problem", nor is it an elementary problem of any sort. Given how deep the quantumness of the world goes, and the deep logical interconnectedness of things in physics, the correct explanation is probably one of the last fundamental facts about physics that we will figure out.

Comment author: DuncanS 22 March 2011 12:51:33AM 6 points [-]

Unfortunately this is a typical example of the kind of thing that goes wrong in philosophy.

Our actual knowledge in this area is actually encapsulated by the equations of quantum mechanics. This is the bit we can test, and this is the bit we can reason about correctly, because we know what the rules are.

We then go on to ask what the real meaning of quantum mechanics is. Well, perhaps we should remind ourselves that what we actually know is in the equations of quantum mechanics, and in the tests we've made of them. Anything else we might go on to say might very well not be knowledge at all.

So in interpreting quantum mechanics, we tend to swap a language we can work with (maths) for another language which is more difficult (English). OK - there are some advantages in that we might achieve more of an intuitive feel by doing that, but it's still a translation exercise.

Many worlds versus collapse? Putting it pointedly, the equations themselves don't distinguish between a collapse and a superposition of correlated states. Why do I think that my 'interpretation' of quantum mechanics should do something else? But in fact I wouldn't say either one is 'correct'. They are both translations into English / common-sense-ese of something that's actually best understood in its native mathematics.

Translation is good - it's better than giving up and just "shutting up and calculating". But the native truth is in the mathematics, not the English translation.
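To put a number on the claim that the equations don't distinguish the two stories: here is a small calculation of my own (a sketch using standard numpy, not anything from the sequence) showing that for an observer who sees only the system qubit, a superposition of correlated states and a genuine collapse predict identical measurement statistics.

```python
import numpy as np

# Qubit starts in (|0> + |1>)/sqrt(2); "measurement" entangles it with
# an apparatus/environment qubit, giving (|00> + |11>)/sqrt(2).
entangled = np.array([1, 0, 0, 1]) / np.sqrt(2)  # basis |00>,|01>,|10>,|11>

# Reduced density matrix of the system qubit: trace out the environment.
rho_full = np.outer(entangled, entangled.conj()).reshape(2, 2, 2, 2)
rho_system = np.einsum("ikjk->ij", rho_full)  # partial trace, 2nd qubit

# "Collapse" story: with probability 1/2 the state is |0>, else |1>.
rho_collapsed = 0.5 * np.diag([1.0, 0.0]) + 0.5 * np.diag([0.0, 1.0])

# Same density matrix, hence the same probabilities for every possible
# measurement on the system qubit alone.
print(np.allclose(rho_system, rho_collapsed))  # True
```

The off-diagonal terms that would distinguish the stories live entirely in the system-environment correlations, which is why no experiment confined to the system can tell collapse from superposition.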

Comment author: loqi 26 March 2011 07:29:40AM 2 points [-]

In other words, the Born probabilities are just numbers in the end. Their particular correlation with our anticipated experience is a linguistic artifact arising from a necessarily imperfect translation into English. Asking why we experience certain outcomes more frequently than others is good, but the answer is a lower-status kind of truth - the native truth is in the mathematics.

Comment author: Peterdjones 22 May 2011 10:51:48PM -1 points [-]

Putting it pointedly, the equations themselves don't distinguish between a collapse and a superposition of correlated states.

Yes they do. Experimentation doesn't. Yet.

Comment author: lukeprog 21 March 2011 04:54:48PM 3 points [-]

you should have started with the big concrete example of mainstream philosophy doing an LW-style dissolution-to-algorithm not already covered on LW

But I've already pointed out that you do a lot more philosophy than just dissolution-to-algorithm. Dissolution to algorithm is not the only valuable thing to do in philosophy. Not all philosophical problems can be dissolved that way. Some philosophical problems turn out to be genuine problems that need an answer.

My claim that we shouldn't ignore philosophy is already supported by the points I made about how vast swaths of the useful content on Less Wrong have been part of mainstream philosophy for decades.

I'm not going to argue that philosophy isn't a terribly sick field, because it is a terribly sick field. Instead I'm arguing that you have already taken a great deal of value (directly or indirectly) from mainstream philosophy, and I gave more interesting examples than "metaphysical libertarianism is false" and "people are made of atoms."

Comment author: r90 21 March 2011 11:53:37AM 0 points [-]

Well, show me the power of LW then.

If Quinean philosophy is just LW rationality but earlier, then that should settle it.

I find it likely that if someone were to trace the origins of LW rationality one would end up with Quine or someone similar. E.g. perhaps you read an essay by a Quinean philosopher when you were younger.

Comment author: [deleted] 21 March 2011 12:26:51PM 11 points [-]

I doubt it. In fact I'm pretty certain that Quine had nothing to do with 'the origins of LW rationality'. I came to many (though by no means all) of the same conclusions as Eliezer independently, some of them in primary school, and never heard of Quine until my early 20s. What I had read - and what it's apparent Eliezer had read - was an enormous pile of hard science fiction, Feynman's memoirs, every pop-science book and issue of New Scientist I could get my hands on and, later, Feynman's Lectures In Physics. If you start out with a logical frame of mind, and fill that mind up with that kind of stuff, then the answers to certain questions come out as just "that's obvious!" or "that's a stupid question!" Enough of them did to me that I'm pretty certain that Eliezer also came to those conclusions (and the others he's come to and written about) independently.

Comment author: [deleted] 21 March 2011 04:42:03PM 13 points [-]

Timing argues otherwise. We don't see Quine-style naturalists before Quine; we see plenty after Quine.

Eliezer doesn't recognize and acknowledge the influence? He probably wouldn't! People to a very large extent don't recognize their influences. To give just a trivial example, I have often said something to someone, only to find them weeks later repeating back to me the very same thing, as if they had thought of it. To give another example, pick some random words from your vocabulary - words like "chimpanzee", "enough", "unlikely". Which individual person taught you each of these words (probably by example), or which set of people? Do you remember? I don't. I really have no idea where I first picked up any bit of my language, with occasional exceptions.

For the most part we don't remember where exactly it was that we picked up this or that idea.

Of course, if Eliezer says he never read Quine, I don't doubt that he never read Quine. But that doesn't mean that he wasn't influenced by Quine. Quine influenced a lot of people, who influenced a lot of other people, who influenced still more people, some of whom could very easily have influenced Eliezer without Eliezer having the slightest notion that the influence originated with Quine.

It's hard to trace influence. What's not so hard is to observe timing. Quine comes first - by decades.

Comment author: MichaelVassar 22 March 2011 07:52:40PM 5 points [-]

Eliezer knows Bostrom pretty well and Bostrom is influenced by Quine, but I simply doubt the claim about no Quine-style naturalists before Quine. Hard to cite non-citations, though, so I can go on not believing you, but can't really say much to support it.

Comment author: [deleted] 22 March 2011 08:38:32PM 3 points [-]

Well, my own knowledge is spotty, and I have found that philosophy changes gradually, so that immediately before Quine I would expect you to find philosophers who in many ways anticipate a significant fraction of what Quine says. That said, I think that Quine genuinely originated much that was important. For example I think that his essay Two Dogmas of Empiricism contained a genuinely novel argument, and wasn't merely a repeat of something someone had written before.

But let's suppose, for the sake of argument, that Quine was not original at all, but was a student of Spline, and Spline was the actual originator of everything associated with Quine. I think that the essential point that Eliezer probably is the beneficiary of influence and is standing on the shoulders of giants is preserved, and the surrounding points are also preserved, only they are not attached specifically to Quine. I don't think Quine specifically is that important to what lukeprog was saying. He was talking about a certain philosophical tradition which does not go back forever.

Comment author: PhilGoetz 30 March 2011 03:53:56AM *  3 points [-]

(EDIT: Quine was not Rapaport's advisor; Hector-Neri Castaneda was.) William Rapaport, together with Stu Shapiro, applied Quine's ideas on semantics and logic to knowledge representation and reasoning for artificial intelligence. Stu Shapiro edited the Encyclopedia of Artificial Intelligence, which may be the best survey ever made of symbolic artificial general intelligence. Bill and Stu referenced Quine in many of their papers, which have been widely read in artificial intelligence since the early 1980s.

There are many concepts from Stu and Bill's representational principles that I find useful for dissolving philosophical problems. These include the concepts of intensional vs. extensional representation, deictic representations, belief spaces, and the unique variable binding rule. But I don't know if any of these ideas originate with Quine, because I haven't studied Quine. Bill and Stu also often cited Meinong and Carnap; I think many of Bill's representational ideas came from Meinong.

A quick google of Quine shows that a paper that I'm currently making revisions on is essentially a disproof of Quine's "indeterminacy of translation".

Comment author: Davorak 22 March 2011 09:02:59AM 2 points [-]

Eliezer doesn't recognize and acknowledge the influence? He probably wouldn't! People to a very large extent don't recognize their influences.

Applying the above to Quine would seem to at least weakly contradict:

Timing argues otherwise. We don't see Quine-style naturalists before Quine; we see plenty after Quine.

You seem to be singling out Quine as unique rather than just a link in a chain, unlike Eliezer and people who do not recognize their influences. This seems unlikely to me. Is this what you meant to communicate?

Comment author: [deleted] 22 March 2011 09:41:48AM *  2 points [-]

I don't assume Quine to be any different from anyone else in recognizing his influences.

It is because I have no particular confidence in anyone recognizing their own influences that I turn to timing to help me answer the question of independent creation.

1) If a person is the first person to give public expression to an idea, then the chance is relatively high that he is the originator of the idea. It's not completely certain, but it's relatively high.

2) In contrast, if a person is not the first person to give public expression to an idea but is, say, the 437th person to do so, the first having done so fifty years before, then chances are relatively high that he picked up the idea from somewhere and didn't remember picking it up. The fact that nobody expressed the idea before fifty years earlier suggests that the idea is pretty hard to come up with independently, because had it been easy, people would have been coming up with it all through history.

3) Finally, if a person is not the first person to give public expression to an idea but people have been giving public expression to the idea for as long as we have records, then the chance is relatively high once again that he independently rediscovered the idea, since it seems to be the sort of idea that is relatively easy to rediscover independently.

Comment author: TomM 23 March 2011 01:20:19AM *  2 points [-]

The fact that nobody expressed the idea before fifty years earlier suggests that the idea is pretty hard to come up with independently, because had it been easy, people would have been coming up with it all through history.

This can be true, but it is also possible that an idea may be hard to independently develop because the intellectual foundations have not yet been laid.

Ideas build on existing understandings, and once the groundwork has been done there may be a sudden eruption of independent-but-similar new ideas built on those foundations. They were only hard to come up with until that time.

Comment author: [deleted] 23 March 2011 01:38:38AM *  2 points [-]

This can be true, but it is also possible that an idea may be hard to independently develop because the intellectual foundations have not yet been laid.

Well, yes, but that's essentially my point. What you've done is pointed out that the foundation might lie slightly before Quine. Indeed it might. But I don't think this changes the essential idea. See here for discussion of this point.

Comment author: Davorak 24 March 2011 03:02:48AM 0 points [-]

1) If a person is the first person to give public expression to an idea, then the chance is relatively high that he is the originator of the idea. It's not completely certain, but it's relatively high.

Our viewpoints diverge here. I do not agree that being the first person to give public expression to an idea and be recorded for history, on its own, gives a high probability that he/she is the originator of the idea. You also said you factor in the originality of the idea. I only know Quine through what little I have read here and on Wikipedia, and did not judge his work original enough to be confident that the ideas he popularized could be thought of as his creation. It seems unlikely; I would, however, need more data to argue strongly one way or another.

Comment author: [deleted] 24 March 2011 03:24:59AM 1 point [-]

I do not agree that being the first person to give public expression to an idea and be recorded for history, on its own, gives a high probability

I didn't say "high probability", I said "relatively high". By which I mean it is high relative to some baseline in which we don't know anything, or relative to the second case. In other words, what I am saying is that if a person is the first to give public expression, this is evidence that he originated it.

I only know Quine through what little I have read here and wikipedia and did not judge it original enough

Many others thought it highly original. Also, I'm not confident that you're in a position to make that judgment. You would need to be pretty familiar with the chronology of ideas to make that call, and if you were, you would probably be familiar with Quine.

Comment author: Davorak 24 March 2011 02:35:00PM -1 points [-]

Many others thought it highly original. Also, I'm not confident that you're in a position to make that judgment. You would need to be pretty familiar with the chronology of ideas to make that call, and if you were, you would probably be familiar with Quine.

I do not think asserting this is helpful to the conversation. I did not claim confidence; I have admitted to wanting more data. This is an opportunity to teach what you know and/or share resources. If you are not interested, then I will put it on my list of things to do later.

Comment author: thomblake 21 March 2011 04:07:45PM 1 point [-]

devoted to arguing

Even the best philosophy is this. Dan Dennett is devoted to arguing.

Of course, by Beisutsukai standards, philosophy is almost as good as physics. Both are far too slow.

Comment author: lukeprog 21 March 2011 09:14:37PM 0 points [-]

By the way, it's not that I disagree with your decision not to cover means vs. ends in CEV. I explained how it would be useful and clarifying. You then agreed that that insight from mainstream philosophy is useful for CEV, but you didn't feel it was necessary to mention in the paper because your CEV paper didn't go into enough detail to make it necessary. I don't have a problem with that.

Comment author: Vladimir_Nesov 21 March 2011 12:42:29PM *  3 points [-]

It would be odd to suggest that the progress in mainstream philosophy that Less Wrong has already made use of would suddenly stop, justifying a choice to ignore mainstream philosophy in the future.

Given that your audience at least in some sense disagrees, you'd do well to use a more powerful argument than "it would be odd" (it would be a fine argument if you expected the audience's intuitions to align with the statement, but it's apparently not the case), especially given that your position suggests how to construct one: find an insight generated by mainstream philosophy that would be considered new and useful on LW (which would be most effective if presented/summarized in LW language), and describe the process that allowed you to find it in the literature.

On a separate note, I think finding a place for LW rationality in academic philosophy might be a good thing, but this step should be distinguished from the connotation that brings about usefulness of (closely located according to this placement) academic philosophy.

So, I agree denotationally with your post (along the lines of what you listed in this comment), but still disagree connotationally with the implication that standard philosophy is of much use (pending arguments that convince me otherwise, the disagreement itself is not that strong). I disagree strongly about the way in which this connotation feels to argue its case through this post, not presenting arguments that under its own assumptions should be available. I understand that you were probably unaware of this interpretation of your post (i.e. arguing for mainstream philosophy being useful, as opposed to laying out some groundwork in preparation for such argument), or consider it incorrect, but I would argue that you should've anticipated it and taken into account.

(I expect if you add a note at the beginning of the post to the effect that the point of this particular post is to locate LW philosophy in mainstream philosophy, perhaps to point out priority for some of the ideas, and edit the rest with that in mind, the connotational impact would somewhat dissipate, without changing the actual message. But given the discussion that has already taken place, it might be not worth doing.)

Comment author: lukeprog 21 March 2011 06:15:13PM 3 points [-]

No, I didn't take the time to make an argument.

But I am curious to discuss this with someone who doesn't find it odd that mainstream philosophy could make useful contributions up until a certain point and then suddenly stop. That's far from impossible, but I'd be curious to know what you think caused useful progress to stop. And when did that supposedly happen? In the 1960s, after philosophy's predicate logic and Tarskian truth-conditional theories of language were mature? In the 1980s? Around 2000?

Comment author: Randaly 21 March 2011 06:30:42PM *  4 points [-]

The inability of philosophers to settle on a position on an issue and move on. It's very difficult to make progress (i.e. additional useful contributions) if your job depends not on moving forward and generating new insights, but rather on going back and forth over old arguments. People like, e.g., Yudkowsky, whose job allows/requires him to devote almost all of his time to new research, would be much more productive; possibly, depending on the philosopher and non-philosopher in question, so much more productive that going back over philosophical arguments and positions isn't very useful.

The time would depend on the field in question, of course; I'm no expert, but from an outsider's perspective I feel like, e.g. linguistics and logic have had much more progress in recent decades than, e.g. philosophical consciousness studies or epistemology. (Again, no expert.) However, again, my view is less that useful philosophical contributions have stopped, and more that they've slowed to a crawl.

Comment author: lukeprog 21 March 2011 06:46:39PM 7 points [-]

This is indeed why most philosophy is useless. But I've asserted that most philosophy is useless for a long time. This wouldn't explain why philosophy would nevertheless make useful progress up until the 60s or 80s or 2000s and then suddenly stop. That suggestion remains to be explained.

Comment author: Randaly 21 March 2011 11:12:00PM 2 points [-]

(My apologies; I didn't fully understand what you were asking for.)

First, it doesn't claim that philosophy makes zero progress, just that science/AI research/etc. makes more. There were still broad swathes of knowledge (e.g. linguistics and psychology) that split off relatively late from philosophy, and in which philosophers were still making significant progress right up to the point where they became sciences.

Second, philosophy has either been motivated by or been free-riding off of science and math (e.g., to use your example, Frege's development of predicate logic was motivated by his desire to place math on a more secure foundation). But the main examples (that are generally cited elsewhere, at least) of modern integration or intercourse between philosophy and science/math/AI (e.g. Dennett, Drescher, Pearl, etc.) have already been considered, so it's reasonable to say that mainstream philosophy probably doesn't have very much more to offer, let alone a "centralized repository of reductionist-grade naturalistic cognitive philosophy" of the sort Yudkowsky et al. are looking for.

Third, the low-hanging fruit would have been taken first; because philosophy doesn't settle points and move on to entire new search spaces, it would get increasingly difficult to find new, unexplored ideas. While they could technically have moved on to explore new ideas anyways, it's more difficult than sticking to established debates, feels awkward, and often leads people to start studying things not considered part of philosophy (e.g. Noam Chomsky or, to an extent, Alonzo Church.) Therefore, innovation/research would slow down as time went on. (And where philosophers have been willing to go out ahead and do completely original thinking, even where they're not very influenced by science, LW has seemed to integrate their thinking; e.g. Parfit.)

(Btw, I don't think anybody is claiming that all progress in philosophy had stopped; indeed, I explicitly stated that I thought that it hadn't. I've already given four examples above of philosophers doing innovative work useful for LW.)

Comment author: lukeprog 21 March 2011 11:17:30PM 3 points [-]

Yeah, I'm not sure we disagree on much. As you say, Less Wrong has already made use of some of the best of mainstream philosophy, though I think there's still more to be gleaned.

Comment author: Vladimir_Nesov 21 March 2011 06:48:34PM *  2 points [-]

That's far from impossible, but I'd be curious to know what you think caused useful progress to stop. And when did that supposedly happen?

Just now. As of today, I don't expect to find useful stuff already written in mainstream philosophy that I don't already know, commensurate with the effort necessary to dig it up (this situation could be improved by reducing the necessary effort, if there is indeed something in there to find). The marginal value of learning more existing math or cognitive science or machine learning for answering the same (philosophical) questions is greater. But future philosophy will undoubtedly bring new good insights, in time, absent defeaters.

Comment author: lukeprog 21 March 2011 06:52:02PM *  3 points [-]

So maybe your argument is not that mainstream philosophy has nothing useful to offer but instead just that it would take you more effort to dig it up than it's worth? If so, I find that plausible. Like I said, I don't think Eliezer should spend his time digging through mainstream philosophy. Digging through math books and AI books will be much more rewarding. I don't know what your fields of expertise are, but I suspect digging through mainstream philosophy would not be the best use of your time, either.

Comment author: Vladimir_Nesov 21 March 2011 07:13:54PM *  1 point [-]

So maybe your argument is not that mainstream philosophy has nothing useful to offer but instead just that it would take you more effort to dig it up than it's worth?

I don't believe that for the purposes of development of human rationality or FAI theory this should be on anyone's worth-doing list for some time yet, before we can afford this kind of specialization to go after low-probability perks.

I expect that there is no existing work coming from philosophy useful-in-itself to an extent similar to Drescher's Good and Real (and Drescher is/was an AI researcher), although it's possible and it would be easy to make such work known to the community once it's discovered. People on the lookout for these things could be useful.

I expect that reading a lot of related philosophy with a prepared mind (so that you don't catch an anti-epistemic cold or death) would refine one's understanding of many philosophical questions, but mostly not in the form of modular communicable insights, and not to a great degree (compared to background training from spending the same time studying math/AI, that is ways of thinking you learn apart from the subject matter). This limits the extent to which people specializing in studying potentially relevant philosophy can contribute.

Comment author: lukeprog 21 March 2011 07:19:15PM 0 points [-]

Do you still think this after reading my "for starters" list of mainstream philosophy contributions useful to Less Wrong? (below)

Comment author: Vladimir_Nesov 21 March 2011 07:47:08PM *  2 points [-]

The low-hanging fruit is already gathered. That list (outside of AI/decision theory references) looks useful for discussing questions of priority and for gathering real-world data (where it refers to psychological experiments). Bostrom's group and Drescher's and Pearl's work we already know, pointing these out is not a clear example of potential fruits of the quest for scholarship in philosophy (confusingly enough, but keep in mind the low-hanging fruit part, and the means for finding these being unrelated to scholarship in philosophy; also, being on the lookout for self-contained significant useful stuff is the kind of activity I was more optimistic about in my comment).

Comment author: lukeprog 21 March 2011 07:56:50PM *  13 points [-]

I don't get it. When low-hanging fruit is covered on Less Wrong, it's considered useful stuff. When low-hanging fruit comes from mainstream philosophy, it supposedly doesn't help show that mainstream philosophy is useful. If that's what's going on, it's a double standard, and a desperate attempt to "show" that mainstream philosophy isn't useful.

Also, saying "Well, we already know about lots of mainstream philosophy that's useful" is direct support for the central claim of my original post: That mainstream philosophy can be useful and shouldn't be ignored.

Comment author: Vladimir_Nesov 21 March 2011 08:14:03PM *  4 points [-]

Most of the stuff already written on Less Wrong is not useful to the present me in the same sense as philosophy isn't, because I already learned what I expected to be the useful bits. I won't be going on a quest for scholarship in Less Wrong either. And if I need to prepare an apprentice, I would give them some LW sequences and Good and Real first (on the philosophy side), and looking through mainstream philosophy won't come up for a long time.

These two use cases are the ones that matter to me, what use case did you think about? Just intuitive "usefulness" is too unclear.

Comment author: LukeStebbing 21 March 2011 08:14:51PM 2 points [-]

What's the low-hanging fruit mixed with? If I have a concentrated basket of low-hanging fruit, I call that an introductory textbook and I eat it. Extending the tortured metaphor, if I find too much bad fruit in the same basket, I shop for the same fruit at a different store.