
Less Wrong Rationality and Mainstream Philosophy

106 points. Post author: lukeprog, 20 March 2011 08:28PM

Part of the sequence: Rationality and Philosophy

Despite Yudkowsky's distaste for mainstream philosophy, Less Wrong is largely a philosophy blog. Major topics include epistemology, philosophy of language, free will, metaphysics, metaethics, normative ethics, machine ethics, axiology, philosophy of mind, and more.

Moreover, standard Less Wrong positions on philosophical matters have been standard positions in a movement within mainstream philosophy for half a century. That movement is sometimes called "Quinean naturalism" after Harvard's W.V. Quine, who articulated the Less Wrong approach to philosophy in the 1960s. Quine was one of the most influential philosophers of the last 200 years, so I'm not talking about an obscure movement in philosophy.

Let us survey the connections. Quine thought that philosophy was continuous with science - and where it wasn't, it was bad philosophy. He embraced empiricism and reductionism. He rejected the notion of libertarian free will. He regarded postmodernism as sophistry. Like Wittgenstein and Yudkowsky, Quine didn't try to straightforwardly solve the traditional Big Questions so much as dissolve those questions or reframe them such that they could be solved. He dismissed endless semantic arguments about the meaning of vague terms like knowledge. He rejected a priori knowledge. He rejected the notion of privileged philosophical insight: philosophical knowledge is just ordinary knowledge, at its best refined by science. Eliezer once said that philosophy should be about cognitive science, and Quine would agree. Quine famously wrote:

The stimulation of his sensory receptors is all the evidence anybody has had to go on, ultimately, in arriving at his picture of the world. Why not just see how this construction really proceeds? Why not settle for psychology?

But isn't this using science to justify science? Isn't that circular? Not quite, say Quine and Yudkowsky. It is merely "reflecting on your mind's degree of trustworthiness, using your current mind as opposed to something else." Luckily, the brain is the lens that sees its flaws. And thus, says Quine:

Epistemology, or something like it, simply falls into place as a chapter of psychology and hence of natural science.

Yudkowsky once wrote, "If there's any centralized repository of reductionist-grade naturalistic cognitive philosophy, I've never heard mention of it."

When I read that I thought: What? That's Quinean naturalism! That's Kornblith and Stich and Bickle and the Churchlands and Thagard and Metzinger and Northoff! There are hundreds of philosophers who do that!



Non-Quinean philosophy

But I should also mention that LW-style philosophy / Quinean naturalism is not the largest strain of mainstream philosophy. Most philosophy is still done in relative ignorance of (or disregard for) cognitive science. Consider the preface to Rethinking Intuition:

Perhaps more than any other intellectual discipline, philosophical inquiry is driven by intuitive judgments, that is, by what "we would say" or by what seems true to the inquirer. For most of philosophical theorizing and debate, intuitions serve as something like a source of evidence that can be used to defend or attack particular philosophical positions.

One clear example of this is a traditional philosophical enterprise commonly known as conceptual analysis. Anyone familiar with Plato's dialogues knows how this type of inquiry is conducted. We see Socrates encounter someone who claims to have figured out the true essence of some abstract notion... the person puts forward a definition or analysis of the notion in the form of necessary and sufficient conditions that are thought to capture all and only instances of the concept in question. Socrates then refutes his interlocutor's definition of the concept by pointing out various counterexamples...

For example, in Book I of the Republic, when Cephalus defines justice in a way that requires the returning of property and total honesty, Socrates responds by pointing out that it would be unjust to return weapons to a person who had gone mad or to tell the whole truth to such a person. What is the status of these claims that certain behaviors would be unjust in the circumstances described? Socrates does not argue for them in any way. They seem to be no more than spontaneous judgments representing "common sense" or "what we would say." So it would seem that the proposed analysis is rejected because it fails to capture our intuitive judgments about the nature of justice.

After a proposed analysis or definition is overturned by an intuitive counterexample, the idea is to revise or replace the analysis with one that is not subject to the counterexample. Counterexamples to the new analysis are sought, the analysis revised if any counterexamples are found, and so on...

Refutations by intuitive counterexamples figure as prominently in today's philosophical journals as they did in Plato's dialogues...

...philosophers have continued to rely heavily upon intuitive judgments in pretty much the way they always have. And they continue to use them in the absence of any well articulated, generally accepted account of intuitive judgment - in particular, an account that establishes their epistemic credentials.

However, what appear to be serious new challenges to the way intuitions are employed have recently emerged from an unexpected quarter - empirical research in cognitive psychology.

With respect to the tradition of seeking definitions or conceptual analyses that are immune to counterexample, the challenge is based on the work of psychologists studying the nature of concepts and categorization of judgments. (See, e.g., Rosch 1978; Rosch and Mervis 1975; Rips 1975; Smith and Medin 1981). Psychologists working in this area have been pushed to abandon the view that we represent concepts with simple sets of necessary and sufficient conditions. The data seem to show that, except for some mathematical and geometrical concepts, it is not possible to use simple sets of conditions to capture the intuitive judgments people make regarding what falls under a given concept...

With regard to the use of intuitive judgments exemplified by reflective equilibrium, the challenge from cognitive psychology stems primarily from studies of inference strategies and belief revision. (See, e.g., Nisbett and Ross 1980; Kahneman, Slovic, and Tversky 1982.) Numerous studies of the patterns of inductive inference people use and judge to be intuitively plausible have revealed that people are prone to commit various fallacies. Moreover, they continue to find these fallacious patterns of reasoning to be intuitively acceptable upon reflection... Similarly, studies of the "intuitive" heuristics ordinary people accept reveal various gross departures from empirically correct principles...

There is a growing consensus among philosophers that there is a serious and fundamental problem here that needs to be addressed. In fact, we do not think it is an overstatement to say that Western analytic philosophy is, in many respects, undergoing a crisis where there is considerable urgency and anxiety regarding the status of intuitive analysis.

 

Conclusion

So Less Wrong-style philosophy is part of a movement within mainstream philosophy to massively reform philosophy in light of recent cognitive science - a movement that has been active for at least two decades. Moreover, Less Wrong-style philosophy has its roots in Quinean naturalism from fifty years ago.

And I haven't even covered all the work in formal epistemology toward (1) mathematically formalizing concepts related to induction, belief, choice, and action, and (2) arguing about the foundations of probability, statistics, game theory, decision theory, and algorithmic learning theory.

So: Rationalists need not dismiss or avoid philosophy.

Update: To be clear, though, I don't recommend reading Quine. Most people should not spend their time reading even Quinean philosophy; learning statistics and AI and cognitive science will be far more useful. All I'm saying is that mainstream philosophy, especially Quinean philosophy, does make some useful contributions. I've listed more than 20 of mainstream philosophy's useful contributions here, including several instances of classic LW dissolution-to-algorithm.

But maybe it's a testament to the epistemic utility of Less Wrong-ian rationality training and thinking like an AI researcher that Less Wrong got so many things right without much interaction with Quinean naturalism. As Daniel Dennett (2006) said, "AI makes philosophy honest."

 

Next post: Philosophy: A Diseased Discipline

 

 

References

Dennett (2006). Computers as Prostheses for the Imagination. Talk presented at the International Computers and Philosophy Conference, Laval, France, May 3, 2006.

Kahneman, Slovic, & Tversky (1982). Judgment Under Uncertainty: Heuristics and Biases. Cambridge University Press.

Nisbett and Ross (1980). Human Inference: Strategies and Shortcomings of Social Judgment. Prentice-Hall.

Rips (1975). Inductive judgments about natural categories. Journal of Verbal Learning and Verbal Behavior, 14: 665-681.

Rosch (1978). Principles of categorization. In Rosch & Lloyd (eds.), Cognition and Categorization (pp. 27-48). Lawrence Erlbaum Associates.

Rosch & Mervis (1975). Family resemblances: studies in the internal structure of categories. Cognitive Psychology, 7: 573-605.

Smith & Medin (1981). Categories and Concepts. Harvard University Press.

Comments (328)

Comment author: pjeby 20 March 2011 08:36:39PM 8 points

Also, how about William James and pragmatism? I read Pragmatism recently, and had been meaning to post about the many bits that sound like they could've been cut straight from the sequences -- IIRC, there was some actual discussion of making beliefs "pay" -- in precisely the same manner as the sequences speak of beliefs paying rent.

Comment author: lukeprog 20 March 2011 08:42:19PM 9 points

Yup.

Quinean naturalism, and especially Quine's naturalized epistemology, are merely the "fullest" accounts of Less Wrong-ian philosophy to be found in the mainstream literature. Of course particular bits come from earlier traditions.

Parts of pragmatism (Peirce & Dewey) and pre-Quinean naturalism (Sellars & Dewey and even Hume) are certainly endorsed by much of the Less Wrong community. As far as I can tell, Eliezer's theory of truth is straight-up Peircian pragmatism.

Comment author: Perplexed 20 March 2011 09:03:09PM 2 points

Eliezer's theory of truth is straight-up Peircian pragmatism.

I see it as a closer match to Korzybski by way of Hayakawa.

Comment author: lukeprog 20 March 2011 09:08:00PM 4 points

Eliezer's philosophy of language is clearly influenced by Korzybski via Hayakawa, but what is Korzybski's theory of truth? I'm just not familiar.

Comment author: Perplexed 20 March 2011 09:19:32PM 2 points

Maybe I'm out of my depth here. But from a semantic standpoint, I thought that a theory of language pretty much is a theory of truth. At least in mathematical logic with Tarskian semantics, the meaning of a statement is given by saying what conditions make the statement true.

Comment author: lukeprog 20 March 2011 09:47:36PM 3 points

Perplexed,

Truth-conditional accounts of truth, associated with Tarski and Davidson, are popular in philosophy of language. But most approaches to language do not contain a truth-conditional account of truth. Philosophy of language is most reliably associated with a theory of meaning: How is it that words and sentences relate to reality?

You might be right that Eliezer's theory of truth comes from something like Korzybski's (now defunct) theory of language, but I'm not familiar with Korzybski's theory of truth.

Comment author: Eliezer_Yudkowsky 20 March 2011 10:31:20PM 10 points

My theory of truth is explicitly Tarskian. I'm explicitly influenced by Korzybski on language and by Peirce on "making beliefs pay rent", but I do think there are meaningful and true beliefs such that we cannot experientially distinguish between them and mutually exclusive alternatives, e.g., a photon going on existing after it passes over the horizon of the expanding universe as opposed to it blinking out of existence.

Comment author: lukeprog 20 March 2011 10:36:30PM 2 points

Thanks for clarifying!

For the record, my own take:

As a descriptive theory of how humans use language, I think truth-conditional accounts of meaning are inadequate. But that's the domain of contemporary linguistics, anyway - which tends to line up more with the "speech acts" camp in philosophy of language.

But we need something like a Tarskian theory of language and truth in order to do explicit AI programming, so I'm glad we've done so much work on that. And in certain contexts, philosophers can simply adopt a Tarskian way of talking rather than a more natural-language way of talking - if they want to.

And I agree about there being meaningful and true beliefs that we cannot experientially distinguish. That is one point at which you and I disagree with the logical positivists and, I think, Korzybski.
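[Editorial aside: a toy illustration of what a Tarskian, truth-conditional treatment looks like once one actually has to program it, as lukeprog's comment suggests - everything here (the tuple encoding of sentences, the choice of connectives) is my own sketch, not anything from the thread. The truth value of a compound sentence is computed recursively from the truth conditions of its parts, relative to a model (a valuation of the atoms).]

```python
# Toy Tarskian truth definition for a propositional language.
# A sentence's truth is defined recursively from the truth
# conditions of its parts, relative to a valuation of the atoms.

def truth(sentence, valuation):
    """Return the truth value of `sentence` under `valuation`.

    Sentences are nested tuples:
      ("atom", name), ("not", s), ("and", s1, s2), ("or", s1, s2).
    `valuation` maps atom names to booleans (the "model").
    """
    op = sentence[0]
    if op == "atom":
        return valuation[sentence[1]]
    if op == "not":
        return not truth(sentence[1], valuation)
    if op == "and":
        return truth(sentence[1], valuation) and truth(sentence[2], valuation)
    if op == "or":
        return truth(sentence[1], valuation) or truth(sentence[2], valuation)
    raise ValueError(f"unknown connective: {op}")

# The Tarskian schema in action: "the cup is green and not empty"
# is true iff the cup is green and the cup is not empty.
s = ("and", ("atom", "cup_is_green"), ("not", ("atom", "cup_is_empty")))
print(truth(s, {"cup_is_green": True, "cup_is_empty": False}))  # True
```

Nothing here is philosophically deep; the point is just that this recursive, compositional style of truth definition is exactly what formal languages and AI programs can consume, which natural-language "speech act" accounts are not.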

Comment author: Perplexed 20 March 2011 10:24:25PM 2 points

I'm not familiar with Korzybski's theory of truth.

I'm only familiar with it through Hayakawa. The reference you provided to support your claim that the General Semantics theory of language is "defunct" says this about the GS theory of truth:

Hayakawa is quoted as saying:

[General semantics] tells you what to do and what to observe in order to bring the thing defined or its effects within the range of one’s experience.

which the ELL entry precisifies as:

The literal meaning of a statement expressed by sentence Σ is given by defining the method for observationally verifying the conditions under which Σ is properly used.

All of which sounds pretty close to Davidson and Tarski to me, though I'm not an expert. And not all that far from Yudkowsky.

I made my comment mentioning Language in Thought and Action before reading your post. I now see that your point was to fit Eliezer into the mainstream of Anglophone philosophy. I agree; he fits pretty well. And in particular, I agree (and regret) that he has been strongly influenced, directly or indirectly, by W. V. O. Quine. I'm not sure why I decided to mention Hayakawa's book - since it (like the sequences) definitely is too lowbrow to be part of that mainstream. I didn't mean for my comment to be taken as disagreement with you. I only meant to contribute some of that scholarship that you are always talking about. My point is, simply speaking, that if you are curious about where Eliezer 'stole' his ideas, you will find more of them in Hayakawa than in Peirce.

Comment author: lukeprog 20 March 2011 10:31:19PM 2 points

Probably, though Yudkowsky quotes Peirce here.

Comment author: [deleted] 21 March 2011 12:39:17AM 6 points

Possibly helpful: a PDF version of Medin's "Concepts and Categories."

Comment author: Matt_Simpson 20 March 2011 09:09:32PM 4 points

In undergrad I had to read Quine's From Stimulus to Science for one of my philosophy classes, and I remember thinking "so what's your point?" It seemed like what Quine really needed to do in that work was talk about induction, but he just skirted the issue. Have you read it? What's your take? This was my only real exposure to Quine, so it's probably part of the reason I dismissed him.

(It's been a couple years since I've read it, so my memory may be off or I might have a different view if I read it now.)

Comment author: lukeprog 20 March 2011 09:49:14PM 11 points

I think Quine's original works are hard to read, and not the best presentation of his own work. I recommend instead Quine: A Guide for the Perplexed.

In general, I think primary literature is over-recommended for initial learning. There is almost always better coverage of the subject in secondary literature.

Comment author: gjm 20 March 2011 11:59:54PM 1 point

FWIW, I think most of Quine's original work that I've read is very nicely written and very clear. (Not all; for some reason I never really got on with Word and Object.)

Comment author: lukeprog 22 May 2011 03:57:01AM 10 points

Philosophy quote of the day:

I am prepared to go so far as to say that within a few years, if there remain any philosophers who are not familiar with some of the main developments in artificial intelligence, it will be fair to accuse them of professional incompetence, and that to teach courses in philosophy of mind, epistemology, aesthetics, philosophy of science, philosophy of language, ethics, metaphysics, and other main areas of philosophy, without discussing the relevant aspects of artificial intelligence will be as irresponsible as giving a degree course in physics which includes no quantum theory.

Aaron Sloman (1978)

Comment author: Perplexed 22 May 2011 04:30:34AM 9 points

According to the link:

Aaron Sloman is a philosopher and researcher on artificial intelligence and cognitive science.

So, we have a spectacular mis-estimation of the time frame - claiming 33 years ago that AI would be seen as important "within a few years". That is off by one order of magnitude (and still counting!). Do we blame his confusion on the fact that he is a philosopher, or was the over-optimism a symptom of his activity as an AI researcher? :)

ETA:

as irresponsible as giving a degree course in physics which includes no quantum theory.

I'm not sure I like the analogy. QM is foundational for physics, while AI merely shares some (as yet unknown) foundation with all those mind-oriented branches of philosophy. A better analogy might be "giving a degree course in biology which includes no exobiology".

Hmmm. I'm reasonably confident that biology degree programs will not include more than a paragraph on exobiology until we have an actual example of exobiology to talk about. So what is the argument for doing otherwise with regard to AI in philosophy?

Oh, yeah. I remember. Philosophers, unlike biologists, have never shied away from investigating things that are not known to exist.

Comment author: ata 22 May 2011 06:32:22AM 4 points

So, we have a spectacular mis-estimation of the time frame - claiming 33 years ago that AI would be seen as important "within a few years".

He didn't necessarily predict that AI would be seen as important in that timeframe; what he said was that if it wasn't, philosophers would have to be incompetent and their teaching irresponsible.

Comment author: wedrifid 22 May 2011 06:46:19AM 5 points

what he said was that if it wasn't, philosophers would have to be incompetent and their teaching irresponsible.

Full marks... but let's be honest, he doesn't get too many difficulty points for making that prediction...

Comment author: lukeprog 24 May 2011 05:04:08AM 0 points

I didn't read the whole article. Where did Sloman claim that AI would be seen as important within a few years?

Comment author: Perplexed 24 May 2011 03:26:02PM 0 points

Where did Sloman claim that AI would be seen as important within a few years?

I inferred that he would characterize it as important in that time frame from:

... within a few years, if there remain any philosophers who are not familiar with some of the main developments in artificial intelligence, it will be fair to accuse them of professional incompetence ...

together with a (perhaps unjustified) assumption that philosophers refrain from calling their colleagues "professionally incompetent" unless the stakes are important. And that they generally do what is fair.

Comment author: Eliezer_Yudkowsky 25 March 2011 08:33:15AM 17 points

Note the way I speak with John Baez in the following interview, done months before the present post:

http://johncarlosbaez.wordpress.com/2011/03/25/this-weeks-finds-week-313/

In terms of what I would advocate programming a very powerful AI to actually do, the keywords are “mature folk morality” and “reflective equilibrium”...

In terms of Google keywords, my brand of metaethics is closest to analytic descriptivism or moral functionalism...

I was happy to try and phrase this interview as if it actually had something to do with philosophy.

Although I actually invented the relevant positions myself, on the fly when FAI theory needed it, then Googled around to find the philosophical nearest neighbor.

The fact that you are skeptical about this, and suspect I suppose that I accidentally picked up some analytic descriptivism or mature folk morality elsewhere and then forgot I'd read about it, even though I hadn't gone anywhere remotely near that field of philosophy until I wanted to try speaking their language, well, that strikes at the heart of why all this praise of "mainstream" philosophy strikes me the wrong way. Because the versions of "mature folk morality" and "reflective equilibrium" and "analytic descriptivism" and "moral functionalism" are never quite exactly right, they are built on entirely different premises of argument and never quite optimized for Friendly-AI thinking. And it seems to me, at least, that it is perfectly reasonable to simply ignore the field of philosophy and invent all these things the correct way, on the fly, and look up the nearest neighbor afterward; some wheels are simple enough that they're cheaper to reinvent than to look up and then modify.

Can philosophers be useful? Yes. Is it possible and sometimes desirable to communicate with people who've previously read philosophy in philosophical standard language? Yes. Is Less Wrong a branch from the mighty tree of mainstream philosophy? No.

Comment author: lukeprog 25 March 2011 05:05:51PM 19 points

With this comment, I think our disagreement is resolved, at least to my satisfaction.

We agree that philosophy can be useful, and that sometimes it's desirable to speak the common language. I agree that sometimes it is easier to reinvent the wheel, but sometimes it's not.

As for whether Less Wrong is a branch of mainstream philosophy, I'm not much interested to argue about that. There are many basic assumptions shared by Quinean philosophy and Yudkowskian philosophy in opposition to most philosophers, even down to some very specific ideas like naturalized epistemology that to my knowledge had not been articulated very well until Quine. And both Yudkowskian philosophy and Quinean naturalism spend an awful lot of time dissolving philosophical debates into cognitive algorithms and challenging intuitionist thinking - so far, those have been the main foci of experimental philosophy, which is very Quinean, and was mostly founded by one of Quine's students, Stephen Stich. Those are the reasons I presented Yudkowskian philosophy as part of the broadly Quinean movement in philosophy.

On the other hand, I'm happy to take your word for it that you came up with most of this stuff on your own, and only later figured out what the philosophers have been calling it, so in another way Yudkowskian philosophy is thoroughly divorced from mainstream philosophy - maybe even more than, say, Nassim Taleb's philosophical work.

And once we've said all that, I don't think any question remains about whether Less Wrong is really part of a larger movement in philosophy.

Anyway, thanks for this further clarification. I've learned a lot from our discussion. And I'm enjoying your interview with Baez. Cheers.

Comment author: ToddStark 17 December 2012 04:40:41AM 1 point

On the general issue of the origin of various philosophical ideas, I had a thought. Perhaps we take a lot of our tacit knowledge for granted in our thinking about attributions. I suspect that abstract ideas become part of wider culture and then serve as part of the reasoning of other people without them explicitly realizing the role of those abstracts. For example, Karl Popper had a concept of "World 3", which was essentially the world of artifacts that are inherited from generation to generation and become a kind of background for the thinking of each successive generation who inherits that culture. That concept of "unconscious ideas" was also found in a number of other places (and has been for as far back as we can remember) and has been incorporated into many theories and explanations of varying usefulness. Some of Freud's ideas have a similar rough feel to them, and his albeit unscientific ideas became highly influential in popular culture and influenced all sorts of things, including some productive psychology programs that emphasize influences outside of explicit awareness. Our thinking is given shape in part by a background that we aren't explicitly aware of, and as a result we can't always make accurate attributions of intellectual history except in terms of what has been written down. Some of the influence happens outside of our awareness via various mechanisms of implicit or tacit learning. We know a lot more than we realize we know; we "stand on the shoulders of others" in a somewhat obscure sense as well as the more obvious one.

An important implication of this might be that our reasoning starts from assumptions and conceptual schemes that we don't really think about because it is "intuitive" and appears to each of us as "commonsense." However it may be that "commonsense" and "intuition" are forms of ubiquitous expertise that differ somewhat between people. If that is the case, then people reason from different starting points and perhaps can reason to different conclusions even when rigorously logical, and this would seemingly support a perspectivist view where logic is not by itself adequate to reconcile differences in opinion.

If that is the case, then it helps explain why we can't seem to get rid of some fundamental problems just by clarifying concepts and reasoning from evidence. Those operations are themselves shaped by a background. One of the important roles of philosophy may be to give a voice to some of that background, a voice which may not always be scientific (that is, empirical, testable, effectively communicated through mathematics). So it may not be the philosophers who actually make the ideas available to us, but the philosophers who make them explicit outside of science.

I'm not saying that contradicts the possibly unique value of naturalistic and reductionistic approaches, systematization, etc., just that if we think of philosophy purely in utilitarian terms as a provider of new theories that feed science, we may miss the point of its role in culture and our tracking and understanding of the genesis of ideas.

Comment author: BobTheBob 27 March 2011 04:40:34PM 4 points

You say,

the versions of "mature folk morality" and "reflective equilibrium" and "analytic descriptivism" and "moral functionalism" are never quite exactly right, they are built on entirely different premises of argument and never quite optimized for Friendly-AI thinking.

and that you prefer to "invent all these things the correct way".

From this and your preceding text I understand,

  • that philosophers have identified some meta-ethical theses and concepts similar to concepts and theses you've invented all by yourself,

  • that the philosophers' theses and concepts are in some way systematically defective or inadequate, and

  • that the arguments used to defend the theses are different than the arguments which you would use to defend them.

(I'm not sure what you mean in saying the concepts and theses aren't optimized for Friendly-AI thinking.)

You imply that you've done a comprehensive survey, to arrive at these conclusions. It'd be great if you could share the details. Which discussions of these ideas have you studied, how do your concepts differ from the philosophers', and what specifically are the flaws in the philosophers' versions? I'm not familiar with these meta-ethical theses but I see that Frank Jackson and Philip Pettit are credited with sparking the debate in philosophy - what in their thinking do you find inadequate? And what makes your method of invention (to use your term) of these things the correct one?

I apologize if the answers to these questions are all contained in your sequences. I've looked at some of them but the ones I've encountered do not answer these questions.

You disparage the value of philosophy, but it seems to me you could benefit from it. In another of your posts, 'How An Algorithm Feels From Inside', I came across the following:

When you look at a green cup, you don't think of yourself as seeing a picture reconstructed in your visual cortex - although that is what you are seeing - you just see a green cup. You think, "Why, look, this cup is green," not, "The picture in my visual cortex of this cup is green."

This is false - the claim, I mean, that when you look at a green cup, you are seeing a picture in your visual cortex. On the contrary, the thing you see is reflecting light, is on the table in front of you (say), has a mass of many grams, is made of ceramic (say), and so on. It's a cup - it emphatically is not in your brainpan. Now, if you want to counter that I'm just quibbling over the meaning of the verb 'to see', that's fine - my point is that it is you who are using it in a non-standard way, and it behoves you to give a coherent explication of your meaning. The history of philosophical discussions suggests this is not an easy task. The root of the problem is the effort to push the subject/object distinction - which verbs of perception seem to require - within the confines of the cranium. Typically, the distinction is only made more problematic - the object of perception (now a 'picture in the visual cortex') still doesn't have the properties it's supposed to (greenness), and the subject doing the seeing seems even more problematic. The self is made identical to or resident within some sub-region of the brain, about which various awkward questions now arise. Daniel Dennett has criticized this idea as the 'Cartesian Theatre' model of perception.

Having talked to critics of philosophy before, I know such arguments are often met with considerable impatience and derision. They are irrelevant to the understanding being sought, a waste of time, etc. This is fine - it may be true, for many, including you. If this is so, though, it seems to me the rational course is simply to acknowledge its concerns are orthogonal to your own, and if you seem to come into collision (as above), to show that your misleading metaphor isn't really doing any work, and hence is benign. In this case you aren't re-inventing the wheel in coming up with your own theories, but something altogether different - a skid, maybe.

Comment author: [deleted] 20 March 2011 09:50:48PM 20 points

The community definitely needs to work on this whole "virtue of scholarship" thing.

Comment author: Davorak 22 March 2011 08:40:01AM 2 points

LW community or the philosophy community?

Comment author: [deleted] 22 March 2011 01:40:57PM 3 points

I was talking about the LW community.

Comment author: djc 23 March 2011 11:00:55AM 11 points

It's not Quinean naturalism. It's logical empiricism with a computational twist. I don't suggest that everyone go out and read Carnap, though. One way that philosophy makes progress is when people work in relative isolation, figuring out the consequences of assumptions rather than arguing about them. The isolation usually leads to mistakes and reinventions, but it also leads to new ideas. Premature engagement can minimize all three.

Comment author: lukeprog 23 March 2011 07:49:46PM 1 point

To some degree. It might be more precise to say that many AI programs in general are a computational update to Carnap's The Logical Structure of the World (1928).

But logical empiricism as a movement is basically dead, while what I've called Quinean naturalism is still a major force.

Comment author: Jack 23 March 2011 08:20:19PM 1 point [-]

I'd actually say that the central shared features you're identifying - dissolving the philosophical paradox instead of reifying it, and the centrality of observation and science - go back to Hume.

Comment author: Vladimir_M 20 March 2011 10:38:14PM 12 points [-]

Many mainstream philosophers had been defending Less Wrong-ian positions for decades before Overcoming Bias or Less Wrong existed.

When I read posts on Overcoming Bias (and sometimes also LW) discussing various human frailties and biases, especially those related to status and signaling, what often pops into my mind are observations by Friedrich Nietzsche. I've found that many of them represent typical OB insights, though expressed in a more poetic, caustic, and disorganized way. Now of course, there's a whole lot of nonsense in Nietzsche, and a frightful amount of nonsense in the subsequent philosophy inspired by him, but his insight about these matters is often first-class.

Comment author: MichaelVassar 22 March 2011 07:59:49PM 3 points [-]

I agree with this actually.

Comment author: Nisan 21 March 2011 12:26:39AM 17 points [-]

That's Kornblith and Stich and Bickle [...]

Those names are clearly made-up :)

Comment author: Will_Sawin 21 March 2011 02:30:03AM *  7 points [-]

From my small but nontrivial knowledge of Quine, he always struck me as having a critically wrong epistemology.

LW-style epistemology looks like this:

  1. Let's figure out how a perfectly rational being (AI) learns.
  2. Let's figure out how humans learn.
  3. Let's use that knowledge to fix humans so that they are more like AIs.

whereas Quine's seems more like

  1. Let's figure out how humans learn

which seems to be missing most of the point.

His boat model always struck me as something confused that should be strongly modified or replaced by a Bayesian epistemology, in which the posterior follows logically and non-destructively from the prior - but I may be in the minority on LW on this.
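The Bayesian picture Will_Sawin alludes to, where the posterior follows mechanically from the prior and the evidence, can be sketched in a few lines (the hypotheses and numbers below are invented purely for illustration):

```python
def bayes_update(prior, likelihoods):
    """Return the posterior P(H|E) given a prior P(H) and likelihoods P(E|H)."""
    unnormalized = {h: prior[h] * likelihoods[h] for h in prior}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}

# Two hypotheses, equally probable a priori; the evidence is
# twice as likely under h1 as under h2.
prior = {"h1": 0.5, "h2": 0.5}
posterior = bayes_update(prior, {"h1": 0.8, "h2": 0.4})
print(posterior)  # h1 rises to 2/3, h2 falls to 1/3
```

Nothing is discarded in the update: the posterior is fully determined by the prior and the likelihoods, which is the "non-destructive" property the comment points at.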

Comment author: lukeprog 21 March 2011 02:36:27AM 4 points [-]

It's true that Quine lacked the insights of contemporary probability theory and AI, but remember that Quine's most significant work was done before 1970. Quine was also a behaviorist. He was wrong about many things.

My point was that both Quine and Yudkowsky think that recursive justification bottoms out in using the lens that sees its own flaws to figure out how humans gain knowledge, and correcting mistakes that come in. That's naturalized epistemology right there. Epistemology as cognitive science. Of course, naturalized epistemology has made a lot of progress since then thanks to the work of Kahneman and Tversky and Pearl and so on - the people that Yudkowsky learned from.

Comment author: Will_Sawin 21 March 2011 11:17:25AM 4 points [-]

Bayesian inference is not a big step up from Laplace, and the idea of an optimal model that humans should try to approximate is a common philosophical position.

Comment author: MichaelVassar 22 March 2011 07:54:30PM 6 points [-]

Who cares when his work was done. We want to know how to find work that helps us to understand things today. It's not about how smart he was, but about how much his ideas can help us.

Comment author: lukeprog 23 March 2011 05:28:26AM *  1 point [-]

And my answer is "not much." Like I say, all the basics of Quinean philosophy are already assumed by Less Wrong. I don't recommend anyone read Quine. It's (some of) the stuff his followers have done in the last 30 years that is useful - both stuff that is already being used by SIAI people, and stuff that is useful but (previously) undiscovered by SIAI people. I listed some of that stuff here.

Comment author: Apprentice 21 March 2011 10:45:39AM 4 points [-]

What's wrong with behaviorism? I was under the impression that behaviorism was outdated but when my daughter was diagnosed as speech-delayed and borderline autistic we started researching therapy options. The people with the best results and the best studies (those doing 'applied behavior analysis') seem to be pretty much unreconstructed Skinnerists. And my daughter is making good progress now.

I'll take flawed philosophy with good results over the opposite any day of the week. But I'm still curious about flaws in the philosophy.

Comment author: ciphergoth 21 March 2011 10:55:23AM 4 points [-]

May I recommend Dennett's "Skinner Skinned", in Brainstorms?

Comment author: Apprentice 21 March 2011 11:46:13AM 7 points [-]

Okay, I read it. It's funny how Dennett's criticism of Skinner partially mirrors Luke's criticism of Eliezer. Because Skinner uses terminology that's not standard in philosophy, Dennett feels he needs to be "spruced up".

"Thus, spruced up, Skinner's position becomes the following: don't use intentional idioms in psychology" (p. 60). It turns out that this is Quine's position and Dennett sort of suggests that Skinner should just shut up and read Quine already.

Ultimately, I can understand and at least partially agree with Dennett that Skinner goes too far in denying the value of mental vocabulary. But, happily, this doesn't significantly alter my belief in the value of Skinner type therapy. People naturally tend to err in the other direction and ascribe a more complex mental life to my daughter than is useful in optimizing her therapy. And I still think Skinner is right that objections to behaviorist training of my daughter in the name of 'freedom' or 'dignity' are misplaced.

Anyway, this was a useful thing to read - thank you, ciphergoth!

Comment author: Apprentice 21 March 2011 11:12:23AM 1 point [-]

Thank you, holding the book in my hand and reading it now.

Comment author: lukeprog 21 March 2011 04:57:44PM 3 points [-]

No, I'm talking about behaviorist psychology. Behaviorist psychology denied the significance (and sometimes the existence) of cognitive states. Showing that cognitive states exist and matter was what paved the way to cognitive science. Many insights from behaviorist psychology (operant conditioning) remain useful, but its central assumption is false, and it must be false for anyone to be doing cognitive science.

Comment author: Apprentice 21 March 2011 05:26:40PM *  3 points [-]

Okay, but now I'm getting a bit confused. You seem to me to have come out with all the following positions:

  • The worthwhile branch of philosophy is Quinean. (this post)
  • Quine was a behaviorist. (a comment on this post)
  • Behaviorism denies the possibility of cognitive science. (a comment on this post)
  • The worthwhile part of philosophy is cognitive science. ("for me, philosophy basically just is cognitive science" - Lukeprog)

Those things don't seem to go well together. What am I misunderstanding?

Comment author: Apprentice 21 March 2011 05:26:52PM 1 point [-]

Quine apparently said, "I consider myself as behavioristic as anyone in his right mind could be". That sounds good, can I subscribe to that?

Comment author: David_Gerard 21 March 2011 01:37:41PM *  8 points [-]

Personally, I'm finding that avoiding anthropomorphising humans, i.e. ignoring the noises coming out of their mouths in favour of watching their actions, pays off quite well, particularly when applied to myself ;-) I call this the "lump of lard with buttons to push" theory of human motivation. Certainly if my mind had much effect on my behaviour, I'd expect to see more evidence than I do ...

Comment author: TheOtherDave 21 March 2011 01:59:10PM 8 points [-]

"lump of lard with buttons to push"

I take exception to that: I have a skeletal structure, dammit!

Comment author: NancyLebovitz 22 March 2011 07:57:49PM 4 points [-]

I think the reference is to the brain rather than to the whole body.

Comment author: TheOtherDave 22 March 2011 09:01:15PM 3 points [-]

(blink)

(nods) Yes, indeed.

Exception withdrawn.

Well played!

Comment author: [deleted] 22 March 2011 08:53:39PM 2 points [-]

It sounds like what you are describing is rationalization, either doing it yourself or accepting people's rationalization about themselves.

Comment author: MichaelVassar 22 March 2011 07:55:10PM 2 points [-]

Yep. Anthropomorphizing humans is a disastrously wrong thing to do. Too bad everyone does it.

Comment author: SilasBarta 22 March 2011 08:02:42PM 12 points [-]

No, they just look like they're doing it; saying humans are anthropomorphizing would attribute more intentionality to humans than is justified by the data.

Comment author: [deleted] 03 April 2011 09:24:22AM 0 points [-]

Is this an example? I've been working on paying attention to intention. If I know someone cares about me, but is expressing it poorly, I try to focus on their intent rather than their expression of that intent.

Comment author: Eliezer_Yudkowsky 22 March 2011 08:51:42PM 9 points [-]

If you're wondering why I'm afraid of philosophy, look no further than the fact that this discussion is assigning salience to LW posts in a completely different way than I do.

I mean, it seems to me that where I think an LW post is important and interesting in proportion to how much it helps construct a Friendly AI, how much it gets people to participate in the human project, or the amount of confusion that it permanently and completely dissipates, all of this here is prioritizing LW posts to the extent that they happen to imply positions on famous ongoing philosophical arguments.

That's why I'm afraid to be put into any philosophical tradition, Quinean or otherwise - and why I think I'm justified in saying that their cognitive workflow is not like unto my cognitive workflow.

Comment author: lukeprog 23 March 2011 05:43:10AM *  10 points [-]

With this comment at least, you aren't addressing the list of 20+ useful contributions of mainstream philosophy I gave.

Almost none of the items I listed have to do with famous old "problems" like free will or reductionism.

Instead, they're stuff that (1) you're already making direct use of in building FAI, like reflective equilibrium, or (2) stuff that is almost identical to the 'coping with cognitive biases' stuff you've written about so much, like Bishop & Trout (2004), or (3) stuff that is dissolving traditional debates into the cognitive algorithms that produce them, which you seem to think is the defining hallmark of LW-style philosophy, or (4) generally useful stuff like the work on catastrophic risks coming out of FHI at Oxford.

I hope you aren't going to keep insisting that mainstream philosophy has nothing useful to offer after reading my list. On this point, it may be time for you to just say "oops" and move on.

After all, we already agree on most of the important points, like you said. We agree that philosophy is an incredibly diseased discipline. We agree that people shouldn't go out and read Quine. We agree that almost everyone should be reading statistics and AI and cognitive science, not mainstream philosophy. We agree that Eliezer Yudkowsky should not read mainstream philosophy. We agree that "their" cognitive workflow is "not like unto" your cognitive workflow.

So I don't understand why you would continue to insist that nothing (or almost nothing) useful comes out of mainstream philosophy, after the long list of useful things I've provided, many of which you are already using yourself, and many more of which closely parallel what you've been doing on Less Wrong all along, like dissolving traditional debates into cognitive algorithms and examining how to get at the truth more often through awareness and counteracting of our cognitive biases.

The sky won't fall if you admit that some of mainstream philosophy is useful, and that you already make use of some of it. I'm not going to go around recommending people join philosophy programs. This is simply about making use of the resources that are out there. Most of those resources are in statistics and AI and cognitive science and physics and so on. But a very little of it happens to come out of mainstream philosophy, especially from that corner of mainstream philosophy called Quinean naturalism which shares lots of (basic) assumptions with Less Wrong philosophy.

As you know, this stuff matters. We're trying to save the world, here. Either some useful stuff comes out of mainstream philosophy, or it doesn't. There is a correct answer to that question. And the correct answer is that some useful stuff does come out of mainstream philosophy - as you well know, because you're already making use of it.

Comment author: Emile 24 March 2011 07:22:46PM 5 points [-]

We agree that people shouldn't go out and read Quine. We agree that almost everyone should be reading statistics and AI and cognitive science, not mainstream philosophy.

I think it would be good for LessWrong to have a bit more academic philosophers and students of philosophy, to have a slightly higher philosophers/programmers ratio (as long as it doesn't come with the expectation that everybody should understand a lot of concepts in philosophy that aren't in the sequences).

Comment author: Vladimir_Nesov 23 March 2011 10:55:20AM *  2 points [-]

So I don't understand why you would continue to insist that nothing (or almost nothing) useful comes out of mainstream philosophy

You still haven't given an actual use case for your sense of "useful", only historical priority (the qualifier "come out" is telling, for example), and you haven't connected your discussion involving the word "useful" to the use case Eliezer assumes (even where you answered that side of the discussion without using the word, by agreeing that particular use cases for mainstream philosophy are a loss). It's an argument about the definition of "useful", or something hiding behind this equivocation.

I suggest tabooing "useful", when applied to literature (as opposed to activity with stated purpose) on your side.

Comment author: lukeprog 23 March 2011 12:15:16PM *  4 points [-]

Eliezer and I, over the course of our long discussion, have come to some understanding of what would constitute useful. Note, though, that Philosophy_Tutor suggested that Eliezer taboo his sense of "useful" before trying to declare every item on my list useless.

Whether or not I can provide a set of necessary and sufficient conditions for "useful", I've repeatedly pointed out that:

  1. Several works from mainstream philosophy do the same things he has spent a great deal of time doing and advocating on Less Wrong, so if he thinks those works are useless then it would appear he thinks much of what he has done on Less Wrong is useless.

  2. Quite a few works from mainstream philosophy have been used by him, so presumably he finds them useful.

I can't believe how difficult it is to convince some people that some useful things come out of mainstream philosophy. To me, it's a trivial point. Those resisting this truth keep trying to change the subject and make it about how philosophy is a diseased subject (agreed!), how we shouldn't read Quine (agreed!), how other subjects are more important and useful (agreed!), and so on.

Comment author: Vladimir_Nesov 23 March 2011 04:34:21PM *  6 points [-]

I agree that you've agreed on many specific things. I suggest that the sense of remaining disagreement is currently confused through refusing to taboo "useful". You use one definition, he uses a different one, and there is possibly genuine disagreement in there somewhere, but you won't be able to find it without again switching to more specific discussion.

Also, taboo doesn't work by giving a definition, instead you explain whatever you wanted without using the concept explicitly (so it's always a definition in a specific context).

For example:

Quite a few works from mainstream philosophy have been used by him, so presumably he finds them useful.

Instead of debating this point of the definition (and what constitutes "being used"), consider the questions of whether Eliezer agrees that he was influenced (in any sense) by quite a few works from mainstream philosophy (obviously), whether they provided insights that would've been unavailable otherwise (probably not), whether they happen to already contain some of the same basic insights found elsewhere (yes), whether they originate them (it depends), etc.

It's a long list, not as satisfying as the simple "useful/not", but this is the way to unpack the disagreement. And even if you agree on every fact, his sense of "useful" can disagree with yours.

Comment author: Yvain 23 March 2011 06:54:24PM 12 points [-]

I can't believe how difficult it is to convince some people that some useful things come out of mainstream philosophy. To me, it's a trivial point.

If it's not immediately obvious how an argument connects to a specific implementable policy or empirical fact, the default is to covertly interpret it as being about status.

Since there are both good and bad things about philosophy, we can choose to emphasize the good (which accords philosophers and those who read them higher status) or emphasize the bad (which accords people who do their own work and ignore mainstream philosophy higher status).

If there are no consequences to this choice, it's more pleasant to dwell upon the bad: after all, the worse mainstream philosophy does, the more useful and original this makes our community; the better mainstream philosophy does, the more it suggests our community is a relatively minor phenomenon within a broader movement of other people with more resources and prestige than ourselves (and the more those of us whose time is worth less than Eliezer's should be reading philosophy journals instead of doing something less mind-numbing).

I think this community is smart enough to avoid many such biases if given a real question with a truth-value, but given a vague open question like "Yay philosophy - yes or no?" of course we're going to take the side that makes us feel better.

I think the solution is to present specific insights of Quinean philosophy in more depth, which you already seem like you're planning to do.

Comment author: lukeprog 23 March 2011 07:05:04PM *  2 points [-]

Maybe my original post gave the wrong impression of "which side I'm on." (Yay philosophy or no?) Like Quine and Yudkowsky, I've generally considered myself an "anti-philosophy philosopher."

But you're right that such vague questions and categorizations are not really the point. The solution is to present specific useful insights of mainstream philosophy, and let the LW community make use of them. I've done that in brief, here, and am working on posts to elaborate some of those items in more detail.

What disappoints me is the double standard being used (by some) for what counts as "useful" when presented in AI books or on Less Wrong, versus what counts as "useful" when it happens to come from mainstream philosophy.

Comment author: Vladimir_Nesov 23 March 2011 08:18:47PM *  1 point [-]

If it's not immediately obvious how an argument connects to a specific implementable policy or empirical fact, the default is to covertly interpret it as being about status.

Sounds plausible, and if true, a useful observation.

Comment author: Jack 23 March 2011 08:02:18PM 8 points [-]

I'm worried part of this debate is just about status. When someone comes in and says "Hey, you guys should really pay more attention to what x group of people with y credentials says about z", it reminds everyone here, most of whom lack y credentials, that society doesn't recognize them as an authority on z, and so they are somehow less valuable than group x. So there is an impulse to say that z is obvious, that z doesn't matter, or that having y isn't really a good indicator of being right about z. That way, people here don't lose status relative to group x.

Conversely, members of group x probably put money and effort into getting credential y and will be offended by the suggestion that what they know about doesn't matter, that it is obvious or that their having credential y doesn't indicate they know anything more than anyone else.

Me, I have an undergraduate degree in philosophy which I value, so I'm sure I get a little defensive when philosophy is mocked or criticized around here. But most people here probably fit in the first category. Eliezer, being a human being like everybody else, is likely a little insecure about his lack of a formal education and perhaps particularly apt to deny an academic community status as domain experts in fields he's worked in (even though he is certainly right that formal credentials are overvalued).

I think a lot of this argument isn't really a disagreement over what is valuable and what isn't - it's just people emphasizing or de-emphasizing different ideas and writers to make themselves look higher status.

I've read Quine and you haven't so obviously Quine's insights were huge leaps forward and no progress is possible without standing on his shoulders. Most of what you've said here was said earlier and better by other people I've read.

...

I haven't read Quine and you have? Well in that case everything he ever said was obvious and I totally came up with it on my own. What's actually impressive coming up with these interesting ideas over here based on those obvious ideas Quine thought up. Any philosophers do that? No? That's what I thought.

These statements have no content they just say "My stuff is better than your stuff".

Comment author: lukeprog 23 March 2011 08:15:16PM *  1 point [-]

I think such debates unavoidably include status motivations. We are status-oriented, signaling creatures. Politics mattered in our ancestral environment.

Of course you know that I never said anything like either of the parody quotes provided. And I'm not trying to say that Quinean philosophy is better than Less Wrong. The claim I'm making is a very weak one: that some useful stuff comes out of mainstream philosophy, and Less Wrong shouldn't ignore it when that happens just because the source happens to be mainstream philosophy.

Comment author: loup-vaillant 27 June 2011 10:02:45PM 1 point [-]

I'm late, but… is there a substantial chain of cause and effect between the discovery of useful conclusions in mainstream philosophy and Eliezer's use of those conclusions? Counterfactually, if those conclusions had not been drawn, would it be less likely that Eliezer would have arrived at them anyway?

Eliezer seems to deny this chain of cause and effect. I wonder to what extent you think such a denial is unjustified.

Comment author: XiXiDu 23 March 2011 09:29:21AM 4 points [-]

I mean, it seems to me that where I think an LW post is important and interesting in proportion to how much it helps construct a Friendly AI, how much it gets people to participate in the human project...

I've frequently been criticized for suggesting that you hold that attitude. The usual response is that LW is not about friendly AI or has not much to do with the SIAI.

Comment author: lukeprog 21 May 2011 04:16:15PM *  5 points [-]

So I stumbled on these instructions:

Go to a random wikipedia article. Click on the first link (skip parentheses). Repeat. You will always end up on 'Philosophy.'

Below is a list of the random articles I began from, and how long it took me to get to the Philosophy article.

Gymnasium Philippinum: 11
Brnakot: 23
Ohrenbach: 11
Vrijburg: 24
The Love Transcendent: 14
2010 in tennis: 13
Cross of All Nations: 24
List of teams and cyclists in the 2003 Tour de France: 14
Anton Ehmann: 19
Traveling carnival: 25
Frog: 13

Some, however, go into an immediate loop, for example between fringe theatre and alternative theatre.

Philosophy, of course, loops back on itself in just a few steps.

The Wikipedia version of the Collatz conjecture.
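The procedure described above - follow the first link repeatedly until you reach "Philosophy" or enter a loop - can be simulated on a toy link graph. The miniature mapping below is invented for illustration (the real data lives on Wikipedia):

```python
def chase_first_link(start, first_link, target="Philosophy", limit=100):
    """Follow first links from `start`, counting steps until `target`
    is reached; report a loop if a page repeats before that."""
    seen = []
    page = start
    while page != target and page not in seen and len(seen) < limit:
        seen.append(page)
        page = first_link[page]
    if page == target:
        return ("reached", len(seen))
    return ("loop", page)

# A made-up miniature link graph, including the fringe theatre /
# alternative theatre loop mentioned above.
first_link = {
    "Frog": "Amphibian", "Amphibian": "Animal", "Animal": "Biology",
    "Biology": "Natural science", "Natural science": "Science",
    "Science": "Knowledge", "Knowledge": "Philosophy",
    "Fringe theatre": "Alternative theatre",
    "Alternative theatre": "Fringe theatre",
}
print(chase_first_link("Frog", first_link))            # ('reached', 7)
print(chase_first_link("Fringe theatre", first_link))  # ('loop', 'Fringe theatre')
```

Like the Collatz iteration, each chain either reaches the fixed target or cycles; the loop detection is what distinguishes the two cases.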

Comment author: Vladimir_Nesov 21 May 2011 05:00:52PM 5 points [-]

And if you click two more times starting from Philosophy, you get to Rationality. Rationality, of course, loops back to itself.

Comment author: calcsam 21 May 2011 05:52:56PM 4 points [-]

This is probably a result of what Eliezer said about going up one level. The first link in Wikipedia almost always goes up one level. Philosophy is the universal top level.

Comment author: [deleted] 20 March 2011 11:32:23PM 6 points [-]

Thanks so much. I didn't know about Quine, and from what you've quoted it seems quite clearly in the same vein as LessWrong.

Also, out of curiosity, do you know if anything's been written about whether an agent (natural or artificial) needs goals in order to learn? Obviously humans and animals have values, at least in the sense of reward and punishment or positive and negative outcomes -- does anyone think that this is of practical importance for building processes that can form accurate beliefs about the world?

Comment author: Eliezer_Yudkowsky 21 March 2011 12:01:46AM 8 points [-]

What you care about determines what your explorations learn about. An AI that didn't care about anything you thought was important, even instrumentally (it had no use for energy, say) probably wouldn't learn anything you thought was important. A probability-updater without goals and without other forces choosing among possible explorations would just study dust specks.

Comment author: [deleted] 21 March 2011 12:23:20AM 2 points [-]

That was my intuition. Just wanted to know if there's more out there.

Comment author: Eliezer_Yudkowsky 21 March 2011 12:36:36AM 5 points [-]

What, you mean in mainstream philosophy? I don't think mainstream philosophers think that way, even Quineans. The best ones would say gravely, "Yes, goals are important" and then have a big debate with the rest of the field about whether goals are important or not. Luke is welcome to prove me wrong about that.

Comment author: utilitymonster 21 March 2011 11:09:01AM 4 points [-]

I actually don't think this is right. Last time I asked a philosopher about this, they pointed to an article by someone (I.J. Good, I think) about how to choose the most valuable experiment (given your goals), using decision theory.
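The calculation utilitymonster points to (usually credited to I.J. Good) is standard decision theory: an experiment's value of information is the expected gain from being able to act on its result rather than on the prior alone. A minimal sketch, with invented numbers:

```python
def expected_value_of_information(prior, payoff, likelihood):
    """EVI of an experiment: expected utility of choosing an action
    after seeing the result, minus expected utility of the best
    action chosen on the prior alone.

    prior:      P(state),                e.g. {"A": 0.5, "B": 0.5}
    payoff:     payoff[action][state]
    likelihood: P(outcome | state),      likelihood[state][outcome]
    """
    # Best expected payoff acting now, without the experiment.
    act_now = max(
        sum(prior[s] * payoff[a][s] for s in prior) for a in payoff
    )
    # Expected best payoff after observing each possible outcome.
    outcomes = {o for s in likelihood for o in likelihood[s]}
    act_later = 0.0
    for o in outcomes:
        p_o = sum(prior[s] * likelihood[s].get(o, 0.0) for s in prior)
        if p_o == 0:
            continue
        posterior = {s: prior[s] * likelihood[s].get(o, 0.0) / p_o
                     for s in prior}
        act_later += p_o * max(
            sum(posterior[s] * payoff[a][s] for s in posterior)
            for a in payoff
        )
    return act_later - act_now

# Two states, a bet on each, and a noisy test that is right 80% of the time.
prior = {"A": 0.5, "B": 0.5}
payoff = {"bet_A": {"A": 1.0, "B": 0.0}, "bet_B": {"A": 0.0, "B": 1.0}}
likelihood = {"A": {"says_A": 0.8, "says_B": 0.2},
              "B": {"says_A": 0.2, "says_B": 0.8}}
evi = expected_value_of_information(prior, payoff, likelihood)
print(round(evi, 3))  # 0.3
```

Given several candidate experiments, the decision-theoretic rule is simply to run the one whose EVI most exceeds its cost.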

Comment author: lukeprog 21 March 2011 01:18:21AM 4 points [-]

Yes, that's about right.

AI research is where to look in regards to your question, SarahC. Start with chapter 2 and the chapters with 'decisions' in the title in AI: A Modern Approach.

Comment author: [deleted] 21 March 2011 01:19:25AM 1 point [-]

Thank you!

Comment author: komponisto 21 March 2011 02:01:55AM 3 points [-]

I didn't know about Quine

My first exposure was his mathematical logic book. At the time, I didn't even realize he had a reputation as a philosopher per se. (I knew from the back cover of the book that he was in the philosophy department at Harvard, but I just assumed that that was where anyone who got sufficiently "foundational" about their mathematics got put.)

Comment author: [deleted] 21 March 2011 02:09:15AM 2 points [-]

Ah, see, when I learned a little logic, I shuddered, muttered "That is not dead which can unsleeping lie," and moved on. I'll come back to it if it ever seems useful though.

Comment author: cousin_it 21 March 2011 01:33:57PM *  6 points [-]

Yah, I sometimes joke that logicians are viewed by mathematicians in the same way that mathematicians are viewed by normal people. Logic makes complete sense to me, but some of my professional mathematician friends cannot understand my tastes at all. I, on the other hand, cannot understand how one can get interested in homological algebra or other such things, when there are all these really pressing logical issues to solve :-)

Comment author: Will_Sawin 21 March 2011 11:05:39AM 4 points [-]

That is exactly why I enjoy learning about logic.

Comment author: MichaelVassar 22 March 2011 07:49:23PM 1 point [-]

Will Sawin, aspiring necromancer... That should be on your business card.

Comment author: Will_Sawin 22 March 2011 10:00:19PM 1 point [-]

I should have a business card.

Comment author: Vladimir_Nesov 21 March 2011 02:23:29PM *  1 point [-]

Obviously humans and animals have values, at least in the sense of reward and punishment or positive and negative outcomes -- does anyone think that this is of practical importance for building processes that can form accurate beliefs about the world?

Practical importance for what purpose? Whatever that purpose is, adding heuristics that optimize the learning heuristics for better fulfillment of that purpose would be fruitful for that purpose.

It would be of practical importance to the extent that the original implementation of the learning heuristics is suboptimal, and to the extent that implementable learning-heuristic-improving heuristics can work on that. If you are talking about autonomous agents, self-improvement is a necessity, because you need open-ended potential for further improvement. If you are talking about non-autonomous tools people write, it's often difficult to construct useful heuristic-improvement heuristics. But of course their partially-optimized structure is already chosen by making use of the values they're optimized for - the purpose resides in the designers.

Comment author: lukeprog 20 March 2011 11:51:01PM 1 point [-]

Could you clarify what you mean? When I parse your second paragraph, it comes across to my mind as three or four separate questions...

Comment author: [deleted] 21 March 2011 12:21:51AM 4 points [-]

Ok, this is actually an area on which I'm not well-informed, which is why I'm asking you instead of "looking it up" -- I'd like to better understand exactly what I want to look up.

Let's say we want to build a machine that can form accurate predictions and models/categories from observational data of the sort we encounter in the real world -- somewhat noisy, and mostly "uninteresting" in the sense that you have to compress or ignore some of the data in order to make sense of it. Let's say the approach is very general -- we're not trying to solve a specific problem and hard-coding in a lot of details about that problem, we're trying to make something more like an infant.

Would learning happen more effectively if the machine had some kind of positive/negative reinforcement? For example, if the goal is "find the red ball and fetch it" (which requires learning how to recognize objects and also how to associate movements in space with certain kinds of variation in the 2d visual field) would it help if there was something called "pain" which assigned a cost to bumping into walls, or something called "pleasure" which assigned a benefit to successfully fetching the ball?

Is the fact that animals want food and positive social attention necessary to their ability to learn efficiently about the world? We're evolved to narrow our attention to what's most important for survival -- we notice motion more than we notice still figures, we're better at recognizing faces than arbitrary objects. Is it possible that any process needs to have "desires" or "priorities" of this sort in order to narrow its attention enough to learn efficiently?

To some extent, most learning algorithms have cost functions associated with failure or error, even the one-line formulas. It would be a bit silly to say the Mumford-Shah functional feels pleasure and pain. So I guess there's also the issue of clarifying exactly what desires/values are.
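A minimal sketch of the kind of setup described above - a "pain" cost for bumping into walls and a "pleasure" reward for reaching the ball - is tabular Q-learning on a toy corridor. Every detail below (the world, rewards, and parameters) is invented for illustration:

```python
import random

def train(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.2, seed=0):
    """Tabular Q-learning on a 5-cell corridor: the 'ball' sits at
    cell 4 (+1 'pleasure'); bumping the left wall costs -1 ('pain')."""
    random.seed(seed)
    q = {(s, a): 0.0 for s in range(5) for a in (-1, +1)}
    for _ in range(episodes):
        s = 0
        while s != 4:
            # Epsilon-greedy action selection: mostly exploit, sometimes explore.
            a = random.choice((-1, +1)) if random.random() < epsilon \
                else max((-1, +1), key=lambda a: q[(s, a)])
            s2 = max(0, min(4, s + a))
            # Reward: pleasure at the ball, pain for hitting the wall.
            r = 1.0 if s2 == 4 else (-1.0 if s2 == s else 0.0)
            q[(s, a)] += alpha * (
                r + gamma * max(q[(s2, -1)], q[(s2, +1)]) - q[(s, a)]
            )
            s = s2
    return q

q = train()
policy = [max((-1, +1), key=lambda a: q[(s, a)]) for s in range(4)]
print(policy)  # the learned greedy policy heads toward the ball in every cell
```

The point of the toy: the scalar reward signal is doing exactly the attention-narrowing work the comment asks about - without it, no state is any more worth visiting than any other.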

Comment author: Alicorn 20 March 2011 10:08:03PM 4 points [-]

Hilary Kornblith was my advisor in grad school. He's a cool dude.

Comment author: lukeprog 20 March 2011 10:13:03PM 1 point [-]

I'm jealous! As you probably know, he is perhaps the leading defender of naturalized epistemology today.

Comment author: Alicorn 20 March 2011 10:14:43PM 4 points [-]

Yup. I took a class on naturalized epistemology with him and got to listen to him talk about it in his nifty deep voice.

Comment author: cousin_it 21 March 2011 09:34:19AM *  7 points [-]

Discussions of priority are boring. If Quinean naturalism has insights relevant to LW, let's hear them!

Comment author: lukeprog 21 March 2011 09:41:26AM *  8 points [-]

What I'm saying is that Less Wrong shouldn't ignore mainstream philosophy.

What I demonstrated above is that, directly or indirectly, Less Wrong has already drawn heavily from mainstream philosophy. It would be odd to suggest that the progress in mainstream philosophy that Less Wrong has already made use of would suddenly stop, justifying a choice to ignore mainstream philosophy in the future.

As for naturalistic philosophy's insights relevant to LW, they are forthcoming. I'll be writing some more philosophical posts in the future.

And actually, my statistical prediction rules post came mostly from me reading a philosophy book (Epistemology and the Psychology of Human Judgment), not from reading psychology books.

Comment author: Eliezer_Yudkowsky 21 March 2011 09:59:59AM 26 points [-]

I'll await your next post, but in retrospect you should have started with the big concrete example of mainstream philosophy doing an LW-style dissolution-to-algorithm not already covered on LW, and then told us that the moral was that we shouldn't ignore mainstream philosophy.

I did the whole sequence on QM to make the final point that people shouldn't trust physicists to get elementary Bayesian problems right. I didn't just walk in and tell them that physicists were untrustworthy.

If you want to make a point about medicine, you start by showing people a Bayesian problem that doctors get wrong; you don't start by telling them that doctors are untrustworthy.

If you want me to believe that philosophy isn't a terribly sick field, devoted to arguing instead of facing real-world tests and admiring problems instead of solving them and moving on, whose poison a novice should avoid in favor of eating healthy fields like settled physics (not string theory) or mainstream AI (not AGI), you're probably better off starting with the specific example first. "I disagree with your decision not to cover terminal vs. instrumental in CEV" doesn't cover it, and neither does "Quineans agree the world is made of atoms". Show me this field's power!

Comment author: lukeprog 21 March 2011 06:10:42PM *  53 points [-]

Eliezer,

When I wrote the post I didn't know that what you meant by "reductionist-grade naturalistic cognitive philosophy" was only the very narrow thing of dissolving philosophical problems to cognitive algorithms. After all, most of the useful philosophy you've done on Less Wrong is not specifically related to that very particular thing... which again supports my point that mainstream philosophy has more to offer than dissolution-to-algorithm. (Unless you think most of your philosophical writing on Less Wrong is useless.)

Also, I don't disagree with your decision not to cover means and ends in CEV.

Anyway. Here are some useful contributions of mainstream philosophy:

  • Quine's naturalized epistemology. Epistemology is a branch of cognitive science: that's where recursive justification hits bottom, in the lens that sees its flaws.
  • Tarski on language and truth. One of Tarski's papers on truth was recently ranked the 4th most important philosophy paper of the 20th century in a survey of philosophers. Philosophers have developed Tarski's account considerably since then, of course.
  • Chalmers' formalization of Good's intelligence explosion argument. Good's 1965 paper was important, but it presented no systematic argument; only hand-waving. Chalmers breaks down Good's argument into parts and examines the plausibility of each part in turn, considers the plausibility of various defeaters and possible paths, and makes a more organized and compelling case for Good's intelligence explosion than anybody at SIAI has.
  • Dennett on belief in belief. Used regularly on Less Wrong.
  • Bratman on intention. Bratman's 1987 book on intention has been a major inspiration to AI researchers working on belief-desire-intention models of intelligent behavior. See, for example, pages 60-61 and 1041 of AIMA (3rd ed.).
  • Functionalism and multiple realizability. The philosophy of mind most natural to AI was introduced and developed by Putnam and Lewis in the 1960s, and more recently by Dennett.
  • Explaining the cognitive processes that generate our intuitions. Both Shafir (1998) and Talbot (2009) summarize and discuss as much as cognitive scientists know about the cognitive mechanisms that produce our intuitions, and use that data to explore which few intuitions might be trusted and which ones cannot - a conclusion that of course dissolves many philosophical problems generated from conflicts between intuitions. (This is the post I'm drafting, BTW.) Talbot describes the project of his philosophy dissertation for USC this way: "...where psychological research indicates that certain intuitions are likely to be inaccurate, or that whole categories of intuitions are not good evidence, this will overall benefit philosophy. This has the potential to resolve some problems due to conflicting intuitions, since some of the conflicting intuitions may be shown to be unreliable and not to be taken seriously; it also has the potential to free some domains of philosophy from the burden of having to conform to our intuitions, a burden that has been too heavy to bear in many cases..." Sound familiar?
  • Pearl on causality. You acknowledge the breakthrough. While you're right that this is mostly a case of an AI researcher coming in from the outside to solve philosophical problems, Pearl did indeed make use of the existing research in mainstream philosophy (and AI, and statistics) in his book on causality.
  • Drescher's Good and Real. You've praised this book as well, which is the result of Drescher's studies under Dan Dennett at Tufts. And the final chapter is a formal defense of something like Kant's categorical imperative.
  • Dennett's "intentional stance." A useful concept in many contexts, for example here.
  • Bostrom on anthropic reasoning. And global catastrophic risks. And Pascal's mugging. And the doomsday argument. And the simulation argument.
  • Ord on risks with low probabilities and high stakes. Here.
  • Deontic logic. The logic of actions that are permissible, forbidden, obligatory, etc. Not your approach to FAI, but will be useful in constraining the behavior of partially autonomous machines prior to superintelligence, for example in the world's first battlefield robots.
  • Reflective equilibrium. Reflective equilibrium is used in CEV. It was first articulated by Goodman (1965), then by Rawls (1971), and in more detail by Daniels (1996). See also the more computational discussion in Thagard (1988), ch. 7.
  • Experimental philosophy on the biases that infect our moral judgments. Experimental philosophers are now doing Kahneman & Tversky -ish work specific to biases that infect our moral judgments. Knobe, Nichols, Haidt, etc. See an overview in Experiments in Ethics.
  • Greene's work on moral judgment. Joshua Greene is a philosopher and neuroscientist at Harvard whose work using brain scanners and trolley problems (since 2001) is quite literally decoding the algorithms we use to arrive at moral judgments, and helping to dissolve the debate between deontologists and utilitarians (in his view, in favor of utilitarianism).
  • Dennett's Freedom Evolves. The entire book is devoted to explaining the evolutionary processes that produced the cognitive algorithms that produce the experience of free will and the actual kind of free will we do have.
  • Quinean naturalists showing intuitionist philosophers that they are full of shit. See for example, Schwitzgebel and Cushman demonstrating experimentally that moral philosophers have no special expertise in avoiding known biases. This is the kind of thing that brings people around to accepting those very basic starting points of Quinean naturalism as a first step toward doing useful work in philosophy.
  • Bishop & Trout on ameliorative psychology. Much of Less Wrong's writing is about how to use our awareness of cognitive biases to make better decisions and have a higher proportion of beliefs that are true. That is the exact subject of Bishop & Trout (2004), which they call "ameliorative psychology." The book reads like a long sequence of Less Wrong posts, and was the main source of my post on statistical prediction rules, which many people found valuable. And it came about two years before the first Eliezer post on Overcoming Bias. If you think that isn't useful stuff coming from mainstream philosophy, then you're saying a huge chunk of Less Wrong isn't useful.
  • Talbot on intuitionism about consciousness. Talbot (here) argues that intuitionist arguments about consciousness are illegitimate because of the cognitive process that produces them: "Recently, a number of philosophers have turned to folk intuitions about mental states for data about whether or not humans have qualia or phenomenal consciousness. [But] this is inappropriate. Folk judgments studied by these researchers are mostly likely generated by a certain cognitive system - System One - that will ignore qualia when making these judgments, even if qualia exist."
  • "The mechanism behind Gettier intuitions." This upcoming project of the Boulder philosophy department aims to unravel a central (misguided) topic of 20th century epistemology by examining the cognitive mechanisms that produce the debate. Dissolution to algorithm yet again. They have other similar projects ongoing, too.
  • Computational meta-ethics. I don't know if Lokhorst's paper in particular is useful to you, but I suspect that kind of thing will be, and Lokhorst's paper is only the beginning. Lokhorst is trying to implement a meta-ethical system computationally, and then actually testing what the results are.

Of course that's far from all there is, but it's a start.

...also, you occasionally stumble across some neato quotes, like Dennett saying "AI makes philosophy honest." :)

Note that useful insights come from unexpected places. Rawls was not a Quinean naturalist, but his concept of reflective equilibrium plays a central role in your plan for Friendly AI to save the world.

P.S. Predicate logic was removed from the original list for these reasons.

Comment author: [deleted] 21 March 2011 07:11:00PM 7 points [-]

It seems a shame to leave this list with several useful cites as a comment, where it is likely to be missed. Not sure what to suggest - maybe append it to the main article?

Comment author: lukeprog 21 March 2011 07:15:34PM 4 points [-]

I added a link to this list to the end of the original post.

Comment author: Eliezer_Yudkowsky 25 March 2011 07:15:59PM 21 points [-]

Quine's naturalized epistemology. Epistemology is a branch of cognitive science

Saying this may count as staking an exciting position in philosophy, already right there; but merely saying this doesn't shape my expectations about how people think, or tell me how to build an AI, or how to expect or do anything concrete that I couldn't do before, so from an LW perspective this isn't yet a move on the gameboard. At best it introduces a move on the gameboard.

Tarski on language and truth.

I know Tarski as a mathematician and have acknowledged my debt to him as a mathematician. Perhaps you can learn about him in philosophy, but that doesn't imply people should study philosophy if they will also run into Tarski by doing mathematics.

Chalmers' formalization of Good's intelligence explosion argument...

...was great for introducing mainstream academia to Good, but if you compare it to http://wiki.lesswrong.com/wiki/The_Hanson-Yudkowsky_AI-Foom_Debate then you'll see that most of the issues raised didn't fit into Chalmers's decomposition at all. Not suggesting that he should've done it differently in a first paper, but still, Chalmers's formalization doesn't yet represent most of the debates that have been done in this community. It's more an illustration of how far you have to simplify things down for the sake of getting published in the mainstream, than an argument that you ought to be learning this sort of thing from the mainstream.

Dennett on belief in belief.

Acknowledged and credited. Like Drescher, Dennett is one of the known exceptions.

Bratman on intention. Bratman's 1987 book on intention has been a major inspiration to AI researchers working on belief-desire-intention models of intelligent behavior...

Appears as a citation only in AIMA 2nd edition, described as a philosopher who approves of GOFAI. "Not all philosophers are critical of GOFAI, however; some are, in fact, ardent advocates and even practitioners... Michael Bratman has applied his "belief-desire-intention" model of human psychology (Bratman, 1987) to AI research on planning (Bratman, 1992)." This is the only mention in the 2nd edition. Perhaps by the time they wrote the third edition they read more Bratman and figured that he could be used to describe work they had already done? Not exactly a "major inspiration", if so...

Functionalism and multiple realizability.

This comes under the heading of "things that rather a lot of computer programmers, though not all of them, can see as immediately obvious even if philosophers argue it afterward". I really don't think that computer programmers would be at a loss to understand that different systems can implement the same algorithm if not for Putnam and Lewis.

Explaining the cognitive processes that generate our intuitions... Talbot describes the project of his philosophy dissertation for USC this way: "...where psychological research indicates that certain intuitions are likely to be inaccurate, or that whole categories of intuitions are not good evidence, this will overall benefit philosophy."...

Same comment as for Quine: This might introduce interesting work, but while saying just this may count as an exciting philosophical position, it's not a move on the LW gameboard until you get to specifics. Then it's not a very impressive move unless it involves doing nonobvious reductionism, not just "Bias X might make philosophers want to believe in position Y". You are not being held to a special standard as Luke here; a friend named Kip Werking once did some work arguing that we have lots of cognitive biases pushing us to believe in libertarian free will that I thought made a nice illustration of the difference between LW-style decomposition of a cognitive algorithm and treating biases as an argument in the war of surface intuitions.

Pearl on causality.

Mathematician and AI researcher. He may have mentioned the philosophical literature in his book. It's what academics do. He may even have read the philosophers before he worked out the answer for himself. He may even have found that reading philosophers getting it wrong helped spur him to think about the problem and deduce the right answer by contrast - I've done some of that over the course of my career, though more in the early phases than the later phases. Can you really describe Pearl's work as "building" on philosophy, when IIRC, most of the philosophers were claiming at this point that causality was a mere illusion of correlation? Has Pearl named a previous philosopher, who was not a mathematician, who Pearl thought was getting it right?

Drescher's Good and Real.

Previously named by me as good philosophy, as done by an AI researcher coming in from outside for some odd reason. Not exactly a good sign for philosophy when you think about it.

Dennett's "intentional stance."

For a change I actually did read about this before forming my own AI theories. I can't recall ever actually using it, though. It's for helping people who are confused in a way that I wasn't confused to begin with. Dennett is in any case a widely known and named exception.

Bostrom on anthropic reasoning. And global catastrophic risks. And Pascal's mugging. And the doomsday argument. And the simulation argument.

A friend and colleague who was part of the transhumanist community and a founder of the World Transhumanist Association long before he was the Director of the Oxford Future of Humanity Institute, and who's done a great deal to precisionize transhumanist ideas about global catastrophic risks and inform academia about them, as well as excellent original work on anthropic reasoning and the simulation argument. Bostrom is familiar with Less Wrong and has even tried to bring some of the work done here into mainstream academia, such as Pascal's Mugging, which was invented right here on Less Wrong by none other than yours truly - although of course, owing to the constraints of academia and their prior unfamiliarity with elementary probability theory and decision theory, Bostrom was unable to convey the most exciting part of Pascal's Mugging in his academic writeup, namely the idea that Solomonoff-induction-style reasoning will explode the size of remote possibilities much faster than their Kolmogorov complexity diminishes their probability.
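In symbols (my own illustrative sketch, not part of Bostrom's writeup):

```latex
% A Solomonoff-style prior gives a hypothesis $h$ of description length
% $K(h)$ a weight on the order of $2^{-K(h)}$, but the utilities a *short*
% hypothesis can name (e.g. $3\uparrow\uparrow\uparrow\uparrow 3$) grow far
% faster than $2^{K(h)}$, so terms of the expected-utility sum need not vanish:
E[U] \;=\; \sum_h 2^{-K(h)}\, U(h),
\qquad
U(h) \gg 2^{K(h)} \;\Rightarrow\; 2^{-K(h)}\, U(h) \gg 1 .
```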

Reading Bostrom is a triumph of the rule "Read the most famous transhumanists" not "Read the most famous philosophers".

The doomsday argument, which was not invented by Bostrom, is a rare case of genuinely interesting work done in mainstream philosophy - anthropic issues are genuinely not obvious, genuinely worth arguing about and philosophers have done genuinely interesting work on it. Similarly, although LW has gotten further, there has been genuinely interesting work in philosophy on the genuinely interesting problems of Newcomblike dilemmas. There are people in the field who can do good work on the rather rare occasions when there is something worth arguing about that is still classed as "philosophy" rather than as a separate science, although they cannot actually solve those problems (as very clearly illustrated by the Newcomblike case) and the field as a whole is not capable of distinguishing good work from bad work on even the genuinely interesting subjects.

Ord on risks with low probabilities and high stakes.

Argued it on Less Wrong before he wrote the mainstream paper. The LW discussion got further, IMO. (And AFAIK, since I don't know if there was any academic debate or if the paper just dropped into the void.)

Deontic logic

Is not useful for anything in real life / AI. This is instantly obvious to any sufficiently competent AI researcher. See e.g. http://norvig.com/design-patterns/img070.htm, a mention that turned up in passing back when I was doing my own search for prior work on Friendly AI.

...I'll stop there, but do want to note, even if it's out-of-order, that the work you glowingly cite on statistical prediction rules is familiar to me from having read the famous edited volume "Judgment Under Uncertainty: Heuristics and Biases" where it appears as a lovely chapter by Robyn Dawes on "The robust beauty of improper linear models", which quite stuck in my mind (citation from memory). You may have learned about this from philosophy, and I can see how you would credit that as a use of reading philosophy, but it's not work done in philosophy and, well, I didn't learn about it there so this particular citation feels a bit odd to me.
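(For readers who haven't met the improper-linear-models result: here is a toy version with entirely synthetic data. Give every standardized cue a weight of 1 and the composite still tracks the criterion nearly as well as the optimal weights would, and much better than leaning on the single best cue.)

```python
import random
import statistics

rng = random.Random(0)

# Synthetic "judgment" data (all numbers invented): the criterion is a
# weighted sum of three standardized cues plus noise.
n = 1000
xs = [[rng.gauss(0, 1) for _ in range(3)] for _ in range(n)]
true_w = [0.5, 0.4, 0.3]
y = [sum(w * x for w, x in zip(true_w, row)) + rng.gauss(0, 0.5) for row in xs]

def corr(a, b):
    """Population Pearson correlation."""
    ma, mb = statistics.mean(a), statistics.mean(b)
    cov = sum((p - ma) * (q - mb) for p, q in zip(a, b)) / len(a)
    return cov / (statistics.pstdev(a) * statistics.pstdev(b))

# Dawes's "improper" model: ignore the true weights, just add the cues up.
unit_weight_pred = [sum(row) for row in xs]
r_unit = corr(unit_weight_pred, y)

# Compare against relying on the single best cue.
r_single = corr([row[0] for row in xs], y)
```

With these made-up numbers the unit-weight composite correlates with the criterion around 0.8, versus roughly 0.58 for the best single cue, which is the "robust beauty" in miniature.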

Comment author: Jack 25 March 2011 08:06:50PM *  15 points [-]

when IIRC, most of the philosophers were claiming at this point that causality was a mere illusion of correlation?

That this isn't at all the case should be obvious even if the only thing you've read on the subject is Pearl's book. The entire counterfactual approach is due to Lewis and Stalnaker. Salmon's theory isn't about correlation either. Also, see James Woodward who has done very similar work to Pearl but from a philosophy department. Pearl cites all of them if I recall.

Comment author: Eliezer_Yudkowsky 25 March 2011 08:15:38PM 4 points [-]

Stalnaker's name sounds familiar from Pearl, so I'll take your word for this and concede the point.

Comment author: komponisto 25 March 2011 07:47:46PM *  1 point [-]

I know Tarski as a mathematician and have acknowledged my debt to him as a mathematician.

As I pointed out before, the same is true for me of Quine. I don't know if lukeprog means to include Mathematical Logic when he keeps saying not to read Quine, but that book was effectively my introduction to the subject, and I still hold it in high regard. It's an elegant system with some important innovations, and features a particularly nice treatment of Gödel's incompleteness theorem (one of his main objectives in writing the book). I don't know if it's the best book on mathematical logic there is (I doubt it), but it appeals to a certain kind of personality, and I would certainly recommend it to a young high-schooler over reading Principia Mathematica, for example.

Comment author: lukeprog 25 March 2011 07:34:30PM *  1 point [-]

Cool. Let me know when you've finished your comment here and I'll respond.

Comment author: Eliezer_Yudkowsky 25 March 2011 08:07:49PM 0 points [-]

Done.

Comment author: lukeprog 25 March 2011 08:40:23PM *  4 points [-]

Quine's naturalized epistemology: agreed.

Tarski: But I thought you said you were not only influenced by Tarski's mathematics but also his philosophical work on truth?

Chalmers' paper: Yeah, it's mostly useful as an overview. I should have clarified that I meant that Chalmers' paper makes a more organized and compelling case for Good's intelligence explosion than anybody at SIAI has in one place. Obviously, your work (and your debate with Robin) goes far beyond Chalmers' introductory paper, but it's scattered all over the place and takes a lot of reading to track down and understand.

And this would be the main reason to learn something from the mainstream: If it takes way less time than tracking down the same arguments and answers through hundreds of Less Wrong posts and other articles, and does a better job of pointing you to other discussions of the relevant ideas.

But we could have the best of both worlds if SIAI spent some time writing well-referenced survey articles on their work, in the professional style instead of telling people to read hundreds of pages of blog posts (that mostly lack references) in order to figure out what you're talking about.

Bratman: I don't know his influence first hand, either - it's just that I've seen his 1987 book mentioned in several books on AI and cognitive science.

Pearl: Jack beat me to the punch on this.

Talbot: I guess I'll have to read more about what you mean by dissolution to cognitive algorithm. I thought the point was that even if you can solve the problem, there's that lingering wonder about why people believe in free will, and once you explain why it is that humans believe in free will, not even a hint of the problem remains. The difference being that your dissolution of free will to cognitive algorithm didn't (as I recall) cite any of the relevant science, whereas Talbot's (and others') dissolutions to cognitive algorithms do cite the relevant science.

Is there somewhere where you explain the difference between what Talbot, and also Kip Werking, have done versus what you think is so special and important about LW-style philosophy?

As for the others: Yeah, we seem to agree that useful work does sometimes come from philosophy, but that it mostly doesn't, and people are better off reading statistics and AI and cognitive science, like I said. So I'm not sure there's anything left to argue.

The one major thing I'd like clarification on (if you can find the time) is the difference between what experimental philosophers are doing (or what Joshua Greene is doing) and the dissolution-to-algorithm that you consider so central to LW-style philosophy.

Comment author: Jack 25 March 2011 09:13:08PM *  9 points [-]

As for the others: Yeah, we seem to agree that useful work does sometimes come from philosophy, but that it mostly doesn't, and people are better off reading statistics and AI and cognitive science, like I said. So I'm not sure there's anything left to argue.

I'd like to emphasize, to no one in particular, that the evaluation that seems to be going on here is about whether or not reading these philosophers is useful for building a Friendly recursively self-improving artificial intelligence. While that's a good criterion for whether or not Eliezer should read them, failure to meet it doesn't render the philosopher's work valueless (really! it doesn't!). The question "is philosophy helpful for researching AI" is not the same as the question "is philosophy helpful for a rational person trying to better understand the world".

Comment author: timtyler 25 March 2011 07:33:15PM *  3 points [-]

Chalmers' formalization of Good's intelligence explosion argument. Good's 1965 paper was important, but it presented no systematic argument; only hand-waving. Chalmers breaks down Good's argument into parts and examines the plausibility of each part in turn, considers the plausibility of various defeaters and possible paths, and makes a more organized and compelling case for Good's intelligence explosion than anybody at SIAI has.

I thought Chalmers was a newbie to all this - and showed it quite a bit. However, a definite step forward from zombies. Next, see if Penrose or Searle can be recruited.

Comment author: Eliezer_Yudkowsky 21 March 2011 07:55:02PM 9 points [-]

When I wrote the post I didn't know that what you meant by "reductionist-grade naturalistic cognitive philosophy" was only the very narrow thing of dissolving philosophical problems to cognitive algorithms.

No, it's more than that, but only things of that level are useful philosophy. Other things are not philosophy or more like background intros.

Amy just arrived and I've got to start book-writing, but I'll take one example from this list, the first one, so that I'm not picking and choosing; later if I've got a moment I'll do some others, in the order listed.

  • Predicate logic.

Funny you should mention that.

There is this incredibly toxic view of predicate logic that I first encountered in Good Old-Fashioned AI. And then this entirely different, highly useful and precise view of the uses and bounds of logic that I encountered when I started studying mathematical logic and learned about things like model theory.

Now considering that philosophers of the sort I inveighed against in "against modal logic" seem to talk and think like the GOFAI people and not like the model-theoretic people, I'm guessing that the GOFAI people made the terrible, horrible, no good, very bad mistake of getting their views of logic from the descendants of Bertrand Russell who still called themselves "philosophers" instead of those descendants who considered themselves part of the thriving edifice of mathematics.

Anyway. If you and I agree that philosophy is an extremely sick field, that there is no standardized repository of the good stuff, that it would be a desperate and terrible mistake for anyone to start their life studying philosophy before they had learned a lot of cognitive science and math and AI algorithms and plain old material science as explained by non-philosophers, and that it's not worth my time to read through philosophy to pick out the good stuff even if there are a few small nuggets of goodness or competent people buried here and there, then I'm not sure we disagree on much - except this post sort of did seem to suggest that people ought to run out and read philosophy-qua-philosophy as written by professional philosophers, rather than this being a terrible mistake.

Will try to get to some of the other items, in order, later.

Comment author: lukeprog 14 May 2011 03:14:06AM 12 points [-]

You may enjoy the following exchange between two philosophers and one mathematician.

Bertrand Russell, speaking of Godel's incompleteness theorem, wrote:

It made me glad that I was no longer working at mathematical logic. If a given set of axioms leads to a contradiction, it is clear that at least one of the axioms must be false.

Wittgenstein dismissed the theorem as trickery:

Mathematics cannot be incomplete; any more than a sense can be incomplete. Whatever I can understand, I must completely understand.

Godel replied:

Russell evidently misinterprets my result; however, he does so in a very interesting manner... In contradistinction Wittgenstein... advances a completely trivial and uninteresting misinterpretation.

According to Gleick (in The Information), the only person who understood Godel's theorem when Godel first presented it was another mathematician, Neumann Janos, who moved to the USA and began presenting it wherever he went, by then calling himself John von Neumann.

The soundtrack for Godel's incompleteness theorem should be, I think, the last couple minutes of 'Ludus' from Tabula Rasa by Arvo Part.

Comment author: Wei_Dai 14 May 2011 08:22:07AM *  13 points [-]

I've been wondering why von Neumann didn't do much work in the foundations of mathematics. (It seems like something he should have been very interested in.) Your comment made me do some searching. It turns out:

John von Neumann was a vain and brilliant man, well used to putting his stamp on a mathematical subject by sheer force of intellect. He had devoted considerable effort to the problem of the consistency of arithmetic, and in his presentation at the Konigsberg symposium, had even come forward as an advocate for Hilbert's program. Seeing at once the profound implications of Godel's achievement, he had taken it one step further—proving the unprovability of consistency, only to find that Godel had anticipated him. That was enough. Although full of admiration for Godel—he'd even lectured on his work—von Neumann vowed never to have anything more to do with logic. He is said to have boasted that after Godel, he simply never read another paper on logic. Logic had humiliated him, and von Neumann was not used to being humiliated. Even so, the vow proved impossible to keep, for von Neumann's need for powerful computational machinery eventually forced him to return to logic.

ETA: Am I the only one who fantasizes about cloning a few dozen individuals from von Neumann's DNA, teaching them rationality, and setting them to work on FAI? There must be some Everett branches where that is being done, right?

Comment author: lukeprog 14 May 2011 08:35:19AM 2 points [-]

We'd need to inoculate the clones against vanity, it appears.

Interesting story. Thanks for sharing your findings.

Comment author: Oscar_Cunningham 21 March 2011 08:12:34PM 10 points [-]

Of course, since this is a community blog, we can have it both ways. Those of us interested in philosophy can go out and read (and/or write) lots of it, and we'll chuck the good stuff this way. No need for anyone to miss out.

Comment author: lukeprog 21 March 2011 09:15:49PM 5 points [-]

Exactly. Like I did with my statistical prediction rules post.

Comment author: lukeprog 21 March 2011 08:04:03PM *  6 points [-]

Anyway. If you and I agree...

Yeah, we don't disagree much on all those points.

I didn't say in my original post that people should run out and start reading mainstream philosophy. If that's what people got from it, then I'll add some clarifications to my original post.

Instead, I said that mainstream philosophy has some useful things to offer, and shouldn't be ignored. Which I think you agree with if you've benefited from the work of Bostrom and Dennett (including, via Drescher) and so on. But maybe you still disagree with it, for reasons that are forthcoming in your response to my other examples of mainstream philosophy contributions useful to Less Wrong.

But yeah, don't let me keep you from your book!

As for predicate logic, I'll have to take your word on that. I'll 'downgrade it' in my list above.

Comment author: TheOtherDave 21 March 2011 08:15:37PM 11 points [-]

If that's what people got from it, then I'll add some clarifications to my original.

FWIW, what I got from your original post was not "LW readers should all go out and start reading mainstream philosophy," but rather "LW is part of a mainstream philosophical lineage, whether its members want to acknowledge that or not."

Comment author: lukeprog 21 March 2011 08:22:15PM 2 points [-]

Thanks for sharing. That too. :)

Comment author: Eliezer_Yudkowsky 22 March 2011 12:08:11AM 1 point [-]

I'm part of Roger Bacon's lineage too, and not ashamed of it either, but time passes and things improve and then there's not much point in looking back.

Comment author: lukeprog 22 March 2011 12:21:57AM *  15 points [-]

Meh. Historical context can help put things in perspective. You've done that plenty of times in your own posts on Less Wrong. Again, you seem to be holding my post to a different standard of usefulness than your own posts. But like I said, I don't recommend anybody actually read Quine.

Comment author: [deleted] 02 April 2015 05:54:59PM 1 point [-]

Oftentimes you simply can't understand what some theorem or experiment was for without at least knowing about its historical context. Take something as basic as calculus: if you've never heard the slightest thing about classical mechanics, what possible meaning could a derivative, integral, or differential equation have to you?

Comment author: TheAncientGeek 02 April 2015 07:25:10PM 0 points [-]

Does human nature improve, too?

Comment author: Perplexed 21 March 2011 11:24:09PM *  8 points [-]

There is this incredibly toxic view of predicate logic that I first encountered in Good Old-Fashioned AI.

I'd be curious to know what that "toxic view" was. My GOFAI academic advisor back in grad school swore by predicate logic. The only argument against it that I ever heard was that proving or disproving a given statement is undecidable (in theory) and frequently intractable (in practice).

And then this entirely different, highly useful and precise view of the uses and bounds of logic that I encountered when I started studying mathematical logic and learned about things like model theory.

Model theory as opposed to proof theory? What is it you think is great about model theory?

Now considering that philosophers of the sort I inveighed against in "against modal logic" seem to talk and think like the GOFAI people and not like the model-theoretic people, I'm guessing that the GOFAI people made the terrible, horrible, no good, very bad mistake of getting their views of logic from the descendants of Bertrand Russell who still called themselves "philosophers" instead of those descendants who considered themselves part of the thriving edifice of mathematics.

I have no idea what you are saying here. That "Against Modal Logic" posting, and some of your commentary following it strike me as one of your most bizarre and incomprehensible pieces of writing at OB. Looking at the karma and comments suggests that I am not alone in this assessment.

Somehow, you have picked up a very strange notion of what modal logic is all about. The whole field of hardware and software verification is based on modal logics. Modal logics largely solve the undecidability and intractability problems that bedeviled GOFAI approaches to these problems using predicate logic. Temporal logics are modal. Epistemic and game-theoretic logics are modal.

Or maybe it is just the philosophical approaches to modal logic that offended you. The classical modal logic of necessity and possibility. The puzzles over the Barcan formulas when you try to combine modality and quantification. Or maybe something bizarre involving zombies or Goedel/Anselm ontological proofs.

Whatever it was that poisoned your mind against modal logic, I hope it isn't contagious. Modal logic is something that everyone should be exposed to, if they are exposed to logic at all. A classic introductory text: Robert Goldblatt: Logics of Time and Computation (pdf) is now available free online. I just got the current standard text from the library. It - Blackburn et al.: Modal Logic (textbook) - is also very good. And the standard reference work - Blackburn et al.: Handbook of Modal Logic - is outstanding (and available for less than $150 as Borders continues to go out of business :)
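Perplexed's decidability point can be made concrete: a propositional modal formula is evaluated against a finite Kripke model (a graph of "worlds" with an accessibility relation), which is why satisfiability for many modal logics stays decidable where full predicate logic is not. A minimal sketch (my illustration, not from the thread; the tuple encoding of formulas is invented for the example):

```python
def holds(world, formula, access, valuation):
    """Evaluate a modal formula at a world of a finite Kripke model.

    Formula grammar (nested tuples):
      ('atom', p) | ('not', f) | ('and', f, g) | ('box', f) | ('dia', f)
    """
    op = formula[0]
    if op == 'atom':
        return formula[1] in valuation[world]
    if op == 'not':
        return not holds(world, formula[1], access, valuation)
    if op == 'and':
        return (holds(world, formula[1], access, valuation)
                and holds(world, formula[2], access, valuation))
    if op == 'box':  # "necessarily": true in every accessible world
        return all(holds(v, formula[1], access, valuation)
                   for v in access.get(world, ()))
    if op == 'dia':  # "possibly": true in some accessible world
        return any(holds(v, formula[1], access, valuation)
                   for v in access.get(world, ()))
    raise ValueError(op)

# Two worlds: w1 sees w2; the atom p holds only at w2.
access = {'w1': ['w2'], 'w2': []}
valuation = {'w1': set(), 'w2': {'p'}}

print(holds('w1', ('dia', ('atom', 'p')), access, valuation))  # True
print(holds('w1', ('box', ('atom', 'p')), access, valuation))  # True (w1's only successor has p)
print(holds('w2', ('dia', ('atom', 'p')), access, valuation))  # False (w2 sees no worlds)
```

Because evaluation only ever inspects the finite graph, model checking here is trivially terminating; that finite-model flavor is what temporal-logic verification tools exploit.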

Comment author: lukeprog 21 March 2011 11:35:45PM 7 points [-]

Reading Plantinga could poison almost anybody's opinion of modal logic. :)

Comment author: Perplexed 21 March 2011 11:50:37PM 3 points [-]

That is entirely possible. A five star review at the Amazon link you provided calls this "The classic work on the metaphysics of modality". Another review there says:

Plantinga's Nature of Necessity is a philosophical masterpiece. Although there are a number of good books in analytic philosophy dealing with modality (the concepts of necessity and possibility), this one is of sufficient clarity and breadth that even non-philosophers will benefit from it. Modal logic may seem like a fairly arcane subject to outsiders, but this book exhibits both its intrinsic interest and its general importance.

Yet among the literally thousands of references in the three books I linked, Plantinga is not even mentioned. A fact which pretty much demonstrates that modal logic has left mainstream philosophy behind. Modal logic (in the sense I am promoting) is a branch of logic, not a branch of metaphysics.

Comment author: PhilGoetz 02 April 2011 03:04:02PM 3 points [-]

There is this incredibly toxic view of predicate logic that I first encountered in Good Old-Fashioned AI. And then this entirely different, highly useful and precise view of the uses and bounds of logic that I encountered when I started studying mathematical logic and learned about things like model theory.

I'd very much like to see a post explaining that.

Comment author: lukeprog 22 March 2011 01:32:08PM *  3 points [-]

it's more than that, but only things of that level are useful philosophy. Other things are not philosophy or more like background intros.

I'm not sure what "of that level" (of dissolving-to-algorithm) means, but I think I've demonstrated that quite a lot of useful stuff comes from mainstream philosophy, and indeed that a lot of mainstream philosophy is already being used by yourself and Less Wrong.

Comment author: DuncanS 22 March 2011 12:26:36AM *  5 points [-]

I believe I understand the warning here. The whole field of philosophy reminds me of the introduction to one of the first books on computer system development - The Mythical Man-Month.

"No scene from prehistory is quite so vivid as that of the mortal struggles of great beasts in the tar pits. In the mind's eye one sees dinosaurs, mammoths, and saber-toothed tigers struggling against the grip of the tar. The fiercer the struggle, the more entangling the tar, and no beast is so strong or so skillful but that he ultimately sinks.

Large-system programming has over the past decade been such a tar pit, and many great and powerful beasts have thrashed violently in it. Most have emerged with running systems—few have met goals, schedules, and budgets. Large and small, massive or wiry, team after team has become entangled in the tar. No one thing seems to cause the difficulty—any particular paw can be pulled away. But the accumulation of simultaneous and interacting factors brings slower and slower motion. Everyone seems to have been surprised by the stickiness of the problem, and it is hard to discern the nature of it. But we must try to understand it if we are to solve it."

The tar pit, as the book goes on to describe, is information complexity, and far too many philosophers seem content to jump right into the middle of that morass, convinced they will be able to smash their way out. The problem is not the strength of their reason, but the lack of a solid foothold - everything is sticky and ill-defined, there is nothing solid to stand on. The result is much thrashing, but surprisingly little progress.

The key to progress, for nearly everyone, is to stay where you know solid ground is. Don't jump in the tar pit unless you absolutely have no other choice. Logic is of very little help when you have no clear foundation to rest it on.

Comment author: lukeprog 22 March 2011 12:55:25AM 1 point [-]

Yup! Most of analytic philosophy's foundation has been intuition, and, well... thar's yer problem right thar!

Comment author: FiftyTwo 22 March 2011 04:13:02AM 2 points [-]

There has been some recent work in tackling the dependence on intuitions. The Experimental Philosophy (X-Phi) movement has been doing some very interesting stuff examining the role of intuition in philosophy, what intuitions are and to what extent they are useful.

One of the landmark experiments was a set of surveys showing cross-cultural variation in responses to certain philosophical thought experiments (for example, about when someone is acting intentionally); e.g. Weinberg et al. (2001). Which obviously presents a problem for any philosophical argument that uses such intuitions as premises.

The next stage is explaining these variations, and showing how acknowledging these issues lets you remove biases without going too far into skepticism to be useful. To caricature the problem: if I can't trust certain of my intuitions, I shouldn't trust them in general. But then how can I trust even very basic foundations (such as that a statement cannot be simultaneously true and false), and from there build up to any argument?

This area seems particularly relevant to this discussion, as there has been definite progress in the very recent past, in a manner very consistent with rationalist techniques and goals.

[This is my first LW post, so apologies for any lack of clarity or deviation from accepted practice]

Comment author: Mitchell_Porter 21 March 2011 12:21:04PM 17 points [-]

I did the whole sequence on QM to make the final point that people shouldn't trust physicists to get elementary Bayesian problems right.

Unfortunately for your argument in that sequence, very few actual physicists see the interpretation of quantum mechanics as a choice between "wavefunctions are real, and they collapse" and "wavefunctions are real, and they don't". I think life set you up for that choice because you got some of your early ideas about QM from Penrose, who does advocate a form of objective collapse theory. But the standard interpretation is that the wavefunction is not the objective state of the system, it is a tabulation of dispositional properties (that is philosophical terminology and would be unfamiliar to physicists, but it does express what the Copenhagen interpretation is about).

I might object to a lot of what physicists say about the meaning of quantum mechanics - probably the smartest ones are the informed realist agnostics like Gerard 't Hooft, who know that an observer-independent objectivity ought to be restored but who also know just how hard that will be to achieve. But the interpretation of quantum mechanics is not an "elementary Bayesian problem", nor is it an elementary problem of any sort. Given how deep the quantumness of the world goes, and the deep logical interconnectedness of things in physics, the correct explanation is probably one of the last fundamental facts about physics that we will figure out.

Comment author: DuncanS 22 March 2011 12:51:33AM 6 points [-]

Unfortunately this is a typical example of the kind of thing that goes wrong in philosophy.

Our actual knowledge in this area is actually encapsulated by the equations of quantum mechanics. This is the bit we can test, and this is the bit we can reason about correctly, because we know what the rules are.

We then go on to ask what the real meaning of quantum mechanics is. Well, perhaps we should remind ourselves that what we actually know is in the equations of quantum mechanics, and in the tests we've made of them. Anything else we might go on to say might very well not be knowledge at all.

So in interpreting quantum mechanics, we tend to swap a language we can work with (maths) for another language which is more difficult (English). OK - there are some advantages in that we might achieve more of an intuitive feel by doing that, but it's still a translation exercise.

Many worlds versus collapse? Putting it pointedly, the equations themselves don't distinguish between a collapse and a superposition of correlated states. Why do I think that my 'interpretation' of quantum mechanics should do something else? But in fact I wouldn't say either one is 'correct'. They are both translations into English / common-sense-ese of something that's actually best understood in its native mathematics.
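The claim that the equations don't distinguish a collapse from a superposition of correlated states can be illustrated numerically. In this sketch (my illustration, not from the thread), a qubit in (|0⟩+|1⟩)/√2 is entangled with a one-qubit "detector" via a CNOT; tracing out the detector leaves the qubit's reduced density matrix equal to the 50/50 mixture a collapse postulate would assign, so no measurement on the qubit alone can tell the two stories apart:

```python
import math

amp = 1 / math.sqrt(2)

# Joint state after the CNOT "measurement": amp*|00> + amp*|11>,
# indexed as psi[qubit][detector] (real amplitudes suffice here).
psi = [[amp, 0.0],
       [0.0, amp]]

# Reduced density matrix of the qubit, tracing out the detector:
# rho[i][j] = sum_k psi[i][k] * psi[j][k]
rho = [[sum(psi[i][k] * psi[j][k] for k in range(2)) for j in range(2)]
       for i in range(2)]

print(rho)  # approximately [[0.5, 0.0], [0.0, 0.5]]
```

The off-diagonal terms vanish, so the entangled "no collapse" description and the collapsed statistical mixture make identical predictions for the qubit - which is the sense in which the mathematics underdetermines the English-language interpretation.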

Translation is good - it's better than giving up and just "shutting up and calculating". But the native truth is in the mathematics, not the English translation.

Comment author: loqi 26 March 2011 07:29:40AM 2 points [-]

In other words, the Born probabilities are just numbers in the end. Their particular correlation with our anticipated experience is a linguistic artifact arising from a necessarily imperfect translation into English. Asking why we experience certain outcomes more frequently than others is good, but the answer is a lower-status kind of truth - the native truth is in the mathematics.

Comment author: lukeprog 21 March 2011 04:54:48PM 3 points [-]

you should have started with the big concrete example of mainstream philosophy doing an LW-style dissolution-to-algorithm not already covered on LW

But I've already pointed out that you do a lot more philosophy than just dissolution-to-algorithm. Dissolution to algorithm is not the only valuable thing to do in philosophy. Not all philosophical problems can be dissolved that way. Some philosophical problems turn out to be genuine problems that need an answer.

My claim that we shouldn't ignore philosophy is already supported by the points I made about how vast swaths of the useful content on Less Wrong have been part of mainstream philosophy for decades.

I'm not going to argue that philosophy isn't a terribly sick field, because it is a terribly sick field. Instead I'm arguing that you have already taken a great deal of value (directly or indirectly) from mainstream philosophy, and I gave more interesting examples than "metaphysical libertarianism is false" and "people are made of atoms."

Comment author: r90 21 March 2011 11:53:37AM 0 points [-]

Well, show me the power of LW then.

If Quinean philosophy is just LW rationality but earlier, that should settle it.

I find it likely that if someone were to trace the origins of LW rationality one would end up with Quine or someone similar. E.g. perhaps you read an essay by a Quinean philosopher when you were younger.

Comment author: [deleted] 21 March 2011 12:26:51PM 11 points [-]

I doubt it. In fact I'm pretty certain that Quine had nothing to do with 'the origins of LW rationality'. I came to many (though by no means all) of the same conclusions as Eliezer independently, some of them in primary school, and never heard of Quine until my early 20s. What I had read - and what it's apparent Eliezer had read - was an enormous pile of hard science fiction, Feynman's memoirs, every pop-science book and issue of New Scientist I could get my hands on and, later, Feynman's Lectures on Physics. If you start out with a logical frame of mind, and fill that mind up with that kind of stuff, then the answers to certain questions come out as just "that's obvious!" or "that's a stupid question!" Enough of them did to me that I'm pretty certain that Eliezer also came to those conclusions (and the others he's come to and written about) independently.

Comment author: [deleted] 21 March 2011 04:42:03PM 13 points [-]

Timing argues otherwise. We don't see Quine-style naturalists before Quine; we see plenty after Quine.

Eliezer doesn't recognize and acknowledge the influence? He probably wouldn't! People to a very large extent don't recognize their influences. To give just a trivial example, I have often said something to someone, only to find them weeks later repeating back to me the very same thing, as if they had thought of it. To give another example, pick some random words from your vocabulary - words like "chimpanzee", "enough", "unlikely". Which individual person taught you each of these words (probably by example), or which set of people? Do you remember? I don't. I really have no idea where I first picked up any bit of my language, with occasional exceptions.

For the most part we don't remember where exactly it was that we picked up this or that idea.

Of course, if Eliezer says he never read Quine, I don't doubt that he never read Quine. But that doesn't mean that he wasn't influenced by Quine. Quine influenced a lot of people, who influenced a lot of other people, who influenced still more people, some of whom could very easily have influenced Eliezer without Eliezer having the slightest notion that the influence originated with Quine.

It's hard to trace influence. What's not so hard is to observe timing. Quine comes first - by decades.

Comment author: MichaelVassar 22 March 2011 07:52:40PM 5 points [-]

Eliezer knows Bostrom pretty well and Bostrom is influenced by Quine, but I simply doubt the claim about no Quine style naturalists before Quine. Hard to cite non-citations though, so I can go on not believing you, but can't really say much to support it.

Comment author: [deleted] 22 March 2011 08:38:32PM 3 points [-]

Well, my own knowledge is spotty, and I have found that philosophy changes gradually, so that immediately before Quine I would expect you to find philosophers who in many ways anticipate a significant fraction of what Quine says. That said, I think that Quine genuinely originated much that was important. For example I think that his essay Two Dogmas of Empiricism contained a genuinely novel argument, and wasn't merely a repeat of something someone had written before.

But let's suppose, for the sake of argument, that Quine was not original at all, but was a student of Spline, and Spline was the actual originator of everything associated with Quine. I think that the essential point that Eliezer probably is the beneficiary of influence and is standing on the shoulders of giants is preserved, and the surrounding points are also preserved, only they are not attached specifically to Quine. I don't think Quine specifically is that important to what lukeprog was saying. He was talking about a certain philosophical tradition which does not go back forever.

Comment author: PhilGoetz 30 March 2011 03:53:56AM *  3 points [-]

(EDIT: Quine was not Rapaport's advisor; Hector-Neri Castaneda was.) William Rapaport, together with Stu Shapiro, applied Quine's ideas on semantics and logic to knowledge representation and reasoning for artificial intelligence. Stu Shapiro edited the Encyclopedia of Artificial Intelligence, which may be the best survey ever made of symbolic artificial general intelligence. Bill and Stu referenced Quine in many of their papers, which have been widely read in artificial intelligence since the early 1980s.

There are many concepts from Stu and Bill's representational principles that I find useful for dissolving philosophical problems. These include the concepts of intensional vs. extensional representation, deictic representations, belief spaces, and the unique variable binding rule. But I don't know if any of these ideas originate with Quine, because I haven't studied Quine. Bill and Stu also often cited Meinong and Carnap; I think many of Bill's representational ideas came from Meinong.

A quick google of Quine shows that a paper that I'm currently making revisions on is essentially a disproof of Quine's "indeterminacy of translation".

Comment author: Davorak 22 March 2011 09:02:59AM 2 points [-]

Eliezer doesn't recognize and acknowledge the influence? He probably wouldn't! People to a very large extent don't recognize their influences.

Applying the above to Quine would seem to at least weakly contradict:

Timing argues otherwise. We don't see Quine-style naturalists before Quine; we see plenty after Quine.

You seem to be singling out Quine as unique rather than just a link in a chain, unlike Eliezer and people who do not recognize their influences. This seems unlikely to me. Is this what you meant to communicate?

Comment author: [deleted] 22 March 2011 09:41:48AM *  2 points [-]

I don't assume Quine to be any different from anyone else in recognizing his influences.

It is because I have no particular confidence in anyone recognizing their own influences that I turn to timing to help me answer the question of independent creation.

1) If a person is the first person to give public expression to an idea, then the chance is relatively high that he is the originator of the idea. It's not completely certain, but it's relatively high.

2) In contrast, if a person is not the first person to give public expression to an idea but is, say, the 437th person to do so, the first having done so fifty years before, then chances are relatively high that he picked up the idea from somewhere and didn't remember picking it up. The fact that nobody expressed the idea before fifty years earlier suggests that the idea is pretty hard to come up with independently, because had it been easy, people would have been coming up with it all through history.

3) Finally, if a person is not the first person to give public expression to an idea but people have been giving public expression to the idea for as long as we have records, then the chance is relatively high once again that he independently rediscovered the idea, since it seems to be the sort of idea that is relatively easy to rediscover independently.

Comment author: TomM 23 March 2011 01:20:19AM *  2 points [-]

The fact that nobody expressed the idea before fifty years earlier suggests that the idea is pretty hard to come up with independently, because had it been easy, people would have been coming up with it all through history.

This can be true, but it is also possible that an idea may be hard to independently develop because the intellectual foundations have not yet been laid.

Ideas build on existing understandings, and once the groundwork has been done there may be a sudden eruption of independent-but-similar new ideas built on those foundations. They were only hard to come up with until that time.

Comment author: [deleted] 23 March 2011 01:38:38AM *  2 points [-]

This can be true, but it is also possible that an idea may be hard to independently develop because the intellectual foundations have not yet been laid.

Well, yes, but that's essentially my point. What you've done is pointed out that the foundation might lie slightly before Quine. Indeed it might. But I don't think this changes the essential idea. See here for discussion of this point.

Comment author: thomblake 21 March 2011 04:07:45PM 1 point [-]

devoted to arguing

Even the best philosophy is this. Dan Dennett is devoted to arguing.

Of course, by Beisutsukai standards, philosophy is almost as good as physics. Both are far too slow.

Comment author: Vladimir_Nesov 21 March 2011 12:42:29PM *  3 points [-]

It would be odd to suggest that the progress in mainstream philosophy that Less Wrong has already made use of would suddenly stop, justifying a choice to ignore mainstream philosophy in the future.

Given that your audience at least in some sense disagrees, you'd do well to use a more powerful argument than "it would be odd" (it would be a fine argument if you expected the audience's intuitions to align with the statement, but it's apparently not the case), especially given that your position suggests how to construct one: find an insight generated by mainstream philosophy that would be considered new and useful on LW (which would be most effective if presented/summarized in LW language), and describe the process that allowed you to find it in the literature.

On a separate note, I think finding a place for LW rationality in academic philosophy might be a good thing, but this step should be distinguished from the connotation that brings about usefulness of (closely located according to this placement) academic philosophy.

So, I agree denotationally with your post (along the lines of what you listed in this comment), but still disagree connotationally with the implication that standard philosophy is of much use (pending arguments that convince me otherwise, the disagreement itself is not that strong). I disagree strongly about the way in which this connotation feels to argue its case through this post, not presenting arguments that under its own assumptions should be available. I understand that you were probably unaware of this interpretation of your post (i.e. arguing for mainstream philosophy being useful, as opposed to laying out some groundwork in preparation for such argument), or consider it incorrect, but I would argue that you should've anticipated it and taken into account.

(I expect if you add a note at the beginning of the post to the effect that the point of this particular post is to locate LW philosophy in mainstream philosophy, perhaps to point out priority for some of the ideas, and edit the rest with that in mind, the connotational impact would somewhat dissipate, without changing the actual message. But given the discussion that has already taken place, it might be not worth doing.)

Comment author: lukeprog 21 March 2011 06:15:13PM 3 points [-]

No, I didn't take the time to make an argument.

But I am curious to discuss this with someone who doesn't find it odd that mainstream philosophy could make useful contributions up until a certain point and then suddenly stop. That's far from impossible, but I'd be curious to know what you think caused the stop in useful progress. And when did that supposedly happen? In the 1960s, after philosophy's predicate logic and Tarskian truth-conditional theories of language were mature? In the 1980s? Around 2000?

Comment author: Randaly 21 March 2011 06:30:42PM *  4 points [-]

The inability of philosophers to settle on a position on an issue and move on. It's very difficult to make progress (i.e. additional useful contributions) if your job depends not on moving forward and generating new insights, but rather on going back and forth over old arguments. People like, e.g., Yudkowsky, whose job allows/requires him to devote almost all of his time to new research, would be much more productive - possibly, depending on the philosopher and non-philosopher in question, so much more productive that going back over philosophical arguments and positions isn't very useful.

The time would depend on the field in question, of course; I'm no expert, but from an outsider's perspective I feel like, e.g. linguistics and logic have had much more progress in recent decades than, e.g. philosophical consciousness studies or epistemology. (Again, no expert.) However, again, my view is less that useful philosophical contributions have stopped, and more that they've slowed to a crawl.

Comment author: lukeprog 21 March 2011 06:46:39PM 7 points [-]

This is indeed why most philosophy is useless. But I've asserted that most philosophy is useless for a long time. This wouldn't explain why philosophy would nevertheless make useful progress up until the 60s or 80s or 2000s and then suddenly stop. That suggestion remains to be explained.

Comment author: Randaly 21 March 2011 11:12:00PM 2 points [-]

(My apologies; I didn't fully understand what you were asking for.)

First, it doesn't claim that philosophy makes zero progress, just that science/AI research/etc. makes more. There were still broad swathes of knowledge (e.g. linguistics and psychology) that split off relatively late from philosophy, and in which philosophers were still making significant progress right up to the point where they became sciences.

Second, philosophy has either been motivated by or freeriding off of science and math (e.g., to use your example, Frege's development of predicate logic was motivated by his desire to place math on a more secure foundation). But the main examples (that are generally cited elsewhere, at least) of modern integration or intercourse between philosophy and science/math/AI (e.g. Dennett, Drescher, Pearl, etc.) have already been considered, so it's reasonable to say that mainstream philosophy probably doesn't have very much more to offer, let alone a "centralized repository of reductionist-grade naturalistic cognitive philosophy" of the sort Yudkowsky et al. are looking for.

Third, the low-hanging fruit would have been taken first; because philosophy doesn't settle points and move on to entirely new search spaces, it would get increasingly difficult to find new, unexplored ideas. While they could technically have moved on to explore new ideas anyway, it's more difficult than sticking to established debates, feels awkward, and often leads people to start studying things not considered part of philosophy (e.g. Noam Chomsky or, to an extent, Alonzo Church). Therefore, innovation/research would slow down as time went on. (And where philosophers have been willing to go out ahead and do completely original thinking, even where they're not very influenced by science, LW has seemed to integrate their thinking; e.g. Parfit.)

(Btw, I don't think anybody is claiming that all progress in philosophy had stopped; indeed, I explicitly stated that I thought that it hadn't. I've already given four examples above of philosophers doing innovative work useful for LW.)

Comment author: lukeprog 21 March 2011 11:17:30PM 3 points [-]

Yeah, I'm not sure we disagree on much. As you say, Less Wrong has already made use of some of the best of mainstream philosophy, though I think there's still more to be gleaned.

Comment author: Vladimir_Nesov 21 March 2011 06:48:34PM *  2 points [-]

That's far from impossible, but I'd be curious to know what you think caused the stop in useful progress. And when did that supposedly happen?

Just now. As of today, I don't expect to find useful stuff that I don't already know in mainstream philosophy already written, commensurate with the effort necessary to dig it up (this situation could be improved by reducing the necessary effort, if there is indeed something in there to find). The marginal value of learning more existing math or cognitive science or machine learning for answering the same (philosophical) questions is greater. But future philosophy will undoubtedly bring new good insights, in time, absent defeaters.

Comment author: lukeprog 21 March 2011 06:52:02PM *  3 points [-]

So maybe your argument is not that mainstream philosophy has nothing useful to offer but instead just that it would take you more effort to dig it up than it's worth? If so, I find that plausible. Like I said, I don't think Eliezer should spend his time digging through mainstream philosophy. Digging through math books and AI books will be much more rewarding. I don't know what your fields of expertise are, but I suspect digging through mainstream philosophy would not be the best use of your time, either.

Comment author: Vladimir_Nesov 21 March 2011 07:13:54PM *  1 point [-]

So maybe your argument is not that mainstream philosophy has nothing useful to offer but instead just that it would take you more effort to dig it up than it's worth?

I don't believe that for the purposes of development of human rationality or FAI theory this should be on anyone's worth-doing list for some time yet, before we can afford this kind of specialization to go after low-probability perks.

I expect that there is no existing work coming from philosophy useful-in-itself to an extent similar to Drescher's Good and Real (and Drescher is/was an AI researcher), although it's possible and it would be easy to make such work known to the community once it's discovered. People on the lookout for these things could be useful.

I expect that reading a lot of related philosophy with a prepared mind (so that you don't catch an anti-epistemic cold or death) would refine one's understanding of many philosophical questions, but mostly not in the form of modular communicable insights, and not to a great degree (compared to background training from spending the same time studying math/AI, that is ways of thinking you learn apart from the subject matter). This limits the extent to which people specializing in studying potentially relevant philosophy can contribute.

Comment author: Eliezer_Yudkowsky 20 March 2011 10:28:21PM 7 points [-]

I'm highly skeptical. I suspect that you may have failed to distinguish between sensory empiricism, which is a large standard movement, and the kind of thinking embodied in How An Algorithm Feels From the Inside which I've never seen anywhere else outside of Gary Drescher (and rumors that it's in Dennett books I haven't read).

Simple litmus test: What is the Quinean position on free will?

"It's nonsense!" = what I think standard "naturalistic" philosophy says

"If the brain uses the following specific AI-ish algorithms without conscious awareness of it, the corresponding mental ontology would appear from the inside to generate the following intuitions and apparent impossibilities about 'free will'..." = Less Wrong / Yudkowskian

Comment author: lukeprog 20 March 2011 10:46:54PM *  23 points [-]

Eliezer,

I'm not trying to say that you haven't made genuine contributions. Making genuine contributions along the Quinean path is what I mean when I say your work is part of that movement. And certainly, you speak a different language - the language of algorithms and AI rather than that of analytic philosophy. (Though there are quite a few who are doing philosophy in the language of AI, too: Judea Pearl is a shining example.)

'How an algorithm feels from the inside' is an important insight - an important way of seeing things. But your factual claims about free will are not radical. You agree with all naturalists that we do not have libertarian free will. We have no power to cause effects in the world without ourselves being fully caused, because we are fully part of nature. And you agree with naturalists that we are, nonetheless, able to deliberate about our actions. And that deliberation can, of course, affect the action we eventually choose. Our beliefs and desires affect our decisions, too.

Your differences with Quine look, to me at least, more like the differences that Quinean naturalists have with each other, rather than the differences that Quinean naturalists have with intuitionists and theists and postmodernists and phenomenologists, or even non-Quinean "naturalists" like Frank Jackson and David Chalmers.

Comment author: Eliezer_Yudkowsky 20 March 2011 10:52:15PM 10 points [-]

Luke,

From my perspective, the idea that we do not have libertarian free will is too obvious to be interesting. If you want to claim that places me in a particular philosophical camp, fine, but that doesn't mean they do the same sort of cognitive labor I do when I'm doing philosophy. I knew there wasn't libertarian free will the instant I first considered the problem, at I think maybe age fourteen or thereabouts; if that made me a master philosopher, great, but to me it seems like the distance from there to being able to resolve the algorithms of the brain into their component parts was the interesting part of the journey.

(And Judea Pearl I have quite well acknowledged as an explicit shoulder to stand upon, but so far as I know he's another case of an AI researcher coming in from outside and solving a problem where philosophers just spun their wheels because they didn't think in algorithms.)

Comment author: lukeprog 20 March 2011 10:59:02PM *  17 points [-]

I did not put you in the Quinean camp merely because of your agreement about libertarian free will. I listed about a dozen close comparisons on matters that are highly controversial in mainstream philosophy. And I placed special emphasis on your eerily echo-ish defense of Quine's naturalized epistemology, which is central to both your philosophy and his.

I agree with you about Judea Pearl coming from AI to solve problems on which philosophers had been mostly stalled for centuries. Like Dennett says, AI researchers are doing philosophy - and really good philosophy - without really knowing it. Except for Pearl, actually. He does know he's doing philosophy, as becomes apparent in his book on causality, for example, where he is regularly citing the mainstream philosophical literature on the subject (alongside statistics and AI and so on).

Comment author: Eliezer_Yudkowsky 20 March 2011 11:04:11PM 2 points [-]

Look, if someone came to me and said, "I'm great at LW-style philosophy, and the proof of this is, I can argue there's no libertarian free will" I would reply "You have not yet done any difficult or worthwhile cognitive work." It's like saying you don't believe in astrology. Well, great, and yes there's lots of people who disagree with you about that, but there's a difference between doing grade school arithmetic and doing calculus, and "There is no libertarian free will" is grade school arithmetic. It doesn't interest me that this philosophical school agrees with me about that. It's too simple and basic, and part of what I object to in philosophy is that they are still arguing about problems like this instead of moving onto real questions.

Comment author: lukeprog 20 March 2011 11:06:45PM 33 points [-]

Eliezer,

I don't get it. Your comment here doesn't respond to anything I said in my previous comment. The first sentence of my previous comment is: "I did not put you in the Quinean camp merely because of your agreement about libertarian free will."

Comment author: gjm 20 March 2011 11:57:53PM 21 points [-]

I think Eliezer is suggesting that all the things you've mentioned that distinguish Quinean naturalists from other philosophers are similarly basic, and that "LW-style philosophy" takes (what turns out to be) Quinean naturalism as a starting point and then goes on to do things that no one working in mainstream philosophy has thought of.

In other words, that the problem with mainstream philosophy isn't that it's all wrong, but that much of it is wrong and that the part that isn't wrong is mostly not doing anything interesting with its not-wrongness.

(I make no comment on whether all, or some, or none, of that is correct. I'm just hoping to reduce the amount of talking-past-one-another here.)

Comment author: Alexandros 21 March 2011 12:03:51AM 11 points [-]

Eliezer is suggesting that the Quineans are "not doing anything interesting with [their] not-wrongness" after being aware of the field for all of an hour and a half?!

Comment author: [deleted] 21 March 2011 12:34:55PM 6 points [-]

Makes perfect sense to me. If someone comes up to me and says "This person is a brilliant mathematician! She just showed me a proof that there's no highest prime, and proved Pythagoras' theorem!" my response would be that that's still no evidence that she's made any worthwhile contribution to mathematics. She may have, but there's little reason to believe it from the original statement.

Comment author: Alexandros 21 March 2011 12:50:20PM *  5 points [-]

"still no evidence" is very much different to claims that certain properties do not exist in a given body of work. Absence of evidence (after an hour's looking, if that) is not evidence of absence.

Comment author: [deleted] 21 March 2011 04:20:27PM 11 points [-]

Seems to me less like that and more like, "this Euclid fellow was brilliant", followed by a list of things that Euclid proved before anybody else did. Timing matters here. It's no coincidence that before Quine came along, the clever Eliezers were not taking Quinean naturalism for granted.

For another analogy, if someone came along and told you, "this Hugh Everett fellow was brilliant! Here, read this paper in which he argues that the wave function never collapses", would you say, "well, Eliezer already went through that a few years ago; there's still no evidence that Everett made any worthwhile contribution"?

Comment author: Eliezer_Yudkowsky 21 March 2011 12:00:27AM 5 points [-]

I affirm this interpretation.

Comment author: BowDown 21 March 2011 07:42:34AM 15 points [-]

Eliezer's response does not. It looks like the response of one who feels their baby, LW style philosophy, is under attack. But it isn't.

Methinks Eliezer needs to spend more time practicing the virtues of scholarship by actually reading much of the philosophy that he is critiquing. His assessments of "naturalistic" philosophy seem like straw men. Furthermore, from a psychological perspective, it seems like Eliezer is trying to defend his previously made commitments to "LW-Style philosophy" at all costs. This is not the mark of true rationality - true rationality admits challenges to previous assumptions.

Comment author: Eliezer_Yudkowsky 20 March 2011 11:58:44PM 4 points [-]

Okay, so what have they done that I would consider cognitive philosophy? It doesn't matter how many verbal-type non-dissolved questions we agree on apart from that. I'm taking free will as an exemplar and saying, "But it's all like that, so far as I've been able to tell."

Comment author: [deleted] 21 March 2011 12:10:13PM 8 points [-]

Just taking the example I happen to know about, Sarah-Jane Leslie works on the meaning of generics. (What do we mean when we say "Tigers have stripes" ? All tigers? Most tigers? Normal tigers? But then how do we account for true statements like "Tigers eat people" when most tigers don't eat people, or "Peacocks have colorful tails" when female peacocks don't have colorful tails?) She answers this question directly using evidence from cognitive science. I think it counts as question-dissolving.

Comment author: lukeprog 21 March 2011 12:14:15AM *  17 points [-]

It doesn't matter how many verbal-type non-dissolved questions we agree on apart from that. I'm taking free will as an exemplar and saying, "But it's all like that, so far as I've been able to tell."

I'm not sure what you mean by this. Are you saying that my claim that LW-style philosophy shares many central assumptions with Quinean naturalism, in contrast to most of philosophy, doesn't hinge on whether or not I can present a long list of things on which LW-style philosophy and Quinean naturalism agree, in contrast to most of philosophy?

I suspect that's not what you're saying, but then... what do you think it was that I was claiming in the first place?

Or, another way to put it: Which sentence of my original article are you disagreeing with? Do you disagree with my claim that "standard Less Wrong positions on philosophical matters have been standard positions in a movement within mainstream philosophy for half a century"? Or perhaps you disagree with my claim that "Less Wrong-style philosophy is part of a movement within mainstream philosophy to massively reform philosophy in light of recent cognitive science - a movement that has been active for at least two decades"? Or perhaps you disagree with my claim that "Rationalists need not dismiss or avoid philosophy"?

I wonder if you agree with gjm's suggestion that "LW-style philosophy takes (what turns out to be) Quinean naturalism as a starting point and then goes on to do things that no one working in mainstream philosophy has thought of." That's roughly what I said above, though of course I'll point out that lots of Quinean naturalists have taken Quinean naturalism as a starting point and done things that nobody else thought of. That's just what it means to make original contributions in the movement.

I'll be happy to provide examples of "cognitive philosophy" once I've got this above bit cleared up. I've given examples before (Schroeder 2004; Bishop & Trout 2004; Bickle 2003), but of course I could give more detail.

Comment author: Eliezer_Yudkowsky 21 March 2011 12:27:49AM 15 points [-]

Are you saying that my claim that LW-style philosophy shares many central assumptions with Quinean naturalism, in contrast to most of philosophy, doesn't hinge on whether or not I can present a long list of things on which LW-style philosophy and Quinean naturalism agree, in contrast to most of philosophy?

I'm saying that the claim that LW-style philosophy shares many assumptions with Quinean naturalism in contrast to most of philosophy is unimportant, thus, presenting the long list of basic assumptions on which LW-style and Quinean naturalism agree is from my perspective irrelevant.

Do you disagree with my claim that "standard Less Wrong positions on philosophical matters have been standard positions in a movement within mainstream philosophy for half a century"?

Yes. What I would consider "standard LW positions" is not "there is no libertarian free will" but rather "the philosophical debate on free will arises from the execution of the following cognitive algorithms X, Y, and Z". If the latter has been a standard position then I would be quite interested.

Or perhaps you disagree with my claim that "Less Wrong-style philosophy is part of a movement within mainstream philosophy to massively reform philosophy in light of recent cognitive science - a movement that has been active for at least two decades"?

The kind of reforms you quote are extremely basic, along the lines of "OMG there are cognitive biases and they affect philosophers!" not "This is how this specific algorithm generates the following philosophical debate..." If the movement hasn't progressed to the second stage, then there seems little point in aspiring LW rationalists reading about it.

GJM's suggestion is correct, but the thing which you seem to deny and which I think is true is that LW is at a different stage of doing this sort of philosophy than any Quinean naturalism I have heard of, so that the other Quineans "doing things that nobody else has thought of" don't seem to be doing commensurate work.

I am not asking for an example of someone who agrees with me that, sure, cognitive philosophy sounds like a great idea, by golly. There's a difference between saying "Sure, evolution is true!" and doing evolutionary biology.

I'm asking for someone who's dissolved a philosophical question into a cognitive algorithm, preferably in a way not previously seen before on LW.

Did you read the LW sequence on free will, both the setup and the solution? Apologies if you've already previously answered this question, I have a vague feeling that I asked you before and you said yes, but still, just checking.

On the whole, you seem to think that I should be really enthusiastic about finding philosophers who agree with my basic assumptions, because here are these possible valuable allies in academia - why, if we could reframe LW as Quineanism, we'd have a whole support base ready-made!

Whereas I'm thinking, "If you ask what sort of activity these people perform in their daily work, their skills are similar to those of other philosophers and unlike those of people trying to figure out what algorithm a brain is running" and so they can't be hired to do the sort of work we need without extensive retraining; and since we're not out to reform academic philosophy, per se, it's not clear that we need allies in a fight we could just bypass.

Comment author: [deleted] 21 March 2011 01:15:20AM *  28 points [-]

It might be useful, if only for gaining status and attention and funding, to connect your work directly to one or several academic fields. To present it as a synthesis of philosophy, computer science, and cognitive science (or some other combination of your choice.) When people ask me what LessWrong is, I generally say something like "It's philosophy from a computer scientist's perspective." Most people can only put a mental label on something when they have a rough idea of what it's like, and it's not practical to say, "Well, our work isn't like anything."

That doesn't mean you have to hire philosophers or join a philosophy department; it might not mean that you, personally, have to do anything. But I do think that more people would be interested, and have a smaller inferential distance, if LW ideas were generally presented as related to other disciplines.

Comment author: lukeprog 21 March 2011 01:07:27AM *  29 points [-]

I'm saying that the claim that LW-style philosophy shares many assumptions with Quinean naturalism in contrast to most of philosophy is unimportant...

Well, it's important to my claim that LW-style philosophy fits into the category of Quinean naturalism, which I think is undeniable. You may think Quinean naturalism is obvious, but well... that's what makes you a Quinean naturalist. Part of the purpose of my post is to place LW-style philosophy in the context of mainstream philosophy, and my list of shared assumptions between LW-style philosophy and Quinean philosophy does just that. That goal by itself wasn't meant to be very important. But I think it's a categorization that cuts reality near enough the joints to be useful.

What I would consider "standard LW positions" is not "there is no libertarian free will" but rather "the philosophical debate on free will arises from the execution of the following cognitive algorithms X, Y, and Z". If the latter has been a standard position then I would be quite interested.

Then we are using the word "standard" in different ways. If I were to ask most people to list some "standard LW positions", I'm pretty sure they would list things like reductionism, empiricism, the rejection of libertarian free will, atheism, the centrality of cognitive science to epistemology, and so on - long before they list anything like "the philosophical debate on free will arises from the execution of the following cognitive algorithms X, Y, and Z". I'm not even sure how much consensus that enjoys on Less Wrong. I doubt it is as much a 'standard' position on Less Wrong as the other things I mentioned.

But I'm not here to argue about the meaning of the word standard.

Disagreement: dissolved.

Moving on: Yes, I read the free will stuff. 'How an Algorithm Feels from the Inside' is one of my all-time favorite Yudkowsky posts.

You'll have to be clearer on what you think LW is doing that Quinean naturalists are not doing. But really, I don't even need to wait for that to respond. Even work by philosophers who are not Quinean naturalists can be useful in your very particular line of work - for example in clearing up your CEV article's conflation of "extrapolating" from means to ends and "extrapolating" from current ends to new ends after reflective equilibrium and other processes have taken place.

Finally, you say that if Quinean naturalism hasn't progressed from recognizing that biases affect philosophers to showing how a specific algorithm generates a philosophical debate then "there seems little point in aspiring LW rationalists reading about it."

This claim is, I think, both clearly false as stated and misrepresents the state of Quinean naturalism.

First, on falsity: There are many other useful things for philosophers (including Quinean naturalists) to be doing besides just working with scientists to figure out why our brains produce confused philosophical debates. Since your own philosophical work on Less Wrong has considered far more than just this, I assume you agree. Thus, it is not the case that Quinean naturalists aren't doing useful work unless they are discovering the cognitive algorithms that generate philosophical debates.

Second, on misrepresentation: Quinean naturalists don't just discuss the fact that cognitive biases affect philosophers. Quinean naturalists also discuss how to do philosophy amidst the influence of cognitive biases. That very question is a major subject of your writing on Less Wrong, so I doubt you see no value in it. Moreover, Quinean naturalists do sometimes discuss how cognitive algorithms generate philosophical debates. See, for example, Eric Schwitzgebel's recent work on how introspection works and why it generates philosophical confusions.

It seems you're not just resisting the classification of LW-style philosophy within the broader category of Quinean naturalism. You're also resisting the whole idea of seeing value in what mainstream naturalistic philosophers are doing, which I don't get. How do you think that thought got generated? Reading too much modal logic and not enough Dennett / Bickle / Bishop / Metzinger / Lokhorst / Thagard?

I'm not even trying to say that Eliezer Yudkowsky should read more naturalistic philosophy. I suspect that's not the best use of your time, especially given your strong aversion to it. But I am saying that the mainstream community has useful insights and clarifications and progress to contribute. You've already drawn heavily from the basic insights of Quinean naturalism, whether or not you got them from Quine himself. And you've drawn from some of the more advanced insights of people like Judea Pearl and Nick Bostrom.

So I guess I just don't get what looks to me like a strong aversion in you to rationalists looking through Quinean naturalistic philosophy for useful insights. I don't understand where that aversion is coming from. If you're not that familiar with Quinean naturalistic philosophy, why do you assume in advance that it's a bad idea to read through it for insights?

Comment author: Alexandros 20 March 2011 11:26:13PM 14 points [-]

When I read your first post here, my mind immediately went to You're Entitled to Arguments, But Not (That Particular) Proof. I gave you the benefit of the doubt since you called it a 'litmus test' (however arbitrary), but you seem to have anchored on that. If your work is in substantial agreement with an established field of philosophy, that means there are more intelligent people who could become allies, and a store of knowledge from which valuable insights could come. I don't know why you are looking this particular gift horse in the mouth.

Comment author: Eliezer_Yudkowsky 21 March 2011 12:04:07AM 7 points [-]

There's lots of people I think have valuable insights - cognitive scientists, AI researchers, statistical learning experts, mathematicians...

The question is whether high-grade academic philosophy belongs on the scholarship list, not whether scholarship is a virtue. The fact that they have managed to produce a minority school that agrees with Gary Drescher on the extremely basic question of whether there's libertarian free will (no) and people are made of atoms (yes), does not entitle them to a position next to "Artificial Intelligence: A Modern Approach".

Comment author: lukeprog 21 March 2011 01:16:23AM *  14 points [-]

Physicalism and the rejection of free will are both majority positions in Anglophone philosophy, actually, but I agree that agreement on those points doesn't put someone on the shelf next to AIMA.

Comment author: AlephNeil 21 March 2011 03:00:58AM *  15 points [-]

Physicalism and the rejection of free will are both majority positions in Anglophone philosophy

Regarding physicalism, I don't entirely trust that survey.

Firstly, most of those who call themselves physicalists nevertheless think that qualia exist and are Deeply Mysterious, such that one cannot deduce a priori, from objective physical facts, that Alfred isn't a zombie or that Alfred and Bob aren't qualia-inverted with respect to each other.

Secondly, in very recent years - 90s into the new century - I think there's been a rising tide of antimaterialism. Erstwhile physicalists such as Jaegwon Kim have defected. Anthologies are published with names like "The Waning of Materialism".

As the survey itself tells us, only 16% accept or lean towards "zombies are inconceivable".

This is all consistent with my experience in internet debates, where it seems that most upcoming or wannabe philosophers who have any confident opinions on the matter are antimaterialists.

Comment author: lukeprog 21 March 2011 03:08:51AM 11 points [-]

All good points. I take back the claim that physicalism is a majority position; that is under serious doubt.

How sad! :(

Comment author: [deleted] 21 March 2011 07:37:10AM *  6 points [-]

Firstly, most of those who call themselves physicalists nevertheless think that qualia exist and are Deeply Mysterious, such that one cannot deduce a priori, from objective physical facts, that Alfred isn't a zombie or that Alfred and Bob aren't qualia-inverted with respect to each other.

[...]

only 16% accept or lean towards "zombies are inconceivable".

Strictly speaking, I don't think either of these requires abandonment of physicalism by even a small degree. To say that one can or cannot conceive something is not to directly say anything about reality itself (#). To say that one can or cannot deduce something is, again, not directly to say anything about reality itself (#except in the trivial sense that it says something about what one, i.e. a real person, can or cannot deduce, or can or cannot conceive). Even if you want to argue that it says something about reality itself, however indirectly, it's not at all obvious that it says this particular thing (i.e. non-physicalism).

In particular, I am well aware of the severe limitations of deduction as a path to knowledge. Being so aware, I am not in the slightest surprised by, or troubled by, the inability to deduce that Alfred isn't a zombie. I don't see why I should be troubled. As for what I can conceive - well, I can conceive all sorts of things which have no obvious connection to reality. Why should examination of the limits of my imagination give me any sort of information about whether physicalism is true?

The key question for me is: is the hypothesis of physicalism tenable? I'm not asking for proof, deductive or otherwise. I am asking whether the hypothesis is consistent with the evidence and internally coherent. The fact that someone can conceive of zombies, and therefore conceive that the hypothesis is false, is no disproof of the hypothesis. And similarly, the fact that the hypothesis of physicalism cannot be deduced is no disproof either.

Comment author: PhilosophyTutor 22 March 2011 02:09:53AM 8 points [-]

Possibly you should state your hypothesis ahead of time and define what would count (or have counted in the past) as a worthwhile contribution to LW-style rationalism from within the analytic philosophy community.

Then we would have a concrete way to decide the question of whether analytic philosophy has contributed anything in the past, or contributes anything in the future.

It also might turn out in the process of formalising your definition of what counts as a worthwhile contribution that nothing outside of your specific field of AI research counts for you, which would in itself be a worthwhile realisation.

Acknowledging my own biases here, I'm an analytic philosopher who mostly teaches scientific methodology and ethics (with a minor side interest in statistics) and my reaction to perusing the LW content was that there were some very interesting and valuable nuggets here for me to fossick out but that the bulk of the content wasn't new or non-obvious to me.

Possibly there is so little for you in philosophy that has real novelty or value because there is already enormous overlap between what you do and what is done in the relevant subset of philosophy.

Being a philosopher makes you acutely aware of how deep the intellectual debts of most modern people are to philosophy, and how little awareness of this they have. It's all too easy to believe that one came to one's moral viewpoint entirely unassisted and entirely naturally, for example, without being aware that one is actually articulating a mixture of Kant's and Bentham's ideas that one never would have come up with had one lived before Kant and Bentham. Many people who have never heard of Peter Singer take the animal liberation movement for granted, unaware that the term "animal liberation" was coined by a philosopher in 1975 drawing on previous work by philosophers in the 1970s.

Comment author: komponisto 21 March 2011 12:17:38AM 21 points [-]

the kind of thinking embodied in How An Algorithm Feels From the Inside which I've never seen anywhere else outside of Gary Drescher (and rumors that it's in Dennett books I haven't read).

Dennett is one of the leaders of mainstream philosophy. If it's in Dennett, Luke wins.

what I think standard "naturalistic" philosophy says

How did you acquire your beliefs about what standard "naturalistic" philosophy says? I have this impression that it was from outside caricatures rather than philosophers themselves.

Remember Scott Aaronson's critique of Stephen Wolfram? You seem at risk of being in a similar position with respect to mainstream analytic philosophy as Wolfram was with respect to mainstream science.

Comment author: Perplexed 20 March 2011 10:41:47PM 1 point [-]

A partial answer here:

Note his diagnosis of the problem of free will as being a result of philosophical confusion. Yes, of course, we will things and act according to our will, so in that sense, it’s free, but our will is itself caused.

Comment author: XiXiDu 21 March 2011 11:05:28AM *  1 point [-]

I have always been too shy to ask, but would anyone be willing to tell me how wrong I am about my musings regarding free will here? I haven't read the LW sequence on free will yet, since it states that "aspiring reductionists should try to solve it on their own." I tried; any feedback?

Comment author: gwern 21 March 2011 04:38:06PM 2 points [-]

I don't think it's very good. (On the other hand, I have seen a great deal worse on free will.) There seem to be some outright errors or at least imprecisions, eg.:

No system can understand itself for that the very understanding would evade itself forever. A bin trying to contain itself.

To keep on topic, are you familiar with quining and all the ways of self-referencing?

Comment author: XiXiDu 22 March 2011 09:12:51AM 2 points [-]

To keep on topic, are you familiar with quining and all the ways of self-referencing?

I am vaguely aware of it. As far as I know, a quine can be seen as an artifact of a given language rather than a complete and consistent self-reference. Every quine is missing some of its own definition; e.g., "when preceded by" or "print" need external interpreters to work as intended. No closed system can contain a perfect model of itself, and is consequently unable to predict its own actions; therefore no libertarian free will can exist.
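(For concreteness, here is a minimal Python quine - my own illustrative sketch, not something from the thread. It shows the point about external interpreters: the program carries a description of itself as data, but the quoting, substitution, and printing are all done by machinery the language supplies, outside the program's own text.)

```python
# A minimal quine. The string s describes the program's own structure,
# but note how much is *not* contained in s: repr() quoting, the
# %-substitution, and print() itself are all supplied by the Python
# interpreter, not by the program's text.
s = 's = %r\nprint(s %% s)'
print(s % s)
```

Running just the two code lines reproduces them exactly (the comments are not part of the self-description, so strip them first to see the quine property).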

There seem to be some outright errors or at least imprecisions...

What is outright wrong or imprecise about it?

The main point I tried to make is that a definition of free will that does satisfy our understanding of being free agents is possible if you disregard "free from" and concentrate on "free to".

Comment author: jimrandomh 21 March 2011 06:23:49PM 3 points [-]

This seems to be saying that Quinean philosophy reached (correct) conclusions similar to Less Wrong, and that since it came first it probably influenced LW, directly or indirectly, and therefore, we should study Quinean philosophy. But this does not follow; if LW and Quine say the same things, and either LW is better written or we've already read it, then this is a reason not to read Quine, because of the duplication. The implied argument seems to be: Quine said these things first => Quine deserves prestige => We should read Quine. But prestige alone is not a sufficient reason to read anything.

Comment author: lukeprog 21 March 2011 06:56:09PM *  7 points [-]

No, I advise against reading Quine. I only said above that rationalists should not ignore mainstream (Quinean) philosophy. That's a much weaker claim than the one you've attributed to me. Much of LW is better-written and more informed of the latest science than some of the best Quinean philosophy being written today.

What I'm claiming is that Quinean philosophy has made, and continues to make, useful contributions, and thus shouldn't be ignored. I have some examples of useful contributions from Quinean philosophy here.

Comment author: dxu 16 November 2014 05:52:41PM *  0 points [-]

Necro-post, but I have to say I think a lot of people might have been/be talking past each other here. The question isn't whether mainstream philosophy has useful insights to offer, the question is whether studying mainstream philosophy, i.e. "not ignoring it", as you put it, is the best possible use of one's time, as opposed to studying, say, AI research. There are opportunity costs for everything you do, and frankly, I'd say reading philosophy has (for me) too high of an opportunity cost and too low of an expected benefit to justify doing so. I don't think I'd be mistaken in saying that this is probably true for many other LW readers as well.

Comment author: TheOtherDave 21 March 2011 07:02:08PM 3 points [-]

I'm reminded of Caliph Omar's apocryphal comments about the Library of Alexandria.

Comment author: benelliott 21 March 2011 06:39:43PM *  2 points [-]

Perhaps the argument is more like this:

  • Quine said many things that we agree with
  • Some of these are non-obvious; it's possible that we wouldn't all have come up with them had we not had this community
  • Since we have not explicitly mentioned Quine before, it is unlikely that we have already heard everything he came up with
  • Therefore reading Quine may reveal other useful, non-obvious insights, which we might take a long time to come up with on our own
  • Therefore we should read Quine.

Comment author: lukeprog 21 March 2011 07:00:20PM *  9 points [-]

I don't advocate reading Quine directly, but rather Quinean philosophy. For example Epistemology and the Psychology of Human Judgment, which reads like a series of Less Wrong blog posts, but covers lots of material not yet covered on Less Wrong. (I made a dent in this by transposing its coverage of statistical prediction rules into a Less Wrong post.)

And I don't advocate it for everyone. Doing research in philosophy is my specialty, but I don't think Eliezer should waste his time poring through philosophy journals for useful insights. Nor should most people. But then, most people won't benefit from reading through books on algorithmic learning theory, either. That's why we have divisions of labor and expertise. The thing I'm arguing against is Eliezer's suggestion that people shouldn't read philosophy at all outside of Less Wrong and AI books.

Comment author: Vladimir_M 20 March 2011 10:24:16PM *  2 points [-]

One philosopher whose work it would be extremely interesting to see analyzed from a LW-style perspective is Max Stirner. Stirner has, in my opinion, been unfairly neglected in academic philosophy, and to the extent that his philosophy has been given attention, it was mostly in various nonsensical postmodernist and wannabe-avantgardist contexts. However, a straightforward reading of his original work is a very rewarding intellectual exercise, and I'd really like to see a serious no-nonsense discussion of his philosophy.

Comment author: BobTheBob 23 March 2011 04:15:52AM *  1 point [-]

Some questions and thoughts about this:

  • How is it that 'naturalism' is the L.W. philosophy? I am not a naturalist, as I understand that term. What is the prospect of fair treatment for a dissenter to the L.W. orthodoxy?

  • Where does Quine talk about postmodernism, or debates about the meanings of terms like 'knowledge'? If a reference is available it'd be appreciated.

  • What exactly do you understand by 'naturalism' - what does it commit you to? Pointing to Quine et al. gives some indication, but it should not be assumed that there is no value, if being a naturalist is important to you, in trying to be more precise than this. One suggestion - still quite crude - is that there are only empirical and historical facts: there is no fact which doesn't ultimately boil down to some collection of facts of these types. Plausibly such a view implies that there are no facts about rationality, insofar as rationality concerns what we ought to think and do, and this is not something implied solely by facts about the way the world measurably is and has been. Is this an acceptable consequence?

  • What exactly do you mean by 'reductionism'? There are at least the following two possibilities:

1) There is some privileged set of basic physical laws (the domain of micro-physics), and all higher-order laws are in principle derivable from the members of the privileged set.

2) There is some set of basic concepts, and all higher-order concepts are merely logical constructions of these.

Depending on how (1) is spelled out, it is plausibly fairly trivial, and not something anyone of Quine's generation could count as an innovative or courageous position.

Proposition (2), by contrast, is widely thought to be false. And surely one of the earliest and strongest criticisms of it is found in Quine's own 'Two Dogmas of Empiricism'.

Is there some third thesis under the name 'reductionism' which is neither close to trivial nor likely false, that you have in mind?

  • Concerning the role of shared intuition in philosophy: it's an interesting subject, worthy of thought. But roughly, its value is no more than the sort of shared agreement relied upon in any other collaborative discipline. Just as in mathematics and physics you have to count on people to agree at some point that certain things are obvious, so too in philosophy. The difference is that in philosophy the things agreed upon are often (carefully considered) value judgments. Intuitions are of use in philosophy only to the extent that almost any rational person can be counted on to share them (Theory X implies it's morally acceptable to kill a person in situation Y; intuitively it is not acceptable to kill a person in situation Y; therefore X is flawed). So I don't see much to the claim that they present a problem.

  • What do you take the claim that philosophy should be about cognitive science to imply? Do you think there should be no philosophy of language, no philosophy of mind, no aesthetics, no ethics, and on and on? Or do you really think that a complete understanding of the functioning of the brain would afford all of the answers to the questions these undertakings ask? I looked for an answer to this question in the post linked to as the source of this thought, but it is more a litany of prejudices and apparently uninformed generalizations than an argument. Not a model of rationality, at least.

Comment author: lukeprog 23 March 2011 08:23:28AM *  2 points [-]

Answering your questions in order...

Naturalism is presupposed by all or nearly all promoted Less Wrong posts, and certainly by all of Eliezer's posts. I don't know what the prospects are for fair treatment of dissenters.

Here is a quick overview of Quine on postmodernism. On Quine on useless debates about the meaning of terms, see Quine: A guide for the perplexed.

There are lots of meanings of naturalism, explored for example in Ritchie's Understanding Naturalism. What I mean by 'Quinean naturalism' is summed up in the original post.

As for reductionism - I mean this kind of reductionism. The (2) kind of reductionism you mentioned is, of course, the second dogma of empiricism that Quine famously attacked (the first being analyticity). And that is not what I mean by reductionism.

I'm working on a post on intuitions where my positions on them will become clearer.

As for philosophy being about cognitive science, I'd have to write at great length to explain, I suppose. I'll probably write a post on that for my blog Common Sense Atheism, at which time I'll try to remember to come back here and link to it.

Comment author: Mirzhan_Irkegulov 21 January 2015 08:33:40PM *  1 point [-]

FIX: The link http://commonsenseatheism.com/wp-content/uploads/2011/03/Quine-Epistemology-Naturalized.pdf doesn't work. It's under 'Quine famously wrote'.

The link to Northoff (http://www.imhr.ca/research/northofflab/index-e.cfm) is also 404.

Comment author: Ritalin 07 December 2012 05:13:59PM 1 point [-]

I just wanted to thank you for your continuous work, and especially for explicitly sparing us the work of sifting through all that philosophical tradition.

Comment author: [deleted] 10 February 2015 12:16:18AM *  0 points [-]

I suspect some philosophers of mind would reply 'Philosophy of mind makes AI studies honest'. Also, if you are averse to recommending the reading of Quine, at least recommend some of his critics. If your views are Quinean, surely you should at least have a look at the vast anti-Quinean literature out there?

Comment author: Fyrius 08 December 2012 02:02:30PM *  0 points [-]

After a proposed analysis or definition is overturned by an intuitive counterexample, the idea is to revise or replace the analysis with one that is not subject to the counterexample. Counterexamples to the new analysis are sought, the analysis revised if any counterexamples are found, and so on...

Interestingly, that sounds a lot like (an important part of) how linguistics research works. Of course, it's a problem for philosophy because it doesn't see itself as a cognitive science like linguistics does, and it endeavours to do other things with this approach than deducing the rules of the system that generates the intuitions.

Comment author: lukeprog 23 December 2011 07:38:34PM 0 points [-]

Someone asked me via email:

How do you see the analytic/synthetic distinction relating to map/territory? I suspect I read the logical positivists with too much charity, because I fit their arguments into my conception of map and territory. Quine attacked the positivists' view with what I know you've said is a view much like what LessWrong holds.

I figured my answer will be helpful for others, too, so I'll post it here:

The analytic/synthetic distinction is quite different from map/territory. The map/territory distinction is a metaphor that illustrates the correspondence theory of truth, which Eliezer endorses but I am unsure of.

The analytic/synthetic distinction was used by the logical empiricists to draw a strict line between sentences that were true in virtue of the relations between the meanings of words (analytic), and sentences that were true in virtue of the relations of the meanings of words plus extralinguistic facts (synthetic).

Quine argued that the distinction can't be made so easily. He gave several arguments for this conclusion. One of the easier-to-summarize ones that I'll use as an example is his argument against sentence-by-sentence meaning. He said that individual sentences taken in isolation from each other do not imply certain anticipations. For that, you need individual sentences plus larger chunks of theory in which the terms of the sentence are embedded. This is one part of Quine's "holism."

More generally, Quine said that Carnap (representing the best of logical positivism in The Logical Structure of the World, which was a masterful step forward for human thought even if it is flawed) and the logical empiricists had failed to provide clear and unambiguous boundaries for the analytic, and that the line between analytic and synthetic was instead quite fuzzy.

As for this: "Quine attacked the positivists' view with what I know you've said is a view much like what LessWrong holds." I don't think I said Quine attacked the positivists' view with a view much like LessWrong's typical view. What I remember saying was that many positive aspects of Quine's worldview resembled standard LessWrong positions, not that his negative work on logical empiricism made use of standard LessWrong positions.

Comment author: Vladimir_Nesov 23 December 2011 09:44:17PM *  3 points [-]

The map/territory distinction is a metaphor that illustrates the correspondence theory of truth which Eliezer endorses but I am unsure of.

Its role in the sequences seems much simpler: if you look at human minds as devices for producing correct (winning) decisions (beliefs), the "map" aspect of the brain is effective to the extent/because the state of the brain corresponds to the state of the territory. This is not the correspondence theory of truth; it's a theory of (arranging) coincidence between correct decisions/beliefs (things defined in terms of the territory) and actual decisions/beliefs (made by the brain involving its "map" aspect), one that points out that it normally takes physical reasons to correlate the two.

Comment author: lukeprog 23 December 2011 10:08:54PM 0 points [-]

I like how you've put this. This is roughly how I see things, and what I thought was intended by The Simple Truth, but recently someone pointed me to a post where Eliezer seems to endorse the correspondence theory instead of the thing you said (which I'm tempted to classify as a pragmatist theory of truth, but it doesn't matter).

Comment author: Vladimir_Nesov 23 December 2011 10:35:45PM *  0 points [-]

My point is that the role of the map/territory distinction is not specifically to illustrate the correspondence theory of truth. I don't see how the linked post disagrees with what I said, as its subject matter is truth (among other things), and I didn't talk about truth; instead I said some apparently true things about the process of forming beliefs and decisions, as seen "from the outside". If we then mark the beliefs that correspond to territory, those fulfilling their epistemic role, as "true", correspondence theory of truth naturally follows.

Comment author: AnthonyC 29 April 2011 07:15:21PM 0 points [-]

Socrates definitely drew on some questionable intuitions in Plato's dialogues, but I think justice is a particularly slippery concept, in that it requires a prior conception of both the law and the good.

Legality is, for any sufficiently well-written laws, a purely factual matter. Does x break the law? Yes/no. Morality is, for any particular sufficiently well-written moral system, also a factual matter, but with more degrees of freedom. Is x good? Is x optimally good? Is x bad, but the best available option? Is the moral law, as written, itself good or optimally good under its own definition of goodness?

Justice flings this all together in one pot: What ought the law to be? How ought the law to be enforced? How should we feel about the execution of justice?

This may be relatively clear to most readers of this site who grew up aware that good/evil and law/chaos are largely orthogonal, but in my experience many (most?) people have significant confusion/crossover between "illegal" and "immoral."