Jack comments on Rationality and Mainstream Philosophy - Less Wrong

Post author: lukeprog | 20 March 2011 08:28PM | 106 points




Comment author: Eliezer_Yudkowsky 25 March 2011 07:15:59PM 21 points

Quine's naturalized epistemology. Epistemology is a branch of cognitive science

Saying this may count as staking out an exciting position in philosophy all by itself; but merely saying it doesn't shape my expectations about how people think, tell me how to build an AI, or let me expect or do anything concrete that I couldn't do before, so from an LW perspective this isn't yet a move on the gameboard. At best it introduces a move on the gameboard.

Tarski on language and truth.

I know Tarski as a mathematician and have acknowledged my debt to him as a mathematician. Perhaps you can learn about him in philosophy, but that doesn't imply people should study philosophy if they will also run into Tarski by doing mathematics.

Chalmers' formalization of Good's intelligence explosion argument...

...was great for introducing mainstream academia to Good, but if you compare it to http://wiki.lesswrong.com/wiki/The_Hanson-Yudkowsky_AI-Foom_Debate then you'll see that most of the issues raised didn't fit into Chalmers's decomposition at all. I'm not suggesting that he should've done it differently in a first paper, but still, Chalmers's formalization doesn't yet represent most of the debates that have happened in this community. It's more an illustration of how far you have to simplify things down to get published in the mainstream than an argument that you ought to be learning this sort of thing from the mainstream.

Dennett on belief in belief.

Acknowledged and credited. Like Drescher, Dennett is one of the known exceptions.

Bratman on intention. Bratman's 1987 book on intention has been a major inspiration to AI researchers working on belief-desire-intention models of intelligent behavior...

Appears as a citation only in AIMA 2nd edition, described as a philosopher who approves of GOFAI. "Not all philosophers are critical of GOFAI, however; some are, in fact, ardent advocates and even practitioners... Michael Bratman has applied his "belief-desire-intention" model of human psychology (Bratman, 1987) to AI research on planning (Bratman, 1992)." This is the only mention in the 2nd edition. Perhaps by the time they wrote the third edition they had read more Bratman and figured that he could be used to describe work they had already done? Not exactly a "major inspiration", if so...

Functionalism and multiple realizability.

This comes under the heading of "things that rather a lot of computer programmers, though not all of them, can see as immediately obvious even if philosophers argue it afterward". I really don't think that computer programmers would be at a loss to understand that different systems can implement the same algorithm if not for Putnam and Lewis.
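The point programmers find obvious can be sketched in a few lines: two structurally different pieces of code can realize the very same algorithm, in the sense of computing the identical function. (A toy illustration, not anything from the original discussion.)

```python
# Two structurally different "realizations" of the same algorithm:
# the function computed is identical even though the implementation
# substrate differs.

def gcd_recursive(a: int, b: int) -> int:
    """Euclid's algorithm, expressed recursively."""
    return a if b == 0 else gcd_recursive(b, a % b)

def gcd_iterative(a: int, b: int) -> int:
    """The same algorithm, expressed as a loop."""
    while b:
        a, b = b, a % b
    return a

# The two implementations agree on every input: one algorithm,
# multiply realized.
assert all(gcd_recursive(a, b) == gcd_iterative(a, b)
           for a in range(1, 50) for b in range(50))
```

The same agreement would hold if one version ran on silicon and the other on vacuum tubes, which is the whole intuition behind multiple realizability.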

Explaining the cognitive processes that generate our intuitions... Talbot describes the project of his philosophy dissertation for USC this way: "...where psychological research indicates that certain intuitions are likely to be inaccurate, or that whole categories of intuitions are not good evidence, this will overall benefit philosophy."...

Same comment as for Quine: This might introduce interesting work, but while saying just this may count as an exciting philosophical position, it's not a move on the LW gameboard until you get to specifics. Then it's not a very impressive move unless it involves doing nonobvious reductionism, not just "Bias X might make philosophers want to believe in position Y". You are not being held to a special standard here, Luke: a friend named Kip Werking once did some work arguing that we have lots of cognitive biases pushing us to believe in libertarian free will, which I thought made a nice illustration of the difference between LW-style decomposition of a cognitive algorithm and treating biases as an argument in the war of surface intuitions.

Pearl on causality.

Mathematician and AI researcher. He may have mentioned the philosophical literature in his book. It's what academics do. He may even have read the philosophers before he worked out the answer for himself. He may even have found that reading philosophers getting it wrong helped spur him to think about the problem and deduce the right answer by contrast - I've done some of that over the course of my career, though more in the early phases than the later phases. Can you really describe Pearl's work as "building" on philosophy, when, IIRC, most of the philosophers were claiming at that point that causality was a mere illusion of correlation? Has Pearl named a previous philosopher, not himself a mathematician, who he thought was getting it right?
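The substance of Pearl's break with "causality as correlation" can be shown by enumeration in a toy causal model with a confounder Z influencing both X and Y (the probabilities below are invented for illustration): conditioning on X and intervening on X give different answers.

```python
# Minimal causal model: Z -> X, Z -> Y, X -> Y (toy numbers).
p_z1 = 0.5
p_x1_given_z = {0: 0.1, 1: 0.9}
p_y1_given_xz = {(0, 0): 0.1, (1, 0): 0.4, (0, 1): 0.6, (1, 1): 0.9}

def p_z(z):
    return p_z1 if z == 1 else 1 - p_z1

def p_x(x, z):
    p = p_x1_given_z[z]
    return p if x == 1 else 1 - p

def p_y(y, x, z):
    p = p_y1_given_xz[(x, z)]
    return p if y == 1 else 1 - p

# Observational: P(Y=1 | X=1), by enumerating over the confounder.
p_x1 = sum(p_z(z) * p_x(1, z) for z in (0, 1))
p_y1_cond = sum(p_z(z) * p_x(1, z) * p_y(1, 1, z) for z in (0, 1)) / p_x1

# Interventional: P(Y=1 | do(X=1)) -- sever the Z -> X arrow, set X=1.
p_y1_do = sum(p_z(z) * p_y(1, 1, z) for z in (0, 1))

print(p_y1_cond)  # ~0.85: conditioning inherits the confounding via Z
print(p_y1_do)    # ~0.65: intervening does not
```

The gap between the two numbers is exactly what a correlation-only account of causality cannot express, and Pearl's do-operator makes the distinction formal.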

Drescher's Good and Real.

Previously named by me as good philosophy, as done by an AI researcher coming in from outside for some odd reason. Not exactly a good sign for philosophy when you think about it.

Dennett's "intentional stance."

For a change I actually did read about this before forming my own AI theories. I can't recall ever actually using it, though. It's for helping people who are confused in a way that I wasn't confused to begin with. Dennett is in any case a widely known and named exception.

Bostrom on anthropic reasoning. And global catastrophic risks. And Pascal's mugging. And the doomsday argument. And the simulation argument.

A friend and colleague who was part of the transhumanist community and a founder of the World Transhumanist Association long before he was the Director of the Oxford Future of Humanity Institute, and who's done a great deal to precisionize transhumanist ideas about global catastrophic risks and inform academia about them, as well as excellent original work on anthropic reasoning and the simulation argument. Bostrom is familiar with Less Wrong and has even tried to bring some of the work done here into mainstream academia, such as Pascal's Mugging, which was invented right here on Less Wrong by none other than yours truly. Of course, owing to the constraints of academia and its prior unfamiliarity with elementary probability theory and decision theory, Bostrom was unable to convey the most exciting part of Pascal's Mugging in his academic writeup: the idea that Solomonoff-induction-style reasoning will explode the size of remote possibilities much faster than their Kolmogorov complexity diminishes their probability.
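The "most exciting part" can be made concrete with a crude toy proxy for Kolmogorov complexity (the bit-length of a short expression naming the number; everything here is an invented illustration, not a real complexity measure): the probability penalty 2^-K shrinks linearly in description length, while the magnitude named can grow towerlike, so the expected size of the remote possibility explodes.

```python
from math import log2

# Toy proxy for Kolmogorov complexity: 8 bits per character of a
# short expression that names the number.
def description_bits(expr: str) -> int:
    return 8 * len(expr)

# Each entry: (expression, log10 of the number it names).
for expr, value_log10 in [("10**10", 10),
                          ("10**100", 100),
                          ("10**10**10", 10**10)]:
    k = description_bits(expr)
    # log2 of [2^-K * magnitude] = log10(magnitude)*log2(10) - K:
    # positive and growing means the penalty cannot keep up.
    log2_expected = value_log10 * log2(10) - k
    print(expr, k, round(log2_expected))
```

Description length grows by a handful of bits per step while the named magnitude grows hyperexponentially, which is the structure of the mugging.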

Reading Bostrom is a triumph of the rule "Read the most famous transhumanists" not "Read the most famous philosophers".

The doomsday argument, which was not invented by Bostrom, is a rare case of genuinely interesting work done in mainstream philosophy - anthropic issues are genuinely not obvious, genuinely worth arguing about, and philosophers have done genuinely interesting work on them. Similarly, although LW has gotten further, there has been genuinely interesting work in philosophy on the genuinely interesting problems of Newcomblike dilemmas. There are people in the field who can do good work on the rather rare occasions when there is something worth arguing about that is still classed as "philosophy" rather than as a separate science, although they cannot actually solve those problems (as very clearly illustrated by the Newcomblike case), and the field as a whole is not capable of distinguishing good work from bad work on even the genuinely interesting subjects.

Ord on risks with low probabilities and high stakes.

Argued it on Less Wrong before he wrote the mainstream paper. The LW discussion got further, IMO. (And that's as far as I know, since I don't know whether there was any academic debate or whether the paper just dropped into the void.)

Deontic logic

Is not useful for anything in real life / AI. This is instantly obvious to any sufficiently competent AI researcher. See e.g. http://norvig.com/design-patterns/img070.htm, a mention that turned up in passing back when I was doing my own search for prior work on Friendly AI.

...I'll stop there, but do want to note, even if it's out-of-order, that the work you glowingly cite on statistical prediction rules is familiar to me from having read the famous edited volume "Judgment Under Uncertainty: Heuristics and Biases", where it appears as a lovely chapter by Robyn Dawes on "The robust beauty of improper linear models", which quite stuck in my mind (citation from memory). You may have learned about this from philosophy, and I can see how you would credit that as a use of reading philosophy, but it's not work done in philosophy, and, well, I didn't learn about it there, so this particular citation feels a bit odd to me.
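Dawes's "improper linear model" is simple enough to sketch in a few lines (the candidate names and cue scores below are invented for illustration): standardize each predictive cue, then give every cue the same unit weight instead of fitting regression weights.

```python
from statistics import mean, pstdev

# Hypothetical cue scores for five candidates on three predictors
# (e.g. GPA, test score, interview rating) -- invented numbers.
cues = {
    "A": (3.9, 160, 7), "B": (3.2, 145, 9), "C": (3.6, 170, 5),
    "D": (2.8, 130, 6), "E": (3.4, 155, 8),
}

def zscores(values):
    """Standardize a column of cue values to mean 0, sd 1."""
    m, s = mean(values), pstdev(values)
    return [(v - m) / s for v in values]

# Standardize each cue column, then apply unit weights: just add.
cols = list(zip(*cues.values()))
z_cols = [zscores(c) for c in cols]
scores = {name: sum(z[i] for z in z_cols)
          for i, name in enumerate(cues)}
ranking = sorted(scores, key=scores.get, reverse=True)
print(ranking)
```

Dawes's finding was that such equal-weight composites predict outcomes nearly as well as regression-fitted weights, and often better out of sample, because fitted weights overfit small training samples.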

Comment author: Jack 25 March 2011 08:06:50PM 15 points

when IIRC, most of the philosophers were claiming at this point that causality was a mere illusion of correlation?

That this isn't at all the case should be obvious even if the only thing you've read on the subject is Pearl's book. The entire counterfactual approach is due to Lewis and Stalnaker. Salmon's theory isn't about correlation either. Also, see James Woodward who has done very similar work to Pearl but from a philosophy department. Pearl cites all of them if I recall.

Comment author: Eliezer_Yudkowsky 25 March 2011 08:15:38PM 4 points

Stalnaker's name sounds familiar from Pearl, so I'll take your word for this and concede the point.