lukeprog comments on Less Wrong Rationality and Mainstream Philosophy

Post author: lukeprog, 20 March 2011 08:28PM (106 points)


Comment author: lukeprog 21 March 2011 01:07:27AM *  29 points [-]

I'm saying that the claim that LW-style philosophy shares many assumptions with Quinean naturalism in contrast to most of philosophy is unimportant...

Well, it's important to my claim that LW-style philosophy fits into the category of Quinean naturalism, which I think is undeniable. You may think Quinean naturalism is obvious, but well... that's what makes you a Quinean naturalist. Part of the purpose of my post is to place LW-style philosophy in the context of mainstream philosophy, and my list of shared assumptions between LW-style philosophy and Quinean philosophy does just that. That goal by itself wasn't meant to be very important. But I think it's a categorization that cuts reality near enough the joints to be useful.

What I would consider "standard LW positions" is not "there is no libertarian free will" but rather "the philosophical debate on free will arises from the execution of the following cognitive algorithms X, Y, and Z". If the latter has been a standard position then I would be quite interested.

Then we are using the word "standard" in different ways. If I were to ask most people to list some "standard LW positions", I'm pretty sure they would list things like reductionism, empiricism, the rejection of libertarian free will, atheism, the centrality of cognitive science to epistemology, and so on - long before they list anything like "the philosophical debate on free will arises from the execution of the following cognitive algorithms X, Y, and Z". I'm not even sure how much consensus that enjoys on Less Wrong. I doubt it is as 'standard' a position on Less Wrong as the other things I mentioned.

But I'm not here to argue about the meaning of the word standard.

Disagreement: dissolved.

Moving on: Yes, I read the free will stuff. 'How an Algorithm Feels from the Inside' is one of my all-time favorite Yudkowsky posts.

You'll have to be clearer about what you think LW is doing that Quinean naturalists are not doing. But really, I don't even need to wait for that to respond. Even work by philosophers who are not Quinean naturalists can be useful in your very particular line of work - for example in clearing up your CEV article's conflation of "extrapolating" from means to ends and "extrapolating" from current ends to new ends after reflective equilibrium and other processes have taken place.

Finally, you say that if Quinean naturalism hasn't progressed from recognizing that biases affect philosophers to showing how a specific algorithm generates a philosophical debate then "there seems little point in aspiring LW rationalists reading about it."

This claim is, I think, both clearly false as stated and misrepresents the state of Quinean naturalism.

First, on falsity: There are many other useful things for philosophers (including Quinean naturalists) to be doing besides just working with scientists to figure out why our brains produce confused philosophical debates. Since your own philosophical work on Less Wrong has considered far more than just this, I assume you agree. Thus, it is not the case that Quinean naturalists aren't doing useful work unless they are discovering the cognitive algorithms that generate philosophical debates.

Second, on misrepresentation: Quinean naturalists don't just discuss the fact that cognitive biases affect philosophers. Quinean naturalists also discuss how to do philosophy amidst the influence of cognitive biases. That very question is a major subject of your writing on Less Wrong, so I doubt you see no value in it. Moreover, Quinean naturalists do sometimes discuss how cognitive algorithms generate philosophical debates. See, for example, Eric Schwitzgebel's recent work on how introspection works and why it generates philosophical confusions.

It seems you're not just resisting the classification of LW-style philosophy within the broader category of Quinean naturalism. You're also resisting the whole idea of seeing value in what mainstream naturalistic philosophers are doing, which I don't get. How do you think that thought got generated? Reading too much modal logic and not enough Dennett / Bickle / Bishop / Metzinger / Lokhorst / Thagard?

I'm not even trying to say that Eliezer Yudkowsky should read more naturalistic philosophy. I suspect that's not the best use of your time, especially given your strong aversion to it. But I am saying that the mainstream community has useful insights and clarifications and progress to contribute. You've already drawn heavily from the basic insights of Quinean naturalism, whether or not you got them from Quine himself. And you've drawn from some of the more advanced insights of people like Judea Pearl and Nick Bostrom.

So I guess I just don't get what looks to me like a strong aversion in you to rationalists looking through Quinean naturalistic philosophy for useful insights. I don't understand where that aversion is coming from. If you're not that familiar with Quinean naturalistic philosophy, why do you assume in advance that it's a bad idea to read through it for insights?

Comment author: TheOtherDave 21 March 2011 01:42:47AM 2 points [-]

I'm reminded of the "subsequence" of The Level Above Mine, Competent Elites, Above Average AI Scientists, and That Magical Click.

Maybe mainstream philosophers just lack the aura of thousand-year-old rationalist vampires?

Comment author: lukeprog 21 March 2011 01:49:32AM *  1 point [-]

I'm quite sure they do. Right now I can't think of a philosopher who is as imposing to me as (the late) E.T. Jaynes is. Unless you count people like Judea Pearl who also do AI research, that is. :)

But that doesn't mean that mainstream philosophers never make useful and original contributions on all kinds of subjects relevant to Less Wrong and even to friendly AI.

Comment author: Perplexed 21 March 2011 02:32:46AM *  1 point [-]

Right now I can't think of a philosopher who is as imposing to me as (the late) E.T. Jaynes is. Unless you count people like Judea Pearl who also do AI research, that is.

That (Jaynes) is a pretty high standard. But not impossibly high. As candidates, I would mention Jaakko Hintikka, Per Martin-Löf, and the late David Lewis. If you are allowed to count economists, then I would also mention game theorists like Aumann, Binmore, and the late John Harsanyi. And if you allow philosophically inclined physicists like Jaynes, there are quite a few folks worth mentioning.

Comment author: lukeprog 21 March 2011 02:39:41AM 0 points [-]

I'd never heard of Per Martin-Löf; thanks.

Comment author: TheOtherDave 21 March 2011 01:55:55AM 0 points [-]

I of course am not definitive here, but I strongly suspect that from EY's perspective it means precisely that.

Comment author: lukeprog 21 March 2011 02:13:16AM *  3 points [-]

If so, I don't think he can maintain that position consistently, since he has already benefited from the work of many mainstream philosophers, and continues to do so - for example Bostrom on anthropic reasoning.

Comment author: Perplexed 21 March 2011 03:22:36AM 0 points [-]

Maybe mainstream philosophers just lack the aura of thousand-year-old rationalist vampires?

Maybe. But they have a self-deprecating sense of humor. Doesn't that count for something?

Comment author: Vladimir_Nesov 21 March 2011 01:16:39PM *  2 points [-]

So I guess I just don't get what looks to me like a strong aversion in you to rationalists looking through Quinean naturalistic philosophy for useful insights. I don't understand where that aversion is coming from.

Actually, it's an expectation that studying this philosophy stuff would be of no use (or could even harm you), which is a more reflectively reliable judgment than mere emotional aversion. It might be incorrect, but it can't be influenced by arguing that aversion is irrelevant (not that you do argue this way, but summarizing the position with the word "aversion" suggests doing so).

Comment author: Eliezer_Yudkowsky 21 March 2011 01:54:20AM -2 points [-]

If you're not that familiar with Quinean naturalistic philosophy, why do you assume in advance that it's a bad idea to read through it for insights?

Because I expect it to teach very bad habits of thought that will lead people to be unable to do real work. Assume naturalism! Move on! NEXT!

Comment author: lukeprog 21 March 2011 02:03:15AM *  27 points [-]

Assume naturalism! Move on! NEXT!

Yes, that's what most Quinean naturalists are doing...

Can I expect a reply to my claim that a central statement of your above comment was both clearly false and misrepresented Quinean naturalism? I hope so. I'm also still curious to hear your response to the specific example I've now given several times of how even non-naturalistic philosophy can provide useful insights that bear directly on your work on Friendly AI (the "extrapolation" bit).

As for expecting naturalistic philosophy to teach very bad habits of thought: That has some plausibility. But it is hard to argue about with any precision. What's the cost/benefit analysis on reading naturalistic philosophy after having undergone significant LW-rationality training? I don't know.

But I will point out that reading naturalistic philosophy (1) deconverted me from fundamentalist Christianity, (2) led me to reject most of standard analytic philosophy, (3) led me to almost all of the "standard" (in the sense I intended above) LW positions, and (4) got me reading and loving Epistemology and the Psychology of Human Judgment and Good and Real (two philosophy books that could just as well be a series of Less Wrong blog posts) - all before I started regularly reading Less Wrong.

So... it's not always bad. :)

Also, your recommendation not to read naturalistic, reductionistic philosophy outside of Less Wrong feels very paternalistic and cultish to me, and I have a negative emotional (and perhaps rational) reaction to the suggestion that people should get their philosophy only from a single community.

Comment author: Eliezer_Yudkowsky 21 March 2011 07:57:36AM 6 points [-]

Can I expect a reply to my claim that a central statement of your above comment was both clearly false and misrepresented Quinean naturalism?

Reply to charge that it is clearly false: Sorry, it doesn't look clearly false to me. It seems to me that people can get along just fine knowing only what philosophy they pick up from reading AI books.

Reply to charge that it misrepresented Quinean naturalism: Give me an example of one philosophical question they dissolved into a cognitive algorithm. Please don't link to a book on Amazon where I click "Surprise me" ten times looking for a dissolution and then give up. Just tell me the question and sketch the algorithm.

The CEV article's "conflation" is not a convincing example. I was talking about the distinction between terminal and instrumental value way back in 2001, though I made the then-usual error of using nonstandard terminology. I left that distinction out of CEV specifically because (a) I'd seen it generate cognitive errors in people who immediately went funny in the head as soon as they were introduced to the concept of top-level values, and (b) because the original CEV paper wasn't supposed to go down to the level of detail of ordering expected-consequence updates versus moral-argument-processing updates.

Comment author: lukeprog 21 March 2011 08:32:25AM *  19 points [-]

Thanks for your reply.

On whether people can benefit from reading philosophy outside of Less Wrong and AI books, we simply disagree.

Your response on misrepresenting Quinean naturalism did not reply to this part: "Quinean naturalists don't just discuss the fact that cognitive biases affect philosophers. Quinean naturalists also discuss how to do philosophy amidst the influence of cognitive biases. That very question is a major subject of your writing on Less Wrong, so I doubt you see no value in it."

As for an example of dissolving certain questions into cognitive algorithms, I'm drafting up a post on that right now. (Actually, the current post was written as a dependency for the other post I'm writing.)

On CEV and extrapolation: You seem to agree that the distinction is useful, because you've used it yourself elsewhere (you just weren't going into so much detail in the CEV paper). But that seems to undermine your point that valuable insights are not to be found in mainstream philosophy. Or, maybe that's not your claim. Maybe your claim is that all the valuable insights of mainstream philosophy happen to have already shown up on Less Wrong and in AI textbooks. Either way, I once again simply disagree.

I doubt that you picked up all the useful philosophy you have put on Less Wrong exclusively from AI books.

Comment author: Dmytry 07 April 2012 11:58:27AM *  -2 points [-]

I agree about philosophy, and actually I feel similarly about LW-style rationality, for my value of real work (engineering mostly, with some art and science). Your tricks burden the tree search, and also easily lead to processing branches in the wrong order, as the 'biases' that make branch processing effective are either disabled or, worst of all, negated before a substitute is devised.

If you want to form a belief about, for example, FAI, it's all well and good that you don't feel that morality can result from some simple principles. But if you want to build FAI, this branch (the generated morality that we agree with) is much, much lower in the tree, while its probability of success really isn't that much worse, since the long, hand-wavy argument has many points of possible failure and low reliability. And then there's still no immunity against fallacies. The worst form of the sunk cost fallacy is disregarding the possibility of a better solution after the cost has been sunk. That's what destroys corporations after they sink costs: they don't even pursue a cost-recovery option when it doesn't coincide with their prior effort and only utilizes part of that effort.

Comment author: Oscar_Cunningham 21 March 2011 07:52:02PM 0 points [-]

Thanks for the link to Eric Schwitzgebel; very interesting reading!

Comment author: Perplexed 21 March 2011 01:33:01AM *  0 points [-]

There are many other [Edit: was originally 'more'] useful things for philosophers (including Quinean naturalists) to be doing besides just working with scientists to figure out why our brains produce confused philosophical debates.

Perhaps. But it is difficult to imagine any less complete problem dissolution being successful at actually shutting down that confused philosophical debate, and thus freeing those first-class minds to actually do those hypothetical useful things.

Comment author: lukeprog 21 March 2011 01:44:11AM *  0 points [-]

BTW, by "more" I meant "additional": I meant that there "are many other useful things for philosophers... to be doing..." I've now clarified the wording in the original comment.