
Philosophy: A Diseased Discipline

Post author: lukeprog, 28 March 2011 07:31PM

Part of the sequence: Rationality and Philosophy

Eliezer's anti-philosophy post Against Modal Logics was pretty controversial, while my recent pro-philosophy (by LW standards) post and my list of useful mainstream philosophy contributions were massively up-voted. This suggests a significant appreciation for mainstream philosophy on Less Wrong - not surprising, since Less Wrong covers so many philosophical topics.

If you followed the recent very long debate between Eliezer and me over the value of mainstream philosophy, you may have gotten the impression that we strongly diverge on the subject. But I suspect I agree more with Eliezer on the value of mainstream philosophy than I do with many Less Wrong readers - perhaps most.

That might sound odd coming from someone who writes a philosophy blog and spends most of his spare time doing philosophy, so let me explain myself. (Warning: broad generalizations ahead! There are exceptions.)

 

Failed methods

Large swaths of philosophy (e.g. continental and postmodern philosophy) often don't even try to be clear, rigorous, or scientifically respectable. This is philosophy of the "Uncle Joe's musings on the meaning of life" sort, except that it's dressed up in big words and long footnotes. You will occasionally stumble upon an argument, but it falls prey to magical categories and language confusions and non-natural hypotheses. You may also stumble upon science or math, but they are used to 'prove' things irrelevant to the actual scientific data or the equations used.

Analytic philosophy is clearer, more rigorous, and better with math and science, but it does only a slightly better job of avoiding magical categories, language confusions, and non-natural hypotheses. Moreover, its central tool is intuition, and its use of intuition displays a near-total ignorance of how brains work. As Michael Vassar observes, philosophers are "spectacularly bad" at understanding that their intuitions are generated by cognitive algorithms.

 

A diseased discipline

What about Quinean naturalists? Many of them at least understand the basics: that things are made of atoms, that many questions don't need to be answered but instead dissolved, that the brain is not an a priori truth factory, that intuitions come from cognitive algorithms, that humans are loaded with bias, that language is full of tricks, and that justification rests in the lens that can see its flaws. Some of them are even Bayesians.

Like I said, a few naturalistic philosophers are doing some useful work. But the signal-to-noise ratio is much lower even in naturalistic philosophy than it is in, say, behavioral economics or cognitive neuroscience or artificial intelligence or statistics. Why? Here are some hypotheses, based on my thousands of hours in the literature:

  1. Many philosophers have been infected (often by later Wittgenstein) with the idea that philosophy is supposed to be useless. If it's useful, then it's science or math or something else, but not philosophy. Michael Bishop says a common complaint from his colleagues about his 2004 book is that it is too useful.
  2. Most philosophers don't understand the basics, so naturalists spend much of their time coming up with new ways to argue that people are made of atoms and intuitions don't trump science. They fight beside the poor atheistic philosophers who keep coming up with new ways to argue that the universe was not created by someone's invisible magical friend.
  3. Philosophy has grown into an abnormally backward-looking discipline. Scientists like to put their work in the context of what old dead guys said, too, but philosophers have a real fetish for it. Even naturalists spend a fair amount of time re-interpreting Hume and Dewey yet again.
  4. Because they were trained in traditional philosophical ideas, arguments, and frames of mind, naturalists will anchor and adjust from traditional philosophy when they make progress, rather than scrapping the whole mess and starting from scratch with a correct understanding of language, physics, and cognitive science. Sometimes, philosophical work is useful to build from: Judea Pearl's triumphant work on causality built on earlier counterfactual accounts of causality from philosophy. Other times, it's best to ignore the past confusions. Eliezer made most of his philosophical progress on his own, in order to solve problems in AI, and only later looked around in philosophy to see which standard position his own theory was most similar to.
  5. Many naturalists aren't trained in cognitive science or AI. Cognitive science is essential because the tool we use to philosophize is the brain, and if you don't know how your tool works then you'll use it poorly. AI is useful because it keeps you honest: you can't write confused concepts or non-natural hypotheses in a programming language.
  6. Mainstream philosophy publishing favors the established positions and arguments. You're more likely to get published if you can write about how intuitions are useless in solving Gettier problems (which is a confused set of non-problems anyway) than if you write about how to make a superintelligent machine preserve its utility function across millions of self-modifications.
  7. Even much of the useful work naturalistic philosophers do is not at the cutting-edge. Chalmers' update for I.J. Good's 'intelligence explosion' argument is the best one-stop summary available, but it doesn't get as far as the Hanson-Yudkowsky AI-Foom debate in 2008 did. Talbot (2009) and Bishop & Trout (2004) provide handy summaries of much of the heuristics and biases literature, just like Eliezer has so usefully done on Less Wrong, but of course this isn't cutting edge. You could always just read it in the primary literature by Kahneman and Tversky and others.
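Point 5 above - that you can't write confused concepts or non-natural hypotheses in a programming language - can be made concrete. Here is a toy sketch, purely my own illustration (the names and criteria are not drawn from any actual AI system): the moment you try to encode "knowledge" as, say, justified true belief, you are forced to say exactly what belief, truth, and justification compute to.

```python
# Toy sketch: encoding "S knows that p" as justified true belief.
# Writing it as code forces each component to be explicit and checkable;
# a confused or magical criterion simply can't be typed in.

def knows(beliefs, justifications, world, p):
    believes = p in beliefs                   # S believes p
    is_true = world.get(p, False)             # p is true
    justified = bool(justifications.get(p))   # S has some justification for p
    return believes and is_true and justified

world = {"snow_is_white": True, "moon_is_cheese": False}
beliefs = {"snow_is_white", "moon_is_cheese"}
justifications = {"snow_is_white": ["direct observation"]}

print(knows(beliefs, justifications, world, "snow_is_white"))   # True
print(knows(beliefs, justifications, world, "moon_is_cheese"))  # False
```

Whatever one thinks of this particular analysis (Gettier cases break it, as noted in point 6), the exercise shows what "keeps you honest" means here: every term had to be given an operational meaning before the code would run.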

Of course, there is mainstream philosophy that is both good and cutting-edge: the work of Nick Bostrom and Daniel Dennett stands out. And of course there is a role for those who keep arguing for atheism and reductionism and so on. I was a fundamentalist Christian until I read some contemporary atheistic philosophy, so that kind of work definitely does some good.

But if you're looking to solve cutting-edge problems, mainstream philosophy is one of the last places you should look. Try to find the answer in the cognitive science or AI literature first, or try to solve the problem by applying rationalist thinking: like this.

Swimming the murky waters of mainstream philosophy is perhaps a job best left for those who have already spent several years studying it - that is, people like me. I already know what things are called and where to look, and I have an efficient filter for skipping past the 95% of philosophy that isn't useful to me. And hopefully my rationalist training will protect me from picking up bad habits of thought.

 

Philosophy: the way forward

Unfortunately, many important problems are fundamentally philosophical problems. Philosophy itself is unavoidable. How can we proceed?

First, we must remain vigilant with our rationality training. It is not easy to overcome millions of years of brain evolution, and as long as you are human there is no final victory. You will always wake up the next morning as Homo sapiens.

Second, if you want to contribute to cutting-edge problems, even ones that seem philosophical, it's far more productive to study math and science than it is to study philosophy. You'll learn more in math and science, and your learning will be of a higher quality. Ask a fellow rationalist who is knowledgeable about philosophy what the standard positions and arguments in philosophy are on your topic. If any of them seem really useful, grab those particular works and read them. But again: you're probably better off trying to solve the problem by thinking like a cognitive scientist or an AI programmer than by ingesting mainstream philosophy.

However, I must say that I wish so much of Eliezer's cutting-edge work wasn't spread out across hundreds of Less Wrong blog posts and long SIAI articles written in an idiosyncratic style and vocabulary. I would rather these ideas were written in standard academic form, even if they transcended the standard game of mainstream philosophy.

But it's one thing to complain; another to offer solutions. So let me tell you what I think cutting-edge philosophy should be. As you might expect, my vision is to combine what's good in LW-style philosophy with what's good in mainstream philosophy, and toss out the rest:

  1. Write short articles. One or two major ideas or arguments per article, maximum. Try to keep each article under 20 pages. It's hard to follow a hundred-page argument.
  2. Open each article by explaining the context and goals of the article (even if you cover mostly the same ground in the opening of 5 other articles). What topic are you discussing? Which problem do you want to solve? What have other people said about the problem? What will you accomplish in the paper? Introduce key terms, cite standard sources and positions on the problem you'll be discussing, even if you disagree with them.
  3. If possible, use the standard terms in the field. If the standard terms are flawed, explain why they are flawed and then introduce your new terms in that context so everybody knows what you're talking about. This requires that you research your topic so you know what the standard terms and positions are. If you're talking about a problem in cognitive science, you'll need to read cognitive science literature. If you're talking about a problem in social science, you'll need to read social science literature. If you're talking about a problem in epistemology or morality, you'll need to read philosophy.
  4. Write as clearly and simply as possible. Organize the paper with lots of headings and subheadings. Put in lots of 'hand-holding' sentences to help your reader along: explain the point of the previous section, then explain why the next section is necessary, etc. Patiently guide your reader through every step of the argument, especially if it is long and complicated.
  5. Always cite the relevant literature. If you can't find much work relevant to your topic, you almost certainly haven't looked hard enough. Citing the relevant literature not only lends weight to your argument, but also enables the reader to track down and examine the ideas or claims you are discussing. Being lazy with your citations is a sure way to frustrate precisely those readers who care enough to read your paper closely.
  6. Think like a cognitive scientist and AI programmer. Watch out for biases. Avoid magical categories and language confusions and non-natural hypotheses. Look at your intuitions from the outside, as cognitive algorithms. Update your beliefs in response to evidence. [This one is central. This is LW-style philosophy.]
  7. Use your rationality training, but avoid language that is unique to Less Wrong. Nearly all these terms and ideas have standard names outside of Less Wrong (though in many cases Less Wrong already uses the standard language).
  8. Don't dwell too long on what old dead guys said, nor on semantic debates. Dissolve semantic problems and move on.
  9. Conclude with a summary of your paper, and suggest directions for future research.
  10. Ask fellow rationalists to read drafts of your article, then re-write. Then rewrite again, adding more citations and hand-holding sentences.
  11. Format the article attractively. A well-chosen font makes for an easier read. Then publish (in a journal or elsewhere).

Note that this is not just my vision of how to get published in journals. It's my vision of how to do philosophy.

Meeting journal standards is not the most important reason to follow the suggestions above. Write short articles because they're easier to follow. Open with the context and goals of your article because that makes it easier to understand, and lets people decide right away whether your article fits their interests. Use standard terms so that people already familiar with the topic aren't annoyed at having to learn a whole new vocabulary just to read your paper. Cite the relevant positions and arguments so that people have a sense of the context of what you're doing, and can look up what other people have said on the topic. Write clearly and simply and with much organization so that your paper is not wearying to read. Write lots of hand-holding sentences because we always communicate less effectively than we think we do. Cite the relevant literature as much as possible to assist your most careful readers in getting the information they want to know. Use your rationality training to remain sharp at all times. And so on.

That is what cutting-edge philosophy could look like, I think.

 

Next post: How You Make Judgments

Previous post: Less Wrong Rationality and Mainstream Philosophy

 

 

Comments (431)

Comment author: buybuydandavis 24 November 2014 08:27:37PM 2 points [-]

Seems like an appropriate article to relay a bit of wisdom from E.T. Jaynes.

Jaynes quotes a colleague: “Philosophers are free to do whatever they please, because they don’t have to do anything right.”

Comment author: Mirzhan_Irkegulov 11 July 2014 12:08:51PM 0 points [-]

Fix: The link in the sentence “This is philosophy of the "Uncle Joe's musings on the meaning of life" sort, except that it's dressed up in big words and long footnotes.”, namely http://el-prod.baylor.edu/certain_doubts/?p=453, is wrong, should be something else.

Comment author: Laoch 29 November 2013 03:00:26PM *  0 points [-]

Look at your intuitions from the outside, as cognitive algorithms.

Which Less Wrong post do I need to read to find out how to do that? Also is there a hard definition of an AI programmer?

Comment author: MarkusRamikin 27 November 2013 07:40:09PM 1 point [-]

as long as you are human there is no final victory.

Hm, that makes a nifty quote.

Comment author: Benito 26 October 2012 12:31:19PM 0 points [-]

The difference between much of mainstream philosophy and LessWrongian philosophy: http://www.lulztruck.com/43901/the-thinker-and-the-doer/

Comment author: Peterdjones 26 October 2012 02:12:38PM 0 points [-]

Out of the way! The Singularity is coming! http://www.dismuse.com/wp-content/uploads/2010/10/Glacier2_p.jpg

Comment author: Bugmaster 28 November 2011 10:30:08PM 2 points [-]

Think like a cognitive scientist and AI programmer.

Is it possible to think "like an AI programmer" without being an AI programmer ? If the answer is "no", as I suspect it is, then doesn't this piece of advice basically say, "don't be a philosopher, be an AI programmer instead" ? If so, then it directly contradicts your point that "philosophy is not useless".

To put it in a slightly different way, is creating FAI primarily a philosophical challenge, or an engineering challenge ?

Comment author: TimS 29 November 2011 03:30:03AM 2 points [-]

Creating AI is an engineering challenge. Making FAI requires an understanding of what we mean by Friendly. If you don't think that is a philosophy question, I would point to the multiplicity of inconsistent moral theories throughout history to try to convince you otherwise.

Comment author: Bugmaster 29 November 2011 03:50:24AM 0 points [-]

Thanks, that does make sense. But, in this case, would "thinking like an AI programmer" really help you answer the question of "what we mean by Friendly" ? Of course, once we do get an answer, we'd need to implement it, which is where thinking like an AI programmer (or actually being one) would come in handy. But I think that's also an engineering challenge at that point.

FWIW, I know there are people out there who would claim that friendliness/morality is a scientific question, not a philosophical one, but I myself am undecided on the issue.

Comment author: Vaniver 29 November 2011 04:02:37AM 2 points [-]

But, in this case, would "thinking like an AI programmer" really help you answer the question of "what we mean by Friendly" ? Of course, once we do get an answer, we'd need to implement it, which is where thinking like an AI programmer (or actually being one) would come in handy. But I think that's also an engineering challenge at that point.

If you don't think like an AI programmer, you will be tempted to use concepts without understanding them well enough to program them. I don't think that's reduced to the level of 'engineering challenge.'

Comment author: Bugmaster 29 November 2011 04:12:55AM *  0 points [-]

Are you saying that it's impossible to correctly answer the question "what does 'friendly' mean ?" without understanding how to implement the answer by writing a computer program ? If so, why do you think that ?

Edit: added "correctly" in the sentence above, because it's trivially possible to just answer "bananas !" or something :-)

Comment author: DSimon 29 November 2011 04:27:02AM 5 points [-]

I don't think the division is so sharp as all that. Rather, what Vaniver is getting at, I think, is that one is capable of correctly and usefully answering the question "What does 'Friendly' mean?" in proportion to one's ability to reason algorithmically about subproblems of Friendliness.

Comment author: Bugmaster 29 November 2011 09:35:02PM 1 point [-]

I see, so you're saying that a philosopher who is not familiar with AI might come up with all kinds of philosophically valid definitions of friendliness, which would still be impossible to implement (using a reasonable amount of space and time) and thus completely useless in practice. That makes sense. And (presumably) if we assume that humans are kind of similar to AIs, then the AI-savvy philosopher's ideas would have immediate applications, as well.

So, that makes sense, but I'm not aware of any philosophers who have actually followed this recipe. It seems like at least a few such philosophers should exist, though... do they ?

Comment author: DSimon 29 November 2011 11:19:43PM *  0 points [-]

[P]hilosophically valid definitions of friendliness, which would still be impossible to implement (using a reasonable amount of space and time) and thus completely useless in practice.

Yes, or more sneakily, impossible to implement due to a hidden reliance on human techniques for which there is as-yet no known algorithmic implementation.

Programmers like to say "You don't truly understand how to perform a task until you can teach a computer to do it for you". A computer, or any other sort of rigid mathematical mechanism, is unable to make the 'common sense' connections that a human mind can make. We humans are so good at that sort of thing that we often make many such leaps in quick succession without even noticing!

Implementing an idea on a computer forces us to slow down and understand every step, even the ones we make subconsciously. Otherwise the implementation simply won't work. One doesn't get as thorough a check when explaining things to another human.

Philosophy in general is enriched by an understanding of math and computation, because it provides a good external view of the situation. This effect is of course only magnified when the philosopher is specifically thinking about how to represent human mental processes (such as volition) in a computational way.

Comment author: Bugmaster 29 November 2011 11:26:12PM 1 point [-]

I agree with most of what you said, except for this:

Yes, or more sneakily, impossible to implement due to a hidden reliance on human techniques for which there is as-yet no known algorithmic implementation.

Firstly, this is an argument for studying "human techniques", and devising algorithmic implementations, and not an argument for abandoning these techniques. Assuming the techniques are demonstrated to work reliably, of course.

Secondly, if we assume that uploading is possible, this problem can be hacked around by incorporating an uploaded human into the solution.

Comment author: DSimon 29 November 2011 11:39:08PM *  1 point [-]

Firstly, this is an argument for studying "human techniques", and devising algorithmic implementations, and not an argument for abandoning these techniques.

Indeed, I should have been more specific; not all processes used in AI need to be analogous to humans, of course. All I meant was that it is very easy, when trying to provide a complete spec of a human process, to accidentally lean on other human mental processes that seem on zeroth-glance to be "obvious". It's hard to spot those mistakes without an outside view.

Secondly, if we assume that uploading is possible, this problem can be hacked around by incorporating an uploaded human into the solution.

To a degree, though I suspect that even in an uploaded mind it would be tricky to isolate and copy-out individual techniques, since they're all likely to be non-locally-cohesive and heavily interdependent.

Comment author: Vaniver 29 November 2011 02:57:20PM 0 points [-]

Endorsed.

Comment author: lessdazed 29 November 2011 01:54:27AM *  2 points [-]

is creating FAI primarily a philosophical challenge, or an engineering challenge ?

An analogy:

http://eccc.hpi-web.de/report/2011/108/

Computational complexity theory is a huge, sprawling field; naturally this essay will only touch on small parts of it... One might think that, once we know something is computable, whether it takes 10 seconds or 20 seconds to compute is obviously the concern of engineers rather than philosophers. But that conclusion would not be so obvious, if the question were one of 10 seconds versus 10^10^10 seconds!
And indeed, in complexity theory, the quantitative gaps we care about are usually so vast that one has to consider them qualitative gaps as well. Think, for example, of the difference between reading a 400-page book and reading every possible such book, or between writing down a thousand-digit number and counting to that number.
More precisely, complexity theory asks the question: how do the resources needed to solve a problem scale with some measure n of the problem size...

Need it be primarily one or the other? But if I must pick one, I pick philosophy.
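The scale of the gap in the quoted passage is easy to check with a back-of-the-envelope calculation. A minimal sketch, assuming roughly 2,000 characters per page and a 26-symbol alphabet (both figures are my own illustrative assumptions):

```python
import math

# "Reading every possible 400-page book": how many such books are there?
chars = 400 * 2000                        # characters in one book (assumed)
# 26**chars is far too large to compute directly; count its decimal
# digits instead, using digits = floor(chars * log10(26)) + 1.
digits = int(chars * math.log10(26)) + 1
print(digits)
```

The count of possible books has over a million decimal digits - you could not even write the number down in a lifetime - which is the sense in which a quantitative gap becomes a qualitative one.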

Comment author: Bugmaster 29 November 2011 02:29:32AM 0 points [-]

An analogy: http://eccc.hpi-web.de/report/2011/108/

I'm afraid I don't see how this article is analogous. The article points out that computational complexity puts a very real limit on what can be computed in practice. Thus, even if you'd proved that something is computable in principle, it may not be computable in our current Universe, with its limited lifespan. You can apply computational complexity to practical problems (f.ex., devising an optimal route for inspecting naval buoys) as well as to theoretical ones (f.ex., discarding the hypothesis that the human brain is a giant lookup table). But these are still engineering and scientific concerns, not philosophical ones.

Need it be primarily one or the other? But if I must pick one, I pick philosophy.

I still don't understand why. If you want to know the probability of FAI being feasible at all, you're asking a scientific question; in order to answer it, you'll need to formulate a hypothesis or two, gather evidence, employ Bayesian reasoning to compute the probability of your hypothesis being true, etc. If, on the other hand, you are trying to actually build an FAI, then you are solving a specific engineering problem; of course, determining whether FAI is feasible or not would be a great first step.

So, I can see how you'd apply science or engineering to the problem, but I don't see how you'd apply philosophy.

Comment author: lessdazed 29 November 2011 03:02:54AM 0 points [-]

If you want to know the probability of FAI being feasible at all, you're asking a scientific question

To fill in the content the term "FAI" stands for, science isn't enough. Engineering is by guess and check, I suppose, but not really.

Comment author: Bugmaster 29 November 2011 03:52:29AM 0 points [-]

Sorry, I couldn't parse your comment at all; I'm not sure what you mean by "content". My hunch is that you meant the same thing as TimS, above; if so, my reply to him should be relevant. If not, my apologies, but could you please explain what you meant ?

Comment author: lessdazed 29 November 2011 04:09:05AM 0 points [-]

I meant what I think he did, so you got it.

Comment author: JonathanLivengood 30 August 2011 07:17:30PM 6 points [-]

I agree with a lot of the content -- or at least the spirit -- of the post, but I worry that there is some selectivity that makes philosophy come off worse than it actually is. Just to take one example that I know something about: Pearl is praised (rightly) for excellent work on causation, but very similar work developed at the same time by philosophers at Carnegie Mellon University, especially Peter Spirtes, Clark Glymour, and Richard Scheines, isn't even mentioned.

Lots of other philosophers could be added to the list of people making interesting, useful contributions to causation research: Christopher Hitchcock at Caltech, James Woodward at Pitt HPS, John Norton at Pitt HPS, Frederick Eberhardt at WashU, Luke Glynn at Konstanz, David Danks at CMU, Ned Hall at Harvard, Jonathan Schaffer at Rutgers, Nancy Cartwright at the LSE, and many others (maybe even including my own humble self).

I am not trying to defend philosophy on the whole. I agree that we have some disease in philosophy that ought to be cut away. But I don't think that philosophy is in as bad a shape as the post suggests. More importantly, there is a lot of good, interesting, useful work being done in philosophy, if you know where to look for it.

Comment author: [deleted] 30 August 2011 07:43:47PM 1 point [-]

Thanks for your comment; I'm working on learning causation theory at the moment, and I didn't know anyone in the field other than Pearl.

Comment author: JonathanLivengood 30 August 2011 10:24:51PM 2 points [-]

You're welcome, of course. Pearl's book on causality is a great place to start. I also recommend Spirtes, Glymour, and Scheines Causation, Prediction, and Search. Depending on your technical level and your interests, you might find Woodward's book Making Things Happen a better place to start. After that, there are many excellent papers, depending on your interests.

Comment author: [deleted] 30 August 2011 10:59:11PM 3 points [-]

I'm a graduate student in mathematics; the more technical, the better. I'm currently three chapters into Pearl. After that in my queue comes Tversky and Kahneman, and now I'll add Spirtes et al. to the end of that.

Comment author: lukeprog 13 June 2011 05:18:59AM *  4 points [-]

This paragraph, from Eugene Mills' 'Are Analytic Philosophers Shallow and Stupid?', made me laugh out loud:

The paradox of analysis concludes that

(PA) A conceptual analysis is correct only if it is trivial.

Philosophers from Socrates onward have [provided] conceptual analyses of knowledge, freedom, truth, goodness, and more. The paradox of analysis suggests that these philosophers... are shallow and stupid: shallow because they stalk triviality, stupid because it so often eludes them.

Mills goes on to defend philosophers, with two sections entitled 'Embracing Triviality, Part I' and 'Embracing Triviality, Part II.'

Comment author: lukeprog 14 May 2011 02:31:41AM *  1 point [-]

One thing I mean by saying that philosophers could benefit from 'thinking like AI programmers' is that forcing yourself to think about the algorithm that would generate a certain reality can guard against superstition, because magic doesn't reduce to computer code.

I recently came across Leibniz saying much the same thing in a passage where he imagines a future language of symbolic logic that had not yet been invented:

The characters would be quite different from what has been imagined up to now... The characters of this script should serve invention and judgment as in algebra and arithmetic... It will be impossible to write, using these characters, chimeral notions.

For the record, I didn't get this little gem from reading Leibniz. I stumbled onto it in Gleick's new history of information, The Information.

Comment author: Vladimir_Nesov 14 May 2011 01:16:27PM 2 points [-]

For the record, I didn't get this little gem from reading Leibniz.

I appreciate this disclaimer.

Comment author: rhollerith_dot_com 14 May 2011 03:55:09AM *  0 points [-]

What I take Leibniz to have meant was that when he uses math he is much less prone to self-deception and to mistakenly believing he's had an insight than when he uses natural language, so he tried (and failed) to extend math so that he could use it to talk about or think about all of the things he uses language to talk about, including human and personal things.

Gottlob Frege, the creator of predicate logic, had a similar ambition.

Note that creating FAI that will extrapolate the volition of the humans requires using math (broadly construed) or formal language to talk about some human things. In particular, you must formally define "human", "volition" and the extrapolation process. The fact that Leibniz and Frege did not get very far with their ambition (although the creation of predicate logic strikes me as some progress) suggests that teaching ourselves how to do that might require nontrivial effort -- although I tend to think that we have a head start in some of our mathematical tools. In particular, the AIXI formalism and (to a lesser extent) some of the more intellectually deep traditions we have for designing programming languages and writing programs strike me as superior to any of the "head starts" (including predicate logic) that Leibniz or Frege (who died in 1925) had at their disposal.

(Pearl's technical explanation of causality is another thing that sort of seems to me like it might possibly somehow assist in this enterprise.)

SIAI has not included me in their private or not-completely-public discussions of Friendliness theory to any significant degree, so they might have insights that render my speculations here obsolete.

Comment author: rhollerith_dot_com 14 May 2011 04:21:47AM 1 point [-]

Another person who seems to have had the same general ambition as Leibniz and Frege is the Free Software Foundation's lawyer, Eben Moglen, the man who with Richard Stallman created the General Public License. Here's Moglen in 2000:

I was committed to the idea that what we were doing with computers was making languages that were better than natural languages for procedural thought. The idea was to do for whole ranges of human thinking what mathematics has been doing for thousands of years in the quantitative arrangement of knowledge, and to help people think in more precise and clear ways.

Comment author: scientism 06 April 2011 04:09:15AM 1 point [-]

Peter Hacker is not somebody who thinks "philosophy should be useless." Of the list of "basics" that you cite Peter Hacker would agree that "things are made of atoms", "that many questions don't need to be answered but instead dissolved" and "that language is full of tricks." He also explicitly states that "Philosophical Foundations of Neuroscience" should be judged on its usefulness (which is why methodological concerns are relegated to the back pages). Indeed, it seems you equate dissolving problems with "thinking philosophy should be useless" (you cite the later Wittgenstein and dissolution was his method), despite the fact that you also cite it favourably. I find this odd.

Comment author: lukeprog 11 April 2011 10:41:27AM 1 point [-]

You're right. I mis-remembered Hacker's positions. I've updated the original post. Thanks for the correction.

Comment author: zaph 31 March 2011 05:54:52PM 0 points [-]

This is my viewpoint as a philosophical layman. I've liked a lot of the philosophy I've read, but I'm thinking about what the counter-proposal to your post might be, and I don't know that it wouldn't result in a better state of affairs. I don't believe we'd have to stop reading writers from prior eras, or keep reinventing the wheel for "philosophical" questions. But why not just say, from here on out, that the useful bits of philosophy can be categorized into other disciplines, and the general catch-all term is no longer warranted? Philosophy covered just too wide a swath of topics: political science/economics, physics/cosmology, and psychology, just to name a few. I don't really know how to categorize everything Leibniz and Newton were interested in. Now that these topics have more empirical data, there's less room for general speculation than there was in the old days. When you reclassify the useful stuff of philosophers' work as science, math, or logic, I think it's very clarifying. All that remains afterwards (in my opinion) is more cultural commentary and criticism, and general speculation about life. I wouldn't call them useless; I found Rawls and Nozick to be interesting. But there would be big-picture thinkers, cross-disciplinary studiers, and other types of thinkers even without a formal academic discipline called philosophy.

Comment author: mytyde 13 November 2012 09:20:05PM *  0 points [-]

The decision of which disciplines belong to "science" or the "humanities", "art" or "engineering" is significantly a political one. Indeed, it is a political question which disciplines exist within which organizations and how they fit together.

Rationalist philosophers just need to call themselves "Psychologists of Quantitative Reasoning" in order to get funding. In the current political era, it is fashionable to claim 'objectivity' in one's profession despite frequently inquiring into non-empirical matters. This claim of objectivity often serves to hide one's personal biases which, if made explicit, might otherwise be useful in interpretation of research.

The drive to be unconcerned with the political implications of one's work is the ideal paradigm for economic exploitation of a class of highly educated scientists by the institutions and people who control how funding is used to enable, disable, or actualize research and engineering.

Fox News is a perfect example of brutally skewing scientific evidence towards political ends: "How Roger Ailes Built the Fox News Fear Factory", http://www.rollingstone.com/politics/news/how-roger-ailes-built-the-fox-news-fear-factory-20110525

(For those of you who would: instead of voting me down because you dislike these ideas, how about trying to engage with them?)

Comment author: elhelado 31 March 2011 04:45:10PM 0 points [-]

The traditional definition of philosophy (in Greek) implied that philosophy's purpose was not to convey information, but to produce a transformation in the individual who practices it. In that sense, it is not supposed to be "useless", but it may appear so to someone who is looking to it for "information" about reality. By this standard, very little of what goes on in academic Philosophy departments today would qualify.

Comment author: mytyde 13 November 2012 09:27:05PM -1 points [-]

I would charge that the same 'institutionalization' which has neutered psychology has changed philosophy into a funding-chaser.

Psychology was invented as a means of studying society so that the social situation could be improved: Freud was a socialist. Because many disciplines have moved to institutions, they have less freedom to pursue research and less freedom to depart from the views of their institutions.

Also, because funding is dependent on people who have ulterior motives in what they choose to fund, it would be almost impossible for a school of psychology to develop which says, for instance "there's something seriously wrong with our society" because they would be hard-pressed to find research funding. That the general population surrenders so much initiative to scientists who are so strongly influenced by veiled politics is the true tragedy of our time.

Comment author: wedrifid 14 November 2012 01:27:41AM 0 points [-]

Psychology was invented as a means of studying society

That sounds more like "Sociology". If you are actually trying to talk about Psychology, then your claim seems wrong.

Comment author: mytyde 15 January 2013 04:24:22AM *  0 points [-]

No, my claim is literal. The role of the discipline 'psychology' has shifted over time away from what we now consider 'sociology' and towards an individualistic approach to mental health. The assumption didn't use to be that mental problems were profoundly unique to the individual, but now mainstream psychology does not take into account the sociological factors which affect mental health in all situations.

Some sources that elaborate the transformation of the discipline are historians and sociologists like Immanuel Wallerstein and Michel Foucault, but there are plenty of non-mainstream psychologists who still practice holistic psychology, like Helene Shulman & Mary Watkins.

Comment author: MugaSofer 15 January 2013 10:12:52AM *  -2 points [-]

mainstream psychology does not take into account the sociological factors which affect mental health in all situations.

Really? I often hear dire warnings about how our society e.g. contributes to suicides by publicizing them. These are generally billed as coming from experts in the field.

Full disclosure: I live in Ireland, it may be different in other countries.

[EDIT: typos]

Comment author: ohwilleke 31 March 2011 01:31:06AM 1 point [-]

"3. Philosophy has grown into an abnormally backward-looking discipline."

Indeed. One of the salutary roles that philosophy served until about the 18th century (think e.g. "natural philosophy") was to serve as an intellectual context within which new disciplines could emerge and new problems could be formulated into coherent complexes of issues that became their own academic disciplines.

In a world where cosmology and quantum physics and neuroscience and statistics and scientific research methods and psychology and "law and whatever" are vibrant, we don't need philosophers to deal with metaphysics and epistemology, but we may need considerably more philosophical attention to questions like "what about a book has value?", "what obligations do people have to each other in an unequal society?", or "what does it mean to be human?"

One of philosophy's main cutting-edge agendas should be formulating new questions to ask and serving as an incubator within which to outline the boundaries of new disciplines of specialists to answer those questions.

Any summary of the discipline that looks like an index of the last two thousand years of philosophical thought is probably missing the stuff that philosophers should be spending their time considering.

Alternately, one approach that many academic philosophers seem to be fond of taking is to consider themselves primarily intellectual historians, with a particularly rich and subtle tradition to understand, so that it can be understood by those who are primarily interested in the history of ideas. In the same way, Freud is a bad place to look for someone interested in doing clinical psychology, but a good place to look for someone interested in understanding the conceptual roots of lots of ideas that shaped lay and professional understanding of the individual mind.

Comment author: marcad 30 March 2011 04:17:59PM *  -1 points [-]

In these articles, I believe Less Wrong is approaching extraordinary levels of groupthink. I had the misfortune of growing up as the child of bona fide cult members, complete with guru. There are many similarities here.

And what is significantly absent is self-awareness of the blatant conceit in believing that some super smart dude can reinvent all thinking all-by-self (don't deny it, that's what's going on). I have been disgusted by articles written by Eliezer which virtually lifted whole swaths of Nietzsche, completely unattributed. There is no way that most of this is original thinking.

And I'll also point out to all the people with rationality blinders on that if the poor dumb sheeples (as appears to be the general attitude around here) get wind that you're anywhere close to installing super-awesome robot overlords that you are certain will rule with love and compassion, then we'll see an uprising which will make the French Revolution look like a love-in.

Really super disgusted. And I don't even give a shit about Wittgenstein. Though I think rationalists who believe they have found or are close to finding the key to living and thinking non-metaphorically are living in their own very delusional altered reality.

What's most ironic is that Less Wrong IS mainstream philosophy. Look around, peeps, this IS the zeitgeist of the scientific set. Just because universities haven't caught up with you means nothing. Get some self-awareness: this is purely and simply an advanced step in the progression of the Enlightenment, although more accurately it's an advanced step in scientific reason à la the school of Socrates. Of course you're different, advanced, but you are a part of that specific genealogy. And this is the damning lack of awareness most present in this mindset. You are children of the Enlightenment. (Go ahead, murder your fathers ;)

The analysis of mainstream philosophy is missing some key analytical components. Namely, the big picture: the nature, pace of change, and priorities of academia overall; the political and social world in which that academic progression took shape; the rationalizations and biases supporting major universities as suppliers of ruling classes, as well as the rationalizations and biases of academia at working-class universities; and of course the funding of all of the above, and how those shape thinking. Not exploring these issues is tantamount to not exploring the problem. It's just hand-wringing.

And why this glaring lack? Those are the hard problems. Hard to talk about, aren't they? Hell, all this philosophy debate sparks hundreds of comments, but this is a particularly abstract topic. 10% concrete development, 90% repainting the bike shed.

Here's my contribution to LW: the Fallacy of the Single Solution (to society's ills), i.e. AI; i.e. Rationality. Particularly abstract solutions, mind you. A lot of what goes on around here is quite Utilitarian, a point alone which should make people sit back and consider, "do we really have the knowledge and capabilities yet to resolve through advanced AI the serious unresolved problems of Utilitarianism?" I'd say that you'd better be Insanely Sure. The bar for evidence better be high, this is high-risk territory. Or, instead, are the Old Dead Guys who have discussed these problems Not Worth Reading either? "Stick with our dogma peeps, don't confuse yourselves!"...

...Oh man, when you're telling people "don't confuse yourselves with the old literature," you are in a really altered reality. Wow. Cults.

Quite strange, all the denial. But then again, that's what groupthink is all about.

Comment author: TheOtherDave 30 March 2011 04:53:35PM 2 points [-]

What's also ironic is that luke, who wrote the post you're responding to, has recently argued at some length that it's important to acknowledge the relationships between LW and mainstream philosophy and in particular the places where LW/EY owe debts to mainstream philosophy.

A reasonable man might infer from this that he's not entirely blinded by groupthink on this particular subject.

Of course, that doesn't mean all the rest of us aren't... though we sure do seem to have a lot of internal disagreement for a bona fide cult.

Comment author: wnoise 30 March 2011 04:39:27PM *  4 points [-]

all thinking all-by-self.

All thinking all-by-himself? No. Great chunks, while being immersed in the culture that resulted from that thinking, sure.

I have been disgusted by articles written by Eliezer which virtually lifted whole swaths of Nietzsche, completely unattributed. There is no way that most of this is original thinking.

For direct influences, Eliezer is quite willing to cite e.g. Feynman, Dennett, Pearl and Drescher.

I don't see the connection you see to Nietzsche in particular, merely a bunch of things that are tangential at best. Would you be willing to spell out which bits of his writings are like which bits of Nietzsche? I would strongly guess that anything you identify is not particularly unique to Nietzsche, and similar points had been made both before and after him, and any that did have no antecedents before him leaked out into the broader culture.

It depends on what you mean by this being "original thinking". Eliezer almost certainly isn't directly mining 19th century German philosophers for ideas. I doubt he has read much if any Nietzsche and would thus not be able to directly copy Nietzsche. Nonetheless, some ideas of Nietzsche have made their way into the modern world view. Ideas are generally dense and interconnected. Starting at one idea of a philosopher and thinking about its implications is going to produce new ideas similar to others the philosopher had.

Yes, one should keep clear that one's ideas that apparently arise from within are crucially dependent on previous experiences and culture. But that doesn't extend to a requirement to track down and cite previous articulators of similar ideas. Once an idea is encountered indirectly, it's fair game to build upon. It has long been recognized that certain ideas arise multiple times apparently independently when the prerequisites take root in a given culture. Newton and Leibniz independently invented calculus, with no direct connection. I'm sure neither could cite any direct influence from prior mathematicians that would directly lead to calculus. But there was still enough commonality in mathematical culture that they developed it at roughly the same time.

Comment author: djc 30 March 2011 05:22:56AM *  25 points [-]

As a professional philosopher who's interested in some of the issues discussed in this forum, I think it's perfectly healthy for people here to mostly ignore professional philosophy, for reasons given here. But I'm interested in the reverse direction: if good ideas are being had here, I'd like professional philosophy to benefit from them. So I'd be grateful if someone could compile a list of significant contributions made here that would be useful to professional philosophers, with links to sources.

(The two main contributions that I'm aware of are ideas about friendly AI and timeless/updateless decision theory. I'm sure there are more, though. Incidentally I've tried to get very smart colleagues in decision theory to take the TDT/UDT material seriously, but the lack of a really clear statement of these ideas seems to get in the way.)

Comment author: [deleted] 09 June 2013 08:43:42PM 7 points [-]

As a professional philosopher who's interested in some of the issues discussed in this forum. . .

Oh wow. The initials 'djc' match up with David (John) Chalmers. Carnap and PhilPapers are mentioned in this user's comments. Far from conclusive evidence, but my bet is that we've witnessed a major analytic philosopher contribute to LW's discussion. Awesome.

Comment author: enye-word 10 May 2017 08:51:59AM 0 points [-]

In the comment he links to above, djc states "One way that philosophy makes progress is when people work in relative isolation, figuring out the consequences of assumptions rather than arguing about them. The isolation usually leads to mistakes and reinventions, but it also leads to new ideas."

When asked about LessWrong in a reddit AMA, David Chalmers stated "i think having subcommunities of this sort that make their own distinctive assumptions is an important mechanism of philosophical progress" and expressed an interest in TDT/UDT.

(See also: https://slatestarcodex.com/2017/02/06/notes-from-the-asilomar-conference-on-beneficial-ai/)

(Sorry to dox you, David Chalmers. Hope you're doing well these days.)

Comment author: XiXiDu 30 March 2011 01:23:59PM 6 points [-]

So I'd be grateful if someone could compile a list of significant contributions made here that would be useful to professional philosophers, with links to sources.

Actually in one case this "forum" could benefit from the help of professional philosophers, as the founder Eliezer Yudkowsky especially asks for help on this problem:

I don't feel I have a satisfactory resolution as yet, so I'm throwing it open to any analytic philosophers...

I think that if you show that professional philosophy can dissolve that problem then people here would be impressed.

Comment author: Vladimir_Nesov 30 March 2011 10:51:20AM 3 points [-]

Incidentally I've tried to get very smart colleagues in decision theory to take the TDT/UDT material seriously, but the lack of a really clear statement of these ideas seems to get in the way.

Do you know about the TDT paper?

Comment author: radical_negative_one 30 March 2011 06:23:59AM *  1 point [-]

Incidentally I've tried to get very smart colleagues in decision theory to take the TDT/UDT material seriously, but the lack of a really clear statement of these ideas seems to get in the way.

Just in case you haven't seen it, here is Eliezer's Timeless Decision Theory paper. It's over a hundred pages, so I'd hope that it represents a "clear statement". (Although I can't personally comment on anything in it because I don't currently have time to read it.)

Comment author: djc 30 March 2011 06:45:48AM 24 points [-]

That's the one. I sent it to five of the world's leading decision theorists. Those who I heard back from clearly hadn't grasped the main idea. Given the people involved, I think this indicates that the paper isn't a sufficiently clear statement.

Comment author: [deleted] 30 March 2011 06:31:40AM 6 points [-]

It's somewhat painful to read. I've tried to read it in the past and my eyes get sore after the first twenty pages.

Doing the math, I realize it's probably irrational for Yudkowsky-san to spend time learning LaTeX or some other serious typesetting system, but I can dream, right?

Comment author: lukeprog 13 July 2012 04:58:53AM 11 points [-]

Your dream has come true.

Comment author: gmpalmer 10 December 2012 02:23:43PM *  0 points [-]

I hope this is corrected later in the paper and my apologies if this is a stupid question but could you please explain how the example of gum chewing and abscesses makes sense?

That is, in the explanation you are making your decision based on evidence. Indeed, you'd be happy--or anyone would be happy--to hear you're chewing gum once the results of the second study are known. How is that causal and not evidential?

I see later in the paper that gum chewing is evidence for the CGTA gene but that doesn't make any sense. You can't change whether or not you have the gene and the gum chewing is better for you at any rate. Still confused about the value of the gum chewing example.

Comment author: [deleted] 13 July 2012 05:41:24AM 2 points [-]

Happiness is too general a term to express my current state of mind.

May the karma flow through you like so many grains of sand through a sieve.

Comment author: wedrifid 13 July 2012 05:50:06AM 0 points [-]

May the karma flow through you like so many grains of sand through a sieve.

Not quite sure how this one works. Usually I associate sieve with "leaking like a sieve", generally a bad thing---do you want all his karma to be assassinated away as fast as it comes?

Comment author: [deleted] 13 July 2012 06:03:17AM 2 points [-]

Oh, no. Lukeprog is the sieve, and the grains of sand are whatever fraction of a hedon he gets from being upvoted.

Comment author: RichardKennaway 30 March 2011 10:14:56AM *  5 points [-]

The LaTeX to format a document like that can be learnt in an hour or two with no previous experience, assuming at least basic technically-minded smarts.
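For readers who haven't seen it, the whole scaffolding of such a document amounts to a handful of commands. A minimal sketch (hypothetical title and content, not the actual paper's source):

```latex
\documentclass[11pt]{article}
\usepackage{amsmath,amssymb} % displayed equations and blackboard-bold symbols

\title{A Decision Theory Paper}
\author{An Author}
\date{}

\begin{document}
\maketitle

\section{Introduction}
Body text goes here. A displayed equation is simply
\begin{equation}
  \mathbb{E}[U \mid a] = \sum_{o} P(o \mid a)\, U(o).
\end{equation}

\end{document}
```

Running `pdflatex` on a file like this produces a typeset PDF; most journal-style formatting is a matter of swapping the document class.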

Comment author: rhollerith_dot_com 30 March 2011 12:10:42PM *  5 points [-]

The LaTeX to format a document like that can be learnt in an hour or two

And the learning (and formatting of the document) does not have to be done by the author of the document.

Comment author: lukeprog 30 March 2011 05:32:31AM 15 points [-]

Yes, this is one reason I'm campaigning to have LW / SIAI / Yudkowsky ideas written in standard form!

Comment author: Liosis 30 March 2011 05:11:52AM 0 points [-]

The philosophers I study under criticise the sciences for not being rigorous enough. The problem goes both ways. The sciences often do not understand the basic concepts from which they are functioning. A good scientist will also have a rudimentary understanding of philosophy, in order to fiddle with the background epistemology of their work.

You are correct in thinking that Continental philosophy is not continuous with the sciences, because it is the core of the humanities and as such being continuous with the sciences would be unnatural for it. I still think that asking questions about our connection to existence is interesting and important, although I personally do not find Continental philosophy as potentially fruitful as Analytic.

Intuitions are by no means accepted within the discipline as a whole, and are also an interesting topic of debate within it. Because philosophy is a highly speculative discipline it isn't going to be following a normal scientific model, but instead will model constant discovery. If you want to see where science connects up with philosophy what you should look at is the disciplines that end up coming out of philosophy as questions that can be answered scientifically. This is what we produce with regard to science.

Philosophy is the core of the academic disciplines. It isn't in the business of scientific inquiry and it should not be. Some philosophers are still looking for universal truths after all. Simply disagreeing with the idea of a priori does not make it go away.

It is good that you recognise there are problems in philosophy. Too many people take it as dogma and do not question the area they have explored. My advice is to take what you can from the discipline while keeping in mind that every piece you take comes with a centuries-long dialogue.

Comment author: Emile 30 March 2011 03:23:50PM 4 points [-]

This doesn't do much to convince me; for example in these bits you could substitute "philosophy" with "theology", and it would sound the same:

Because philosophy is a highly speculative discipline it isn't going to be following a normal scientific model, but instead will model constant discovery.

[...] It isn't in the business of scientific inquiry and it should not be. Some philosophers are still looking for universal truths after all. Simply disagreeing with the idea of a priori does not make it go away.

It is good that you recognise there are problems in philosophy. Too many people take it as dogma and do not question the area they have explored. My advice is to take what you can from the discipline while keeping in mind that every piece you take comes with a centuries-long dialogue.

The bit about "take what you can" and "every piece comes with a centuries long dialogue" especially could be said of a lot of things (law, for example) and it's not clear why those are good things in themselves.

Comment author: Eliezer_Yudkowsky 30 March 2011 05:19:23AM 9 points [-]

The philosophers I study under criticise the sciences for not being rigorous enough.

Acid test 1: Are they complaining about experimenters using arbitrary subjective "statistical significance" measures instead of Bayesian likelihood functions?

Acid test 2: Are they chiding physicists for not decisively discarding single-world interpretations of quantum mechanics?

Acid test 3: Are all of their own journals open-access?

It may be ad hominem tu quoque, but any discipline that doesn't pass the three acid tests has not impressed me with its superiority to our modern, massively flawed academic science.

Comment author: quen_tin 30 March 2011 04:17:00PM 2 points [-]

Acid test (1) and (2): this is where dogma starts.

Comment author: Broggly 05 April 2011 12:48:02AM 0 points [-]

I get the problem with (2), although mostly because I haven't thought about quantum mechanics enough to have an opinion, but (1) is no more dogma than "DNA is transcribed to mRNA which is then translated as an amino acid sequence". There are lots of good reasons to investigate the actual likelihood of the null and alternative hypotheses rather than just assuming it's about 95% likely that it isn't all just a coincidence. Of course, until this becomes fairly standard, doing so would mean turning your paper into a meta-analysis as well as the actual experiment, which is probably hard work and fairly boring.
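Broggly's point about comparing the actual likelihoods of the null and alternative hypotheses, rather than checking only a significance threshold, can be sketched with a toy binomial example (all numbers here are hypothetical, purely for illustration):

```python
from math import comb

def binom_likelihood(k, n, p):
    """Probability of seeing k successes in n trials if the success rate is p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Hypothetical data: 60 heads out of 100 coin flips.
k, n = 60, 100

l_null = binom_likelihood(k, n, 0.5)  # null hypothesis: fair coin
l_alt  = binom_likelihood(k, n, 0.6)  # one specific alternative: 60% bias

# The Bayes factor compares how well each hypothesis predicted the data,
# rather than asking only whether the data look "surprising" under the null.
bayes_factor = l_alt / l_null
print(round(bayes_factor, 1))  # 7.5: modest evidence favoring the alternative
```

Unlike a bare p-value, the Bayes factor directly says how much better one hypothesis predicted the data than the other; here the 60%-bias hypothesis is favored by a factor of roughly 7.5.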

Comment author: Will_Newsome 30 March 2011 04:11:46PM *  1 point [-]

Acid test 2: Are they chiding physicists for not decisively discarding single-world interpretations of quantum mechanics?

ETA: The following comment is mostly off-base due to the reason pointed out in JGWeissman's reply. Mea culpa.

Ugh, it's not like many worlds is even the most elegant interpretation: http://arxiv.org/abs/1008.1066 . Talk of MWI is kind of misleading if people haven't already thought about spatially infinite universes for more than 5 minutes, which they mostly haven't.

I realize that world-eater supporters are almost definitely wrong, but I'm really suspicious of putting people into the irrational bin because they've failed according to a metric that is knowably fundamentally flawed. I doubt the utility lost via setting a precedent (even if you're damn well sure they're wrong in this case) of actually figuring out ways a person could have fundamentally correct epistemology is more than the utility lost by disregarding everyone and going all Only Sane Man. But my experience is with SIAI and not SL4. Maybe I'd think differently if I was Quirrell.

Comment author: JGWeissman 30 March 2011 09:20:52PM 5 points [-]

Ugh, it's not like many worlds is even the most elegant interpretation:

The proposed theory does not seem to be an alternative to MW QM so much as a possible answer to "What adds up to MW QM?". In this light, does pushing MW over Collapse really warrant an "ugh" response?

Comment author: jimrandomh 30 March 2011 04:07:27PM *  6 points [-]

(2) appears to reject any discipline that ignores quantum mechanics entirely, or which pays attention to quantum mechanics but whose practitioners consider themselves too confused about it to challenge the consensus position.

(3) appears to reject almost all of academia. In particular, it rejects disciplines stuck at the common equilibrium of closed-access journals combined with authors publishing the same articles on their own web pages.

Comment author: RichardChappell 30 March 2011 01:19:01AM *  4 points [-]

philosophers are "spectacularly bad" at understanding that their intuitions are generated by cognitive algorithms.

What makes you think this? It's true that many philosophers recognize the genetic fallacy, and hence don't take "you judge that P because of some fact about your brain" to necessarily undermine their judgment. But it's ludicrously uncharitable to interpret this principled epistemological disagreement as a mere factual misunderstanding.

Again: We can agree on all the facts about how human psychology works. What we disagree about (some of us, anyway -- there's much dispute here within philosophy too, as seen e.g. if you browse the archives of the Arche methodology weblog) is the epistemological implications.

Similar objections apply to the claim that "Most philosophers don't understand the basics... that people are made of atoms and intuitions don't trump science." Are you serious?

Comment author: Eliezer_Yudkowsky 30 March 2011 02:47:33AM 3 points [-]

Richard, I'm pretty sure I remember you treating the apparent conceivability of zombies as a primary fact about the conceivability of zombies to which you have direct access, rather than treating it as an output of some cognitive algorithm in your brain and asking what sort of thought process might have produced it.

Comment author: CuSithBell 22 April 2011 02:14:05PM 3 points [-]

It seems like some people are using "conceivable" to mean "imaginable at some resolution", and some to mean "coherently imaginable at any resolution", or something. By which I mean, the first group would say that they could conceive of "America lost the Revolutionary War" or "heavier objects fall faster" or "we are composed of sentient superstrings, and the properties of matter are their tiny, tiny emotions" or "the president has been kidnapped by ninjas"; whereas the second group would say these things are not conceivable.

As a result, group A wouldn't really consider the conceivability of p-zombies as evidence of their possibility (well, it'd technically be extremely weak evidence), whereas group B would consider the problem of the conceivability of p-zombies as essentially equivalent to the actuality of p-zombies. (There may be other groups, such as those who think "If it's imaginable, then it's coherent," but based on my brief glance the discussion hasn't actually made it that far yet.)

Is this right? I'd think the whole thing could be resolved if you taboo'd "conceivable"...?

Comment author: Peterdjones 22 April 2011 02:30:38PM *  0 points [-]

Talking about "the" possibility of p-zombies is pretty pointless, because of the important difference between logical and physical impossibility. Even Chalmers thinks PZs are physically/naturally impossible.

I don't think the coherent/incoherent distinction you are making is clear. Of course, in a universe where everything is exactly the same, heavier objects would not fall faster in vacuo. But then we understand gravity and acceleration, so we can say what the contradictions would be. We don't understand what the contradictions would be in the case of p-zombies, because we don't have the psychophysical laws. Physicalism is Not An Explanation.

Comment author: CuSithBell 22 April 2011 03:30:58PM 0 points [-]

By 'coherent', I mean something like 'consistent' (to make an analogy to logic) - given all our observations, and extrapolating the concept as needed, there are no contradictions. "Heavier objects fall faster" leads to contradictions pretty quickly. Some people believe that "p-zombies are possible" (in some sense, which might match up with what you mean by either logical or physical) also leads to a contradiction, though we of course don't understand the laws that would cause this.

This is beside the point! I'm not arguing for or against p-zombies (here), I'm saying I think the people in this argument are talking past each other because they have diverging definitions.

Comment author: Peterdjones 22 April 2011 03:41:39PM *  -1 points [-]

"Heavier objects fall faster" leads to contradictions with a theory.

If we don't know the laws that would contradict p-zombies, there is no see-able contradiction in them, and conceivability=logical possibility follows.

Comment author: CuSithBell 22 April 2011 04:02:00PM 0 points [-]

"Heavier objects fall faster" is imaginable at a particular resolution. Once you ask, say, "what happens if you glue two stones together?", it contradicts more deeply-held notions, and the concept falls apart at that resolution.

Some people believe that p-zombies are incoherent if analyzed sufficiently, or expect that they necessitate a severe contradiction of much more deeply-held beliefs.

Moreover, it is possible to hold that we don't know the laws that would contradict p-zombies but that they are nevertheless contradicted - as it is possible to hold that things should not fall up without knowing the laws of gravitation (leaving aside that some things do fall up).

Do you disagree with my central assertion, or just my definition of coherence?

Comment author: Peterdjones 22 April 2011 04:08:13PM *  0 points [-]

The stone-gluing can be worked around with auxiliary laws. To assume those laws are absent is to assume some other laws.

People can believe what they like. If you are going to stake a claim that there is a literal self-contradiction in p-zombies, you need to say what it is. However, most cases of alleged self-contradiction turn out to be contradiction with unexamined background assumptions--laws, again. Talk of "resolution" is misleading: this is cognitive, not pictorial.

It is in fact the philosopher's point that p-zombies are really, for unknown reasons, impossible. They are not arguing zombies in order to argue zombies! Non-philosophers keep misunderstanding that.

Comment author: CuSithBell 22 April 2011 04:53:18PM 0 points [-]

So, ah, just the latter then?

That's all right, and I admit it's a fuzzy term. But if you want to make any progress, I suggest you consider the former point instead.

Comment author: Peterdjones 22 April 2011 01:58:08PM 0 points [-]

That's a difference that doesn't make a difference. That I can (not) conceive of p-zombies can only mean that my cognitive processes produce a certain output. Whether it is somehow a mistaken output is another matter entirely.

Comment author: RichardChappell 30 March 2011 03:08:32AM *  0 points [-]

Distinguish two questions: (1) Are zombies logically coherent / conceivable? (2) What cognitive processes make it seem plausible that the answer to Q1 is 'yes'?

I'm fully aware that one can ask the second, cogsci question. But I don't believe that cogsci answers the first question.

Comment author: [deleted] 30 March 2011 07:29:36AM *  5 points [-]

It's hard to be sure that I'm using the right words, but I am inclined to say that it's actually the connection between epistemic conceivability and metaphysical possibility that I have trouble with. To illustrate the difference as I understand it, someone who does not know better can epistemically conceive that H2O is not water, but nevertheless it is metaphysically impossible that H2O is not water. I am not confident I know the meanings of the philosophical terms of the preceding comment, but employing mathematics-based meanings of the words "logic" and "coherent", then it is perfectly logically coherent for someone who happens to be ignorant of the truth to conceive that H2O is not water, but this of course tells us very little of any significant interest about the world. It is logically coherent because try as he might, there is no way for someone ignorant of the facts to purely logically derive a contradiction from the claim that H2O is not water, and therefore reveal any logical incoherence in the claim. To my way of understanding the words, there simply is no logical incoherence in a claim considered against the background of your (incomplete) knowledge unless you can logically deduce a contradiction from inside the bounds of your own knowledge. But that's simply not a very interesting fact if what you're interested in is not the limitations of logic or of your knowledge but rather the nature of the world.

I know Chalmers tries to bridge the gap between epistemic conceivability and metaphysical possibility in some way, but at critical points in his argument (particularly right around where he claims to "rescue" the zombie argument and brings up "panprotopsychism") he loses me.

Comment author: RichardChappell 30 March 2011 04:34:51PM 0 points [-]

Aleph basically has it right in his reply: 'water' is a special case because it's a rigid designator, picking out the actual watery stuff in all counterfactual worlds (even when some other stuff, XYZ, is the watery stuff instead of our water).

Conceiving of the "twin earth" world (where the watery stuff isn't H2O) is indeed informative, since if this really is a coherent scenario then there really is a metaphysically possible world where the watery stuff isn't H2O. It happens that we shouldn't call that stuff "water", if it differs from the watery stuff in our world, but that's mere semantics. The reality is that there is a possible world corresponding to the one we're (coherently) conceiving of.

For more detail, see Misusing Kripke; Misdescribing Worlds, or my undergrad thesis on Modal Rationalism

Comment author: AlephNeil 30 March 2011 10:04:25AM *  6 points [-]

My view on this question is similar to that of Eric Marcus (pdf).

When you think you're imagining a p-zombie, all that's happening is that you're imagining an ordinary person and neglecting to imagine their experiences, rather than (impossibly) imagining the absence of any experience. (You can tell yourself "this person has no experiences" and then it will be true in your model that HasNoExperiences(ThisPerson) but there's no necessary reason why a predicate called "HasNoExperiences" must track whether or not people have experiences.)
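The point about the predicate can be made concrete with a toy sketch in Python (all names here are hypothetical, chosen only for illustration): a label stored in a model is just data, and nothing forces it to track the property it is named after.

```python
# Toy sketch: a model's labels need not track the properties they name.
class Person:
    def __init__(self, has_experiences: bool):
        self.has_experiences = has_experiences  # the "real" property

class MentalModel:
    """Stores labels about the world; nothing checks them against reality."""
    def __init__(self):
        self.labels = {}

    def assert_fact(self, name, value):
        self.labels[name] = value  # internally consistent, externally unchecked

alice = Person(has_experiences=True)
model = MentalModel()
model.assert_fact("HasNoExperiences(Alice)", True)

# The model "coherently" contains HasNoExperiences(Alice) = True,
# even though Alice, the person being modeled, has experiences.
print(model.labels["HasNoExperiences(Alice)"], alice.has_experiences)  # True True
```

The model stays internally consistent either way, which is exactly why its verdict carries no necessary information about the modeled person.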

Here, I think, is how Chalmers might drive a wedge between the zombie example and the "water = H2O" example:

Imagine that we're prescientific people familiar with a water-like substance by its everyday properties. Suppose we're shown two theories of chemistry - the correct one under which water is H2O and another under which it's "XYZ" - but as yet have no way of empirically distinguishing them. Then when we epistemically conceive of water being XYZ, we have a coherent picture in our minds of 'that wet stuff we all know' turning out to be XYZ. It isn't water, but it's still wet.

To epistemically but not metaphysically conceive of p-zombies would be to imagine a scenario where some physically normal people lack 'that first-person experience thing we all know' and yet turn out to be conscious after all. But whereas there's a semantic gap between "wet stuff" and "real water" (such that only the latter is necessarily H2O), there doesn't seem to be any semantic gap between "that first-person thing" and "real consciousness". Consciousness just is that first-person thing.

Perhaps you can hear the sound of some hairs being split. I don't think we have much difference of opinion, it's just that the idea of "conceiving of something" is woolly and incapable of precision.

Comment author: RichardChappell 30 March 2011 04:41:16PM 1 point [-]

When you think you're imagining a p-zombie, all that's happening is that you're imagining an ordinary person and neglecting to imagine their experiences, rather than (impossibly) imagining the absence of any experience. (You can tell yourself "this person has no experiences" and then it will be true in your model that HasNoExperiences(ThisPerson) but there's no necessary reason why a predicate called "HasNoExperiences" must track whether or not people have experiences.)

This is an interesting proposal, but we might ask why, if consciousness is not really distinct from the physical properties, is it so easy to imagine the physical properties without imagining consciousness? It's not like we can imagine a microphysical duplicate of our world that's lacking chairs. Once we've imagined the atoms-arranged-chairwise, that's all it is to be a chair. It's analytic. But there's no such conceptual connection between neurons-instantiating-computations and consciousness, which arguably precludes identifying the two.

Comment author: Peterdjones 22 April 2011 02:15:22PM *  0 points [-]

Or in simpler terms: we can't see how particular physics produces particular consciousness, even if we accept in general that physics produces consciousness. The conceivability of p-zombies doesn't mean they are really possible, or that physicalism is false, but it does mean that our explanations are inadequate. Reductivism is not, as it stands, an explanation of consciousness, but only a proposal of the form an explanation would have.

Comment author: quen_tin 30 March 2011 08:07:14PM 2 points [-]

Once we've imagined the atoms-arranged-chairwise, that's all it is to be a chair. It's analytic. But there's no such conceptual connection between neurons-instantiating-computations and consciousness, which arguably precludes identifying the two.

That's true. The difference between chairs and consciousness is that chair is a 3rd person concept, whereas consciousness is a 1st person concept. Imagining a world without consciousness is easy, because we never know if there are consciousnesses or not in the world - consciousness is not empirical data, it's something we speculate others have by analogy with ourselves.

Comment author: [deleted] 30 March 2011 07:13:13PM 9 points [-]

This is an interesting proposal, but we might ask why, if consciousness is not really distinct from the physical properties, is it so easy to imagine the physical properties without imagining consciousness? It's not like we can imagine a microphysical duplicate of our world that's lacking chairs.

But these kinds of imagining are importantly dissimilar. Compare:

1) imagine the physical properties without imagining consciousness

2) imagine a microphysical duplicate of our world that's lacking chairs

The key phrases are: "without imagining" and "that's lacking". It is one thing to imagine one thing without imagining another, and quite another to imagine one thing that's lacking another. For example, I can imagine a ball without imagining its color (indeed, as experiments have shown, we can see a ball without seeing its color), but I may not be able to imagine a ball that's lacking color.

This is no small distinction.

To bring (2) into line with (1) we would need to change it to this:

2a) imagine a microphysical duplicate of our world without imagining chairs

And this, I submit, is possible. In fact it is possible not only to imagine a physical duplicate of our world without imagining chairs, it is (in parallel to the ball example above) possible to see a duplicate of our world (namely the world itself) without seeing (i.e. perceiving, recognizing) chairs. It's a regular occurrence that we fail to see (to recognize) what's right in front of us in plain view. It is furthermore possible for a creature like Laplace's Demon to imagine every particle in the universe and all their relations to each other without recognizing, in its own imagined picture, that a certain group of particles make up a chair, etc. The Demon can in other words fail to see the forest for the trees in its own imagined world.

Now, if instead of changing (2) to bring it into line with (1), we change (1) to bring it into line with (2), we get:

1a) imagine a microphysical duplicate of our world that's lacking consciousness

Now, your reason for denying (2) was:

Once we've imagined the atoms-arranged-chairwise, that's all it is to be a chair.

Converting this, we have the following proposition:

Once we've imagined the atoms-arranged-personwise, that's all it is to be a person.

But this seems to be nothing other than the issue in question, namely, the issue of whether there is anything more to being a person than atoms-arranged-personwise. If you assume that there is, then you are assuming the possibility of philosophical zombies. In other words this particular piece in the argument for the possibility of philosophical zombies assumes the possibility of philosophical zombies.

Comment author: RichardChappell 30 March 2011 10:32:13PM 3 points [-]

we get: 1a) imagine a microphysical duplicate of our world that's lacking consciousness

Right, that's the claim. I explain why I don't think it's question-begging here and here

Comment author: pjeby 31 March 2011 01:36:31AM *  5 points [-]

we get: 1a) imagine a microphysical duplicate of our world that's lacking consciousness Right, that's the claim. I explain why I don't think it's question-begging here and here

How can you perform that step unless you've first defined consciousness as something that's other-than-physical?

If the "consciousness" to be imagined were something we could point to and measure, then it would be a physical property, and would thus be duplicated in our imagining. Conversely, if it is not something that we can point to and measure, then where does it exist, except in our imagination?

The logical error in the zombie argument comes from failing to realize that the mental models we build in our minds do not include a term for the mind that is building the model. When I think, "Richard is conscious", I am describing a property of my map of the world, not a property of the world. "Conscious" is a label that I apply, to describe a collection of physical properties.

If I choose to then imagine that "Zombie Richard is not conscious", then I am saying, "Zombie Richard has all the same properties, but is not conscious." I can imagine this in a non-contradictory way, because "conscious" is just a label in my brain, which I can choose to apply or not apply.

All this is fine so far, until I try to apply the results of this model to the outside world, which contains no label "conscious" in the first place. The label "conscious" (like "sound" in the famous tree-forest-hearing question) is strictly something tacked on to the physical events to describe a common grouping.

In other words, my in-brain model is richer than the physical world - I can imagine things that do not correspond to the world, without contradiction in that more-expressive model.

For example, I can label Charlie Sheen as "brilliant" or "lunatic", and ascribe these properties to the exact same behaviors. I can imagine a world in which he is a genius, and one in which he is an idiot, and yet, he remains exactly the same and does the same things. I can do this because it's just my label -- my opinion -- that changes from one world to the other.

The zombie world is no different: in one world, you have the opinion that I'm conscious, and in the other, you have the opinion that I'm not. It's your failure to notice that "conscious" is an opinion or judgment -- specifically, your opinion or judgment -- that makes it appear as though it is proving something more profound than the proposition that people can hold contradictory opinions about the same thing.

If you map the argument from your imagination to the real world, then you can imagine/opine that people are conscious or zombies, while the physical world remains the same. This isn't contradictory, because it's just an opinion, and you can change your opinion whenever you like.

The reason the zombie world doesn't then work as an argument for non-materialism is that it cheats by dropping out the part where the person doing the experiment is the one holding the opinion of consciousness. In your imagined world, you are implicitly holding the opinion; then, when you switch to thinking about the real world, you ignore that it's still just you, holding an opinion about something.

Comment author: komponisto 30 March 2011 05:21:12PM 5 points [-]

we might ask why, if consciousness is not really distinct from the physical properties, is it so easy to imagine the physical properties without imagining consciousness?

And that is a question of cognitive science, is it not?

Comment author: RichardChappell 30 March 2011 10:19:52PM 0 points [-]

Ha, indeed, poorly worded on my part :-)

Comment author: SilasBarta 30 March 2011 10:24:41PM 1 point [-]

What was poor about it? The rest of your point is consistent with that wording. What would you put there instead so as to make your point more plausible?

Comment author: RichardChappell 30 March 2011 10:48:37PM *  1 point [-]

Good question. It really needed to be stated in more objective terms (which will make the claim less plausible to you, but more logically relevant):

It's a fact that a scenario containing a microphysical duplicate of our world but lacking chairs is incoherent. It's not a fact that the zombie world is incoherent. (I know, we dispute this, but I'm just explaining my view here.)

With the talk of what's easily imaginable, I invite the reader to occupy my dialectical perspective, and thus to grasp the (putative) fact under dispute; but I certainly don't think that anything I'm saying here forces you to take my position seriously. (I agree, for example, that the psychological facts are not sufficient justification.)

Comment author: TheOtherDave 30 March 2011 05:01:45PM 8 points [-]

If you met someone who said with a straight face "Of course I can imagine something that is physically identical to a chair, but lacks the fundamental chairness that chairs in our experience partake of... and is therefore merely a fake chair, although it will pass all our physical tests of being-a-chair nevertheless," would you consider that claim sufficient evidence for the existence of a non-physical chairness?

Or would you consider other explanations for that claim more likely?

Would you change your mind if a lot of people started making that claim?

Comment author: RichardChappell 30 March 2011 09:29:32PM *  2 points [-]

You misunderstand my position. I don't think that people's claims are evidence for anything.

When I invite people to imagine the zombie world, this is not because once they believe that they can do so, this belief (about their imaginative capabilities) is evidence for anything. Rather, it's the fact that the zombie world is coherently conceivable that is the evidence, and engaging in the appropriate act of imagination is simply a psychological precondition for grasping this evidence.

That's not to say that whenever you believe that you've coherently imagined X, you thereby have the fact that X is coherently conceivable amongst your evidence. For this may not be a fact at all.

(This probably won't make sense to anyone who doesn't know any epistemology. Basically I'm rejecting the dialectical or "neutral" view of evidence. Two participants in a debate may be unable to agree even about what the evidence is, because sometimes whether something qualifies as evidence or not will depend on which of the contending views is actually correct. Which is to reiterate that the disagreement between me and Lukeprog, say, is about epistemological principles, and not any empirical matter of fact.)

Comment author: Tyrrell_McAllister 31 March 2011 08:24:25PM *  1 point [-]

When I invite people to imagine the zombie world, this is not because once they believe that they can do so, this belief (about their imaginative capabilities) is evidence for anything. Rather, it's the fact that the zombie world is coherently conceivable that is the evidence, and engaging in the appropriate act of imagination is simply a psychological precondition for grasping this evidence.

If someone were to claim the following, would they be making the same point as you are making?

"The non-psychological fact that 'SS0 + SS0 = SSSS0' is a theorem of Peano arithmetic is evidence that 2 added to 2 indeed yields 4. A psychological precondition for grasping this evidence is to go through the process of mentally verifying the steps in a proof of 'SS0 + SS0 = SSSS0' within Peano arithmetic.

"This line of inquiry would provide evidence to the verifier that 2+2 = 4. However, properly speaking, the evidence would not be the psychological fact of the occurrence of this mental verification. Rather, the evidence is the logical fact that 'SS0 + SS0 = SSSS0' is a theorem of Peano arithmetic."
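The Peano example above can be checked mechanically. A minimal sketch in Lean 4 (using a hand-rolled Peano type rather than Lean's built-in Nat, with hypothetical names):

```lean
-- Peano naturals: zero and successor.
inductive PN where
  | zero : PN
  | succ : PN → PN

open PN

-- Addition by recursion on the second argument.
def add : PN → PN → PN
  | m, zero   => m
  | m, succ n => succ (add m n)

-- SS0 + SS0 = SSSS0 holds by pure computation (definitional equality).
example : add (succ (succ zero)) (succ (succ zero))
        = succ (succ (succ (succ zero))) := rfl
```

The proof term is just `rfl`: the theorem is a non-psychological fact about the formal system, even though verifying it requires someone to run through the reduction steps.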

Comment author: TheOtherDave 30 March 2011 09:51:49PM 3 points [-]

I agree that your belief that you've coherently imagined X does not imply that X is coherently conceivable.

I agree that, if it were a fact that the zombie world were coherently conceivable, that could be evidence of something.

I don't understand your reasons for believing that the zombie world is coherently conceivable.

Comment author: RichardChappell 30 March 2011 10:02:00PM 0 points [-]

Are you assuming that in order for me to be able to justifiedly believe and reason from the premise that the zombie world is conceivable, I need to be able to give some independent justification for this belief? That way lies global skepticism.

I can tell you that the belief coheres well with my other beliefs, which is a necessary but not sufficient condition for my being justified in believing it. There's no good reason to think that it's false. (Though again, I don't mean to suggest that this fact suffices to make it reasonable to believe.) Whether it's reasonable to believe depends, in part, on facts that cannot be agreed upon within this dialectic: namely, whether there really is any contradiction in the idea.

Comment author: Alicorn 30 March 2011 09:38:41PM *  0 points [-]

I don't think that people's claims are evidence for anything.

I claim to be wearing blue today.

Comment author: RichardChappell 30 March 2011 09:53:40PM 0 points [-]

It's a restricted quantifier :-)

Comment author: FAWS 30 March 2011 04:58:19PM 12 points [-]

But there's no such conceptual connection between neurons-instantiating-computations and consciousness

Only for people who haven't properly internalized that they are brains. Just like people who haven't internalized that heat is molecular motion could imagine a cold object with molecules vibrating just as fast as in a hot object.

Comment author: RichardChappell 30 March 2011 09:27:18PM 3 points [-]

Distinguish physical coldness from phenomenal coldness. We can imagine phenomenal coldness (i.e. the sensation) being caused by different physical states -- and indeed I think this is metaphysically possible. But what's the analogue of a zombie world in case of physical heat (as defined in terms of its functional role)? We can't coherently imagine such a thing, because physical heat is a functional concept; anything with the same microphysical behaviour as an actual hot (cold) object would thereby be physically hot (cold). Phenomenal consciousness is not a functional concept, which makes all the difference here.

Comment author: FAWS 30 March 2011 09:43:51PM *  5 points [-]

You are simply begging the question. For me philosophical zombies make exactly as much sense as cold objects that behave like hot objects in every way. I can even imagine someone accepting that molecular movement explains all observable heat phenomena, but still confused enough to ask where hot and cold come from, and whether it's metaphysically possible for an object with a lot of molecular movement to be cold anyway. The only important difference between that sort of confusion and the whole philosophical zombie business in my eyes is that heat is a lot simpler so people are far, far less likely to be in that state of confusion.

Comment author: SilasBarta 30 March 2011 10:19:31PM *  2 points [-]

still confused enough to ask ... whether it's metaphysically possible for an object with a lot of molecular movement to be cold anyway.

Not so fast! That is possible, and that was EY's point here:

Suppose there was a glass of water, about which, initially, you knew only that its temperature was 72 degrees. Then, suddenly, Saint Laplace reveals to you the exact locations and velocities of all the atoms in the water. You now know perfectly the state of the water, so, by the information-theoretic definition of entropy, its entropy is zero. Does that make its thermodynamic entropy zero? Is the water colder, because we know more about it?

Ignoring quantumness for the moment, the answer is: Yes! Yes it is!

And then he gave the later example of the flywheel, which we see as cooler than a set of metal atoms with the same velocity profile but which is not constrained to move in a circle:

But the more important point: Suppose you've got an iron flywheel that's spinning very rapidly. That's definitely kinetic energy, so the average kinetic energy per molecule is high. Is it heat? That particular kinetic energy, of a spinning flywheel, doesn't look to you like heat, because you know how to extract most of it as useful work, and leave behind something colder (that is, with less mean kinetic energy per degree of freedom).

Comment author: RichardChappell 30 March 2011 10:09:11PM 2 points [-]

This comment is unclear. I noted that our heat concepts are ambiguous, between what we can call physical heat (as defined by its causal-functional role) and phenomenal heat (the conscious sensations). Now you write:

I can even imagine someone accepting that molecular movement explains all observable heat phenomena, but still confused enough to ask where hot and cold come from...

Which concept of 'hot' and 'cold' are you imagining this person to be employing? If the phenomenal one, then they are (in my view) correct to see a further issue here: this is simply the consciousness debate all over again. If the physical-functional concept, then they are transparently incoherent.

Now, perhaps you are suggesting that you only have a physical-function conception of consciousness, and no essentially first-personal (phenomenal) concepts at all. In that case, we are talking past each other, because you do not have the concepts necessary to understand what I am talking about.

Comment author: Clippy 30 March 2011 04:47:27PM 3 points [-]

This is an interesting proposal, but we might ask why, if consciousness is not really distinct from the physical properties, is it so easy to imagine the physical properties without imagining consciousness?

World-models that are deficient at this aspect of world representation in ape brains.

Comment author: [deleted] 30 March 2011 12:04:48PM 1 point [-]

Thanks, I like the paper. I understand the core idea is that to imagine a zombie (in the relevant sense of imagine) you would have to do it first person - which you can't do, because there is nothing first person to imagine. I find the argument for this persuasive.

And this is just what I have been thinking:

the idea of "conceiving of something" is woolly and incapable of precision.

Comment author: komponisto 30 March 2011 05:01:42AM 7 points [-]

The first question should really be: what does the apparent conceivability of zombies by humans imply about their possibility?

Philosophers on your side of the debate seem to take it for granted (or at least end up believing) that it implies a lot, but those of us on the other side think that the answer to the cogsci question undermines that implication considerably, since it shows how we might think zombies are conceivable even when they are not.

It's been quite a while since I was actively reading philosophy, so maybe you can tell me: are there any reasons to believe zombies are logically possible other than people's intuitions?

Comment author: Peterdjones 22 April 2011 02:00:47PM -2 points [-]

Since "logically possible" just means "conceivable", there doesn't need to be.

Comment author: RichardChappell 30 March 2011 10:16:57PM *  3 points [-]

The first question should really be: what does the apparent conceivability of zombies by humans imply about their possibility?

I'm aware that the LW community believes this, but I think it is incorrect. We have an epistemological dispute here about whether non-psychological facts (e.g. the fact that zombies are coherently conceivable, and not just that it seems so to me) can count as evidence. Which, again, reinforces my point that the disagreement between me and Eliezer/Lukeprog concerns epistemological principles, and not matters of empirical fact.

For more detail, see my response to TheOtherDave downthread.

Comment author: komponisto 31 March 2011 03:04:51AM 6 points [-]

We have an epistemological dispute here about whether non-psychological facts (e.g. the fact that zombies are coherently conceivable, and not just that it seems so to me) can count as evidence

At least around here, "evidence (for X)" is anything which is more likely to be the case under the assumption that X is true than under the assumption that X is false. So if zombies are more likely to be conceivable if non-physicalism is true than if physicalism is true, then I for one am happy to count the conceivability of zombies as evidence for non-physicalism.
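The likelihood comparison in the paragraph above can be sketched in a line of Python (the probabilities below are purely hypothetical, for illustration):

```python
# Toy sketch: "evidence for X" as a likelihood comparison.
def is_evidence_for(p_obs_given_x, p_obs_given_not_x):
    """An observation is evidence for X iff it is more likely given X than not-X."""
    return p_obs_given_x > p_obs_given_not_x

# Hypothetical numbers: suppose zombie-conceivability has probability 0.9
# under non-physicalism but only 0.5 under physicalism.
print(is_evidence_for(0.9, 0.5))  # True: it would count as evidence
```

On this definition everything turns on the conditional probabilities, which is why the dispute below immediately shifts to how we could know the conceivability fact in the first place.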

But again, the question is: how do you know that zombies are conceivable? You say that this is a non-psychological fact; that's fine perhaps, but the only evidence for this fact that I'm aware of is psychological in nature, and this is the very psychological evidence that is undermined by cognitive science. In other words, the chain of inference still seems to be

people think zombies are conceivable => zombies are conceivable => physicalism is false

so that you still ultimately have the "work" being done by people's intuitions.

Comment author: RichardChappell 31 March 2011 03:57:13AM *  0 points [-]

How do you know that "people think zombies are conceivable"? Perhaps you will respond that we can know our own beliefs through introspection, and the inferential chain must stop somewhere. My view is that the relevant chain is merely like so:

zombies are conceivable => physicalism is false

I claim that we may non-inferentially know some non-psychological facts, when our beliefs in said facts meet the conditions for knowledge (exactly what these are is of course controversial, and not something we can settle in this comment thread).

Comment author: komponisto 31 March 2011 04:58:28AM 3 points [-]

I know that people think zombies are conceivable because they say they think zombies are conceivable (including, in some cases, saying "zombies are conceivable").

To say that we may "non-inferentially know" something appears to violate the principle that beliefs require justification in order to be rational. By removing "people think zombies are conceivable", you've made the argument weaker rather than stronger, because now the proposition "zombies are conceivable" has no support.

In any case, you now seem as eminently vulnerable to Eliezer's original criticism as ever: you indeed appear to think that one can have some sort of "direct access" to the knowledge that zombies are conceivable that bypasses the cognitive processes in your brain. Or have I misunderstood?

Comment author: RichardChappell 31 March 2011 05:21:45PM 0 points [-]

Depending on what you mean by 'direct access', I suspect that you've probably misunderstood. But judging by the relatively low karma levels of my recent comments, going into further detail would not be of sufficient value to the LW community to be worth the time.

Comment author: SilasBarta 31 March 2011 05:32:22PM 2 points [-]

You're still getting voted up on net, despite not explaining how, as you've claimed, the psychological fact of p-zombie plausibility is evidence for it (at least beyond references to long descriptions of your general beliefs).

Comment author: NancyLebovitz 30 March 2011 07:16:55AM 2 points [-]

Thanks for laying this out. I'm one of the people who thinks philosophical zombies don't make sense, and now I understand why-- they seem like insisting that a result is possible while eliminating the process which leads to the result.

This doesn't explain why it's so obvious to me that pz are unfeasible and so obvious to many other people that pz at least make enough sense to be a basis for argument. Does the belief or non-belief in pz correlate with anything else?

Comment author: Peterdjones 22 April 2011 02:11:52PM 0 points [-]

Since no physical law is logically necessary, it is always logically possible that an effect could fail to follow from a cause.

Comment author: [deleted] 30 March 2011 02:59:20AM 2 points [-]

Can you make the connection between Richard's comment and yours clearer?

Comment author: lukeprog 30 March 2011 01:44:30AM *  7 points [-]

Richard Chappell,

Of course, you know how intuitions are generally used in mainstream philosophy, and why I think most such arguments are undermined by facts about where our intuitions come from, which undermine the epistemic usefulness of those intuitions. (So does the cross-checking problem.)

I'll break the last part into two bits:

What I'm saying with the 'people are made of atoms' bit is that it looks like a slight majority of philosophers may now think that there is at least a component of a person that is not made of atoms - usually consciousness.

As for intuitions trumping science, that was unclear. What I mean is that, in my view, philosophers still often take their intuitions to be more powerful evidence than the trends of science (e.g. reductionism) - and again I can point to this example.

I'm sure this post must have been highly annoying to a pro such as yourself, and I appreciate the cordial tone of your reply.

Comment author: ohwilleke 31 March 2011 01:40:36AM 1 point [-]

It seems to me that philosophy is most important for refining mere intuitions and bumbling around until we find a rigorous way of posing the questions that are associated with those intuitions. Once you have a well posed question, any old scientist can answer it.

But philosophy is necessary to turn the undifferentiated mass of unprocessed data and potential ideas into something that is susceptible to being examined.

Rationality is all fine and good, but reason applies known facts and axioms with accepted logical relationships to reach conclusions.

The importance of hypothesis generation is much underappreciated by scientists, but critical to the enterprise, and to generate a hypothesis, one needs intuition as much as reason.

Genius, meanwhile, comes from being able to intuitively generate a hypothesis that nobody else would, breaking the mold of others' intuitions, and building new conceptual structures from which to generate novel intuitive hypotheses - and eventually to formulate the conceptual structure well enough that it can be turned over to the rationalists.

Comment author: RichardChappell 30 March 2011 04:58:00PM 0 points [-]

As for intuitions trumping science, that was unclear. What I mean is that, in my view, philosophers still often take their intuitions to be more powerful evidence than the trends of science (e.g. reductionism) - and again I can point to this example.

Ah, you mean capital-S 'Science', as opposed to just the empirical data. One might have a view compatible with all the scientific data without buying in to the ideological picture that we can't use non-empirical methods (viz. philosophy) when investigating non-empirical questions.

Comment author: lukeprog 30 March 2011 07:25:50PM 3 points [-]

Non-empirical questions like... what? Mathematical questions?

Comment author: RichardChappell 30 March 2011 09:48:34PM *  0 points [-]

Like, whether phenomenal properties just are certain physical/functional properties, or whether the two are merely nomologically co-extensive (going together in all worlds with the same natural laws as our own). This is obviously neither mathematical nor empirical. Similarly with normative questions: what's a reasonable credence to have given such-and-such evidence, etc.

See: Overcoming Scientism

Comment author: jhuffman 30 March 2011 02:40:56PM *  2 points [-]

As for intuitions trumping science, that was unclear. What I mean is that, in my view, philosophers still often take their intuitions to be more powerful evidence than the trends of science (e.g. reductionism) - and again I can point to this example.

The comments on your linked article really do a good job of demonstrating the enormous gulf between many philosophical thinkers and the LW community. I especially enjoyed the comments about how physicalism always triumphs because it expands to include each new strange idea. So, the dualists understand that their beliefs are not based on evidence, and in fact they sneer at evidence as if it's a form of cheating.

Sorry but I do not think this patient can be saved.

Comment author: Jonathan_Graehl 30 March 2011 07:40:00PM 1 point [-]

Which comments do you agree or disagree with?

What is the patient? LW? Many-philosophers? The idea of LW-contributing-to-philosophy (or conversely)?

Comment author: mtraven 29 March 2011 04:57:18PM *  4 points [-]

A few points:

  • Philosophy is (by definition, more or less) meta to everything else. By its nature, it has to question everything, including things that here seem to be unquestionable, such as rationality and reductionism. The elevation of these into unquestionable dogma creates a somewhat cult-like environment.

  • Often people who dismiss philosophy end up going over the same ground philosophers trode hundreds or thousands of years ago. That's one reason philosophers emphasize the history of ideas so much. It's probably a mistake to think you are so smart you will avoid all the pitfalls they've already fallen into.

  • I agree with the linked post of Eliezer's that much of analytic philosophy (and AI) is mostly just slapping formal terms over unexamined everyday ideas, which is why most of it bores me to tears.

  • Continental philosophy, on the other hand, if you can manage to make sense of it, actually can provide new perspectives on the world, and in that sense is worthwhile. Don't assume that just because you can't understand it, it doesn't have anything to say. Complaining because they use what seems like an impenetrable language is about on the level of an American traveling to Europe and complaining that the people there don't speak English. That said, Sturgeon's law definitely applies, perhaps at the 99% level.

  • I'm recommending Bruno Latour to everyone these days. He's a French sociologist of science and philosopher, and if you can get past the very French style of abstraction he uses, he can be mind-blowing in the manner described above.

Comment author: alfredmacdonald 15 December 2012 08:41:42AM 0 points [-]

Often people who dismiss philosophy end up going over the same ground philosophers trode hundreds or thousands of years ago. That's one reason philosophers emphasize the history of ideas so much. It's probably a mistake to think you are so smart you will avoid all the pitfalls they've already fallen into.

While I agree that it's important to avoid succumbing to these ideas, philosophy curricula tend to emphasize not just the history of ideas but the history of philosophers, which makes the process of getting up to speed with contemporary philosophy take entirely too long. It is not so important that we know what Augustine or Hume thought so much as why, given what we now know, their ideas can't be right.

Also, "the history of ideas" is really broad, because there are a lot of ideas that by today's standards are just absurd. Including the likes of Anaximander and Heraclitus in "the history of ideas" is probably a waste of time and cognitive energy.

Comment author: ohwilleke 31 March 2011 01:56:49AM 2 points [-]

"Often people who dismiss philosophy end up going over the same ground philosophers trode hundreds or thousands of years ago."

Really? When I look at Aquinas or Plato or Aristotle, I see people mostly asking questions that we no longer care about because we have found better ways of dealing with the issues that made those questions worth thinking about.

Scholastic discourse about the Bible or angels makes much less sense when you have a historical-critical context to explain how it emerged in the way that it did, and a canon of contemporaneous secular works to make sense of what was going on in their world at the time.

Philosophical atomism is irrelevant once you've studied modern physics and chemistry.

The notion that we have Platonic a priori knowledge looks pretty silly without a great deal of massaging as we learn more about the mechanism of brain development.

Also, not all new perspectives on the world have value. Continental philosophy and post-modernism are to philosophy what mid-20th century art music is to music composition. It is a rabbit hole that a whole generation of academics got sucked into and wasted their time on. It turned out that the future of worthwhile music was elsewhere, in people like Elvis and the Beatles and rappers and Nashville studios and Motown artists and resurrections of the greats of the classical and romantic periods in new contexts, and the tone poems and dissonant musics and other academic experiments of that era were just garbage. They lost sight of what music was for, just as the continental philosophers and post-modernist philosophers lost sight of what philosophy was for.

The language is impenetrable because they have nothing to say. I know what it is like to read academic literature, for example, in the sciences or economics, that is impenetrable because it is necessarily so, but that isn't it. People who use sophisticated jargon when it is really necessary are also capable of speaking much more clearly about the essence of what is going on - people like Richard Feynman. But, our modern day philosophical sophisticates are known to no one but each other and are not adding to the larger understanding. Instead, all of the other disciplines are busy purging themselves of all that dreck so that they can get back on solid ground.

Comment author: gjm 13 April 2017 10:31:38PM 3 points [-]

mid-20th century art music [...] tone poems and dissonant musics [...] were just garbage

wat?

Here are a few pieces of mid-20th century art music. I'm taking "mid-20th-century" to mean 1930 to 1970. Some of them are quite dissonant. None of them is actually a tone poem, as it happens. They are all pieces that (1) I like, (2) are well regarded by the classical music "establishment", (3) are pretty accessible even to (serious) listeners of fairly conservative taste, (4) are still being performed, recorded, etc., (5) are clearly part of the mainstream of mid-20th-century art music, and (6) seem to me to show no lack of awareness of what music is for.

  • 1930: Stravinsky, Symphony of Psalms
  • 1936: Barber, Adagio for strings
  • 1941: Tippett, A child of our time
  • 1942: Prokofiev, Piano sonata #7
  • 1945: Britten, Peter Grimes
  • 1948: Strauss, Four last songs
  • 1960: Shostakovich: String quartets #7,8
  • 1965: Bernstein, Chichester Psalms

(I make no claim that these are the best or most important works by their composers. I wanted things reasonably well spread out over the period in question, and subject to that picked fairly randomly.)

Are these all garbage? Perhaps you had in mind only music "weirder" than those: Second Viennese School twelve-tone music (though I'd call that early rather than mid 20th century), Cage-style experimentalism, and so forth. I'm not at all convinced that that stuff had no value or influence, but in any case it's far from all that was happening in western art music in the middle of the 20th century.

Comment author: g_pepper 14 April 2017 02:07:39AM 1 point [-]

Great list of 20th century compositions! 20th century art music gets an undeservedly bad rap, IMO. I would add a few more composers:

  • 1930: Kurt Weill: Aufstieg und Fall der Stadt Mahagonny
  • 1935: George Gershwin: Porgy and Bess
  • 1940-1941: Olivier Messiaen: Quatuor pour la fin du temps
  • 1944: Aaron Copland: Appalachian Spring

Kurt Weill's work might be considered theater music rather than art music, but I would argue that it is both of those things. Messiaen is admittedly avant garde and a bit outside of the mainstream, but is approachable by a wide range of audiences, including many who would not care for the composers of the Second Viennese School. Many of Messiaen's compositions could have been added to the list, so I picked one of the best known.

Comment author: gjm 14 April 2017 02:46:57PM 0 points [-]

For what it's worth, I omitted Weill and Gershwin because I thought ohwilleke might not consider them arty enough, Messiaen because I wasn't confident enough ohwilleke would concede that his music sounds good, and Copland because Appalachian Spring was the obvious work to use and I already had enough from around that time :-). Of course I agree that otherwise those works are all worthy of inclusion in any list like mine.

Comment author: TheAncientGeek 08 April 2017 01:59:25PM 1 point [-]

"Often people who dismiss philosophy end up going over the same ground philosophers trode hundreds or thousands of years ago."

Really?

Eg reinventing logical positivism!

Comment author: gjm 13 April 2017 10:02:52PM *  0 points [-]

hundreds or thousands of years ago

reinventing logical positivism

Logical positivism isn't even one hundred years old yet.

Comment author: mtraven 31 March 2011 03:26:01AM -1 points [-]

"Often people who dismiss philosophy end up going over the same ground philosophers trode hundreds or thousands of years ago."

See the paper on the Heideggerian critique of AI I posted earlier.

The notion that we have Platonic a priori knowledge looks pretty silly without a great deal of massaging as we learn more about the mechanism of brain development.

Oh? I would think that one of the lessons of neuroscience is that we are in fact hardwired for a great many things.

The language is impenetrable because they have nothing to say.

How do you know? That is, what evidence other than your lack of understanding do you have for this?

Comment author: jwdink 30 March 2011 09:36:09PM 4 points [-]

Continental philosophy, on the other hand, if you can manage to make sense of it, actually can provide new perspectives on the world, and in that sense is worthwhile. Don't assume that just because you can't understand it, it doesn't have anything to say.

It's not that people coming from the outside don't understand the language. I'm not just frustrated that Hegel uses esoteric terms and writes poorly. (Much the same could be said of Kant, and I love Kant.) It's that, when I ask "hey, okay, if the language is just tough, but there is content to what Hegel/Heidegger/etc is saying, then why don't you give a single example of some hypothetical piece of evidence in the world that would affirm/disprove the putative claim?", I never get an answer. In other words, my accusation isn't that continental philosophy is hard, it's that it makes no claims about the objective hetero-phenomenological world.

Typically, I say this to a Hegelian (or whoever), and they respond that they're not trying to talk about the objective world, perhaps because the objective world is a bankrupt concept. That's fine, I guess-- but are you really willing to go there? Or would you claim that continental philosophy can make meaningful claims about actual phenomena, which can actually be sorted through?

I guess I'm wholeheartedly agreeing with the author's statement:

You will occasionally stumble upon an argument, but it falls prey to magical categories and language confusions and non-natural hypotheses.

Comment author: mtraven 31 March 2011 01:06:33AM *  0 points [-]

I think you are making a category error. If something makes claims about phenomena that can be proved/disproved with evidence in the world, it's science, not philosophy.

So the question is whether philosophy's position as meta to science and everything else can provide utility. I've found it useful, YMMV.

BTW here is the latest round of Heideggerian critique of AI (pdf) which, again, you may or may not find useful.

Comment author: jwdink 31 March 2011 09:37:03PM 0 points [-]

I think you are making a category error. If something makes claims about phenomena that can be proved/disproved with evidence in the world, it's science, not philosophy.

Hmm. I suspect the phrasing "evidence/phenomena in the world" might give my assertion an overly mechanistic sound. I don't mean verifiable/disprovable physical/atomistic facts must be cited - that would be begging the question. I just mean any meaningful argument must make reference to evidence that can be pointed to in support of, or in criticism of, the given argument. Note that "evidence" doesn't exclude "mental phenomena." If we don't ask that philosophy cite evidence, what distinguishes it from meaningless nonsense, or fiction?

I'm trying to write a more thorough response to your statement, but I'm finding it really difficult without the use of an example. Can you cite some claim of Heidegger's or Hegel's that you would assert is meaningful, but does not spring out of an argument based on empirical evidence? Maybe then I can respond more cogently.

Comment author: mtraven 01 April 2011 04:10:42AM 0 points [-]

I'm not at all a fan of Hegel, and Heidegger I don't really understand, but I linked to a paper that describes the interaction of Heideggerian philosophy and AI which might answer your question.

I still think you don't have your categories straight. Philosophy does not make "claims" that are proved or disproved by evidence (although there is a relatively new subfield called "experimental philosophy"). Think of it as providing alternate points of view.

To illustrate: your idea that the only valid utterances are those that are supported by empirical evidence is a philosophy. That philosophy itself can't be supported by empirical evidence; it rests on something else.

Comment author: jwdink 01 April 2011 06:11:55PM 1 point [-]

That philosophy itself can't be supported by empirical evidence; it rests on something else.

Right, and I'm asking you what you think that "something else" is.

I'd also re-assert my challenge to you: if philosophy's arguments don't rest on some evidence of some kind, what distinguishes it from nonsense/fiction?

Comment author: mtraven 02 April 2011 01:19:24AM -1 points [-]

Right, and I'm asking you what you think that "something else" is.

Hell, how would I know? Let's say "thinking" for the sake of argument.

I'd also re-assert my challenge to you: if philosophy's arguments don't rest on some evidence of some kind, what distinguishes it from nonsense/fiction?

People think it makes sense.

"Definitions may be given in this way of any field where a body of definite knowledge exists. But philosophy cannot be so defined. Any definition is controversial and already embodies a philosophic attitude. The only way to find out what philosophy is, is to do philosophy." -- Bertrand Russell

Comment author: jwdink 31 March 2011 09:58:46PM 0 points [-]

Unless you think the "Heideggerian critique of AI" is a good example. In which case I can engage that.

Comment author: lukeprog 29 March 2011 06:16:20PM 5 points [-]

A reply on just one point:

I don't mean to make reductionism unquestionable, I'm just not making reductionism "my battle" so much anymore. Heck, for several years I spent my time arguing about theism. I'm just moving on to other subjects, and taking for granted the non-existence of magical beings, and so on. Like I say in my original post, I'm glad other people are working those out, and of course if I was presented with good reason to believe in magical beings or something, I hope I would have the honesty to update. Nobody's suggesting discrimination or criminal charges for not "believing in" reductionism.

Comment author: curiousepic 29 March 2011 03:24:17PM *  7 points [-]

I posted this on Reddit r/philosophy, if anyone would like to upvote it there.

Comment author: wnewman 29 March 2011 02:33:13PM 2 points [-]

lukeprog wrote "philosophers are 'spectacularly bad' at understanding that their intuitions are generated by cognitive algorithms." I am pretty confident that minds are physical/chemical systems, and that intuitions are generated by cognitive algorithms. (Furthermore, many of the alternatives I know of are so bizarre that given that such an alternative is the true reality of my universe, the conditional probability that rationality or philosophy is going to do me any good seems to be low.) But philosophy as often practiced values questioning everything, and so I don't think it's quite fair to expect philosophers to "understand" this (which I read in this context as synonymous with "take this for granted"). I'd prefer a formulation like "spectacularly bad at seriously addressing [or, perhaps, even properly understanding] the obvious hypothesis that their intuitions are generated by cognitive algorithms." It seems to me that the criticism rewritten in this form remains severe.

Comment author: quen_tin 29 March 2011 02:54:28PM 0 points [-]

I agree, and I really doubt philosophers fail to deeply question their own intuitions.

Comment author: wedrifid 29 March 2011 04:05:20AM *  16 points [-]

Eliezer's anti-philosophy rant Against Modal Logics was pretty controversial, while my recent pro-philosophy (by LW standards) post and my list of useful mainstream philosophy contributions were massively up-voted. This suggests a significant appreciation for mainstream philosophy on Less Wrong - not surprising, since Less Wrong covers so many philosophical topics.

This opening paragraph set off a huge warning klaxon in my bullshit filter. To put it generously, it is heavy on 'spin'. Specifically:

  • It sets up a comparison based on upvotes between a post written in the last month and a post written on a different blog.
  • Luke's post is presented as a contrast to controversy despite being among the most controversial posts to ever appear on the site. This can be measured based on the massive series of replies and counter replies, most of which were heavily upvoted - which is how controversy tends to present itself here. (Not that controversy is a bad thing.)
  • Upvotes for a well written post that contains useful references are equated with support for the agenda that prompted the author to write it.
  • The first 3.5 words were "Eliezer's anti-philosophy rant". Enough said.

All of the above is unfortunate because the remainder of this post was overwhelmingly reasonable and a promise of good things to come.

Comment author: lukeprog 29 March 2011 04:32:19AM *  1 point [-]

Interesting, thanks.

By the way, what is 'the agenda that prompted the author to write it'?

Comment author: lukeprog 29 March 2011 04:59:52AM 6 points [-]

I just realized that 'rant' doesn't have the usual negative connotations for me that it probably does for others. For example, here is my rant about people changing the subject in the middle of an argument.

For the record, the article originally began "Eliezer's anti-philosophy rant..." but I'm going to change that.

Comment author: lessdazed 30 March 2011 10:00:31AM 1 point [-]

It is similar to the word "extremist": the technical definition is rarely all that people mean to invoke, and the word keeps acquiring further connotations.

Losing precise meaning is the way to newspeak, and it distresses me. It is sometimes the result of being uncomfortable with, or incapable of, discussing specific facts, which is harder than taking the inside view.

Comment author: FAWS 29 March 2011 05:26:52AM *  3 points [-]

Rant doesn't necessarily have negative connotations for me either; it really depends on the context. Your usage didn't look pejorative at all to me. It's sort of like a less intense version of "vitriol", and there is no problem (implied) if the target deserves it (or is presented as deserving it).

Comment author: atucker 29 March 2011 03:18:08AM 4 points [-]

Use your rationality training, but avoid language that is unique to Less Wrong. Nearly all these terms and ideas have standard names outside of Less Wrong (though in many cases Less Wrong already uses the standard language).

Could you please write a translation key for these?

I think it would help LWers read mainstream philosophy, and people with philosophy backgrounds read LW.

Comment author: lukeprog 29 March 2011 03:21:06AM *  3 points [-]

Not a bad idea, though it's far more complicated than termX = termY.

Comment author: atucker 29 March 2011 11:07:40PM 1 point [-]

Fair enough.

I think that reading about how the terms differ would actually help a lot with getting a brief background in the subject, more than a direct but inaccurate one to one mapping.

Comment author: ata 29 March 2011 12:10:30AM *  8 points [-]

Format the article attractively. A well-chosen font makes for an easier read. Then publish (in a journal or elsewhere).

I'd add "Learn LaTeX" to this one; if you're publishing in a journal, that matters more than your font preferences and formatting skills (which won't be used in the published version), and if you're publishing online, it can make your paper look like a journal article, which is probably good for status. Even TeX's default Computer Modern font, which I wouldn't call beautiful, has a certain air of authority to it — maybe due to some of its visual qualities, but possibly just by reputation.
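To make the suggestion concrete, here is a minimal sketch of a LaTeX article skeleton of the sort a submission might start from (the title, author, and package choices are illustrative, not any journal's actual requirements; most journals supply their own class file):

```latex
\documentclass[12pt]{article}
\usepackage{amsmath}   % standard math environments and symbols
\usepackage{mathptmx}  % Times-like text and math, if Computer Modern isn't wanted

\title{Why Intuitions Are Weak Evidence}
\author{A. Philosopher}

\begin{document}
\maketitle

\section{Introduction}
Body text goes here; \LaTeX{} handles spacing, section numbering,
footnotes, and bibliographic references automatically.

\end{document}
```

Even this bare-bones version, compiled with pdflatex, produces output that looks far more like a journal article than a word processor's defaults do.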

Comment author: [deleted] 29 March 2011 01:38:14AM *  2 points [-]

The ironic bit is that I don't know a modern philosophy journal that accepts TeX.

EDIT: Minds and Machines, as mentioned below. Also, Mind doesn't.

Comment author: ata 29 March 2011 05:52:53PM 1 point [-]

The ironic bit is that I don't know a modern philosophy journal that accepts TeX.

Hey, I didn't say it wasn't a diseased discipline. :P

Comment author: thomblake 29 March 2011 01:48:54PM 0 points [-]

Last I checked, Minds and Machines requires LaTeX.

Comment author: [deleted] 29 March 2011 02:40:27PM 0 points [-]

Ah, okay. I knew Mind didn't, and now I realize I was generalizing from one example. Oops.