Part of the sequence: Rationality and Philosophy

Eliezer's anti-philosophy post Against Modal Logics was pretty controversial, while my recent pro-philosophy (by LW standards) post and my list of useful mainstream philosophy contributions were massively up-voted. This suggests a significant appreciation for mainstream philosophy on Less Wrong - not surprising, since Less Wrong covers so many philosophical topics.

If you followed the recent very long debate between Eliezer and me over the value of mainstream philosophy, you may have gotten the impression that Eliezer and I strongly diverge on the subject. But I suspect I agree more with Eliezer on the value of mainstream philosophy than I do with many Less Wrong readers - perhaps most.

That might sound odd coming from someone who writes a philosophy blog and spends most of his spare time doing philosophy, so let me explain myself. (Warning: broad generalizations ahead! There are exceptions.)

Failed methods

Large swaths of philosophy (e.g. continental and postmodern philosophy) often don't even try to be clear, rigorous, or scientifically respectable. This is philosophy of the "Uncle Joe's musings on the meaning of life" sort, except that it's dressed up in big words and long footnotes. You will occasionally stumble upon an argument, but it falls prey to magical categories and language confusions and non-natural hypotheses. You may also stumble upon science or math, but they are used to 'prove' things irrelevant to the actual scientific data or the equations used.

Analytic philosophy is clearer, more rigorous, and better with math and science, but only does a slightly better job of avoiding magical categories, language confusions, and non-natural hypotheses. Moreover, its central tool is intuition, and this displays a near-total ignorance of how brains work. As Michael Vassar observes, philosophers are "spectacularly bad" at understanding that their intuitions are generated by cognitive algorithms.

A diseased discipline

What about Quinean naturalists? Many of them at least understand the basics: that things are made of atoms, that many questions don't need to be answered but instead dissolved, that the brain is not an a priori truth factory, that intuitions come from cognitive algorithms, that humans are loaded with bias, that language is full of tricks, and that justification rests in the lens that can see its flaws. Some of them are even Bayesians.

Like I said, a few naturalistic philosophers are doing some useful work. But even naturalistic philosophy has a much lower signal-to-noise ratio than, say, behavioral economics or cognitive neuroscience or artificial intelligence or statistics. Why? Here are some hypotheses, based on my thousands of hours in the literature:

  1. Many philosophers have been infected (often by the later Wittgenstein) with the idea that philosophy is supposed to be useless. If it's useful, then it's science or math or something else, but not philosophy. Michael Bishop says a common complaint from his colleagues about his 2004 book is that it is too useful.
  2. Most philosophers don't understand the basics, so naturalists spend much of their time coming up with new ways to argue that people are made of atoms and intuitions don't trump science. They fight beside the poor atheistic philosophers who keep coming up with new ways to argue that the universe was not created by someone's invisible magical friend.
  3. Philosophy has grown into an abnormally backward-looking discipline. Scientists like to put their work in the context of what old dead guys said, too, but philosophers have a real fetish for it. Even naturalists spend a fair amount of time re-interpreting Hume and Dewey yet again.
  4. Because they were trained in traditional philosophical ideas, arguments, and frames of mind, naturalists will anchor and adjust from traditional philosophy when they make progress, rather than scrapping the whole mess and starting from scratch with a correct understanding of language, physics, and cognitive science. Sometimes, philosophical work is useful to build from: Judea Pearl's triumphant work on causality built on earlier counterfactual accounts of causality from philosophy. Other times, it's best to ignore the past confusions. Eliezer made most of his philosophical progress on his own, in order to solve problems in AI, and only later looked around in philosophy to see which standard position his own theory was most similar to.
  5. Many naturalists aren't trained in cognitive science or AI. Cognitive science is essential because the tool we use to philosophize is the brain, and if you don't know how your tool works then you'll use it poorly. AI is useful because it keeps you honest: you can't write confused concepts or non-natural hypotheses in a programming language.
  6. Mainstream philosophy publishing favors the established positions and arguments. You're more likely to get published if you can write about how intuitions are useless in solving Gettier problems (which is a confused set of non-problems anyway) than if you write about how to make a superintelligent machine preserve its utility function across millions of self-modifications.
  7. Even much of the useful work naturalistic philosophers do is not at the cutting-edge. Chalmers' update for I.J. Good's 'intelligence explosion' argument is the best one-stop summary available, but it doesn't get as far as the Hanson-Yudkowsky AI-Foom debate in 2008 did. Talbot (2009) and Bishop & Trout (2004) provide handy summaries of much of the heuristics and biases literature, just like Eliezer has so usefully done on Less Wrong, but of course this isn't cutting edge. You could always just read it in the primary literature by Kahneman and Tversky and others.

Of course, there is mainstream philosophy that is both good and cutting-edge: the work of Nick Bostrom and Daniel Dennett stands out. And of course there is a role for those who keep arguing for atheism and reductionism and so on. I was a fundamentalist Christian until I read some contemporary atheistic philosophy, so that kind of work definitely does some good.

But if you're looking to solve cutting-edge problems, mainstream philosophy is one of the last places you should look. Try to find the answer in the cognitive science or AI literature first, or try to solve the problem by applying rationalist thinking: like this.

Swimming the murky waters of mainstream philosophy is perhaps a job best left for those who have already spent several years studying it - that is, people like me. I already know what things are called and where to look, and I have an efficient filter for skipping past the 95% of philosophy that isn't useful to me. And hopefully my rationalist training will protect me from picking up bad habits of thought.

Philosophy: the way forward

Unfortunately, many important problems are fundamentally philosophical problems. Philosophy itself is unavoidable. How can we proceed?

First, we must remain vigilant with our rationality training. It is not easy to overcome millions of years of brain evolution, and as long as you are human there is no final victory. You will always wake up the next morning as Homo sapiens.

Second, if you want to contribute to cutting-edge problems, even ones that seem philosophical, it's far more productive to study math and science than it is to study philosophy. You'll learn more in math and science, and your learning will be of a higher quality. Ask a fellow rationalist who is knowledgeable about philosophy what the standard positions and arguments in philosophy are on your topic. If any of them seem really useful, grab those particular works and read them. But again: you're probably better off trying to solve the problem by thinking like a cognitive scientist or an AI programmer than by ingesting mainstream philosophy.

However, I must say that I wish so much of Eliezer's cutting-edge work weren't spread out across hundreds of Less Wrong blog posts and long SIAI articles written in an idiosyncratic style and vocabulary. I would rather these ideas were written in standard academic form, even if they transcended the standard game of mainstream philosophy.

But it's one thing to complain; another to offer solutions. So let me tell you what I think cutting-edge philosophy should be. As you might expect, my vision is to combine what's good in LW-style philosophy with what's good in mainstream philosophy, and toss out the rest:

  1. Write short articles. One or two major ideas or arguments per article, maximum. Try to keep each article under 20 pages. It's hard to follow a hundred-page argument.
  2. Open each article by explaining the context and goals of the article (even if you cover mostly the same ground in the opening of 5 other articles). What topic are you discussing? Which problem do you want to solve? What have other people said about the problem? What will you accomplish in the paper? Introduce key terms, cite standard sources and positions on the problem you'll be discussing, even if you disagree with them.
  3. If possible, use the standard terms in the field. If the standard terms are flawed, explain why they are flawed and then introduce your new terms in that context so everybody knows what you're talking about. This requires that you research your topic so you know what the standard terms and positions are. If you're talking about a problem in cognitive science, you'll need to read cognitive science literature. If you're talking about a problem in social science, you'll need to read social science literature. If you're talking about a problem in epistemology or morality, you'll need to read philosophy.
  4. Write as clearly and simply as possible. Organize the paper with lots of headings and subheadings. Put in lots of 'hand-holding' sentences to help your reader along: explain the point of the previous section, then explain why the next section is necessary, etc. Patiently guide your reader through every step of the argument, especially if it is long and complicated.
  5. Always cite the relevant literature. If you can't find much work relevant to your topic, you almost certainly haven't looked hard enough. Citing the relevant literature not only lends weight to your argument, but also enables the reader to track down and examine the ideas or claims you are discussing. Being lazy with your citations is a sure way to frustrate precisely those readers who care enough to read your paper closely.
  6. Think like a cognitive scientist and AI programmer. Watch out for biases. Avoid magical categories and language confusions and non-natural hypotheses. Look at your intuitions from the outside, as cognitive algorithms. Update your beliefs in response to evidence. [This one is central. This is LW-style philosophy.]
  7. Use your rationality training, but avoid language that is unique to Less Wrong. Nearly all these terms and ideas have standard names outside of Less Wrong (though in many cases Less Wrong already uses the standard language).
  8. Don't dwell too long on what old dead guys said, nor on semantic debates. Dissolve semantic problems and move on.
  9. Conclude with a summary of your paper, and suggest directions for future research.
  10. Ask fellow rationalists to read drafts of your article, then rewrite. Then rewrite again, adding more citations and hand-holding sentences.
  11. Format the article attractively. A well-chosen font makes for an easier read. Then publish (in a journal or elsewhere).

Note that this is not just my vision of how to get published in journals. It's my vision of how to do philosophy.

Meeting journal standards is not the most important reason to follow the suggestions above. Write short articles because they're easier to follow. Open with the context and goals of your article because that makes it easier to understand, and lets people decide right away whether your article fits their interests. Use standard terms so that people already familiar with the topic aren't annoyed at having to learn a whole new vocabulary just to read your paper. Cite the relevant positions and arguments so that people have a sense of the context of what you're doing, and can look up what other people have said on the topic. Write clearly and simply and with much organization so that your paper is not wearying to read. Write lots of hand-holding sentences because we always communicate less effectively than we think we do. Cite the relevant literature as much as possible to assist your most careful readers in getting the information they want to know. Use your rationality training to remain sharp at all times. And so on.

That is what cutting-edge philosophy could look like, I think.

Next post: How You Make Judgments

Previous post: Less Wrong Rationality and Mainstream Philosophy

Philosophy: A Diseased Discipline
[-]djc390

As a professional philosopher who's interested in some of the issues discussed in this forum, I think it's perfectly healthy for people here to mostly ignore professional philosophy, for reasons given here. But I'm interested in the reverse direction: if good ideas are being had here, I'd like professional philosophy to benefit from them. So I'd be grateful if someone could compile a list of significant contributions made here that would be useful to professional philosophers, with links to sources.

(The two main contributions that I'm aware of are ideas about friendly AI and timeless/updateless decision theory. I'm sure there are more, though. Incidentally I've tried to get very smart colleagues in decision theory to take the TDT/UDT material seriously, but the lack of a really clear statement of these ideas seems to get in the way.)

Yes, this is one reason I'm campaigning to have LW / SIAI / Yudkowsky ideas written in standard form!

[-][anonymous]130

As a professional philosopher who's interested in some of the issues discussed in this forum. . .

Oh wow. The initials 'djc' match up with David (John) Chalmers. Carnap and PhilPapers are mentioned in this user's comments. Far from conclusive evidence, but my bet is that we've witnessed a major analytic philosopher contribute to LW's discussion. Awesome.

4enye-word
In the comment he links to above, djc states "One way that philosophy makes progress is when people work in relative isolation, figuring out the consequences of assumptions rather than arguing about them. The isolation usually leads to mistakes and reinventions, but it also leads to new ideas." When asked about LessWrong in a reddit AMA, David Chalmers stated "i think having subcommunities of this sort that make their own distinctive assumptions is an important mechanism of philosophical progress" and an interest in TDT/UDT. (See also: https://slatestarcodex.com/2017/02/06/notes-from-the-asilomar-conference-on-beneficial-ai/) (Sorry to dox you, David Chalmers. Hope you're doing well these days.)
8XiXiDu
Actually in one case this "forum" could benefit from the help of professional philosophers, as the founder Eliezer Yudkowsky especially asks for help on this problem: I think that if you show that professional philosophy can dissolve that problem then people here would be impressed.
5Vladimir_Nesov
Do you know about the TDT paper?
1radical_negative_one
Just in case you haven't seen it, here is Eliezer's Timeless Decision Theory paper. It's over a hundred pages so i'd hope that it represents a "clear statement". (Although i can't personally comment on anything in it because i don't currently have time to read it.)
[-]djc460

That's the one. I sent it to five of the world's leading decision theorists. Those who I heard back from clearly hadn't grasped the main idea. Given the people involved, I think this indicates that the paper isn't a sufficiently clear statement.

8[anonymous]
It's somewhat painful to read. I've tried to read it in the past and get a bit eyesore after the first twenty pages. Doing the math, I realize it's probably irrational for Yudkowsky-san to spend time learning LaTeX or some other serious typesetting system, but I can dream, right?

Your dream has come true.

3[anonymous]
Happiness is too general a term to express my current state of mind. May the karma flow through you like so many grains of sand through a sieve.
1wedrifid
Not quite sure how this one works. Usually I associate sieve with "leaking like a sieve", generally a bad thing - do you want all his karma to be assassinated away as fast as it comes?
2[anonymous]
Oh, no. Lukeprog is the sieve, and the grains of sand are whatever fraction of a hedon he gets from being upvoted.
0gmpalmer
I hope this is corrected later in the paper, and my apologies if this is a stupid question, but could you please explain how the example of gum chewing and abscesses makes sense? That is, in the explanation, you are making your decision based on evidence. Indeed, you'd be happy - or anyone would be happy - to hear you're chewing gum once the results of the second study are known. How is that causal and not evidential? I see later in the paper that gum chewing is evidence for the CGTA gene, but that doesn't make any sense. You can't change whether or not you have the gene, and the gum chewing is better for you at any rate. Still confused about the value of the gum-chewing example.
6Richard_Kennaway
The LaTeX to format a document like that can be learnt in an hour or two with no previous experience, assuming at least basic technically-minded smarts.
6RHollerith
And the learning (and formatting of the document) does not have to be done by the author of the document.
[-]prase290

Unfortunately, many important problems are fundamentally philosophical problems. Philosophy itself is unavoidable.

Isn't this true just because of the way philosophy is effectively defined? It's a catch-all category for poorly understood problems which have nothing in common except that they aren't properly investigated by any branch of science. Once a real question is answered, it no longer feels like a philosophical question; today philosophers no longer investigate the motion of celestial bodies or the structure of matter.

In other words, I wonder what the fundamentally philosophical questions are. The adverb fundamentally creates the impression that those questions will still be regarded as philosophical after being uncontroversially answered, which I doubt will ever happen.

[-]ata230

Strongly agreed. I think "philosophical questions" are the ones that are fun to argue endlessly about even if we're too confused to actually solve them decisively and convincingly. Thinking that any questions are inherently philosophical (in that sense) would be mind projection; if a question's philosophicalness can go away due to changes in facts about us rather than facts about the question, then we probably shouldn't even be using that as a category.

6prase
I would say that the sole thing which philosophical questions have in common is that it is only imaginable to solve them using intuition. Once a superior method exists (experiment, formal proof), the question doesn't belong to philosophy.
4Vladimir_Nesov
Nice pattern.
2shokwave
I think that's a good reason to keep using the category. By looking at current philosophy, we can determine what facts about us need changing. Cutting-edge philosophy (of the kind lukeprog wants) would be strongly determining what changes need to be made. To illustrate: that there is a "philosophy of the mind" and a "free will vs determinism debate" tells us there are some facts about us (specifically, what we believe about ourselves) that need changing. Cutting-edge philosophy would be demonstrating that we should change these facts to ones derived from neuroscience and causality. Diagrams like this would be cutting-edge philosophy.
2Perplexed
The thing that I find attractive about logic and 'foundations of mathematics' is that no one argues endlessly about philosophical questions, even though the subject matter is full of them. Instead, people in this field simply assume the validity of some resolution of the philosophical questions and then proceed on to do the real work. What I think that most fans of philosophy fail to realize is that answers to philosophical questions are like mathematical axioms. You don't justify them. Instead, you simply assume them and then work out the consequences. Don't care for the consequences? Well then choose a different set of axioms.
0robertzk
Are you suggesting that philosophy lies in the orthogonal complement to science and potential science (the questions science is believed to be capable of eventually answering)?
4prase
I am suggesting that the label philosophical is usually attached to problems where we have no agreed upon methodology of investigation. Therefore whether a question belongs to philosophy or science isn't defined solely by its objective properties, but also by our knowledge, and as our knowledge grows the formerly philosophical question is more likely to move into "science" category. The point thus was that potential science isn't orthogonal to philosophy, on the contrary, I have expressed belief that those categories may be identical (when nonsensical parts of philosophy are excluded). On the other hand, I assume philosophy and actual (in contrast to potential) science are disjoint. This is just how the words are used.
-1quen_tin
In a sense, science is nothing but experimental philosophy (in a broad sense), and the job of non-experimental-philosophy (what we label philosophy) is to make any question become an experimental question... But I would say that philosophy remains important as the framework where science and scientific fundamental concepts (truth, reality, substance) are defined and discussed.
0prase
Not universally. It's hard to find experiments in mathematics.
4[anonymous]
You'd have to look inside mathematicians' heads.
2Vladimir_M
In a sense, computers are nothing but devices for doing experimental mathematics.
8Clippy
In a sense, apes are nothing but devices for making ape DNA.
9Vladimir_M
I think Richard Dawkins made that observation a while ago at book length.
8Clippy
In a sense, Richard Dawkins is nothing but a device for making books.
4Friendly-HI
In a sense, a book is nothing but a device for copying memes into other brains.
1Richard_Kennaway
Experimental Mathematics.
0Clippy
I do a lot of that when I experiment with various strings to find preimages for a class of hashes.
1Marius
Which is why mathematics isn't science.
4cousin_it
I sense an argument about definitions of words. Please don't.
-3Marius
"what is science" is not a mere matter of definitions. It's fundamental to how we decide how certain we are of various propositions.
7cousin_it
Um... no it isn't? A Bayesian processes evidence the same way whether or not it's labeled "science". If you're talking about the word "science" as some sort of FDA seal of approval, invented so people can quickly see who to trust without examining the claims in detail, then I see no reason to exclude math. Do you think math gives less reliable conclusions than empirical disciplines?
4Marius
A Bayesian may process probabilities the same way, but information is not evaluated the same way. Determining that a piece of information was derived scientifically does not provide a "seal of approval"; it tells us how to evaluate the likelihood of that information being true. For instance, if I know that a piece of information was derived via scientific methods, I know to look at related studies. A single study is never definitive, because science involves reproducible results based on empirical evidence. Further studies may alter my understanding of the information the first study produced. On the other hand, if I know that a piece of information was derived mathematically, I need only look at a single proof. If the proof is sound, I know that the premises lead inexorably to the conclusion. But encountering a single incorrect premise or step means that the conclusion has zero utility to the Bayesian - a new proof must be created. By contrast, experiments may yield some useful evidence even if the study has flawed premises or methods; precisely which parts are useful requires an understanding of what science is. So this is actually important - it's not just a matter of definitions.
4cousin_it
Thanks, that's a valid argument that I didn't think of. But it's sorta balanced by the fact that a lot of established math is really damn established. For example, compare Einstein's general relativity with Brouwer's fixed point theorem. Both were invented at about the same time, both are really important and have been used by lots and lots of people. Yet I think Brouwer's theorem is way more reliable and less likely to be overturned than general relativity, and I'm not sure if anyone anywhere thinks otherwise.
3Dreaded_Anomaly
I'm not sure if "overturning" general relativity is the appropriate description. We may well find a broader theory which contains general relativity as a limiting case, just as general relativity has special relativity and Newtonian mechanics as limiting cases. With the plethora of experimental verifications of general relativity, however, I wouldn't expect to see it completely discarded in the way that, e.g., phlogiston theory was.
2Marius
Oh, I'm not calling mathematics more or less reliable than science. I'm saying that the ways in which one would overturn an established useful theorem would be very different from the ways in which one would overturn an established scientific theory. Another way in which mathematics is more reliable is that bias is irrelevant. Scientists have to disclose their conflicts of interest because it's easy for those conflicts to interfere with their objectivity during data collection or analysis, and so others must pay special attention. Mathematicians don't need to because all their work can be contained in one location, and can be checked in a much more rigorous fashion.
3JoshuaZ
This doesn't follow. If, for example, one has a single proof and one encounters a hole in it, and the hole looks like it makes plausible assumptions, then one should still increase one's confidence that the claim is true. Thus, physicists are very fond of assuming that terms in series are of lower order even when they can't actually prove it. Very often, under reasonable assumptions, their claims are correct. To use a specific example, Kempe's "proof" of the four color theorem had a hole, and so a repaired version could only prove that planar maps require at most five colors. But the general thrust of the argument provided a strong plausibility heuristic for believing the claim as a whole. Similarly, from a Bayesian standpoint, seeing multiple distinct proofs of a claim should make one more confident in the claim, since even if one of the proofs has an unseen flaw, the others are likely to go through. (There are complicating factors here. No one seems to have a good theory of confidence for mathematical statements which allows for objective priors, since most standard objective priors (such as those based on some notion of computability) only make sense if one can perform arbitrary calculations correctly. Similarly, it isn't clear how one can meaningfully talk about, say, the probability that Peano arithmetic is consistent.)
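The "multiple distinct proofs" point can be illustrated with a toy model (my own sketch, not from the thread): suppose each of n independent proofs is sound with probability p, and the claim holds if at least one proof is sound. Then confidence in the claim is 1 - (1 - p)^n.

```python
# Toy model of confidence from multiple independent proofs.
# Assumptions: each proof is sound with probability p, proofs
# fail independently, and the claim is true if any one proof
# is sound. (Real proofs share techniques, so their failures
# are correlated and this overstates the gain from extra proofs.)
def confidence(p: float, n: int) -> float:
    return 1 - (1 - p) ** n

# One proof that is 90% likely to be sound leaves 10% doubt;
# three such proofs leave only 0.1% doubt.
print(round(confidence(0.9, 1), 6))  # 0.9
print(round(confidence(0.9, 3), 6))  # 0.999
```

The independence assumption is doing most of the work here, which is exactly why a second proof using the same flawed lemma adds much less confidence than a genuinely distinct one.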
0Marius
I don't think we actually disagree at all. Your "hole" is really the introduction of additional premises. If the premises are true and the reasoning sound, the conclusions follow. If they are shown to be untrue, you can discard the conclusion. Mathematics rarely has a way to evaluate the likelihood that its premises are true - usually the best it can do is to show that certain premises are or are not compatible with one another. What you are saying regarding multiple distinct proofs of a claim is true according to some informal logic, but not in any strict mathematical sense. Mathematically, you've either proven something or you haven't. Mathematicians may still be convinced by scientific, theological, literary, financial, etc. arguments of course.
0JoshuaZ
Not really. Consider, for example, someone who has seen Kempe's argument. They should have a higher confidence that, say, "The four color theorem is true in ZFC" than someone who has not seen Kempe's argument. There's no additional premise being added, but Kempe's argument is clearly wrong. Not sure what you mean here. It looks like the sentence was cut off?
0Marius
Would you mind explaining in a little more detail why you say a person who has seen Kempe's flawed proof should have higher confidence than one who has not? Do you mean that it's so emotionally compelling that one's mind is convinced even if the math doesn't add up? Or that the required (previously-hidden) premise that allows Kempe to ignore the degree 5 vertex has some possibility of truth, so that the conclusion has an increased likelihood of truth? also: fixed the end.
1JoshuaZ
Hmm, I'm not sure how to do so without just going through the whole proof. Essentially, Kempe's proof showed that a smallest counterexample graph couldn't have certain properties. One part of the proof was showing that the graph could not contain a vertex of degree 5, and this part was flawed. But Kempe did show that it couldn't contain a vertex of degree 4, and moreover that any minimal counterexample must have a vertex of degree 5. This makes us more confident in the original claim, since a minimal counterexample has to have a very restricted form. Replying to the fixed end here so as to minimize confusion: Well, yes, but the claim I was addressing was your claim that "encountering a single incorrect premise or step means that the conclusion has zero utility to the Bayesian", which is wrong. I agree that a flawed proof is not a proof. And yes, the logic is in any case informal. See my earlier parenthetical remark. I actually consider the problem of confidence in mathematical reasoning to be one of the great difficult open problems within Bayesianism. One reason I don't (generally) self-identify as a Bayesian is the apparent lack of such a theory. (This itself deserves a disclaimer that I'm by no means an expert in this field, so there may be work in this direction, but if so I haven't seen any that is at all satisfactory.)
0Marius
I think you are assuming I count a dubious premise as an incorrect premise. Obviously, a merely dubious premise allows the conclusion to have some utility to the Bayesian. I really don't think we actually disagree.
0JoshuaZ
Really? Even incorrect premises can be useful. For example, one plausibility argument for the Riemann hypothesis rests on assuming that the Möbius function behaves like a random variable. But that's a false statement. Nevertheless, it acts close enough to being a random variable that many find this argument to be evidence for RH. And there's been very good work trying to take this false statement and make true versions of it. Similarly, if one believes what you have said, then one would have to conclude that in the 1700s all of calculus was useless, because it rested on the notion of infinitesimals, which didn't exist. The premise was incorrect, but the results were sound.
2Sniffnoy
Incidentally, as more evidence, apparently this AC0 conjecture has just been proved true by Ben Green (rather, he noticed that other people had already done stuff that had this as a consequence, which the people asking the question hadn't known about).
0Marius
Ok, I need to refine my description of math a bit. I'd claimed that an incorrect premise gives useless conclusions; actually as you point out if we have a close-to-correct premise instead, we can have useful conclusions. The word "instead" is important there, because otherwise we can then add in a correct contradictory premise, generating new and false conclusions. In some sense this is necessary to all math, most evidently geometry: we don't actually have any triangles in the world, but we use near-triangles all the time, pretending they're triangles, with great utility. Also, to look again at Kempe's "proof": we can see where we can construct a vertex of degree 5 where his proof does not hold up. And we can try to turn that special case back into a map. The fact that nobody's managed to construct an actual map relying on that flaw does not give any mathematical evidence that an example can't exist. Staying within the field of math, the Bayesian is not updated and we can discard his conclusion. But we can step outside math's rules and say "there's a bunch of smart mathematicians trying to find a counterexample, and Kempe shows them exactly where the counterexample would have to be, and they can't find one." That fact updates the Bayesian, but reaches outside the field of math. The behavior of mathematicians faced by a math problem looks like part of mathematics, but actually isn't.
2twanvl
That simply doesn't follow: why does involving reproducible results imply not being definitive? Empirical results are never 'definitive' as in being 100.0% certain, but we can get very close. Whether this is done in a single study or with multiple studies doesn't matter at all. In practice there are good reasons to want multiple studies, but they have more to do with questions not addressed in a single study, trustworthiness of the authors, etc. Even wrong mathematical proofs have a non-zero utility, because they often lead to new insights. For example, if only the last of 100 steps is wrong, then you are 99 steps closer to some goal.
2Marius
A single study can't get close to 100% certainty, because that's just not how science works. If you look at all the findings that were reported with 95% certainty, you'll find that well over 5% have reached conclusions now believed to be false. There are issues of trust, issues of data-collection errors, issues of statistical evaluation, the fact that scientific methods are designed under the assumption that studies will be repeated, etc. The steps within unsound mathematical proofs may be valuable, but their conclusions are not.
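To see roughly how a field can exceed its nominal 5% error rate, here is a toy Bayes calculation (the base rate and power figures are purely hypothetical, chosen only to illustrate the mechanism):

```python
def prob_true_given_significant(base_rate, power, alpha):
    """P(hypothesis true | result significant at level alpha), given
    the prior fraction of tested hypotheses that are true and the
    studies' statistical power, via Bayes' rule."""
    true_positives = base_rate * power        # true hypotheses detected
    false_positives = (1 - base_rate) * alpha # false ones slipping through
    return true_positives / (true_positives + false_positives)

# If only 1 in 10 tested hypotheses is true and studies have 80% power,
# a lone p < 0.05 result is far from "95% certain":
print(prob_true_given_significant(0.1, 0.8, 0.05))
```

With those assumed numbers, about a third of "significant" findings would be false despite every individual study meeting the 95% threshold, which is why replication carries so much of the load.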
0twanvl
The current scientific method is in no way ideal. If a study were properly Bayesian, then you should be able to confidently learn from its results. That still leaves issues of trust and the possibility of human error, but there might also be ways to combat those. But in a human society, repeating studies is perhaps the best thing one can hope for. Agreed. That is the one part of an unsound proof that is useless.
0Marius
Can you describe a better, more Bayesian scientific method? The main way I would change it is to increase the number of studies that are repeated, to improve the accuracy of our knowledge. How would you propose to improve our confidence other than by showing that an experiment has reproducible results?
-1ksolez
In a recent interview on Singularity One on One http://singularityblog.singularitysymposium.com/question-everything-max-more-on-singularity-1-on-1/ (first video), Max More, one of the founders of transhumanism, talks about how important philosophy was as the starting point for the important things he has done. Philosophy provided the vantage point from which he wrote the influential papers that started transhumanism. Philosophy is not something to give up or shun; you just need to know which parts of it to ignore in pursuing important objectives.
2prase
I am not questioning the importance of philosophy, but the use of the label "philosophical" together with "fundamental". If someone drew a map of human knowledge, mathematics and biology and physics and history would form wedges starting from well-established facts near the center and reaching toward more recent and complex theories further out; philosophy, on the other hand, would be the whole ever-receding border region of uncertain conjectures still out of reach of scientific methods. To expand human knowledge these areas indeed must be explored, but once that happens, some branch of science will claim possession of them and there will be no reason to keep calling them philosophy.

Eliezer's anti-philosophy rant Against Modal Logics was pretty controversial, while my recent pro-philosophy (by LW standards) post and my list of useful mainstream philosophy contributions were massively up-voted. This suggests a significant appreciation for mainstream philosophy on Less Wrong - not surprising, since Less Wrong covers so many philosophical topics.

This opening paragraph set off a huge warning klaxon in my bullshit filter. To put it generously, it is heavy on 'spin'. Specifically:

  • It sets up a comparison based on upvotes between a post written in the last month and a post written on a different blog.
  • Luke's post is presented as a contrast to controversy despite being among the most controversial posts ever to appear on the site. This can be measured by the massive series of replies and counter-replies, most of which were heavily upvoted - which is how controversy tends to present itself here. (Not that controversy is a bad thing.)
  • Upvotes for a well-written post that contains useful references are conflated with support for the agenda that prompted the author to write it.
  • The first 3.5 words were "Eliezer's anti-philosophy rant". Enough said.

All of the above is unfortunate, because the remainder of this post was overwhelmingly reasonable and a promise of good things to come.

1lukeprog
Interesting, thanks. By the way, what is 'the agenda that prompted the author to write it'?
9lukeprog
I just realized that 'rant' doesn't have the usual negative connotations for me that it probably does for others. For example, here is my rant about people changing the subject in the middle of an argument. For the record, the article originally began "Eliezer's anti-philosophy rant..." but I'm going to change that.
4FAWS
Rant doesn't necessarily have negative connotations for me either, it really depends on the context. Your usage didn't look pejorative at all to me. It's sort of like a less intensive version of "vitriol" and there is no problem (implied) if the target deserves it (or is presented so).
1lessdazed
It is similar to the word "extremist": the technical definition is rarely the only thing people mean to invoke, and it's acquiring further connotations. Losing precise meaning is the way to newspeak, and it distresses me. It is sometimes the result of being uncomfortable with or incapable of discussing specific facts, which is harder than the inside view.

Note that this is not just my vision of how to get published in journals. It's my vision of how to do philosophy.

Your vision of how to do philosophy suspiciously conforms to how philosophy has traditionally been done, i.e. in journals. Have you read Michael Nielsen's Doing Science Online? It's written specifically about science, but I see no reason why it couldn't be applied to any kind of scholarly communication. He makes a good argument for including blog posts in scientific communication, which, at present, doesn't seem to be compatible with writing journal articles (is it kosher to cite blog posts?):

Many of the best blog posts contain material that could not easily be published in a conventional way: small, striking insights, or perhaps general thoughts on approach to a problem. These are the kinds of ideas that may be too small or incomplete to be published, but which often contain the seed of later progress.

You can think of blogs as a way of scaling up scientific conversation, so that conversations can become widely distributed in both time and space. Instead of just a few people listening as Terry Tao muses aloud in the hall or the seminar room about the Navier-Stokes e

... (read more)

No, I agree that much science and philosophy can be done in blogs and so on. Usually, it's going to be helpful to do some back-and-forth in the blogosphere before you're ready to publish a final 'article.' But the well-honed article is still very valuable. It is much easier for people to read, it cites the relevant literature, and so on.

Articles could be, basically, very well-honed and referenced short summaries of positions and arguments that have developed over dozens of conversations and blog posts and mailing list discussions and so on.

4Dustin
I often get lost in back-and-forth on blogs because it jumps from here to there and assumes the reader has kept track of everything everyone involved has said on the subject. My point being that I agree that both the blogosphere and the article are important.
6alfredmacdonald
YeahOKButStill has an interesting take on the interaction between philosophy done in blogs and philosophy done in journals:
[-]FAWS190

Eliezer's anti-philosophy rant Against Modal Logics hovers near 0 karma points, while my recent pro-philosophy (by LW standards) post and my list of mainstream philosophy contributions were massively upvoted.

The karma of pre-LW OvercomingBias posts that were ported over should not be compared to that of LW posts proper. Most of Eliezer's old posts are massively under-voted that way, though some frequently-linked-to posts less so.

5lukeprog
True, but most of Eliezer's substantive pre-LW posts seem to have karma in the low teens, and the comments section of Against Modal Logics also shows that the post was highly controversial.
4Vladimir_Nesov
Not exactly. Most posts published around the same time have similar Karma levels. Earliest posts or highly-linked-to posts get more Karma, but people either rarely get far in reading the archives, or their impulse to upvote atrophies by the time they've read hundreds of posts; as a result, the Karma level of a typical post starting from about April 2008 currently stands at about 0-10. The post in question currently has 4 Karma.
4Normal_Anomaly
Also, many users read the early posts while still in the lurker stage, at which point they can't upvote.
0David_Gerard
Do we actually know this?
0Normal_Anomaly
Well, whenever somebody starts posting and doesn't act like they've already read the sequences, they get told to go read the sequences and come back afterward. Also, in the past year or so many new users have joined the site from MoR, and the link in the MoR author's notes goes to the main sequences list. I know that I at least decided to join LW when MoR linked me to the sequences and I liked them.
6David_Gerard
I haven't seen this in several months (and I've been watching); the admonishment seems to have vanished from the local meme selection. More often, someone links to a specific apposite post, or possibly sequence. It's just entirely unclear how we'd actually measure whether people who read the sequences do so before or after logging in. (I'd suspect not, given they're a million words of text and a few million of accompanying comments, but then that's not even an anecdote ...)

Poll: If you read the sequences before opening your account, upvote this comment.

If you read the sequences before LessWrong was created upvote this comment.

Poll: If you read the sequences after opening your account, upvote this comment.

2Normal_Anomaly
You may be right. I think there has been less of that lately. I wouldn't say it's entirely unclear. I'm curious enough to start a poll.
0David_Gerard
Could also do with "Poll: If you still haven't read the sequences, upvote this comment."
0Normal_Anomaly
I'd been considering that, and since you agree I went and added it.
1Desrtopa
I think this has mainly declined after a number of posts discussing the sheer length of the sequences and the deceptive difficulty of the demand, and potential ways to make the burden easier.
1Normal_Anomaly
Poll: If you haven't read the sequences yet, upvote this comment.
2Desrtopa
Should this perhaps be made into a discussion article where it will be noticed more?
0Normal_Anomaly
I'm tempted to start a poll to see if people think I should make this a discussion article, but I will restrain myself. I'll just go ahead and post the discussion article: there's been enough traffic in the poll that it apparently interests people.
-20Normal_Anomaly
0lukeprog
Funny, I remember it having 0 points, and then when I published this post it had 2 points. Anyway, thanks FAWS and Vladimir_Nesov for the correction. I've changed the wording of the original post.
4FAWS
Yes, that's an example of the effect of linking. I guess the post in question will easily break 10 now, perhaps even 20.

Have you considered taking some of EY's work and jargon-translating it into journal-suitable form?

I'd love to do that if I had time, and if Yudkowsky was willing to answer lots of questions.

[-]Jordan180

You could probably find other philosophers to help out. The end result, if supported properly by Eliezer, could be very helpful to SIAI's cause.

If SIAI donations could be earmarked for this purpose I would double my monthly contribution.

[-][anonymous]100

.

7Emile
For what it's worth, I greatly enjoy Eliezer's style, and usually find him quite clear and understandable (except maybe some older texts like the intuitive explanation of Bayes and the technical explanation of technical explanations).
2[anonymous]
.
3atucker
I'm flattered that you think so, and to be mentioned in the same sentence as lukeprog. Out of a mixture of a desire to help and curiosity (probably along with a dash of vanity), what comments have you found particularly readable? That would actually help me a lot in improving my writing style.
2[anonymous]
.
2atucker
Presumably, Eliezer's upcoming book would do the same.
0[anonymous]
What would you start with?
[-]ata100

Format the article attractively. A well-chosen font makes for an easier read. Then publish (in a journal or elsewhere).

I'd add "Learn LaTeX" to this one; if you're publishing in a journal, that matters more than your font preferences and formatting skills (which won't be used in the published version), and if you're publishing online, it can make your paper look like a journal article, which is probably good for status. Even TeX's default Computer Modern font, which I wouldn't call beautiful, has a certain air of authority to it — maybe due to some of its visual qualities, but possibly just by reputation.

3[anonymous]
The ironic bit is that I don't know a modern philosophy journal that accepts TeX. EDIT: Minds and Machines, as mentioned below. Also, Mind doesn't.
8lukeprog
You just export to PDF. LyX is a fairly easy-to-use LaTeX editor.
0[anonymous]
I personally recommend TeXmacs over LaTeX, even if the LaTeX is edited in LyX or AUCTeX, although I use Emacs for programming and wiki editing.
0[anonymous]
That doesn't make sense to me. One can't re-typeset PDF -- well, perhaps you can, but I can't imagine it would be easy.
2[anonymous]
I'm a bit confused. What I'm used to is, I make a TeX document (editable) then I typeset it into a PDF document. Anybody can read the PDF, but can't edit it. If I want the receiver to be able to edit, I send both the TeX file and the PDF. Did you mean that philosophy journals won't accept the .tex file format, or that they'll reject a .pdf written in LaTeX for stylistic reasons?
0Manfred
Adobe's business model is to give away the reader for free and then sell the editor for a profit. So I would guess most publishers would have no problem.
2ata
Hey, I didn't say it wasn't a diseased discipline. :P
0thomblake
Last I checked, Minds and Machines requires LaTeX.
0[anonymous]
Ah, okay. I knew Mind didn't, and now I realize I was generalizing from one example. Oops.

This paragraph, from Eugene Mills' 'Are Analytic Philosophers Shallow and Stupid?', made me laugh out loud:

The paradox of analysis concludes that

(PA) A conceptual analysis is correct only if it is trivial.

Philosophers from Socrates onward have [provided] conceptual analyses of knowledge, freedom, truth, goodness, and more. The paradox of analysis suggests that these philosophers... are shallow and stupid: shallow because they stalk triviality, stupid because it so often eludes them.

Mills goes on to defend philosophers, with two sections entitl... (read more)

I agree with a lot of the content -- or at least the spirit -- of the post, but I worry that there is some selectivity that makes philosophy come off worse than it actually is. Just to take one example that I know something about: Pearl is praised (rightly) for excellent work on causation, but very similar work developed at the same time by philosophers at Carnegie Mellon University, especially Peter Spirtes, Clark Glymour, and Richard Scheines, isn't even mentioned.

Lots of other philosophers could be added to the list of people making interesting, useful... (read more)

2[anonymous]
Thanks for your comment; I'm working on learning causation theory at the moment, and I didn't know anyone in the field other than Pearl.
3JonathanLivengood
You're welcome, of course. Pearl's book on causality is a great place to start. I also recommend Spirtes, Glymour, and Scheines Causation, Prediction, and Search. Depending on your technical level and your interests, you might find Woodward's book Making Things Happen a better place to start. After that, there are many excellent papers, depending on your interests.
5[anonymous]
I'm a graduate student in mathematics; the more technical, the better. I'm currently three chapters into Pearl. After that in my queue comes Tversky and Kahneman, and now I'll add Spirtes et al. to the end of that.

I posted this on Reddit r/philosophy, if anyone would like to upvote it there.

philosophers are "spectacularly bad" at understanding that their intuitions are generated by cognitive algorithms.

What makes you think this? It's true that many philosophers recognize the genetic fallacy, and hence don't take "you judge that P because of some fact about your brain" to necessarily undermine their judgment. But it's ludicrously uncharitable to interpret this principled epistemological disagreement as a mere factual misunderstanding.

Again: We can agree on all the facts about how human psychology works. What we disagr... (read more)

Richard Chappell,

Of course, you know how intuitions are generally used in mainstream philosophy, and why I think most such arguments are undermined by facts about where our intuitions come from - facts which undercut the epistemic usefulness of those intuitions. (So does the cross-checking problem.)

I'll break the last part into two bits:

What I'm saying with the 'people are made of atoms' bit is that it looks like a slight majority of philosophers may now think that there is at least a component of a person that is not made of atoms - usually consciousness.

As for intuitions trumping science, that was unclear. What I mean is that, in my view, philosophers still often take their intuitions to be more powerful evidence than the trends of science (e.g. reductionism) - and again I can point to this example.

I'm sure this post must have been highly annoying to a pro such as yourself, and I appreciate the cordial tone of your reply.

3RichardChappell
Ah, you mean capital-S 'Science', as opposed to just the empirical data. One might have a view compatible with all the scientific data without buying in to the ideological picture that we can't use non-empirical methods (viz. philosophy) when investigating non-empirical questions.
3lukeprog
Non-empirical questions like... what? Mathematical questions?
3RichardChappell
Like, whether phenomenal properties just are certain physical/functional properties, or whether the two are merely nomologically co-extensive (going together in all worlds with the same natural laws as our own). This is obviously neither mathematical nor empirical. Similarly with normative questions: what's a reasonable credence to have given such-and-such evidence, etc. See: Overcoming Scientism
3jhuffman
The comments on your linked article really do a good job of demonstrating the enormous gulf between many philosophical thinkers and the LW community. I especially enjoyed the comments about how physicalism always triumphs because it expands to include new strange ideas. So, the dualists understand that their beliefs are not based on evidence, and in fact they sneer at evidence as if it's a form of cheating. Sorry, but I do not think this patient can be saved.
2Jonathan_Graehl
Which comments do you agree or disagree with? What is the patient? LW? Many-philosophers? The idea of LW-contributing-to-philosophy (or conversely)?
0ohwilleke
It seems to me that philosophy is most important for refining mere intuitions and bumbling around until we find a rigorous way of posing the questions that are associated with those intuitions. Once you have a well-posed question, any old scientist can answer it. But philosophy is necessary to turn the undifferentiated mass of unprocessed data and potential ideas into something that is susceptible to being examined. Rationality is all fine and good, but reason applies known facts and axioms with accepted logical relationships to reach conclusions. The importance of hypothesis generation is much underappreciated by scientists, but critical to the enterprise, and to generate a hypothesis one needs intuition as much as reason. Genius, meanwhile, comes from being able to intuitively generate a hypothesis that nobody else would, breaking the mold of others' intuitions, and building new conceptual structures from which to generate novel intuitive hypotheses and eventually to formulate the conceptual structure well enough that it can be turned over to the rationalists.
7Eliezer Yudkowsky
Richard, I'm pretty sure I remember you treating the apparent conceivability of zombies as a primary fact about the conceivability of zombies to which you have direct access, rather than treating it as an output of some cognitive algorithm in your brain and asking what sort of thought process might have produced it.
4CuSithBell
It seems like some people are using "conceivable" to mean "imaginable at some resolution", and some to mean "coherently imaginable at any resolution", or something. By which I mean, the first group would say that they could conceive of "America lost the Revolutionary War" or "heavier objects fall faster" or "we are composed of sentient superstrings, and the properties of matter are their tiny, tiny emotions" or "the president has been kidnapped by ninjas"; whereas the second group would say these things are not conceivable. As a result, group A wouldn't really consider the conceivability of p-zombies as evidence of their possibility (well, it'd technically be extremely weak evidence), whereas group B would consider the problem of the conceivability of p-zombies as essentially equivalent to the actuality of p-zombies. (There may be other groups, such as those who think "If it's imaginable, then it's coherent," but based on my brief glance the discussion hasn't actually made it that far yet.) Is this right? I'd think the whole thing could be resolved if you taboo'd "conceivable"...?
-2Peterdjones
Talking about "the" possibility of p-zombies is pretty pointless, because of the important difference between logical and physical impossibility. Even Chalmers thinks PZs are physically/naturally impossible. I don't think the coherent/incoherent distinction you are making is clear. Of course, in a universe where everything is exactly the same, heavier objects would not fall faster in vacuo. But then we understand gravity and acceleration, so we can say what the contradictions would be. We don't understand what the contradictions would be in the case of p-zombies, because we don't have the psychophysical laws. Physicalism is Not An Explanation.
0CuSithBell
By 'coherent', I mean something like 'consistent' (to make an analogy to logic) - given all our observations, and extrapolating the concept as needed, there are no contradictions. "Heavier objects fall faster" leads to contradictions pretty quickly. Some people believe that "p-zombies are possible" (in some sense, which might match up with what you mean by either logical or physical) also leads to a contradiction, though we of course don't understand the laws that would cause this. This is beside the point! I'm not arguing for or against p-zombies (here), I'm saying I think the people in this argument are talking past each other because they have diverging definitions.
-3Peterdjones
"Heavier objects fall faster" leads to contradictions with a theory. If we don't know the laws that would contradict p-zombies, there is no see-able contradiction in them, and conceivability=logical possibility follows.
0CuSithBell
"Heavier objects fall faster" is imaginable at a particular resolution. Once you ask, say, "what happens if you glue two stones together?", it contradicts more deeply-held notions, and the concept falls apart at that resolution. Some people believe that p-zombies are incoherent if analyzed sufficiently, or expect that they necessitate a severe contradiction of much more deeply-held beliefs. Moreover, it is possible to hold that we don't know the laws that would contradict p-zombies but that they are nevertheless contradicted - as it is possible to hold that things should not fall up without knowing the laws of gravitation (leaving aside that some things do fall up). Do you disagree with my central assertion, or just my definition of coherence?
-2Peterdjones
The stone-gluing can be worked around with auxiliary laws. To assume those laws are absent is to assume some other laws. People can believe what they like. If you are going to stake a claim that there is a literal self-contradiction in p-zombies, you need to say what it is. However, most cases of alleged self-contradiction turn out to be contradiction with unexamined background assumptions -- laws, again. Talk of "resolution" is misleading: this is cognitive, not pictorial. It is in fact the philosopher's point that p-zombies are really, for unknown reasons, impossible. They are not arguing zombies in order to argue zombies! Non-philosophers keep misunderstanding that.
0CuSithBell
So, ah, just the latter then? That's all right, and I admit it's a fuzzy term. But if you want to make any progress, I suggest you consider the former point instead.
3[anonymous]
Can you make the connection between Richard's comment and yours clearer?
-1Peterdjones
That's a difference that doesn't make a difference. That I can (not) conceive p-zombies can only mean that my cognitive processes produce a certain output. Whether it is somehow a mistaken output is another matter entirely.
-2RichardChappell
Distinguish two questions: (1) Are zombies logically coherent / conceivable? (2) What cognitive processes make it seem plausible that the answer to Q1 is 'yes'? I'm fully aware that one can ask the second, cogsci question. But I don't believe that cogsci answers the first question.

The first question should really be: what does the apparent conceivability of zombies by humans imply about their possibility?

Philosophers on your side of the debate seem to take it for granted (or at least end up believing) that it implies a lot, but those of us on the other side think that the answer to the cogsci question undermines that implication considerably, since it shows how we might think zombies are conceivable even when they are not.

It's been quite a while since I was actively reading philosophy, so maybe you can tell me: are there any reasons to believe zombies are logically possible other than people's intuitions?

3RichardChappell
I'm aware that the LW community believes this, but I think it is incorrect. We have an epistemological dispute here about whether non-psychological facts (e.g. the fact that zombies are coherently conceivable, and not just that it seems so to me) can count as evidence. Which, again, reinforces my point that the disagreement between me and Eliezer/Lukeprog concerns epistemological principles, and not matters of empirical fact. For more detail, see my response to TheOtherDave downthread.

We have an epistemological dispute here about whether non-psychological facts (e.g. the fact that zombies are coherently conceivable, and not just that it seems so to me) can count as evidence

At least around here, "evidence (for X)" is anything which is more likely to be the case under the assumption that X is true than under the assumption that X is false. So if zombies are more likely to be conceivable if non-physicalism is true than if physicalism is true, then I for one am happy to count the conceivability of zombies as evidence for non-physicalism.
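That definition of evidence can be made concrete with the odds form of Bayes' rule (the numbers below are purely illustrative, not anyone's actual credences):

```python
def posterior_odds(prior_odds, p_e_given_h, p_e_given_not_h):
    """Odds-form Bayes: posterior odds = prior odds x likelihood ratio.
    E counts as evidence for H exactly when P(E|H) > P(E|~H)."""
    return prior_odds * (p_e_given_h / p_e_given_not_h)

# If apparent zombie-conceivability were twice as likely under
# non-physicalism as under physicalism, observing it would double
# the odds in favor of non-physicalism:
print(posterior_odds(0.1, 0.6, 0.3))
```

Note that on this account the strength of the evidence depends entirely on the likelihood ratio; evidence too weak to flip anyone's conclusion is still evidence.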

But again, the question is: how do you know that zombies are conceivable? You say that this is a non-psychological fact; that's fine perhaps, but the only evidence for this fact that I'm aware of is psychological in nature, and this is the very psychological evidence that is undermined by cognitive science. In other words, the chain of inference still seems to be

people think zombies are conceivable => zombies are conceivable => physicalism is false

so that you still ultimately have the "work" being done by people's intuitions.

1RichardChappell
How do you know that "people think zombies are conceivable"? Perhaps you will respond that we can know our own beliefs through introspection, and the inferential chain must stop somewhere. My view is that the relevant chain is merely like so:

zombies are conceivable => physicalism is false

I claim that we may non-inferentially know some non-psychological facts, when our beliefs in said facts meet the conditions for knowledge (exactly what these are is of course controversial, and not something we can settle in this comment thread).
4komponisto
I know that people think zombies are conceivable because they say they think zombies are conceivable (including, in some cases, saying "zombies are conceivable"). To say that we may "non-inferentially know" something appears to violate the principle that beliefs require justification in order to be rational. By removing "people think zombies are conceivable", you've made the argument weaker rather than stronger, because now the proposition "zombies are conceivable" has no support. In any case, you now seem as eminently vulnerable to Eliezer's original criticism as ever: you indeed appear to think that one can have some sort of "direct access" to the knowledge that zombies are conceivable that bypasses the cognitive processes in your brain. Or have I misunderstood?
2RichardChappell
Depending on what you mean by 'direct access', I suspect that you've probably misunderstood. But judging by the relatively low karma levels of my recent comments, going into further detail would not be of sufficient value to the LW community to be worth the time.
3SilasBarta
You're still getting voted up on net, despite not explaining how, as you've claimed, the psychological fact of p-zombie plausibility is evidence for it (at least beyond references to long descriptions of your general beliefs).
3komponisto
Actually he seems to have denied this here, so at this point I'm stuck wondering what the evidence for zombie-conceivability is.

I believe he's trying to draw a distinction between two potential sources of evidence:

  1. The factual claim that people believe zombies are conceivable, and
  2. The actual private act of conceiving of zombies.

Richard is saying that his justification for his belief that p-zombies are conceivable lies in his successful conception of p-zombies. So what licenses him to believe that he's successfully conceived of zombies after all? His answer is that he has direct access to the contents of his conception, in the same way that he has access to the contents of his perception. You don't need to ask, "How do I know I'm really seeing blue right now, and not red?" Your justification for your belief that you're seeing blue just is your phenomenal act of noticing a real, bluish sensation. This justification is "direct" insofar as it comes directly from the sensation, and not via some intermediate process of reasoning which involve inferences (which can be valid or invalid) or premises (which can be true or false). Similarly, he thinks his justification for his belief that p-zombies are conceivable just is his p-zombie-ish conception.

A couple of things to note. One is that this e... (read more)

4komponisto
I was very deliberately ignoring this distinction: "people" includes Richard, even for Richard. The point is that Richard cannot simply trust his intuition; he has to weigh his apparent successful conception of zombies against the other evidence, such as the scientific success of reductionism, the findings from cognitive science that show how untrustworthy our intuitions are, and in particular specific arguments showing how we might fool ourselves into thinking zombies are conceivable.

This would appear to violate Aumann's agreement theorem.

This is a confusion of map and territory. It is possible to be rationally uncertain about logical truths; and probability estimates (which include the extent to which a datum is evidence for a proposition) are determined by the information available to the agent, not by the truth or falsehood of the proposition (otherwise, the only possible probability estimates would be 1 and 0). It may be rational to assign a probability of 75% to the truth of the Riemann Hypothesis given the information we currently have, even if the Riemann Hypothesis turns out to be false (we may have misleading information).

My position could be described by any of those three options -- in other words, they seem to differ only in the interpretation of terms like "conceivable", and don't properly hug the query.

I must do so to the extent I believe zombies are in fact inconceivable. But I don't see why it should be a conversation-stopper: if Richard is right and I am wrong, Richard should be able to offer evidence that he is unusually capable of determining whether his apparent conception is in fact successful (if he can't, then he should be doubting his own successful conception himself).

I can assent to this if "conceive" is interpreted in such a way that it is possible to conceive of something that is logically impossible (i.e. if it is granted that I can conceive of Fermat's Last Theorem being false). "Private knowledge" in this sense is ruled out by
4antigonus
I don't think Richard said anything to dispute this. He never said that his direct access to the conceivability of zombies renders his justification indefeasible.

This is not a case in which you share common priors, so the theorem doesn't apply. You don't have, and in fact can never have, the information Richard (thinks he) has. Aumann's theorem does not imply that everyone is capable of accessing the same evidence.

That's certainly true, but I can't see its relevance to what I said. In part because of some of the very reasons you name here, we can be mistaken about whether an observation O confirms a hypothesis H or not, hence whether an observation is evidence for a hypothesis or not. If the hypothesis in question concerns whether O is in fact even observable, and my evidence for ~H is that I've made O, then someone who strongly disagrees with me about H will conclude that I made some other observation O' and have been mistaking it for O. And since the observability of O' doesn't have any evidentiary bearing on H, he'll say, my observation wasn't actually the evidence that I took it to be. That's the point I was trying to illustrate: we may not be able to agree about whether my purported evidence should confirm H if we antecedently disagree about H. [Edited this sentence to make it clearer.]

I don't really see what this could mean. Richard didn't state that his evidence for the conceivability of zombies is absolutely incontrovertible. He just said he had direct access to it, i.e., he has extremely strong evidence for it that doesn't follow from some intermediary inference.
3komponisto
Why not? Postulating uncommon priors is not to be done lightly: it imposes specific constraints on beliefs about priors. See Robin Hanson's paper "Uncommon Priors Require Origin Disputes". In any case, what I want to know is how I should update my beliefs in light of Richard's statements. Does he have information about the conceivability of zombies that I don't, or is he just making a mistake?

In such a dispute, there is some observation O'' that (both parties can agree) you made, which is equal to (or implies) either O or O', and the dispute is about which one of these it is the same as (or implies). But since O implies H and O' doesn't, the dispute reduces to the question of whether O'' implies H or not, and so you may as well discuss that directly.

In the case at hand, O is "Richard has conceived of zombies", O' is "Richard mistakenly believes he has conceived of zombies", and O'' is "Richard believes he has conceived of zombies". But in the discussion so far, Richard has been resisting attempts to switch from discussing O (the subject of dispute) to discussing O'', which obviously prevents the discussion from proceeding.
1antigonus
Because, again, you do not have access to the same evidence (if Richard is right about the conceivability of zombies, that is!). Robin's paper is unfortunately not going to avail you here. It applies to cases where Bayesians share all the same information but nevertheless disagree. To reiterate, Richard (as I understand him) believes that you and he do not share the same information.

Well, you shouldn't take his testimony of zombie conceivability as very good evidence of zombie conceivability. In that sense, you don't have to sweat this conversation very much at all. This is less a debate about the conceivability of zombies and more a debate about the various dialectical positions of the parties involved in the conceivability debate. Do people who feel they can "robustly" conceive of p-zombies necessarily have to found their beliefs on publicly evaluable, "third-person" evidence? That seems to me the cornerstone of this particular discussion, rather than: Is the evidence for the conceivability of p-zombies any good?

Yes, that's the "neutral" view of evidence Richard professed to deny. The actual values of O and O' at hand are "That one particular mental event which occurred in Richard's mind at time t [when he was trying to conceive of zombies] was a conception of zombies," and "That one particular mental event which occurred in Richard's mind at time t was a conception of something other than zombies, or a non-conception." The truth-value of the O'' you provide has little bearing on either of these.

EDIT: Here's a thought experiment that might illuminate my argument a bit. Imagine a group of evil scientists kidnaps you and implants special contact lenses which stream red light directly into your retina constantly. Your visual field is a uniformly red canvas, and you can never shut it off. The scientists then strand you on an island full of Bayesian tribespeople who are congenitally blind. The tribespeople consider the existence of visual experience ridicu
1komponisto
This is not correct. Even the original Aumann theorem only assumes that the Bayesians have (besides common priors) common knowledge of each other's probability estimates -- not that they share all the same information! (In fact, if they have common priors and the same information, then their posteriors are trivially equal.) Robin's paper imposes restrictions on being able to postulate uncommon priors as a way of escaping Aumann's theorem: if you want to assume uncommon priors, certain consequences follow. (Roughly speaking, if Richard and I have differing priors, then we must also disagree about the origin of our priors.)

In any event, you do get closer to what I regard as the point here: Another term for "conditionalize" is "update". Why can't you update on an experience? The sense I get is that you're not wanting to apply the Bayesian model of belief to "experiences". But if our "experiences" affect our beliefs, then I see no reason not to.

In these terms, O'' is simply "that one particular mental event occurred in Richard's mind" -- so again, the question is what the occurrence of that mental event implies, and we should be able to bypass the dispute about whether to classify it as O or O' by analyzing its implications directly. (The truth-value of O'' isn't a subject of dispute; in fact O'' is chosen that way.)

It goes down, since the tribespeople would be more likely to say that if there is no visual experience than if there is. Of course, the amount it goes down by will depend on my other information (in particular, if I know they're congenitally blind, that significantly weakens this evidence).
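komponisto's parenthetical -- that common priors plus identical information trivially yield identical posteriors -- can be made concrete with a toy Bayes calculation. (The numbers and function name below are mine, purely illustrative, not from the thread.)

```python
# Toy sketch: two Bayesian agents with a common prior P(H) = 0.5 who
# condition on the same evidence E necessarily reach the same posterior.
def posterior(prior_h, p_e_given_h, p_e_given_not_h):
    # Bayes' rule: P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]
    num = p_e_given_h * prior_h
    den = num + p_e_given_not_h * (1 - prior_h)
    return num / den

alice = posterior(0.5, 0.8, 0.2)   # same prior, same likelihoods, same E
bob   = posterior(0.5, 0.8, 0.2)
print(alice == bob)                # identical posteriors, trivially
```

The interesting (non-trivial) content of Aumann's theorem is precisely the case where the agents have *different* information but common knowledge of each other's estimates.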
2wnoise
I would categorize my position as somewhere between 1 and 2, depending on what you mean by "conceiving". I think he has a name attached to some properties associated with p-zombies and a world in which they exist, but this doesn't mean a coherent model of such a world is possible, nor that he has one. That is, I believe that following out the necessary implications will eventually lead to contradiction. My evidence for this is quite weak, of course. I can certainly talk about an even integer larger than two that is not expressible as the sum of two primes. But that doesn't mean it's logically possible. It might be, or it might not. Does a name without a full-fledged model count as conceiving, or not? Either way, it doesn't appear to be significant evidence in favor of conceivability.
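wnoise's "even integer larger than two not expressible as the sum of two primes" is the negation of Goldbach's conjecture, and it illustrates the name-versus-model gap nicely: we can write down the property and search for an instance, but naming the property is not the same as exhibiting a coherent instance. A brute-force sketch (my own code, purely illustrative):

```python
# Naming "an even integer > 2 with no two-prime decomposition" is easy;
# exhibiting one is another matter. Every even number checked so far has
# a Goldbach witness.
def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

def goldbach_witness(n):
    # Return a pair of primes summing to the even number n, if one exists.
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return (p, n - p)
    return None

print(all(goldbach_witness(n) for n in range(4, 1000, 2)))  # True so far
```

A counterexample, if one exists, would be exactly the kind of thing we can "talk about" without knowing whether it is logically possible.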
0SilasBarta
I think the critics of Richard Chappell here are taking route 2 in your categorization.
0antigonus
komponisto and TheOtherDave appear to have been taking route 3 (challenging Richard's purported access to evidence for zombie conceivability).
2SilasBarta
I think they were stuck on the task of getting him to explain what that evidence was (and what evidence the access he does have gives him), which in turn was complicated by his insistence that he wasn't referring to a psychological fact of ease of conceivability.
2TheOtherDave
If it helps (which I don't expect it does), I've been pursuing the trail of this (and related things) here. Thus far his response seems to be that certain beliefs don't require evidence (or, at least, don't require "independent justification," which may not be the same thing), and that his beliefs about zombies "cohere well" with his other beliefs (though I'm not sure which beliefs they cohere well with, or whether they cohere better with them than their negation does), and that there's no reason to believe it's false (though it's not clear what role reasons for belief play in his decision-making in the first place).
5komponisto
So, the Bayesian translation of his position would seem to be that he has a high prior on zombies being conceivable. But of course, that in turn translates to "zombies are conceivable for reasons I'm not being explicit about". Which is, naturally, the point: I'd like to know what he thinks he knows that I don't. Regarding coherence, and reasons to believe it's false: the historical success of reductionism is a very good reason to believe it's false, it seems to me. Despite Richard's protestations, it really does appear to me that this is a case of undue reluctance on the part of philosophers to update their intuitions, or at least to let them be outweighed by something else.
2SilasBarta
Good point. I think my biggest frustration is that I can't tell what point Richard Chappell is actually making so I can know whether I agree with it. It's one thing to make a bad argument; it's quite another to have a devastating argument that you keep secret.
0[anonymous]
You would probably have had more opportunity to draw it out of him if it weren't for the karma system discouraging him from posting further on the topic. Remember that next time you're tallying the positives and negatives of the karma system.
3SilasBarta
I don't follow: he's getting positive net karma from this discussion, just not as much as other posters. Very few of his comments, if any, actually went negative. In what sense is the karma system discouraging him?
2[anonymous]
Yes, slightly positive. Whether something encourages or discourages a person is a fact, not about the thing considered in itself, but about its effect on the person. The fact that the karma is slightly net positive is a fact about the thing considered in itself. The fact that he himself wrote: tells us something about its effect on the person.
4SilasBarta
Yes, he's taking that as evidence that his posts are not valued. And indeed, like most posts that don't (as komponisto and I noted) clearly articulate what their argument is, his posts aren't valued (relative to others in the discussion). And he is correctly reading the evidence. I was interpreting the concerns about "low karma being discouraging" as saying that if your karma goes negative, you actually get posting restrictions. But that's not happening here; it's just that Richard Chappell is being informed that his posts aren't as valued as the others on this topic. Still positive value, mind you -- just not as high as others. In the absence of a karma system, he would either be less informed about his unhelpfulness in articulating his position, or be informed through other means. I don't understand what your complaint is. Yes, people who cannot articulate their position rigorously are going to have their feelings hurt at some level when people aren't satisfied with their explanations. What does that have to do with the merits of the karma system?
0[anonymous]
You are speculating about possible reasons that people might have had for failing to award karma points. The position of your sentence implies that "that" refers to your speculation about the reasons that people might have had for withholding karma points. But my statement concerning the merits of the karma system had not referred to that speculation. Here is my statement again: I am pointing out that had he not been discouraged as early as he was in the exchange, then you would probably have had more opportunity to draw him out. Do you dispute this? And then I wrote: I have left it up to you to decide whether your loss of this opportunity is on the whole a positive or a negative.

You are speculating about possible reasons that people might have had for failing to award karma points.

Kind of. I was drawing on my observations about how the karma system is used. I've generally noticed (as have others) that people with outlier views do get modded up very highly, so long as they articulate their position clearly. For example: Mitchell Porter on QM, pjeby on PCT, lukeprog on certain matters of mainstream philosophy, Alicorn on deontology and (some) feminism, byrnema on theism, XiXiDu on LW groupthink.

Given that history, I felt safe in chalking up his "insufficiently" high karma to inscrutability rather than "He's deviating from the party line -- get him!" And you don't get to ignore that factor (of controversial, well-articulated positions being voted up) by saying you "weren't referring to that speculation".

I am pointing out that had he not been discouraged as early as he was in the exchange, then you would probably have had more opportunity to draw him out. Do you dispute this?

My response is that, to the extent that convoluted, error-obscuring posting is discouraged, I'm perfectly fine with such discouragement, and I don't...

3NancyLebovitz
Thanks for laying this out. I'm one of the people who thinks philosophical zombies don't make sense, and now I understand why-- they seem like insisting that a result is possible while eliminating the process which leads to the result. This doesn't explain why it's so obvious to me that pz are unfeasible and so obvious to many other people that pz at least make enough sense to be a basis for argument. Does the belief or non-belief in pz correlate with anything else?
-2Peterdjones
Since no physical law is logically necessary, it is always logically possible that an effect could fail to follow from a cause.
-8Peterdjones
8[anonymous]
It's hard to be sure that I'm using the right words, but I am inclined to say that it's actually the connection between epistemic conceivability and metaphysical possibility that I have trouble with. To illustrate the difference as I understand it, someone who does not know better can epistemically conceive that H2O is not water, but nevertheless it is metaphysically impossible that H2O is not water.

I am not confident I know the meanings of the philosophical terms of the preceding comment, but employing mathematics-based meanings of the words "logic" and "coherent", it is perfectly logically coherent for someone who happens to be ignorant of the truth to conceive that H2O is not water, but this of course tells us very little of any significant interest about the world. It is logically coherent because, try as he might, there is no way for someone ignorant of the facts to purely logically derive a contradiction from the claim that H2O is not water, and thereby reveal any logical incoherence in the claim. To my way of understanding the words, there simply is no logical incoherence in a claim considered against the background of your (incomplete) knowledge unless you can logically deduce a contradiction from inside the bounds of your own knowledge. But that's simply not a very interesting fact if what you're interested in is not the limitations of logic or of your knowledge but rather the nature of the world.

I know Chalmers tries to bridge the gap between epistemic conceivability and metaphysical possibility in some way, but at critical points in his argument (particularly right around where he claims to "rescue" the zombie argument and brings up "panprotopsychism") he loses me.
8AlephNeil
My view on this question is similar to that of Eric Marcus (pdf). When you think you're imagining a p-zombie, all that's happening is that you're imagining an ordinary person and neglecting to imagine their experiences, rather than (impossibly) imagining the absence of any experience. (You can tell yourself "this person has no experiences" and then it will be true in your model that HasNoExperiences(ThisPerson) but there's no necessary reason why a predicate called "HasNoExperiences" must track whether or not people have experiences.)

Here, I think, is how Chalmers might drive a wedge between the zombie example and the "water = H2O" example: Imagine that we're prescientific people familiar with a water-like substance by its everyday properties. Suppose we're shown two theories of chemistry - the correct one under which water is H2O and another under which it's "XYZ" - but as yet have no way of empirically distinguishing them. Then when we epistemically conceive of water being XYZ, we have a coherent picture in our minds of 'that wet stuff we all know' turning out to be XYZ. It isn't water, but it's still wet.

To epistemically but not metaphysically conceive of p-zombies would be to imagine a scenario where some physically normal people lack 'that first-person experience thing we all know' and yet turn out to be conscious after all. But whereas there's a semantic gap between "wet stuff" and "real water" (such that only the latter is necessarily H2O), there doesn't seem to be any semantic gap between "that first-person thing" and "real consciousness". Consciousness just is that first-person thing.

Perhaps you can hear the sound of some hairs being split. I don't think we have much difference of opinion, it's just that the idea of "conceiving of something" is woolly and incapable of precision.
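AlephNeil's point about the "HasNoExperiences" predicate can be sketched in code: a mental model can carry a freely-set label that nothing forces to track the structure it purports to describe. (The class and field names below are mine, purely illustrative.)

```python
# Sketch: a label in a model is just a flag we attach; it can be set
# independently of the "physical" facts it purports to describe.
class ModelPerson:
    def __init__(self, brain_state, has_no_experiences):
        self.brain_state = brain_state                # the physical facts
        self.has_no_experiences = has_no_experiences  # a freely-set label

ordinary = ModelPerson(brain_state="normal", has_no_experiences=False)
zombie   = ModelPerson(brain_state="normal", has_no_experiences=True)

# The model is internally consistent: identical physical facts, opposite
# labels. That consistency is a fact about the model, not about the world.
print(ordinary.brain_state == zombie.brain_state)
```

That the flag can be toggled without contradiction inside the model shows nothing about whether anything in the world answers to the toggled description.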
2[anonymous]
Thanks, I like the paper. I understand the core idea is that to imagine a zombie (in the relevant sense of imagine) you would have to do it first person - which you can't do, because there is nothing first person to imagine. I find the argument for this persuasive. And this is just what I have been thinking:
0RichardChappell
This is an interesting proposal, but we might ask why, if consciousness is not really distinct from the physical properties, it is so easy to imagine the physical properties without imagining consciousness. It's not like we can imagine a microphysical duplicate of our world that's lacking chairs. Once we've imagined the atoms-arranged-chairwise, that's all it is to be a chair. It's analytic. But there's no such conceptual connection between neurons-instantiating-computations and consciousness, which arguably precludes identifying the two.
[-]FAWS190

But there's no such conceptual connection between neurons-instantiating-computations and consciousness

Only for people who haven't properly internalized that they are brains. Just like people who haven't internalized that heat is molecular motion could imagine a cold object with molecules vibrating just as fast as in a hot object.

2RichardChappell
Distinguish physical coldness from phenomenal coldness. We can imagine phenomenal coldness (i.e. the sensation) being caused by different physical states -- and indeed I think this is metaphysically possible. But what's the analogue of a zombie world in the case of physical heat (as defined in terms of its functional role)? We can't coherently imagine such a thing, because physical heat is a functional concept; anything with the same microphysical behaviour as an actual hot (cold) object would thereby be physically hot (cold). Phenomenal consciousness is not a functional concept, which makes all the difference here.
8FAWS
You are simply begging the question. For me philosophical zombies make exactly as much sense as cold objects that behave like hot objects in every way. I can even imagine someone accepting that molecular movement explains all observable heat phenomena, but still confused enough to ask where hot and cold come from, and whether it's metaphysically possible for an object with a lot of molecular movement to be cold anyway. The only important difference between that sort of confusion and the whole philosophical zombie business in my eyes is that heat is a lot simpler so people are far, far less likely to be in that state of confusion.
2RichardChappell
This comment is unclear. I noted that our heat concepts are ambiguous between what we can call physical heat (as defined by its causal-functional role) and phenomenal heat (the conscious sensations). Now you write: Which concept of 'hot' and 'cold' are you imagining this person to be employing? If the phenomenal one, then they are (in my view) correct to see a further issue here: this is simply the consciousness debate all over again. If the physical-functional concept, then they are transparently incoherent. Now, perhaps you are suggesting that you only have a physical-functional conception of consciousness, and no essentially first-personal (phenomenal) concepts at all. In that case, we are talking past each other, because you do not have the concepts necessary to understand what I am talking about.
1FAWS
You are over-extending the analogy. The heat case has a dichotomy (heat vs. molecular movement) analogous to first and third person, but if you try to replace it with the very same dichotomy the analogy breaks. The people I imagine are thinking about heat as a property of the objects themselves, so non-phenomenal, but using words like functional or physical would imply accepting molecular movement as the thing itself, which they are not doing. They are talking about the same thing as physical heat, but conceptualize it differently. No, and I imagine you also have some degree of separation between the concepts of physical heat and molecular movement even though you know them to be the same, so you can e.g. make sense of cartoons with freeze rays fueled by "cold energy". The fact that I understand "first-" and "third-person consciousness" to be the same thing doesn't mean I have no idea at all what people who (IMO confusedly) treat them as different things mean when they are talking about first-person consciousness.
6RichardChappell
Yes and no. It's a superficially open question what microphysical phenomenon fills the macro-level functional role used to define physical heat (causing state changes, making mercury expand in the thermometer, or whatever criteria we use to identify 'heat' in the world). So they can have a (transparently) functional concept of heat without immediately recognizing what fills the role. But once they have all the microphysical facts -- the Laplacean demon, say -- it would clearly be incoherent for them to continue to see a micro-macrophysical "gap" the way that we (putatively) find a physical-phenomenal gap.
1FAWS
(Knowledge that molecular movement is sufficient to explain observable macro-phenomena was assumed, so the first half of the reply does not apply.) You and I would agree on that, but presumably they would disagree on being incoherent. And I see no important distinction between their claim to coherence and that of philosophical zombies, other than simplicity of the subject matter.
5RichardChappell
You can show that they're incoherent by (i) explicating their macro-level functional conception of heat, and then (ii) showing how the micro functional facts entail the macro functional facts. The challenge posed by the zombie argument is to get the physicalist to offer an analogous response. This requires either (i) explicating our concept of phenomenal consciousness in functional terms, or else (ii) showing how functional-physical facts can entail non-functional phenomenal facts (by which I mean, facts that are expressible using non-functional phenomenal concepts). Do you think you can do one of these? If so, which one?
3[anonymous]
Okay, let's imagine this. First, to explicate "macro functional facts", we have the examples: So, you try to show someone that jiggling around the molecules of mercury will cause the mercury to expand. How exactly would you do this?

I'll try to imagine it. You present them with some mercury. You lend them an instrument which lets them see the individual molecules of the mercury. Then you start jiggling the molecules directly by some means (demonic powers, maybe), and the mercury expands. Or, alternatively, you apply what they recognize as heat to mercury, and you show them that the molecules are jiggling faster. So, in experience after experience, you show them that what they recognize as heat rises if and only if the molecules jiggle faster. This is not mere observation of correlation, because you are manipulating the molecules and the mercury by one means or another rather than passively observing.

But what they can say to you is, "I accept that there seems to be some sort of very tight relationship between the jiggling and the heat, but this doesn't mean that the jiggling is the heat. After all, we already know that there is a tight relationship between manipulations of the brain and conscious experiences, but that doesn't disprove dualism."

What could you say in response? Maybe: "if you jiggle the molecules, the molecules spread apart, i.e., the mercury expands." They could reply, "you are assuming that the molecules are identical with the mercury. But all I see is nothing but a tight correlation between where the molecules are and where the mercury is - similar to the tight correlation between where the brain is and where the conscious mind finds itself, but that doesn't disprove dualism."

How do you force a reluctant person to accept the identification of certain macro facts with certain micro facts? But of course, you don't really have to, because when people see such strong correlations, their natural inclination is to stop seeing two things and start s
0FAWS
They already agree on that, just like zombie postulators will (usually?) grant that a functional view will be sufficient to explain all outward signs of consciousness. Their postulated opinion that there is something more to the question is IMO only more transparently incoherent than the equivalent. If you were claiming that the functional view was insufficient to explain people writing about conscious experience, that would mean not sharing the same incoherence.

For example, assume I stubbed my toe. From my first-person perspective I feel pain. From a third-person perspective a nerve signal is sent to the brain and causes various parts of the neural machinery to do things. If I look at what I call "pain" from my first-person perspective I can discriminate various, but perhaps not all, parts of the sensation. I can feel where it comes from, spatially, and that the part of my body it comes from is that toe. From a third-person perspective this information must be encoded somewhere, since the person can answer the corresponding questions, or simply point, and perhaps we can already tell from neuroimaging? From an evolutionary perspective it's obvious why that information is present.

Back to first person: I strongly want it to stop. Also verifiable and explainable. I have difficulty averting my attention, find myself physically reacting in various ways unless I consciously stop it, I have pain-related associations like the word "ouch" or the color red, and so on. Nothing I can observe first person, except the base signal and baggage I can deduce to have a third-person correlate, stands out. The signal itself seems uninteresting enough that I'm not sure if I would even notice if it was replaced with a different signal as long as all the baggage was kept the same (and that didn't imply my memories changed to match). I'm not even completely sure that I really perceive such a base signal and it's not just the various types of baggage bleeding together. If such a base signal is t
1SilasBarta
Not so fast! That is possible, and that was EY's point here: And then he gave the later example of the flywheel, which we see as cooler than a set of metal atoms with the same velocity profile but which is not constrained to move in a circle:
2FAWS
Doesn't touch the point of the analogy though. Add "disordered" or something wherever appropriate.
2SilasBarta
I think it does. Richard was making the point that your analogy blurs an important distinction between phenomenal heat and physical heat (thereby regressing to the original dilemma). And it turns out this is important even in the LW perspective: the physical facts about the molecular motion are not enough to determine how hot you experience it to be (i.e. the phenomenal heat); it's also a function of how much you know about the molecular motion.

If you met someone who said with a straight face "Of course I can imagine something that is physically identical to a chair, but lacks the fundamental chairness that chairs in our experience partake of... and is therefore merely a fake chair, although it will pass all our physical tests of being-a-chair nevertheless," would you consider that claim sufficient evidence for the existence of a non-physical chairness?

Or would you consider other explanations for that claim more likely?

Would you change your mind if a lot of people started making that claim?

1RichardChappell
You misunderstand my position. I don't think that people's claims are evidence for anything. When I invite people to imagine the zombie world, this is not because once they believe that they can do so, this belief (about their imaginative capabilities) is evidence for anything. Rather, it's the fact that the zombie world is coherently conceivable that is the evidence, and engaging in the appropriate act of imagination is simply a psychological precondition for grasping this evidence.

That's not to say that whenever you believe that you've coherently imagined X, you thereby have the fact that X is coherently conceivable amongst your evidence. For this may not be a fact at all.

(This probably won't make sense to anyone who doesn't know any epistemology. Basically I'm rejecting the dialectical or "neutral" view of evidence. Two participants in a debate may be unable to agree even about what the evidence is, because sometimes whether something qualifies as evidence or not will depend on which of the contending views is actually correct. Which is to reiterate that the disagreement between me and Lukeprog, say, is about epistemological principles, and not any empirical matter of fact.)
4TheOtherDave
I agree that your belief that you've coherently imagined X does not imply that X is coherently conceivable. I agree that, if it were a fact that the zombie world were coherently conceivable, that could be evidence of something. I don't understand your reasons for believing that the zombie world is coherently conceivable.
-1RichardChappell
Are you assuming that in order for me to be able to justifiedly believe and reason from the premise that the zombie world is conceivable, I need to be able to give some independent justification for this belief? That way lies global skepticism.

I can tell you that the belief coheres well with my other beliefs, which is a necessary but not sufficient condition for my being justified in believing it. There's no good reason to think that it's false. (Though again, I don't mean to suggest that this fact suffices to make it reasonable to believe.) Whether it's reasonable to believe depends, in part, on facts that cannot be agreed upon within this dialectic: namely, whether there really is any contradiction in the idea.
1TheOtherDave
At the moment, I'm asking you what your reasons are for believing that the zombie world is coherently conceivable; I will defer passing judgment on them until I'm confident that I understand them, as I try to avoid judging things I don't understand. So, no, I'm not making that assumption, though I'm not rejecting that assumption either. Which of your other beliefs cohere better with a belief that the zombie world is coherently conceivable than with a belief that it isn't?
2Tyrrell_McAllister
If someone were to claim the following, would they be making the same point as you are making? "The non-psychological fact that 'SS0 + SS0 = SSSS0' is a theorem of Peano arithmetic is evidence that 2 added to 2 indeed yields 4. A psychological precondition for grasping this evidence is to go through the process of mentally verifying the steps in a proof of 'SS0 + SS0 = SSSS0' within Peano arithmetic. "This line of inquiry would provide evidence to the verifier that 2+2 = 4. However, properly speaking, the evidence would not be the psychological fact of the occurrence of this mental verification. Rather, the evidence is the logical fact that 'SS0 + SS0 = SSSS0' is a theorem of Peano arithmetic."
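Tyrrell_McAllister's Peano example can be checked mechanically. The sketch below (the representation and names are mine, purely illustrative) encodes numerals as iterated successors and verifies that SS0 + SS0 reduces to SSSS0 under the recursive definition of addition:

```python
# Peano numerals as nested tuples: 0 is (), S(n) wraps n in a tuple.
ZERO = ()
def S(n):
    return (n,)

def add(a, b):
    # Peano addition: a + 0 = a; a + S(b) = S(a + b)
    if b == ZERO:
        return a
    return S(add(a, b[0]))

two = S(S(ZERO))              # SS0
four = S(S(S(S(ZERO))))       # SSSS0
print(add(two, two) == four)  # mechanically verifying SS0 + SS0 = SSSS0
```

On the analogy, running this check is the psychological precondition; the evidence itself is the non-psychological fact that the reduction goes through.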
0[anonymous]
Yes, when you make statements about how easy it is to imagine this thing or that thing, you do indeed seem to me to be presenting those statements as evidence of something. If I've misunderstood that, then I'll drop the subject here.
-1Alicorn
I claim to be wearing blue today.
0RichardChappell
It's a restricted quantifier :-)
[-][anonymous]120

This is an interesting proposal, but we might ask why, if consciousness is not really distinct from the physical properties, is it so easy to imagine the physical properties without imagining consciousness? It's not like we can imagine a microphysical duplicate of our world that's lacking chairs.

But these kinds of imagining are importantly dissimilar. Compare:

1) imagine the physical properties without imagining consciousness

2) imagine a microphysical duplicate of our world that's lacking chairs

The key phrases are: "without imagining" and "that's lacking". It is one thing to imagine one thing without imagining another, and quite another to imagine one thing that's lacking another. For example, I can imagine a ball without imagining its color (indeed, as experiments have shown, we can see a ball without seeing its color), but I may not be able to imagine a ball that's lacking color.

This is no small distinction.

To bring (2) into line with (1) we would need to change it to this:

2a) imagine a microphysical duplicate of our world without imagining chairs

And this, I submit, is possible. In fact it is possible not only to imagine a physical duplicate of our world wit... (read more)

3RichardChappell
Right, that's the claim. I explain why I don't think it's question-begging here and here
7pjeby
How can you perform that step unless you've first defined consciousness as something that's other-than-physical? If the "consciousness" to be imagined were something we could point to and measure, then it would be a physical property, and would thus be duplicated in our imagining. Conversely, if it is not something that we can point to and measure, then where does it exist, except in our imagination?

The logical error in the zombie argument comes from failing to realize that the mental models we build in our minds do not include a term for the mind that is building the model. When I think, "Richard is conscious", I am describing a property of my map of the world, not a property of the world. "Conscious" is a label that I apply, to describe a collection of physical properties. If I choose to then imagine that "Zombie Richard is not conscious", then I am saying, "Zombie Richard has all the same properties, but is not conscious." I can imagine this in a non-contradictory way, because "conscious" is just a label in my brain, which I can choose to apply or not apply.

All this is fine so far, until I try to apply the results of this model to the outside world, which contains no label "conscious" in the first place. The label "conscious" (like "sound" in the famous tree-forest-hearing question) is strictly something tacked on to the physical events to describe a common grouping. In other words, my in-brain model is richer than the physical world - I can imagine things that do not correspond to the world, without contradiction in that more-expressive model.

For example, I can label Charlie Sheen as "brilliant" or "lunatic", and ascribe these properties to the exact same behaviors. I can imagine a world in which he is a genius, and one in which he is an idiot, and yet, he remains exactly the same and does the same things. I can do this because it's just my label -- my opinion -- that changes from one world to the other. The zombie world is no different: in one wor
9komponisto
And that is a question of cognitive science, is it not?
0RichardChappell
Ha, indeed, poorly worded on my part :-)
1SilasBarta
What was poor about it? The rest of your point is consistent with that wording. What would you put there instead so as to make your point more plausible?
1RichardChappell
Good question. It really needed to be stated in more objective terms (which will make the claim less plausible to you, but more logically relevant): It's a fact that a scenario containing a microphysical duplicate of our world but lacking chairs is incoherent. It's not a fact that the zombie world is incoherent. (I know, we dispute this, but I'm just explaining my view here.) With the talk of what's easily imaginable, I invite the reader to occupy my dialectical perspective, and thus to grasp the (putative) fact under dispute; but I certainly don't think that anything I'm saying here forces you to take my position seriously. (I agree, for example, that the psychological facts are not sufficient justification.)
4SilasBarta
Okay, but there was some evidence you were trying to draw on that you previously phrased as "it's easy to imagine p-zombies..." -- and presumably that evidence can be concisely stated, without having to learn your full dialectic perspective. Whether or not you think it's "not a fact that the zombie world is incoherent", there was something you thought was relevant, and that something was related (though not equivalent!) to the ease of imagining p-zombies. What was that? (And FWIW, I do notice you are replying to many different people here and appreciate your engagement.)
5Clippy
World-models that are deficient at this aspect of world representation in ape brains.
2quen_tin
That's true. The difference between chairs and consciousness is that 'chair' is a third-person concept, whereas 'consciousness' is a first-person concept. Imagining a world without consciousness is easy, because we never know whether there are consciousnesses in the world or not - consciousness is not empirical data; it's something we speculate others have by analogy with ourselves.
-1Peterdjones
Or in simpler terms: we can't see how particular physics produces particular consciousness, even if we accept in general that physics produces consciousness. The conceivability of p-zombies doesn't mean they are really possible, or that physicalism is false, but it does mean that our explanations are inadequate. Reductivism is not, as it stands, an explanation of consciousness, but only a proposal of the form an explanation would have.
0[anonymous]
My view on this question is similar to that of Eric Marcus (pdf). When you think you're imagining a p-zombie, all that's happening is that you're imagining an ordinary person and neglecting to imagine their experiences, rather than (impossibly) imagining the absence of any experience. (You can tell yourself "this person has no experiences" and then it will be true in your model that "HasNoExperiences(ThisPerson)" but there's no necessary reason why an inert predicate called "HasNoExperiences" must track whether or not people have no experiences.)
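The "inert predicate" point can be made concrete with a toy sketch (my own illustration, not Marcus's argument; all names are hypothetical): a mental model as a bag of propositions, where nothing ties the "HasNoExperiences" string to the physical propositions it is supposedly about.

```python
# Toy sketch (hypothetical): a mental model as a set of propositions.
# "HasNoExperiences" is just another string; asserting it never triggers
# a contradiction inside the model, because the model's consistency
# check knows nothing about what the predicate is supposed to track.

model = {
    "HasBrain(ThisPerson)": True,
    "BehavesNormally(ThisPerson)": True,
}

def consistent(m):
    # Only detects direct contradictions: P asserted alongside Not-P.
    return not any(m.get(p) and m.get("Not" + p) for p in list(m))

model["HasNoExperiences(ThisPerson)"] = True
assert consistent(model)  # the "zombie" model sails through the check
```

Nothing about this model's coherence tells us whether the predicate could actually be true of the person modeled.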
-1RichardChappell
Aleph basically has it right in his reply: 'water' is a special case because it's a rigid designator, picking out the actual watery stuff in all counterfactual worlds (even when some other stuff, XYZ, is the watery stuff instead of our water). Conceiving of the "twin earth" world (where the watery stuff isn't H2O) is indeed informative, since if this really is a coherent scenario then there really is a metaphysically possible world where the watery stuff isn't H2O. It happens that we shouldn't call that stuff "water", if it differs from the watery stuff in our world, but that's mere semantics. The reality is that there is a possible world corresponding to the one we're (coherently) conceiving of. For more detail, see Misusing Kripke; Misdescribing Worlds, or my undergrad thesis on Modal Rationalism

A few points:

  • Philosophy is (by definition, more or less) meta to everything else. By its nature, it has to question everything, including things that here seem to be unquestionable, such as rationality and reductionism. The elevation of these into unquestionable dogma creates a somewhat cult-like environment.

  • Often people who dismiss philosophy end up going over the same ground philosophers trod hundreds or thousands of years ago. That's one reason philosophers emphasize the history of ideas so much. It's probably a mistake to think you are so sma

... (read more)
7jwdink
It's not that people coming from the outside don't understand the language. I'm not just frustrated that Hegel uses esoteric terms and writes poorly. (Much the same could be said of Kant, and I love Kant.) It's that, when I ask "hey, okay, if the language is just tough, but there is content to what Hegel/Heidegger/etc is saying, then why don't you give a single example of some hypothetical piece of evidence in the world that would affirm/disprove the putative claim?", I get no answer. In other words, my accusation isn't that continental philosophy is hard, it's that it makes no claims about the objective hetero-phenomenological world. Typically, I say this to a Hegelian (or whoever), and they respond that they're not trying to talk about the objective world, perhaps because the objective world is a bankrupt concept. That's fine, I guess-- but are you really willing to go there? Or would you claim that continental philosophy can make meaningful claims about actual phenomena, which can actually be sorted through? I guess I'm wholeheartedly agreeing with the author's statement:
0mtraven
I think you are making a category error. If something makes claims about phenomena that can be proved/disproved with evidence in the world, it's science, not philosophy. So the question is whether philosophy's position as meta to science and everything else can provide utility. I've found it useful, YMMV. BTW here is the latest round of Heideggerian critique of AI (pdf) which, again, you may or may not find useful.
0jwdink
Hmm.. I suspect the phrasing "evidence/phenomena in the world" might give my assertion an overly mechanistic sound to it. I don't mean verifiable/disprovable physical/atomistic facts must be cited-- that would be begging the question. I just mean any meaningful argument must make reference to evidence that can be pointed to in support of/ in criticism of the given argument. Note that "evidence" doesn't exclude "mental phenomena." If we don't ask that philosophy cite evidence, what distinguishes it from meaningless nonsense, or fiction? I'm trying to write a more thorough response to your statement, but I'm finding it really difficult without the use of an example. Can you cite some claim of Heidegger's or Hegel's that you would assert is meaningful, but does not spring out of an argument based on empirical evidence? Maybe then I can respond more cogently.
0jwdink
Unless you think the "Heideggerian critique of AI" is a good example. In which case I can engage that.
-1mtraven
I'm not at all a fan of Hegel, and Heidegger I don't really understand, but I linked to a paper that describes the interaction of Heideggerian philosophy and AI which might answer your question. I still think you don't have your categories straight. Philosophy does not make "claims" that are proved or disproved by evidence (although there is a relatively new subfield called "experimental philosophy"). Think of it as providing alternate points of view. To illustrate: your idea that the only valid utterances are those that are supported by empirical evidence is a philosophy. That philosophy itself can't be supported by empirical evidence; it rests on something else.
2jwdink
Right, and I'm asking you what you think that "something else" is. I'd also re-assert my challenge to you: if philosophy's arguments don't rest on some evidence of some kind, what distinguishes it from nonsense/fiction?
-2mtraven
Hell, how would I know? Let's say "thinking" for the sake of argument. People think it makes sense. "Definitions may be given in this way of any field where a body of definite knowledge exists. But philosophy cannot be so defined. Any definition is controversial and already embodies a philosophic attitude. The only way to find out what philosophy is, is to do philosophy." -- Bertrand Russell
5lukeprog
A reply on just one point: I don't mean to make reductionism unquestionable, I'm just not making reductionism "my battle" so much anymore. Heck, for several years I spent my time arguing about theism. I'm just moving on to other subjects, and taking for granted the non-existence of magical beings, and so on. Like I say in my original post, I'm glad other people are working those out, and of course if I was presented with good reason to believe in magical beings or something, I hope I would have the honesty to update. Nobody's suggesting discrimination or criminal charges for not "believing in" reductionism.
1ohwilleke
"Often people who dismiss philosophy end up going over the same ground philosophers trod hundreds or thousands of years ago." Really? When I look at Aquinas or Plato or Aristotle, I see people mostly asking questions that we no longer care about because we have found better ways of dealing with the issues that made those questions worth thinking about. Scholastic discourse about the Bible or angels makes much less sense when you have a historical-critical context to explain how it emerged in the way that it did, and a canon of contemporaneous secular works to make sense of what was going on in their world at the time. Philosophical atomism is irrelevant once you've studied modern physics and chemistry. The notion that we have Platonic a priori knowledge looks pretty silly without a great deal of massaging as we learn more about the mechanism of brain development.

Also, not all new perspectives on the world have value. Continental philosophy and post-modernism are to philosophy what mid-20th century art music is to music composition. It is a rabbit hole that a whole generation of academics got sucked into and wasted their time on. It turned out that the future of worthwhile music was elsewhere, in people like Elvis and the Beatles and rappers and Nashville studios and Motown artists and resurrections of the greats of the classical and romantic periods in new contexts, and the tone poems and dissonant musics and other academic experiments of that era were just garbage. They lost sight of what music was for, just as the continental philosophers and post-modernist philosophers lost sight of what philosophy was for.

The language is impenetrable because they have nothing to say. I know what it is like to read academic literature, for example, in the sciences or economics, that is impenetrable because it is necessarily so, but that isn't it. People who use sophisticated jargon when it is really necessary are also capable of speaking much more clearly about the ess
5gjm
wat?

Here are a few pieces of mid-20th century art music. I'm taking "mid-20th-century" to mean 1930 to 1970. Some of them are quite dissonant. None of them is actually a tone poem, as it happens. They are all pieces that (1) I like, (2) are well regarded by the classical music "establishment", (3) are pretty accessible even to (serious) listeners of fairly conservative taste, (4) are still being performed, recorded, etc., (5) are clearly part of the mainstream of mid-20th-century art music, and (6) seem to me to show no lack of awareness of what music is for.

* 1930: Stravinsky, Symphony of Psalms
* 1936: Barber, Adagio for strings
* 1941: Tippett, A child of our time
* 1942: Prokofiev, Piano sonata #7
* 1945: Britten, Peter Grimes
* 1948: Strauss, Four last songs
* 1960: Shostakovich, String quartets #7, 8
* 1965: Bernstein, Chichester Psalms

(I make no claim that these are the best or most important works by their composers. I wanted things reasonably well spread out over the period in question, and subject to that picked fairly randomly.)

Are these all garbage? Perhaps you had in mind only music "weirder" than those: Second Viennese School twelve-tone music (though I'd call that early rather than mid 20th century), Cage-style experimentalism, and so forth. I'm not at all convinced that that stuff had no value or influence, but in any case it's far from all that was happening in western art music in the middle of the 20th century.
2g_pepper
Great list of 20th century compositions! 20th century art music gets an undeservedly bad rap, IMO. I would add a few more composers:

* 1930: Kurt Weill: Aufstieg und Fall der Stadt Mahagonny
* 1935: George Gershwin: Porgy and Bess
* 1940-1941: Olivier Messiaen: Quatuor pour la fin du temps
* 1944: Aaron Copland: Appalachian Spring

Kurt Weill's work might be considered theater music rather than art music, but I would argue that it is both of those things. Messiaen is admittedly avant garde and a bit outside of the mainstream, but is approachable by a wide range of audiences, including many who would not care for the composers of the Second Viennese School. Many of Messiaen's compositions could have been added to the list, so I picked one of the best known.
0gjm
For what it's worth, I omitted Weill and Gershwin because I thought ohwilleke might not consider them arty enough, Messiaen because I wasn't confident enough ohwilleke would concede that his music sounds good, and Copland because Appalachian Spring was the obvious work to use and I already had enough from around that time :-). Of course I agree that otherwise those works are all worthy of inclusion in any list like mine.
3TheAncientGeek
E.g. reinventing logical positivism!
0gjm
Logical positivism isn't even one hundred years old yet.
-1mtraven
See the paper on the Heideggerian critique of AI I posted earlier. Oh? I would think that one of the lessons of neuroscience is that we are in fact hardwired for a great many things. How do you know? That is, what evidence other than your lack of understanding do you have for this?
0alfredmacdonald
While I agree that it's important to avoid succumbing to these ideas, philosophy curricula tend to emphasize not just the history of ideas but the history of philosophers, which makes the process of getting up to speed for where contemporary philosophy is take entirely too long. It is not so important that we know what Augustine or Hume thought so much as why their ideas can't be right now. Also, "the history of ideas" is really broad, because there are a lot of ideas that by today's standards are just absurd. Including the likes of Anaximander and Heraclitus in "the history of ideas" is probably a waste of time and cognitive energy.

Use your rationality training, but avoid language that is unique to Less Wrong. Nearly all these terms and ideas have standard names outside of Less Wrong (though in many cases Less Wrong already uses the standard language).

Could you please write a translation key for these?

I think it would help LWers read mainstream philosophy, and people with philosophy backgrounds read LW.

4lukeprog
Not a bad idea, though it's far more complicated than termX = termY.
1atucker
Fair enough. I think that reading about how the terms differ would actually help a lot with getting a brief background in the subject, more than a direct but inaccurate one to one mapping.

I'd welcome more quality discussion of philosophical topics such as morality here. You occasionally see people pop up and say confused things about morality, like

It has been suggested that animals have less subjective experience than people. For example, it would be possible to have an animal that counts as half a human for the purposes of morality.

... that got downvoted, but I still get the impression that confused thinking like that pops up more often on the topic of morality than on others (except Friendly AI), and that Eliezer didn't do a good eno... (read more)

4lukeprog
Metaethics is my specialty, so I've got some 'dissolving moral problems' posts coming up, but I need to write some dependencies first.
3Oscar_Cunningham
Why is the quoted example confused? It seems to be that subjective experience has something to do with morality, and in such a way that having less of it would make you less morally significant.
1Emile
Possibly "something to do with morality", yes, but moral worth isn't equal to subjective experience to the point that you can use that to calculate the ratio between "how much some animal is worth" and "how much a human is worth". Or, maybe it is, but we'd need an actual argument, not just assuming it's so.

Think like a cognitive scientist and AI programmer.

Is it possible to think "like an AI programmer" without being an AI programmer? If the answer is "no", as I suspect it is, then doesn't this piece of advice basically say, "don't be a philosopher, be an AI programmer instead"? If so, then it directly contradicts your point that "philosophy is not useless".

To put it in a slightly different way, is creating FAI primarily a philosophical challenge, or an engineering challenge?

3TimS
Creating AI is an engineering challenge. Making FAI requires an understanding of what we mean by Friendly. If you don't think that is a philosophy question, I would point to the multiplicity of inconsistent moral theories throughout history to try to convince you otherwise.
0Bugmaster
Thanks, that does make sense. But, in this case, would "thinking like an AI programmer" really help you answer the question of "what we mean by Friendly"? Of course, once we do get an answer, we'd need to implement it, which is where thinking like an AI programmer (or actually being one) would come in handy. But I think that's also an engineering challenge at that point. FWIW, I know there are people out there who would claim that friendliness/morality is a scientific question, not a philosophical one, but I myself am undecided on the issue.
3Vaniver
If you don't think like an AI programmer, you will be tempted to use concepts without understanding them well enough to program them. I don't think that's reduced to the level of 'engineering challenge.'
0Bugmaster
Are you saying that it's impossible to correctly answer the question "what does 'friendly' mean?" without understanding how to implement the answer by writing a computer program? If so, why do you think that? Edit: added "correctly" in the sentence above, because it's trivially possible to just answer "bananas!" or something :-)
7DSimon
I don't think the division is so sharp as all that. Rather, what Vaniver is getting at, I think, is that one is capable of correctly and usefully answering the question "What does 'Friendly' mean?" in proportion to one's ability to reason algorithmically about subproblems of Friendliness.
2Bugmaster
I see, so you're saying that a philosopher who is not familiar with AI might come up with all kinds of philosophically valid definitions of friendliness, which would still be impossible to implement (using a reasonable amount of space and time) and thus completely useless in practice. That makes sense. And (presumably) if we assume that humans are kind of similar to AIs, then the AI-savvy philosopher's ideas would have immediate applications, as well. So, that makes sense, but I'm not aware of any philosophers who have actually followed this recipe. It seems like at least a few such philosophers should exist, though... do they?
0DSimon
Yes, or more sneakily, impossible to implement due to a hidden reliance on human techniques for which there is as-yet no known algorithmic implementation. Programmers like to say "You don't truly understand how to perform a task until you can teach a computer to do it for you". A computer, or any other sort of rigid mathematical mechanism, is unable to make the 'common sense' connections that a human mind can make. We humans are so good at that sort of thing that we often make many such leaps in quick succession without even noticing! Implementing an idea on a computer forces us to slow down and understand every step, even the ones we make subconsciously. Otherwise the implementation simply won't work. One doesn't get as thorough a check when explaining things to another human. Philosophy in general is enriched by an understanding of math and computation, because it provides a good external view of the situation. This effect is of course only magnified when the philosopher is specifically thinking about how to represent human mental processes (such as volition) in a computational way.
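The programmers' adage can be made concrete with a toy example (mine, not DSimon's; all names and numbers are hypothetical). The human instruction "pick the friendliest option" sounds complete, but a program cannot run it until "friendliness" is reduced to an explicit measure:

```python
# Toy illustration: the phrase "pick the friendliest option" smuggles in
# an entire unspecified step -- the scoring of "friendliness". Writing
# the code forces the programmer to commit to a definition, however crude.

def pick_friendliest(options, friendliness):
    # `friendliness` must be a concrete scoring function; a human
    # listener would have supplied this step "by common sense".
    return max(options, key=friendliness)

# A crude, explicit stand-in for the hidden human judgment:
options = ["plan_a", "plan_b"]
crude_score = {"plan_a": 0.2, "plan_b": 0.9}
assert pick_friendliest(options, crude_score.get) == "plan_b"
```

The point is not that the scoring table is adequate -- it obviously isn't -- but that the computer's rigidity exposes exactly which step was left unspecified.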
2Bugmaster
I agree with most of what you said, except for this: Firstly, this is an argument for studying "human techniques", and devising algorithmic implementations, and not an argument for abandoning these techniques. Assuming the techniques are demonstrated to work reliably, of course. Secondly, if we assume that uploading is possible, this problem can be hacked around by incorporating an uploaded human into the solution.
1DSimon
Indeed, I should have been more specific; not all processes used in AI need to be analogous to humans, of course. All I meant was that it is very easy, when trying to provide a complete spec of a human process, to accidentally lean on other human mental processes that seem on zeroth-glance to be "obvious". It's hard to spot those mistakes without an outside view. To a degree, though I suspect that even in an uploaded mind it would be tricky to isolate and copy-out individual techniques, since they're all likely to be non-locally-cohesive and heavily interdependent.
0Bugmaster
Right, that makes sense. True, but I wasn't thinking of using an uploaded mind to extract and study those ideas, but simply to plug the mind into your overall architecture and treat it like a black box that gives you the right answers, somehow. It's a poor solution, but it's better than nothing -- assuming that the Singularity is imminent and we're all about to be nano-recycled into quantum computronium, unless we manage to turn the AI into an FAI in the next 72 hours.
0Vaniver
Endorsed.
3lessdazed
An analogy: http://eccc.hpi-web.de/report/2011/108/ Need it be primarily one or the other? But if I must pick one, I pick philosophy.
0Bugmaster
I'm afraid I don't see how this article is analogous. The article points out that computational complexity puts a very real limit on what can be computed in practice. Thus, even if you'd proved that something is computable in principle, it may not be computable in our current Universe, with its limited lifespan. You can apply computational complexity to practical problems (f.ex., devising an optimal route for inspecting naval buoys) as well as to theoretical ones (f.ex., discarding the hypothesis that the human brain is a giant lookup table). But these are still engineering and scientific concerns, not philosophical ones. I still don't understand why. If you want to know the probability of FAI being feasible at all, you're asking a scientific question; in order to answer it, you'll need to formulate a hypothesis or two, gather evidence, employ Bayesian reasoning to compute the probability of your hypothesis being true, etc. If, on the other hand, you are trying to actually build an FAI, then you are solving a specific engineering problem; of course, determining whether FAI is feasible or not would be a great first step. So, I can see how you'd apply science or engineering to the problem, but I don't see how you'd apply philosophy.
0lessdazed
To fill in the content the term "FAI" stands for, science isn't enough. Engineering is by guess and check, I suppose, but not really.
0Bugmaster
Sorry, I couldn't parse your comment at all; I'm not sure what you mean by "content". My hunch is that you meant the same thing as TimS, above; if so, my reply to him should be relevant. If not, my apologies, but could you please explain what you meant?
0lessdazed
I meant what I think he did, so you got it.

This feels more like a style guide than a "vision of how to do philosophy".

4Perplexed
I agree, though it might be redeemed by (1) an argument why this style is the best for doing philosophy successfully, and (2) an explanation of how success at doing philosophy ought to be measured and why anyone should seek this kind of success. The question that Prase asks, nearby, seems to be related.
1lukeprog
All throughout the 'style guide', I gave reasons for why these suggestions matter. Then in the penultimate paragraph, I repeated these reasons.
6Wei Dai
I'd phrase the complaint this way: the "vision" part said much about how to communicate philosophical results once you've obtained them, but little about how to obtain those results in the first place. Out of the 11 items, only two (6 and 8) are about the latter instead of the former. Of course how to obtain philosophical results is a much harder problem, so you can't really be blamed for not having a huge amount to say on that. It's really just an expectation management issue. If you declare a "vision of how to do philosophy", people will naturally expect more than writing tips.
0lukeprog
Yes, I understand, but the subject of how to do philosophy is, like, half of Less Wrong. That's why I kept talking about dissolving semantic problems, reductionism, thinking like a cognitive scientist and an AI programmer. Those are all part of my vision of how to do philosophy, and I talked about them in the post and linked to articles on those subjects, but of course I can't repeat all of that content in this little blog post. Don't be fooled by the count of items on the list devoted to style vs. content. Item #6 is really, really important, and covered in detail throughout the archives of Less Wrong.
4Wei Dai
In that case, the problem is ironically one of style. Given that #6 is really, really important, you didn't indicate its importance in any way stylistically. It's listed smack in a middle of a bunch of writing tips. It's not bolded or italicized. It doesn't link or cite any other articles.
2lukeprog
Sheesh you guys are picky. :) Seriously though, I've improved the original post in response. Thanks.

Seems like an appropriate article to relay a bit of wisdom from E.T. Jaynes.

Jaynes quotes a colleague: “Philosophers are free to do whatever they please, because they don’t have to do anything right.”

Use your rationality training, but avoid language that is unique to Less Wrong. All these terms and ideas have standard names outside of Less Wrong

Most, probably not all. Universal statements like this are brittle and rarely correct.

3lukeprog
True, dat. Fixed. Thanks.

justification bottoms out in the lens that can see its flaws

This statement seems misleading, since justification doesn't actually "hit bottom", doesn't stop. For contrast, a quotation from the post:

So what I did in practice, does not amount to declaring a sudden halt to questioning and justification. I'm not halting the chain of examination at the point that I encounter Occam's Razor, or my brain, or some other unquestionable. The chain of examination continues - but it continues, unavoidably, using my current brain and my current grasp on

... (read more)
1SilasBarta
I think that sentence was written just to include the names of the articles when linking them, not because it should be taken literally.
0Vladimir_Nesov
I don't expect Luke misunderstood the posts in question to the point of making this error, so I'm not talking about his intention behind writing the statement. I'm merely pointing out that, as written, it's somewhat misleading, whatever the circumstances that shaped it.
1lukeprog
I'm struggling to come up with a very quick way to say this more accurately in the post. Can you think of anything?
4Vaniver
Does "justification rests in the lens that can see its flaws" work?
2lukeprog
I chose this one, thanks.
1Vladimir_Nesov
Say, "The process of justification should never stop; not even flaws in the lens itself are to be overlooked."

Why would it be bad for philosophy to work (primarily) with intuitions? And why would philosophy need empirical evidence? (Relating to the point in the linked post on criticism of dualists not having any evidence). Empirical evidence is not what is (primarily) used in mathematics. If everything could be solved with empirical evidence, there would be no need for philosophy. I don’t see how scientific evidence is better than intuition. Or even possible without them... In case you mean not only empirical evidence but also logical/ mathematical (?) evidence: W

... (read more)
0TAG
Before saying intuitions are bad, you need to show that you can manage without them -- entirely.

as long as you are human there is no final victory.

Hm, that makes a nifty quote.

One thing I mean by saying that philosophers could benefit from 'thinking like AI programmers' is that forcing yourself to think about the algorithm that would generate a certain reality can guard against superstition, because magic doesn't reduce to computer code.

I recently came across Leibniz saying much the same thing in a passage where he imagines a future language of symbolic logic that had not yet been invented:

The characters would be quite different from what has been imagined up to now... The characters of this script should serve invention and j

... (read more)
3Vladimir_Nesov
I appreciate this disclaimer.
0RHollerith
What I take Leibniz to have meant was that when he uses math he is much less prone to self-deception and to mistakenly believing he's had an insight than when he uses natural language, so he tried (and failed) to extend math so that he could use it to talk about or think about all of the things he uses language to talk about, including human and personal things. Gottlob Frege, the creator of predicate logic, had a similar ambition.

Note that creating FAI that will extrapolate the volition of the humans requires using math (broadly construed) or formal language to talk about some human things. In particular, you must formally define "human", "volition" and the extrapolation process. The fact that Leibniz and Frege did not get very far with their ambition (although the creation of predicate logic strikes me as some progress) suggests that for us to teach ourselves how to do that might require nontrivial effort -- although I tend to think that we have a head start in some of our mathematical tools. In particular, the AIXI formalism and (to a lesser extent) some of the more intellectually-deep traditions we have for designing programming languages and writing programs strike me as superior to any of the "head starts" (including predicate logic) that Leibniz or Frege (who died in 1925) had at their disposal. (Pearl's technical explanation of causality is another thing that sort of seems to me that it might possibly somehow assist in this enterprise.)

SIAI has not included me in their private or not-completely-public discussions of Friendliness theory to any significant degree, so they might have insights that render my speculations here obsolete.
1RHollerith
Another person who seems to have had the same general ambition as Leibniz and Frege is the Free Software Foundation's lawyer, the man who, with Richard Stallman, created the General Public License: Eben Moglen. Here's Moglen in 2000:

What is an example of a magical category being used in philosophy? (That is, a convenient handle that I can use to represent the term, 'magical category' when I read it).

1lukeprog
Yudkowsky gives some good examples. Or, consider "objectification." Really, they are ubiquitous in philosophy.
6Tyrrell_McAllister
I'm not sure that it's fair to apply the "magical categories" critique to philosophers who discuss "objectification". Nussbaum would have committed the fallacy of magical categories if she had thought that her discussion would suffice to teach an AI to recognize instances of objectification. But the most that she would purport to have done is to teach humans in her intellectual community how to recognize instances of objectification. So she is allowed the "anthropomorphic optimism" that would be fallacious if she were trying to train an AI. And probably, after reading her article, you could do a very reliable job of categorizing (what she would call) instances of objectification.
0lukeprog
Fair enough; it's a magical category in one sense, and not a magical category in another sense.
0byrnema
In what sense is it a magical category?
6Strange7
Would it qualify as ironic if "magical categories" turned out to be a member of the set of all sets that contain themselves as members?
1Jack
I'm not sure I believe in non-magical categories.
0byrnema
I guess what is ironic is that if "magical categories" are themselves magical, we could never know that they are. Further, not knowing the meaning of a magical category (not even knowing if the meaning is knowable), it is possible that the set of all sets that contain themselves is magical.

I'm trying to guess from the context, but I think that being a magical category means that there is no universal algorithm that could be applied to determine if an object x is contained within it. Suppose that this is the definition, and that being a magical category strongly means that there is also no algorithm to determine if an object x is not contained within it.

All this to quip that if magical categories are magical, then they are contained in the set of all sets containing themselves. If magical categories are strongly magical, they are contained in and contain the set of sets containing themselves. (Since, using the property of strongness, it would be impossible to determine if the set-of-sets-containing-themselves are magical or not, making the set-of-sets-containing-themselves magical.)
4byrnema
Those examples don't have citations.* I would like to see how magical categories actually appear in an argument in a philosophy article. This is how I like to handle assimilating generalizations: I will accept a generalization as true, but I tie it to an actual example. That way, if the generalization is later challenged, I can look to see if the context/meaning/framing is different. I am also curious as to whether there is any self-awareness of this problem of magical categories in philosophy.

* I see now that your post did. However, I still haven't studied enough of your post to gather the details of the magical category there.

Philosophy was for a long time the leading discipline and a cornerstone of the rationalistic ideals you Less Wrong folks seem to follow - from Aristotle to Kant to Wittgenstein. But then there is a growing number of philosophers who start questioning rationality itself, or the attempt to create a "system" to explain everything (from Nietzsche through Freud to Derrida or Foucault).

For people who seem so concerned to 'figure it all out' based on rational thinking, it baffles me how little effort I see to understand what fundamental flaws others have see... (read more)

I'm curious how many people here think of rationalism as synonymous with something like Quinean Naturalism (or just naturalism/physicalism in general). It strikes me that naturalism/physicalism is a specific view one might come to hold on the basis of a rationalist approach to inquiry, but it should not be mistaken for rationalism itself. In particular, when it comes to investigating foundational issues in epistemology/ontology a rationalist should not simply take it as a dogma that naturalism answers all those questions. Quine's Epistemology Naturalized i... (read more)

2TAG
There's a way of doing rationality which is maximally open and undogmatic, but that isn't the Less Wrong way. There's a way of doing naturalism, where you first make sure that science has a firm epistemic foundation and only then accept its results, and that's not the Less Wrong way either. If you look at this passage, it generalises. Logic and probability and interpretation and theorisation and all that are also outputs of the squishy stuff in your head. So it seems that epistemology is not first philosophy, because it is downstream of neuroscience.
1Polytopos
I find this claim interesting. I’m not entirely sure what you intend by the word “downstream”, but I will interpret it as saying that logic and probability are epistemically justified by neuroscience. In particular, I understand this to include the claim that a priori intuition unverified by neuroscience is not sufficient to justify mathematical and logical knowledge. If by "downstream" you have some other meaning in mind, please clarify. However, I will point out that you can't simply mean causally downstream, i.e., the claim that intuition is caused by brain stuff, because a merely causal link does not relate neuroscience to epistemology (I am happy to expand on this point if necessary, but I'll leave it for now).

So given my reading of what you wrote, the obvious question to ask is: do we have to know neuroscience to do mathematics rationally? This would be news to Bayes, who lived in the 18th century when there wasn’t much neuroscience to speak of. Your view implies that Bayes (or Euclid, for that matter) were unjustified epistemically in their mathematical reasoning because they didn’t understand the neural algorithms underlying their mathematical inferences.

If this is what you are claiming, I think it’s problematic on a number of levels. First, it faces a steep initial plausibility problem in that it implies mathematics as a field was unjustified for most of its thousands of years of history until some research in empirical science validated it. That is of course possible, but I think most rationalists would balk at seriously claiming that Euclid didn't know anything about geometry because of his ignorance of cognitive algorithms. But a second, deeper problem affects the claim even if one leaves off historical considerations and only looks at the present state of knowledge. Even today, when we do know a fair amount about the brain and cognitive mechanisms, the idea that math and logic are epistemically grounded in this knowledge is viciously circular. Any sophist

As a scientist, not a philosopher, I still don't see much virtue in writing "simply". This is a particularly Anglo-Saxon tradition, whereas I (and most of the German-Russian tradition, AFAIK) have always felt that when you try writing simply you lose at least speed of the train of thought and quite likely some of your arguments' power. "No math - no science" is a specific example, but not the only one.

Sorry, I'm not a professional philosopher but did study it at university and still retain an interest in it. I was interested to read this statement. "Many philosophers have been infected (often by later Wittgenstein) with the idea that philosophy is supposed to be useless.".

I take that to mean you consider Tractatus Logico-Philosophicus to be his better work. I do too and have been mocked for saying so when I was a student. I was taught by some very famous professors at a well placed university but I wasn't much of a student, not the ... (read more)

Peter Hacker is not somebody who thinks "philosophy should be useless." Of the list of "basics" that you cite Peter Hacker would agree that "things are made of atoms", "that many questions don't need to be answered but instead dissolved" and "that language is full of tricks." He also explicitly states that "Philosophical Foundations of Neuroscience" should be judged on its usefulness (which is why methodological concerns are relegated to the back pages). Indeed, it seems you equate dissolving prob... (read more)

1lukeprog
You're right. I mis-remembered Hacker's positions. I've updated the original post. Thanks for the correction.

lukeprog wrote "philosophers are 'spectacularly bad' at understanding that their intuitions are generated by cognitive algorithms." I am pretty confident that minds are physical/chemical systems, and that intuitions are generated by cognitive algorithms. (Furthermore, many of the alternatives I know of are so bizarre that given that such an alternative is the true reality of my universe, the conditional probability that rationality or philosophy is going to do me any good seems to be low.) But philosophy as often practiced values questioning ever... (read more)

0quen_tin
I agree, and I really doubt philosophers fail to deeply question their own intuitions.

It may just be my physician's bias, but "diseased" seems like a very imprecise term. The title would be more informative and more widely quoted with another word choice. In medicine you would not find that word in an article title.

There needs to be more cross-talk between philosophy and science. It is not an "either or" choice; we need an amalgam of the two. As a scientist I object strongly to your statement "Second, if you want to contribute to cutting-edge problems, even ones that seem philosophical, it's far more productive to study math and science than it is to study philosophy." Combined approaches are what is needed, not abandonment of philosophy.

5FAWS
It's a callback to an earlier Less Wrong article

It is not easy to overcome millions of years of brain evolution [...]

Since evolution, in particular, formed our moral inclination and our reasoning ability, this statement sounds a bit unfair/one-sided.

0lukeprog
What do you mean?
-1Vladimir_Nesov
From The Gift We Give To Tomorrow:
3lukeprog
I don't know what you're getting at. Is there a problem with the statement, "It's not easy to overcome millions of years of brain evolution"?
1Vladimir_Nesov
You don't want to "overcome" a lot of what millions of years of brain evolution have formed, only some things.
-8XiXiDu

Look at your intuitions from the outside, as cognitive algorithms.

Which Less Wrong post do I need to read to find out how to do that? Also is there a hard definition of an AI programmer?

The difference between much of mainstream philosophy and LessWrongian philosophy: http://www.lulztruck.com/43901/the-thinker-and-the-doer/

-1Peterdjones
Out of the way! The Singularity is coming! http://www.dismuse.com/wp-content/uploads/2010/10/Glacier2_p.jpg
[-]zaph00

This is my viewpoint as a philosophical layman. I've liked a lot of the philosophy I've read, but I'm thinking about what the counter-proposal to your post might be, and I don't know that it wouldn't result in a better state of affairs. I don't believe we'd have to stop reading writers from prior eras, or keep reinventing the wheel for "philosophical" questions. But why not just say, from here on out, the useful bits of philosophy can be categorized into other disciplines, and the general catch-all term is no longer warranted? Philosophy cov... (read more)

2mytyde
The decision of what disciplines belong to "science" or "humanities", "art" or "engineering" is significantly a political decision. Indeed, it is a political question which disciplines exist in which organization and how they fit together. Rationalist philosophers just need to call themselves "Psychologists of Quantitative Reasoning" in order to get funding.

In the current political era, it is fashionable to claim 'objectivity' in one's profession despite frequently inquiring into non-empirical matters. This claim of objectivity often serves to hide one's personal biases which, if made explicit, might otherwise be useful in interpretation of research. The drive to be unconcerned with the political implications of one's work is the ideal paradigm for economic exploitation of a class of highly-educated scientists by institutions and people who control how funding is utilized to enable, disable, or actualize research and engineering. Fox News is a perfect example of brutally skewing scientific evidence towards political ends: "How Roger Ailes Built the Fox News Fear Factory" http://www.rollingstone.com/politics/news/how-roger-ailes-built-the-fox-news-fear-factory-20110525

(For those of you who would: instead of voting me down because you dislike these ideas, how about trying to engage with them?)

The traditional definition of philosophy (in Greek) implied that philosophy's purpose was not to convey information, but to produce a transformation in the individual who practices it. In that sense, it is not supposed to be "useless", but it may appear so to someone who is looking to it for "information" about reality. By this standard, very little of what goes on in academic Philosophy departments today would qualify.

-2mytyde
I would charge that the same 'institutionalization' which has neutered psychology has changed philosophy into a funding-chaser. Psychology was invented as a means of studying society so that the social situation could be improved: Freud was a socialist. Because many disciplines have moved to institutions, they have less freedom to pursue research and less freedom to depart from the views of their institutions. Also, because funding is dependent on people who have ulterior motives in what they choose to fund, it would be almost impossible for a school of psychology to develop which says, for instance "there's something seriously wrong with our society" because they would be hard-pressed to find research funding. That the general population surrenders so much initiative to scientists who are so strongly influenced by veiled politics is the true tragedy of our time.
0wedrifid
That sounds more likely "Sociology". If you are actually trying to talk about Psychology then your claim seems wrong.
0mytyde
No, my claim is literal. The role of the discipline 'psychology' has shifted over time away from what we now consider 'sociology' and towards an individualistic approach to mental health. The assumption didn't used to be that mental problems were profoundly unique to the individual, but now mainstream psychology does not take into account the sociological factors which affect mental health in all situations. Some sources to elaborate the transformation of the discipline are historiologists & sociologists like Immanuel Wallerstein and Michel Foucault, but there are plenty of non-mainstream psychologists who still practice holistic psychology like Helene Shulman & Mary Watkins.
-2MugaSofer
Really? I often hear dire warnings about how our society e.g. contributes to suicides by publicizing them. These are generally billed as coming from experts in the field. Full disclosure: I live in Ireland; it may be different in other countries. [EDIT: typos]

"3. Philosophy has grown into an abnormally backward-looking discipline."

Indeed. One of the salutary roles that philosophy served until about the 18th century (think e.g. "natural philosophy") was to serve as an intellectual context within which new disciplines could emerge and new problems could be formulated into coherent complexes of issues that became their own academic disciplines.

In a world where cosmology and quantum physics and neuroscience and statistics and scientific research methods and psychology and "law and whatever"... (read more)

[-]Liosis-10

The philosophers I study under criticise the sciences for not being rigorous enough. The problem goes both ways. The sciences often do not understand the basic concepts from which they are functioning. A good scientist will also have a rudimentary understanding of philosophy, in order to fiddle with the background epistemology of their work.

You are correct in thinking that Continental philosophy is not continuous with the sciences, because it is the core of the humanities and as such being continuous with the sciences would be unnatural for it. I still thi... (read more)

The philosophers I study under criticise the sciences for not being rigorous enough.

Acid test 1: Are they complaining about experimenters using arbitrary subjective "statistical significance" measures instead of Bayesian likelihood functions?

Acid test 2: Are they chiding physicists for not decisively discarding single-world interpretations of quantum mechanics?

Acid test 3: Are all of their own journals open-access?

It may be ad hominem tu quoque, but any discipline that doesn't pass the three acid tests has not impressed me with its superiority to our modern, massively flawed academic science.
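Acid test 1 is concrete enough to sketch in code. The coin-flip data and both hypotheses below are invented for illustration; the point is only that a likelihood ratio reports the strength of the evidence directly, where a significance test reports whether an arbitrary threshold was crossed.

```python
# Toy comparison for acid test 1: weigh two hypotheses about a coin
# by their Bayesian likelihood ratio instead of a p-value threshold.
# All numbers here are made up for the example.
from math import comb

def binom_pmf(k: int, n: int, p: float) -> float:
    """Probability of exactly k heads in n flips of a coin with bias p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

n, k = 100, 60          # observed: 60 heads in 100 flips
h0, h1 = 0.5, 0.6       # H0: fair coin; H1: coin biased toward heads at 0.6

ratio = binom_pmf(k, n, h1) / binom_pmf(k, n, h0)
print(ratio)  # ~7.5: the data favor H1 about 7.5 to 1
```

A significance test on the same data only reports that the result crosses (or fails to cross) the conventional 0.05 cutoff against H0; the likelihood ratio additionally says how strongly the data favor one concrete hypothesis over another, and it plugs directly into Bayes' rule once priors are chosen.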

9jimrandomh
(2) appears to reject any discipline that ignores quantum mechanics entirely, or which pays attention to quantum mechanics but whose practitioners consider themselves too confused about it to challenge the consensus position. (3) appears to reject almost all of academia. In particular, it rejects disciplines stuck at the common equilibrium of closed-access journals combined with authors publishing the same articles on their own web pages.
3quen_tin
Acid test (1) and (2): this is where dogma starts.
-1Broggly
I get the problem with (2), although mostly because I haven't thought about quantum mechanics enough to have an opinion, but (1) is no more dogma than "DNA is transcribed to mRNA which is then translated as an amino acid sequence". There are lots of good reasons to investigate the actual likelihood of the null and alternative hypotheses rather than just assuming it's about 95% likely it's all just a coincidence. Of course, until this becomes fairly standard, doing so would mean turning your paper into a meta-analysis as well as the actual experiment, which is probably hard work and fairly boring.
-1Will_Newsome
ETA: The following comment is mostly off-base due to the reason pointed out in JGWeissman's reply. Mea culpa. Ugh, it's not like many worlds is even the most elegant interpretation: http://arxiv.org/abs/1008.1066 . Talk of MWI is kind of misleading if people haven't already thought about spatially infinite universes for more than 5 minutes, which they mostly haven't. I realize that world-eater supporters are almost definitely wrong, but I'm really suspicious of putting people into the irrational bin because they've failed according to a metric that is knowably fundamentally flawed. I doubt the utility lost via setting a precedent (even if you're damn well sure they're wrong in this case) of actually figuring out ways a person could have fundamentally correct epistemology is more than the utility lost by disregarding everyone and going all Only Sane Man. But my experience is with SIAI and not SL4. Maybe I'd think differently if I was Quirrell.
6JGWeissman
The proposed theory does not seem to be an alternative to MW QM so much as a possible answer to "What adds up to MW QM?". In this light, does pushing MW over Collapse really warrant an "ugh" response?
-2[anonymous]
[insert pun about philosophers dropping acid]
6Emile
This doesn't do much to convince me; for example in these bits you could substitute "philosophy" with "theology", and it would sound the same: The bit about "take what you can" and "every piece comes with a centuries long dialogue" especially could be said of a lot of things (law, for example) and it's not clear why those are good things in themselves.

Philosophy is usually negative. Change my mind.

1Teerth Aloke
Does the work of FHI come under Philosophy?
1Hi there
No

What's weird is that you begin by criticizing continental philosophy. Then you say that philosophers do not understand how their brains work and what their intuitions are (linking to an article which explains that our intuition of reality is not reality). But one of the main topics of continental philosophy, long before cognitive science existed, was to argue that we are in a sense trapped inside our cognitive situation with no way out, and for that reason, we cannot know what reality-in-itself is. It feels like you rediscovered Kant... I agree that continental ... (read more)

7cousin_it
These two statements are only superficially similar. If some of our intuitions are sometimes wrong, that doesn't imply that none of our perceptions can give any information about reality.
3quen_tin
They are very similar. Kant does not claim that we have no information about reality, and the linked article does not only say that we are sometimes wrong with our intuition... This statement, for example, is very "Kantian": Before you can question your intuitions, you have to realize that what your mind's eye is looking at is an intuition - some cognitive algorithm, as seen from the inside - rather than a direct perception of the Way Things Really Are.
5Tyrrell_McAllister
Kant says that we can know about the representations that appear in the manifold of appearances provided to us by our senses. But, in his view, we can know nothing, zip, zilch, nada, about whatever it is that stands behind those sensory representations. In a sense, Kant takes the map/territory distinction to an extreme. For Kant, the territory is so distinct from the map that we know nothing about the territory at all. All of our knowledge is only about the map.
1quen_tin
* That is also what the linked article seems to entail. The statement I quoted, as I understand it, says that all the information we have about reality is the result of "some cognitive algorithm" (= the representations that appear (...) provided by our senses).
* The map is certainly a kind of information about the territory (though we cannot know it with certainty). Strictly speaking, Kant does not say we have no information about reality; he says we cannot know if we have it or not.
1Tyrrell_McAllister
I don't think that Kant makes the distinction between "knowing" and "having information about" that you and I would make. If he doesn't outright deny that we have any information about the world beyond our senses, he certainly comes awfully close. On A380, Kant writes, And, on A703/B731, he writes, (Emphasis added. These are from the Guyer–Wood translation.)
0Fivehundred
Does anyone smell irony in this whole discussion? Considering the OP specifically derided the whole "discussion of old, dead guys" thing? Ah, I wish this wasn't a three year old post. I have no idea how this site works yet, so who knows whose attention I'll attract by doing this?
1Tyrrell_McAllister
At least the person whose comment you're replying to sees your reply, so you weren't speaking entirely into the void :).
0quen_tin
Ok, it depends what you mean by "information about". My understanding is that we have no information on the nature of reality, which does not mean that we have no information from reality.
1Tyrrell_McAllister
I agree that we get information from reality. And I think that we agree that our confidence that we get information from reality is far less murky than our concept of "the nature of reality". Kant, being a product of his times, doesn't seem to think this way, though. Maybe, if you explained the modern information-theoretic notion of "information" to Kant, he would agree that we get information about external reality in that sense. But I don't know. It's hard to imagine what a thinker like Kant would do in an entirely different intellectual environment from the one in which he produced his work. I'm inclined to think that, for Kant, the noumena are something to which it is not even possible to apply the concept of "having information about".
0TheAncientGeek
Suggestion: knowledge of what a thing is in itself is like information that is not coded in any particular scheme.
0[anonymous]
I suppose it's a virtue of that interpretation that 'information that cannot be coded in any particular scheme' is a conceptual impossibility (assuming that's what you meant).
0TheAncientGeek
Yes. You can make such an interpretation of the Ding an sich. For my money, that lessens its impact.
1cousin_it
If you are a cognitive algorithm X that receives input Y, this allows you to "know" a nontrivial fact about "reality" (whatever it is): namely, that it contains an instance of algorithm X that receives input Y. The same extends to probabilistic knowledge: if in one "possible reality" most instances of your algorithm receive input Y and in another "possible reality" most of them receive input Z, then upon seeing Y you come to believe that the former "possible reality" is more likely than the latter. This is a straightforward application of LW-style thinking, but it didn't occur to Kant as far as I know.
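The update rule described above is just Bayes' theorem. A minimal sketch with invented numbers, where two candidate "possible realities" each give input Y to a different fraction of the algorithm's instances:

```python
# Hypothetical sketch of the update over "possible realities":
# the priors and likelihoods below are made-up illustration values.
prior = {"reality_A": 0.5, "reality_B": 0.5}
p_Y_given = {"reality_A": 0.9, "reality_B": 0.2}  # fraction of instances that see Y

# Bayes' rule: P(reality | saw Y) is proportional to P(Y | reality) * P(reality)
unnormalized = {r: prior[r] * p_Y_given[r] for r in prior}
total = sum(unnormalized.values())
posterior = {r: v / total for r, v in unnormalized.items()}

print(posterior)  # reality_A ≈ 0.818, reality_B ≈ 0.182
```

Upon seeing Y, the reality in which more instances of the algorithm receive Y becomes more probable, exactly as the comment describes.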
4quen_tin
If I am a cognitive algorithm X that receives input Y, I don't necessarily know what an algorithm is, what an input is, and so on. One could argue that all I know is 'Y'. I don't necessarily have any idea of what a "possible reality" is. I might not have a concept of "possibility" nor of "reality". Your way of thinking presupposes many metaphysical concepts that have been questioned by philosophers, including Kant. I am not saying that this line of reasoning is invalid (I suspect it is a realist approach, which is a fair option). My personal feeling is that Kant is upstream of that line of reasoning.
0cousin_it
But I do know what an algorithm is. Can someone be so Kantian as to distrust even self-contained logical reasoning, not just sensations? In that case how did they come to be a Kantian?
3Vladimir_M
Do you? I find the unexamined use of this particular concept possibly the most problematic component of what you call "LW-style thinking." (Another term that commonly raises my red flags here is "pattern.")
1cousin_it
What do you find dubious about the use of this concept on LW?
5Vladimir_M
To take a concrete example, the occasional attempts to delineate "real" computation as distinct from mere look-up tables seem to me rather confused and ultimately nonsensical. (Here, for example, is one such attempt, and I commented on another one here.) This strongly suggests deeper problems with the concept, or at least our present understanding of it. Interestingly, I just searched for some old threads in which I commented on this issue, and I found this comment where you also note that presently we lack any real understanding of what constitutes an "algorithm." If you've found some insight about this in the meantime, I'd be very interested to hear it.
1[anonymous]
I don't see that the concept of a computation excludes a lookup table. A lookup table is simply one far end of a spectrum of possible ways to implement some map from inputs to outputs. And if I were writing a program that mapped inputs to outputs, implementing it as a lookup table is at least in principle always one of the options. Even a program that interacted constantly with the environment could be implemented as a lookup table, in principle.

In practice, lookup tables can easily become unwieldy. Imagine a chess program implemented as a lookup table that maps each possible state of the board to a move. It would be staggeringly huge. But I don't see why we wouldn't consider it a computation.

One of your links concerns the idea that a lookup table couldn't possibly be conscious. But the topic of consciousness is a kind of mind poison, because it is tied to strong, strong delusions which corrupt everything they touch. Thinking clearly about a topic once consciousness and the self have been attached to it is virtually impossible. For example, the topic of fission - of one thing splitting into two - is not a big deal as long as you're talking about ordinary things like a fork in the road, or a social club splitting into two social clubs. But if we imagine you splitting into two people (via a Star Trek transporter accident or what have you), then all of a sudden it becomes very hard to think about clearly. A lot of philosophical energy has been sucked into wrapping our heads around the problem of personal identity.
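The spectrum point can be made concrete. Here is one (invented) input-to-output mapping implemented twice, once as an on-demand computation and once as a precomputed lookup table; over the finite domain, the two are behaviorally indistinguishable:

```python
# Illustrative sketch only: the parity example is made up; the point
# is that a lookup table and a computed function can realize the
# exact same map from inputs to outputs.
def parity_computed(n: int) -> str:
    """Compute the answer on demand."""
    return "even" if n % 2 == 0 else "odd"

# Lookup-table version: every answer enumerated in advance
# for a finite domain (here 0..9).
PARITY_TABLE = {n: ("even" if n % 2 == 0 else "odd") for n in range(10)}

def parity_lookup(n: int) -> str:
    """Look the answer up instead of computing it."""
    return PARITY_TABLE[n]

# Same inputs, same outputs: externally indistinguishable.
assert all(parity_computed(n) == parity_lookup(n) for n in range(10))
```

As the comment notes, the table version scales terribly (a chess lookup table would be astronomically large), but nothing about the input-output behavior distinguishes it from the computed version.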
0Vladimir_M
Yes. In my view, this continuity is best observed through graph-theoretic properties of various finite state machines that implement the same mapping of inputs to outputs (since every computation that occurs in reality must be in the form of a finite state machine). From this perspective, the lookup table is a very sparse graph with very many nodes, but there's nothing special about it otherwise.
-1Eugine_Nier
The reason people are concerned with the concept of consciousness, is that they have terms in their utility functions for the welfare of conscious beings. If you have some idea how to write out a reasonable utility function without invoking consciousness I'd love to hear it. (Adjust this challenge appropriately if your ethical theory isn't consequentialist.)
0[anonymous]
I think it is largely because consciousness is so important to people that it is hard to think straight about it, and about anything tied to it. Similarly, the typical person loves Mom, and if you say bad things about Mom then they'll have a hard time thinking straight, and so it will be hard for them to dispassionately evaluate statements about Mom. But what this means is that if someone wants to think straight about something, then it's dangerous to tie it to Mom. Or to consciousness.
1cousin_it
Nope, no new insights yet... I agree that this is a problem, or more likely some underlying confusion that we don't know how to dissolve. It's on my list of problems to think about, and I always post partial results to LW, so if something's not on my list of submitted posts, that means I've made no progress. :-(
0[anonymous]
Granted, our concepts are often unclear. The Socratic dialogs demonstrate that, when pressed, we have trouble explaining our concepts. But that doesn't mean that we don't know what things are well enough to use the concepts. People managed to communicate and survive and thrive, probably often using some of the very concepts that Socrates was able to shatter with probing questions. For example, a child's concepts of "up" and "down" unravel slightly when the child learns that the planet is a sphere, but that doesn't mean that, for everyday use, the concepts aren't just fine.
1AlephNeil
(I know the exchange isn't primarily about Kant, but...) Kant certainly isn't a "distrusting logical reasoning" kind of guy. He takes for granted that "analytic" (i.e. purely deductive) reasoning is possible and truth-preserving. His mission is to explain (in light of Hume's problem) how "synthetic a priori knowledge" is possible (with a secondary mission of exposing all previous work on metaphysics as nonsense). "Synthetic a priori knowledge" includes mathematics (which he doesn't regard as just a variety or application of deductive logic), our knowledge of space and time, and Newtonian science. His solution is essentially to argue that having our sensory presentations structured in space and time, and perceiving causal relations among them, is universally necessary in order for consciousness to exist at all. Since we are conscious, we can know a priori that the necessary conditions for consciousness obtain. [Disclaimer: This quick thumbnail sketch doesn't pretend to be adequate. Neither am I convinced that the theory even makes sense.] What Kant says we cannot know is how things ("really") are, considered independently of the universal and necessary conditions for the possibility of experience. As far as I can tell, this boils down to "it's not possible to know the answers to questions that transcend the limits of possible experience". For instance, according to Kant we cannot know whether the universe is finite or infinite, whether it has a beginning in time, whether we have free will, or whether God exists. It's important to understand that Kant is an "empirical realist", which means that the objects of experience - the coffee cups, rocks and stars around us - really do exist and we can acquire knowledge of them and their spatiotemporal and causal relations. However, if the universe could be considered 'as it is in itself' - independently of our minds - those spatiotemporal and causal relations would disappear (rather like how co-ordinates disappear when you
0quen_tin
The nature of logical reasoning is actually a deep philosophical question... You know what an algorithm is, but do you know if you are an algorithm? I am not sure I understand why you need an algorithm at all. Maybe your point is "If you are a human being X that receives an input Y, this allows you to know a nontrivial fact about reality (...)". I tend to agree with that formulation, but again, it presupposes some concepts that do not go without saying, and in particular it presupposes a realist approach. Idealist philosophers would disagree. I can understand that your idea is to build models of reality, then use a Bayesian approach to validate them. There is a lot to say about this (more than I could say in a few lines). For example: are you able to gather all your "inputs"? What about the qualitative aspects: can you measure them? If not, how can you ever be sure that your model is complete? Are the ideas you have about the world part of your "inputs"? How do you disentangle them from what comes from outside; how do you disentangle your feelings, memory, and actual inputs? Is there a direct correspondence between your inputs and scientific data, or do you have presuppositions about how to interpret the data? For example, don't you need to have an idea of what space/time is in order to measure distances and durations? Where does this idea come from? Your brain? Reality? A bit of both? Don't we interpret any scientific data in the light of the theory itself, and isn't there a kind of circularity? etc.
1cousin_it
This is why I talked about algorithms. When a human being says "I am a human being", you may quibble about it being "observational" or "a priori" knowledge. But algorithms can actually have a priori knowledge coded in, including knowledge of their own source code. When such an algorithm receives inputs, it can make conclusions that don't rely on "realist" or "idealist" philosophical assumptions in any way, only on coded a priori knowledge and the inputs received. And these conclusions would be correct more or less by definition, because they amount to "if reality contains an instance of algorithm X receiving input Y, then reality contains an instance of algorithm X receiving input Y". Your second paragraph seems to be unrelated to Kant. You just point out that our reasoning is messy and complex, so it's hard to prove trustworthy from first principles. Well, we can still consider it "probably approximately correct" (to borrow a phrase from Leslie Valiant), as jimrandomh suggested. Or maybe skip the step-by-step justifications and directly check your conclusions against the real world, like evolution does. After all, you may not know everything about the internal workings of a car, but you can still drive one to the supermarket. I can relate to the idea that we're still in the "stupid driver" phase, but this doesn't imply the car itself is broken beyond repair.
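A toy rendering of the idea (names and wording mine, not cousin_it's): an algorithm whose coded-in a priori knowledge includes a description of itself. Whenever it actually runs on some input, the conclusion it emits is true by construction, which is the near-tautological correctness claimed above.

```python
# Coded a priori knowledge: the algorithm carries a description of itself.
MY_DESCRIPTION = "algorithm_x (a self-describing toy program)"

def algorithm_x(y):
    # This claim holds whenever this code is actually executed on input y,
    # without any realist or idealist assumption: "if reality contains an
    # instance of algorithm X receiving input Y, then reality contains an
    # instance of algorithm X receiving input Y".
    return (f"reality contains an instance of {MY_DESCRIPTION} "
            f"receiving input {y!r}")

print(algorithm_x(42))
```

The example is deliberately trivial; the philosophical work is done by the fact that the program's self-description is part of its own source, not learned from observation.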
-3quen_tin
I don't think relying on algorithms solves the issue, because you still need someone to implement and interpret the algorithm. I agree with your second point: you can take a pragmatist approach. Actually, that's a bit how science works. But you still have not proved in any way that your model is a complete and definitive description of all there is, nor that it can be strictly identified with "reality", and Kant's argument remains valid. It would be more correct to say that a scientific model is a relational model (it describes the relations between things as they appear to observers, and their regularities).
1cousin_it
You can be the algorithm. The software running in your brain might be "approximately correct by design", a naturally arising approximation to the kind of algorithms I described in previous comments. I cannot examine its workings in detail, but sometimes it seems to obtain correct results and "move in harmony with Bayes" as Eliezer puts it, so it can't be all wrong.
-6quen_tin
0jimrandomh
All of those questions have known answers, but you have to take them on one at a time. Most of them go away when you switch from discrete (boolean) reasoning to continuous (probabilistic) reasoning.
-3[anonymous]
Each of those questions has several known and unknown answers... Moreover, the same questions apply to your preconceptions of continuity and probability. How could you know they apply to your inputs? For example: saying "I feel 53% happy" does not make sense unless you think happiness has a definite meaning and is reducible to something measurable. Both are questionable. Does any concept have a definite meaning? Maybe happiness has a "probabilistic" meaning? But what does it rest upon? How do you know that all your input is reducible to measurable constituents, and how could you prove that?
-5quen_tin
2Jack
Kant is actually the last philosopher who is part of both the analytic and continental canons; neither embracing nor rejecting his positions is emblematic of one school or the other. This particular bit of skepticism has plenty of precursors in earlier philosophy anyway.
1quen_tin
Agreed.