Continuation of Grasping Slippery Things
Followup to Possibility and Could-ness, Three Fallacies of Teleology

When I try to hit a reduction problem, what usually happens is that I "bounce" - that's what I call it.  There's an almost tangible feel to the failure, once you abstract and generalize and recognize it.  Looking back, it seems that I managed to say most of what I had in mind for today's post, in "Grasping Slippery Things".  The "bounce" is when you try to analyze a word like could, or a notion like possibility, and end up saying, "The set of realizable worlds [A'] that follows from an initial starting world A operated on by a set of physical laws f."  Where realizable contains the full mystery of "possible" - but you've made it into a basic symbol, and added some other symbols: the illusion of formality.
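
To make the contrast concrete, here is a toy sketch in Python - the states, the successors function, and "realizable" are hypothetical stand-ins, not anyone's serious formalism. The first function is the bounce: the mystery gets renamed, not opened. The second cashes "possible" out, however crudely, as reachability under the dynamics, leaving no modal primitive behind.

```python
from collections import deque

def realizable(world):
    # The black box: all the mystery of "possible" lives in here, unopened.
    raise NotImplementedError("this is where the bounce happens")

def possible_bounced(world_a, world_b):
    # The illusion of formality: symbols have been added, but the work
    # is still done by an opaque primitive.
    return world_b in realizable(world_a)

def possible_reduced(world_a, world_b, successors, max_worlds=10_000):
    # A crude but genuine reduction for a discrete toy world: "B is
    # possible from A" cashes out as "B is reachable from A by repeatedly
    # applying the dynamics" - only states and a transition relation remain.
    seen, frontier = {world_a}, deque([world_a])
    while frontier and len(seen) < max_worlds:
        state = frontier.popleft()
        if state == world_b:
            return True
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return world_b in seen
```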

There are a number of reasons why I feel that modern philosophy, even analytic philosophy, has gone astray - so far astray that I simply can't make use of their years and years of dedicated work, even when they would seem to be asking questions closely akin to mine.

The proliferation of modal logics in philosophy is a good illustration of one major reason:  Modern philosophy doesn't enforce reductionism, or even strive for it.

Most philosophers, as one would expect from Sturgeon's Law, are not very good.  Which means that they're not even close to the level of competence it takes to analyze mentalistic black boxes into cognitive algorithms.  Reductionism is, in modern times, an unusual talent.  Insights on the order of Pearl et al.'s reduction of causality or Julian Barbour's reduction of time are rare.

So what these philosophers do instead, is "bounce" off the problem into a new modal logic:  A logic with symbols that embody the mysterious, opaque, unopened black box.  A logic with primitives like "possible" or "necessary", to mark the places where the philosopher's brain makes an internal function call to cognitive algorithms as yet unknown.

And then they publish it and say, "Look at how precisely I have defined my language!"

In the Wittgensteinian era, philosophy has been about language - about trying to give precise meaning to terms.

The kind of work that I try to do is not about language.  It is about reducing mentalistic models to purely causal models, about opening up black boxes to find complicated algorithms inside, about dissolving mysteries - in a word, about cognitive science.

That's what I think post-Wittgensteinian philosophy should be about - cognitive science.

But this kind of reductionism is hard work.  Ideally, you're looking for insights on the order of Julian Barbour's Machianism, to reduce time to non-time; insights on the order of Judea Pearl's conditional independence, to give a mathematical structure to causality that isn't just finding a new way to say "because"; insights on the order of Bayesianism, to show that there is a unique structure to uncertainty expressed quantitatively.

Just to make it clear that I'm not claiming a magical and unique ability, I would name Gary Drescher's Good and Real as an example of a philosophical work that is commensurate with the kind of thinking I have to try to do.  Gary Drescher is an AI researcher turned philosopher, which may explain why he understands the art of asking, not What does this term mean?, but What cognitive algorithm, as seen from the inside, would generate this apparent mystery?

(I paused while reading the first chapter of G&R.  It was immediately apparent that Drescher was thinking along lines so close to myself, that I wanted to write up my own independent component before looking at his - I didn't want his way of phrasing things to take over my writing.  Now that I'm done with zombies and metaethics, G&R is next up on my reading list.)

Consider the popular philosophical notion of "possible worlds".  Have you ever seen a possible world?  Is an electron either "possible" or "necessary"?  Clearly, if you are talking about "possibility" and "necessity", you are talking about things that are not commensurate with electrons - which means that you're still dealing with a world as seen from the inner surface of a cognitive algorithm, a world of surface levers with all the underlying machinery hidden.

I have to make an AI out of electrons, in this one actual world.  I can't make the AI out of possibility-stuff, because I can't order a possible transistor.  If the AI ever thinks about possibility, it's not going to be because the AI noticed a possible world in its closet.  It's going to be because the non-ontologically-fundamental construct of "possibility" turns out to play a useful role in modeling and manipulating the one real world, a world that does not contain any fundamentally possible things.  Which is to say that algorithms which make use of a "possibility" label, applied at certain points, will turn out to capture an exploitable regularity of the one real world.  This is the kind of knowledge that Judea Pearl writes about.  This is the kind of knowledge that AI researchers need.  It is not the kind of knowledge that modern philosophy holds itself to the standard of having generated, before a philosopher gets credit for having written a paper.

Philosophers keep telling me that I should look at philosophy.  I have, every now and then.  But the main reason I look at philosophy is when I find it desirable to explain things to philosophers.  The work that has been done - the products of these decades of modern debate - is, by and large, just not commensurate with the kind of analysis AI needs.  I feel a bit awful about saying this, because it feels like I'm telling philosophers that their life's work has been a waste of time - not that professional philosophers would be likely to regard me as an authority on whose life has been a waste of time.  But if there's any centralized repository of reductionist-grade naturalistic cognitive philosophy, I've never heard mention of it.

And:  Philosophy is just not oriented to the outlook of someone who needs to resolve the issue, implement the corresponding solution, and then find out - possibly fatally - whether they got it right or wrong.  Philosophy doesn't resolve things, it compiles positions and arguments.  And if the debate about zombies is still considered open, then I'm sorry, but as Jeffreyssai says: Too slow!  It would be one matter if I could just look up the standard answer and find that, lo and behold, it is correct.  But philosophy, which hasn't come to conclusions and moved on from cognitive reductions that I regard as relatively simple, doesn't seem very likely to build complex correct structures of conclusions.

Sorry - but philosophy, even the better grade of modern analytic philosophy, doesn't seem to end up commensurate with what I need, except by accident or by extraordinary competence.  Parfit comes to mind; and I haven't read much Dennett, but Dennett does seem to be trying to do the same sort of thing that I try to do; and of course there's Gary Drescher.  If there was a repository of philosophical work along those lines - not concerned with defending basic ideas like anti-zombieism, but with accepting those basic ideas and moving on to challenge more difficult quests of naturalism and cognitive reductionism - then that, I might well be interested in reading.  But I don't know who, besides a few heroes, would be able to compile such a repository - who else would see a modal logic as an obvious bounce-off-the-mystery.

poke:

It's true that contemporary philosophy is still very much obsessed with language despite attempts by practitioners to move on. Observation is talked about in terms of observation sentences. Science is taken to be a set of statements. Realism is taken to be the doctrine that there are objects to which our statements refer. Reductionism is the ability to translate a sentence in one field into a sentence in another. The philosophy of mind concerns itself with finding a way to reconcile the lack of sentence-like structures in our brain with a perverse desire for sentence-like structures. But cognitive science is itself a development of this odd way of thinking about the world; sentences become algorithms and everything carries on the same. I don't think you're really too far removed from this tradition.

Talking in terms of sentences is not reifying them; cognitive science still uses sentences, which are not insulated from interpretational problems.

@ EY: I feel a bit awful about saying this, because it feels like I'm telling philosophers that their life's work has been a waste of time

Well, your buddy Robin Hanson has proved mathematically that my life has been a waste of time in his Doctors kill series of posts. I accept the numbers. Screw the philosophers; now it's their turn. It's all chemical neurotransmitters. Next: the lawyers.

You write that "Philosophy doesn't resolve things, it compiles positions and arguments". I think that philosophy should be granted as providing something somewhat more positive than this: It provides common vocabularies for arguments. This is no mean feat, as I think you would grant, but it is far short of resolving arguments which is what you need.

As you've observed, modal logics amount to arranging a bunch of black boxes in very precisely stipulated configurations, while giving no indication as to the actual contents of the black boxes. However, if you mean to accuse the philosophers of seeing no need to fill the black boxes, then I think you go too far. Rather, it is just an anthropological fact that the philosophers cannot agree on how to fill the black boxes, or even on what constitutes filling a box. The result is that they are unable to generate a consensus at the level of precision that you need. Nonetheless, they at least generate a consensus vocabulary for discussing various candidate refinements down to some level, even if none of them reach as deep a level as you need.

I don't mean to contradict your assertion that (even) analytic philosophy doesn't provide what you need. I mean rather to emphasize what the problem is: It isn't exactly that people fail to see the need for reductionistic explanations. Rather the problem is that no one seems capable of convincing anyone else that his or her candidate reduction should be accepted to the exclusion of all others. It may be that the only way for someone to win this kind of argument is to build an actual functioning AI. In fact, I'm inclined to think that this is the case. If so, then, in my irrelevant judgement, you are working with just about the right amount of disregard for whatever consensus results might exist with the analytic philosophical tradition.

Alright, I am going to bite on this.

E writes: "The proliferation of modal logics in philosophy is a good illustration of one major reason: Modern philosophy doesn't enforce reductionism, or even strive for it."

The usual justification for skepticism about reductionism as a methodology had to do with the status of the bridge laws: those analytic devices which reduced A to B, whether A was a set of sentences, observations, etc. Like climbing the ladders in the Tractatus, they seemed to have no purpose, once used.

They weren't part of the reductive language, yet they were necessary for the reductive project.

Carnap was probably the last philosopher to try for a systematic reduction, and his attempts foundered on well-known problems, circa 1940.

E writes: "Consider the popular philosophical notion of "possible worlds". Have you ever seen a possible world? Is an electron either "possible" or "necessary"?"

Kripke's essay on possible worlds makes it clear that there is nothing mysterious about possible worlds, they are simply states of information. Nothing hard.

E writes: " If there was a repository of philosophical work along those lines - not concerned with defending basic ideas like anti-zombieism, but with accepting those basic ideas and moving on to challenge more difficult quests of naturalism and cognitive reductionism - then that, I might well be interested in reading."

Professional philosophers are not scientists, but rather keep alive unfashionable arguments that scientists and technicians wrongly believe have been "solved", as opposed to ignored.

You are not suited for philosophical abstraction because you primarily want to build something. Get on with it, then, and stop talking about foundations - which may not exist. Just do it.

Well of course one standard response to such complaints is: "If you think you can do better, show us." Not just better in a one-off way, but a better tradition that could continue itself. If you think you have done better and are being unfairly ignored, well then that is a different conversation.

J.:

I read this blog for Hanson's posts, but unfortunately you are one of his co-bloggers. I wouldn't be surprised if you delete this or fail to post it, but whatever. Anyways, I occasionally read something you write, and I am struck by how dismissive you are of contemporary philosophy, usually treating it as a strawman or cartoon.

Can you please put your money where your mouth is and publish a philosophical paper in a good journal (such as Philosophical Review, Nous, Philosophy and Phenomenological Research, Journal of Philosophy, Ethics, Mind, or Phil Studies)? Lots of philosophers would love your approach. (I think you will fail to publish anything, not because the discipline is biased against you, but because you are at best a seventh rate thinker self-deceived into thinking he's a second rate thinker. I'm not saying that to be abusive, but, really, to be frank.)

Once you do this, I will begin taking you seriously. Until then, I consider you a very smart crank.

P.S., since you frequently write on topics other than your specialty (the singularity), such as moral realism, reductionism, etc., please make your publication one of these topics.

Are your feelings only confined to philosophy, modern or otherwise? I feel the same sense of 'modal logic' everywhere – art, politics, even technology – conversations, arguments, and discussions seem endlessly disconnected, related languages speaking past one another.

I think Tyrrell nails it – philosophy mainly provides common vocabularies. And I must agree with him – it is no mean feat.

I highly recommend the various works of Daniel Dennett – having read him before reading you, I feel prepared for exactly your favored type of argument – dissolving confusion by rejecting false dichotomies and rigorously separating layers.

The universe is endlessly amazing, and I feel blessed by being so curious. I think it's miraculous that philosophers are as good as they are!

I confess that I'm confused. Why does the "proliferation" of modal logics imply that philosophers do not strive for reductionism? Why think that having several modal logics is a bad thing? These logics were developed originally as purely formal syntactic systems with different sets of axioms. In a sense, decrying the proliferation of modal logics is akin to decrying the proliferation of non-Euclidean geometries. There were modal logics long before philosophers ever spoke of possible worlds, which, unless you're one of the few convinced by David Lewis, philosophers take simply to be a useful heuristic when speaking of possibility and necessity. How can one talk about a purely causal model without some notion of necessity? That would be a purely causal model without any notion of causality. It strikes me that even the AI theorist would like to discuss causation, consistency of models, logical implication, maybe even moral obligation. These are all modal notions, but unfortunately, they're not logically equivalent. We shouldn't fall into a trap of being reductionists purely for the sake of the reduction.
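
To spell out what "purely formal" amounts to, here is a minimal sketch of Kripke-style possible-worlds semantics in Python - the frame and valuation below are an arbitrary toy example, not any particular philosopher's system. "Necessary" and "possible" are just quantifiers over an accessibility relation, and which modal axioms come out valid depends entirely on the structural constraints placed on that relation (reflexivity validates T, transitivity S4, and so on).

```python
# A toy Kripke-model evaluator: worlds, an accessibility relation, and a
# valuation are given as plain data; "necessary" and "possible" are then
# just quantifiers over accessible worlds.  The frame is a made-up example.

worlds = {"w1", "w2", "w3"}
access = {"w1": {"w2", "w3"}, "w2": {"w2"}, "w3": set()}   # R(w, v)
truth  = {"w1": {"p"}, "w2": {"p", "q"}, "w3": {"q"}}      # atoms true at w

def holds(formula, w):
    kind = formula[0]
    if kind == "atom":                 # ("atom", "p")
        return formula[1] in truth[w]
    if kind == "not":
        return not holds(formula[1], w)
    if kind == "and":
        return holds(formula[1], w) and holds(formula[2], w)
    if kind == "box":                  # necessarily: true at every accessible world
        return all(holds(formula[1], v) for v in access[w])
    if kind == "dia":                  # possibly: true at some accessible world
        return any(holds(formula[1], v) for v in access[w])
    raise ValueError(kind)

# "Possibly q" at w1: true, because q holds at the accessible world w3.
print(holds(("dia", ("atom", "q")), "w1"))   # True
# "Necessarily p" at w3: vacuously true, since w3 accesses no worlds at all.
print(holds(("box", ("atom", "p")), "w3"))   # True
```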

I agree on Pearl's accomplishment.

I have read Dennett, and he does a good job of explaining what consciousness is and how it could arise out of non-conscious parts. William Calvin was trying to do the same thing with how wetware (in the form that he knew it at the time) could do something like thinking. Jeff Hawkins had more details of how the components of the brain work and interact, and did a more thorough job of explaining how the pieces must work together and how thought could emerge from the interplay. There is definitely material in "On Intelligence" that could help you think about how thought could arise out of purely physical interactions.

I'll have to look into Drescher.

Philosophy is just not oriented to the outlook of someone who needs to resolve the issue, implement the corresponding solution, and then find out - possibly fatally - whether they got it right or wrong. Philosophy doesn't resolve things, it compiles positions and arguments. And if the debate about zombies is still considered open, then I'm sorry, but as Jeffreyssai says: Too slow!

Still, I hope your Friendliness structure can cope with the case where zombies are possible. Well, I guess that one wouldn't make any difference - so I should say I hope you're also trying to minimize the number of philosophical problems you have to be right about.

Sometimes I enjoy these postings, sometimes I am puzzled. They often are so self-referential (links are mostly to older postings of the same author) and ranting that I wonder whether I am being had. I don't doubt anyone's good intentions. I am just documenting my belief that Eliezer's state is binary: either the next Wittgenstein or a world-class delusional crank.

I've made similar dismissals of philosophy's fruits at this blog and elsewhere. That was supposed to make me a nihilist, philistine psychopath. As I recall, Eliezer did not agree with my analogy to theology and astrology.

What do you think of the philosophy faculty of MIT and Caltech? I ask because I suspect the faculty there selects for philosophers that would be most useful to hard scientists and engineers (and for hard science and engineering students).

http://www.mit.edu/~philos/faculty.html

http://www.hss.caltech.edu/humanities/faculty

  1. I'm curious to hear Nick Bostrom's response to this.

  2. Something like modal logic is needed to automate solutions to things like this: Blue-eyed Monks. Though you might be right about the proliferation of modal logics.

  3. You made some similar points here: Where Philosophy Meets Science. And Robin Hanson followed up here: On Philosophers

  4. Both times it was pointed out that Paul Graham has some similar complaints about philosophy here: How to Do Philosophy

Daniel Dennett is smart and usually right - but I find his writing style pretty yawn-inducing. I'm not very impressed by his detour into religion, either. Rather like Dawkins, it seems like he's been dragged down into the gutter by the creationists.

Philosophy is just not oriented to the outlook of someone who needs to resolve the issue, implement the corresponding solution, and then find out - possibly fatally - whether they got it right or wrong. Philosophy doesn't resolve things, it compiles positions and arguments.

This would be why I never finished that philosophy degree. Academic philosophy does not seem particularly interested in solving the world's problems. Tyrrell McAllister has a good point on the value of providing a way of discussing things, but if there is not even in principle a way of deciding what would constitute filling the black box, the discipline will keep juggling the boxes.

There must be some merit in games of language and logic, but they remain that: games. Sudoku and World of Warcraft are similarly structured games, and you could argue seriously about whether an issue of Games Magazine improves the world more or less than any scholarly journal J. mentioned.

That said, starting with Sturgeon's Law, we already knew the majority was waste paper. What is your probability that the good 10% is not worth the search cost to find it?

As a meta-Overcoming Bias comment, I think this post is necessary for Eliezer. When he discusses philosophical issues, there are a half-dozen of us who cite a hundred-year history of work on, for example, meta-ethics. I must interpret this post as a case for rational ignorance, "I am not going to read all that because it is obviously waste paper," as opposed to "I am familiar with that but I have rejected it" (or the latter with very small values of "familiar"). So this is one of those one-link responses.

We can meditate on whether it resolves the issue rather than giving a feeling of resolution. With respect to philosophy, I often find surprisingly little progress since Hume (on questions of interest to me). When an OB post arrives at a standard argument, maybe via a different door, I expect it to be able to engage standard critiques. "All standard critiques are meaningless black box juggling exercises until proven otherwise" is perhaps a viable heuristic, but it feels convenient.

This also feels a bit like the "outside view" Eliezer criticizes Robin for using to make predictions.

You're right that he should be able to engage standard critiques, Zubon, but if my (negligible) experience with the philosophy of free will is any indication, many "standard critiques" are merely exercises in wooly thinking. It's reasonable for him to step back and say, "I don't have time to deal with this sort of thing."

Zubon: This also feels a bit like the "outside view" Eliezer criticizes Robin for using to make predictions.

The problem is not in using the outside view, but in using an outside view that doesn't really apply to what it's being applied to - in trying to infer properties from surface similarities that don't indicate that the objects have similar causal structure. If you are studying a single object, statistics at an arbitrarily surface level provide valid grounds for prediction, so long as this single object doesn't change its causal structure while under study.

Well of course one standard response to such complaints is: "If you think you can do better, show us." Not just better in a one-off way, but a better tradition that could continue itself.
Can do? It's already been done, long ago - we call it 'science'.

Do not confuse technicians and stylists with those that apply the scientific method. Among those that do, some of the greatest of them made greater 'philosophical' progress while working and writing on matters only tangentially related to their nominal fields than countless generations of so-called philosophers who supposedly dedicated themselves to the issues.

Even an amateur scientist can quickly develop working resolutions to questions that philosophy has held up as eternal.

By this point, even an extraordinarily-unobservant thinker should have realized that philosophy isn't about finding the answer to questions - it's about posturing as profound while mouthing questions, then talking with others to mutually demonstrate the intellectual importance of the topic and thus those that discuss it. It's a form of status-masturbation.

J.:

Caledonian, aside from the continental school, could you please give some examples of people trying to posture to be profound? In philosophy graduate programs today, you are explicitly told not to posture.

Also, could you give an example of a philosophical problem that science has solved. E.g., What makes right actions right? What makes a society just? What makes mathematical claims true?

Your polemic is embarrassing. Your post was a form of masturbation.

asdf:

J.: Zeno's Paradox was solved by mathematicians (honorable members of the scientific community even if you think mathematics is not part of science).

"it feels like I'm telling philosophers that their life's work has been a waste of time."

If my immediate interest is to trigger a subject's saliva reflex, it would be a much better use of my time to vividly describe to the subject the sensations of biting into a lemon than it would to inquire after the algorithms that give rise to lemony sensations.

I am reductionist, but I can't quite imagine an intellectual life that abstracted away all conscious interest in phenomenological structure in favor of monomaniacal attention to the base structure. Then again, there's no accounting for taste. (Or is there?)


There are a number of reasons why I feel that modern philosophy, even analytic philosophy, has gone astray - so far astray that I simply can't make use of their years and years of dedicated work.

Yes, much modern philosophy has gone astray. But some hasn't. I would cite, for example, the thinking of critical rationalists such as Karl Popper, William Warren Bartley, David Deutsch, and David Miller.

Moreover I maintain that critical rationalism ought to be of use to you. First, it contains cogent criticism of inductivism and crypto-inductivism and one who understands these criticisms should see why Bayescraft is sterile. This knowledge is not only useful, it can't be ignored. Second, critical rationalism, and not Bayescraft, is our best current theory of knowledge and how we come to know things. Best theories are useful not only in themselves but also for the problems they contain.

sark:

What did you think of that part of EY's bayes intro where he reduces Falsificationism to a special case of Bayesianism?
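
(Roughly, the reduction goes like this - a toy calculation with made-up numbers. A theory that flatly forbids the observed data gets likelihood zero and is killed outright by the update, which is falsification; a theory that merely fits the data worse than its rivals loses probability gradually, which is the part falsificationism leaves out.)

```python
def posterior(prior, likelihood_h, likelihood_alt):
    # Bayes' theorem with a single alternative: P(H|E) = P(E|H) P(H) / P(E).
    evidence = likelihood_h * prior + likelihood_alt * (1 - prior)
    return likelihood_h * prior / evidence

# Falsification as the limiting case: H forbids the observation, so
# P(E|H) = 0 and the posterior is 0 regardless of the prior.
print(posterior(prior=0.99, likelihood_h=0.0, likelihood_alt=0.2))   # 0.0

# Graded confirmation is the same rule without the extreme likelihood.
print(posterior(prior=0.50, likelihood_h=0.9, likelihood_alt=0.2))   # ~0.82
```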

There is a tradition of philosophy with value.

Many famous and modern philosophers are distractions from this. The same was true in the past. Each generation, most philosophers did not carry on the important, mainstream (in hindsight) tradition.

If you can't tell which is which, to me that suggests you could learn something by studying philosophy. Once you do understand what's what, then you can read exclusively good philosophy. For example, you'd know to ignore Wittgenstein, as the future will do. But the worthlessness of some philosophers does not stop people like William Godwin or Xenophanes from having valuable things to say (and the more recent philosophers who are carrying on their tradition).

Re: It contains cogent criticism of inductivism and crypto-inductivism and one who understands these criticisms should see why Bayescraft is sterile.

Uh, surely that's not the correct moral. It's like arguing that physics is sterile because of solipsism.

Tim, you wrote here that:

A perfectly rational agent who denies the validity of induction would be totally unimpressed by Bayesian arguments.

Have you changed your mind? Do you now deny that Bayescraft relies on induction?

Kripke's essay on possible worlds makes it clear that there is nothing mysterious about possible worlds, they are simply states of information. Nothing hard.

Good for Kripke, then. I've often found that the major people in a field really do deserve their reputations, and I haven't asserted that good philosophy is impossible, just that the field has failed to systematize it enough to make it worthwhile reading.

However, you do not solve an AI problem by calling something a "state of information". Given that there's only one real world, how are these "possible worlds" formulated as cognitive representations? I can't write an AI until I know this.

However, can you give me an immediate and agreed-upon answer to the question, "Is there a possible world where zombies exist?" Considering the questions that follow from that, will make you realize how little of the structure of the "possible worlds" concept follows just from saying, "it is a state of information".

Did Kripke mark his work as unfinished for failing to answer such questions? Or did he actually try to answer them? Now that would earn serious respect from me, and I might go out and start looking through Kripke's stuff.
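
To gesture at the kind of answer I'd want: one crude way to cash out "state of information" as a cognitive representation is a set of candidate world-states the agent has not yet ruled out, shrunk by each observation, with "possible" meaning "true in at least one surviving candidate". A toy sketch - the three binary facts and the noiseless observations are illustrative assumptions only, and real possible-worlds talk (counterfactuals, zombie worlds) would need far more structure than this:

```python
from itertools import product

FACTS = ("light_on", "door_open", "cat_inside")

# The agent's "possible worlds": candidate assignments of truth values it
# has not yet ruled out - a state of information, not a second kind of
# stuff existing alongside electrons.
belief_state = set(product([True, False], repeat=len(FACTS)))

def observe(belief_state, fact, value):
    # A noiseless toy observation shrinks the set of live candidates.
    i = FACTS.index(fact)
    return {w for w in belief_state if w[i] == value}

def possibly(belief_state, fact, value):
    # "Possible" = holds in at least one world not yet ruled out.
    i = FACTS.index(fact)
    return any(w[i] == value for w in belief_state)

belief_state = observe(belief_state, "light_on", True)
print(possibly(belief_state, "light_on", False))    # False: ruled out
print(possibly(belief_state, "cat_inside", True))   # True: still open
```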

Robin: Well of course one standard response to such complaints is: "If you think you can do better, show us." Not just better in a one-off way, but a better tradition that could continue itself. If you think you have done better and are being unfairly ignored, well then that is a different conversation.

Robin, my response here is mainly to philosophers who say, "We did all this work on metaethics, why are you ignoring us?" and my answer is: "The work you did is incommensurable with even the kind of philosophy that an AI researcher needs, which is cognitive philosophy and the reduction of mentalistic thinking to the non-mental; go read Gary Drescher for an example of the kind of mental labor I'm talking about. Some of you may have done such work, but that's no help to me if I have to wade through all of philosophy to find it. Even your compilations of arguments are little help to me in actually solving AI problems, though when I need to explain something I will often check the Stanford Encyclopedia of Philosophy to see what the standard arguments are. And I finally observe that if you, as a philosopher, have not gone out and studied cognitive science and AI, then you really have no right to complain about people 'ignoring relevant research', and more importantly, you have no idea what I'm looking for." This is my response to the philosophers who feel slighted by my travels through what they feel should be their territory, without much acknowledgment.

However, with all that said - if I was trying to build a tradition that would continue itself, these posts on Overcoming Bias would form a large part of how I did it, though I would be much more interested in making them sound more impressive (which includes formalizing/declarifying their contents and publishing them in journals) and I would assign a higher priority to e.g. writing up my timeless decision theory.

Also, could you give an example of a philosophical problem that science has solved.

Considering that science developed out of a style of philosophical thought called 'natural philosophy', every question science has addressed has been a philosophical one.

The real problem is that when actual progress is made on a 'philosophical' question, we associate it with the branch of science that made the progress. Turing and Godel were mathematicians, Schroedinger was a physicist (and one of his most impressive insights was in the intersection of biology and information theory), Fermi a physicist, etc.

The only things that remain in the category of philosophy are those that are utterly useless and fail to expand our understanding of any aspect of the world. It's a simple selection effect - the gold is sifted out while the dross remains.

Turing alone resolved more questions that were traditionally considered to be within the bounds of 'philosophy' as you refer to it than anyone I can think of offhand.

Re: Bayesianism and induction.

Bayesianism is a formalisation of induction. The philosophical problems with the foundations of inductive reasoning are equally problems with the foundations of Bayesianism. These problems are essentially unchanged since Hume's era:

Rather than unproductive radical skepticism about everything, Hume said that he was actually advocating a practical skepticism based on common sense, wherein the inevitability of induction is accepted. Someone who insists on reason for certainty might, for instance, starve to death, as they would not infer the benefits of food based on previous observations of nutrition. - http://en.wikipedia.org/wiki/Problem_of_induction
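
To make "formalisation of induction" concrete, the textbook toy case is Laplace's rule of succession: under a uniform prior over an unknown frequency, having seen s successes in n trials, the probability assigned to the next trial succeeding is (s + 1) / (n + 2). A few illustrative lines:

```python
def rule_of_succession(successes, trials):
    # Laplace's rule: a uniform prior over the unknown frequency gives
    # P(next success | s successes in n trials) = (s + 1) / (n + 2).
    return (successes + 1) / (trials + 2)

# Hume's sunrise: each uneventful day strengthens the inductive
# prediction, but it never reaches certainty.
for n in (0, 10, 10_000):
    print(n, rule_of_succession(n, n))
```
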
JB:

J said: I read this blog for Hanson's posts, but unfortunately you are one of his co-bloggers

AND

but because you are at best a seventh rate thinker self-deceived into thinking he's a second rate thinker.

don't you think that Robin must think EY is at least a second rate thinker, or else he wouldn't let himself be associated with such a lowly seventh rate thinker...

i completely understand if you don't think EY is a worthwhile guy to read, no prob there...but then why read Hanson also? if they are colleagues and co-bloggers there must be something about EY that Robin thinks is first rate, no?

then why read Hanson also? if they are colleagues and co-bloggers there must be something about EY that Robin thinks is first rate, no?

Not necessarily. Hanson might be a good thinker who is also a personal opportunist who'll do anything to enhance his status, where co-publishing with Yudkowsky helped put Hanson's blog on the map. Hanson could have "admired" Yudkowsky for his fan-club building capacities rather than for the high quality of his thinking.

Tim,

Re: Bayesianism and induction.

Given your concession that Bayesianism is a formalisation of induction, I don't understand your original criticism that my saying inductivism renders Bayescraft sterile is like saying solipsism renders physics sterile.

Here's a definition from David Deutsch's "The Fabric of Reality":

Crypto-Inductivist: Someone who believes that the invalidity of inductive reasoning raises a serious philosophical problem, namely the problem of how to justify relying on scientific theories.

Crypto-inductivists have an "induction shaped" gap in their scheme of things.

Critical rationalism really did solve the problem of induction: It has no "induction shaped" gap.

I'm guessing from your Hume quote that you think it did so by resorting to radical skepticism, but if you think this you are mistaken.

Eliezer, I recommend you read Dennett's "Artificial Intelligence as Psychology and as Philosophy", in his collection of essays Brainstorms. It may be a bit dated, but it makes a very nice case for a division of territory between AI, Psy and Phi, and how each of them can help the others.

You can download that chapter here.

Eliezer, I don't think your comments would slight sensible philosophers, since many professional philosophers themselves make comparable or more biting criticisms about the discipline (Rorty, Dennett, Unger, now the experimental philosophy movement, et al., going back to the positivists, and, if you like, the Pyrrhonists and atomists). I'm afraid not only have philosophers already written extensively on meta-ethics, but they've also generated an extensive literature on anti-philosophy. They've been there, done that -- too! I think Tyrrell McAllister is quite right to say that since philosophy largely consists of folks who can't agree on the most workable models, your functional interests will tend to be frustrated by philosophy. Like your estimable hero Dick Feynman (who, according to Len Mlodinow, averred that "philosophy is bullshit"), it'd be better for you simply to get on with your tasks at hand, and not expect much help from philosophy -- to find the worthwhile stuff you'd have to become one. Maybe you can do that after the FAI builds you an immortal corporeal form.

Re: Critical rationalism

I do not rate Popper's contributions in this area very highly - e.g. see here.

Science without induction is a complete joke. Popper didn't eliminate induction, he just swept it under a consensual rug.

Tim,

Re: Critical rationalism

Critical rationalism is similar to evolutionary adaptation (though there are some important differences). Do you think evolution depends on induction, or would you admit that there are knowledge-generation processes that do not require induction in any way, shape, or form?

Our knowledge of evolutionary theory depends on induction. Without induction, you can't establish the uniformity of nature. You have no grounds for believing that what happened yesterday is any guide to what may happen tomorrow. Without induction, science is totally screwed. Popper's epistemology was not science without induction:

Wesley C. Salmon critiques Popper's falsifiability by arguing that in using corroborated theories, induction is being used. Salmon stated, "Modus tollens without corroboration is empty; modus tollens with corroboration is induction."

What on Earth is evolution, if not the keeping of DNA sequences that worked last time? It's less efficient than human induction and stupider, because it works only with DNA strings and is incapable of noticing simpler and more fundamental generalizations like physics equations. But of course it's a crude form of inductive optimization. What else would it be? There are no knowledge-generating processes without some equivalent of an inductive prior or an assumption of regularity. The maths establishing this often go under the name of No-Free-Lunch theorems.
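
If it helps to see the caricature spelled out: here is a toy hill-climber in that spirit - the alphabet, target, and fitness function are arbitrary illustrations, not a model of real biology. It keeps the sequence that worked last time, varies it blindly, and keeps a variant only if it scores at least as well; an implicit bet that what worked before will work again.

```python
import random

def evolve(fitness, genome, alphabet, generations=1000, rate=0.05):
    # Evolution as crude inductive optimization: keep what worked last
    # time, mutate at random, keep the mutant only if it does at least
    # as well.  No foresight, no physics equations - just a regularity
    # exploited by retention.
    best, best_score = genome, fitness(genome)
    for _ in range(generations):
        trial = [c if random.random() > rate else random.choice(alphabet)
                 for c in best]
        score = fitness(trial)
        if score >= best_score:
            best, best_score = trial, score
    return "".join(best), best_score

# Arbitrary toy target; fitness just counts matching positions.
target = "ATTACCA"
fitness = lambda g: sum(a == b for a, b in zip(g, target))
print(evolve(fitness, list("AAAAAAA"), "ACGT"))
```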

Evolution does not increase a species' implicit knowledge of the niche by replicating genes. Mutation (evolution's conjectures) creates potential new knowledge of the niche. Selection decreases the "false" implicit conjectures of mutations and previous genetic models of the niche.

So induction does not increase the implicit knowledge of gene sequences.
Trial (mutation) and error (falsification) of implicit theories does. This is the process that the critical rationalist says happens but more efficiently with humans.

"What on Earth is evolution, if not the keeping of DNA sequences that worked last time?

It's also replication and variation.

It's less efficient than human induction and stupider, because it works only with DNA strings and is incapable of noticing simpler and more fundamental generalizations like physics equations. But of course it's a crude form of inductive optimization. What else would it be?

That seems like an argument from "failure of imagination". Quite simply, evolution is trial and error.

There are no knowledge-generating processes without some equivalent of an inductive prior or an assumption of regularity.

This is just question begging, as I think you are aware. How did we come by the knowledge of induction? Did we induce it? Impossible! So, therefore, there must be at least one way to knowledge that doesn't involve induction.

This stuff is all old hat. Philosophers of the 20th century like Popper and Bartley realized that the whole induction quagmire is caused by people looking for justified sources of knowledge. They concluded that justificationism is a mistake and replaced it with critical rationalism. Now there are bad scholars who claim that critical rationalism sneaks induction in through the back door. But that is just bad scholarship.

It's a shame to still be wasting time on induction in the 21st century. Rather than rehashing old problems, shouldn't we be building on what the best of 20th century philosophy gave us?

The maths establishing this often go under the name of No-Free-Lunch theorems.

Were the assumptions of these theorems inductively justified?

If a modal logic can hide a mystery inside a black box, and everything outside the black box behaves consistently, that would be an incredibly useful achievement. You would have isolated the mystery.

This post demonstrates a deep misunderstanding of modal logics, and of the notions of possibility and necessity. One would expect that misunderstanding, given that Eli can't really get himself to read philosophy. For example:

"I have to make an AI out of electrons, in this one actual world. I can't make the AI out of possibility-stuff, because I can't order a possible transistor."

What? What kind of nonsense is this? No contemporary philosophers would ever say that you can make something out of "possibility stuff", whatever the hell that is supposed to be.

Or this:

"It's going to be because the non-ontologically-fundamental construct of "possibility" turns out to play a useful role in modeling and manipulating the one real world, a world that does not contain any fundamentally possible things."

Eli, everything that is actual is trivially possible, according to every single contemporary analytic philosopher. I have no idea what you mean by "fundamentally possible", but I doubt you mean anything useful by it. If x exists, then it's possible that x exists. If x is an actual object, then x is a possible object. If you want, you can treat those claims as axioms. What's your beef with them? Surely you don't think, absurdly, that if x actually exists then it's not possible that x exists?

One also has to wonder what your beef with meaning is. I mean, surely you mean something and mean to communicate something when you string lots of letters together. Is there nothing you mean by "reductionism"? If you don't mean anything by using that linguistic term, then nobody should pay attention to you.

Eli, everything that is actual is trivially possible, according to every single contemporary analytic philosopher. I have no idea what you mean by "fundamentally possible", but I doubt you mean anything useful by it. If x exists, then it's possible that x exists. If x is an actual object, then x is a possible object. If you want, you can treat those claims as axioms. What's your beef with them? Surely you don't think, absurdly, that if x actually exists then it's not possible that x exists?

Allow me to attempt to translate (BTW, that a claim is so absurd is evidence it is not being made. Just sayin'.):

EY is not saying that some actual things are not possible. He is saying that things that are not actual, yet "possible", are exactly the same, as far as the universe is concerned, as things that are not actual and not "possible". Specifically, they are all nonexistent. Hence possibility is not fundamental in any ontological sense.

The general gist of the whole post is complaining that for all their precise logic, the people who invented modal logic have still not understood possibility and necessity. They formalized the intuitions about how possibility and necessity work, but didn't solve what they actually are (which is: labels applied by a decision-making algorithm).

He is saying that things that are not actual, yet "possible", are exactly the same, as far as the universe is concerned, as things that are not actual and not "possible". Specifically, they are all nonexistent. Hence possibility is not fundamental in any ontological sense.

But the laws of the universe demarcate possible things from impossible things: so can you dismiss the reality of possibilities without dismissing the reality of laws?

Modal logic doesn't tell you if some sentence is possible or necessary; it tells you what sentences must have what modal values given some other sentences with prespecified modal values. Just like Kolmogorov doesn't tell you that the probability of a die landing on any face is 1/6, and that it can't land on two values; it just tells you that, given those, the probability of the die landing on an even value is 1/2.

Kolmogorov and Bayes seem to me to be guilty of the same sort of bouncing, but I think Bayes and Kolmogorov are clearly useful tools for the study of rationality. Modal logic does not define possibility, and it certainly does not reduce the notion of modality to anything, but it does constrain the assigning of modal values to fields of sentences. Any philosopher that argued otherwise is prolly a noob.

But, in general I agree with you. I am a philosopher, or at least that's my major, and I agree that it is only by extraordinary competence that philosophers ever produce useful reductions; that's something I hope to change by going into the field. And btw, I plan on using your work all the time to help me make that happen. So would it bother you, or seem strange, if I called you a philosopher, Eliezer? Because I honestly say, often enough, that you're one of my favorite philosophers, if not my favorite, and I would find it funny if my favorite philosopher didn't even consider himself a philosopher at all, and wasn't all that intimate with the literature. It's a fact I'd like to know for personal amusement.

Philosophers are scientists, they're just really bad scientists for the most part. This is due to the fact that they draw their power from the couple thousand years of moderately interesting mistakes that we call "the history of western philosophy". What makes philosophers different from any other group of scientists, is simply the targets of inquiry they specialize (or try to specialize) in. The same thing that makes a biologist different from a physicist. Some philosophers have done well, but they had to invent too much of the art for themselves; not enough of their power came from the cumulative learning of their predecessors being passed verbally. Often the scriptures have done more to lead new students astray, than to lead them to victory. This sort of staggeringly slow progress, taking thousands of years, and rarely ever leading to professional consensus, can be starkly contrasted with the rapid progress of the rather young science of biology.

We are all Bayesian here, right? Let's cut to the chase. Either philosophers will find predictive hypothesis spaces that make empirically testable predictions and manage to update their belief values for those hypotheses with Bayesian evidence, or the field of philosophy is, and always was, as doomed as the field of astrology. Some philosophers do of course do this sometimes, since some philosophers are sometimes right.

The problem philosophy faces is that it hasn't been able to reliably teach its students how to do the Bayes dance in philosophy, the way biology has been able to teach its students to do the Bayes dance in biology. What I suggest that we philosophers do is take a good long look at top-notch biology (or physics, or psychology, or mathematics, or computer science, or astronomy, or geology, or economics, or any other science progressing faster than wax melts) training and philosophy training, and figure out what's going on in the biology training community that isn't going on in the philosophy training community. Then we try to bridge the gap.

Philosophy is hard, but so is supersymmetry, and for much the same reasons. If the Bayes dance can handle the rest of science, I get the feeling it shouldn't get stumped here. There are solvable problems of philosophy, they are just really hard, and really hard scientific problems require really good science to get solved; not moderate science, or good-enough science - really good science. It is no wonder that philosophy has steadily progressed at the pace of a snail for the last 2000 years; its students have been given Plato in the absence of Bayes.

I wrote a bunch of comments to this work while discussing with Risto_Saarelma. But I thought I should rather post them here. I came here to discuss certain theories that are on the border between philosophy and something which could be useful for the construction of AI. I've developed my own such theory based on many years of work on an unusual metaphysical system called the Metaphysics of Quality, which is largely ignored in the academy and deviates from the tradition. It's not very "old" stuff. The formation of that tradition of discussion began in 1974. So that's my background.

The kind of work that I try to do is not about language. It is about reducing mentalistic models to purely causal models, about opening up black boxes to find complicated algorithms inside, about dissolving mysteries - in a word, about cognitive science.

What would I answer to the question whether my work is about language? I'd say it's both about language and algorithms, but it's not some Chomsky-style stuff. It does account for the symbol grounding problem in a way that is not typically expected of language theory. But the point is, and I think this is important: even the mentalistic models do not currently exist in a coherent manner. So how are people going to reduce something undefined to purely causal models? Well, that doesn't sound very possible, so I'd say the goals of RP are relevant.

But this kind of reductionism is hard work.

I would imagine mainstream philosophy to be hard work, too. This work, unfortunately, would, to a great extent, consist of making correct references to highly illegible works.

Modern philosophy doesn't enforce reductionism, or even strive for it.

Well... I wouldn't say RP enforces reductionism or that it doesn't enforce reductionism. It kinda ruins RP if you develop a metatheory where theories are classified either as reductionist or nonreductionist. You can do that - it's not a logical contradiction - but the point of RP is to be such a theory that, even though we could construct such metatheoretic approaches to it, we don't want to do so, because it's not only useless, but also complicates things for no apparent benefit. Unless, of course, we are not interested in AI but trying to devise some very grand philosophy, and I'm not sure what that could be used for. My intention is that things like "reductionism" are placed within RP instead of placing RP into a box labeled "reductionism".

RP is supposed to define things recursively. That is not, to my knowledge, impossible. So I'm not sure why the definition would necessarily have to be reductive in some sense. LISP, to my knowledge, is not reductive. But I'm not sure what Eliezer means with "reductive". It seems like yet another philosophical concept. I'd better check if it's defined somewhere on LW...

And then they publish it and say, "Look at how precisely I have defined my language!"

I'm not a fetishist. Not in this matter, at least. I want to define things formally because the structure of the theory is very hard to understand otherwise. The formal definitions make it easier to find out things I would not have otherwise noticed. That's why I want to understand the formal definitions myself despite sometimes having other people practically do them for me.

Consider the popular philosophical notion of "possible worlds". Have you ever seen a possible world?

I think that's pretty cogent criticism. I've found the same kind of things troublesome.

Philosophers keep telling me that I should look at philosophy. I have, every now and then. But the main reason I look at philosophy is when I find it desirable to explain things to philosophers.

I understand how Eliezer feels. I guess I don't even tell people they need to look at philosophy for its own sake. How should I know what someone else wants to do for its own sake? But it's not so simple with RP, because it could actually work for something. The good philosophy is simply hard to find, and if I hadn't studied the MOQ, I might very well now be laughing at Langan's CTMU with many others, because I wouldn't understand what that thing is he is a bit awkwardly trying to express.

I'd like to illustrate the stagnation of academic philosophy with the following thought experiment. Let's suppose someone has solved the problem of induction. What is the solution like?

  • Ten pages?
  • Hundred pages?
  • Thousand pages?
  • Does it contain no formulae or few formulae?
  • Does it contain a lot of formulae?

I've read academic publications to the point that I don't believe there is any work the academic community would, generally speaking, regard as a solution to the problem of induction. I simply don't believe many scholars think there really can be such a thing. They are interested in "refining" the debate somehow. They don't treat it as some matter that needs to be solved because it actually means something.

This example might not ring a bell for someone completely unfamiliar with academic philosophy, but I think it does illustrate how the field is flawed.

I'd like to illustrate the stagnation of academic philosophy with the following thought experiment. Let's suppose someone has solved the problem of induction. What is the solution like?

Ten pages? Hundred pages? Thousand pages? Does it contain no formulae or few formulae? Does it contain a lot of formulae?

I'll go with 61 pages and quite a few formulae.

Jaynes quoted a colleague: “Philosophers are free to do whatever they please, because they don’t have to do anything right.”

Philosophers lack the feedback loop from reality that an engineer trying to build a mind has. Most of the heated philosophical squawking about minds will be rendered irrelevant once we start building them.

One of the reasons Dennett usually makes sense is he tries to know the science involved.

Just the other day I was watching Dennett: http://www.youtube.com/watch?v=2hBQCBpyu74&feature=g-hist

At around 6:00, he's saying how he sees the job of philosophers as matching up the manifest image of the world with the scientific image of the world. I think that kind of philosophy will always be needed.

Reductionism is, in modern times, an unusual talent. Insights on the order of Pearl et al.'s reduction of causality or Julian Barbour's reduction of time are rare.

Ye-e-e-s. But it is not at all clear whether Barbour's reduction works. (See Fay Dowker's criticisms in the appendices, for instance.) It's not a reduction in the sense that "heat is molecular motion" is a universally accepted, successful reduction.

Asking "is this reductive" and nothing else is not a good way to do philosophy.

The "bounce" is when you try to analyze a word like could, or a notion like possibility, and end up saying, "The set of realizable worlds [A'] that follows from an initial starting world A operated on by a set of physical laws f." Where realizable contains the full mystery of "possible" - but you've made it into a basic symbol, and added some other symbols: the illusion of formality.

Can you keep on "reducing" -- unpacking the meanings of terms -- without hitting a bedrock? Is there anyone who doesn't know what "can" and "could" mean? Can you not co-define a set of words in terms of each other, coherentistically, without prejudice as to what is fundamental?

What is the basis for the position that knowledge of the world must come from analytical/probabilistic models? I'm not questioning the "correctness" of your view, only wondering about your basis for it. It seems awfully convenient that a type of model that yields conclusions is in fact the correct one -- put another way, why is the availability of a clear methodology that gives you answers indicative of its universal applicability in attaining knowledge?

Traditional philosophy, as you correctly point out, has failed to bridge its theory to practice -- but perhaps that is the flaw of the users and not the theory. Rationalists generally believe the use of probabilities is sound methodology, and that the problems regarding decision-making are a flaw of the practitioners. Though I appreciate you likely disagree, perhaps we have the same problem with philosophy: though there are no clear answers, the models of thought it provides could effectively apply in practical situations; it's just that no philosopher has been able to get there.

You might be interested to look at David Corfield's book Modal Homotopy Type Theory. In the chapter on modal logic, he shows how all the different variants of modal logic can be understood as monads/comonads. This allows us to understand modality in terms of "thinking in a context", where the context (possible worlds) can be given a rigorous meaning categorically and type theoretically (using slice categories).

The kind of work that I try to do is not about language.  It is about reducing mentalistic models to purely causal models, about opening up black boxes to find complicated algorithms inside, about dissolving mysteries - in a word, about cognitive science.

And as we all know, language has nothing to do with cognitive science.