Continuation of: Grasping Slippery Things
Followup to: Possibility and Could-ness, Three Fallacies of Teleology
When I try to hit a reduction problem, what usually happens is that I "bounce" - that's what I call it. There's an almost tangible feel to the failure, once you abstract and generalize and recognize it. Looking back, it seems that I managed to say most of what I had in mind for today's post in "Grasping Slippery Things".

The "bounce" is when you try to analyze a word like could, or a notion like possibility, and end up saying, "The set of realizable worlds [A'] that follow from an initial starting world A operated on by a set of physical laws f." Here realizable contains the full mystery of "possible" - but you've made it into a basic symbol, and added some other symbols: the illusion of formality.
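In code, the bounce would look something like this - a deliberately hypothetical sketch, with all names invented for illustration, in which the "formal" definition just forwards the whole question to an unopened primitive:

```python
def successor_worlds(world_a, laws_f, candidate_worlds):
    """The 'formal' definition: the set of realizable worlds A'
    that follow from world A under physical laws f."""
    return {w for w in candidate_worlds if realizable(w, world_a, laws_f)}

def realizable(world, start, laws):
    # The full mystery of "possible" now lives inside this primitive.
    # Nothing has been analyzed; the black box has only been renamed.
    raise NotImplementedError("internal call to an as-yet-unknown cognitive algorithm")
```

The set-builder machinery around the edges is perfectly precise; the one symbol that mattered is still opaque.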
There are a number of reasons why I feel that modern philosophy, even analytic philosophy, has gone astray - so far astray that I simply can't make use of their years and years of dedicated work, even when they would seem to be asking questions closely akin to mine.
The proliferation of modal logics in philosophy is a good illustration of one major reason: Modern philosophy doesn't enforce reductionism, or even strive for it.
Most philosophers, as one would expect from Sturgeon's Law, are not very good. Which means that they're not even close to the level of competence it takes to analyze mentalistic black boxes into cognitive algorithms. Reductionism is, in modern times, an unusual talent. Insights on the order of Pearl et al.'s reduction of causality or Julian Barbour's reduction of time are rare.
So what these philosophers do instead, is "bounce" off the problem into a new modal logic: A logic with symbols that embody the mysterious, opaque, unopened black box. A logic with primitives like "possible" or "necessary", to mark the places where the philosopher's brain makes an internal function call to cognitive algorithms as yet unknown.
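To make that concrete with a standard example: in Kripke semantics, the possibility operator is given a precise truth condition, but all of its content is routed through an accessibility relation R that is itself left as an unanalyzed primitive:

$$w \Vdash \Diamond\varphi \quad\iff\quad \exists w'\,\bigl(w \mathrel{R} w' \;\wedge\; w' \Vdash \varphi\bigr)$$

"Phi is possible at world w" just means "phi holds at some world accessible from w" - and which worlds count as accessible is exactly the question that was supposed to be answered.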
And then they publish it and say, "Look at how precisely I have defined my language!"
In the Wittgensteinian era, philosophy has been about language - about trying to give precise meaning to terms.
The kind of work that I try to do is not about language. It is about reducing mentalistic models to purely causal models, about opening up black boxes to find complicated algorithms inside, about dissolving mysteries - in a word, about cognitive science.
That's what I think post-Wittgensteinian philosophy should be about - cognitive science.
But this kind of reductionism is hard work. Ideally, you're looking for insights on the order of Julian Barbour's Machianism, to reduce time to non-time; insights on the order of Judea Pearl's conditional independence, to give a mathematical structure to causality that isn't just finding a new way to say "because"; insights on the order of Bayesianism, to show that there is a unique structure to uncertainty expressed quantitatively.
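For concreteness - these are standard results, not anything original here - Pearl's structure for causality is the factorization of a joint distribution according to a causal graph, and the unique quantitative structure of uncertainty is Bayes' theorem:

$$P(x_1, \dots, x_n) \;=\; \prod_{i=1}^{n} P\bigl(x_i \mid \mathrm{pa}(x_i)\bigr), \qquad P(H \mid E) \;=\; \frac{P(E \mid H)\,P(H)}{P(E)}$$

where pa(x_i) denotes the parents of x_i in the causal graph. The point is that these replace a mysterious word ("because", "probably") with exploitable mathematical structure, rather than with another opaque symbol.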
Just to make it clear that I'm not claiming a magical and unique ability, I would name Gary Drescher's Good and Real as an example of a philosophical work that is commensurate with the kind of thinking I have to try to do. Gary Drescher is an AI researcher turned philosopher, which may explain why he understands the art of asking, not What does this term mean?, but What cognitive algorithm, as seen from the inside, would generate this apparent mystery?
(I paused while reading the first chapter of G&R. It was immediately apparent that Drescher was thinking along lines so close to my own that I wanted to write up my own independent component before looking at his - I didn't want his way of phrasing things to take over my writing. Now that I'm done with zombies and metaethics, G&R is next up on my reading list.)
Consider the popular philosophical notion of "possible worlds". Have you ever seen a possible world? Is an electron either "possible" or "necessary"? Clearly, if you are talking about "possibility" and "necessity", you are talking about things that are not commensurate with electrons - which means that you're still dealing with a world as seen from the inner surface of a cognitive algorithm, a world of surface levers with all the underlying machinery hidden.
I have to make an AI out of electrons, in this one actual world. I can't make the AI out of possibility-stuff, because I can't order a possible transistor. If the AI ever thinks about possibility, it's not going to be because the AI noticed a possible world in its closet. It's going to be because the non-ontologically-fundamental construct of "possibility" turns out to play a useful role in modeling and manipulating the one real world, a world that does not contain any fundamentally possible things. Which is to say that algorithms which make use of a "possibility" label, applied at certain points, will turn out to capture an exploitable regularity of the one real world. This is the kind of knowledge that Judea Pearl writes about. This is the kind of knowledge that AI researchers need. It is not the kind of knowledge that modern philosophy holds itself to the standard of having generated, before a philosopher gets credit for having written a paper.
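Here is one modest, concrete way such a "possibility" label can cash out - a sketch under invented names, not the unique analysis: treat "possible" as a derived label meaning "reachable from the actual state under the system's real transition dynamics".

```python
from collections import deque

def possible_states(initial, transitions):
    """Breadth-first search labeling states 'possible' = reachable.

    Nothing here is ontologically fundamental possibility-stuff:
    the label just compresses an exploitable regularity - which
    states the one real dynamics can actually produce.
    """
    seen = {initial}
    frontier = deque([initial])
    while frontier:
        state = frontier.popleft()
        for nxt in transitions(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return seen

# Toy world: a counter that can step up or down within [0, 3].
step = lambda n: [m for m in (n - 1, n + 1) if 0 <= m <= 3]
print(possible_states(0, step))  # {0, 1, 2, 3}
```

The label is computed, not observed in a closet; it earns its keep by guiding planning in the one real world.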
Philosophers keep telling me that I should look at philosophy. I have, every now and then. But the main reason I look at philosophy is when I find it desirable to explain things to philosophers. The work that has been done - the products of these decades of modern debate - is, by and large, just not commensurate with the kind of analysis AI needs. I feel a bit awful about saying this, because it feels like I'm telling philosophers that their life's work has been a waste of time - not that professional philosophers would be likely to regard me as an authority on whose life has been a waste of time. But if there's any centralized repository of reductionist-grade naturalistic cognitive philosophy, I've never heard mention of it.
And: Philosophy is just not oriented to the outlook of someone who needs to resolve the issue, implement the corresponding solution, and then find out - possibly fatally - whether they got it right or wrong. Philosophy doesn't resolve things, it compiles positions and arguments. And if the debate about zombies is still considered open, then I'm sorry, but as Jeffreyssai says: Too slow! It would be one matter if I could just look up the standard answer and find that, lo and behold, it is correct. But philosophy, which hasn't come to conclusions and moved on from cognitive reductions that I regard as relatively simple, doesn't seem very likely to build complex correct structures of conclusions.
Sorry - but philosophy, even the better grade of modern analytic philosophy, doesn't seem to end up commensurate with what I need, except by accident or by extraordinary competence. Parfit comes to mind; and I haven't read much Dennett, but Dennett does seem to be trying to do the same sort of thing that I try to do; and of course there's Gary Drescher. If there were a repository of philosophical work along those lines - not concerned with defending basic ideas like anti-zombieism, but with accepting those basic ideas and moving on to challenge more difficult quests of naturalism and cognitive reductionism - then that, I might well be interested in reading. But I don't know who, besides a few heroes, would be able to compile such a repository - who else would see a modal logic as an obvious bounce-off-the-mystery?
I wrote a bunch of comments on this work while discussing it with Risto_Saarelma, but thought I should post them here instead. I came here to discuss certain theories that sit on the border between philosophy and something that could be useful for the construction of AI. I've developed my own such theory based on many years of work on an unusual metaphysical system called the Metaphysics of Quality, which is largely ignored in the academy and deviates from the tradition. It's not very old as traditions go: the discussion around it began in 1974. So that's my background.
What would I answer to the question of whether my work is about language? I'd say it's about both language and algorithms, but it's not Chomsky-style work. It does account for the symbol grounding problem in a way that is not typically expected of a theory of language. But the point is, and I think this is important: even the mentalistic models do not currently exist in a coherent form. So how are people going to reduce something undefined to purely causal models? That doesn't sound possible, so I'd say the goals of RP are relevant.
I would imagine mainstream philosophy to be hard work, too. Unfortunately, that work would to a great extent consist of making correct references to highly illegible works.
Well... I wouldn't say RP enforces reductionism or that it doesn't enforce reductionism. It kinda ruins RP if you develop a metatheory where theories are classified either as reductionist or nonreductionist. You can do that - it's not a logical contradiction - but the point of RP is to be such a theory that, even though we could construct such metatheoretic approaches to it, we don't want to, because doing so is not only useless but also complicates things for no apparent benefit. Unless, of course, we are not interested in AI but are trying to devise some very grand philosophy, though I'm not sure what that could be used for. My intention is that things like "reductionism" are placed within RP, instead of RP being placed into a box labeled "reductionism".
RP is supposed to define things recursively. That is not, to my knowledge, impossible. So I'm not sure why the definition would necessarily have to be reductive in some sense. LISP, to my knowledge, is not reductive. But I'm not sure what Eliezer means by "reductive". It seems like yet another philosophical concept. I'd better check whether it's defined somewhere on LW...
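For what it's worth, a recursive definition is well-formed so long as it is grounded in a base case - a trivial sketch, nothing specific to RP:

```python
def length(lst):
    # A recursive definition, grounded by its base case. It is
    # perfectly well-defined without "reducing" lists to anything
    # other than smaller lists.
    if not lst:
        return 0
    return 1 + length(lst[1:])

print(length(["a", "b", "c"]))  # 3
```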
I'm not a fetishist. Not in this matter, at least. I want to define things formally because the structure of the theory is very hard to understand otherwise. The formal definitions make it easier to find out things I would not have otherwise noticed. That's why I want to understand the formal definitions myself despite sometimes having other people practically do them for me.
I think that's pretty cogent criticism. I've found the same kinds of things troublesome.
I understand how Eliezer feels. I guess I don't even tell people they need to look at philosophy for its own sake - how should I know what someone else wants to do for its own sake? But it's not so simple with RP, because it could actually work for something. Good philosophy is simply hard to find, and if I hadn't studied the MOQ, I might very well now be laughing at Langan's CTMU along with many others, because I wouldn't understand what it is he is, a bit awkwardly, trying to express.
I'd like to illustrate the stagnation of academic philosophy with the following thought experiment. Let's suppose someone has solved the problem of induction. What is the solution like?
I've read academic publications to the point that I don't believe there is any work the academic community would, generally speaking, regard as a solution to the problem of induction. I simply don't believe many scholars think there really can be such a thing. They are interested in "refining" the debate somehow. They don't treat it as a matter that needs to be solved because it actually means something.
This example might not ring a bell for someone completely unfamiliar with academic philosophy, but I think it does illustrate how the field is flawed.
I'll go with 61 pages and quite a few formulae.