Philosophy: A Diseased Discipline
Part of the sequence: Rationality and Philosophy
Eliezer's anti-philosophy post Against Modal Logics was pretty controversial, while my recent pro-philosophy (by LW standards) post and my list of useful mainstream philosophy contributions were massively up-voted. This suggests a significant appreciation for mainstream philosophy on Less Wrong - not surprising, since Less Wrong covers so many philosophical topics.
If you followed the recent very long debate between Eliezer and me over the value of mainstream philosophy, you may have gotten the impression that we strongly diverge on the subject. But I suspect I agree more with Eliezer on the value of mainstream philosophy than I do with many Less Wrong readers - perhaps most.
That might sound odd coming from someone who writes a philosophy blog and spends most of his spare time doing philosophy, so let me explain myself. (Warning: broad generalizations ahead! There are exceptions.)
Failed methods
Large swaths of philosophy (e.g. continental and postmodern philosophy) often don't even try to be clear, rigorous, or scientifically respectable. This is philosophy of the "Uncle Joe's musings on the meaning of life" sort, except that it's dressed up in big words and long footnotes. You will occasionally stumble upon an argument, but it falls prey to magical categories and language confusions and non-natural hypotheses. You may also stumble upon science or math, but they are used to 'prove' conclusions that do not actually follow from the scientific data or the equations invoked.
Analytic philosophy is clearer, more rigorous, and better with math and science, but it does only a slightly better job of avoiding magical categories, language confusions, and non-natural hypotheses. Moreover, its central tool is intuition, which it deploys in near-total ignorance of how brains work. As Michael Vassar observes, philosophers are "spectacularly bad" at understanding that their intuitions are generated by cognitive algorithms.
Less Wrong Rationality and Mainstream Philosophy
Part of the sequence: Rationality and Philosophy
Despite Yudkowsky's distaste for mainstream philosophy, Less Wrong is largely a philosophy blog. Major topics include epistemology, philosophy of language, free will, metaphysics, metaethics, normative ethics, machine ethics, axiology, philosophy of mind, and more.
Moreover, standard Less Wrong positions on philosophical matters have been standard positions in a movement within mainstream philosophy for half a century. That movement is sometimes called "Quinean naturalism" after Harvard's W.V. Quine, who articulated the Less Wrong approach to philosophy in the 1960s. Quine was one of the most influential philosophers of the last 200 years, so I'm not talking about an obscure movement in philosophy.
Let us survey the connections. Quine thought that philosophy was continuous with science - and where it wasn't, it was bad philosophy. He embraced empiricism and reductionism. He rejected the notion of libertarian free will. He regarded postmodernism as sophistry. Like Wittgenstein and Yudkowsky, Quine didn't try to straightforwardly solve traditional Big Questions as much as he either dissolved those questions or reframed them such that they could be solved. He dismissed endless semantic arguments about the meaning of vague terms like knowledge. He rejected a priori knowledge. He rejected the notion of privileged philosophical insight: knowledge comes from ordinary knowledge, as best refined by science. Eliezer once said that philosophy should be about cognitive science, and Quine would agree. Quine famously wrote:
The stimulation of his sensory receptors is all the evidence anybody has had to go on, ultimately, in arriving at his picture of the world. Why not just see how this construction really proceeds? Why not settle for psychology?
But isn't this using science to justify science? Isn't that circular? Not quite, say Quine and Yudkowsky. It is merely "reflecting on your mind's degree of trustworthiness, using your current mind as opposed to something else." Luckily, the brain is the lens that sees its flaws. And thus, says Quine:
Epistemology, or something like it, simply falls into place as a chapter of psychology and hence of natural science.
Yudkowsky once wrote, "If there's any centralized repository of reductionist-grade naturalistic cognitive philosophy, I've never heard mention of it."
When I read that I thought: What? That's Quinean naturalism! That's Kornblith and Stich and Bickle and the Churchlands and Thagard and Metzinger and Northoff! There are hundreds of philosophers who do that!
You're in Newcomb's Box
Part 1: Transparent Newcomb with your existence at stake
Related: Newcomb's Problem and Regret of Rationality
Omega, a wise and trustworthy being, presents you with a one-time-only game and a surprising revelation.
"I have here two boxes, each containing $100," he says. "You may choose to take both Box A and Box B, or just Box B. You get all the money in the box or boxes you take, and there will be no other consequences of any kind. But before you choose, there is something I must tell you."
Omega pauses portentously.
"You were created by a god: a being called Prometheus. Prometheus was neither omniscient nor particularly benevolent. He was given a large set of blueprints for possible human embryos, and for each blueprint that pleased him he created that embryo and implanted it in a human woman. Here was how he judged the blueprints: any that he guessed would grow into a person who would choose only Box B in this situation, he created. If he judged that the embryo would grow into a person who chose both boxes, he filed that blueprint away unused. Prometheus's predictive ability was not perfect, but it was very strong; he was the god, after all, of Foresight."
Do you take both boxes, or only Box B?
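To see the tension concretely, here is a toy Python simulation of the setup. The 90% accuracy figure is my assumption for illustration; the post only says Prometheus's foresight was "very strong":

```python
import random

random.seed(0)

ACCURACY = 0.9  # hypothetical strength of Prometheus's foresight (assumed, not given in the post)

def created(disposition):
    """Prometheus instantiates a blueprint iff he predicts it will one-box."""
    correct = random.random() < ACCURACY
    predicted_one_box = (disposition == "one-box") if correct else (disposition == "two-box")
    return predicted_one_box

def payoff(disposition):
    # Both boxes always contain $100; the filtering acted on your existence,
    # not on the boxes' contents.
    return 200 if disposition == "two-box" else 100

for disposition in ("one-box", "two-box"):
    rate = sum(created(disposition) for _ in range(100_000)) / 100_000
    print(f"{disposition}: creation rate ~ {rate:.2f}, payoff if created = ${payoff(disposition)}")
```

The simulation exposes the conflict: conditional on existing, two-boxing dominates ($200 vs. $100), yet almost everyone who exists has a one-boxing disposition. Whether the second fact should move your choice is exactly the causal-versus-evidential question.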
David Chalmers' "The Singularity: A Philosophical Analysis"
David Chalmers is a leading philosopher of mind, and the first to publish a major philosophy journal article on the singularity:
Chalmers, D. (2010). "The Singularity: A Philosophical Analysis." Journal of Consciousness Studies 17:7-65.
Chalmers' article is a "survey" article in that it doesn't cover any arguments in depth, but quickly surveys a large number of positions and arguments in order to give the reader a "lay of the land." (Compare to Philosophy Compass, an entire journal of philosophy survey articles.) Because of this, Chalmers' paper is a remarkably broad and clear introduction to the singularity.
Singularitarian authors will also be pleased that they can now cite a peer-reviewed article by a leading philosopher of mind who takes the singularity seriously.
Below is a CliffsNotes of the paper for those who don't have time to read all 58 pages of it.
The Singularity: Is It Likely?
Chalmers focuses on the "intelligence explosion" kind of singularity, and his first project is to formalize and defend I.J. Good's 1965 argument. Defining AI as AI "of human level intelligence," AI+ as AI "of greater than human level," and AI++ as AI "of far greater than human level" (superintelligence), Chalmers updates Good's argument to the following:
- There will be AI (before long, absent defeaters).
- If there is AI, there will be AI+ (soon after, absent defeaters).
- If there is AI+, there will be AI++ (soon after, absent defeaters).
- Therefore, there will be AI++ (before too long, absent defeaters).
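Stripped of the temporal and defeater qualifiers, the logical skeleton of the argument is just a chain of modus ponens, which can be rendered in a few lines of Lean (a sketch; the philosophical work is all in defending the premises, not the inference):

```lean
-- A minimal propositional rendering of Good/Chalmers' argument,
-- ignoring "before long" and "absent defeaters".
example (AI AIplus AIplusplus : Prop)
    (h1 : AI)                      -- premise 1: there will be AI
    (h2 : AI → AIplus)             -- premise 2
    (h3 : AIplus → AIplusplus) :   -- premise 3
    AIplusplus :=                  -- conclusion
  h3 (h2 h1)
```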
By "defeaters," Chalmers means global catastrophes like nuclear war or a major asteroid impact. One way to satisfy premise (1) is to achieve AI through brain emulation (Sandberg & Bostrom, 2008). Against this suggestion, Lucas (1961), Dreyfus (1972), and Penrose (1994) argue that human cognition is not the sort of thing that could be emulated. Chalmers (1995; 1996, chapter 9) has responded to these criticisms at length. Briefly, Chalmers notes that even if the brain is not a rule-following algorithmic symbol system, we can still emulate it if it is mechanical. (Some say the brain is not mechanical, but Chalmers dismisses this as being discordant with the evidence.)
Theists are wrong; is theism?
Many folk here on LW take the simulation argument (in its more general forms) seriously. Many others take Singularitarianism[1] seriously. Still others take Tegmark cosmology (and related big universe hypotheses) seriously. But then I see them proceed to self-describe as atheist (instead of omnitheist, theist, deist, having a predictive distribution over states of religious belief, et cetera), and many tend to be overtly dismissive of theism. Is this signalling cultural affiliation, an attempt to communicate a point estimate, or what?
I am especially confused that the theism/atheism debate is considered a closed question on Less Wrong. Eliezer's reformulations of the Problem of Evil in terms of Fun Theory provided a fresh look at theodicy, but I do not find those arguments conclusive. A look at Luke Muehlhauser's blog surprised me; the arguments against theism are just not nearly as convincing as I'd been brought up to believe[2], nor nearly convincing enough to cause what I saw as massive overconfidence on the part of most atheists, aspiring rationalists or no.
It may be that theism is in the class of hypotheses that we have yet to develop a strong enough practice of rationality to handle, even if the hypothesis has non-negligible probability given our best understanding of the evidence. We are becoming adept at wielding Occam's razor, but it may be that we are still too foolhardy to wield Solomonoff's lightsaber, Tegmark's Black Blade of Disaster, without chopping off our own arm. The literature on cognitive biases gives us every reason to believe we are poorly equipped to reason about infinite cosmology, decision theory, the motives of superintelligences, or our place in the universe.
Due to these considerations, it is unclear if we should go ahead doing the equivalent of philosoraptorizing amidst these poorly asked questions so far outside the realm of science. This is not the sort of domain where one should tread if one is feeling insecure in one's sanity, and it is possible that no one should tread here. Human philosophers are probably not as good at philosophy as hypothetical Friendly AI philosophers (though we've seen in the cases of decision theory and utility functions that not everything can be left for the AI to solve). I don't want to stress your epistemology too much, since it's not like your immortal soul[3] matters very much. Does it?
Added: By theism I do not mean the hypothesis that Jehovah created the universe. (Well, mostly.) I am talking about the possibility of agenty processes in general creating this universe, as opposed to impersonal math-like processes like cosmological natural selection.
Added: The answer to the question raised by the post is "Yes, theism is wrong, and we don't have good words for the thing that looks a lot like theism but has less unfortunate connotations, but we do know that calling it theism would be stupid." As to whether this universe gets most of its reality fluid from agenty creators... perhaps we will come back to that argument on a day with less distracting terminology on the table.
[1] Of either the 'AI-go-FOOM' or 'someday we'll be able to do lots of brain emulations' variety.
[2] I was never a theist, and only recently began to question some old assumptions about the likelihood of various Creators. This perhaps either lends credibility to my interest, or lends credibility to the idea that I'm insane.
[3] Or the set of things that would have been translated to Archimedes by the Chronophone as the equivalent of an immortal soul (id est, whatever concept ends up being actually significant).
Unsolved Problems in Philosophy Part 1: The Liar's Paradox
Graham Priest discusses the Liar's Paradox for a NY Times blog. It seems that one way of responding to the Liar's Paradox is to admit dialetheia: true contradictions. Less Wrong, can you do what modern philosophers have failed to do and solve or successfully dissolve the Liar's Paradox? This doesn't seem nearly as hard as solving free will.
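For concreteness, here is a toy Python rendering of Priest's proposal. In his Logic of Paradox, sentences can take a third value B ("both true and false"), and negation maps B to itself, so the Liar sentence, which asserts its own negation, has a stable valuation that classical two-valued logic cannot give it:

```python
# Truth values in Priest's "Logic of Paradox": T (true only), F (false only),
# B (both true and false -- a dialetheia).
NEG = {"T": "F", "F": "T", "B": "B"}

def liar_fixed_points():
    """The Liar sentence L asserts not-L, so a valuation v is stable
    only if v(L) == NEG[v(L)]."""
    return [v for v in ("T", "F", "B") if NEG[v] == v]

print(liar_fixed_points())  # prints ['B']: only the dialetheic value is stable
```

This shows what the dialetheist buys with the third value, though not, of course, whether admitting true contradictions is a solution or a surrender.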
This post is a practice problem for what may become a sequence on unsolved problems in philosophy.
If a tree falls on Sleeping Beauty...
Several months ago, we had an interesting discussion about the Sleeping Beauty problem, which runs as follows:
Sleeping Beauty volunteers to undergo the following experiment. On Sunday she is given a drug that sends her to sleep. A fair coin is then tossed just once in the course of the experiment to determine which experimental procedure is undertaken. If the coin comes up heads, Beauty is awakened and interviewed on Monday, and then the experiment ends. If the coin comes up tails, she is awakened and interviewed on Monday, given a second dose of the sleeping drug, and awakened and interviewed again on Tuesday. The experiment then ends on Tuesday, without flipping the coin again. The sleeping drug induces a mild amnesia, so that she cannot remember any previous awakenings during the course of the experiment (if any). During the experiment, she has no access to anything that would give a clue as to the day of the week. However, she knows all the details of the experiment.
Each interview consists of one question, “What is your credence now for the proposition that our coin landed heads?”
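Both standard answers can be exhibited by simulation; the dispute is over which frequency the word "credence" should track. A minimal Python sketch:

```python
import random

random.seed(1)
N = 100_000  # simulated runs of the experiment

heads_experiments = 0   # coin outcomes, counted once per experiment
awakenings = 0          # total interviews across all experiments
heads_awakenings = 0    # interviews at which the coin is in fact heads

for _ in range(N):
    heads = random.random() < 0.5
    heads_experiments += heads
    wakes = 1 if heads else 2   # heads: Monday only; tails: Monday and Tuesday
    awakenings += wakes
    if heads:
        heads_awakenings += 1   # the single heads awakening

print("heads frequency per experiment:", heads_experiments / N)         # ~1/2 (the "halfer" count)
print("heads frequency per awakening:", heads_awakenings / awakenings)  # ~1/3 (the "thirder" count)
```

Both frequencies are objective facts about the setup; the simulation cannot say which one Beauty's "credence now" ought to equal, which is precisely the point made below.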
In the end, the fact that there were so many reasonable-sounding arguments for both sides, and so much disagreement about a simple-sounding problem among above-average rationalists, should have set off major alarm bells. Yet only a few people pointed this out. Most commenters, including me, followed the silly strategy of trying to answer the question. I did so even after I noticed that my intuition could see both answers as being right depending on which way I looked at it - which in retrospect would have been a perfect time to say "I notice that I am confused" and backtrack a bit…
And on reflection, considering my confusion rather than trying to consider the question on its own terms, it seems to me that the problem (as it’s normally stated) is completely a tree-falling-in-the-forest problem: a debate about the normatively “correct” degree of credence which only seemed like an issue because any conclusions about what Sleeping Beauty “should” believe weren’t paying their rent, were disconnected from any expectation of feedback from reality about how right they were.
The conscious tape
This post comprises one question and no answers. You have been warned.
I was reading "How minds can be computational systems", by William Rapaport, and something caught my attention. He wrote,
Computationalism is - or ought to be - the thesis that cognition is computable ... Note, first, that I have said that computationalism is the thesis that cognition is computable, not that it is computation (as Pylyshyn 1985 p. xiii characterizes it). ... To say that cognition is computable is to say that there is an algorithm - more likely, a collection of interrelated algorithms - that computes it. So, what does it mean to say that something 'computes cognition'? ... cognition is computable if and only if there is an algorithm ... that computes this function (or functions).
Rapaport was talking about cognition, not consciousness. The contention between these hypotheses, however, is only interesting if you are talking about consciousness; if you're talking about "cognition", it's just a choice between two different ways to define cognition.
When it comes to consciousness, I consider myself a computationalist. But I hadn't realized before that my explanation of consciousness as computational "works" by jumping back and forth between those two incompatible positions. Each one provides part of what I need; but each, on its own, seems impossible to me; and they are probably mutually exclusive.
Alien parasite technical guy
Custers & Aarts have a paper in the July 2 Science called "The Unconscious Will: How the pursuit of goals operates outside of conscious awareness". It reviews work indicating that people's brains make decisions and set goals without the brains' "owners" ever being consciously aware of them.
A famous early study is Libet et al. 1983, which claimed to find signals being sent to the fingers before people were aware of deciding to move them. This is a dubious study; it assumes that our perception of time is accurate, whereas in fact our brains shuffle our percept timeline around in our heads before presenting it to us, in order to provide us with a sequence of events that is useful to us (see Dennett's Consciousness Explained). Also, Trevena & Miller repeated the test, and also looked at cases where people did not move their fingers; they found that the signal measured by Libet et al. could not predict whether the fingers would move.
Fortunately, the flaws of Libet et al. were not discovered before it spawned many studies showing that unconscious priming of concepts related to goals causes people to spend more effort pursuing those goals; and those are what Custers & Aarts review. In brief: If you expose someone, even using subliminal messages, to pictures, words, etc., closely-connected to some goals and not to others, people will work harder towards those goals without being aware of it.
Metaphilosophical Mysteries
Creating Friendly AI seems to require us humans to either solve most of the outstanding problems in philosophy, or to solve meta-philosophy (i.e., what is the nature of philosophy, how do we practice it, and how should we program an AI to do it?), and to do that in an amount of time measured in decades. I'm not optimistic about our chances of success, but out of these two approaches, the latter seems slightly easier, or at least less effort has already been spent on it. This post tries to take a small step in that direction, by asking a few questions that I think are worth investigating or keeping in the back of our minds, and generally raising awareness and interest in the topic.