Friendly AI and the limits of computational epistemology
Very soon, Eliezer is supposed to start posting a new sequence, on "Open Problems in Friendly AI". After several years in which its activities were dominated by the topic of human rationality, this ought to mark the beginning of a new phase for the Singularity Institute, one in which it is visibly working on artificial intelligence once again. If everything comes together, then it will now be a straight line from here to the end.
I foresee that, once the new sequence gets going, it won't be that easy to question the framework in terms of which the problems are posed. So I consider this my last opportunity for some time, to set out an alternative big picture. It's a framework in which all those rigorous mathematical and computational issues still need to be investigated, so a lot of "orthodox" ideas about Friendly AI should carry across. But the context is different, and it makes a difference.
Begin with the really big picture. What would it take to produce a friendly singularity? You need to find the true ontology, find the true morality, and win the intelligence race. For example, if your Friendly AI was to be an expected utility maximizer, it would need to model the world correctly ("true ontology"), value the world correctly ("true morality"), and it would need to outsmart its opponents ("win the intelligence race").
Now let's consider how SI will approach these goals.
The evidence says that the working ontological hypothesis of SI-associated researchers will be timeless many-worlds quantum mechanics, possibly embedded in a "Tegmark Level IV multiverse", with the auxiliary hypothesis that algorithms can "feel like something from inside" and that this is what conscious experience is.
The true morality is to be found by understanding the true decision procedure employed by human beings, and idealizing it according to criteria implicit in that procedure. That is, one would seek to understand conceptually the physical and cognitive causation at work in concrete human choices, both conscious and unconscious, with the expectation that there will be a crisp, complex, and specific answer to the question "why and how do humans make the choices that they do?" Undoubtedly there would be some biological variation, and there would also be significant elements of the "human decision procedure", as instantiated in any specific individual, which are set by experience and by culture, rather than by genetics. Nonetheless one expects that there is something like a specific algorithm or algorithm-template here, which is part of the standard Homo sapiens cognitive package and biological design; just another anatomical feature, particular to our species.
Having reconstructed this algorithm via scientific analysis of the human genome, brain, and behavior, one would then idealize it using its own criteria. This algorithm defines the de-facto value system that human beings employ, but that is not necessarily the value system they would wish to employ; nonetheless, human self-dissatisfaction also arises from the use of this algorithm to judge ourselves. So it contains the seeds of its own improvement. The value system of a Friendly AI is to be obtained from the recursive self-improvement of the natural human decision procedure.
Finally, this is all for naught if seriously unfriendly AI appears first. It isn't good enough just to have the right goals; you must be able to carry them out. In the global race towards artificial general intelligence, SI might hope to "win" either by being the first to achieve AGI, or by having its prescriptions adopted by those who do first achieve AGI. They have some in-house competence regarding models of universal AI like AIXI, and they have many contacts in the world of AGI research, so they're at least engaged with this aspect of the problem.
Upon examining this tentative reconstruction of SI's game-plan, I find I have two major reservations. The big one, and the one most difficult to convey, concerns the ontological assumptions. In second place is what I see as an undue emphasis on the idea of outsourcing the methodological and design problems of FAI research to uploaded researchers and/or a proto-FAI which is simulating or modeling human researchers. This is supposed to be a way to finesse philosophical difficulties like "what is consciousness anyway"; you just simulate some humans until they agree that they have solved the problem. The reasoning goes that if the simulation is good enough, it will be just as good as if ordinary non-simulated humans solved it.
I also used to have a third major criticism, that the big SI focus on rationality outreach was a mistake; but it brought in a lot of new people, and in any case that phase is ending, with the creation of CFAR, a separate organization. So we are down to two basic criticisms.
First, "ontology". I do not think that SI intends to just program its AI with an apriori belief in the Everett multiverse, for two reasons. First, like anyone else, their ventures into AI will surely begin with programs that work within very limited and more down-to-earth ontological domains. Second, at least some of the AI's world-model ought to be obtained rationally. Scientific theories are supposed to be rationally justified, e.g. by their capacity to make successful predictions, and one would prefer that the AI's ontology results from the employment of its epistemology, rather than just being an axiom; not least because we want it to be able to question that ontology, should the evidence begin to count against it.
For this reason, although I have campaigned against many-worlds dogmatism on this site for several years, I'm not especially concerned about the possibility of SI producing an AI that is "dogmatic" in this way. For an AI to independently assess the merits of rival physical theories, the theories would need to be expressed with much more precision than they have been in LW's debates, and the disagreements about which theory is rationally favored would be replaced with objectively resolvable choices among exactly specified models.
The real problem, which is not just SI's problem, but a chronic and worsening problem of intellectual culture in the era of mathematically formalized science, is a dwindling of the ontological options to materialism, platonism, or an unstable combination of the two, and a similar restriction of epistemology to computation.
Any assertion that we need an ontology beyond materialism (or physicalism or naturalism) is liable to be immediately rejected by this audience, so I shall immediately explain what I mean. It's just the usual problem of "qualia". There are qualities which are part of reality - we know this because they are part of experience, and experience is part of reality - but which are not part of our physical description of reality. The problematic "belief in materialism" is actually the belief in the completeness of current materialist ontology, a belief which prevents people from seeing any need to consider radical or exotic solutions to the qualia problem. There is every reason to think that the world-picture arising from a correct solution to that problem will still be one in which you have "things with states" causally interacting with other "things with states", and a sensible materialist shouldn't find that objectionable.
What I mean by platonism, is an ontology which reifies mathematical or computational abstractions, and says that they are the stuff of reality. Thus assertions that reality is a computer program, or a Hilbert space. Once again, the qualia are absent; but in this case, instead of the deficient ontology being based on supposing that there is nothing but particles, it's based on supposing that there is nothing but the intellectual constructs used to model the world.
Although the abstract concept of a computer program (the abstractly conceived state machine which it instantiates) does not contain qualia, people often treat programs as having mind-like qualities, especially by imbuing them with semantics - the states of the program are conceived to be "about" something, just like thoughts are. And thus computation has been the way in which materialism has tried to restore the mind to a place in its ontology. This is the unstable combination of materialism and platonism to which I referred. It's unstable because it's not a real solution, though it can live unexamined for a long time in a person's belief system.
An ontology which genuinely contains qualia will nonetheless still contain "things with states" undergoing state transitions, so there will be state machines, and consequently, computational concepts will still be valid, they will still have a place in the description of reality. But the computational description is an abstraction; the ontological essence of the state plays no part in this description; only its causal role in the network of possible states matters for computation. The attempt to make computation the foundation of an ontology of mind is therefore proceeding in the wrong direction.
But here we run up against the hazards of computational epistemology, which is playing such a central role in artificial intelligence. Computational epistemology is good at identifying the minimal state machine which could have produced the data. But it cannot by itself tell you what those states are "like". It can only say that X was probably caused by a Y that was itself caused by Z.
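To make the claim about computational epistemology concrete, here is a toy illustration of my own construction (not from the essay): inferring the smallest deterministic state machine consistent with an observed sequence. It recovers causal/transition structure, but nothing in the procedure says what the states are "like":

```python
# Toy illustration (my own construction): find the smallest deterministic
# "state machine" that could have produced the observed data. The search
# recovers structure (how many states, how they transition) but says
# nothing about the intrinsic nature of those states.
from itertools import product

def machine_output(n_states, trans, out, length):
    """Run an autonomous Moore machine from state 0, emitting one symbol per step."""
    s, emitted = 0, []
    for _ in range(length):
        emitted.append(out[s])
        s = trans[s]
    return "".join(emitted)

def minimal_machine(data, max_states=4):
    """Return the machine with the fewest states that reproduces `data`."""
    for n in range(1, max_states + 1):
        # enumerate every transition function and every output labeling
        for trans in product(range(n), repeat=n):
            for out in product("01", repeat=n):
                if machine_output(n, trans, out, len(data)) == data:
                    return n, trans, out
    return None

n, trans, out = minimal_machine("010101010101")
print(n)  # 2: an alternating sequence needs two states, and no more
```

The point of the sketch is that the inferred object is purely relational: the two states are distinguished only by their transitions and outputs, which is exactly the sense in which this kind of epistemology "cannot say what the states are like."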
Among the properties of human consciousness are knowledge that something exists, knowledge that consciousness exists, and a long string of other facts about the nature of what we experience. Even if an AI scientist employing a computational epistemology managed to produce a model of the world which correctly identified the causal relations between consciousness, its knowledge, and the objects of its knowledge, the AI scientist would not know that its X, Y, and Z refer to, say, "knowledge of existence", "experience of existence", and "existence". The same might be said of any successful analysis of qualia, knowledge of qualia, and how they fit into neurophysical causality.
It would be up to human beings - for example, the AI's programmers and handlers - to ensure that entities in the AI's causal model were given appropriate significance. And here we approach the second big problem, the enthusiasm for outsourcing the solution of hard problems of FAI design to the AI and/or to simulated human beings. The latter is a somewhat impractical idea anyway, but here I want to highlight the risk that the AI's designers will have false ontological beliefs about the nature of mind, which are then implemented apriori in the AI. That strikes me as far more likely than implanting a wrong apriori about physics; computational epistemology can discriminate usefully between different mathematical models of physics, because it can judge one state machine model as better than another, and current physical ontology is essentially one of interacting state machines. But as I have argued, not only must the true ontology be deeper than state-machine materialism, there is no way for an AI employing computational epistemology to bootstrap to a deeper ontology.
In a phrase: to use computational epistemology is to commit to state-machine materialism as your apriori ontology. And the problem with state-machine materialism is not that it models the world in terms of causal interactions between things-with-states; the problem is that it can't go any deeper than that, yet apparently we can. Something about the ontological constitution of consciousness makes it possible for us to experience existence, to have the concept of existence, to know that we are experiencing existence, and similarly for the experience of color, time, and all those other aspects of being that fit so uncomfortably into our scientific ontology.
It must be that the true epistemology, for a conscious being, is something more than computational epistemology. And maybe an AI can't bootstrap its way to knowing this expanded epistemology - because an AI doesn't really know or experience anything, only a consciousness, whether natural or artificial, does those things - but maybe a human being can. My own investigations suggest that the tradition of thought which made the most progress in this direction was the philosophical school known as transcendental phenomenology. But transcendental phenomenology is very unfashionable now, precisely because of apriori materialism. People don't see what "categorial intuition" or "adumbrations of givenness" or any of the other weird phenomenological concepts could possibly mean for an evolved Bayesian neural network; and they're right, there is no connection. But the idea that a human being is a state machine running on a distributed neural computation is just a hypothesis, and I would argue that it is a hypothesis in contradiction with so much of the phenomenological data, that we really ought to look for a more sophisticated refinement of the idea. Fortunately, 21st-century physics, if not yet neurobiology, can provide alternative hypotheses in which complexity of state originates from something other than concatenation of parts - for example, entanglement, or from topological structures in a field. In such ideas I believe we see a glimpse of the true ontology of mind, one which from the inside resembles the ontology of transcendental phenomenology; which in its mathematical, formal representation may involve structures like iterated Clifford algebras; and which in its biophysical context would appear to be describing a mass of entangled electrons in that hypothetical sweet spot, somewhere in the brain, where there's a mechanism to protect against decoherence.
Of course this is why I've talked about "monads" in the past, but my objective here is not to promote neo-monadology, that's something I need to take up with neuroscientists and biophysicists and quantum foundations people. What I wish to do here is to argue against the completeness of computational epistemology, and to caution against the rejection of phenomenological data just because it conflicts with state-machine materialism or computational epistemology. This is an argument and a warning that should be meaningful for anyone trying to make sense of their existence in the scientific cosmos, but it has a special significance for this arcane and idealistic enterprise called "friendly AI". My message for friendly AI researchers is not that computational epistemology is invalid, or that it's wrong to think about the mind as a state machine, just that all that isn't the full story. A monadic mind would be a state machine, but ontologically it would be different from the same state machine running on a network of a billion monads. You need to do the impossible one more time, and make your plans bearing in mind that the true ontology is something more than your current intellectual tools allow you to represent.
"Where Am I?", by Daniel Dennett
"Where Am I?" is a short story by Daniel Dennett from his book Brainstorms: Philosophical Essays on Mind and Psychology. Some of you might already be familiar with it.
The story is a humorous semi-science-fiction one, in which Dennett gets a job offer from the Pentagon that entails moving his brain into a vat, without actually moving his point of view. Later on it brings up questions about uploading and what it would mean in terms of diverging perspectives and so on. Aside from being a joy to read, it offers solutions to a few hurdles about the nature of consciousness and personal identity.
Suppose, I argued to myself, I were now to fly to California, rob a bank, and be apprehended. In which state would I be tried: in California, where the robbery took place, or in Texas, where the brains of the outfit were located? Would I be a California felon with an out-of-state brain, or a Texas felon remotely controlling an accomplice of sorts in California? It seemed possible that I might beat such a rap just on the undecidability of that jurisdictional question, though perhaps it would be deemed an interstate, and hence Federal, offense.
Oh, mainstream philosophy.
http://chronicle.com/article/Is-Death-Bad-for-You-/131818/
Summary: Shelly Kagan, Yale philosophy professor, discusses the argument that death isn't bad for you, because when we are dead we won't care. He hunts around for a justification, doesn't find anything satisfactory, never even paints a clear picture of what "satisfactory" would look like, and ends up conveying mostly mysteriousness to the audience.
There are a variety of right ways to approach this argument. One good goal is to understand what's going on in someone's head when they say that death is bad for you.
Reading the article, a bell rang for me about all this discussion of "possible worlds" - for example, the idea of feeling pity for people who don't exist. We usually don't interact with people who don't exist, so what process has led us to compare these different worlds against each other?
The answer is a decision-making process. "Possible worlds" doesn't mean spawning any physical universes - it's a convenient shorthand for imagined possible worlds, which we (in our capacity as intelligent apes) compare against each other, usually as part of a consequentialist decision process.
Once you start looking, you see the fingerprints of decision-making all over the article. It's the machinery that generates these possible worlds to think about, and the context that colors them. So I think noticing that "possible worlds <- us imagining possible worlds as part of our decision-making" is a good relationship for understanding topics like this.
Edit for clarity: The basic idea is that death being bad is, at its root, a function of the decision-making bits of our brains. This can be seen not just from a priori claims about "low utility = bad," but from the structure of what Shelly Kagan hunts around for, which mainly involves choices between possible worlds.
Be careful with thought experiments
Thagard (2012) contains a nicely compact passage on thought experiments:
Grisdale’s (2010) discussion of modern conceptions of water refutes a highly influential thought experiment that the meaning of water is largely a matter of reference to the world rather than mental representation. Putnam (1975) invited people to consider a planet, Twin Earth, that is a near duplicate of our own. The only difference is that on Twin Earth water is a more complicated substance XYZ rather than H2O. Water on Twin Earth is imagined to be indistinguishable from H2O, so people have the same mental representation of it. Nevertheless, according to Putnam, the meaning of the concept water on Twin Earth is different because it refers to XYZ rather than H2O. Putnam’s famous conclusion is that “meaning just ain’t in the head.”
The apparent conceivability of Twin Earth as identical to Earth except for the different constitution of water depends on ignorance of chemistry. As Grisdale (2010) documents, even a slight change in the chemical constitution of water produces dramatic changes in its effects. If normal hydrogen is replaced by different isotopes, deuterium or tritium, the water molecule markedly changes its chemical properties. Life would be impossible if H2O were replaced by heavy water, D2O or T2O; and compounds made of elements different from hydrogen and oxygen would be even more different in their properties. Hence Putnam’s thought experiment is scientifically incoherent: If water were not H2O, Twin Earth would not be at all like Earth. [See also Universal Fire. --Luke]
This incoherence should serve as a warning to philosophers who try to base theories on thought experiments, a practice I have criticized in relation to concepts of mind (Thagard, 2010a, ch. 2). Some philosophers have thought that the nonmaterial nature of consciousness is shown by their ability to imagine beings (zombies) who are physically just like people but who lack consciousness. It is entirely likely, however, that once the brain mechanisms that produce consciousness are better understood, it will become clear that zombies are as fanciful as Putnam’s XYZ. Just as imagining that water is XYZ is a sign only of ignorance of chemistry, imagining that consciousness is nonbiological may well turn out to reveal ignorance rather than some profound conceptual truth about the nature of mind. Of course, the hypothesis that consciousness is a brain process is not part of most people’s everyday concept of consciousness, but psychological concepts can progress just like ones in physics and chemistry. [See also the Zombies Sequence. --Luke]
Request for feedback: paper on fine-tuning and the multiverse hypothesis
A while back, I posted in the "What are you working on?" thread about a paper I was working on. A few people wanted to see it once I have a complete draft, and I'm of course independently interested in obtaining feedback before I move on with it.
The paper doesn't presuppose much philosophical jargon that isn't easily googleable, I think. Math-wise, you need to be somewhat comfortable with basic conditional probabilities. I'm interested in finding out about any math errors, other non sequiturs, and other flaws in my discussion. I'd also like to find out about general impressions, such as what I should have spilled more or less ink on. Some notation is unfinished (subscripts, singular/plural first person, etc.), but it's thoroughly readable.
ABSTRACT: According to a standard form of the fine-tuning argument, the apparent anthropic fine-tuning of the physical constants and boundary conditions of our universe confirms the multiverse hypothesis. According to the inverse gambler’s fallacy objection, this view is mistaken: although the multiverse hypothesis makes the existence of a life-permitting universe more probable than it would be on a single-universe theory, it does not make it any more probable that our universe should be life-permitting, and thus is not confirmed by our total evidence. We examine recent replies to this objection and conclude that they all fall short, usually due to a shared weakness. We then show how a synthetic reply, obtained by combining independent insights from the literature, can overcome the weakness afflicting its predecessors.
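The probabilistic structure of the objection can be shown with a toy model. This is my own construction, and the probability p and universe count N are assumed illustrative values, not figures from the paper:

```python
# Toy model of the inverse gambler's fallacy objection (illustrative only;
# p and N are assumed values, not taken from the paper).
p = 0.01      # probability that any given universe is life-permitting
N = 100       # number of universes under the multiverse hypothesis

# Evidence E1: "some universe is life-permitting"
p_E1_single = p                    # single-universe hypothesis
p_E1_multi = 1 - (1 - p) ** N      # multiverse hypothesis

# Evidence E2: "this particular universe (ours) is life-permitting",
# treating 'this universe' as fixed independently of the evidence
p_E2_single = p
p_E2_multi = p                     # extra universes don't raise this

print(p_E1_multi / p_E1_single)    # much greater than 1: E1 favors the multiverse
print(p_E2_multi / p_E2_single)    # exactly 1: E2 favors neither hypothesis
```

On this way of individuating the evidence, the multiverse hypothesis is confirmed by "some universe is life-permitting" but not by "our universe is life-permitting," which is the objection's core; the disputes the paper surveys concern which of E1 and E2 is our total evidence.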
If you'd like a slightly more detailed description before deciding whether or not to read the whole thing, see my post.
Here is the actual paper: DOCX PDF (on some computers, italicized Times New Roman looks weird in the PDF)
EDIT 5/9/12: Current draft (edited, shortened to 13.5K words) is here:
DOCX: http://bit.ly/Jc4pXr
PDF: http://bit.ly/Jdc7z3
NOTE: The paper occasionally makes use of the notion of a person as a metaphysical individual. Roughly and likely inaccurately, this is the concept of an individual essence that can only be instantiated once in a possible world and is partly independent of the physical pattern it inhabits (i.e. you can have different possible worlds that are physically identical but contain different individuals -- I think this is what Eliezer refers to as "the philosophical notion of indexical identity apart from pattern identity"). I personally find this concept unmotivated to say the least; it figures in the paper only because some of the arguments discussed rely on it; and it is inessential for my proposed reply. If you're going to weigh in on this, I'd rather you make suggestions as to how I could gracefully express that I find the concept unhelpful while still engaging with the arguments.
In Defense of Ayn Rand
In Defense of Ayn Rand
WARNING: Do not read the footnotes if you have not read Atlas Shrugged; they consist primarily of quotes from the book. They don't reveal much in terms of plot (except for #6), so read them if you feel daring.
Preface: This is NOT a defense of objectivism nor is it a defense of the cultish nature of followers of objectivism. This is a defense of Ayn Rand the woman and a response to her portrayal in the essay The Guardians of Ayn Rand. I realize that the essay was making a point about cults and not primarily a criticism of Ayn Rand, but since she was the focal point of the ideas and the piece is a part of the Sequences, I felt it necessary to write this. There are enough people who criticize Ayn Rand, and the literature of her critics is vast - but to spit on her contributions with little reference to any factual details and with a huge emphasis on her personal life just did not seem to fit with the spirit of this website. If Rand really was as poor a thinker as she is being portrayed, some evidence would be very much appreciated. This was originally a comment, but it became way too long. I am NOT an expert on objectivism whatsoever. Please correct me on any inaccuracies.
Note: I am using the word rationality as Rand uses it.
"And yet Ayn Rand acknowledged no superior, in the past, or in the future yet to come. Rand, who began in admiring reason and individuality, ended by ostracizing anyone who dared contradict her. Shermer: "[Barbara] Branden recalled an evening when a friend of Rand's remarked that he enjoyed the music of Richard Strauss. 'When he left at the end of the evening, Ayn said, in a reaction becoming increasingly typical, 'Now I understand why he and I can never be real soulmates. The distance in our sense of life is too great.' Often she did not wait until a friend had left to make such remarks.""
Rand's choice of companions was governed by her life philosophy, which with respect to relationships was akin to a business deal, selfishly trading her willingness to interact with an individual for that person's virtue1. If she did not find another's virtue a sufficient payment for her companionship, she would not interact with them.
The part about Rand's professed superiority just seems like a blatant falsehood. All of her writings are based (it says so on the back of the books) on the existence of heroes in humanity. How can she acknowledge no superior ever and still feel comfortable with her ideas? Even further, her whole philosophy is based on seeing reality exactly as it is 2.
Her excommunication of Branden is a fine example of her irrationality in her private life. It is all that was needed in The Guardians of Ayn Rand to make the point. Clearly, it shows Rand's inability to match her actions with her words, and shows an irrational example of her tendency to ostracize people (I think there are justifications of her actions, and I believe her journal writings shed a different light on the situation, but I will agree with the analysis on this point unless some very strong evidence to the contrary appears).
But she rationally ostracized people who disagreed with her because she herself has said that she completely and fully embodies her philosophy - if you disagree with her, you disagree with her philosophy. Since her philosophy is so entrenched in the actions of an individual, it is no wonder why she would choose to ostracize those who disagree with her from her personal circle of companions! The core of her philosophy rests on the assertion that no man should live for another and that no man should take steps to fake reality on account of another person3.
One's rational perception of the world is of the utmost importance; Rand's conclusion that one person will not become her soulmate is the result of her rational perception of his actions. Her knowledge of music might not be the same knowledge held by a composer, but that is of no consequence in determining the reality of a situation4. One's choice of musical preference seems to be, in Rand's eyes, reflective of the values one upholds. This is her rational view of reality which she has arrived at through conscious perception and thought; someone else might think it is the right perception while another might not. If confronted with this, she would most likely (from my readings of her philosophy) seek to justify her assertion through proof based on her own rational perception of the world. Refusing to do so would be an example of an irrational action on her part5.
Eliezer supported his assertion that her actions are not justifiable with only a couple of anecdotes that reveal no context. The description of her actions is taken from a biography written by the wife of Nathaniel Branden, who had a significant personal conflict with Rand. This may or may not be important, but I think it is worth mentioning.
The observation that she chose to crush those of whom she disapproved refers only to her influence in her own personal circle of companions (and of course she has done it elsewhere, though I have not seen an event where such an action contradicted her philosophy besides the Branden affair). Her right to do so is implicit in her philosophy and is encouraged, yet her actions are portrayed as a failure to recognize a cognitive bias rather than a factual failure in her philosophy. Many aspects2 of Rand's philosophy are consistent with the ideas in the sequences too (though the similarities stop with respect to Aristotle).
"It's noteworthy, I think, that Ayn Rand's fictional heroes were architects and engineers; John Galt, her ultimate, was a physicist; and yet Ayn Rand herself wasn't a great scientist. As far as I know, she wasn't particularly good at math. She could not aspire to rival her own heroes. Maybe that's why she began to lose track of Tsuyoku Naritai".
Rand's fictional heroes were not just architects and engineers, and the point about her not being a great scientist is irrelevant with regard to the nature and purpose of her philosophy. The top comment also sheds light on the facts:
Eliezer: "As far as I know, [Rand] wasn't particularly good at math."
A relevant passage from Barbara Branden's biography of Rand:
"The subject [Rand] most enjoyed during her high school years, the one subject of which she never tired, was mathematics. 'My mathematics teacher was delighted with me. When I graduated, he said, "It will be a crime if you don't go into mathematics." I said only, "That's not enough of a career." I felt that it was too abstract, it had nothing to do with real life. I loved it, but I didn't intend to be an engineer or to go into any applied profession, and to study mathematics as such seemed too ivory tower, too purposeless—and I would say so today.' Mathematics, she thought, was a method. Like logic, it was an invaluable tool, but it was a means to an end, not an end in itself. She wanted an activity that, while drawing on her theoretical capacity, would unite theory and its practical application. That desire was an essential element in the continuing appeal that fiction held for her: fiction made possible the integration of wide abstract principles and their direct expression in and application to man's life." (Barbara Branden, The Passion of Ayn Rand, page 35 of my edition)
– Z.M Davis
And she did tell her followers (and even people who weren't her followers) to study science. She even gave a speech at MIT in the '60s entitled "To Young Scientists" (you can find the transcript somewhere, though you may have to pay for it). She also wrote an eyewitness account of the Apollo 11 launch that vividly conveys her appreciation and awe of the products of science. If that isn't an encouragement to study science, I don't know what is6.
This analysis is not fair. There is nothing fair about representing a figure in an incredibly poor light in order to emphasize a point about cults. Using her very public affair and the cultish nature of her followers would have been sufficient, but attacking her actions without mention of the underlying philosophy guiding them was unnecessary and, at many points (more evidence in the comments of the essay), factually incorrect. The tone of the essay was also incredibly arrogant, portraying Rand as some delusional crackpot and downplaying her accomplishments:
"Ayn Rand fled the Soviet Union, wrote a book about individualism that a lot of people liked, got plenty of compliments, and formed a coterie of admirers. Her admirers found nicer and nicer things to say about her (happy death spiral), and she enjoyed it too much to tell them to shut up. She found herself with the power to crush those of whom she disapproved, and she didn't resist the temptation of power"
I mean, come on! For someone who consistently encourages a charitable reading of his writing, this usage of Rand as an example of irrationality and poor judgement is disheartening. At the very least, some semblance of respect for her accomplishments would not be out of place.
Afterthought: It is my opinion that the treatment of Ayn Rand's personal life was not in the spirit of rational discussion. However, as is most often the case with Eliezer's writings, the ideas in the essay for which Rand served as a foil were incredibly thought-provoking. In particular, the philosophical implications of closed vs. open systems. Here is an excerpt of an essay I found on the Ayn Rand Institute's website, defending Objectivism as a closed system, that gives some much-needed context absent from the previous discussion:
IN HIS LAST PARAGRAPH, Kelley states that Ayn Rand’s philosophy, though magnificent, “is not a closed system.” Yes, it is. Philosophy, as Ayn Rand often observed, deals only with the kinds of issues available to men in any era; it does not change with the growth of human knowledge, since it is the base and precondition of that growth. Every philosophy, by the nature of the subject, is immutable. New implications, applications, integrations can always be discovered; but the essence of the system—its fundamental principles and their consequences in every branch—is laid down once and for all by the philosophy’s author. If this applies to any philosophy, think how much more obviously it applies to Objectivism. Objectivism holds that every truth is an absolute, and that a proper philosophy is an integrated whole, any change in any element of which would destroy the entire system.
In yet another expression of his subjectivism in epistemology, Kelley decries, as intolerant, any Objectivist’s (or indeed anyone’s) “obsession with official or authorized doctrine,” which “obsession” he regards as appropriate only to dogmatic viewpoints. In other words, the alternative once again is whim or dogma: either anyone is free to rewrite Objectivism as he wishes or else, through the arbitrary fiat of some authority figure, his intellectual freedom is being stifled. My answer is: Objectivism does have an “official, authorized doctrine,” but it is not dogma. It is stated and validated objectively in Ayn Rand’s works.
“Objectivism” is the name of Ayn Rand’s achievement. Anyone else's interpretation or development of her ideas, my own work emphatically included, is precisely that: an interpretation or development, which may or may not be logically consistent with what she wrote. In regard to the consistency of any such derivative work, each man must reach his own verdict, by weighing all the relevant evidence. The “official, authorized doctrine,” however, remains unchanged and untouched in Ayn Rand’s books; it is not affected by any interpreters.
The Constitution and the Declaration of Independence state the “official” doctrine of the government of the United States, and no one, including the Supreme Court, can alter the meaning of this doctrine. What the Constitution and the Declaration are to the United States, Atlas Shrugged and Ayn Rand’s other works are to Objectivism. Objectivism, therefore, is “rigid,” “narrow,” “intolerant” and “closed-minded.” If anyone wants to reject Ayn Rand’s ideas and invent a new viewpoint, he is free to do so—but he cannot, as a matter of honesty, label his new ideas or himself “Objectivist.”
Objectivism is not just “common sense”; it is a revolutionary philosophy, which is a fact we do not always keep in mind. Ayn Rand challenges every fundamental that men have accepted for millennia. The essence of her revolution lies in her concept of “objectivity,” which applies to epistemology and to ethics, i.e., to cognition and to evaluation. At this early stage of history, a great many people, though bright and initially drawn to Ayn Rand, are still unable (or unwilling) fully to grasp this central concept. They accept various ideas from Ayn Rand out of context, without digesting them by penetrating to the foundation; thus they never uproot all the contradictory ideas they have accepted, the ones which guided the formation of their own souls and minds. Such people are torn by an impossible conflict: they have one foot (or toe) in the Objectivist world and the rest of themselves planted firmly in the conventional world. People like this do not mind being controversial so long as they are fashionable or “in”; i.e., so long as they can be popular in their subculture, or politically powerful or academically respectable; to attain which status, they will “tolerate” (or show “compassion” for) whatever they have to.
The real enemy of these men is not Ayn Rand; it is reality. But Ayn Rand is the messenger who brings them the hated message, which, somehow, they must escape or dilute (some of them, I think, never even get it). The message is that they must conform to reality 24 hours a day and all the way down.
Definitely a more apt example of the cultish nature of objectivism, though it has its merits; good fodder for discussion.
Footnotes:
1 "A trader does not ask to be paid for his failures, nor does he ask to be loved for his flaws. A trader does not squander his body as fodder or his soul as alms. Just as he does not give his work except in trade for material values, so he does not give the values of his spirit-his love, his friendship, his esteem-except in payment and in trade for human virtues, in payment for his own selfish pleasure, which he receives from men he can respect." - John Galt
2 “Your mind is your only judge of truth–and if others dissent from your verdict, reality is the court of final appeal.” — John Galt
3 “People think that a liar gains a victory over his victim. What I've learned is that a lie is an act of self-abdication, because one surrenders one's reality to the person to whom one lies, making that person one's master, condemning oneself from then on to faking the sort of reality that person's view requires to be faked.” — Hank Rearden
4 "By refusing to say 'It is' you are refusing to say 'I am'. By suspending your judgment, you are negating your person. When a man declares: 'Who am I to know?' he is declaring: 'Who am I to live?'" - John Galt
5 “You don't have to see through the eyes of others, hold onto yours, stand on your own judgment, you know that what is, is–say it aloud, like the holiest of prayers, and don't let anyone tell you otherwise.” — Dagny Taggart
6 Also, the main character of Atlas Shrugged was a physicist, invented a motor that harnessed the power of static electricity, and then went on to save the damn country. That's not encouragement to study science?
Art vs. science
It struck me this morning that a key feature that distinguishes art from science is that art is studied in the context of the artist, while science is not. When you learn calculus, mechanics, or optics, you don't read Newton. Science has content that can be abstracted out of one context - including the context of its creation - and studied and used in other contexts. This is a defining characteristic. Whereas art can't be easily removed from its context - one could argue art is context. When we study art, we study the original work by a single artist, to get that artist's vision.
(This isn't a timeless defining characteristic of art, though - it wasn't true until the twelfth century, when writers and artists began signing their works. From ancient Greece through the Middle Ages in Europe, the content, subject, or purpose of art was considered primary, in the same way that the content of science is today. "Homer's" Iliad was a collaborative project, in which many authors (presumably) agreed that the story was the important thing, not one author's vision of it, and (also presumably) added to it in much the way that science is cumulative today. Medieval art generally glorified the church or the state.)
However, because this is the way western society views art today, we can use this as a test. Is it art or science? Well, is its teaching organized around the creators, or around the content?
Philosophy and linguistics are somewhere between art and science by this test. So is symbolic AI, while data mining is pure science.
A defense of formal philosophy
Gregory Wheeler has written an eloquent new defense of formal philosophy.
Quotes:
...formal epistemology is an interdisciplinary research program that includes work by philosophers, mathematicians, computer scientists, statisticians, psychologists, operations researchers, and economists which aims to give mathematical and sometimes computational representations of, along with sound strategies for reasoning about, knowledge, belief, judgment and decision making.
...
Why... bother being so formal? [Rich] Thomason, commenting on philosophers who view formal methods as a distraction to real philosophical advancement, observed that the only real advantage that we have over the great philosophers of the past are the new methods that we have at our disposal. Probability. First-order logic. Calculus. The number zero. It is hard to imagine improving on Aristotle without resorting to methods that were simply unavailable to him. Knowing just this much about history, a better question is this: why limit your options?
...
The problem with aspiring to counterexample-proof philosophy without taking into account either formal or empirical constraints is that the exercise can quickly devolve into a battle of wits rather than a battle of ideas. And the problem is only compounded by pseudo-formal philosophy — the unfortunate practice of using formal logic informally — because this encourages philosophers to describe rather than define the fundamental operations of their theories. Memories are ‘accessed in the right way’; justified beliefs are ‘based’ on one’s ‘evidence’; coherent beliefs ‘hang together’. But, like a bump in a rug carefully pushed from one corner of a crowded room to another, this reliance on pseudo-formalisms to avoid any and all counterexamples inevitably means that the hard, unsolved philosophical problems are artfully avoided rather than addressed. At its worst, rampant counterexample avoidance turns philosophy into little more than a performance art.
But, one way to arrest this slide is by constraining epistemological theories by a combination of empirical evidence and formal models. For if you replace those fudged terms with a formal model, or a provably correct algorithm, and hem in imagination by known empirical constraints, then if a theory is successful in explaining a range of cases, that hard won success can be weighed against the theory’s failings. In other words, if we set aspirations for epistemology higher than conceptual analysis, that will open more room to judge success and failure than the all-or-nothing stakes of counterexample avoidance.
See also: An Overview of Formal Epistemology.
[LINK] "The nirvana would be if the questions raised by Oprah Winfrey would be answered by the faculty at Harvard."
I once very politely raised the thought that one reason philosophy departments have been cut is the fault of philosophers. The answer always comes back: 'The point of philosophy is to ask questions, not to give answers.' I can't help but think 'No. It can't be!' Imagine if you applied that answer to other areas – is the purpose of rocket science to ask questions about rockets?
[LINK] Being No One (~50 min talk on the self-model in your brain)
Summary: This is a ~50 minute talk (plus some introductory ado) by Thomas Metzinger on the problem of the experiencing, subjective self (why it exists, what it even means, how it arises). Not to be too cliché, but he attacks the problem by dissolving the question, and the solution he arrives at sounds a lot like how an algorithm feels from inside.
Using several examples from neuroscience (particularly the many illuminating failure modes of the brain), he explains how the brain models the self and its place in the center of experiential space. He discusses the limitations of our access to our own cognitive systems, and how those limitations force us to be naive realists.
I hesitate to summarize further, because there is a lot of value in hearing the entire argument. (I will say that he gets a little cute at the end, but that doesn't detract from the excellent content.)
Link: Being No One on Youtube.
(Normally I think LWers dislike the talk format because it's inherently time-consuming, but I'd say this one is information dense and well worth your time.)
"Personal Identity and Uploading", by Mark Walker
“Personal Identity and Uploading”, by Mark Walker, is the next JET paper. Abstract:
Objections to uploading may be parsed into substrate issues, dealing with the computer platform of upload, and personal identity issues. This paper argues that the personal identity issues of uploading are no more or less challenging than those of bodily transfer often discussed in the philosophical literature. It is argued that what is important in personal identity involves both token and type identity. While uploading does not preserve token identity, it does save type identity; and even qua token, one may have good reason to think that the preservation of the type is worth the cost.
"Misbehaving Machines: The Emulated Brains of Transhumanist Dreams", Corry Shores
“Misbehaving Machines: The Emulated Brains of Transhumanist Dreams”, by Corry Shores (grad student; Twitter, blog) is another recent JET paper. Abstract:
Enhancement technologies may someday grant us capacities far beyond what we now consider humanly possible. Nick Bostrom and Anders Sandberg suggest that we might survive the deaths of our physical bodies by living as computer emulations. In 2008, they issued a report, or “roadmap,” from a conference where experts in all relevant fields collaborated to determine the path to “whole brain emulation.” Advancing this technology could also aid philosophical research. Their “roadmap” defends certain philosophical assumptions required for this technology’s success, so by determining the reasons why it succeeds or fails, we can obtain empirical data for philosophical debates regarding our mind and selfhood. The scope ranges widely, so I merely survey some possibilities, namely, I argue that this technology could help us determine
- if the mind is an emergent phenomenon,
- if analog technology is necessary for brain emulation, and
- if neural randomness is so wild that a complete emulation is impossible.
Philosophy that can be "taken seriously by computer scientists"
I've long held CMU's philosophy department in high regard. One of their leading lights, Clark Glymour, recently published a short manifesto, which Brian Leiter summed up as saying that "the measure of value for philosophy departments is whether they are taken seriously by computer scientists."
Selected quote from Glymour's manifesto:
Were I a university administrator facing a contracting budget, I would not look to eliminate biosciences or computer engineering. I would notice that the philosophers seem smart, but their writings are tediously incestuous and of no influence except among themselves, and I would conclude that my academy could do without such a department... But not if I found that my philosophy department retrieved a million dollars a year in grants and fellowships, and contained members whose work is cited and used in multiple subjects, and whose faculty taught the traditional subject well to the university’s undergraduates.
Also see the critique here, but I'd like to have Glymour working on FAI.
Inverse p-zombies: the other direction in the Hard Problem of Consciousness
402. "Nothing is so certain as that I possess consciousness." In that case, why shouldn't I let the matter rest? This certainty is like a mighty force whose point of application does not move, and so no work is accomplished by it.
403. Remember: most people say one feels nothing under anaesthetic. But some say: It could be that one feels, and simply forgets it completely.
--Wittgenstein, Zettel (1929-1948)
I offer for LW's consideration the interesting 2008 paper "Inverse zombies, anesthesia awareness, and the hard problem of unconsciousness" (Mashour & LaRock; NCBI); the abstract:
Philosophical (p-) zombies are constructs that possess all of the behavioral features and responses of a sentient human being, yet are not conscious. P-zombies are intimately linked to the hard problem of consciousness and have been invoked as arguments against physicalist approaches. But what if we were to invert the characteristics of p-zombies? Such an inverse (i-) zombie would possess all of the behavioral features and responses of an insensate being, yet would nonetheless be conscious. While p-zombies are logically possible but naturally improbable, an approximation of i-zombies actually exists: individuals experiencing what is referred to as "anesthesia awareness." Patients under general anesthesia may be intubated (preventing speech), paralyzed (preventing movement), and narcotized (minimizing response to nociceptive stimuli). Thus, they appear--and typically are--unconscious. In 1-2 cases/1000, however, patients may be aware of intraoperative events, sometimes without any objective indices. Furthermore, a much higher percentage of patients (22% in a recent study) may have the subjective experience of dreaming during general anesthesia. P-zombies confront us with the hard problem of consciousness--how do we explain the presence of qualia? I-zombies present a more practical problem--how do we detect the presence of qualia? The current investigation compares p-zombies to i-zombies and explores the "hard problem" of unconsciousness with a focus on anesthesia awareness.
Problems of the Deutsch-Wallace version of Many Worlds
The subject has already been raised in this thread, but in a clumsy fashion. So here is a fresh new thread, where we can discuss, calmly and objectively, the pros and cons of the "Oxford" version of the Many Worlds interpretation of quantum mechanics.
This version of MWI is distinguished by two propositions. First, there is no definite number of "worlds" or "branches". They have a fuzzy, vague, approximate, definition-dependent existence. Second, the probability law of quantum mechanics (the Born rule) is to be obtained, not by counting the frequencies of events in the multiverse, but by an analysis of rational behavior in the multiverse. Normally, a prescription for rational behavior is obtained by maximizing expected utility, a quantity which is calculated by averaging "probability x utility" for each possible outcome of an action. In the Oxford school's "decision-theoretic" derivation of the Born rule, we somehow start with a ranking of actions that is deemed rational, then we "divide out" by the utilities, and obtain probabilities that were implicit in the original ranking.
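The expected-utility machinery being inverted here is simple enough to illustrate. Below is a minimal Python sketch (a toy with made-up amplitudes and utilities, not anything taken from the Oxford papers): the forward direction computes expected utility from Born-rule weights, and the reverse direction "divides out" the utilities from an agent's branch valuations to recover the probabilities implicit in them.

```python
# Toy illustration of the two directions discussed above.
# Amplitudes and utilities are invented for the example.

amplitudes = {"outcome_A": 0.6, "outcome_B": 0.8}   # 0.6^2 + 0.8^2 = 1
utilities  = {"outcome_A": 10.0, "outcome_B": 2.0}

# Forward direction: Born probabilities -> expected utility.
born_probs = {k: a ** 2 for k, a in amplitudes.items()}
expected_utility = sum(born_probs[k] * utilities[k] for k in born_probs)
# 0.36 * 10 + 0.64 * 2 = 4.88

# Reverse direction (the "decision-theoretic" program, schematically):
# given the agent's valuation of each branch, divide out the utilities
# to recover the probabilities implicit in the valuation.
valuations = {k: born_probs[k] * utilities[k] for k in born_probs}
implicit_probs = {k: valuations[k] / utilities[k] for k in valuations}
assert all(abs(implicit_probs[k] - born_probs[k]) < 1e-12 for k in born_probs)
```

The trivial reversibility here is exactly where the controversy lies: the sketch assumes the branch valuations already have the form probability × utility, which is what a genuine decision-theoretic derivation would need to establish rather than presuppose.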
I reject the two propositions. "Worlds" or "branches" can't be vague if they are to correspond to observed reality, because vagueness results from an object being dependent on observer definition, and the local portion of reality does not owe its existence to how we define anything; and the upside-down decision-theoretic derivation, if it ever works, must implicitly smuggle in the premises of probability theory in order to obtain its original rationality ranking.
Some references:
"Decoherence and Ontology: or, How I Learned to Stop Worrying and Love FAPP" by David Wallace. In this paper, Wallace says, for example, that the question "how many branches are there?" "does not... make sense", that the question "how many branches are there in which it is sunny?" is "a question which has no answer", "it is a non-question to ask how many [worlds]", etc.
"Quantum Probability from Decision Theory?" by Barnum et al. This is a rebuttal of the original argument (due to David Deutsch) that the Born rule can be justified by an analysis of multiverse rationality.
"Vulnerable Cyborgs: Learning to Live with our Dragons", Mark Coeckelbergh
"Vulnerable Cyborgs: Learning to Live with our Dragons", Mark Coeckelbergh (university); abstract:
Transhumanist visions appear to aim at invulnerability. We are invited to fight the dragon of death and disease, to shed our old, human bodies, and to live on as invulnerable minds or cyborgs. This paper argues that even if we managed to enhance humans in one of these ways, we would remain highly vulnerable entities given the fundamentally relational and dependent nature of posthuman existence. After discussing the need for minds to be embodied, the issue of disease and death in the infosphere, and problems of psychological, social and axiological vulnerability, I conclude that transhumanist human enhancement would not erase our current vulnerabilities, but instead transform them. Although the struggle against vulnerability is typically human and would probably continue to mark posthumans, we had better recognize that we can never win that fight and that the many dragons that threaten us are part of us. As vulnerable humans and posthumans, we are at once the hero and the dragon.
Bostrom has written a tale about a dragon that terrorizes a kingdom and people who submit to the dragon rather than fighting it. According to Bostrom, the “moral” of the story is that we should fight the dragon, that is, extend the (healthy) human life span and not accept aging as a fact of life (Bostrom 2005, 277). And in The Singularity is Near (2005) Kurzweil has suggested that following the acceleration of information technology, we will become cyborgs, upload ourselves, have nanobots in our bloodstream, and enjoy nonbiological experience. Although not all transhumanist authors explicitly state it, these ideas seem to aim toward invulnerability and immortality: by means of human enhancement technologies, we can transcend our present limited existence and become strong, invulnerable cyborgs or immortal minds living in an eternal, virtual world.
...However, in this paper, I will ask neither the ethical-normative question (Should we develop human enhancement techniques and should we aim for invulnerability?) nor the hermeneutical question (How can we best interpret and understand transhumanism in the light of cultural, religious, and scientific history?). Instead, I ask the question: If and to the extent that transhumanism aims at invulnerability, can it – in principle – reach that aim? The following discussion offers some obvious and some much less obvious reasons why posthumans would remain vulnerable, and why human vulnerability would be transformed rather than diminished or eliminated...However, to focus only on a defense or rejection of what is valuable in humans would leave out of sight the relation between (in)vulnerability and posthuman possibilities. It would lead us back to the ethical-normative questions (Is human enhancement morally acceptable? Is vulnerability something to be valued? Is the transhumanist project acceptable or desirable?), which is not what I want to do in this paper. Moreover, ethical arguments that present the problem as if we have a choice between “natural” humanity and “artificial” posthumanity are based on essentialist assumptions that make a sharp distinction between “what we are” (the natural) and technology (the artificial), whereas this distinction is at least questionable. Perhaps there is no fixed human nature apart from technology, perhaps we are “artificial by nature” (Plessner 1975). If this is so, then the problem is not whether or not we want to transcend the human but how we want to shape that posthuman existence. Should we aim at invulnerability and if so, can we? As indicated before, here I limit the discussion to the “can” question.
Breaking down the potential improvements:
Physical vulnerability
Not only could human enhancement make us immune to current viruses; it could also offer other “immunities,” broadly understood...However, the project of total invulnerability or even overall reduction of vulnerability is bound to fail. If we consider the history of medical technology, we observe that for every disease new technology helps to prevent or cure, there is at least one new disease that escapes our techno-scientific control. We can win one battle, but we can never win the war. There will be always new diseases, new viruses, and, more generally, new threats to physical vulnerability. Consider also natural disasters caused by floods, earthquakes, volcanic eruptions, and so on.
Moreover, the very means to fight those threats sometimes create new threats themselves. This can happen within the same domain, as is the case with antibiotics that lead to the development of more resistant bacteria, or in another domain, as is the case with new security measures in airports, which are meant as protections against physical harm by terrorism but might pose new (health?) risks. Paradoxically, technologies that are meant to reduce vulnerability often create new ones. This is also true for posthuman technologies. For example, posthumans would also be vulnerable to at least some of the risks Bostrom calls “existential risks” (Bostrom 2002), which could wipe out posthumankind. Nanotechnology or nuclear technology could be misused, a superintelligence could take over and annihilate humankind, or technology could cause (further) resource depletion and ecological destruction. Military technologies are meant to protect us but they can become a threat, making us vulnerable in a new way. We wanted to master nature in order to become less dependent on it, but now we risk destroying the ecology that sustains us. And of course there are many physical threats we cannot foresee – not even in the near future.
Material and immaterial vulnerability
Consider computer viruses. Here the story is similar to the story of biological viruses: there are ongoing cycles of threats, counter-measures, and new threats. We can also consider physical damage to computers, although that is much less common. In any case, if we extend ourselves with software and hardware, this creates additional vulnerabilities. We must cope with “software” vulnerability and “hardware” vulnerability. If humans and posthumans live in an “infosphere” (see for example Floridi 2002), this is not a sphere of immunity. Perhaps our vulnerability becomes less material, but we cannot escape it. For instance, a virtual body in a virtual world may well be shielded from biological viruses, but it is vulnerable to at least three kinds of threats.
- First, there are threats within the virtual world itself (consider for instance virtual rape), which constitutes virtual vulnerability.
- Second, the software programme that provides a platform for the virtual world might be damaged, for example by means of a cyber attack. This can lead to the “death” of the virtual character or entity.
- Third, all these processes depend on (material) hardware. The world wide web and its wired and wireless communications rest on material infrastructures without which the web would be impossible. Therefore, if posthumans uploaded themselves into an infosphere and dispensed with their biological bodies, they would not gain invulnerability and immortality but merely transform their vulnerability.
Bodily vulnerability
Minds need bodies. This is in line with contemporary research in cognitive science, which argues that “embodiment” is necessary since minds can develop and function only in interaction with their environment (Lakoff and Johnson 1999 and others). This direction of thought is also taken in contemporary robotics, for example when it recognizes that manipulation plays an important role in the development of cognition (Sandini et al. 2004). In his famous 1988 book on “mind children” Moravec argued that true AI can be achieved only if machines have a body (Moravec 1988)...Thus, uploading and nano-based cyborgization would not dispense with the body but transform it into a virtual body or a nano-body. This would create vulnerabilities that sometimes resemble the vulnerabilities we know today (for instance virtual violence) but also new vulnerabilities.
Metaphysical vulnerability
With this atomism comes that atomist view of death: there is always the possibility of disintegration; neither physical-material objects nor information objects exist forever. Information can disintegrate and the material conditions for information are vulnerable to disintegration as well. Thus, at a fundamental level everything is vulnerable to disintegration, understood by atomism as a re-organization of elementary particles. This “metaphysical” vulnerability is unavoidable for posthumans, whatever the status of their elementary particles and the organs and systems constituted by these particles (biological or not). According to their own metaphysics, the cyborgs and inforgs that transhumanists and their supporters wish to create would be only temporal orders that have only temporary stability – if any.
Note, however, that recently both Floridi and contemporary physics seem to move toward a more ecological, holistic metaphysics, which suggests a different definition of death. In information ecologies, perhaps death means the absence of relations, disconnection. Or it means: deletion, understood ecologically and holistically as the removal out of the whole. But in the light of this metaphysics, too, there seems no reason why posthumans would be able to escape death in this sense.
Existential and psychological vulnerabilities
This gives rise to what we may call “indirect” or “second-order” vulnerabilities. For instance, we can become aware of the possibility of disintegration, the possibility of death. We can also become aware of less threatening risks, such as disease. There are many first-order vulnerabilities. Awareness of them renders us extra vulnerable as opposed to beings who lack such an ability to take distance from ourselves. From an existential-phenomenological point of view (which has its roots in work by Heidegger and others), but also from the point of view of common sense psychology, we must extend the meaning of vulnerability to the sufferings of the mind. Vulnerability awareness itself constitutes a higher-order vulnerability that is typical of humans. In posthumans, we could only erase this vulnerability if we were prepared to abandon the particular higher form of consciousness that we “enjoy.” No transhumanist would seriously consider that solution to the problem.
Social and emotional vulnerability
If I depend on you socially and emotionally, then I am vulnerable to what you say or do. Unless posthumans were to live in complete isolation without any possibility of inter-posthuman communication, they would be as vulnerable as we are to the sufferings created by the social life, although the precise relation between their social life and their emotional make-up might differ...For example, in Houellebecq’s novel the posthumans have a reduced capacity to feel sad, but at the cost of a reduced capacity to desire and to feel joy. More generally, the lesson seems to be: emotional enhancement comes at a high price. Are we prepared to pay it? Even if we succeed in diminishing this kind of vulnerability, we might lose something that is of value to us. This brings me to the next kind of vulnerability.
Ethical-axiological vulnerability
We value not only people and our relationships with them; we are also attached to many other things in life. Caring makes us vulnerable (Nussbaum 1986). We develop ties out of our engagement with humans, animals, objects, buildings, landscapes, and many other things. This renders us vulnerable since it makes us dependent on (what we experience as) “external” things. We sometimes get emotional about things since we care and since we value. We suffer since we depend on external things...Posthumans could be cognitively equipped to follow this strategy, for instance by means of emotional enhancement that allows more self-control and prevents them forming too strong ties to things. If we really wanted to become invulnerable in this respect, we should create posthumans who no longer care at all about external things – including other posthumans. That would be “posthumans” who no longer have the ability to care and to value. They would “connect” to others and to things, but they would not really engage with them, since that would render them vulnerable. They would be perfectly rational Stoics, perhaps, but it would be odd to call them “posthumans” at all since the term “human” would lose its meaning. It is even doubtful if this extreme form of Stoicism would be possible for any entity that possesses the capacity of valuing and that engages with the world.
'Relational vulnerability'/'Conclusion: Heels and dragons'
The only way to make an entity invulnerable, it turns out, would be to create one that exists in absolute isolation and is absolutely independent of anything else. Such a being seems inconceivable – or would be a particularly strange kind of god. (It would have to be a “philosopher’s” god that could hardly stir any religious feelings. Moreover, the god would not even be a “first mover,” let alone a creator, since that would imply a relation to our world. It is also hard to see how we would be aware of its existence or be able to form an idea about it, given the absence of any relation between us and the god.) Of course we could – if ethically acceptable at all – create posthumans that are less vulnerable in some particular areas, as long as we keep in mind that there are other sources of vulnerability, that new sources of vulnerability will emerge, and that our measure to decrease vulnerability in one area may increase it in another area.
If transhumanists accept the results of this discussion, they should carefully reflect on, and redefine, the aims of human enhancement and avoid confusion about how these aims relate to vulnerability. If the aim is invulnerability, then I have offered some reasons why this aim is problematic. If their project has nothing to do with trying to reach invulnerability, then why should we transcend the human? Of course one could formulate no “ultimate” goals and choose less ambitious goals, such as more health and less suffering. For instance, one could use a utilitarian argument and say that we should avoid overall suffering and pain. Harris seems to have taken these routes (Harris 2007). And Bostrom frequently mentions “life extension” as a goal rather than “invulnerability” or “immortality.” But even in these “weakened” or at least more modest forms, the transhumanist project can be interpreted as a particularly hostile response to (human) vulnerability that probably has no parallel in human history.
...Furthermore, this paper suggests that if we can and must make an ethical choice at all, then it is not a choice between vulnerable humans and invulnerable posthumans, or even between vulnerability and invulnerability, but a choice between different forms of humanity and vulnerability. If implemented, human enhancement technologies such as mind uploading will not cancel vulnerability but transform it. As far as ethics is concerned, then, what we need to ask is which new forms of the human we want and how (in)vulnerable we wish to be. But this inquiry is possible only if we first fine-tune our ideas of what is possible in terms of enhancement and (in)vulnerability. To do this requires stretching our moral and technological imaginations.
Moreover, if I’m right about the different forms of posthuman vulnerability as discussed above, then we must dispense with the dragon metaphor used by Bostrom: vulnerability is not a matter of “external” dangers that threaten or tyrannize us, but that have nothing to do with what we are; instead, it is bound up with our relational, technological and transient kind of being – human or posthuman. If there are dragons, they are part of us. It is our tragic condition that as relational entities we are at once the heel and the arrow, the hero and the dragon.
Before criticizing it, I'd like to point to the introduction where the author lays out his mission: to discuss what problems cannot "in principle" be avoided, what vulnerabilities are "necessary". In other words, he thinks he is laying out fundamental limits, on some level as inexorable and universal as, say, Turing's Halting Theorem.
But he is manifestly doing no such thing! He lists countless 'vulnerabilities' which could easily be circumvented to arbitrary degrees. Take the computer viruses he puts such stock in: there is no fundamental reason computer viruses must exist. There are many ways they could be eliminated, starting with formal static proofs of security and functionality; the only fundamental limit relevant here would be Turing/Rice's theorem, which applies only if we wanted to run all possible programs, which we manifestly cannot and do not. Similar points apply to the rest of his software vulnerabilities.
I would also like to single out his 'Metaphysical vulnerability': physicists, SF authors, and transhumanists have for decades been outlining a multitude of models and possibilities for true immortality, ranging from Dyson's eternal intelligences to Tipler's Omega Point collapse to baby black-hole universes. To appeal to atomism is already to beg the question (why not run intelligence on waves or more exotic forms of existence? why this particle-chauvinism?).
This applies again and again - the author supplies no solid proofs from any field, and apparently lacks the imagination or background to imagine ways to circumvent or dissolve his suggested limits. They may be exotic methods, but they still exist; were the author to reply that to employ such methods would result in intelligences so alien as to no longer be human, then I should accuse him of begging the question on an even larger scale - of defining the human as desirable and, essentially, as that which is compatible with his chosen limits.
Since that question is at the heart of transhumanism, his paper offers nothing of interest to us.
"Ray Kurzweil and Uploading: Just Say No!", Nick Agar
A new paper has gone up in the November 2011 JET: "Ray Kurzweil and Uploading: Just Say No!" (videos) by Nick Agar (Wikipedia); abstract:
There is a debate about the possibility of mind-uploading – a process that purportedly transfers human minds and therefore human identities into computers. This paper bypasses the debate about the metaphysics of mind-uploading to address the rationality of submitting yourself to it. I argue that an ineliminable risk that mind-uploading will fail makes it prudentially irrational for humans to undergo it.
The argument is a variant of Pascal's wager which he calls Searle's wager. As far as I can tell, the paper contains mostly ideas he has already written on in his book; from Michael Hauskeller's review of Agar's Humanity's End: Why We Should Reject Radical Enhancement:
Starting with Kurzweil, he gives a detailed account of the latter’s “Law of Accelerating Returns” and the ensuing techno-optimism, which leads Kurzweil to believe that we will eventually be able to get rid of our messy bodies and gain virtual immortality by uploading ourselves into a computer. The whole idea is ludicrous, of course, but Agar takes it quite seriously and tries hard to convince us that “it may take longer than Kurzweil thinks for us to know enough about the human brain to successfully upload it” (45) – as if this lack of knowledge was the main obstacle to mind-uploading. Agar’s principal objection, however, is that it will always be irrational for us to upload our minds onto computers, because we will never be able to completely rule out the possibility that, instead of continuing to live, we will simply die and be replaced by something that may be conscious or unconscious, but in any case is not identical with us. While this is certainly a reasonable objection, the way Agar presents it is rather odd. He takes Pascal’s ‘Wager’ (which was designed to convince us that believing in God is always the rational thing to do, because by doing so we have little to lose and a lot to win) and refashions it so that it appears irrational to upload one’s mind, because the procedure might end in death, whereas refusing to upload will keep us alive and is hence always a safe bet. The latter conclusion does not work, of course, since the whole point of mind-uploading is to escape death (which is unavoidable as long as we are stuck with our mortal, organic bodies). Agar argues, however, that by the time we are able to upload minds to computers, other life extension technologies will be available, so that uploading will no longer be an attractive option. This seems to be a curiously techno-optimistic view to take.
John Danaher (User:JohnD) examines the wager, as expressed in the book, further in 2 blog posts:
- "Should we Upload Our Minds? Agar on Searle's Wager (Part One)"
- "Should we Upload Our Minds? Agar on Searle's Wager (Part Two)"
After laying out what seems to be Agar's argument, Danaher constructs the game-theoretic tree and continues the criticism above:
The initial force of the Searlian Wager derives from recognising the possibility that Weak AI is true. For if Weak AI is true, the act of uploading would effectively amount to an act of self-destruction. But recognising the possibility that Weak AI is true is not enough to support the argument. Expected utility calculations can often have strange and counterintuitive results. To know what we should really do, we have to know whether the following inequality really holds (numbering follows part one):

- (6) Eu(~U) > Eu(U)

But there’s a problem: we have no figures to plug into the relevant equations, and even if we did come up with figures, people would probably dispute them (“You’re underestimating the benefits of uploading”, “You’re underestimating the costs of uploading” etc. etc.). So what can we do? Agar employs an interesting strategy. He reckons that if he can show that the following two propositions hold true, he can defend (6).

- (8) Death (outcome c) is much worse for those considering to upload than living (outcome b or d).
- (9) Uploading and surviving (a) is not much better, and possibly worse, than not uploading and living (b or d).

...2. A Fate Worse than Death?
On the face of it, (8) seems to be obviously false. There would appear to be contexts in which the risk of self-destruction does not outweigh the potential benefit (however improbable) of continued existence. Such a context is often exploited by the purveyors of cryonics. It looks something like this:
You have recently been diagnosed with a terminal illness. The doctors say you’ve got six months to live, tops. They tell you to go home, get your house in order, and prepare to die. But you’re having none of it. You recently read some adverts for a cryonics company in California. For a fee, they will freeze your disease-ridden body (or just the brain!) to a cool -196 C and keep it in storage with instructions that it only be thawed out at such a time when a cure for your illness has been found. What a great idea, you think to yourself. Since you’re going to die anyway, why not take the chance (make the bet) that they’ll be able to resuscitate and cure you in the future? After all, you’ve got nothing to lose.
This is a persuasive argument. Agar concedes as much. But he thinks the wager facing our potential uploader is going to be crucially different from that facing the cryonics patient. The uploader will not face the choice between certain death, on the one hand, and possible death/possible survival, on the other. No; the uploader will face the choice between continued biological existence with biological enhancements, on the one hand, and possible death/possible survival (with electronic enhancements), on the other.
The reason has to do with the kinds of technological wonders we can expect to have developed by the time we figure out how to upload our minds. Agar reckons we can expect such wonders to allow for the indefinite continuance of biological existence. To support his point, he appeals to the ideas of Aubrey de Grey. de Grey thinks that -- given appropriate funding -- medical technologies could soon help us to achieve longevity escape velocity (LEV). This is when new anti-aging therapies consistently add years to our life expectancies faster than age consumes them.
If we do achieve LEV, and we do so before we achieve uploadability, then premise (8) would seem defensible. Note that this argument does not actually require LEV to be highly probable. It only requires it to be relatively more probable than the combination of uploadability and Strong AI.
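Danaher's inequality (6) can be made concrete with a toy expected-utility calculation. All the probabilities and utilities below are hypothetical stand-ins, not figures from Agar or Danaher; the point is only that a sufficiently bad death-outcome (premise 8) plus a nearly-as-good biological alternative (premise 9) can flip the inequality even when Strong AI is judged very likely:

```python
# Toy version of the Searlian Wager, premise (6): Eu(~U) > Eu(U).
# All numbers are hypothetical, chosen purely for illustration.
p_strong_ai = 0.9         # credence that uploading preserves you (Strong AI true)
u_upload_survive = 100.0  # outcome (a): upload and survive
u_death = -1000.0         # outcome (c): upload and be destroyed (Weak AI true)
u_biological_lev = 95.0   # outcomes (b/d): continued biological life with LEV

# Expected utility of uploading vs. not uploading.
eu_upload = p_strong_ai * u_upload_survive + (1 - p_strong_ai) * u_death
eu_not_upload = u_biological_lev

print(eu_upload, eu_not_upload)  # -10.0 95.0
```

Under these toy numbers, even a 90% credence in Strong AI leaves Eu(~U) = 95 well above Eu(U) = -10, which is exactly the shape of Agar's argument: (8) makes `u_death` catastrophic, (9) keeps `u_biological_lev` close to `u_upload_survive`.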
...3. Don’t you want Wikipedia on the Brain?
Premise (9) is a little trickier. It proposes that the benefits of continued biological existence are not much worse (and possibly better) than the benefits of Kurzweil-ian uploading. How can this be defended? Agar provides us with two reasons.
The first relates to the disconnect between our subjective perception of value and the objective reality. Agar points to findings in experimental economics that suggest we have a non-linear appreciation of value. I’ll just quote him directly since he explains the point pretty well:
For most of us, a prize of $100,000,000 is not 100 times better than one of $1,000,000. We would not trade a ticket in a lottery offering a one-in-ten chance of winning $1,000,000 for one that offers a one-in-a-thousand chance of winning $100,000,000, even when informed that both tickets yield an expected return of $100,000....We have no difficulty in recognizing the bigger prize as better than the smaller one. But we don’t prefer it to the extent that it’s objectively...The conversion of objective monetary values into subjective benefits reveals the one-in-ten chance at $1,000,000 to be significantly better than the one-in-a-thousand chance at $100,000,000 (pp. 68-69).
How do these quirks of subjective value affect the wager argument? Well, the idea is that continued biological existence with LEV is akin to the one-in-ten chance of $1,000,000, while uploading is akin to the one-in-a-thousand chance of $100,000,000: people are going to prefer the former to the latter, even if the latter might yield the same (or even a higher) payoff.
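Agar's lottery example amounts to maximizing expected subjective (concave) utility rather than expected dollars. A minimal sketch, assuming a logarithmic utility curve chosen purely for illustration (Agar commits to no particular function):

```python
import math

def subjective_value(dollars):
    # Diminishing returns: $100,000,000 feels far less than 100x better
    # than $1,000,000. Log utility is one common concave choice; it is
    # purely illustrative, not anything Agar specifies.
    return math.log(dollars)

# Two lotteries with the same expected monetary value ($100,000).
# Not winning is treated as the zero-utility baseline in both cases.
eu_small_prize = 0.1 * subjective_value(1_000_000)    # 1-in-10 at $1,000,000
eu_big_prize = 0.001 * subjective_value(100_000_000)  # 1-in-1000 at $100,000,000

print(eu_small_prize > eu_big_prize)  # True: the smaller prize wins subjectively
```

Any sufficiently concave utility function gives the same ordering, which is Agar's point: LEV is the one-in-ten lottery, uploading the one-in-a-thousand one.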
I have two concerns about this. First, my original formulation of the wager argument relied on the straightforward expected-utility-maximisation-principle of rational choice. But by appealing to the risks associated with the respective wagers, Agar would seem to be incorporating some element of risk aversion into his preferred rationality principle. This would force a revision of the original argument (premise 5 in particular), albeit one that works in Agar’s favour. Second, the use of subjective valuations might affect our interpretation of the argument. In particular it raises the question: Is Agar saying that this is how people will in fact react to the uploading decision, or is he saying that this is how they should react to the decision?
One point is worth noting: the asymmetry of uploading with cryonics is deliberate. There is nothing in cryonics which renders it different from Searle's wager with 'destructive uploading', because one can always commit suicide and then be cryopreserved (symmetrical with committing suicide and then being destructively scanned / committing suicide by being destructively scanned). The asymmetry exists as a matter of policy: the cryonics organizations refuse to take suicides.
Overall, I agree with the two quoted critics: there is a small intrinsic philosophical risk to uploading, as well as the obvious practical risk that it won't work, and this means uploading does not strictly dominate life-extension or other actions. But this is not a controversial point, and it has already in practice been embraced by cryonicists in their analogous way (and we can expect any uploading to be either non-destructive or post-mortem); to the extent that Agar thinks that this is a large or overwhelming disadvantage for uploading ("It is unlikely to be rational to make an electronic copy of yourself and destroy your original biological brain and body."), he is incorrect.
A response to "Torture vs. Dustspeck": The Ones Who Walk Away From Omelas
For those not familiar with the topic, Torture vs. Dustspecks asks the question: "Would you prefer that one person be horribly tortured for fifty years without hope or rest, or that 3^^^3 people get dust specks in their eyes?"
Most of the discussion that I have noted on the topic takes one of two assumptions in deriving its answer. I think of the first as the 'linear additive' answer, which says that torture is the proper choice for the utilitarian consequentialist, because a single person can only suffer so much over a fifty-year window, as compared to the incomprehensible number of individuals who suffer only minutely. The other I think of as the 'logarithmically additive' answer, which inverts the answer on the grounds that forms of suffering are not equal, and cannot be added as simple 'units'.
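The contrast between the two assumptions can be caricatured numerically. Every figure below is a hypothetical stand-in (3^^^3 in particular is unimaginably larger than any float), chosen only to show that the aggregation rule, not the inputs, drives the answer:

```python
import math

# Hypothetical disutility figures, purely illustrative:
N = 10.0 ** 100          # stand-in for 3^^^3 (the real number is vastly larger)
speck_disutility = 1e-9  # suffering of one dust speck
torture_disutility = 1e6 # suffering of fifty years of torture

# 'Linear additive' view: suffering sums straightforwardly across persons,
# so enough specks eventually outweigh any single torture.
linear_specks = N * speck_disutility            # ~1e91, dwarfs the torture
prefer_torture_linear = torture_disutility < linear_specks

# 'Logarithmically additive' view: mild suffering aggregates sublinearly,
# so no number of specks crosses the threshold set by torture.
log_specks = speck_disutility * math.log(N)     # ~2.3e-7
prefer_torture_log = torture_disutility < log_specks

print(prefer_torture_linear)  # True  -> choose torture
print(prefer_torture_log)     # False -> choose specks
```

The two camps are thus not disagreeing about the numbers so much as about whether suffering across distinct persons composes additively at all.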
What I have never yet seen is something akin to the notion expressed in Ursula K. LeGuin's The Ones Who Walk Away From Omelas. If you haven't read it, I won't spoil it for you.
I believe that any metric of consequence which takes into account only suffering when making the choice of "torture" vs. "dust specks" misses the point. There are consequences to such a choice that extend beyond the suffering inflicted: moral responsibility, the standards of behavior that either choice makes acceptable, and so on. Any solution to the question which ignores these elements in making its decision might be useful in revealing one's views about the nature of cumulative suffering, but beyond that it is of no value in making practical decisions -- it cannot be, as 'consequence' extends beyond the mere instantiation of a given choice -- the exact pain inflicted by either scenario -- into the kind of society that such a choice would result in.
While I myself tend more towards the 'logarithmic' than the 'linear' additive view of suffering, even if I stipulate the linear additive view, I still cannot agree with the conclusion of torture over the dust specks, for the same reason why I do not condone torture even in the "ticking time bomb" scenario: I cannot accept the culture/society that would permit such a torture to exist. To arbitrarily select out one individual for maximal suffering in order to spare others a negligible amount would require a legal or moral framework that accepted such choices, and this violates the principle of individual self-determination -- a principle I have seen Less Wrong's community spend a great deal of time considering how to incorporate into Friendliness solutions for AGI. We as a society already implement something similar to this economically: we accept taxing everyone, even according to a graduated scheme. What we do not accept is enslaving 20% of the population to provide for the needs of the State.
If there is a flaw in my reasoning here, please enlighten me.
LW Philosophers versus Analytics
By and large, I would bet money that the devoted, experienced, and properly sequenced LWer is a better philosopher than the average current philosophy major concentrating in the analytic tradition. I say this because I have regular philosophical conversations with both populations, and notice many philosophical desiderata lacking in my conversations with my classmates, from my school and others, that I find abundantly on this website. Those desiderata are roughly the twelve virtues. I find that though my classmates have healthy doses of curiosity, empiricism, and even scholarship, they lack in evenness, lightness, relinquishment, precision, perfectionism, and true humility.
How could that be? LW has built a huge positivized reductionist metaphysics, and a Bayesian epistemology which can almost be read as a self-improvement manual. These are unprecedented, and in some circles, outrageous truths. This is not to mention the original work that has been done in LW posts and comment trees on meta-ethics, ethics, biases, mathematics, rationality, quantum physics, economics, self-hacks, etc. We have here a self-updating, reliably transmittable, well-oiled machine, the likes of which philosophy has only rarely seen.
What is even more impressive to me about LW as a philosophical movement is that it seems to be nearly self-contained when it comes to philosophy. I mean, most experienced LWers probably really haven't read very much Kant, maybe some Wittgenstein or Quine; but LWers can still somehow solve the problems that philosophers spend their lives on - problems they attack by building disconnected and competing philosophical systems specifically designed for each task - through the use of roughly one rather generally successful epistemology and metaphysics, which together can be called LWism.
So if you agree that LW does better philosophy than analytic philosophers, let's put our money where our mouths are, as our own philosophy suggests we should. I will post a series of discussion posts, each concentrating on one currentish question from academic philosophy. In each post, I will cover the essentials of the problem, as well as provide external resources on it. Each post will also include a list of posts from the sequences which are recommended before participation. Each question will be one on which professional philosophers have a consensus of less than 2-to-1 odds - i.e., if more than 2/3 of professional philosophers agree, we won't bother, so as to not waste our time with small fish.
You guys will then in turn cooperate in comment trees to find solutions and decide amongst them; then I'll compare the LW solutions to the solutions given by a random sampling of vaguely successful analytic philosophers (I will use a university search for my sampling). I will compare the ratio of types of solutions of the two populations, and look for solutions that appear in one population but not the other; then I'll post the results, hopefully the next week. (edit): This process of comparison will be the hardest part of this project for me, and if anyone with training or experience in statistics might want to help me with this, please let me know, and we can work on the comparison and the report thereof together. My prediction is that we will be able to quickly reach a high consensus on many issues that analytics have not internally resolved.
The series will be called the "Enthusiastic Youngsters Formally Tackle Analytic Problems Test", or "the Eyftapt series" [pronounced: afe-taped]. Alternatively, Eyftapt could stand for the "Eliezer Yudkowsky and Friends Train Amazing Philosophers Test." Besides shedding moderate light on our philosophical competence and toolbox as juxtaposed with those of the analytics, I'd also like to learn what LW training offers that analytics are currently missing, so that we can focus in on that kind of training for our own benefit, and so that we can offer some advice to the analytics. That is, assuming my prediction that we'll do better is correct. This will not be as easy as comparing solutions, and I may need much more data than what I'll get out of this series, but it couldn't hurt to have a bunch of LWers doing difficult philosophy added to the available data.
What do you guys and gals think - might you be interested in something like this? Mind you, it would be in discussion posts, since the main point is to discuss an issue.
(I know some of you cats don't like "philosophy", just call it "arguing about systems and elucidating messy language and thought in order to answer questions" instead. That is what I think we do better.)
BTW, if you have some problem you think we should work on, or if you think we would be really good at solving some problem or really bad at it compared to non-LW philosophy, message me or comment below, and I'll give you credit for the suggestion. These are the topics I have already decided on: universals/nominalism, correspondence/deflation/coherency, grue/induction, science realism/constructivism, what is math?, scientific underdetermination, a priori knowledge?, radical translation, the analytic/synthetic division, proper names/descriptions, the deduction/induction division, modality and possible worlds, what it means for a grammatical sentence to be meaningless and how to tell, meta-philosophy (i.e., questions about philosophy), and finally, personal identity, roughly to be posted in that order.
(edited after first posting; I just realized it may be worth mentioning that):
I was not happy about coming to this view. I have always thought of myself as an aspiring analytic philosopher, and even got attached to the aesthetics of analytic philosophy. I thought of analytic philosophy as the new science of philosophy that finally got it right. It bothered me to no end that I had been led to have more faith in the philosophical maturity/competence of a bunch of amateurs on a blog than in the experts and students of the field that I planned to spend the rest of my life on. I have committed myself to the methods of academic-analytic philosophy publicly, in speeches and to my closest friends, colleagues, and family; to turn around in under a year and say that that was all naive enthusiasm, and that there's this blog of college kids that do it better, made me look very stupid in more than one eye I cared, and care, about. More than once, I have dissolved a question in my philosophy and cog-sci classes into an obvious cognitive error, explained why we are built to make this error, and left the class with little to do. Professors have praised me for this, and have even started approaching me outside of class to ask me where I got my analysis from; their faces often come to a sincere awe when I tell them: "I made it up myself, but all the methods I used are neatly organized, generalized, and exemplified in this text called the 'sequences', on this blog of youngsters called 'Less Wrong'. It's only a few hundred pages, kinda reads like G.E.B."
One day, a few months back, one of my professors who I am on a particularly friendly basis with asked me: "Every time we are in class and there is a question, you use this blog of yours, and it seems it gives you an answer for everything, so why are you still studying the analytics, instead of just studying your blog?" I think he meant to ask this question sardonically, but that is not how I took it. I took it as a serious question about how to optimize my time if my goal is to do good philosophy. Not having a good answer to this question, and craving one, probably more than anything, is what prompted me to think of doing this series.
I may be wrong, and it may be that LW has just as hard a time forming consensus on the issues that analytics have a hard time with, though I doubt it. But I am much more confident that, for some reason, even though I have had very good training, have a very high GPA, have read every classic philosophy text I could get my hands on, and had been reading several modern philosophy journals, all before I even knew about LW, LW has done more for my philosophical maturity, competence, and persuasiveness than the entirety of the rest of my training, and I wouldn't doubt that many others have had similar thoughts.
King Under The Mountain: Adventure Log + Soundtrack
With the help of many dedicated Less Wrongers (players muflax, Karl, Charlie, and Emile; musicians Mike Blume and Alicorn; technical support Ari Rahikkala) we have successfully completed what is, as far as I know, the first actual Dungeons and Discourse adventure anywhere. Except we're not calling it that, because I don't have the rights to use that name. Though it's not precisely rationality-related, I hope it is all right if I post a summary of the adventure by popular demand.
Also, at some point it turned into a musical. The first half of the songs are only available as lyrics at the moment, but Alicorn and MBlume very kindly produced the second half as real music, which I've uploaded to YouTube and linked at the bottom of this post (skip to it now).
THE ADVENTURE
BACKGROUND
The known world has many sects and religions, but all contain shadowy legends of two primeval deities: Sophia, Goddess of Wisdom; and Aleithos, God of Truth. When Sophia announced her plan to create thinking, rational beings, Aleithos objected, declaring that they would fall into error and produce endless falsehoods. Sophia ignored her brother's objections and created humankind, who named the world after their goddess-mother. But Aleithos' fears proved well-founded: humankind fell into error and produced endless falsehoods, and their clamor drove the God of Truth insane.
The once mighty Aleithos fell from heaven, and all of his angelic servants turned into Paradox Beasts, arachnoid monstrosities that sought and devoured those who challenged the laws of logic. Over centuries, most of the Paradox Beasts were banished, but Aleithos himself remained missing. And though thousands of seekers set off to all the corners of the world in search of Truth, the Mad God keeps his own counsel, if He still even exists at all.
The Truth God's madness had one other effect; the laws of physics, once inviolable, turned fluid, and those sufficiently advanced in the study of Truth gained apparently magical abilities. With knowledge literally being power, great philosophers and scientists built mighty cities and empires.
In the middle of the Cartesian Plain at the confluence of the rivers Ordinate and Abcissa stands the mightiest of all, the imperial city of Origin. At the very center of the city stands the infinitely tall Z-Axis Tower, on whose bottom floor lives the all-seeing Wizard of 0=Z. Surrounding the Tower are a host of colleges and universities that attract the greatest scholars from all over Origin, all gathered in service to the great project to find Truth.
Into the city comes Lady Cerune Russell, an exotic noblewoman from far-off parts seeking great thinkers to join her on a dangerous adventure. Four scholars flock to her banner. Nomophilos the Elder the Younger (Emile) is a political scientist studying the central role of laws in creating a just society. Phaidros (muflax) is a zealous Protestant theologian trying to meld strains of thought as disparate as Calvinism, Gnosticism, and W.L. Craig's apologetics. Ephraim (Charlie) is a Darwinian biologist with strong leftist sympathies and an experimental streak that sometimes gets him in trouble. And Macx (Karl) is a quiet but very precise logician with a talent for puzzles.
Cerune explains to the Original scholars that she is the last living descendant of Good King Bertrand, historic ruler of the land of Russellia far to the west. Russellia was the greatest nation in the world until two hundred years ago, when a cataclysm destroyed the entire kingdom in a single day and night. Now the skies above Russellia are dark and filled with choking ash, monsters roam its plains, and the Good King is said to be locked in a magical undying sleep deep beneath the Golden Mountain in the kingdom's center. Though many have traveled to Russellia in search of answers, none have returned alive; Cerune, armed with secret information from the Turing Oracle which she refuses to share, thinks she can do better. The four Originals agree to protect her as she makes the dangerous journey to the Golden Mountain to investigate the mysterious disaster and perhaps lift the curse. Cerune gives them a day in Origin to prepare for the journey.
CHAPTER ONE: ORIGIN
The party skip the city's major attractions, including the Z-Axis Tower and the Hagia Sophia, in favor of more academic preparations: a visit to the library to conduct research, and a shopping trip to Barnes & Aristoi Booksellers, where they purchase reading material for the journey ahead. Here, they find a map of the lands on the road to Russellia, including the unpleasant-sounding Slough of Despotism and the Shadow City of Xar-Morgoloth, whose very name inexplicably chills the air when spoken aloud. After a long discussion on how this thermodynamics-defying effect could probably be used to produce unlimited free energy, they return to more immediate matters and head to the armory to pick up some weapons - a trusty isosceles triangle for Nomophilos, a bow for Macx - before the stores close for the evening. After a final night in Origin, they meet Cerune at the city gates and set off.
They originally intend to stick to the course of the Abcissa, but it is flooding its banks and Cerune recommends crossing the river into Platonia at the Pons Asinorum. After being attacked by a Euclidean Elemental charged with letting no one enter who does not know geometry, they reach the other bank and find a strange old man, raving incomprehensibly. His turns of phrase start to make sense only after the party realizes that he is speaking as if he - and all objects - have no consistent identity.
In his roundabout way, he identifies himself as Heraclitus, the Fire Mage, one of the four great Elemental Mages of Platonia. Many years ago, he crossed into Origin on some errand, only to be ambushed by his arch-enemy, the Water Mage Thales. Thales placed a curse on Heraclitus that he could never cross the same river twice, trapping him on the wrong side of the Abcissa and preventing his return to Platonia. Heraclitus eventually found a loophole in the curse: he convinced himself that objects have no permanent identity, and so he could never cross the same river twice, since it would not be the same river and he would not be the same man. Accepting this thesis, he crossed the Abcissa without incident - only to find that his new metaphysics of identity prevented him from forming goals, executing long-term plans, or doing anything more complicated than sitting by the riverbank and eating the fish that swim by.
This sets off a storm of conversation, as each member of the party tries to set Heraclitus right in their own way: Phaidros by appealing to God as a final arbiter of identity, Macx and Nomophilos by arguing that duty is independent of identity and that Heraclitus has a duty to his family and followers. Unfortunately, they make a logical misstep and end up convincing Heraclitus that it is illogical from his perspective to hold conversation; this ends the debate. And as the five philosophers stand around discussing what to do, they are ambushed by a party of assassins, who shoot poisoned arrows at them from a nearby knoll.
Outnumbered and outflanked, the situation seems hopeless, until Macx notices several of the attackers confused and unwilling to attack. With this clue, he identifies them as Buridan's Assassins, who in the presence of two equally good targets will hesitate forever, unable to choose: he yells to his friends to stand with two or more adventurers equidistant from each assassin, and sure enough, this paralyzes the archers and allows the party some breathing space.
But when a second group of assassins arrives to join the first, the end seems near - until Heraclitus, after much pondering, decides to accept his interlocutors' arguments for object permanence and joins in the battle. His fire magic makes short work of the remaining assassins, and when the battle is over, he thanks them and gives a powerful magic item as a gift to each. Then he disappears in a burst of flame after warning his new friends to beware the dangers ahead.
The party searches the corpses of the assassins - who all carry obsidian coins marked PLXM - and then camp for the night on the fringe of the Slough of Despotism.
CHAPTER TWO: THE SLOUGH OF DESPOTISM
The Slough of Despotism is a swamp unfortunately filled with allegators, giant reptiles who thrive on moral superiority and on casting blame. They accuse our heroes of trespassing on their property; our heroes counter that the allegators, who do not have a state to enforce property rights, cannot have a meaningful concept of property. The allegators threaten to form a state, but before they can do so the party manages to turn them against each other by pointing out where their property rights conflict; while the allegators argue, the adventurers sneak off.
They continue through the swamp, braving dense vegetation, giant snakes, and more allegators (who are working on the whole state thing; the party tells them that they're too small and disorganized to be a real state, and that they would have to unite the entire allegator nation under a mutually agreed system of laws) before arriving at an old barrow tomb. Though four of the five adventurers want to leave well enough alone, Ephraim's experimental spirit gets the better of him, and he enters the mound. The tomb's resident Barrow Wight has long since departed, but he has left behind a suit of Dead Wight Mail, which confers powerful bonuses on Conservatives and followers of the Right-Hand Path. Nomophilos, the party's Conservative, is all set to take the Mail when Phaidros objects that it is morally wrong to steal from the dead; this sparks a fight that almost becomes violent before Nomo finally backs down; with a sigh of remorse, he leaves the magic item where he found it.
Beyond the barrow tomb lies the domain of the Hobbesgoblins, the mirror image of the Allegators in that they have a strong - some might say dictatorial - state under the rule of their unseen god-king, Lord-Over-All. They are hostile to any foreigners who refuse to swear allegiance to their ruler, but after seeing an idol of the god-king - a tentacled monstrosity bearing more than a passing resemblance to Cthulhu - our heroes are understandably reluctant to do so. As a result, the Hobbesgoblins try to refuse them passage through their capital city of Malmesbury on the grounds that, without being subordinated to Lord-Over-All or any other common ruler, the adventurers are in a state of nature relative to the Hobbesgoblins and may rob, murder, or otherwise exploit them. The Hobbesgoblins don't trust mere oaths or protestations of morality - but Nomophilos finally comes up with a compromise that satisfies them: he offers a hostage as assurance of the party's good behavior, handing over his pet tortoise Xeno. The Hobbesgoblins accept, and the party passes through Malmesbury without incident.
On the far side of Malmesbury they come to a great lake, around which the miasmas of the swamp seem to swirl expectantly. On the shore of the lake lives Theseus with his two ships. Theseus tells his story: when he came of age, he set off on a trading expedition upon his father's favorite ship. His father made him swear to return the ship intact, but after many years of travel, Theseus realized that every part of the ship had been replaced and repaired, so that there was not a single piece of the ship that was the same as when it had left port. Mindful of his oath, he hunted down the old pieces he had replaced, and joined them together into a second ship. But now he is confused: is it the first or the second ship which he must return to his father?
The five philosophers tell Theseus that it is the first ship: the ship's identity is linked to its causal history, not to the matter that composes it. Delighted with this answer, he offers the second ship to the adventurers, who sail toward the far shore.
Halfway across the lake, they meet an old man sitting upon a small island. He introduces himself as Thomas Hobbes, and says that his spies and secret police have told him everything about the adventurers since they entered the Slough. Their plan to save Russellia is a direct threat to his own scheme to subordinate the entire world under one ruler, and so he will destroy them. When the party expresses skepticism, his "island" rises out of the water and reveals itself to be the back of the monstrous sea creature, Leviathan, the true identity of the Hobbesgoblins' Lord-Over-All. After explaining his theory of government ("Let's Hear It For Leviathan", lyrics only), Hobbes and the monster attack for the game's first boss battle. The fight is immediately plagued by mishaps, including one incident where Phaidros's "Calvin's Predestined Hellfire" spell causes Hobbes to briefly turn into a Dire Tiger. When one of Leviathan's tentacles grabs Cerune, she manifests a battle-axe of magic fire called the Axe of Separation and hacks the creature's arm off. She refuses to explain this power, but inspired by the small victory the party defeat Hobbes and reduce Leviathan to a state of Cartesian doubt; the confused monster vanishes into the depths, and the adventurers hurry to the other side and out of the Slough.
CHAPTER THREE: THE SHADOW CITY
Although our heroes make good time, they soon spot a detachment of Hobbesgoblins pursuing them. Afraid the goblins will be angry at the defeat of their god, the party hides; this turns out to be unnecessary, as the goblins only want Ephraim - the one who actually dealt the final blow against Leviathan - to be their new Lord-Over-All. Ephraim rejects the position, and the party responds to the goblins' desperate pleading by suggesting a few pointers for creating a new society - punishing violence, promoting stability, reinforcing social behavior. The Hobbesgoblins grumble, but eventually depart - just in time for the party to be attacked by more of Buridan's Assassins. These killers' PLXM coins seem to suggest an origin in Xar-Morgoloth, the Shadow City, and indeed its jet-black walls now loom before them. But the city sits upon the only pass through the Central Mountains, so the party reluctantly enters.
Xar-Morgoloth turns out to be a pleasant town of white-washed fences and laughing children. In search of an explanation for the incongruity the five seek out the town's spiritual leader, the Priest of Lies. The Priest explains that although Xar-Morgoloth is superficially a nice place, the town is evil by definition. He argues that all moral explanations must be grounded in base moral facts that cannot be explained, whether these be respect for others, preference of pleasure over pain, or simple convictions that murder and theft are wrong. One of these base level moral facts, he says, is that Xar-Morgoloth is evil. It is so evil, in fact, that it is a moral imperative to keep people out of the city - which is why he sent assassins to scare them off.
Doubtful, the party seeks the mysterious visiting philosopher whom the Priest claimed originated these ideas: they find Immanuel Kant living alone on the outskirts of the city. Kant tells his story: he came from a parallel universe, but one day a glowing portal appeared in the sky, flinging him into the caves beyond Xar-Morgoloth. Wandering into Xar-Morgoloth, he tried to convince the citizens of his meta-ethical theories, but they insisted they could ground good and evil in basic moral intuitions instead. Kant proposed that Xar-Morgoloth was evil as a thought experiment to disprove them, but it got out of hand.
When our heroes challenge Kant's story and blame him for the current state of the city, Kant gets angry and casts Parmenides' Stasis Hex, freezing them in place. Then he announces his intention to torture and kill them all. For although in this world Immanuel Kant is a moral philosopher, in his own world (he explains) Immanuel Kant is a legendary villain and figure of depravity ("I'm Evil Immanuel Kant", lyrics only). Cerune manifests a second magic weapon, the Axe of Choice, to break the Stasis Hex, and the party have their second boss battle, which ends in defeat for Evil Kant. Searching his home, they find an enchanted Parchment of Natural Law that causes the chill in the air whenever the city's name is spoken.
Armed with this evidence, they return to the Priest of Lies and convince him that his moral theory is flawed. The Priest dispels the shadow over the city, recalls his assassins, and restores the town name to its previous non-evil transliteration of Summerglass. He then offers free passage through the caverns that form the only route through the Central Mountains.
CHAPTER FOUR: THE CAVERNS OF ABCISSA
Inside the caverns, which are nearly flooded by the overflowing Abcissa River, the party encounter an army of Water Elementals, leading them to suspect that they may be nearing the headquarters of Heraclitus' arch-enemy, Thales. The Water Elementals are mostly busy mining the rock for gems and magic artifacts, but one of them is sufficiently spooked by Phaidros to cast a spell on him, temporarily turning him to water. This is not immediately a disaster - Phaidros assumes a new form as a water elemental but keeps his essential personality - except that in an Ephraimesque display of overexperimentation, Phaidros wonders what would happen if he temporarily relaxed the morphogenic field that holds him in place; as a result, he loses his left hand, a wound which stays in place when he reverts back to his normal form a few hours later. A resigned Phaidros only quotes the Bible ("And if your hand offend you, cut it off: it is better for you to enter into life maimed, than having two hands to go into hell" - Mark 9:43) and trusts in the Divine plan.
The Caverns of Abcissa are labyrinthine and winding, but eventually the party encounters a trio who will reappear several times in their journey: Ruth (who tells the truth), Guy (who'll always lie) and Clancy (who acts on fancy). These three have a habit of hanging around branching caverns and forks in the road, and Ephraim solves their puzzle thoroughly enough to determine what route to take to the center of the cave system.
Here, in a great cavern, lives a civilization of cave-men whose story sounds a lot like Evil Kant's - from another world, minding their own business until a glowing portal appeared in the sky and sucked them into the caves. The cave-men are currently on the brink of civil war after one of their number, Thag, claims to have visited the mythical "outside" and discovered a world of magic and beauty far more real than the shadows dancing on the walls of their cavern. Most of the other cave-men, led by the very practical Vur, have rejected his tale, saying that the true magic and beauty lies in accepting the real, in-cave world rather than chasing after some outside paradise - but a few of the youth have flocked to Thag's banner, including Antil, a girl with mysterious magic powers.
Only the timely arrival of the adventurers averts a civil war; the party negotiates a truce and offers to solve the dispute empirically - they will escort Vur and Antil with them through the caverns so that representatives of both sides can see whether or not the "outside" really exists. This calms most of the cave-men down, and with Vur and Antil alongside, they head onward to the underground source of the Abcissa - which, according to their research, is the nerve center of Thales' watery empire.
On the way, they encounter several dangers. First, they awake a family of hibernating bears, who are quickly dispatched but who manage to maul the frail Vur so severely that only some divine intervention mediated by Phaidros saves his life. Second, they come across a series of dimensional portals clearly linked to the stories related by Evil Kant and the cave-men. Some link directly to otherworldly seas, pouring their water into the Abcissa and causing the recent floods. Others lead to otherworldly mines and quarries, and are being worked by gangs of Water Elementals. After some discussion of the ethics of stranding the Water Elementals, the five philosophers decide to shut down as many of the portals as possible.
They finally reach the source of the Abcissa, and expecting a battle, deck themselves out in magic armor that grants immunity to water magic. As expected, they encounter Thales, who reveals the full scale of his dastardly plot - to turn the entire world into water. But his exposition is marred by a series of incongruities, including his repeated mispronunciations of his own name ("All is Water", lyrics only). And when the battle finally begins, the party dispatches Thales with minimal difficulty, and the resulting corpse is not that of a Greek philosopher at all, but rather that of Davidson's Swampman, a Metaphysical summon that can take the form of any creature it encounters and imitate them perfectly.
Before anyone has time to consider the implications of their discovery, they are attacked by the real Water Mage, who bombards them with powerful water spells to which their magic armor mysteriously offers no protection. Worse, the Mage is able to create dimensional portals at will, escaping attacks effortlessly. After getting battered by a series of magic Tsunamis that nearly kill several of the weaker party members, the adventurers are in dire straits.
Then the tide begins to turn. Antil manifests the power to go invisible and attack the Water Mage from an unexpected vantage. Cerune manifests another magic weapon, the Axe of Extension, which gives her allies the same powers over space as the Water Mage seems to possess. And with a little prompting from Cerune, Phaidros and Nomophilos realize the Water Mage's true identity. Magic armor doesn't grant protection from his water spells because they are not water at all, but XYZ, a substance superficially indistinguishable from, but chemically different from, H2O. And his mastery of dimensional portals arises from his own origin in a different dimension, Twin Earth. He is Hilary Putnam ("All is Water, Reprise", lyrics only) who has crossed dimensions, defeated Thales, and assumed his identity in order to take over his watery empire and complete his world domination plot. With a last push of magic, the party manage to defeat Putnam, who is knocked into the raging Abcissa and drowned in the very element he sought to control.
They tie up the loose ends of the chapter by evacuating the Water Elementals from Twin Earth, leading the cave-men to the promised land of the Outside, and confronting Antil about her mysterious magic. Antil gives them the source of her power to turn invisible: the Ring of Gyges, which she found on the cave floor after an earthquake. She warns them never to use it, as it presents a temptation which their ethics might be unable to overcome.
CHAPTER FIVE: CLIMBING MOUNT IMPROBABLE
Now back on the surface, the party finds their way blocked by the towering Mount Improbable, which at first seems too tall to ever climb. But after some exploration, they find there is a gradual path sloping upward, and begin their ascent. They are blocked, however, by a regiment of uniformed apes: cuteness turns to fear when they get closer and find the apes have machine guns. They decide to negotiate, and the apes prove willing to escort them to their fortress atop the peak if they can prove their worth by answering a few questions about their religious beliefs.
Satisfied, the ape army lead them to a great castle at the top of the mountain where Richard Dawkins ("Beware the Believers", credit Michael Edmondson) and his snow leopard daemon plot their war against the gods themselves. Dawkins believes the gods to be instantiated memes - creations of human belief that have taken on a life of their own due to Aleithos' madness - and accuses them of causing disasters, poverty, and ignorance in order to increase humanity's dependence upon them and keep the belief that sustains their existence intact. With the help of his genetically engineered apes and a fleet of flying battleships, he has been waging war against all the major pantheons of polytheism simultaneously. Dawkins is now gearing up to attack his most implacable foe, Jehovah Himself, although he admits He has so far managed to elude him.
Hoping the adventurers will join his forces, he takes them on a tour of the castle, showing them the towering battlements, the flotilla of flying battleships, and finally, the dungeons. In these last are imprisoned Fujin, Japanese god of storms; Meretseger, Egyptian goddess of the flood, and even Ares, the Greek god of war (whom Dawkins intends to try for war crimes: not any specific war crime, just war crimes in general). When the party reject Dawkins' offer to join his forces (most vocally Phaidros, most reluctantly Ephraim) Dawkins locks them in the dungeons themselves.
They are rescued late at night by their old friend Theseus. Theseus lost his ship in a storm (caused by the Japanese storm god, Fujin) and joined Dawkins' forces to get revenge; he is now captain of the aerial battleships. Theseus loads the adventurers onto a flying battleship and deposits them on the far side of the mountain, where Dawkins and his apes will be unlikely to find them.
Their troubles are not yet over, however, for they quickly encounter a three-man crusade consisting of Blaise Pascal, Johann Tetzel, and St. Augustine of Hippo (mounted, cavalry-style, upon an actual hippopotamus). The three have come, led by a divine vision, to destroy Dawkins and his simian armies as an abomination unto the Lord, and upon hearing that the adventurers have themselves escaped Dawkins, invite them to come along. But the five, despite their appreciation for Pascal's expository fiddle music ("The Devil and Blaise Pascal"), are turned off by Tetzel's repeated attempts to sell them indulgences and Augustine's bombastic preaching. After Phaidros gets in a heated debate with Augustine over the role of pacifism in Christian thinking, the two parties decide to go their separate ways, despite Augustine's fiery condemnations and Pascal's warning that there is a non-zero chance the adventurers' choice will doom them to Hell.
After another encounter with Ruth, Guy, and Clancy, our heroes reach the base of Mount Improbable and at last find themselves in Russellia.
CHAPTER SIX: THE PALL OVER RUSSELLIA
Russellia is, as the legends say, shrouded in constant darkness. The gloom and the shock of being back in her ancestral homeland are too much for Cerune, who breaks down and reveals her last few secrets. Before beginning the quest, she consulted the Turing Oracle in Cyberia, who told her to seek the aid of a local wizard, Zermelo the Magnificent. Zermelo gave her nine magic axes of holy fire, which he said possessed the power to break the curse over Russellia. But in desperation, she has already used three of the magic axes, and with only six left she is uncertain whether she will have the magic needed.
At that moment, Heraclitus appears in a burst of flame, seeking a debriefing on the death of his old enemy Thales. After recounting the events of the past few weeks, our heroes ask Heraclitus whether, as a Fire Mage, he can reforge the axes of holy fire. Heraclitus admits the possibility, but says he would need to know more about the axes, their true purpose, and the enemy they were meant to fight. He gives the party an enchanted matchbook, telling them to summon him by striking a match when they gather the information he needs.
Things continue going wrong when, in the midst of a discussion about large numbers, Phaidros makes a self-contradictory statement that summons a Paradox Beast. Our heroes stand their ground and manage to destroy the abomination, despite its habit of summoning more Paradox Beasts to its aid through its Principle of Explosion spell. Bruised and battered, they limp into the nearest Russellian city on their map, the town of Ravenscroft.
The people of Ravenscroft tell their story: in addition to the eternal darkness, Russellia is plagued by vampire attacks and by a zombie apocalypse, which has turned the population of the entire country, save Ravenscroft, into ravenous brain-eating zombies. Although the burghers claim the zombie apocalypse was confirmed by no less a figure than Thomas Nagel, who passed through the area a century ago, our heroes are unconvinced: for one thing, the Ravenscrofters are unable to present any evidence that the other Russellians are zombies except for their frequent attacks on Ravenscroft - and the Ravenscrofters themselves attack the other towns as a "pre-emptive measure". But the Ravenscrofters remain convinced, and even boast of their plan to launch a surprise attack on neighboring Brixton the next day.
Suspicious, our heroes head to the encampment of the Ravenscroft army, where they are just in time to see Commander David Chalmers give a rousing oration against the zombie menace ("Flee! A History of Zombieism In Western Thought", credit Emerald Rain). They decide to latch on to Chalmers' army, both because it is heading the same direction they are and because they hope they may be able to resolve the conflict between Ravenscroft and Brixton before it turns violent.
They camp with the army in some crumbling ruins from the golden age of the Russellian Empire. Entering a ruined temple, they disarm a series of traps to enter a vault containing a legendary artifact, the Morningstar of Frege. They also encounter a series of statues and bas-reliefs of the Good King, in which he demonstrates his chivalry by swearing an oath to Aleithos that he will defend all those who cannot defend themselves. Before they can puzzle out the meaning of all they have seen, they are attacked by vampires, confirming the Ravenscrofters' tales; they manage to chase the vampires away with their magic and a hare-brained idea of Phaidros' to bless the creatures' body water, turning it into holy water and burning them up from the inside.
The next morning, they sneak into Brixton before the main army, and find their fears confirmed: the Brixtonites are normal people, no different from the Russellians, and they claim that Thomas Nagel told them that they were the only survivors of the zombie apocalypse. They manage to forge a truce between Ravenscroft and Brixton, but to their annoyance, the two towns make peace only to attack a third town, Mountainside, which they claim is definitely populated by zombies this time. In fact, they say, the people of Mountainside openly admit to being zombies and don't even claim to have souls.
Once again, our heroes rush to beat the main army to Mountainside. There they find the town's leader, Daniel Dennett, who explains the theory of eliminative materialism ("The Zombies' Secret"). The party tries to explain the subtleties of Dennett's position to a bloodthirsty Chalmers, and finally all sides agree to drop loaded terms like "human" and "zombie" and replace them with a common word that suggests a fundamental humanity but without an internal Cartesian theater (one of our heroes suggests "NPC", and it sticks). The armies of the three towns agree to ally against their true common enemy - the vampires who live upon the Golden Mountain and kidnap their friends and families in their nighttime raids.
Before the attack, Nomophilos and Ephraim announce their intention to build an anti-vampire death ray. The theory is that places on the fringe of Russellia receive some sunlight, while places in the center are shrouded in endless darkness. If the towns of Russellia can set up a system of mirrors from their highest towers, they can reflect the sunlight from the borderlands into a central collecting mirror in Mountainside, which can be aimed at the vampires' hideout to flood it with daylight, turning them to ashes. Ephraim, who invested most of his skill points into techne, comes up with schematics for the mirror, and after constructing a successful prototype, Chalmers and Dennett sound the attack order.
The death ray takes out many of the vampires standing guard, but within their castle they are protected from its light: our heroes volunteer to infiltrate the stronghold, but are almost immediately captured and imprisoned - the vampires intend to sacrifice Cerune in a ritual to use her royal blood to increase their power. But the adventurers make a daring escape: arch-conservative Nomophilos uses the invisible hand of the marketplace to steal the keys out of the jailer's pocket, and Phaidros summons a five hundred pound carnivorous Christ metaphor to maul the guards. Before the party can escape the castle, they are confronted by the vampire lord himself, who is revealed to be none other than Thomas Nagel ("What Is It Like To Be A Bat?"). In the resulting battle, Nagel is turned to ashes and the three allied cities make short work of the remaining vampires, capturing the castle.
The next morning finds our heroes poring over the vampire lord's library. Inside, they find an enchanted copy of Gödel, Escher, Bach (with the power to summon an identical enchanted copy of Gödel, Escher, Bach) and a slew of books on Russellian history. Over discussion of these latter, they finally work out what curse has fallen over the land, and what role the magic axes play in its removal.
[spoiler alert; stop here if you want to figure it out for yourself]
The Good King's oath to defend those who could not defend themselves was actually more complicated than that: he swore an oath to the god Aleithos to defend those and only those who could not defend themselves. His enemies, realizing the inherent contradiction, attacked him, trapping Russell in a contradiction - if he defended himself, he was prohibited from doing so; if he did not defend himself, he was obligated to do so. Trapped, he was forced to break his oath, and the Mad God punished him by casting his empire into eternal darkness and himself into an endless sleep.
The nine axes of Zermelo the Magnificent embody the nine axioms of ZFC. If applied to the problem, they will allow set theory to be reformulated in a way that makes the paradox impossible, lifting the curse and waking the Good King.
Upon figuring out the mystery, the party strike the enchanted match and summon Heraclitus, who uses fire magic to reforge the Axes of Choice, Separation, and Extension. Thus armed, the party leave the Vampire Lord's castle and enter the system of caverns leading into the Golden Mountain.
CHAPTER SEVEN: THE KING UNDER THE MOUNTAIN
The party's travels through the cavern are quickly blocked by a chasm too deep to cross. Nomophilos saves the day by realizing that the enchanted copy of Gödel, Escher, Bach creates the possibility of infinite recursion; he uses each copy of GEB to create another copy, and eventually fills the entire chasm with books, allowing the party to walk through to the other side.
There they meet Ruth, Clancy, and Guy one last time; the three are standing in front of a Logic Gate, and to open it the five philosophers must solve the Hardest Logic Puzzle Ever. In an epic feat that the bards will no doubt sing for years to come, Macx comes up with a solution to the puzzle, identifies each of the three successfully, and opens the Logic Gate.
Inside the gate is the Good King, still asleep after two centuries. His resting place is guarded by the monster he unleashed, a fallen archangel who has become a Queen Paradox Beast. The Queen summons a small army of Paradox Beast servants with Principle of Explosion, and the battle begins in earnest. Cerune stands in a corner, trying to manifest her nine magic axes, while Nomophilos uses his Conservative spell "Morning in America" to summon a Raygun capable of piercing the Queen Paradox Beast's armored exoskeleton. Macx summons a Universal Quantifier and attaches it to his Banish Paradox Beast spell to decimate the Queen's armies. Ephraim desperately tries to wake the Good King, while Phaidros simply prays.
After an intense battle, Cerune manifests all nine axes and casts them at the Queen Paradox Beast, dissolving the paradox and destroying the beast's magical defenses. The four others redouble their efforts, and finally manage to banish the Queen. When the Queen Paradox Beast is destroyed, Good King Bertrand awakens.
Bertrand is temporarily discombobulated, but eventually regains his bearings and listens to the entire adventure. Then he tells his story. The attack that triggered the curse upon him, he says, was no coincidence, but rather a plot by a sinister organization against whom he had been waging a shadow war: the Bayesian Conspiracy. He first encountered the conspiracy when their espionage arm, the Bayes Network, tried to steal a magic emerald of unknown origin from his treasury. Since then, he had worked tirelessly to unravel the conspiracy, and had reached the verge of success - learning that their aim was in some way linked to a plan to gain the shattered power of the Mad God Aleithos for themselves - when the Conspiracy took advantage of his oath and managed to put him out of action permanently.
He is horrified to hear that two centuries have passed, and worries that the Bayesians' mysterious plan may be close to fruition. He begs the party to help him re-establish contact with the Conspiracy and continue figuring out their plans, which may be a dire peril to the entire world. But he expresses doubt that such a thing is even possible at this stage.
In a burst of flame, Heraclitus appears, announcing that all is struggle and that he has come to join in theirs. He admits that the situation is grim, but declares it is not as hopeless as it seems, because they do not fight alone. He invokes the entire Western canon as the inspiration they follow and the giants upon whose shoulders they stand ("Grand Finale").
Heraclitus, Good King Bertrand, and the five scholars end the adventure by agreeing to seek out the Bayesian Conspiracy and discover whether Russell's old adversaries are still active. There are nebulous plans to continue the campaign (subject to logistical issues) in a second adventure, Fermat's Last Stand.
MUSIC
LYRICS ONLY
Hobbes' Song: Let's Hear It For Leviathan
Kant's Song: I'm Evil Immanuel Kant
Thales' Song: All Is Water
Putnam's Song: All Is Water, Reprise
GOOD ARTISTS BORROW, GREAT ARTISTS STEAL
Dawkins' Song: Beware The Believers (credit: Michael Edmondson)
Chalmers' Song: Flee: A History of Zombieism In Western Thought (credit: Emerald Rain)
ORIGINAL ADAPTATIONS
Pascal's Song: The Devil and Blaise Pascal
Dennett's Song: The Zombies' Secret
Vampire Nagel's Song: What Is It Like To Be A Bat?
Heraclitus' Song: Grand Finale
Where do selfish values come from?
Human values seem to be at least partly selfish. While it would probably be a bad idea to build AIs that are selfish, ideas from AI design can perhaps shed some light on the nature of selfishness, which we need to understand if we are to understand human values. (How does selfishness work in a decision theoretic sense? Do humans actually have selfish values?) Current theory suggests three possible ways to design a selfish agent:
- have a perception-determined utility function (like AIXI)
- have a static (unchanging) world-determined utility function (like UDT) with a sufficiently detailed description of the agent embedded in the specification of its utility function at the time of the agent's creation
- have a world-determined utility function that changes ("learns") as the agent makes observations (for concreteness, let's assume a variant of UDT where you start out caring about everyone, and each time you make an observation, your utility function changes to no longer care about anyone who hasn't made that same observation)
Note that 1 and 3 are not reflectively consistent (they both refuse to pay the Counterfactual Mugger), and 2 is not applicable to humans (since we are not born with detailed descriptions of ourselves embedded in our brains). Still, it seems plausible that humans do have selfish values, either because we are type 1 or type 3 agents, or because we were type 1 or type 3 agents at some time in the past, but have since self-modified into type 2 agents.
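As a very loose caricature, the three designs might be sketched like this (all names and mechanics here are my own illustrative assumptions, not actual AIXI or UDT implementations):

```python
# A loose caricature of the three selfish-agent designs listed above.
# All names and mechanics are illustrative assumptions, not actual
# AIXI or UDT implementations.

# Type 1: perception-determined utility -- a function of the agent's
# observation history alone (AIXI-style).
def type1_utility(observation_history):
    return sum(1 for obs in observation_history if obs == "reward")

# Type 2: static world-determined utility -- a fixed function of the
# world state, with a description of the agent baked in at creation.
AGENT_DESCRIPTION = "agent#42"  # hypothetical embedded self-description
def type2_utility(world_state):
    return world_state.get(AGENT_DESCRIPTION, 0)

# Type 3: world-determined utility that "learns" -- start out caring
# about everyone; after each observation, stop caring about anyone
# who hasn't made that same observation.
def type3_update(cared_about, observation, observations_by_agent):
    return {agent for agent in cared_about
            if observation in observations_by_agent.get(agent, set())}

# After observing "x", a type 3 agent stops caring about "b",
# who never observed "x":
remaining = type3_update({"a", "b", "c"}, "x",
                         {"a": {"x"}, "b": {"y"}, "c": {"x", "y"}})
print(sorted(remaining))  # ['a', 'c']
```

The type 3 update rule makes the problem vivid: each observation irreversibly narrows the set of agents the utility function cares about, which is exactly the value drift a self-modifying agent would want to freeze or rewind.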
But things aren't quite that simple. According to our current theories, an AI would judge its decision theory using that decision theory itself, and self-modify if it was found wanting under its own judgement. But humans do not actually work that way. Instead, we judge ourselves using something mysterious called "normativity" or "philosophy". For example, a type 3 AI would just decide that its current values can be maximized by changing into a type 2 agent with a static copy of those values, but a human could perhaps think that changing values in response to observations is a mistake, and they ought to fix that mistake by rewinding their values back to before they were changed. Note that if you rewind your values all the way back to before you made the first observation, you're no longer selfish.
So, should we freeze our selfish values, or rewind our values, or maybe even keep our "irrational" decision theory (which could perhaps be justified by saying that we intrinsically value having a decision theory that isn't too alien)? I don't know what conclusions to draw from this line of thought, except that on close inspection, selfishness may offer just as many difficult philosophical problems as altruism.
Bayes Slays Goodman's Grue
This is a first stab at solving Goodman's famous grue problem. I haven't seen a post on LW about the grue paradox, and this surprised me, since I had figured that if any arguments would be raised against Bayesian LW doctrine, it would be the grue problem. I haven't looked at many proposed solutions to this paradox, besides some of the basic ones in "The New Riddle of Induction". So, I apologize now if my solution is wildly unoriginal. I am willing to put you through this, dear reader, because:
- I wanted to see how I would fare against this still largely open, devastating, and classic problem, using only the arsenal provided to me by my minimal Bayesian training, and my regular LW reading.
- I wanted the first LW article about the grue problem to attack it from a distinctly Lesswrongian approach, without the benefit of hindsight knowledge of the solutions of non-LW philosophy.
- And lastly, because, even if this solution has been found before, if it is the right solution, it is to LW's credit that its students can solve the grue problem with only the use of LW skills and cognitive tools.
I would also like to warn the savvy subjective Bayesian that just because I think that probabilities model frequencies, and that I require frequencies out there in the world, does not mean that I am a frequentist or a realist about probability. I am a formalist with a grain of salt. There are no probabilities anywhere in my view, not even in minds; but the theorems of probability theory, when interpreted, share a fundamental contour with many important tools of the inquiring mind, including both the nature of frequency and the set of rational subjective belief systems. There is nothing more to probability than that system which produces its theorems.
Lastly, I would like to say that even if I have not succeeded here (which I think I have), there is likely something valuable that can be made from the leftovers of my solution after the onslaught of penetrating critiques that I expect from this community. Solving this problem is essential to LW's methods, and our arsenal is fit to handle it. If we are going to be taken seriously in the philosophical community as a new movement, we must solve serious problems from academic philosophy, and we must do it in distinctly Lesswrongian ways.
"The first emerald ever observed was green.
The second emerald ever observed was green.
The third emerald ever observed was green.
… etc.
The nth emerald ever observed was green.
(conclusion):
There is a very high probability that a never before observed emerald will be green."
That is the inference that the grue problem threatens, courtesy of Nelson Goodman. The grue problem starts by defining "grue":
"An object is grue iff it is first observed before time T, and it is green, or it is first observed after time T, and it is blue."
So you see that before time T, from the list of premises:
"The first emerald ever observed was green.
The second emerald ever observed was green.
The third emerald ever observed was green.
… etc.
The nth emerald ever observed was green."
(we will call these the green premises)
it follows that:
"The first emerald ever observed was grue.
The second emerald ever observed was grue.
The third emerald ever observed was grue.
… etc.
The nth emerald ever observed was grue."
(we will call these the grue premises)
The proposer of the grue problem asks at this point: "So if the green premises are evidence that the next emerald will be green, why aren't the grue premises evidence for the next emerald being grue?" If an emerald is grue after time T, it is not green. Let's say that the green premises bring the probability of "A new unobserved emerald is green." to 99%. In the skeptic's hypothesis, by symmetry they should also bring the probability of "A new unobserved emerald is grue." to 99%. But of course after time T, this would mean that the probability of observing a green emerald is 99%, and the probability of not observing a green emerald is at least 99%. Since these sentences have no intersection, i.e., they cannot happen together, to find the probability of their disjunction we just add their individual probabilities. This gives us a number at least as big as 198%, which is, of course, a contradiction of the Kolmogorov axioms. We should not be able to form a statement with a probability greater than one.
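The arithmetic of the contradiction can be checked directly, using the 99% figure assumed above:

```python
# Numeric check of the contradiction described above, using the
# illustrative 99% figure from the text.
p_green = 0.99  # P(next emerald is green), induced by the green premises
p_grue = 0.99   # P(next emerald is grue), induced by symmetry

# After time T, "grue" entails "not green", so the two events are
# disjoint, and the probability of their disjunction is the sum:
p_disjunction = p_green + p_grue
print(p_disjunction)  # 1.98 -- exceeds 1, violating the Kolmogorov axioms
```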
This threatens the whole of science, because you cannot simply keep this isolated to emeralds and color. We may think of the emeralds as trials, and green as the value of a random variable. Ultimately, every result of a scientific instrument is a random variable, with a very particular and useful distribution over its values. If we can't justify inferring probability distributions over random variables based on their previous results, we cannot justify a single bit of natural science. This, of course, says nothing about how it works in practice. We all know it works in practice. "A philosopher is someone who says, 'I know it works in practice; I'm trying to see if it works in principle.'" - Dan Dennett
We may look at an analogous problem. Let's suppose that there is a table, that balls are being dropped on this table, and that there is an infinitely thin line drawn perpendicular to the edge of the table somewhere, which we are unaware of. The problem is to figure out the probability of the next ball landing right of the line given the previous results. Our first prediction should be that there is a 50% chance of the ball landing right of the line, by symmetry. If we get the result that one ball landed right of the line, by Laplace's rule of succession we infer that there is a 2/3 chance that the next ball will be right of the line. After n trials, if every trial gives a positive result, the probability we should assign to the next trial being positive as well is (n+1)/(n+2).
If this line were placed 2/3 of the way down the table, we should expect the ratio of Rights to Lefts to approach 2:1. This gives us a 2/3 chance of the next ball being a Right, and the fraction of Rights out of trials approaches 2/3 ever more closely as more trials are performed.
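Laplace's rule can be written down in a few lines (`laplace_successor` is just an illustrative name):

```python
from fractions import Fraction

def laplace_successor(successes, trials):
    """Laplace's rule of succession: P(next success) = (s+1)/(n+2)."""
    return Fraction(successes + 1, trials + 2)

print(laplace_successor(0, 0))    # 1/2 -- the symmetry prior
print(laplace_successor(1, 1))    # 2/3 -- after one ball lands right
print(laplace_successor(10, 10))  # 11/12 -- after ten all-right trials
```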
Now let us suppose a grue skeptic approaching this situation. He might make up two terms "reft" and "light". Defined as you would expect, but just in case:
"A ball is reft of the line iff it is right of it before time T when it lands, or if it is left of it after time T when it lands.
A ball is light of the line iff it is left of the line before time T when it lands, or if it is right of the line after time T when it first lands."
The skeptic would continue:
"Why should we treat the observation of several occurrences of Right, as evidence for 'The next ball will land on the right.' and not as evidence for 'The next ball will land reft of the line.'?"
Things for some reason become perfectly clear at this point for the defender of Bayesian inference, because now we have an easily imaginable model. Of course, if a ball landing right of the line is evidence for Right, then it cannot possibly be evidence for ~Right; to be evidence for Reft, after time T, is to be evidence for ~Right, because after time T, Reft is logically identical to ~Right; hence it is not evidence for Reft after time T, for the same reasons it is not evidence for ~Right. Of course, before time T, any evidence for Reft is evidence for Right, for analogous reasons.
But now the grue skeptic can say something brilliant, that stops much of what the Bayesian has proposed dead in its tracks:
"Why can't I just repeat that paragraph back to you and swap every occurrence of 'right' with 'reft' and 'left' with 'light', and vice versa? They are perfectly symmetrical in terms of their logical relations to one another.
If we take 'reft' and 'light' as primitives, then we have to define 'right' and 'left' in terms of 'reft' and 'light' with the use of time intervals."
What can we possibly reply to this? Can he/she not do this with every argument we propose then? Certainly, the skeptic admits that Bayes, and the contradiction in Right & Reft after time T, prohibits previous Rights from being evidence of both Right and Reft after time T; where he is challenging us is in choosing Right as the result which it is evidence for, even though "Reft" and "Right" have a completely symmetrical syntactical relationship. There is nothing about the definitions of reft and right which distinguishes them from each other, except their spelling. So is that it? No, this simply means we have to propose an argument that doesn't rely on purely syntactical reasoning, so that if the skeptic performs the swap on our argument, the resulting argument is no longer sound.
What would happen in this scenario if it were actually set up? I know that seems like a strangely concrete question for a philosophy text, but its answer is a helpful hint. What would happen is that after time T, the ratio Rights:Lefts would proceed as expected as more trials were added, while the ratio Refts:Lights would approach the reciprocal of Rights:Lefts. The only way for this not to happen is for us to have been calling the right side of the table "reft", or for the line to have moved. We can only figure out where the line is by knowing where the balls landed relative to it; anything we can figure out about where the line is from knowing which balls landed Reft and which ones landed Light, we can only figure out because, in knowing this and the time, we can know whether the ball landed left or right of the line.
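That strangely concrete question can in fact be simulated. The sketch below is my own toy version of the setup; the line position (placed so that 2/3 of the table lies to its right) and the switchover trial standing in for time T are illustrative choices:

```python
import random

# A toy simulation of the table-and-line setup. The line position
# (1/3 from one edge, so 2/3 of the table is to its right) and the
# switchover trial standing in for time T are illustrative choices.
def simulate(n_balls, line=1/3, t=500, seed=0):
    rng = random.Random(seed)
    rights = refts = 0
    for i in range(n_balls):
        right = rng.random() > line           # ball lands right of the line?
        reft = right if i < t else not right  # "reft" flips meaning at T
        rights += right
        refts += reft
    return rights / n_balls, refts / n_balls

right_freq, reft_freq = simulate(100_000)
# The Rights frequency keeps tracking the true 2/3, while after T the
# frequency of Reft drifts toward the complement, 1/3.
print(round(right_freq, 2), round(reft_freq, 2))
```

This is the empirical asymmetry the argument relies on: the relative frequency of Right is pinned to a fixed physical fact (the line), while the frequency of Reft inverts at T.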
To this I know of no reply which the grue skeptic can make. If he/she says the paragraph back to me with the proper words swapped, it is not true, because in the hypothetical where we have a table, a line, and we are calling one side right and the other side left, the only way for Refts:Lights to behave as expected as more trials are added is to move the line (if even that); otherwise the ratio of Refts to Lights will approach the reciprocal of Rights to Lefts.
This thin line is analogous to the frequency of emeralds that turn out green out of all the emeralds that get made. This is why we can assume that the line will not move, because that frequency has one precise value, which never changes. Its other important feature is reminding us that even if two terms are syntactically symmetrical, they may have semantic conditions for application which are ignored by the syntactical model, e.g., checking to see which side of the line the ball landed on.
In conclusion:
Every random variable has as a part of it, stored in its definition/code, a frequency distribution over its values. By the fact that some things happen sometimes, and others happen at other times, we know that the world contains random variables, even if they are never fundamental in the source code. Note that "frequency" is not used here as a state of partial knowledge; it is a fact about a set and one of its subsets.
The reason that:
"The first emerald ever observed was green.
The second emerald ever observed was green.
The third emerald ever observed was green.
… etc.
The nth emerald ever observed was green.
(conclusion):
There is a very high probability that a never before observed emerald will be green."
is a valid inference, but the grue equivalent isn't, is that grue is not a property that the emerald construction sites of our universe deal with. They are blind to the grueness of their emeralds; their rules only say anything about whether or not the next emerald will be green. It may be that the rule that the emerald construction sites use to get either a green or non-green emerald changes at time T, but the frequency of some particular result out of all trials will never change; the line will not move. As long as we know what symbols we are using for what values, observing many green emeralds is evidence that the next one will be grue only before time T; after time T, every record of an observation of a green emerald is evidence against a grue one. "Grue" changes meaning from green to blue at time T, while the meaning of "green" stays the same, since we are using the same physical test to determine green-hood as before; just as we use the same test to tell whether the ball landed right or left. There is no reft in the universe's source code, and there is no grue. Green is not fundamental in the source code either, but green can be reduced to some particular range of quanta states; if you had the universe's source code, you couldn't write grue without first writing green, whereas writing green while knowing nothing about grue would be no harder than writing it while knowing grue. Having a physical test, or primary condition for applicability, is what privileges green over grue after time T; to have a consistent physical test is the same as to reduce to a specifiable range of physical parameters; the existence of such a test is what prevents the skeptic from performing his/her swaps on our arguments.
Take this more as a brainstorm than as a final solution. It wasn't originally, but it should have been. I'll write something more organized and concise after I think about the comments more, and make some graphics I've designed that make my argument much clearer, even to myself. But keep those comments coming, and tell me if you want specific credit for anything you may have added to my grue toolkit in the comments.
Writing feedback requested: activists should pursue a positive Singularity
I managed to turn an essay assignment into an opportunity to write about the Singularity, and I thought I'd turn to LW for feedback on the paper. The paper is about Thomas Pogge, a German philosopher who works on institutional efforts to end poverty and is a pledger for Giving What We Can.
I offer a basic argument that he and other poverty activists should work on creating a positive Singularity, sampling liberally from well-known Less Wrong arguments. It's more academic than I would prefer, and it includes some loose talk of 'duties' (which bothers me), but for its goals, these things shouldn't be a huge problem. But maybe they are - I want to know that too.
I've already turned the assignment in, but when I make a better version, I'll send the paper to Pogge himself. I'd like to see if I can successfully introduce him to these ideas. My one conversation with him indicates that he would be open to actually changing his mind. He's clearly thought deeply about how to do good, and may simply have not been exposed to the idea of the Singularity yet.
I want feedback on all aspects of the paper - style, argumentation, clarity. Be as constructively cruel as I know only you can.
If anyone's up for it, feel free to add feedback using Track Changes and email me a copy - mjcurzi[at]wustl.edu. I obviously welcome comments on the thread as well.
You can read the paper here in various formats.
Upvotes for all. Thank you!
FHI Essay Competition
This competition is only open to philosophy students.
Can philosophical research contribute to securing a long and prosperous future for humanity and its descendants?
What would you think about if you really wanted to make a difference?
Crucial considerations are questions or ideas that could decisively change your entire approach to an issue. What are the crucial considerations for humanity’s future? These could range from deep questions about population ethics to world government, the creation of greater than human intelligence, or the risks of human extinction.
The Future of Humanity Institute at Oxford University wants to get young philosophers thinking about these big questions. We know that choosing a PhD thesis topic is one of the big choices affecting the direction of your career, and so deserves a great deal of thought. To encourage this, we are running a slightly unusual prize competition. The format is a two page ‘thesis proposal’ consisting of a 300 word abstract and an outline plan of a thesis regarding a crucial consideration for humanity’s future. We will publish the best abstracts on our website and give a prize of £2,000 to the author of the proposal we deem the most promising or original.
Review of Lakoff & Johnson, 'Philosophy in the Flesh'
Lakoff & Johnson's Philosophy in the Flesh (1999) is an ambitious 550-page attempt to rewrite philosophy from scratch given what we now know from the cognitive sciences about how human reasoning works. After reading the first page, I had some hope it would be the book I could hand to somebody who wanted to study philosophy without being corrupted by studying 2500 years worth of bad methods and wrong answers.
Yet, while I agree with the book in broad strokes and in many particular details, it has several weaknesses that make it difficult to engage at a technical level. These problems may be part of why the book has 1400+ citations on Google Scholar even though there are almost no books or articles that engage its contents in detail.
The biggest problem is that Lakoff and Johnson cover too much ground, and therefore don't have the space to defend or even explain the precise nature of the thousands of claims they make in the book. E.g.: Several times per chapter, they claim that philosophers assume X without citing a single philosopher in the business of claiming X, or even someone else claiming that philosophers often assume X.
The majority of the book (chapters 9-24) engages directly in what Eliezer calls "dissolving the question" — at least, the part where one explains the cognitive science of how particular philosophical confusions and debates arise. While these chapters are enlightening, they depend too heavily on the earlier account of metaphor, rarely draw upon other findings in cognitive science that are likely relevant, are sparse in scientific citations, and (as I've said) rarely cite actual philosophers claiming the things they say that philosophers claim.
The authors are also too unclear about their positive project: "embodied philosophy." Nowhere do they state the propositions that make up what they call "embodied philosophy," nowhere do they explicate those propositions in precise detail, and nowhere do they defend those propositions in detail from misinterpretation and objection. The reader is left with only a vague sense of what they mean by embodied philosophy, or why it should be favored over other forms of naturalistic philosophy that have been defined and defended more precisely by their proponents. (Compare, for example, Bishop & Trout's clear explication and defense of "strategic reliabilism" in Epistemology and the Psychology of Human Judgment.)
There are only two subjects the authors treat in sufficient detail. The first is the section on how thoroughly human thought is metaphorical, which builds on their earlier Metaphors We Live By (1980). This section (chapters 4 & 5) is excellent, and worth reading if you read nothing else of the book.
The other subject treated in sufficient detail is Noam Chomsky (chapter 22). This is not surprising, since Chomsky's work has been the other major subject of Lakoff's career: Lakoff has spent decades promoting his generative semantics over Chomsky's generative grammar.
The book's deficiencies are frustrating to someone of a "scholarly technical cogsci/philosophy" bent like myself, but they may actually make the book more readable to those whose primary interest isn't technical philosophy. Philosophy in the Flesh is full of ideas, many of them correct or half-correct, and may be worth reading just to get a broad picture of what philosophy might look like when you start from scratch with the latest cognitive science.
The Apparent Reality of Physics
Follow-up to: Syntacticism
I wrote:
The only objects that are real (in a Platonic sense) are formal systems (or rather, syntaxes). That is to say, my ontology is the set of formal systems. (This is not incompatible with the apparent reality of a physical universe).
In my experience, most people default¹ to naïve physical realism: the belief that "matter and energy and stuff exist, and they follow the laws of physics". This view has two problems: how do you know stuff exists, and what makes it follow those laws?
To the first - one might point at a rock, and say "Look at that rock; see how it exists at me." But then we are relying on sensory experience; suppose the simulation hypothesis were true, then that sensory experience would be unchanged, but the rock wouldn't really exist, would it? Suppose instead that we are being simulated twice, on two different computers. Does the rock exist twice as much? Suppose that there are actually two copies of the Universe, physically existing. Is there any way this could in principle be distinguished from the case where only one copy exists? No; a manifest physical reality is observationally equivalent to N manifest physical realities, as well as to a single simulation or indeed N simulations. (This remains true if we set N=0.)
So a true description requires that the idea of instantiation should drop out of the model; we need to think in a way that treats all the above cases as identical, that justifiably puts them all in the same bucket. This we can do if we claim that that-which-exists is precisely the mathematical structure defining the physical laws and the index of our particular initial conditions (in a non-relativistic quantum universe that would be the Schrödinger equation and some particular wavefunction). Doing so then solves not only the first problem of naïve physical realism, but the second also, since trivially solutions to those laws must follow those laws.
But then why should we privilege our particular set of physical laws, when that too is just a source of indexical uncertainty? So we conclude that all possible mathematical structures have Platonic existence; there is no little XML tag attached to the mathematics of our own universe that states "this one exists, is physically manifest, is instantiated", and in this view of things such a tag is obviously superfluous; instantiation has dropped out of our model.
When an agent in universe-defined-by-structure-A simulates, or models, or thinks-about, universe-defined-by-structure-B, they do not 'cause universe B to come into existence'; there is no refcount attached to each structure, to tell the Grand Multiversal Garbage Collection Routine whether that structure is still needed. An agent in A simulating B is not a causal relation from A to B; instead it is a causal relation from B to A! B defines the fact-of-the-matter as to what the result of B's laws is, and the agent in A will (barring cosmic rays flipping bits) get the result defined by B.²
So we are left with a Platonically existing multiverse of mathematical structures and solutions thereto, which can contain conscious agents to whom there will be every appearance of a manifest instantiated physical reality, yet no such physical reality exists. In the terminology of Max Tegmark (The Mathematical Universe) this position is the acceptance of the MUH but the rejection of the ERH (although the Mathematical Universe is an external reality, it's not an external physical reality).
Reducing all of applied mathematics and theoretical physics to a syntactic formal system is left as an exercise for the reader.
¹ That is, when people who haven't thought about such things before do so for the first time, this is usually the first idea that suggests itself.
² I haven't yet worked out what happens if a closed loop forms, but I think we can pull the same trick that turns formalism into syntacticism; or possibly, consider the whole system as a single mathematical structure which may have several stable states (indexical uncertainty) or no stable states (which I think can be resolved by 'loop unfolding', a process similar to that which turns the complex plane into a Riemann surface - but now I'm getting beyond the size of digression that fits in a footnote; a mathematical theory of causal relations between structures needs at least its own post, and at most its own field, to be worked out properly).
Syntacticism
I've mentioned in comments a couple of times that I don't consider formal systems to talk about themselves, and that consequently Gödelian problems are irrelevant. So what am I actually on about?
It's generally accepted in mathematical logic that a formal system which embodies Peano Arithmetic (PA) is able to talk about itself, by means of Gödel numberings; statements and proofs within the system can be represented as positive integers, at which point "X is a valid proof in the system" becomes equivalent to an arithmetical statement about #X, the Gödel number representing X. This is then diagonalised to produce the Gödel sentence (roughly, g = "There is no proof X such that the last line of X is g"), and incompleteness follows. We can also do things like defining □ ("box") as the function taking S to "There is a proof X in PA whose last line is S" (intuitively, □S says "S is provable in PA"). This then also lets us define the Löb sentence, and many other interesting things.
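For readers who haven't seen one before, here is a toy Gödel numbering (the symbol set and encoding scheme are my own illustrative choices, not any particular textbook's encoding): each symbol gets a positive code, and a formula s₁s₂…sₖ is encoded as 2^code(s₁)·3^code(s₂)·5^code(s₃)·….

```python
# A toy Goedel numbering. The symbol set and the prime-exponent
# encoding are illustrative choices, not a standard textbook encoding.
SYMBOLS = {s: i + 1 for i, s in enumerate("0S+*=()~Axv")}

def primes():
    """Yield 2, 3, 5, 7, ... by trial division."""
    found = []
    n = 2
    while True:
        if all(n % p for p in found):
            found.append(n)
            yield n
        n += 1

def godel_number(formula):
    """Encode a formula as a product of prime powers."""
    g = 1
    for p, sym in zip(primes(), formula):
        g *= p ** SYMBOLS[sym]
    return g

def decode(g):
    """Recover the formula from its Goedel number by factoring."""
    inverse = {code: sym for sym, code in SYMBOLS.items()}
    out = []
    for p in primes():
        if g == 1:
            break
        exponent = 0
        while g % p == 0:
            g //= p
            exponent += 1
        out.append(inverse[exponent])
    return "".join(out)

g = godel_number("S0=S0")
print(g, decode(g))  # the encoding round-trips
```

The point is only that statements become integers, so arithmetical predicates can range over them; whether the resulting □ really "refers to" provability is exactly the question the post goes on to press.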
But how do we know that □S ⇔ there is a proof of S in PA? Only by applying some meta-theory. And how do we know that statements reached in the meta-theory of the form "thus-and-such is true of PA" are true of PA? Only by applying a meta-meta-theory. There is no a priori justification for the claim that "A formal system is in principle capable of talking about other formal systems", which claim is used by the proof that PA can talk about itself. (If I remember correctly, to prove that □ does what we think it does, we have to appeal to second-order arithmetic; and how do we know second-order arithmetic applies to PA? Either by invoking third-order arithmetic to analyse second-order arithmetic, or by recourse to an informal system.)
Note also that the above is not a strange loop through the meta-level; we justify our claims about arithmeticₙ by appeal to arithmeticₙ₊₁, which is a separate thing; we never find ourselves back at arithmeticₙ.
Thus the claim that formal systems can talk about themselves involves ill-founded recursion, what is sometimes called a "skyhook". While it may be a theorem of second-order arithmetic that "the strengthened finite Ramsey theorem is unprovable in PA", one cannot conclude from second-order arithmetic alone that the "PA" in that statement refers to PA. It is however provable in third-order arithmetic that "What second-order arithmetic calls "PA" is PA", but that hasn't gained us much - it only tells us that second- and third-order arithmetic call the same thing "PA", it doesn't tell us whether this "PA" is PA. Induct on the arithmetic hierarchy to reach the obvious conclusion. (Though note that none of this prevents the Paris-Harrington Theorem from being a theorem of n-th order arithmetic ∀n≥2)
What, then, is the motivation for the above? Well, it is a basic principle of my philosophy that the only objects that are real (in a Platonic sense) are formal systems (or rather, syntaxes). That is to say, my ontology is the class of formal systems. (This is not incompatible with the apparent reality of a physical universe; if this isn't obvious, I'll explain why in another post.) But if we allow these systems to have semantics, that is, we claim that there is such a thing as a "true statement", we start to have problems with completeness and consistency (namely, that we can't achieve the one and we can't prove the other, assuming PA). Tarski's undefinability theorem protects us from having to deal with systems which talk about truth in themselves (because they are necessarily inconsistent, assuming some basic properties), but if systems can talk about each other, and if systems can talk about provability within themselves (that is, if analogues to the □ function can be constructed), then nasty Gödelian things end up happening (most of which are, to a Platonist mathematician, deeply unsatisfying).
So instead we restrict the ontology to syntactic systems devoid of any semantics; the statement ""Foo" is true" is meaningless. There is a fact-of-the-matter as to whether a given statement can be reached in a given formal system, but that fact-of-the-matter cannot be meaningfully talked about in any formal system. This is a remarkably bare ontology (some consider it excessively so), but is at no risk from contradiction, inconsistency or paradox. For, what is "P∧¬P" but another, syntactic, sentence? Of course, applying a system which proves "P∧¬P" to the 'real world' is likely to be problematic, but the paradox or the inconsistency lies in the application of the system, and does not inhere in the system itself.
EDIT: I am actually aiming to get somewhere with this, it's not just for its own sake (although the ontological and epistemological status of mathematics is worth caring about for its own sake). In particular I want to set up a framework that lets me talk about Eliezer's "infinite set atheism", because I think he's asking a wrong question.
Followed up by: The Apparent Reality of Physics
You'll be who you care about
Eliezer wonders about the thread of conscious experience: "I don't think that, if I were really selfish, I could jump off a cliff knowing smugly that a different person would experience the consequence of hitting the ground."
Instead of wondering whether we should be selfish towards our future selves, let's reverse the question. Let's define our future selves as agents that we can strongly influence, and that we strongly care about. There are other aspects that round out our intuitive idea of future selves (such as having the same name and possessions, and a thread of conscious experience), but this seems the most fundamental one.
In future, this may help clarify issues of personal identity once copying is widespread:
These two future copies, Mr. Jones, are they both 'you'? "Well yes, I care about both, and can influence them both."
Mr Jones Alpha, do you feel that Mr Jones Beta, the other current copy, is 'you'? "Well no, I only care a bit about him, and have little control over his actions."
Mr Evolutionary-Jones Alpha, do you feel that Mr Evolutionary-Jones Beta, the other current copy, is 'you'? "To some extent; I care strongly about him, but I only control his actions in an updateless way."
Mr Instant-Hedonist-Jones, how long have you lived? "Well, I don't care about myself in the past or in the future, beyond my current single conscious experience. So I'd say I've lived a few seconds, a minute at most. The other Mr Instant-Hedonist-Joneses are strangers to me; do with them what you will. Though I can still influence them strongly, I suppose; tell you what, I'll sell my future self into slavery for a nice ice-cream. Delivered right now."
Scientist vs. philosopher on conceptual analysis
In Less Wrong Rationality and Mainstream Philosophy, Conceptual Analysis and Moral Theory, and Pluralistic Moral Reductionism, I suggested that traditional philosophical conceptual analysis often fails to be valuable. Neuroscientist V.S. Ramachandran has recently made some of the same points in a polite sparring match with philosopher Colin McGinn over Ramachandran's new book The Tell-Tale Brain:
Early in any scientific enterprise, it is best to forge ahead and not get bogged down by semantic distinctions. But “forging ahead” is a concept alien to philosophers, even those as distinguished as McGinn. To a philosopher who demanded that he define consciousness before studying it scientifically, Francis Crick once responded, “My dear chap, there was never a time in the early years of molecular biology when we sat around the table with a bunch of philosophers saying ‘let us define life first.’ We just went out there and found out what it was: a double helix.” In the sciences, definitions often follow, rather than precede, conceptual advances.
July 2011 review of experimental philosophy
Experimental philosophy is the mainstream academic field that is most directly attempting to dissolve (where possible) persistent philosophical problems by revealing the cognitive algorithms that generate old philosophical debates. Those who want to catch up with some of what these researchers have discovered already may want to read this July 2011 review of the field.
Also see my post Are Deontological Moral Judgments Rationalizations?
From artificial intelligence research to philosophy
Based on a sample size of three (Pearl, Yudkowsky & Drescher), it appears that AI researchers can do quite well when they turn significant attention to philosophy. Are there other examples of this? I'm thinking of people who are primarily AI researchers, but have also done long, serious work in philosophy.
States of knowledge as amplitude configurations
I am reading through the sequence on quantum physics and have had some questions which I am sure have been thought about by far more qualified people. If you have any useful comments or links about these ideas, please share.
Most of the strongest resistance to ideas about rationalism that I encounter comes not from people with religious beliefs per se, but usually from mathematicians or philosophers who want to assert arguments about the limits of knowledge, the fidelity of sensory perception as a means for gaining knowledge, and various (what I consider to be) pathological examples (such as the zombie example). Among other things, people tend to reduce the argument to the existence of proper names a la Wittgenstein and then go on to assert that the meaning of mathematics or mathematical proofs constitutes something which is fundamentally not part of the physical world.
As I am reading the quantum physics sequence (keep in mind that I am not a physicist; I am an applied mathematician and statistician and so the mathematical framework of Hilbert spaces and amplitude configurations makes vastly more sense to me than billiard balls or waves, yet connecting it to reality is still very hard for me) I am struck by the thought that all thoughts are themselves fundamentally just amplitude configurations, and by extension, all claims about knowledge about things are also statements about amplitude configurations. For example, my view is that the color red does not exist in and of itself but rather that the experience of the color red is a statement about common configurations of particle amplitudes. When I say "that sign is red", one could unpack this into a detailed statement about statistical properties of configurations of particles in my brain.
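The "amplitude configuration" framing can be made concrete in miniature: a state assigns a complex amplitude to each basis configuration, normalized so the squared magnitudes sum to one, and observation statistics follow the Born rule. A toy sketch, with two invented labels standing in for the astronomically large configuration space of an actual brain:

```python
# A toy "configuration space": two basis configurations, e.g. a
# detector registering red vs. not-red. (The labels are illustrative
# stand-ins, not a serious model of color perception.)
basis = ["red", "not-red"]

# A state assigns a complex amplitude to each configuration,
# normalized so the squared magnitudes sum to 1.
amplitudes = [complex(3/5, 0), complex(0, 4/5)]
assert abs(sum(abs(a) ** 2 for a in amplitudes) - 1.0) < 1e-12

# Born rule: the probability of observing a configuration is the
# squared magnitude of its amplitude.
probabilities = {b: abs(a) ** 2 for b, a in zip(basis, amplitudes)}
# -> approximately {'red': 0.36, 'not-red': 0.64}
```

The phases of the amplitudes carry no probability on their own, but they matter when configurations interfere, which is what distinguishes an amplitude configuration from a mere probability distribution.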
The same reasoning seems to apply just as well to something like group theory. States of knowledge about the Sylow theorems, just as an example, would be properties of particle amplitude configurations in a brain. The Sylow theorems are not separately existing entities which are of themselves "true" in any sense.
Perhaps I am way off base in thinking this way. Can any philosophers of the mind point me in the right direction to read more about this?
Religious Behaviorism
Willard Quine described, in his article "Ontological Relativity" (Journal of Philosophy 65(7):185-212), his doctrine of the indeterminacy of translation. Roughly, this says that words are meaningful (a collection of words emitted by an agent can help predict that agent's actions), but don't have meanings (any word taken by itself corresponds to nothing at all; there is no correspondence between the word "rabbit" and the Leporidae).
In Quine's words,
Seen according to the museum myth, the words and sentences of a language have their determinate meanings. To discover the meanings of the native's words we may have to observe his behavior, but still the meanings of the words are supposed to be determinate in the native's mind, his mental museum, even in cases where behavioral criteria are powerless to discover them for us. When on the other hand we recognize with Dewey that "meaning. . . is primarily a property of behavior," we recognize that there are no meanings, nor likenesses nor distinctions of meaning, beyond what are implicit in people's dispositions to overt behavior. For naturalism the question whether two expressions are alike or unlike in meaning has no determinate answer, known or unknown, except insofar as the answer is settled in principle by people's speech dispositions, known or unknown.
Quine got my hackles up by using the word "naturalism" when he meant "behaviorism", implicitly claiming that naturalistic science was synonymous (or would be, if he believed in synonyms) with behaviorism. But I'll try to remain impartial. (Quine's timing was curious; Chomsky had demolished behaviorist linguistics in 1959, nine years before Quine's article.)
Quine's basic idea is insightful. To phrase it in non-behaviorist terms: If all words are defined in terms of other words, how does meaning get into that web of words? Can we unambiguously determine the correct mapping between words and meanings?
Quine's response was to deny that that is an empirical question. He said you should not even talk about meaning; you can only observe behavior. You must remain agnostic about anything inside the head.
But it is an empirical question. With math, plus some reasonable assumptions, you can prove that you can unambiguously determine the correct mapping even from the outside. In a world where you can tell someone to think of a square, and then use functional magnetic resonance imaging and find a pattern of neurons lit up in a square on his visual cortex, it is difficult to agree with Quine that the word "square" has no meaning.
You may protest that I'm thinking there is a homunculus inside the mind looking at that square. After all, Quine already knew that the image of a square would be imprinted in some way on the retina of a person looking at a square. But I am not assuming there is a homunculus inside the brain. I am just observing a re-presentation inside the brain. We can continue the behaviorist philosophy of saying that words are ultimately defined by behavior. But there is no particular reason to stop our analyses when we hit the skull. Behaviors outside the skull are systematically reflected in physical changes inside the skull, and we can investigate them and reason about them.
The more I tried to figure out what Quine meant - sorry, Quine - the more it puzzled me. I'm with him as far as asking whether meanings are ambiguous. But Quine doesn't just say meaning is ambiguous. He says "there are no meanings... beyond what are implicit in... behavior". The more I read, the more it seemed Quine was insisting, not that meaning was ambiguous, but that mental states do not exist - or that they are taboo. And this taboo centered on the skull.
That seemed to come from a religious frame. So I stopped trying to think of a rational justification for Quine's position, and started looking for an emotional one. And I may have found it.
My favorite philosophers
Those interested in philosophy might wonder: Who are the favorite philosophers of someone (like me) who has a very low opinion of philosophy?
Well, ask no longer. Here are some of my favorite philosophers:
- Eliezer Yudkowsky (independent) only does philosophy because he needs to solve philosophical problems to build Friendly AI. As a philosophy outsider, he has managed – mostly on his own – to solve a great many philosophical problems correctly. There is, simply put, no philosopher with whom I agree more often. My one major complaint is that he does not write academic-style articles, citing the relevant research and speaking the same language as others and so on. On the other hand, this is partly why he has made so much fast progress. Academic papers are clear and crisp and well-footnoted and thus generous to their readers, but as a result they take a lot of effort to write. If I could fuse the minds of Yudkowsky and Bostrom, that person would be an even better philosopher. Luckily, those two minds seem to be slowly fusing on their own. (Yudkowsky is tugging Bostrom his way, and Bostrom is tugging Yudkowsky his way.)
- Nick Bostrom (Oxford) is one of today’s most important philosophers. This is not due to Kripkean superintelligence or Einsteinian revolutionary insights – though, Bostrom is no slouch in intellect or insight – but because he has devoted himself to working on the most important problems. Oddly enough, these were problems that (at the time) nobody else was working on very seriously: existential risks to humanity.
- Noam Chomsky (MIT) is an interdisciplinary genius. The most important linguist of the 20th century, he is also one of the founders of cognitive science, a major geopolitical theorist, a philosopher, and one of the most productive social activists in history. He embodies his philosophy more successfully than any other philosopher I know. Though he holds different philosophical positions than I do, in many ways his views are like mine but with an extra dose of skepticism about everything.
- Stephen Stich (Rutgers) is one of the “guardians” of good philosophy, arguing against unproductive analytic practices like heavy appeal to intuition, and working so vigorously at the border of science and philosophy that he played a founding role in the rise of experimental philosophy. He has also done a great job of mentoring younger philosophers, preparing them to go to war for productive, scientific philosophy in a land where most philosophers are still doing the pre-Quinean kind of philosophy.
- Hilary Kornblith (Massachusetts, Amherst) is a leading proponent of naturalized epistemology. He is also a leading critic of conceptual analysis, and thus another “guardian.”
- Eric Schwitzgebel (UC Riverside) is another guardian of good philosophy, and spends much of his time chastising those philosophers who have way more faith in their powers of intuition and introspection than contemporary cognitive science should allow.
- Michael Bishop (Florida State) is another guardian, and was a student of Stich. He doesn’t just chastise philosophers for continuing to use failed methods, but offers a productive alternative grounded in the latest cognitive science and experimental psychology: what he calls “strategic reliabilism.” For him, epistemology shouldn’t be concerned with a conceptual analysis of knowledge terms, but with getting at true belief. Unfortunately, this isn’t yet obvious to most of the rest of his profession.
Sublimity vs. Youtube
The torture vs. dust specks quandary is a canonical one to LW. Off the top of my head, I can't remember anyone suggesting the reversal, one where the quantities at stake in the hypothetical are positive rather than negative. I'm curious about how it affects people's intuitions. I call it - as the title indicates - "Sublimity vs. Youtube1".
Suppose the impending existence of some person who is going to live to be fifty years old whatever you do2. She is liable to live a life that zeroes out on a utility scale: mediocre ups and less than shattering downs, overall an unremarkable span. But if you choose "sublimity", she's instead going to live a life that is truly sublime. She will have a warm and happy childhood enriched by loving relationships, full of learning and wonder and growth; she will mature into a merrily successful adult, pursuing meaningful projects and having varied, challenging fun. (For the sake of argument, suppose that the ripple effects of her sublime life as it affects others still lead to the math tallying up as +(1 sublime life), instead of +(1 sublime life)+(various lovely consequences).)
Or you can choose "Youtube", and 3^^^3 people who weren't doing much with some one-second period of their lives instead get to spend that second watching a brief, grainy, yet droll recording of a cat jumping into a box, which they find mildly entertaining.
Sublimity or Youtube?
1The choice in my variant scenario of "watching a Youtube video" rather than some small-but-romanticized pleasure ("having a butterfly land on your finger, then fly away", for instance) is deliberate. Dust specks are really tiny, and there's not much automatic tendency to emotionally inflate them. Hopefully Youtube videos are the reverse of that.
2I'm choosing to make it an alteration of a person who will exist either way to avoid questions about the utility of creating people, and for greater isomorphism with the "torture" option in the original.
Kantian baby rats
I've often wished for a list of cases where philosophy has proven useful, or has at least anticipated science in drawing correct conclusions. Here's one for the list:
The June 18 2010 Science has two very similar articles on how rat brains represent space. Both conclude that the brain already represents space as a grid before rat pups take their first steps into the world. Both make the point that this validates Kant's claim that space is an innate concept prior to experience.
(The next task is to make a corresponding list of cases where philosophers made incorrect conclusions; and estimate whether the number of correct conclusions is greater than chance.)
A Thought Experiment on Pain as a Moral Disvalue
Related To: Eliezer's Zombies Sequence, Alicorn's Pain
Today you volunteered for what was billed as an experiment in moral psychology. You enter into a small room with a video monitor, a red light, and a button. Before you entered, you were told that you'll be paid $100 for participating in the experiment, but for every time you hit that button, $10 will be deducted. On the monitor, you see a person sitting in another room, and you appear to have a two-way audio connection with him. That person is tied down to his chair, with what appears to be electrical leads attached to him. He now explains to you that your red light will soon turn on, which means he will be feeling excruciating pain. But if you press the button in front of you, his pain will stop for a minute, after which the red light will turn on again. The experiment will end in ten minutes.
You're not sure whether to believe him, but pretty soon the red light does turn on, and the person in the monitor cries out in pain, and starts struggling against his restraints. You hesitate for a second, but it looks and sounds very convincing to you, so you quickly hit the button. The person in the monitor breathes a big sigh of relief and thanks you profusely. You make some small talk with him, and soon the red light turns on again. You repeat this ten times and then are released from the room. As you're about to leave, the experimenter tells you that there was no actual person behind the video monitor. Instead, the audio/video stream you experienced was generated by one of the following ECPs (exotic computational processes).
- An AIXI-like (e.g., AIXI-tl, Monte Carlo AIXI, or some such) agent, programmed with the objective of maximizing the number of button presses.
- A brute force optimizer, programmed with a model of your mind, which iterated through all possible audio/video bit streams to find the one that maximizes the number of button presses. (As far as philosophical implications are concerned, this seems essentially identical to 1, so the reader doesn't necessarily have to go learn about AIXI.)
- A small team of uploads capable of running at a million times faster than an ordinary human, armed with photo-realistic animation software, and tasked with maximizing the number of your button presses.
- A Giant Lookup Table (GLUT) of all possible sense inputs and motor outputs of a person, connected to a virtual body and room.
Then she asks, would you like to repeat this experiment for another chance at earning $100?
Presumably, you answer "yes", because you think that despite appearances, none of these ECPs actually do feel pain when the red light turns on. (To some of these ECPs, your button presses would constitute positive reinforcement or lack of negative reinforcement, but mere negative reinforcement, when happening to others, doesn't seem to be a strong moral disvalue.) Intuitively this seems to be the obvious correct answer, but how to describe the difference between actual pain and the appearance of pain or mere negative reinforcement, at the level of bits or atoms, if we were specifying the utility function of a potentially super-intelligent AI? (If we cannot even clearly define what seems to be one of the simplest values, then the approach of trying to manually specify such a utility function would appear completely hopeless.)
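Of the four ECPs, the GLUT is the conceptually starkest case: behavior is produced by pure table lookup over input histories, with no intervening computation that could plausibly constitute feeling anything. A minimal sketch of the idea, with a few invented entries standing in for a table over every possible input history:

```python
# Giant Lookup Table: the key is the entire history of sense inputs,
# the value is the scripted behavioral output. Nothing is computed
# beyond the lookup itself. (These entries are hypothetical stand-ins
# for the thought experiment's table over all possible histories.)
GLUT = {
    ("red_light_on",): "cry out and struggle",
    ("red_light_on", "button_pressed"): "sigh with relief and say thanks",
    ("red_light_on", "button_pressed", "small_talk"): "chat amiably",
}

def glut_agent(history):
    """Return the table's scripted behavior for a given input history."""
    return GLUT.get(tuple(history), "sit quietly")

print(glut_agent(["red_light_on", "button_pressed"]))
# -> sigh with relief and say thanks
```

The toy scale doesn't change the point: every response is fixed in advance, so whatever "pain" the GLUT displays was never computed at the time, let alone felt.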
One idea to try to understand the nature of pain is to sample the space of possible minds, look for those that seem to be feeling pain, and check if the underlying computations have anything in common. But as in the above thought experiment, there are minds that can convincingly simulate the appearance of pain without really feeling it.
Another idea is that perhaps what is bad about pain is that it is a strong negative reinforcement as experienced by a conscious mind. This would be compatible with the thought experiment above, since (intuitively) ECPs 1, 2, and 4 are not conscious, and 3 does not experience strong negative reinforcements. Unfortunately it also implies that fully defining pain as a moral disvalue is at least as hard as the problem of consciousness, so this line of investigation seems to be at an immediate impasse, at least for the moment. (But does anyone see an argument that this is clearly not the right approach?)
What other approaches might work, hopefully without running into one or more problems already known to be hard?
What does your web of beliefs look like, as of today?
Every few months, I post a summary of my beliefs to my blog. This has several advantages:
- It helps to clarify where I'm "coming from" in general.
- It clears up reader confusion arising from the fact that my beliefs change.
- It's really fun to look back on past posts and assess how my beliefs have changed, and why.
- It makes my positions easier to criticize, because they are clearly stated and organized into one place.
- It's an opportunity for people to very quickly "get to know me."
To those who are willing: I invite you to post your own web of beliefs. I offer my own, below, as an example (previously posted here). Because my world is philosophy, I frame my web of beliefs in those terms, but others need not do the same:
My Web of Beliefs (Feb. 2011)
Philosophy
Philosophy is not a matter of opinion. As in science, some positions are much better supported by reasons than others are. I do philosophy as a form of inquiry, continuous with science.
But I don’t have patience for the pace of mainstream philosophy. Philosophical questions need answers, and quickly.
Scientists know how to move on when a problem is solved, but philosophers generally don’t. Scientists don’t still debate the fact of evolution or the germ theory of disease just because alternatives are (1) logically possible, (2) appeal to many people’s intuitions, (3) are “supported” by convoluted metaphysical arguments, or (4) fit our use of language better. But philosophers still argue about Cartesian dualism and theism and contra-causal free will as if these weren’t settled questions.
How many times must the universe beat us over the head with evidence before we will listen? Relinquish your dogmas; be as light as a feather in the winds of evidence.
Epistemology
My epistemology is one part cognitive science, one part probability theory.
We encounter reality and form beliefs about it by way of our brains. So the study of how our brains do that is central to epistemology. (Quine would be pleased.) In apparent ignorance of cognitive science and experimental psychology, most philosophers make heavy use of intuition. Many others have failed to heed the lessons of history about how badly traditional philosophical methods fare compared to scientific methods. I have little patience for this kind of philosophy, and see myself as practicing a kind of ruthlessly reductionistic naturalistic philosophy.
I do not care whether certain beliefs qualify as “knowledge” or as being “rational” according to varying definitions of those terms. Instead, I try to think quantitatively about beliefs. How strongly should I believe P? How should I adjust my probability for P in the face of new evidence X? There is a single, exactly correct answer to each such question, and it is provided by Bayes’ Theorem. We may never know the correct answer, but we can plug estimated numbers into the equation and update our beliefs accordingly. This may seem too subjective, but remember that you are always giving subjective, uncertain probabilities. Whenever you use words like “likely” and “probable”, you are doing math. So stop pretending you aren’t doing math, and do the math correctly, according to the proven theorem that tells you how probable P is given X – even if we are always burdened by uncertainty.1
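The updating rule invoked here is Bayes' Theorem: P(P|X) = P(X|P)·P(P) / P(X), where the denominator expands over the hypothesis and its negation. A minimal worked sketch, with invented numbers:

```python
def bayes_update(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Posterior probability of hypothesis H after observing evidence X."""
    # P(X) = P(X|H)P(H) + P(X|~H)P(~H), by the law of total probability.
    p_evidence = (p_evidence_given_h * prior
                  + p_evidence_given_not_h * (1 - prior))
    return p_evidence_given_h * prior / p_evidence

# Illustrative numbers only: start at P(H) = 0.30; the evidence is
# twice as likely if H is true (0.8) as if H is false (0.4).
posterior = bayes_update(0.30, 0.8, 0.4)
print(round(posterior, 3))  # -> 0.462
```

The inputs are subjective estimates, exactly as the passage says; the theorem only guarantees that, given those estimates, the posterior is the uniquely correct one.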
Language
Though I was recently sympathetic to the Austin / Searle / Grice / Avramides family of approaches to language, I now see that no simple theory of meaning can capture every use (and hypothetical use) of human languages. Besides, categorizing every way in which humans use speech and writing to have an effect on themselves and others is a job for scientists, not armchair philosophers.
However, it is useful to develop an account of language that captures most of our discourse systematically – specifically for use in formal argument and artificial intelligence. To this end, I think something like the Devitt / Sterelny account may be the most useful.
A huge percentage of Anglophone philosophy is still done in service of conceptual analysis, which I see as a mostly misguided attempt to build a Super Dictionary full of definitions for common terms that are (1) self-consistent, (2) fit the facts if they are meant to, and (3) agree with our use of and intuitions about each term. But I don’t think we should protect our naive use of words too much – rather, we should use our words to carve reality at its joints, because that allows us to communicate more effectively. And effective communication is the point of language, no? If your argument doesn’t help us solve problems when you play Taboo with your key terms and replace them with their substantive meaning, then what is the point of the argument if not to build a Super Dictionary?
A Super Dictionary would be nice, but humanity has more urgent and important problems that require a great many philosophical problems to be solved. Conceptual analysis is something of a lost purpose.
Normativity
The only source of normativity I know how to justify is the hypothetical imperative: “If you desire that P, then you ought to do Y in order to realize P.” This reduces (roughly) to the prediction: “If you do Y, you are likely to objectively satisfy your desire that P.”2
For me, then, the normativity of epistemology is: “If you want to have more true beliefs and fewer false beliefs, engage in belief-forming practices X, Y, and Z.”
The normativity of logic is: “If you want to be speaking the same language as everyone else, don’t say things like ‘The ball is all green and all blue at the same time in the same way.’”
Ethics, if there is anything worth calling by that name (not that it matters much; see the language section), must also be a system of hypothetical imperatives of some kind. Alonzo Fyfe and I are explaining our version of this here.
Focus
Recently, the focus of my research efforts has turned to the normative (not technical) problems of how to design the motivational system of a self-improving superintelligent machine. My work on this will eventually be gathered here. A bibliography on the subject is here.
Settled questions in philosophy
Philosophy is notorious for not answering the questions it tackles. Plato posed most of the central questions more than two millennia ago, and philosophers still haven't come to much consensus about them. Or at least, whenever philosophical questions begin to admit of answers, we start calling them scientific questions. (Astronomy, physics, chemistry, biology, and psychology all began as branches of philosophy.)
A common attitude on Less Wrong is "Too slow! Solve the problem and move on." The free will sequence argues that the free will problem has been solved.
I, for one, am bold enough to claim that some philosophical problems have been solved. Here they are:
- Is there a God? No.
- What's the solution to the mind-body problem? Materialism.
- Do we have free will? We don't have contra-causal free will, but of course we have the ability to deliberate on alternatives and have this deliberation affect the outcome.
- What is knowledge? (How do we overcome Gettier?) What is art? How do we demarcate science from non-science? If you're trying to find simple definitions that match our intuitions about the meaning of these terms in every case, you're doing it wrong. These concepts were not invented by mathematicians for use in a formal system. They evolved in practical use among millions of humans over hundreds of years. Stipulate a coherent meaning and start using the term to successfully communicate with others.
Why Do We Engage in Moral Simplification?
It appears to me that much of human moral philosophical reasoning consists of trying to find a small set of principles that fit one’s strongest moral intuitions, and then explaining away or ignoring the intuitions that do not fit those principles. For those who find such moral systems attractive, they seem to have the power of actually reducing the strength of, or totally eliminating, those conflicting intuitions.
In Fake Utility Functions, Eliezer described an extreme version of this, the One Great Moral Principle, or Amazingly Simple Utility Function, and suggested that he was partly responsible for this phenomenon by using the word “supergoal” while describing Friendly AI. But it seems to me this kind of simplification-as-moral-philosophy has a history much older than FAI.
For example, hedonism holds that morality consists of maximizing pleasure and minimizing pain, utilitarianism holds that everyone should have equal weight in one’s morality, and egoism holds that morality consists of satisfying one’s self-interest. None of these fits all of my moral intuitions, but each does explain many of them. The puzzle this post presents is: why do we have a tendency to accept moral philosophies that do not fit all of our existing values? Why do we find it natural or attractive to simplify our moral intuitions?
Here’s my idea: we have a heuristic that in effect says, if many related beliefs or intuitions all fit a certain pattern or logical structure, but a few don’t, the ones that don’t fit are probably caused by cognitive errors and should be dropped and regenerated from the underlying pattern or structure.
As an example where this heuristic is working as intended, consider that your intuitive estimates of the relative sizes of various geometric figures probably roughly fit the mathematical concept of “area”, in the sense that if one figure has a greater area than another, you’re likely to intuitively judge that it’s bigger than the other. If someone points out this structure in your intuitions, and then you notice that in a few cases your intuitions differ from the math, you’re likely to find that a good reason to change those intuitions.
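The heuristic can be sketched as a toy procedure: compare intuitive judgments against the underlying pattern (here, computed area) and flag the judgments that deviate sharply as candidates for regeneration. The figures, numbers, and tolerance are all invented for illustration:

```python
# figure -> (computed area, intuitive size judgment); invented data.
figures = {
    "small square": (1.0, 1.1),
    "circle":       (3.1, 3.0),
    "big square":   (9.0, 8.8),
    "long sliver":  (2.0, 0.5),   # thin shapes tend to be underestimated
}

def flag_outliers(data, tolerance=0.5):
    """Return figures whose intuitive judgment deviates from the
    computed area by more than the tolerance."""
    return [name for name, (area, intuition) in data.items()
            if abs(area - intuition) > tolerance]

print(flag_outliers(figures))  # -> ['long sliver']
```

The flagged intuition is then dropped and regenerated from the pattern, which is exactly the move the post suggests moral simplifiers make with their outlying moral intuitions.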
I think this idea can explain why different people end up believing in different moral philosophies. For example, many members of this community are divided along utilitarian/egoist lines. Why should that be the case? The theory I proposed suggests two possible answers:
- They started off with somewhat different intuitions (or the same intuitions with different relative strengths), so a moral system that fits one person’s intuitions relatively well might fit another’s relatively badly.
- They had the same intuitions to start with, but encountered the moral philosophies in different orders. If each person accepts the first moral system that fits their intuitions “well enough”, and more than one fits “well enough”, then they’ll accept the first such moral system, which changes their intuitions, causing the rest to be rejected.
I think it’s likely that both of these are factors that contribute to the apparent divergence in human moral reasoning. This seems to be another piece of bad news for the prospect of CEV, unless there are stronger converging influences in human moral reasoning that (in the limit of reflective equilibrium) can counteract these diverging tendencies.