Cryonics: Can I Take Door No. 3?
If you don't believe in an afterlife, then it seems you currently have two choices: cryonics or permanent death. Now, I don't believe that cryonics is pseudoscience, but it's still pretty poor odds (Robin Hanson uses an estimate of 5% here). Unfortunately, the alternative offers a chance of zero. I see five main concerns with current cryonic technology:
- There is no proven revival technology, thus no estimate of costs
- Potential damage done during vitrification which must be overcome
- Because vitrification cannot legally be performed before death, there is potential for decay between legal death and vitrification
- Requires active maintenance at very low temperature
- No guarantee that future societies will be willing to revive
So I wonder if we can do better.
I recall reading of juvenile forms of amphibians in desert environments that could survive decades of drought in a dormant form, reviving when water returned. One specimen had sat on a shelf in a research office for over a century (in Arizona, if I recall correctly) and was successfully revived. Note: no particular efforts were made to maintain this specimen; the dry local climate was sufficient. It was suggested at the time that this could make an alternative method of preserving organs. Now the advantages of this approach (which I refer to flippantly as "dryonics") are:
- Proven, inexpensive revival technology
- Apparently the process does not cause damage itself
- Proven revival technique may overcome legal obstacles of applying before legal death
- Requires passive maintenance at low humidity (deserts would be ideal)
- Presumably lower cost makes future revival more likely (still no guarantee, but that is a post in itself)
There is one big disadvantage of this approach, of course: no one knows how to do it (it's not entirely clear how the juvenile amphibians do it) or even if it would be possible in larger, more complex organisms. And, so far as I know, no one is working on it. But it would seem to offer a much better prospect than our current options, so I would suggest it worth investigating.
I am not a biologist, and I'm not sure where one would start developing such a technology. I frankly admit that I am sharing this in the hope that someone who does have an idea will run with it. If anyone knows of any work on these lines, or has an idea how to proceed, please send a comment or email. Or even if you have another alternative. Because right now, I don't consider our prospects good.
[Note: I am going on memory in this post; I really wish I could provide references, but there does not seem to be much activity along these lines that I can find. I'm not even sure what to call it: mummification? Probably too scary. Dehydration? Anyway, feel free to add suggestions or link references.]
[LINK] SMBC comics: Existential Crisis Sally on "Is forgotten torture real?"
http://www.smbc-comics.com/index.php?db=comics&id=2705
Addresses questions like "If I don't remember, but it definitely happened... who suffered?" in a rather non-obvious way (non-obvious to me, anyway).
Friendly AI and the limits of computational epistemology
Very soon, Eliezer is supposed to start posting a new sequence, on "Open Problems in Friendly AI". After several years in which its activities were dominated by the topic of human rationality, this ought to mark the beginning of a new phase for the Singularity Institute, one in which it is visibly working on artificial intelligence once again. If everything comes together, then it will now be a straight line from here to the end.
I foresee that, once the new sequence gets going, it won't be that easy to question the framework in terms of which the problems are posed. So I consider this my last opportunity for some time, to set out an alternative big picture. It's a framework in which all those rigorous mathematical and computational issues still need to be investigated, so a lot of "orthodox" ideas about Friendly AI should carry across. But the context is different, and it makes a difference.
Begin with the really big picture. What would it take to produce a friendly singularity? You need to find the true ontology, find the true morality, and win the intelligence race. For example, if your Friendly AI was to be an expected utility maximizer, it would need to model the world correctly ("true ontology"), value the world correctly ("true morality"), and it would need to outsmart its opponents ("win the intelligence race").
Now let's consider how SI will approach these goals.
The evidence says that the working ontological hypothesis of SI-associated researchers will be timeless many-worlds quantum mechanics, possibly embedded in a "Tegmark Level IV multiverse", with the auxiliary hypothesis that algorithms can "feel like something from inside" and that this is what conscious experience is.
The true morality is to be found by understanding the true decision procedure employed by human beings, and idealizing it according to criteria implicit in that procedure. That is, one would seek to understand conceptually the physical and cognitive causation at work in concrete human choices, both conscious and unconscious, with the expectation that there will be a crisp, complex, and specific answer to the question "why and how do humans make the choices that they do?" Undoubtedly there would be some biological variation, and there would also be significant elements of the "human decision procedure", as instantiated in any specific individual, which are set by experience and by culture, rather than by genetics. Nonetheless one expects that there is something like a specific algorithm or algorithm-template here, which is part of the standard Homo sapiens cognitive package and biological design; just another anatomical feature, particular to our species.
Having reconstructed this algorithm via scientific analysis of human genome, brain, and behavior, one would then idealize it using its own criteria. This algorithm defines the de-facto value system that human beings employ, but that is not necessarily the value system they would wish to employ; nonetheless, human self-dissatisfaction also arises from the use of this algorithm to judge ourselves. So it contains the seeds of its own improvement. The value system of a Friendly AI is to be obtained from the recursive self-improvement of the natural human decision procedure.
Finally, this is all for naught if seriously unfriendly AI appears first. It isn't good enough just to have the right goals, you must be able to carry them out. In the global race towards artificial general intelligence, SI might hope to "win" either by being the first to achieve AGI, or by having its prescriptions adopted by those who do first achieve AGI. They have some in-house competence regarding models of universal AI like AIXI, and they have many contacts in the world of AGI research, so they're at least engaged with this aspect of the problem.
Upon examining this tentative reconstruction of SI's game-plan, I find I have two major reservations. The big one, and the one most difficult to convey, concerns the ontological assumptions. In second place is what I see as an undue emphasis on the idea of outsourcing the methodological and design problems of FAI research to uploaded researchers and/or a proto-FAI which is simulating or modeling human researchers. This is supposed to be a way to finesse philosophical difficulties like "what is consciousness anyway"; you just simulate some humans until they agree that they have solved the problem. The reasoning goes that if the simulation is good enough, it will be just as good as if ordinary non-simulated humans solved it.
I also used to have a third major criticism, that the big SI focus on rationality outreach was a mistake; but it brought in a lot of new people, and in any case that phase is ending, with the creation of CFAR, a separate organization. So we are down to two basic criticisms.
First, "ontology". I do not think that SI intends to just program its AI with an apriori belief in the Everett multiverse, for two reasons. First, like anyone else, their ventures into AI will surely begin with programs that work within very limited and more down-to-earth ontological domains. Second, at least some of the AI's world-model ought to be obtained rationally. Scientific theories are supposed to be rationally justified, e.g. by their capacity to make successful predictions, and one would prefer that the AI's ontology results from the employment of its epistemology, rather than just being an axiom; not least because we want it to be able to question that ontology, should the evidence begin to count against it.
For this reason, although I have campaigned against many-worlds dogmatism on this site for several years, I'm not especially concerned about the possibility of SI producing an AI that is "dogmatic" in this way. For an AI to independently assess the merits of rival physical theories, the theories would need to be expressed with much more precision than they have been in LW's debates, and the disagreements about which theory is rationally favored would be replaced with objectively resolvable choices among exactly specified models.
The real problem, which is not just SI's problem, but a chronic and worsening problem of intellectual culture in the era of mathematically formalized science, is a dwindling of the ontological options to materialism, platonism, or an unstable combination of the two, and a similar restriction of epistemology to computation.
Any assertion that we need an ontology beyond materialism (or physicalism or naturalism) is liable to be immediately rejected by this audience, so I shall immediately explain what I mean. It's just the usual problem of "qualia". There are qualities which are part of reality - we know this because they are part of experience, and experience is part of reality - but which are not part of our physical description of reality. The problematic "belief in materialism" is actually the belief in the completeness of current materialist ontology, a belief which prevents people from seeing any need to consider radical or exotic solutions to the qualia problem. There is every reason to think that the world-picture arising from a correct solution to that problem will still be one in which you have "things with states" causally interacting with other "things with states", and a sensible materialist shouldn't find that objectionable.
What I mean by platonism, is an ontology which reifies mathematical or computational abstractions, and says that they are the stuff of reality. Thus assertions that reality is a computer program, or a Hilbert space. Once again, the qualia are absent; but in this case, instead of the deficient ontology being based on supposing that there is nothing but particles, it's based on supposing that there is nothing but the intellectual constructs used to model the world.
Although the abstract concept of a computer program (the abstractly conceived state machine which it instantiates) does not contain qualia, people often treat programs as having mind-like qualities, especially by imbuing them with semantics - the states of the program are conceived to be "about" something, just like thoughts are. And thus computation has been the way in which materialism has tried to restore the mind to a place in its ontology. This is the unstable combination of materialism and platonism to which I referred. It's unstable because it's not a real solution, though it can live unexamined for a long time in a person's belief system.
An ontology which genuinely contains qualia will nonetheless still contain "things with states" undergoing state transitions, so there will be state machines, and consequently, computational concepts will still be valid, they will still have a place in the description of reality. But the computational description is an abstraction; the ontological essence of the state plays no part in this description; only its causal role in the network of possible states matters for computation. The attempt to make computation the foundation of an ontology of mind is therefore proceeding in the wrong direction.
But here we run up against the hazards of computational epistemology, which is playing such a central role in artificial intelligence. Computational epistemology is good at identifying the minimal state machine which could have produced the data. But it cannot by itself tell you what those states are "like". It can only say that X was probably caused by a Y that was itself caused by Z.
Among the properties of human consciousness are knowledge that something exists, knowledge that consciousness exists, and a long string of other facts about the nature of what we experience. Even if an AI scientist employing a computational epistemology managed to produce a model of the world which correctly identified the causal relations between consciousness, its knowledge, and the objects of its knowledge, the AI scientist would not know that its X, Y, and Z refer to, say, "knowledge of existence", "experience of existence", and "existence". The same might be said of any successful analysis of qualia, knowledge of qualia, and how they fit into neurophysical causality.
It would be up to human beings - for example, the AI's programmers and handlers - to ensure that entities in the AI's causal model were given appropriate significance. And here we approach the second big problem, the enthusiasm for outsourcing the solution of hard problems of FAI design to the AI and/or to simulated human beings. The latter is a somewhat impractical idea anyway, but here I want to highlight the risk that the AI's designers will have false ontological beliefs about the nature of mind, which are then implemented apriori in the AI. That strikes me as far more likely than implanting a wrong apriori about physics; computational epistemology can discriminate usefully between different mathematical models of physics, because it can judge one state machine model as better than another, and current physical ontology is essentially one of interacting state machines. But as I have argued, not only must the true ontology be deeper than state-machine materialism, there is no way for an AI employing computational epistemology to bootstrap to a deeper ontology.
In a phrase: to use computational epistemology is to commit to state-machine materialism as your apriori ontology. And the problem with state-machine materialism is not that it models the world in terms of causal interactions between things-with-states; the problem is that it can't go any deeper than that, yet apparently we can. Something about the ontological constitution of consciousness makes it possible for us to experience existence, to have the concept of existence, to know that we are experiencing existence, and similarly for the experience of color, time, and all those other aspects of being that fit so uncomfortably into our scientific ontology.
It must be that the true epistemology, for a conscious being, is something more than computational epistemology. And maybe an AI can't bootstrap its way to knowing this expanded epistemology - because an AI doesn't really know or experience anything, only a consciousness, whether natural or artificial, does those things - but maybe a human being can. My own investigations suggest that the tradition of thought which made the most progress in this direction was the philosophical school known as transcendental phenomenology. But transcendental phenomenology is very unfashionable now, precisely because of apriori materialism. People don't see what "categorial intuition" or "adumbrations of givenness" or any of the other weird phenomenological concepts could possibly mean for an evolved Bayesian neural network; and they're right, there is no connection. But the idea that a human being is a state machine running on a distributed neural computation is just a hypothesis, and I would argue that it is a hypothesis in contradiction with so much of the phenomenological data, that we really ought to look for a more sophisticated refinement of the idea. Fortunately, 21st-century physics, if not yet neurobiology, can provide alternative hypotheses in which complexity of state originates from something other than concatenation of parts - for example, entanglement, or from topological structures in a field. In such ideas I believe we see a glimpse of the true ontology of mind, one which from the inside resembles the ontology of transcendental phenomenology; which in its mathematical, formal representation may involve structures like iterated Clifford algebras; and which in its biophysical context would appear to be describing a mass of entangled electrons in that hypothetical sweet spot, somewhere in the brain, where there's a mechanism to protect against decoherence.
Of course this is why I've talked about "monads" in the past, but my objective here is not to promote neo-monadology, that's something I need to take up with neuroscientists and biophysicists and quantum foundations people. What I wish to do here is to argue against the completeness of computational epistemology, and to caution against the rejection of phenomenological data just because it conflicts with state-machine materialism or computational epistemology. This is an argument and a warning that should be meaningful for anyone trying to make sense of their existence in the scientific cosmos, but it has a special significance for this arcane and idealistic enterprise called "friendly AI". My message for friendly AI researchers is not that computational epistemology is invalid, or that it's wrong to think about the mind as a state machine, just that all that isn't the full story. A monadic mind would be a state machine, but ontologically it would be different from the same state machine running on a network of a billion monads. You need to do the impossible one more time, and make your plans bearing in mind that the true ontology is something more than your current intellectual tools allow you to represent.
Seeking a "Seeking Whence 'Seek Whence'" Sequence
One of the sharpest and most important tools in the LessWrong cognitive toolkit is the idea of going meta, also called seeking whence or jumping out of the system, all terms crafted by Douglas Hofstadter. Though the idea was popularized by Hofstadter and repeatedly emphasized by Eliezer in posts like "Lost Purposes" and "Taboo Your Words", Wikipedia indicates that similar ideas have been around in philosophy since at least Anaximander, in the form of the Principle of Sufficient Reason (PSR). I think it'd be only appropriate to seek whence this idea of seeking whence, taking a history-of-ideas perspective. I'd also like analyses of where the theme shows up and why it's appealing and so on, since again it seems pretty important to LessWrong epistemology. Topics that I'd like to see discussed are:
- How conservation of probability in Bayesian probability theory and conservation of phase space volume in statistical mechanics are related—a summary of Eliezer's posts on the topic would be great.
- How conservation of probability &c. are related to other physical/mathematical laws, e.g. Noether's theorem and quantum mechanics' continuity equation.
- The history of the idea of conservation laws; whether the discovery of conservation laws was fueled by PSR-like philosophical-like concerns (e.g. Leibniz?), by lower level intuitive concerns, or other means.
- How conservation of probability &c. are related to the idea of seeking whence [pdf] (e.g., "follow the improbability").
- How the PSR relates to conservation of probability &c. and to seeking whence.
- How going meta and seeking whence are related/equivalent.
- Which philosophers have used something like the PSR (e.g. Spinoza, Leibniz) and which haven't; those who haven't, what their reasons were for not using it.
- What kinds of conclusions are typically reached via the PSR or have historically been justified by the PSR, and whether those conclusions fit with LW's standard conclusions. If it disagrees with LW's standard conclusions, where does the PSR not apply or not apply as strongly; alternatively, why standard LW conclusions might be mistaken.
- Whether Schopenhauer's four-fold division of the PSR makes sense. (Schopenhauer's a relatively LW-friendly continentalesque philosopher.) A summary of any criticisms of his four-fold division.
- What makes the PSR, going meta, "JOOTS"-ing and seeking whence appealing, from a metaphysical, epistemological, pragmatic, and psychological perspective. What sorts of environments or problem sets select for it. (The Baldwin effect and similar phenomena might be relevant.)
- What going meta / seeking whence looks like at different levels of organization; how one jumps out of systems at varying levels.
- Eliezer's rule of derivative validity from CFAI and how it relates to the PSR; an analysis of how the (moral, or perhaps UDT-like decision-policy-centric) PSR might be relevant to Friendliness philosophy, e.g. as compared with CEV-like proposals [pdf].
- How latent Platonic nodes in TDT [pdf] (p. 78) relate to the PSR.
- A generalization of CFAI's causal validity semantics to timeless validity semantics in the spirit of the generalization of CDT to TDT, or perhaps even further generalizations of causal validity semantics in the spirit of Updateless Decision Theory or eXceptionless Decision Theory. (ETA: Whoops, Eliezer already discussed the acausal level, but seems to have only mentioned Platonic forms as an afterthought. Maybe ignore this bullet point.)
- How the PSR and the rule of derivative validity relate to Robin Hanson's idea of pre-rationality and Wei Dai's questions about extending pre-rationality to include past selves' utility functions—whether this elucidates the relation between XDT and UDT.
- Where Hofstadter picked up the idea of "going meta" and what led him to think it was important. What led Eliezer to rely on it so much and emphasize the importance of avoiding lost purposes.
[Link] RSA Animate: extremely entertaining LW-relevant cartoons
It's a brilliant idea: a lecture by a cool modern thinker, illustrated by word-by-word doodles on a whiteboard. Excellent at pulling you along the train of thought and absolutely disallowing boredom.
The lectures' content is pretty great too, although there's a definite left-wing, populist bent that's exploiting today's post-crisis hot-button issues (they got Zizek, for god's sake) - some might not like it. Regardless, it's all very amusing and enlightening. They've been linked to before in a comment or two, but they deserve a headline.
You can start here: http://www.youtube.com/watch?v=dFs9WO2B8uI&feature=relmfu (But they're all worth watching!)
What Would You Like To Read? A Quick Poll
In our discussion of academic papers, Lukeprog argued that lots of smart people preferred to read ideas in academic paper format. Based on my observations, I mostly disagree. But that's just anecdotal evidence. Let's use Science!
Suppose someone at the Singularity Institute thought up a cool new idea: it could be about rationality, Friendly AI, decision theory, making money, or any of the other topics we discuss here on LW. Explaining it takes about ten pages, and it's nontechnical enough that it can be explained to a general audience of non-mathematicians. Which of the following explanations would you be most likely to actually sit down and read through?
- A post on Less Wrong or another friendly blog
- A book chapter, available both on Kindle and in physical book form
- A mailing list post, made available through a public archive
- An academic paper, downloadable over the Internet as a PDF
- A static HTML page on the Singularity Institute's website
- A page on a Singularity Institute or Less Wrong wiki
- A speech, downloadable as an audio file
- A PowerPoint or other presentation format
EDIT: To state the obvious, this poll will be biased in favor of blog postings, since it's on a blog. However, I still think it'll provide data that's much better than anecdotal guessing. I've emailed a few rationalist mailing lists to try and counteract this effect.
Beer with Charlie Stross in Munich
From Charlie Stross' blog:
I'm in Munich this week, and I plan to be drinking in the Paulaner Brauhaus(Kapuzinerplatz 5, 80337 München; click here for map) from 7pm on Monday 18th. All welcome! (Yes, I will sign books if you bring them.) If in doubt, look for the plush Cthulhu!
Poly marriage?
A thought occurred to me today as I skimmed an article in a rationality forum where the subject of gay marriage cropped up. Seeing as the issue has been hotly contested in various public fora, and especially the courts, what about poly? After all, many if not all of the arguments for gay marriage apply to poly marriage as well.
Questions for LWers who are currently in such a relationship, or have an opinion to share:
Do polies want to marry each other, or do such relationships not lend themselves to permanence above a threshold of partners? Should polies campaign for the right to a civil union anyway? What are the upsides and downsides of this? Etc.
[Link] SMBC on choosing your simulations carefully
I'm increasingly impressed by the power of Zach Wiener's comic to demonstrate in a few images why hard problems are hard. It would be a vast task, but perhaps it would be useful to create an index of such problem-demonstrating comics to add to the Wiki, giving us something to point newbies at which would be less intimidating than formal Sequence postings. I get the impression that a common hurdle is just to get people to accept that problems of AI (and simulation, ethics, what have you) are actually difficult.
Boltzmann Brains and Anthropic Reference Classes (Updated)
Summary: There are claims that Boltzmann brains pose a significant problem for contemporary cosmology. But this problem relies on assuming that Boltzmann brains would be part of the appropriate reference class for anthropic reasoning. Is there a good reason to accept this assumption?
Nick Bostrom's Self Sampling Assumption (SSA) says that when accounting for indexical information, one should reason as if one were a random sample from the set of all observers in one's reference class. As an example of the scientific usefulness of anthropic reasoning, Bostrom shows how the SSA rules out a particular cosmological model suggested by Boltzmann. Boltzmann was trying to construct a model that is symmetric under time reversal, but still accounts for the pervasive temporal asymmetry we observe. The idea is that the universe is eternal and, at most times and places, at thermodynamic equilibrium. Occasionally, there will be chance fluctuations away from equilibrium, creating pockets of low entropy. Life can only develop in these low entropy pockets, so it is no surprise that we find ourselves in such a region, even though it is atypical.
The objection to this model is that smaller fluctuations from equilibrium will be more common. In particular, fluctuations that produce disembodied brains floating in a high entropy soup with the exact brain state I am in right now (called Boltzmann brains) would be vastly more common than fluctuations that actually produce me and the world around me. If we reason according to SSA, the Boltzmann model predicts I am one of those brains and all my experiences are spurious. Conditionalizing on the model, the probability that my experiences are not spurious is minute. But my experiences are in fact not spurious (or at least, I must operate under the assumption that they are not if I am to meaningfully engage in scientific inquiry). So the Boltzmann model is heavily disconfirmed. [EDIT: As AlexSchell points out, this is not actually Bostrom's argument. The argument has been made by others. Here, for example.]
Now, no one (not even Boltzmann) actually believed the Boltzmann model, so this might seem like an unproblematic result. Unfortunately, it turns out that our current best cosmological models also predict a preponderance of Boltzmann brains. They predict that the universe is evolving towards an eternally expanding cold de Sitter phase. Once the universe is in this phase, thermal fluctuations of quantum fields will lead to an infinity of Boltzmann brains. So if the argument against the original Boltzmann model is correct, these cosmological models should also be rejected. Some people have drawn this conclusion. For instance, Don Page considers the anthropic argument strong evidence against the claim that the universe will last forever. This seems like the SSA's version of Bostrom's Presumptuous Philosopher objection to the Self Indication Assumption, except here we have a presumptuous physicist. If your intuitions in the Presumptuous Philosopher case lead you to reject SIA, then perhaps the right move in this case is to reject SSA.
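The disconfirmation argument above is just a Bayesian update, and a toy calculation makes its force explicit. The numbers below are purely illustrative (nothing here comes from actual cosmology): model A stands in for a Boltzmann-style model in which almost all observers in my reference class are Boltzmann brains, and model B for a rival model in which ordinary observers dominate.

```python
# Toy Bayesian sketch of the disconfirmation argument.
# All probabilities are made-up illustrative values, not cosmological estimates.

prior_A = 0.5               # Boltzmann-style model, at even prior odds
prior_B = 0.5               # rival model in which ordinary observers dominate
p_veridical_given_A = 1e-12  # under A, nearly every observer is a Boltzmann brain
p_veridical_given_B = 0.99   # under B, my experiences are almost surely veridical

# Observation: my experiences are not spurious (the anti-skeptical premise).
# Bayes' theorem then gives the posterior probability of model A:
posterior_A = (p_veridical_given_A * prior_A) / (
    p_veridical_given_A * prior_A + p_veridical_given_B * prior_B
)

print(f"P(A | veridical experience) = {posterior_A:.2e}")
# The model that predicted I am a Boltzmann brain is driven to
# negligible posterior probability by the single observation that I am not.
```

The same arithmetic is why an infinity of predicted Boltzmann brains is so damaging: the likelihood ratio against the model grows with the predicted preponderance of spurious observers, regardless of how the priors are set.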
But maybe SSA can be salvaged. The rule specifies that one need only consider observers in one's reference class. If Boltzmann brains can be legitimately excluded from the reference class, then the SSA does not threaten cosmology. But Bostrom claims that the reference class must at least contain all observers whose phenomenal state is subjectively indistinguishable from mine. If that's the case, then all Boltzmann brains in brain states sufficiently similar to mine such that there is no phenomenal distinction must be in my reference class, and there's going to be a lot of them.
Why accept this subjective indistinguishability criterion though? I think the intuition behind it is that if two observers are subjectively indistinguishable (it feels the same to be either one), then they are evidentially indistinguishable, i.e. the evidence available to them is the same. If A and B are in the exact same brain state, then, according to this claim, A has no evidence that she is in fact A and not B. And in this case, it is illegitimate for her to exclude B from her anthropic reference class. For all she knows, she might be B!
But the move from subjective indistinguishability to evidential indistinguishability seems to ignore an important point: meanings ain't just in the head. Even if two brains are in the exact same physical state, the contents of their representational states (beliefs, for example) can differ. The contents of these states depend not just on the brain state but also on the brain's environment and causal history. For instance, I have beliefs about Barack Obama. A spontaneously congealed Boltzmann brain in an identical brain state could not have those beliefs. There is no appropriate causal connection between Obama and that brain, so how could its beliefs be about him? And if we have different beliefs, then I can know things the brain doesn't know. Which means I can have evidence the brain doesn't have. Subjective indistinguishability does not entail evidential indistinguishability.
So at least this argument for including all subjectively indistinguishable observers in one's reference class fails. Is there another good reason for this constraint I haven't considered?
Update: There seems to be a common misconception arising in the comments, so I thought I'd address it up here. A number of commenters are equating the Boltzmann brain problem with radical skepticism. The claim is that the problem shows that we can't really know we are not Boltzmann brains. Now this might be a problem some people are interested in. It is not one that I am interested in, nor is it the problem that exercises cosmologists. The Boltzmann brain hypothesis is not just a physically plausible variant of the Matrix hypothesis.
The purported problem for cosmology is that certain cosmological models, in conjunction with the SSA, predict that I am a Boltzmann brain. This is not a problem because it shows that I am in fact a Boltzmann brain. It is a problem because it is an apparent disconfirmation of the cosmological model. I am not actually a Boltzmann brain, I assure you. So if a model says that it is highly probable I am one, then the observation that I am not stands as strong evidence against the model. This argument explicitly relies on the rejection of radical skepticism.
Are we justified in rejecting radical skepticism? I think the answer is obviously yes, but if you are in fact a skeptic then I guess this won't sway you. Still, if you are a skeptic, your response to the Boltzmann brain problem shouldn't be, "Aha, here's support for my skepticism!" It should be "Well, all of the physics on which this problem is based comes from experimental evidence that doesn't actually exist! So I have no reason to take the problem seriously. Let me move on to another imaginary post."