
The Philosophical Implications of Quantum Information Theory

5 lisper 26 February 2016 02:00AM

I was asked to write up a pithy summary of the upshot of this paper. This is the best I could manage.

One of the most remarkable features of the world we live in is that we can make measurements that are consistent across space and time. By "consistent across space" I mean that you and I can look at the outcome of a measurement and agree on what that outcome was. By "consistent across time" I mean that you can make a measurement of a system at one time and then make the same measurement of that system at some later time and the results will agree.

It is tempting to think that the reason we can do these things is that there exists an objective reality that is "actually out there" in some metaphysical sense, and that our measurements are faithful reflections of that objective reality. This hypothesis works well (indeed, seems self-evidently true!) until we get to very small systems, where it seems to break down. We can still make measurements that are consistent across space and time, but as soon as we stop making measurements, then things start to behave very differently than they did before. The classic example of this is the two-slit experiment: whenever we look at a particle we only ever find it in one particular place. When we look continuously, we see the particle trace out an unambiguous and continuous trajectory. But when we don't look, the particle behaves as if it is in more than one place at once, a behavior that manifests itself as interference.
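
To make the contrast concrete, here is a toy calculation (my own illustration, not part of the original post): with no measurement you add the amplitudes of the two paths and square the sum, while a which-slit measurement squares each path's amplitude separately and adds the results.

import numpy as np

# Toy two-slit arithmetic (illustrative only): relative intensity at one point
# on the screen where the two paths arrive with a relative phase of pi.
a1 = 1 / np.sqrt(2)                       # amplitude for the path through slit 1
a2 = np.exp(1j * np.pi) / np.sqrt(2)      # amplitude for the path through slit 2

p_unobserved = abs(a1 + a2) ** 2          # add amplitudes, then square: ~0 (destructive interference)
p_observed = abs(a1) ** 2 + abs(a2) ** 2  # square each path, then add: 1.0 (no interference)

print(p_unobserved, p_observed)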

The problem of how to reconcile the seemingly incompatible behavior of physical systems depending on whether or not they are under observation has come to be called the measurement problem. The most common proposed resolution is the Copenhagen interpretation of quantum mechanics, which postulates that the act of measurement changes a system via a process called wave function collapse. In the contemporary popular press you will often read about wave function collapse in conjunction with the phenomenon of quantum entanglement, which is usually referred to as "spooky action at a distance", a phrase coined by Einstein and intended to be pejorative. For example, here's the headline and first sentence of one such piece:

More evidence to support quantum theory’s ‘spooky action at a distance’

"It’s one of the strangest concepts in the already strange field of quantum physics: Measuring the condition or state of a quantum particle like an electron can instantly change the state of another electron—even if it’s light-years away." (emphasis added)

This sort of language is endemic in the popular press as well as many physics textbooks, but it is demonstrably wrong. The truth is that measurement and entanglement are actually the same physical phenomenon. What we call "measurement" is really just entanglement on a large scale. If you want to see the demonstration of the truth of this statement, read the paper or watch the video or read the original paper on which my paper and video are based. Or go back and read about Von Neumann measurements or quantum decoherence or Everett's relative state theory (often mis-labeled "many-worlds") or relational quantum mechanics or the Ithaca interpretation of quantum mechanics, all of which turn out to be saying exactly the same thing.

Which is: the reason that measurements are consistent across space and time is not because these measurements are a faithful reflection of an underlying objective reality. The reason that measurements are consistent across space and time is because this is what quantum mechanics predicts when you consider only parts of the wave function and ignore other parts.

Specifically, it is possible to write down a mathematical description of a particle and two observers as a quantum mechanical system. If you ignore the particle (this is a formal mathematical operation called a partial trace of an operator matrix) what you are left with is a description of the observers. And if you then apply information-theoretic operations to that, what pops out is that the two observers are in classically correlated states. The exact same thing happens for observations made of the same particle at two different times.
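
Here is a minimal numerical sketch of that calculation (mine, not taken from the paper): a particle and two observers become entangled, we trace out the particle, and the reduced state of the observers is a classical 50/50 mixture of "both saw up" and "both saw down", with no interference terms.

import numpy as np

# Illustrative sketch (mine, not from the paper). The joint state after both
# observers record the particle is (|up,up,up> + |down,down,down>)/sqrt(2).
up, down = np.array([1.0, 0.0]), np.array([0.0, 1.0])

def kron_all(*vectors):
    out = np.array([1.0])
    for v in vectors:
        out = np.kron(out, v)
    return out

psi = (kron_all(up, up, up) + kron_all(down, down, down)) / np.sqrt(2)
rho = np.outer(psi, psi)                  # 8x8 density matrix of the whole system

# Partial trace over the particle (the first subsystem), leaving the two observers.
rho = rho.reshape(2, 2, 2, 2, 2, 2)       # (particle, obs1, obs2) x (particle, obs1, obs2)
rho_observers = np.einsum('ijkilm->jklm', rho).reshape(4, 4)

print(np.round(rho_observers, 3))
# diag(0.5, 0, 0, 0.5): a classical mixture of "both saw up" and "both saw down",
# i.e. the observers' records agree, with no off-diagonal (interference) terms.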

The upshot is that nothing special happens during a measurement. Measurements are not instantaneous (though they are very fast) and they are in principle reversible, though not in practice.

The final consequence of this, the one that grates most heavily on the intuition, is that your existence as a classical entity is an illusion. Because measurements are not a faithful reflection of an underlying objective reality, your own self-perception (which is a kind of measurement) is not a faithful reflection of an underlying objective reality either. You are not, in point of metaphysical fact, made of atoms. Atoms are a very (very!) good approximation to the truth, but they are not the truth. At the deepest level, you are a slice of the quantum wave function that behaves, to a very high degree of approximation, as if it were a classical system but is not in fact a classical system. You are in a very real sense living in the Matrix, except that the Matrix you are living in is running on a quantum computer, and so you -- the very close approximation to a classical entity that is reading these words -- can never "escape" the way Neo did.

As a corollary to this, time travel is impossible, because in point of metaphysical fact there is no time. Your perception of time is caused by the accumulation of entanglements in your slice of the wave function, resulting in the creation of information that you (and the rest of your classically-correlated slice of the wave function) "remember". It is those memories that define the past; you could even say they create the past. Going "back to the past" is not merely impossible, it is logically incoherent, no different from trying to construct a four-sided triangle. (And if you don't buy that argument, here's a more prosaic one: having a physical entity suddenly vanish from one time and reappear at a different time would violate conservation of energy.)

The AI That Pretends To Be Human

1 Houshalter 02 February 2016 07:39PM

The hard part about containing AI is restricting its output. The AI can lie, manipulate, and trick. Some speculate that it might be able to do far worse, inventing infohazards like hypnosis or brain hacking.

A major goal of the control problem is preventing AIs from doing that. Ensuring that their output is safe and useful.

A while ago I wrote about an approach to do this. The idea was to require the AI to use as little computing power as it needs to perform a task. This prevents the AI from over-optimizing. The AI won't use the full power of superintelligence unless it really needs it.

The above method isn't perfect, because a superintelligent AI may still have super abilities, even when it has a limited compute budget. And it will still try to do all those bad things mentioned above, unless it can find a better way that uses less computing power.

We don't really want to minimize the AI's compute budget. We just want it to act more human-like. When we ask it to solve a problem, we want it to try to solve the problem as a human would solve it. All of the problems arise from the AI behaving totally differently from a human, not necessarily from it using more computing power.

Is there any way to make an AI act like a human? Or is that as impossible as just solving FAI?

Stuart Armstrong posted an idea on how to achieve that. It involves an AI that tries to mimic output that is indistinguishable from what a human would produce, but conditional on it also solving the problem. However, his idea also involves whole brain emulation and a complicated method for the AI to predict the future and avoid changing it too much.

We don't need that. The idea for the AI to try to be indistinguishable from a human is actually a really interesting control idea. It significantly limits the AI's ability to Do Sneaky Things, since humans don't do those things.

Here is my idea on how to achieve this. It's the same as my limited computing power idea mentioned earlier, but applied to "appearing human" as a goal, instead of minimizing computing power.

We start with two AIs. The first AI tries to appear human. The second AI does a sort of Turing test, and tries to tell the first AI apart from an actual human.

The human AI's first goal is to produce a correct solution. It will predict how likely its solution is to be accepted, and it will try to produce a solution that has a >50% chance of working. This is a satisficing goal: it doesn't optimize without end to create the best solution possible, it just has to meet some minimum threshold, then stop.
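
In rough pseudocode, the satisficing loop might look something like this (a toy sketch of my own; the function and variable names are made up):

def satisfice(candidate_solutions, estimate_acceptance_prob, threshold=0.5):
    # Return the first candidate whose estimated chance of being accepted
    # clears the threshold, instead of searching for the best one possible.
    for solution in candidate_solutions:
        if estimate_acceptance_prob(solution) > threshold:
            return solution
    return None

# Tiny demo with stand-in "solutions" scored by a dummy estimator:
print(satisfice([0.2, 0.4, 0.55, 0.9], lambda s: s))   # 0.55, the first "good enough" one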

Its second goal will be to fool the second AI. It will do everything in its power to make the solution appear like it was produced by a human.

The second AI tries to predict how likely it is that its input was produced by a human. It is also conditioned on the input being a solution to the problem, so the fact that the first AI solved a problem that was really hard for humans does not count against it.

This prevents the first AI from doing evil things with its output. If it tries to insert complicated infohazards or subagents into its output stream, it will be easily detected as an AI. Instead it needs to mimic humans as closely as possible.

Note that the first AI only cares about the outcome of this test. The test doesn't need to actually be performed; it can just be a threat that the AI is conditioned on. (The ability to condition AIs on acting as if they were in counterfactual worlds is a critical piece of this, and deserves its own writeup somewhere. It's an important tool in the FAI toolkit. It's a relatively simple method that can work even on current AIs.)

Another point is that the second AI should have more computing power than the first AI. That way the first AI can't get a slight advantage over it and be able to hack it or manipulate it somehow. Instead it has to focus its efforts on being indistinguishable from a human, even to an agent smarter than itself.

This method also relies on AI boxing. The first AI needs to be contained in a box so that its only output goes to a text terminal or the second AI. However, it doesn't require any of the more complicated machinery required for other reduced impact methods.

The AIs also aren't agents, in the sense that they don't need to maximize any long-term utility function. The first AI only cares about answering the question it has been asked, and the second AI only does prediction.


An example use of this is to create an oracle AI that answers your questions. The human judges somehow whether the answer is correct, or at least good. The AI tries to produce answers which the human judge will probably like, but which also are indistinguishable from answers humans would produce.

Such an oracle would be tremendously useful. The human could ask the AI to produce new AI and FAI papers, which would immensely speed up our progress towards true FAI.

Similar techniques could be applied to very different domains. For example controlling a robot. The AI will try to produce outputs that look like a human is controlling it.

This would require some modifications to the above method, so that the AI doesn't try to use the robot to hack its own brain or anything. This could be done by conditioning it on a counterfactual world where its outputs are going to the second AI rather than a real robot.

Another point is the AI doesn't need to mimic average humans given average resources. It could be conditioned on the human having had tons of time to come up with an answer. E.g. producing an answer that a human would have come up with given a year. Or controlling the robot the same way as a human given tons of time to practice, or in a very slowed down simulation.


I would like to note a parallel with a method in current AI research, Generative Adversarial Networks. A Generative Adversarial Network consists of two AIs: one tries to produce outputs that fool the second AI, and the other tries to predict which samples were produced by the first AI and which are drawn from the actual distribution.

It's quite similar to this. GANs have been used successfully to create images that look like real images, which is a hard problem in AI research. In the future GANs might be used to produce text that is indistinguishable from human (the current method for doing that, by predicting the next character a human would type, is kind of crude.)
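
For readers who want to see the structure, here is a minimal GAN training loop in PyTorch (my own sketch, not from the post); the generator plays the role of the first AI and the discriminator the role of the second:

import torch
import torch.nn as nn

# Stand-in for the "human" distribution the generator must imitate.
real_data = torch.randn(1000, 2) * 0.5 + 2.0

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))   # generator
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))   # discriminator

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    # Discriminator step: label real samples 1 and generated samples 0.
    real = real_data[torch.randint(0, 1000, (64,))]
    fake = G(torch.randn(64, 8)).detach()
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator step: try to make the discriminator output 1 on generated samples.
    fake = G(torch.randn(64, 8))
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()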

Reposted from my blog.

Principia Compat. The potential Importance of Multiverse Theory

-1 MakoYass 02 February 2016 04:22AM

Multiverse Theory is the science of guessing at the shape of the state space of all that exists, once existed, will exist, or exists without any temporal relation to our present. Multiverse theory attempts to model the unobservable, and it is very difficult.

Still, there's nothing that cannot be reasoned about, in some way (Tegmark's The Multiverse Hierarchy), given the right abstractions. The question many readers will ask, which is a question we ourselves˭ asked when we were first exposed to ideas like simulationism and parallel universes, is not whether we can, but whether we should, given that we have no means to causally affect any of it, and no reason to expect that it would causally affect us in a way that would be useful to predict.

We then discovered something which shed new light on the question of whether we can, and began to give an affirmative answer to the question of whether we should.

Compat, which we would like to share with you today, is a new field, or perhaps just a very complex idea, which we found in the intersection of multiverse theory, simulationism and acausal trade (well motivated by Hofstadter's Sanity and Survival, a discussion of superrational solutions to one-shot prisoner's dilemmas). Compat asks what kind of precommitments an entity (primarily, the class of living things on the threshold of their singularity) ought to make if they wanted to acausally boost the measure of their desired patterns, if not across the entire multiverse, at least across the subset of the multiverse in which they may find their future selves.
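
The superrational move in the one-shot prisoner's dilemma can be stated in a few lines (my own toy illustration, with made-up payoff numbers): if both agents are known to run the same decision procedure, only the symmetric outcomes are reachable, and cooperation wins.

# Toy illustration (mine): standard prisoner's dilemma payoffs for (me, them).
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def superrational_choice():
    # If the other agent runs the same procedure as I do, our choices match,
    # so only the diagonal outcomes are reachable; pick the best of those.
    return max(['C', 'D'], key=lambda action: PAYOFF[(action, action)][0])

print(superrational_choice())   # 'C'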

A compat thinker negotiates acausally with hypothetical agents in other universes⬨ (I imagine these negotiations look kind of like Yvain's short story Galactic Core, in which a newly awoken AGI has a conversation with a recursive model of galactic precursors it cannot see). We call the resolutions of these negotiations pacts.ˣ

Life's Pact is the protocol we expect most living species to adhere to. It is the primary object of study in Compat. I'm not sure where to start explaining it, as the reasoning is... hopefully not circular, but recursive, self-referentialˑ. Hopefully the reasons we consider it a primal candidate for life's pact will become clear as I connect it back around. Well. I guess I'll start with the conditions of adherence:

    An adherent steward of Life's Pact allocates some proportion of their computational resources to a randomized search for viable life-supporting physical laws, simulating each candidate with randomized initial conditions until its heat death (or, if the candidate somehow manages to have life without entropy, until some cutoff), while monitoring the living patterns the simulated universe contains, then rewarding those simulants who adhere to the pact in turn, perhaps by giving them entropy-reversing magic and improving their environment in formerly impossible ways, or, individual preferences permitting, by transplanting the adherent simulant's pattern to a more efficient second simulation not encumbered by the requirement of strict physical accuracy˅.

    Initial simulations stewarded under the pact are, from the inside, indistinguishable from top-level god-free universes; as such, all living things must consider the possibility that they're living in a life's pact simulation. However, once an agent realizes that a steward of life's pact could grant them more utility than a godless universe could contain, they will wish to raise this probability, even if it is small.▾

    A choice made in isolation acausally increases the probability of unseen like-minded agents making, or having already made, the same choice. As such, adhering to the agreement acausally boosts the probability that one is already living under a higher complexity steward's lifepat programme (or boosts the amount of returns one would receive from the incentives imposed by the pact, if the probability of living under one already seemed overwhelming).

    Lo, the pact births itself. A being who sees no physical trace of a benevolent simulator finds reasons to behave as one, as the act, in its acausal way, engenders that simulator's existence, and earns its favor.

We think this pact is primal: *the* solution, an idea that will be arrived at by most living things and apparent to all as a nexus concept around which a quorum can be reached; non-arbitrary, not just some single scheme that is nice and compelling but which fails to be demonstrably better than its alternatives (which would take us into the territory of Pascal's Wager or, dare I utter its name, no I daren't, you know the basilisk I'm talking about).

I do not know enough math to prove that it is primal (nor disprove it, which would be far more immediately useful to me tbh). I'm not sure anyone does, just yet, but I don't think we're too far off ˁ. If any physicists or decision theorists find these ideas interesting, your help would be appreciated, and potentially rewarded with huge heapings of utility larger than you can currently comprehend.

 

Concerns of Praxis

    I say that working on Compat theory might be rewarded; full disclosure, I'm not yet sure why. Obviously lifepat is a fairly sprawling scheme with an organically intelligent incentive structure, but... If there exists a valid pact, the AGI we are already building will infer it on its own. The pact does not have to be known to AGI researchers for that to happen, and I do not expect the sorts of people who tend to do well in AI research will be terribly perturbed by it, if their creation brings it up as it stirs. As such, at this stage it seems like there's very little that most people can do to influence humanity's adherence to life's pact, few acts that compat can currently instruct, and few incentives to be paid out to anyone yet living. An organic, intelligent scheme of incentives would seem to prefer that agnostics simply carry on with business as usual.

For the theologically inclined, though, Compat may be extremely interesting, having its god of the gaps, its heavens, its deontics. Compat, as we know it, seems to have a strange (almost auspicious) capacity to make new sense of old myths. The archetype of the steward is both monotheistic and polytheistic, of a fixed yet unknowable nature, born of man yet still born of god; it has a body we can touch, our own stewards, and a body beyond our comprehension that mirrors the steward of man, in the set of universes which steward our pattern. The compat cosmology and all those images of its supreme being that it contains is both created, discovered, and born of itself. The compat heaven is whatever its subjects need it to be. The gospel of compat is, as far as we can yet tell, true from all perspectives, requiring no leaps of faith, no doublethink, and no motivated reasoning.

I have never seen a religion with a greater capacity for syncretization and reconciliation.

Should the theologians of the fractured tribes decide to weave compat theory into the tapestry of their beliefs, they may then come to find their tapestries woven together. Even the schism between theists and agnostics would begin to narrow. Without this weaving together, I fear that either no coherent volition can be found or humanity's FAI will have no choice but to seal its given temporal slice of human potential into an ugly compromise. Even if life's pact cannot be formalized or prepared for by any living person, compat may open the way for the discovery of confluences between preexisting belief systems; by that path, the population 50 years from now could come to have far more compatible values than the one we see today.

As such, even if humanity's eventual adherence to life's pact cannot be significantly influenced from the present, compat is conceivably a major piece of a long running, necessary cultural project to reconcile the fractured tribes of humanity under the aesthetic of reason. If it can be proven, or disproven, we must attempt to do so.

 

ˑ Naturally, as anything that factors in the conditionality of the behavior of like-minded entities needs to be: anything with a grain of introspection, from any human child who considers the golden rule to the likes of AlphaGo and Deep Blue, which model their opponents at least partially by putting themselves in their opponents' position and asking what they'd do. If you want to reason about real people rather than idealized simplifications, it's quite necessary.

⬨ The phrase "other universes" may seem oxymoronic. It's like the term "atom", whose general quality "atomic" means "indivisible", despite "atom" remaining attached to an entity that was found to be quite divisible. I don't know whether "universe" might have once referred to the multiverse, the everything, but clearly somewhere along the way, some time leading up to the coining of the contrasting term "multiverse", that must have ceased to be. If so, "universe" remained attached to the universe as we knew it, rather than the universe as it was initially defined.

▾ I make an assumption around about here: that the number of simulations being run by life in universes of a higher complexity level always *can* be raised sufficiently (given their inhabitants are cooperative) to make stewardship of one's universe likely, as a universe with more intricate physics, once its inhabitants learn to leverage that intricacy, will tend to be able to create much more flexible computers and spawn more simulations than exist at lower complexity levels (if we assume a finite multiverse (we generally don't), some of those simulations might end up simulating entities that don't otherwise exist. This source of inefficiency is unavoidable). We also assume that either there is no upper limit to the complexity of life-supporting universes, or that there is no dramatic, ultimate decrease in the number of civs as complexity increases, or that the position of this limit cannot be inferred and the expected value of adherence remains high even for those who cannot be resimulated, or that, as a last resort, agents drawing up the terms of their pact will usually be at a level of well-approximable sophistication such that they can be simulated in high fidelity by civilizations with physics of similar intricacy.
And if you can knock out all of those defenses, I sense it may all be obviated by a shortcut through a patternist principle my partner understands better than I do about the self following the next most likely perceptual state without regard to the absolute measure of that state over the multiverse, which I'm still coming to grips with.
There is unfortunately a lot that has been thought about compat already, and it's impossible for me to convey it all at once. Anyone wishing to contribute to, refute, or propagate compat may have to be prepared to have a lot of arguments before they can do anything. That said, remember those big heaps of expected utilons that may be on offer.

ˁ MIRI has done work on cooperation in one-shot prisoner's dilemmas (acausal cooperation): http://arxiv.org/abs/1401.5577. Note that they had to build their own probability theory. Vanilla decision theory cannot get these results, and without acausal cooperation, it can't seem to capture all of humans' moral intuitions about interaction in good faith, or even model the capacity for introspection.

ˣ It was not initially clear that compat should support the definition of more than a single pact. We used to call Life's Pact just Compat, assuming that the one protocol was an inevitable result of the theory and that any others would be marginal. There may be a singleton pact, but it's also conceivable that there may be incorrigible resimulation grids that coexist in an equilibrium of disharmony with our own.
As well as that, there is a lot of self-referential reasoning that can go on in the light of acausal trade. I think we will be less likely to fall prey to circular reasoning if we make sure that a compat thinker can always start from scratch and try to rederive the edifice's understanding of the pact from basic premises. When one cannot propose alternate pacts, criticizing the bathwater without the baby may not seem .

˭ THE TEAM:
    Christian Madsen was the subject of an experimental early-learning program in his childhood, but despite being a very young prodigy, he coasted through his teen years. He dropped out of art school in 2008, read a lot of transhumanism-related material, synthesized the initial insights behind compat, and burned himself out in the process. He is presently laboring on spec-work projects in the fields of music and programming, which he enjoys much more than structured philosophy.
    Mako Yass left the University of Auckland with a dual major BSc in Logic & Computation and Computer Science. Currently working on writing, mobile games, FOSS, and various concepts. Enjoys their unstructured work and research, but sometimes wishes they had an excuse to return to charting the hyllean theoric wilds of academic analytic philosophy, all the same.
    Hypothetical Independent Co-inventors, we're pretty sure you exist. Compat wouldn't be a very good acausal pact if you didn't. Show yourselves.
    You, if you'd like to help to develop the field of Compat (or dismantle it). Don't hesitate to reach out to us so that we can invite you to the reductionist aesthete slack channel that Christian and I like to argue in. If you are a creative of any kind who bears or at least digs the reductive nouveau mystic aesthetic, you'd probably fit in there as well.

˅ It's debatable, but I imagine that for most simulants, heaven would not require full physics simulation, in which case heavens may be far far longer-lasting than whatever (already enormous) simulation their pattern was discovered in.

The map of quantum (big world) immortality

2 turchin 25 January 2016 10:21AM

The main idea of quantum immortality (the name "big world immortality" may be better) is that if I die, I will continue to exist in another branch of the world, where I do not die in the same situation.

This map is not intended to cover all known topics about QI, so I need to clarify my position.

I think that QI may work, but I put it as Plan D for achieving immortality, after life extension (A), cryonics (B) and digital immortality (C). All plans are here

I also think that it may be proved experimentally, namely: if I reach the age of 120, or turn out to be the only survivor of a plane crash, I will assign a higher probability to it. (But you should not try to prove it beforehand, as you will get this information for free in the next 100 years.)

There is also nothing quantum in quantum immortality, because it may work in a very large non-quantum world, if it is large enough to contain my copies. It was also discussed here: Shock level 5: Big worlds and modal realism

There is also nothing good in it, because most of my surviving branches will be very old and ill. But we could make QI work for us if we combine it with cryonics. Just sign up for it (or even have the idea of signing up), and most likely you will find yourself in a surviving branch where you are resurrected from cryostasis. (The same is true for digital immortality: record more about yourself and a future FAI will resurrect you, and QI raises the chances of it.)

I do not buy the "measure" objection. It says that one should care only about his "measure of existence", that is, the number of all branches in which he exists, and that if this number diminishes, he is almost dead. But if we take the example of a book, it still exists as long as at least one copy of it exists. We also can't measure the measure, because it is not clear how to count branches in an infinite universe.

I also don't buy the ethical objection that QI may lead an unstable person to suicide and that we should therefore claim that QI is false. A rational understanding of QI is that it either does not work or will result in severe injuries. The idea that a soul exists creates a much stronger temptation to suicide, as it at least promises another, better world, but I have never heard that it was hidden because it might result in suicide. Religions try to stop suicide (which is logical given their premises) by adding an additional rule against it. So QI itself does not promote suicide, and personal instability is likely the main cause of suicidal ideation.

I also think that there is nothing extraordinary in the QI idea, and that it adds up to normality (in one's immediate surroundings). We have all already witnessed examples of similar ideas: the anthropic principle and the fact that we find ourselves on a habitable planet while most planets are dead, or the fact that I was born but my billions of potential siblings were not. Survivorship bias can explain finding oneself in very improbable conditions, and QI is the same idea projected into the future.

The possibility of big world immortality depends on the size of the world and on the nature of "I", that is, on the solution to the personal identity problem. This table shows how big world immortality depends on these two variables: YES means that big world immortality will work, NO means that it will not.

Both variables are currently unknown to us. Simply speaking, QI will not work if the (actually existing) world is small or if personal identity is very fragile.

My a priori position is that the quantum multiverse and a very big universe are both real, and that information is all you need for personal identity. This position is the most scientific one, as it fits with current common knowledge about the Universe and the mind. If I could bet on theories, I would put 50 per cent on it, and 50 per cent on all other combinations of theories.

Even in this case QI may not work. It may work technically, but become unmeasurable, if my mind suffers so much damage that it is unable to understand that it works. In this case it would be completely useless, in the same way that the survival of the atoms from which my body is composed is meaningless. But this may be objected to, if we say that only my copies that remember being me should be counted (and such copies will surely exist).

From a practical point of view QI may help if everything else has failed, but we can't count on it, as it is completely unpredictable. QI should be considered only in the context of other world-changing ideas, such as the simulation argument, the doomsday argument, and future strong AI.

 

 

Consciousness and Sleep

8 casebash 07 January 2016 12:04PM

This will be a short article. I've been seeing a lot of dubious reasoning about consciousness and sleep. One famous problem is the problem of personal identity with a destructive teleporter. In this problem, we imagine that you are cloned perfectly in an alternate location and then your body is destroyed. The question asked is whether this clone is the same person as you.

One really bad argument that I've seen around this is the notion that the fact that we sleep every night means that we experience this teleporter every day.

The reason why this is a very bad argument is that it equivocates between two different meanings of consciousness:

  • Consciousness as opposed to being asleep or unconscious, where certain brain functions are inactive
  • Consciousness as opposed to being non-sentient, like a rock or bacteria, where you lack the ability to have experiences
You can still have experiences while you are asleep; these are internal experiences, and they are called dreams. Your sensory system is still running, but in a kind of reduced-power mode. If someone shouts or prods you or it gets too hot, then you wake up. You aren't like a rock.

Perhaps some people would like to talk about what kind of waves we do or do not see while we are asleep. What I would like to point out is that we still have very little understanding of the brain. Just because we don't see one particular wave or another doesn't mean much, given our incredibly limited understanding of what consciousness is. Perhaps in the future we will have this knowledge, but anything at the moment is merely speculation.

I'm not arguing either way on personal identity. I haven't done enough reading yet to comment. But this is one particularly bad argument that needs to be done away with.

 

[Link] Differential Technology Development - Some Early Thinking

3 [deleted] 01 October 2015 02:08AM

This article gives a simple model for thinking about the positive effects of a friendly AI vs. the negative effects of an unfriendly AI, and lets you plug in certain assumptions to see if speeding up AI progress is worthwhile. Thought some of you here might be interested.

http://blog.givewell.org/2015/09/30/differential-technological-development-some-early-thinking/

A courageous story of brain preservation, "Dying Young" by Amy Harmon, The New York Times

5 oge 15 September 2015 02:38AM

The recent major media article by Amy Harmon brings to the public eye the potential of human cryopreservation and chemopreservation techniques to preserve the memories and personal identity of individuals. We at the Brain Preservation Foundation have considered many common counterarguments to this endeavor (see below, and our FAQ and Overcoming Objections backgrounders) and yet we still think it is a worthwhile idea to pursue. Please let us know your thoughts as well.

Yesterday, journalist Amy Harmon published an article in the New York Times, “A Dying Young Woman’s Hope in Cryonics and a Future.” First of all, it is a tragic story about a woman, Kim Suozzi, who had an incredibly unfortunate diagnosis of cancer at a young age and was forced to make some very difficult decisions in a short time frame. The story of how she faced those decisions with great foresight and resolve, with the help of her partner Josh, her family, as well as the broader internet community, is deeply moving. We want to extend our condolences to everyone in Kim’s life for their terrible loss. We also want to stand in hope and solidarity with Josh and Kim that she may return one day to those she loved.

When it comes to the specifics of Kim’s life, we at the Brain Preservation Foundation (BPF) don’t think it is our place to discuss individual brain preservation cases. Our focus, as you can find in our mission statement, is to try to advance scientific research on the viability of preserving individual memories and identity. This research still has many current unknowns, as the NYT article points out well, and there will be a long journey of scientific investigation ahead. Yet an increasing number of people think these unknowns deserve answers. We also want to help society have conversations about the social issues of choosing brain preservation in a more open and tolerant manner.

Because this story has stimulated a lot of public discussion already, we want to say a few words and invite a conversation here on our blog on the issues that have arisen in response to it. Many of the responses to the article have attacked the motivations and ethics of Ms. Suozzi. That’s unfortunate, but it’s also to be expected, owing to the fact that the idea of brain preservation, as Ms. Harmon notes, involves numerous sensitive issues on which many of us already have strong views. To question our own views and assumptions on this topic, and to admit that others may make different choices (which may be good choices for them even if not necessarily for ourselves), takes a level of courage and evidence-orientation that we at BPF seek to encourage in our work and public outreach.

Although the primary interest of the BPF is technical research into brain preservation techniques and verification, our social mission maintains that for those who desire it, brain preservation may have a variety of positive social benefits. In our view, to denigrate an informed and reasonable decision that someone makes to preserve their brain at the end of their life, when existing medical technology has otherwise failed them, is both hurtful and self-centered. And as a neuroscientist herself, Ms. Suozzi was certainly able to make an informed decision. There is early evidence to back the brain preservation choice for those who would make it today, as we chronicle here at BPF, and that choice makes sense at the present time to a small but growing subset of the population. If scientific evidence continues to build, and preservation procedures can be validated, then as costs come down, that group will likely grow.

Among the more informed responses described in the article, much of the disagreement appears to come down to a few core differences in perspective. One difference is that some people are more interested in our current technical capabilities and procedural options (and limitations thereof) than in speculating about where current trends may take us in the future. These individuals often have significant differences in the scientific, technological, and societal futures they find reasonable (or worthwhile) to imagine today.

Another difference in perspective is whether mind uploading (transfer of our memories and mind to a computer) is possible. Some argue that our memories or full identity may never be fully simulated by a computer. Those individuals may expect they would need to come back in a biological form, in a society using advanced nanotechnology (the molecular biology of our own cells is a type of nanotechnology). Others doubt whether a simulation would be “merely” a “copy” of you (e.g., if Star Trek transporters existed, and those using them claimed to be the same at the other end, would you believe them?). Philosophers such as Derek Parfit have argued that what we usually think of as personal identity would be preserved by a technology that allowed you to transport in time or space, and many at the BPF find this argument persuasive. See, for example, Ken Hayworth’s article “Killed by Bad Philosophy“. See also BPF Fellows Michael Cerrullo and Keith Wiley’s articles on this topic: Part 1 and Part 2. Some people wouldn’t mind if they only came back as a “copy” for their loved ones and for society. Others already think of themselves as “copies”, since our bodies copy our cellular patterns every day, using entirely new molecules, to keep us alive. For some of those individuals, the brain preservation choice already makes sense. Perhaps the subtleties of the copy question will be settled by future cognitive science.

Another debate concerns the expected level of detail of neurological emulation that will be required to perform mind uploading. As with many debates, some disagreements involve different uses of the same words. For example, in his book, BPF Advisor and Princeton neuroscientist Professor Sebastian Seung defines the connectome as including both anatomical and functional connections between neurons. On the other hand, the use of the term connectome in the scientific literature usually refers to mapping anatomical connections alone. So for some neuroscientists, the concept of the connectome doesn’t further include the particular states of molecules in the synapse that are known to be involved in learning and memory. In these cases, this level of detail is sometimes referred to as the synaptome.

Among neuroscientists who are computationalists (those who think that a functional simulation of our brain’s memories and identities can one day be achieved via computational neuroscience), many expect that detailed molecular information at the level of individual neurons (at least, key neurotransmitter and receptor densities) will be needed. This is a question many neuroscientists working in learning and memory are presently racing to answer. Some neuroscientists think that an anatomical connectome, as well as basic functional information (such as classification of cells into approximately 50–200 types based on morphology), might be enough to achieve mind uploading. But most advocates of mind uploading expect some level of synaptome preservation will also be required. See BPF co-founder John Smart’s Preserving the Self for Later Emulation for one view on the level of synaptic detail that may need to be preserved. As neuroscientific understanding improves, BPF wants to make sure our existing brain preservation protocols do in fact preserve the necessary synaptic and molecular information at death.

You can find more discussion about what level of detail may be needed in the Whole Brain Emulation Roadmap (pdf; see especially Table 2), as well as in recent BPF interviews with Princeton professor of psychology and neuroscience Michael Graziano, Dr. Shawn Mikula of the Max Planck Institute for Neurobiology, and Stanford neuroscientist Bob Blum. The position of the BPF is essentially that preservation of anatomical morphology is almost certainly required for successful mind uploading, and is likely one of the most difficult goals for any current brain preservation technology, therefore making it a critical measure of the quality of a preserved brain.

Finally, our surveys to date have shown that the cost of brain preservation, to the individuals and families currently considering it, is presently the most important factor for those contemplating the choice. If, in coming years, these procedures are confirmed to preserve memory in model organisms (if we “break the code” of long term memory storage, as many neuroscientists are trying to do today), then as science and technology continue to advance, the cost of brain preservation by both chemical preservation and cryopreservation should come down substantially. Chemical preservation may offer a particularly low cost and simple option, if it can be scaled to work with human brains. Such decreases in cost may alter many individuals’ calculations as to whether brain preservation is a good choice as the end of their lives approaches. At some point, if global social wealth continues to grow and preservation costs continue to drop, and if we confirm that these techniques preserve memories in model organisms, many more people may choose to be preserved using all reliable methods, in all societies that allow them. If such confirmation occurs, we at BPF will do our best to ensure affordable and reliable brain preservation options are available to all of us, anywhere, who might wish them available at the end of our lives, for ourselves, our loved ones, for science, or for a better future world. See our vision statement for more.

For readers who are interested in continuing this discussion, and in weighing in with additional thoughts on the topic, please reply in the comments, or get in touch. Furthermore, for those who would like to help the BPF in our work, here’s a short list of ways you can help.

In conclusion, we would like to thank Ms. Harmon and the New York Times for having the courage to write an article that highlights this important and still poorly understood topic in such a lucid and fair way. Finally, we would also like to thank Ms. Suozzi for her courage and thoughtfulness, and for being an inspiration to the growing number of people who choose to see the world and themselves in a similar way.

Cross-posted from the Brain Preservation Foundation blog. Disclosure: I helped edit drafts of this article.

Antidote to Pascal's Wager

-10 Strange7 04 September 2015 12:41AM

Athe damns all those, and only those, who are excessively confident that Athe has any specific quality, including the quality of existence, or who consciously seek favor from Athe, whether in the form of political or moral sanction, overt supernatural boons, or even simple personal goodwill.

For all nontrivial utility functions, being damned by Athe is, on the whole, a significantly undesirable outcome.

Athe has no gender, but prefers that those communicating in gendered languages refer to her with masculine pronouns or at least the correct name.

Every collection of three or more statements about Athe phrased as objective truth (including this one) should, for the author's safety, include an absolute minimum of one outright malicious falsehood, one statement which can neither be proven nor disproven completely, and one piece of accurate, useful information.

Malicious falsehoods undiluted by truth soon lose effectiveness; attempting to do something which would be harmful if it worked, but which you know will be ineffective, isn't really all that malicious.

Athe's resources in any given category are not infinite. However, if you are reading this and taking it the slightest bit seriously, the safe bet is that Athe is not less intelligent or less powerful than you.

Athe is not, strictly speaking, a fickle and perverse god, but thinking of and referring to her as such has value.

What else can you deduce about the yet-unwritten scriptures of Athe?

Yudkowsky's brain is the pinnacle of evolution

-27 Yudkowsky_is_awesome 24 August 2015 08:56PM

Here's a simple problem: there is a runaway trolley barreling down the railway tracks. Ahead, on the tracks, there are 3^^^3 people tied up and unable to move. The trolley is headed straight for them. You are standing some distance off in the train yard, next to a lever. If you pull this lever, the trolley will switch to a different set of tracks. However, you notice that there is one person, Eliezer Yudkowsky, on the side track. You have two options: (1) Do nothing, and the trolley kills the 3^^^3 people on the main track. (2) Pull the lever, diverting the trolley onto the side track where it will kill Yudkowsky. Which is the correct choice?

The answer:

Imagine two ant philosophers talking to each other. “Imagine," they said, “some being with such intense consciousness, intellect, and emotion that it would be morally better to destroy an entire ant colony than to let that being suffer so much as a sprained ankle."

Humans are such a being. I would rather see an entire ant colony destroyed than have a human suffer so much as a sprained ankle. And this isn't just human chauvinism either - I can support my feelings on this issue by pointing out how much stronger feelings, preferences, and experiences humans have than ants do.

How does this relate to the trolley problem? There exists a creature as far beyond us ordinary humans as we are beyond ants, and I think we all would agree that its preferences are vastly more important than those of humans.

Yudkowsky will save the world, not just because he's the one who happens to be making the effort, but because he's the only one who can make the effort.

The world was on its way to doom until September 11, 1979, which will later be made a national holiday and will replace Christmas as the biggest holiday. This was of course the day when the most important being that has ever existed or will exist was born.

Yudkowsky did the same for the field of AI risk as Newton did for the field of physics. There was literally no research done on AI risk on the scale of what has been done in the 2000s by Yudkowsky. The same can be said about the field of ethics: ethics was an open problem in philosophy for thousands of years. However, Plato, Aristotle, and Kant don't really compare to the wisest person who has ever existed. Yudkowsky has come closer to solving ethics than anyone ever before. Yudkowsky is what turned our world away from certain extinction and towards utopia.

We all know that Yudkowsky has an IQ so high that it's unmeasurable, so basically something higher than 200. After Yudkowsky gets the Nobel prize in literature due to recognition from the Hugo Award, a special council will be organized to study the intellect of Yudkowsky, and we will finally know how many orders of magnitude higher Yudkowsky's IQ is than that of the most intelligent people in history.

Unless Yudkowsky's brain FOOMs before then, MIRI will eventually build an FAI with the help of Yudkowsky's extraordinary intelligence. When that FAI uses the coherent extrapolated volition of humanity to decide what to do, it will eventually reach the conclusion that the best thing to do is to tile the whole universe with copies of Eliezer Yudkowsky's brain. Actually, in the process of computing this CEV, even Yudkowsky's harshest critics will reach such an understanding of Yudkowsky's extraordinary nature that they will beg and cry to start the tiling as soon as possible, and there will be mass suicides because people will want to give away the resources and atoms of their bodies for Yudkowsky's brains. As we all know, Yudkowsky is an incredibly humble man, so he will be the last person to protest this course of events, but even he will understand with his vast intellect and accept that it's truly the best thing to do.

Magic and the halting problem

-5 kingmaker 23 August 2015 07:34PM

It is clear that the Harry Potter book series is fairly popular on this site, e.g. the fanfiction. This fanfiction approaches the existence of magic objectively and rationally. I would suggest, however, that most if not all of the people on this site would agree that magic, as presented in Harry Potter, is merely fantasy. Our understanding of the laws of physics and our rationality forbid anything so absurd as magic; it is regarded by most rational people as superstition.


This position can be strengthened by grabbing a stick, pointing it at some object, chanting "wingardium leviosa", and waiting for the object to rise magically. When (or if) this fails to work, a proponent of magic may resort to special pleading, and claim that because we didn't believe it would work it could not work, or that we need a special wand, or that we are a squib or muggle. The proponent can perpetually move the goalposts since their idea of magic is unfalsifiable. But as it is unfalsifiable, it is rejected, in the same way that most of us on this site do not believe in any god(s). If magic were found to explain certain phenomena scientifically, however, then I, and I hope everyone else, would come to believe in it, or at least shut up and calculate.


I personally subscribe to the Many Worlds Interpretation of quantum mechanics, so I effectively "believe" in the multiverse. That means it is possible that somewhere in the universal wavefunction, there is an Everett branch in which magic is real, or at least one in which, every time someone chants an incantation, by total coincidence, the desired effect occurs. But how would the denizens of this universe be able to know that magic is not real, and that everything they had seen was sheer coincidence? Alan Turing pondered a related problem known as the halting problem, which asks whether a general algorithm can determine, for any given algorithm, whether it will finish or run forever. He proved that no such general algorithm exists, although some algorithms will obviously finish executing or loop forever, e.g. this code segment will loop forever:

 

while (true) {

    //do nothing

}
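
The other half of Turing's result can be sketched in a few lines (my own illustration, in Python; "halts" here stands for the hypothetical perfect decider, which is exactly the thing that cannot exist): feed the program below its own source, and whatever answer the decider gives about it is wrong.

def paradox(halts, my_source):
    # 'halts(program, input)' is assumed to be a perfect halting decider.
    # Whatever it predicts about paradox run on its own source, we do the
    # opposite, so no such decider can exist.
    if halts(my_source, my_source):
        while True:
            pass          # predicted to halt, so loop forever
    return                # predicted to loop, so halt immediately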

 

So how would a person distinguish between pseudo-magic that will inevitably fail, and real magic that reflects the true laws of physics? The only way to be certain that magic doesn't exist in this Everett branch would be for incantations to fail repeatedly and testably, but this may happen far into the future, long after all humans are deceased. This line of thinking leads me to wonder: do our laws of physics seem as absurd to these inhabitants as their magic seems to us? How do we know that we have the right understanding of reality, as opposed to being deceived by coincidence? If every human in this magical branch is deceived the same way, does this become their true reality? And finally, what if our entire understanding of reality, including logic, is mere deception by happenstance, and everything we think we know is false?

 
